DirectSmile Integration Server installation scenarios

I am very happy to see more and more complex installations of the DirectSmile Integration Server (DSMI) out in the field. Although you can perfectly well run DSMI on a single server, in scenarios where you depend on 100% uptime or need to absorb extremely high load peaks it can be worthwhile to think about failover and scalability. In this blog post I try to capture all the different scenarios I have seen in the field, along with their pros and cons.

DSMI replication master vs. slave

Before we go into detail here’s a short reminder about the replication service and the difference between a DSMI master and a slave.

In a DSMI environment that uses replication you always have one server that is the master and one to many slaves. The difference between master and slave is that the master is the only DSMI where you can create new users and accounts and upload repository items like sets and documents. Clients can request images and documents from all slaves and from the master, though. This simple architecture is based on the convention that it is acceptable not to be able to upload items or create new users while the master is being maintained, as long as this does not affect client calls and your system remains accessible to them.

Read more about the DirectSmile Replication Service in these two blog posts:


The DirectSmile Integration Server is easy to scale. You simply run a base installation of DSMI on at least two machines. During the installation you assign one machine as the DSMI master and the others as DSMI slaves (see the chapter "DSMI replication master vs. slave"). Finally you install the DSMI replication service on one of your machines and your scaled DSMI is ready to rock. The DSMI replication service ensures that changes you make in the master's file system and the DSMI database are tracked and the slave machines are updated.

To take advantage of your scaled DSMI you place a load balancer in front of your DSMI machines that routes the incoming requests according to a scheme you define.

Here’s a short graphic showing what such a simple approach looks like.



Based on the architecture shown in the diagram above, you can see that in this scenario it is quite easy to run one DSMI as a failover server for the other. From a service client’s perspective those two DSMI installations look like one. In fact the client cannot address a single machine directly, because it makes its calls against a single domain/IP registered on the load balancer. The load balancer then routes each request to one machine or the other. Theoretically there is no limit to the number of DSMI slave installations in this scenario; typically you would start with one slave and increase the number of installations as your load grows. In a failover scenario it is important to ensure that your client requests are served even if one machine is down.

Single data center installation vs. replication between data centers

Usually you will install your replicated DSMI in one data center or on your own site. Replicating between different data centers can become necessary if you split preview and print data. For example, you might host a web server farm in a data center that only serves preview images and documents for your websites, while a second DSMI system located on-premise in your LAN, next to your presses, renders the print data in high resolution. The advantage of splitting preview and print data is that it reduces the amount of data sent over the wire from your DSMI servers to your presses.

You can easily escalate such a scenario depending on your failover and load balancing needs. Let’s say you start with a replicated DSMI farm for your previews and a single DSMI on-premise. The on-premise system can again consist of several machines, where one machine has DSMI installed and the other machines provide just production server instances, as shown in the diagram.


As you can see, there is a bit more IT involved here. We have to deal with complex routing, and firewall policies must be set to allow communication between the two systems. This scenario depends on a good connection between the different locations. To ensure that the replication is done in a reasonable amount of time, your connection bandwidth should be 2 Mbit/s or higher for both upstream and downstream.

In the end you would have one machine, the DSMI master, that replicates the changes to the other servers. The master can be located wherever you want, either in the on-premises system or on the web, since DirectSmile sets and documents remain easy to maintain even when the master is exposed to the public.

On top of that you could certainly replicate the on-premises servers as well and build a second failover cluster.

Another typical scenario where you want to replicate between data centers is if your company has branches in different countries and you need to provide preview and print data in all of those branches. In this case you would replicate both internally and externally, but could still take advantage of a single point at which to upload your items and manage the DSMI users and accounts.

Network storage (NAS) vs. server storage

All scenarios described above have one thing in common: the DSMI user folders, containing sets and documents, are decentralized. In other words, the same user folder exists on every DSMI server. This affects, first of all, the initial replication when you add a new slave to your system. But, depending on the number of items you update or add in daily business, it can also become a significant factor in the load on the wire.

Using a NAS or SAN solution may help you overcome the problem of decentralized storage by providing a single place to store the DSMI user folders.


Unfortunately there are some limitations when using a centralized storage system. DSMI relies on Windows file sharing and Windows authentication, which is why the operating system providing the shares must be Windows based. DirectSmile strongly recommends using Windows Storage Server 2008, which makes it possible to configure a single Windows user account that is used as a service user for the DSMI services and can read and write to the shared folders.

Providers tend to offer you a standard iSCSI/SAN storage. Those systems cannot be mounted simultaneously on more than one Windows Server. This means you will need a Windows Server in between to share the storage. In the end you will lose read/write performance and gain a single point of failure.


Running a replicated DSMI cluster for failover and/or to handle heavy load is possible, although the initial setup requires a profound understanding of IT topics like routing and load balancing. Because these scenarios differ strongly depending on the network topology, it is important to invest time in planning what is best for your situation. Often the scenarios are even bound to the workflows of the campaigns and projects you plan to run.

I hope this article helps you paint an overall picture of what is possible. If you have any questions, don’t hesitate to call us. We at DirectSmile are very excited about these kinds of projects and would love to assist you in setting up such installations.


A little round trip using TDD (Part 1)


For a while now I have been using unit tests more and more often in my projects. The main reason why I think unit tests are necessary is that they provide an extra safeguard against breaking anything while refactoring code. Especially if you are maintaining an API, it can be quite relaxing to see all your unit tests succeed after you have made a lot of changes to your code base.

But accepting unit tests and doing Test-Driven Development are still two different pairs of shoes. While the classic approach would be to write a class top down, implement all the logic and afterwards write some test code, in TDD you write the tests first. You actually begin by writing a test class, starting with a first test method that calls the method that is to be implemented later. The tested method exists only as a stub in the beginning.

Honestly, I thought WTF? How can such an approach be efficient? I have to write every method twice just to get a good feeling. That looks like a total waste of time and money.

On the other hand, what makes TDD very efficient in the long run is the combination of well tested, well structured and well documented code. Especially the documentation and specification aspect is a very powerful part of doing TDD and, in my eyes, the main reason to do TDD at all.

With those arguments in mind I made last Monday a full TDD day. Isn’t the beginning of a new year the best time to try out one paradigm or another?

With this blog post I start a small series of probably three parts where I walk through my experiences with TDD, starting with a small test method, then implementing the real method, and finally rendering automated documentation while building the project. I hope this will be fun for you.


I’m using VS 2010 and MS unit tests. I’m running my tests with the ReSharper test runner just because I like it. I also use Sandcastle Help File Builder to render the API help files based on the XML code documentation.

A simple unit test

Let’s assume you need to write a function that queues a mail object in a container. The method does some validity checks and, if those pass, returns the Id of the stored queue item. If one of the validity checks fails or the item can’t be queued, the function throws an exception.

Our mail object consists of a few required fields, like subject, body and recipient, and probably a foreign key to the user object that initiated the email.

Having this information together, I could write a test method like this:

<TestMethod()>
Public Sub Successful_AddEmail()
    ' Arrange
    Dim accountId As Long = 1
    Dim subject As String = "Test Email"
    Dim body As String = "Hi there, this is a test mail..."
    Dim recipients As String = ""
    Dim mBll As New MailBLL

    ' Act
    Dim result = mBll.AddMail(accountId, subject, body, recipients)

    ' Assert
    Assert.IsTrue(result > 0)
End Sub

First of all, the initial <TestMethod()> attribute marks this sub as a test method. This enables Visual Studio or ReSharper to list the method in the test explorer. I always begin the function name with a Successful or Fail prefix to indicate whether the parameters in the call are all valid and the call is expected to succeed.

The test uses the AAA (Arrange, Act, Assert) principle. First we arrange the test scenario by creating the parameters and types we need to run the test. Then we make the actual call (just a single call, because we only test one unit at a time) and finally we do our assertions on the result.

To verify that the validity checks work correctly and throw the expected exception, we write a second unit test, but this time we initialize a parameter with an invalid value in the arrange section. In the sample below we pass an empty subject. This is invalid, because our API does not accept mail objects without subjects.

<TestMethod()>
<ExpectedException(GetType(ArgumentException))> ' assuming AddMail throws an ArgumentException; use your actual exception type here
Public Sub Fail_AddEmail_With_Missing_Subject()
    ' Arrange
    Dim accountId As Long = 1
    Dim body As String = "Hi there, this is a test mail..."
    Dim recipients As String = ""
    Dim mBll As New MailBLL

    ' Act: the empty subject should make AddMail throw before returning
    mBll.AddMail(accountId, "", body, recipients)
End Sub

The method looks much the same, except that we now annotate it with the ExpectedException attribute. By passing a specific type, the test will also verify the type of the thrown exception.

As you can see, the test passed: the empty subject caused an exception, as expected.


And in the test console we find the correct exception message.

Error: 0 : 14.01.2012 16:48:34 – [MailBLL::AddMail]: You must provide a subject to send an email.

A quick look at the code, especially at the first test method, shows that we have already written a perfect API client code example. Later in this series I will show you how we can take advantage of this example code in the XML-based documentation.

Have fun.

Epiphany (Heilige Drei Könige)

Our French colleagues celebrate this day in a very special way. In France it is customary to eat a marvelous cake on this holiday. What makes this cake special is that a small porcelain figurine is hidden in one random piece. Whoever gets the piece containing the figurine becomes king or queen.

The cake was delicious, and our coronation produced two kings. Tobias and Bernie won. :D


As you can see in the photo, they are also wearing two lovely little crowns.

Congratulations!