Ever wondered if your BizTalk is all it can be?

Verifying your BizTalk Server installation is not an easy thing to do. So far, the BizTalk Server 2009 Performance Optimization Guide is probably your safest bet. The Optimization Guide provides in-depth information on optimizing the performance of a BizTalk Server solution. However, it won’t help you evaluate your BizTalk installation at runtime. To do this, you’ll have to analyze it using Performance Analysis of Logs (PAL).

I’m not saying these aren’t good tools. In fact, they are great. They are, however, quite extensive, and ultimately won’t answer the question: “Do I get the expected workload through BizTalk?”

Ewan Fairweather, together with some other smart people at Microsoft, has put together a comprehensive study on scaling out BizTalk. The principle is simple: test the same scenarios on different environments, and quantify the scale-out capabilities of one to four BizTalk servers and one to three MessageBoxes.

The BizTalk Server 2009 Scale Out Testing Study provides sizing and scaling guidance for BizTalk Server. However, you’d find it challenging to compare your environment to these numbers, as you don’t have access to the same testing scenarios. And even if you did, you still couldn’t be sure you had configured everything the same way, or that you were running equivalent LoadGen scripts.

Four months ago, I contacted Ewan to ask him if he had some testing scenario I could run to evaluate the environment I was currently working on. He didn’t, but seemed very aware of the lack of such a “tool”. One thing led to another and we came to the conclusion we should make it ourselves.

Today, four months later, we are happy to announce that the BizTalk Benchmark Wizard is publicly available on CodePlex.

The goal has been to make an easy-to-install, simple-to-use, wizard-like application with which one can test a BizTalk environment and compare the results to the study. One of the challenges was to scope the project, and prevent ourselves from solving problems already addressed by tools such as LoadGen and PAL. For instance, the BizTalk Benchmark Wizard is NOT

a load tool

Although it does create load, it only does so against ONE receive host. The application could work against multiple receive hosts (in fact, earlier versions did), but that required a much more complex setup process from the user. We came to the conclusion that if your environment measures up using only one receive host, it would most likely do so using multiple hosts.

Setting these limitations also simplifies comparing environments and benchmarking them against the results from the Microsoft study.

an analysis tool

The tool does not analyze any problems or bottlenecks, nor does it give any hints or advice on how to solve them. It does, however, collect Perfmon counter data from each of the servers, both BizTalk and SQL. If your environment fails the test, you can analyze that data using the PAL tool.

How it works:

  1. After the user has started the application and specified the BizTalk Group, the tool analyzes its configuration, finding all the BizTalk servers, MessageBoxes, etc.
  2. The user then selects one of two scenarios: Messaging or Orchestration. Each scenario has a set of tested environments, such as
    • “Single server (2*Quad CPU, 4GB RAM)”
    • “1*BTS (1*Quad CPU, 4GB RAM) + 1*SQL (1*Quad CPU, 8GB RAM)”
    • “2*BTS (2*Quad CPU, 8GB RAM) + 2*SQL (2*Quad CPU, 16GB RAM)”
  3. The user selects the environment that most closely resembles his/her own.
  4. The user then starts the Indigo Service, a console application hosting a service that will be called from the BizTalk send port.
  5. When the user clicks “Run test”, the tool starts the ports and orchestrations. It will also start the Perfmon collector sets, if the user has chosen to create them.
  6. As the test proceeds, the user can monitor the counter values through the gauges (CPU utilization, Received msgs/sec and Processed msgs/sec). The default test duration is 30 minutes, with a warm-up of 2 minutes.
  7. Finally, the user is presented with a result: either Succeeded or Failed.
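Conceptually, the Succeeded/Failed verdict boils down to comparing the steady-state throughput (after the warm-up has been excluded) against the baseline for the environment you selected. Here is a minimal sketch of that idea in Python, with made-up counter data, sample interval and baseline number. The real wizard reads live Perfmon counters and uses the figures from the Microsoft study, so treat everything below as an illustrative assumption:

```python
import csv
import io

# Hypothetical baseline throughput (msgs/sec) for the selected environment.
BASELINE_MSGS_PER_SEC = 40.0
WARMUP_SAMPLES = 4  # e.g. a 2-minute warm-up at a 30-second sample interval

# Made-up Perfmon-style CSV log: timestamp plus a processed-msgs/sec counter.
SAMPLE_CSV = """\
"(PDH-CSV 4.0)","\\\\BTS01\\BizTalk:Messaging(HostA)\\Documents processed/Sec"
"10:00:00","5"
"10:00:30","12"
"10:01:00","25"
"10:01:30","35"
"10:02:00","41"
"10:02:30","42"
"10:03:00","43"
"10:03:30","44"
"10:04:00","42"
"10:04:30","41"
"10:05:00","43"
"10:05:30","44"
"""

def average_after_warmup(csv_text, warmup_samples):
    """Average the counter values, skipping the header row and warm-up samples."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    values = [float(row[1]) for row in rows[1:]]  # row 0 is the header
    steady = values[warmup_samples:]
    return sum(steady) / len(steady)

avg = average_after_warmup(SAMPLE_CSV, WARMUP_SAMPLES)
print("Succeeded" if avg >= BASELINE_MSGS_PER_SEC else "Failed")  # prints "Succeeded"
```

The warm-up exclusion matters: the first samples, taken while host instances and caches are still spinning up, would otherwise drag the average down and fail an environment that is actually fine.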


If you pass the test, you can proudly submit your result to the High Score list. “E.W.N” seems to be the one to beat.