I always get a sense of satisfaction when I find out that less tweaking or tuning is needed to make my applications perform as expected under high-throughput workloads (or that the tweaking is easier to do).  I had that sense of satisfaction just this week, thanks to some new insights on WCF 4.0 (gained while tuning BizTalk Server 2010 using the shiny, new BizTalk Server 2010 Settings Dashboard).

It’s very common for my team to receive questions about why a certain WCF service can’t receive more requests per second, or why more load can’t be pushed through an application exposed as a WCF endpoint.  We documented some of the WCF tunable settings in the 2009 version of the BizTalk Server Performance Optimization Guide, in the sections "Optimizing BizTalk Server WCF Adapter Performance" and "Optimizing WCF Web Service Performance".  While that guidance was written in the context of a BizTalk solution, the WCF specifics are valid for any WCF application.

The documentation has not caught up to the binaries (yet), but we have it on good authority that there are new, higher, more dynamic defaults for the ServiceThrottlingBehavior in .NET 4.0 (and that they actually made it into the release).  Below, I also cover the new performance counters you can use to diagnose whether you are hitting your high watermarks.

ServiceThrottlingBehavior: one of the usual culprits

With .NET 4.0, we’ve made some improvements in WCF so it is a bit more dynamic when it comes to the ServiceThrottlingBehavior.  The following text comes directly from the ServiceThrottlingBehavior documentation:

Use the ServiceThrottlingBehavior class to control various throughput settings that help prevent your application from running out of memory.

The MaxConcurrentCalls property limits the number of messages that currently process across a ServiceHost.

The MaxConcurrentInstances property limits the number of InstanceContext objects that execute at one time across a ServiceHost.

The MaxConcurrentSessions property limits the number of sessions a ServiceHost object can accept.

The key word above is limits.  While limits can be a good thing, when they are set too low they are a distraction and an annoyance.  If it is so easy to tune and diagnose WCF applications, <insert sarcasm here>, why would we need to increase the default limits?  With .NET 4.0, we have not just increased the defaults, we have also made them a bit more dynamic, scaling with the number of processors seen by the OS.  So a more powerful machine will have higher limits.  Here are the old and new defaults:

Property                  .NET 4.0 Default        Previous Default
MaxConcurrentCalls        16 * ProcessorCount     16
MaxConcurrentInstances    116 * ProcessorCount    26
MaxConcurrentSessions     100 * ProcessorCount    10

Note that the documentation has not been updated yet, but someone is working on that.
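
If you need different limits, you can set them through a service behavior in config.  Here’s a minimal sketch; the behavior name, service name, and the specific values are placeholders for illustration, not recommendations:

    <system.serviceModel>
      <behaviors>
        <serviceBehaviors>
          <behavior name="ThrottledBehavior">
            <!-- Values are illustrative; by default, .NET 4.0 already scales
                 these with the processor count as shown in the table above -->
            <serviceThrottling maxConcurrentCalls="64"
                               maxConcurrentInstances="464"
                               maxConcurrentSessions="400" />
          </behavior>
        </serviceBehaviors>
      </behaviors>
      <services>
        <!-- MyNamespace.MyService is a placeholder for your own service -->
        <service name="MyNamespace.MyService"
                 behaviorConfiguration="ThrottledBehavior">
          <!-- endpoints elided -->
        </service>
      </services>
    </system.serviceModel>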

Diagnosing ServiceThrottlingBehavior limits

Prior to .NET 4.0, diagnosing whether you were hitting your ServiceThrottling limits was a bit of black magic.  With .NET 4.0 we’ve added some new performance counters to help diagnose this.  You first have to enable the WCF performance counters in your application’s config file (a minimal sketch follows the list below).  After doing this, you’ll see some counters that are new to .NET 4.0.  These show up at the Service level under the Performance Counter object "ServiceModelService 4.0.0.0":

  • Percent of Max Concurrent Calls
  • Percent of Max Concurrent Instances
  • Percent of Max Concurrent Sessions
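
As promised, here’s a minimal sketch of the config change that enables the counters.  "ServiceOnly" publishes just the service-level counters, which keeps the overhead down; "All" also publishes the endpoint- and operation-level counters:

    <configuration>
      <system.serviceModel>
        <!-- ServiceOnly = just the ServiceModelService counters -->
        <diagnostics performanceCounters="ServiceOnly" />
      </system.serviceModel>
    </configuration>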


As an example, if you have your ServiceThrottlingBehavior.MaxConcurrentCalls set to 200, and the counter "Percent of Max Concurrent Calls" is showing "10", then your service currently has 20 concurrent calls (10% of 200).  Once again, the documentation is lagging behind the binaries; I’ll see if I can get someone to fix this as well.
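
If you’d rather watch the counter from code than from perfmon, a sketch along these lines should work.  The instance name is hypothetical; WCF derives it from your service name and base address, so browse the live instances (in perfmon, or via PerformanceCounterCategory.GetInstanceNames) to find yours:

    using System;
    using System.Diagnostics;

    class ThrottleMonitor
    {
        static void Main()
        {
            // The instance name below is hypothetical -- WCF derives it from
            // the service name and base address; check perfmon for the real one.
            PerformanceCounter calls = new PerformanceCounter(
                "ServiceModelService 4.0.0.0",
                "Percent of Max Concurrent Calls",
                "MyService@http:||localhost|MyService",
                true); // read-only

            int maxConcurrentCalls = 200;  // whatever your throttle is set to
            float percent = calls.NextValue();
            Console.WriteLine("{0}% of {1} = {2} concurrent calls",
                percent, maxConcurrentCalls, percent / 100 * maxConcurrentCalls);
        }
    }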

The next obvious question is, "What should I use for the ServiceThrottling values?"  The answer is a resounding "it depends"!  As with the maxconnection setting, it depends on your application.  Set the limits too low and you will throttle too soon; set them too high and you could bring your server to its knees with excessive CPU usage and context switching.  As always, performance test your solutions before going to production.
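
For reference, since I mentioned it above: maxconnection caps the number of outbound HTTP connections per host for client-side calls, and lives under system.net.  The value here is purely illustrative:

    <configuration>
      <system.net>
        <connectionManagement>
          <!-- 24 is illustrative; the classic rule of thumb was 12 * processor count -->
          <add address="*" maxconnection="24" />
        </connectionManagement>
      </system.net>
    </configuration>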