[Source: http://geekswithblogs.net/EltonStoneman]

In a recent project we had a requirement for a configurable cache, residing on a BizTalk host and storing responses from WCF services which the BizTalk app had brokered. Producing this as a generic cache adapter was the preferred option, but project timescales didn’t allow for it – instead I’ve written the adapter as an open source component which is available on CodePlex: BizTalkCacheAdapter, and which we’re now making use of in the project.

It’s a simple design suitable for any situation where BizTalk is brokering services to consumers. The incoming request from the consumer is hashed to generate a cache key, and the response is stored in the cache against the request key. When future requests exactly match, they will receive the cached response; if the response doesn’t exist in the cache, the service provider is invoked and the response is added to the cache. The adapter is independent of the cache store being used – currently the only option provided is NCache Express, but extending it to use memcached or Velocity will be straightforward.
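The hash-then-lookup design can be sketched in a few lines of Python. This is only an illustration: SHA-256 and a plain dict stand in for the adapter's actual hashing scheme and cache store (which are not specified here), and the function names are mine.

```python
import hashlib


def cache_key(request_body: bytes) -> str:
    """Derive a deterministic cache key by hashing the raw request content.

    Identical requests always produce the same key, so an exact repeat
    of an earlier request finds the earlier response.
    """
    return hashlib.sha256(request_body).hexdigest()


def get_or_invoke(cache: dict, request_body: bytes, invoke_service):
    """Return the cached response for an identical request, or invoke the
    service provider and store its response for future requests."""
    key = cache_key(request_body)
    if key in cache:
        return cache[key]                 # cache hit: provider is not called
    response = invoke_service(request_body)
    cache[key] = response                 # cache miss: store for next time
    return response
```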


The adapter uses context properties to store cache configuration, so it’s suitable for orchestration or messaging solutions. Alongside the adapter on CodePlex is a sample project which demonstrates the approaches. The workflow is the same in both cases:

  • The consumer sends a service request via the CacheXMLReceive pipeline, which checks if the message type is configured for caching, and promotes the IsMessageCacheEnabled property;
  • If caching is enabled, a two-way message is sent to the cache adapter. This is a GET request, so the adapter checks the cache using the MessageCacheKey context property – if the context property is not available, the adapter writes it by hashing the contents of the service request in the message body;
  • The response from the cache adapter promotes the IsCached property, which identifies whether the response is in cache – if so, the body of the response message will be the cached service response, if not the body is the original service request which can be passed on to the service;
  • If IsCached is false, the service is invoked and a one-way message containing the response is sent to the cache adapter. This is a PUT request, so the adapter adds the message body to the cache using the MessageCacheKey;
  • The service response (either cached or from the provider) is returned to the consumer.
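The five steps above can be expressed as plain control flow. In this sketch, `adapter_get` and `adapter_put` model the two-way GET and one-way PUT messages to the cache adapter; `MessageCacheKey` and `IsCached` are the context properties from the workflow, while the function names and the dict-as-message representation are illustrative assumptions.

```python
import hashlib


def adapter_get(cache, message):
    """Two-way GET: returns (IsCached, body). On a hit the body is the
    cached response; on a miss it is the original request, to be passed
    on to the service."""
    key = message.get("MessageCacheKey")
    if key is None:
        # Adapter writes the key by hashing the service request body
        key = hashlib.sha256(message["body"]).hexdigest()
        message["MessageCacheKey"] = key
    if key in cache:
        return True, cache[key]
    return False, message["body"]


def adapter_put(cache, message):
    """One-way PUT: store the service response under the request's key."""
    cache[message["MessageCacheKey"]] = message["body"]


def broker(cache, request_body, invoke_service):
    """The brokering workflow: check the cache, invoke only on a miss."""
    msg = {"body": request_body}
    is_cached, body = adapter_get(cache, msg)
    if is_cached:
        return body                        # cached response to the consumer
    response = invoke_service(body)        # invoke the service provider
    adapter_put(cache, {"MessageCacheKey": msg["MessageCacheKey"],
                        "body": response})
    return response                        # provider response to the consumer
```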

With the messaging solution, the same workflow is carried out using subscriptions to the promoted cache configuration properties, and the various send ports. This workflow may not be suitable for more complex messaging solutions, where services are invoked through dynamic send ports – there is an alternative workflow which I’ll cover in a separate post.

The only real complexity in the adapter is in putting the service response into the cache – the outgoing service request message has the cache configuration properties promoted, but the incoming response does not. In orchestration solutions this isn’t a problem, as the cache properties can be stored as state in the orchestration instance. In messaging solutions, the cache properties are retained by temporarily adding them to the cache, keyed by the interchange ID of the service instance, and removing them when the response is added to the cache.
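The interchange-ID trick for messaging solutions looks something like this sketch. The idea of parking the properties in the cache itself, keyed by interchange ID, is from the post; the function names, the `"interchange:"` key prefix, and the property-dict shape are my assumptions.

```python
def on_outgoing_request(cache, interchange_id, cache_properties):
    """Messaging case: the request's promoted cache properties are parked
    in the cache itself, keyed by the service instance's interchange ID."""
    cache["interchange:" + interchange_id] = cache_properties


def on_incoming_response(cache, interchange_id, response_body):
    """The response arrives with no promoted properties: recover the parked
    properties, cache the response under the original MessageCacheKey,
    and remove the temporary entry."""
    props = cache.pop("interchange:" + interchange_id)
    cache[props["MessageCacheKey"]] = response_body
    return props
```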


The cache adapter can be used with messages of any type, but with XML messages it can employ different cache configurations for different message types. For example, a service returning current tax rates may be configured to live for several days in the cache, while a service returning current exchange rates may use the same cache store but be configured to expire after an hour. The CacheReceive and CacheReceiveXml pipelines contain properties for configuring the cache: it can be enabled or disabled for all messages, or (for the XML pipeline) only for configured message types.

Message-level configuration uses an SSO application store, which holds a MessageCacheConfigurationCollection object in XML:

<messageCacheConfigurationCollection xmlns="http://schemas.sixeyed.com/CacheAdapter/2009">
  <!-- one MessageCacheConfiguration entry per message type to be cached,
       specifying its cacheLifespan -->
</messageCacheConfigurationCollection>
Caching is enabled for a message if:

  • IsCacheEnabled is true on the pipeline, AND
  • either of the following holds:
    • IsMessageTypeConfigurationEnabled is false on the pipeline, OR
    • IsMessageTypeConfigurationEnabled is true on the pipeline AND the incoming message type has an entry in the MessageCacheConfigurationCollection in SSO.

The lifespan of the object put into cache will be either the cacheLifespan specified in the MessageCacheConfiguration in SSO, or the defaultLifespan specified in the cache send port:

(The point at which the object is removed from cache depends on the provider: some remove it as soon as its lifespan expires; others only when its lifespan has expired and the cache is full or the cleanup schedule runs.)
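The enablement and lifespan rules above reduce to a few lines of logic. In this Python sketch the property names (IsCacheEnabled, IsMessageTypeConfigurationEnabled, cacheLifespan, defaultLifespan) come from the post, while the function signatures and the use of a set and dict to stand in for the SSO configuration store are my assumptions.

```python
def is_caching_enabled(is_cache_enabled, is_message_type_config_enabled,
                       message_type, sso_configured_types):
    """Pipeline-level switch first, then the optional per-message-type
    check against the MessageCacheConfigurationCollection in SSO."""
    if not is_cache_enabled:
        return False
    if not is_message_type_config_enabled:
        return True
    return message_type in sso_configured_types


def effective_lifespan(message_type, sso_lifespans, default_lifespan):
    """The cacheLifespan configured in SSO for this message type, if any,
    otherwise the defaultLifespan from the cache send port."""
    return sso_lifespans.get(message_type, default_lifespan)
```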


For this initial version, the CodePlex project comes with three project releases, which can be used independently or together:

  • Sixeyed.CacheAdapter.msi – a BizTalk application MSI which can be imported and installed to your BizTalk server(s). The application contains the configuration schema and pipelines, so it can be used as-is, or the relevant artifacts can be moved to your own application;
  • Sixeyed.CacheAdapter.Binaries.zip – a ZIP file of the adapter binaries you can deploy manually. Use instead of the CacheAdapter MSI, if you don’t want to deploy the whole BizTalk application;
  • Sixeyed.CacheAdapter.CacheViewer.msi – a Windows Forms application which lets you view the contents of a cache store (the app uses the configuration components from the adapter management assembly, so it is also cache-provider agnostic).

– a documentation release:

  • Sixeyed.CacheAdapter.chm – documents the .NET assemblies of the adapter

– and two sample releases which should be deployed together:

  • Sixeyed.CacheAdapterSample.msi – a BizTalk application MSI which has sample usage of the cache adapter, demonstrating orchestration and messaging approaches;
  • Sixeyed.CacheAdapterSample.Service.msi – a WCF MSI which provides a service which the BizTalk app can cache.

The pre-requisite for the initial release is the non-commercial NCache Express, which is used as the cache provider.

To add the sample message-level cache configuration, use the SSO Config Tool to import the provided .ssoconfig file into SSO.

I’ll update the project to add a memcached provider, and write an updated post when it’s available – and the same when Velocity is released. One advantage of having multiple providers would be the ability to evaluate their performance against your expected caching load, so I’ll look at extending the sample solution and adding some LoadGen scripts.

And as Michael Stephenson commented, a nice alternative would be to move the caching up to the WCF layer as a behaviour, so WCF requests would check the cache first. There are two options for this. It could be a client behaviour using a local cache store, which could be more performant and would have less impact on server resources, but requires the cache provider to be deployed client-side. Or it could be an operation behaviour using a shared server-side cache, which could also be more performant depending on the type of messages being cached, and would impact server and network resources, but would make the caching transparent to consumers. The complexity is in caching the response message when it is not already cached: the cache key from the original message will have been lost, so a different approach is needed to maintain state correlating the WCF service request and response.