[Source: http://geekswithblogs.net/EltonStoneman]


Cloud services available from Microsoft Azure and Amazon Web Services both offer message queues and data storage, a combination which enables a very simple SOA solution based on Command-Query Separation.

Consumers and service providers communicate through the cloud message queuing service, using a pair of queues. One queue is public, where the service provider listens for request messages which can be sent by any consumer. The second queue is private to a particular consumer, where the consumer listens for responses from the provider:

This is a pattern familiar from NServiceBus implementations – it is fully asynchronous and is all you need for Command messages. The consumer sends a command request and continues doing what it does; the provider actions the request and sends a response, which the consumer can act on when it's received.
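The paired-queue command flow can be sketched in-process with Python's stdlib `queue` module standing in for the cloud queuing service. The queue roles, message fields (`correlation_id`, `reply_to`) and the "ReleaseOrder" command are all illustrative assumptions, not any real Azure or AWS API:

```python
import queue
import threading

request_queue = queue.Queue()    # public: the provider listens here
response_queue = queue.Queue()   # private: one per consumer

def provider():
    # The provider actions each command and replies on the queue named
    # in the message, so it never needs a direct link to the consumer.
    msg = request_queue.get()
    result = f"processed:{msg['command']}"
    msg["reply_to"].put({"correlation_id": msg["correlation_id"],
                         "status": result})

def consumer():
    # Fire the command, then carry on; the response is picked up later.
    request_queue.put({
        "correlation_id": "cmd-001",
        "command": "ReleaseOrder",
        "reply_to": response_queue,
    })
    return response_queue.get()   # consumer acts on this when it arrives

threading.Thread(target=provider, daemon=True).start()
print(consumer())
```

In a real deployment the `reply_to` field would carry the name of the consumer's private cloud queue rather than an object reference, but the asynchronous shape is the same.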

For Query messages the pattern is the same, but utilises a separate service for storing and retrieving data. The provider receives a query request message and, as part of actioning it, pushes the requested data into the store. The response message sent to the consumer contains enough detail for the consumer to pull the data from the store:
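A minimal sketch of the query flow, again using stdlib queues and a plain dict standing in for the cloud data store. The key format, message fields and the fake stock-level result are assumptions for illustration; the provider call is made synchronously here only to keep the sketch single-threaded:

```python
import queue

data_store = {}                  # stands in for cloud blob/table storage
request_queue = queue.Queue()
response_queue = queue.Ueue() if False else queue.Queue()

def provider_handle_query():
    # Provider pushes the result into the store and replies with the key.
    msg = request_queue.get()
    key = f"stock-levels/{msg['warehouse']}"
    data_store[key] = {"widgets": 42, "gadgets": 7}   # the (fake) query result
    response_queue.put({"correlation_id": msg["correlation_id"],
                        "data_key": key})

def consumer_query(warehouse):
    request_queue.put({"correlation_id": "qry-001", "warehouse": warehouse})
    provider_handle_query()                  # in reality, runs on the provider
    response = response_queue.get()
    return data_store[response["data_key"]]  # pull the data from the store

print(consumer_query("London"))
```

The response message stays small regardless of the size of the query result, which is what keeps large query responses off the messaging infrastructure.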

It's important to note that the consumers and service providers are physically as well as logically separated – they can be on completely different networks with no direct link in between. The same applies among the service providers – any number of nodes can subscribe to process messages from any location, and these can be cloud services too. Any component can participate in the solution provided it has Internet access. The implementation of the cloud components can be left abstract – as a third-party service, the actual implementation is not relevant to the design.

Compare this to a typical on-premise service bus implementation, for example using BizTalk with the ESB Toolkit:

The key differences:

  • Endpoints – a single request/response endpoint is used for all consumers. The implementation can be scaled, but the design is inherently less scalable than the multiple response endpoints used in a paired-queue service bus;
  • Communication patterns – the same pattern is used for all service types, the request passing through the ESB to the providers, and the response passing back through the ESB to the consumers. Large query responses and small command responses share the same infrastructure;
  • Locations – although consumers and service providers are logically separated, the components are not physically separated. Consumers need network access to the ESB and the ESB needs network access to the Service Providers – typically all components are on the same domain;
  • ESB implementation – being on-premise, the implementation of the bus is part of the design, so the BizTalk infrastructure needs to be accounted for.

A further advantage of the CQS version is the shared data source. It can use a simple lookup key for query responses, built from the request parameters. Long-lived data can remain in the store – when the provider receives a request for data it can check the store, and if the data is already present, all it needs to do is send the key to the consumer. For even lower latency, the key-generation algorithm can be shared, so the consumer can determine the data store key for a given request. That allows it to check the data source before sending a request, which could bypass the bus altogether.
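A shared key-generation algorithm can be as simple as a deterministic hash over the canonicalised request parameters. This sketch (function names, parameter names and the key format are all assumptions) shows both ends deriving the same key, so the consumer can probe the store before touching the bus:

```python
import hashlib

def data_store_key(service, **params):
    # Canonicalise: sort the parameters so argument order never
    # changes the key, then hash to a fixed-length store key.
    canonical = service + "|" + "|".join(
        f"{k}={params[k]}" for k in sorted(params))
    return hashlib.sha256(canonical.encode()).hexdigest()

data_store = {}   # stands in for the cloud data store

def query(service, **params):
    key = data_store_key(service, **params)
    if key in data_store:
        return data_store[key]          # cache hit: bypass the bus entirely
    # ...otherwise send the query request over the bus as usual;
    # here we just simulate the provider populating the store.
    data_store[key] = {"result": "fresh"}
    return data_store[key]

k1 = data_store_key("StockLevels", warehouse="London", sku="W-100")
k2 = data_store_key("StockLevels", sku="W-100", warehouse="London")
print(k1 == k2)   # parameter order doesn't change the key
```

Because both sides run the same deterministic function, no key ever needs to travel over the wire for repeat queries – the consumer computes it locally and only falls back to the bus on a store miss.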

The final advantage is the ease of getting started with a cloud service bus solution – in my next post I’ll walk through a sample implementation which is the product of half a day’s effort.