Introduction to Windows Azure AppFabric Caching CTP

This blog post provides a quick introduction to Windows Azure AppFabric Caching. You might also want to watch this short video introduction.

As mentioned in the previous post announcing the Windows Azure AppFabric CTP October Release, we’ve just introduced Windows Azure AppFabric Caching, which provides a distributed, in-memory cache, implemented as a cloud service. 

Earlier this year we delivered Windows Server AppFabric Caching, which is our distributed caching solution for on-premises applications.  But what do you do if you want this capability in your cloud applications?  You could set up a caching technology on instances in the cloud, but you would end up installing, configuring, and managing your cache server instances yourself.  That really defeats one of the main goals of the cloud – to get away from managing all those details.

So as we looked for a caching solution in Windows Azure AppFabric, we wanted to deliver the same capabilities available in Windows Server AppFabric Caching, and in fact the same developer experience and APIs for Windows Azure applications, but in a way that provides the full benefit of cloud computing.  The obvious solution was to deliver Caching as a service. 

To start off, let’s look at how you set up a cache. First you’ll need to go to the Windows Azure AppFabric LABS environment developer portal (http://portal.appfabriclabs.com/) to set up a Project and, under that, a Service Namespace. Then you simply click the “Cache” link to configure a cache for this namespace.

With no sweat on your part you now have a distributed cache set up for your application.  We take care of all the work of configuring, deploying, and maintaining the instances. 

The next screen gives you the Service URL for your cache and an Authentication token you can copy and paste into your application to grant it access to the cache. 

 

So how do you use Caching in your application?

First, the caching service comes with out-of-the-box ASP.NET providers for both session state and page output caching.  This makes it extremely easy to leverage these providers to quickly speed up your existing ASP.NET applications by simply updating your web.config files.  We even give you the configuration elements in the developer portal (see above) that you can cut and paste into your web.config files. 
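As an illustration, the session-state hookup in web.config looks roughly like the following sketch. The host name, token, and cache name here are placeholders; the portal generates the exact elements for your namespace, so always copy those rather than this:

```xml
<!-- Sketch only: copy the real elements from the developer portal. -->
<configuration>
  <configSections>
    <section name="dataCacheClient"
             type="Microsoft.ApplicationServer.Caching.DataCacheClientSection, Microsoft.ApplicationServer.Caching.Core"
             allowLocation="true" allowDefinition="Everywhere" />
  </configSections>
  <dataCacheClient deployment="Simple">
    <hosts>
      <!-- Service URL from the portal (placeholder namespace) -->
      <host name="yournamespace.cache.appfabriclabs.com" cachePort="22233" />
    </hosts>
    <securityProperties mode="Message">
      <!-- Authentication token from the portal -->
      <messageSecurity authorizationInfo="[Authentication token]" />
    </securityProperties>
  </dataCacheClient>
  <system.web>
    <sessionState mode="Custom" customProvider="AppFabricCacheSessionStoreProvider">
      <providers>
        <add name="AppFabricCacheSessionStoreProvider"
             type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache"
             cacheName="default" />
      </providers>
    </sessionState>
  </system.web>
</configuration>
```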

You can also programmatically interact with the cache to store and retrieve data, using the same familiar API used in Windows Server AppFabric Caching.  The typical pattern used is called cache-aside, which simply means you check whether the data you need is in the cache before going to the database.  If it’s in the cache, you use it, speeding up your application and alleviating load on the database.  If the data is not in the cache, you retrieve it from the database and store it in the cache so it’s available the next time the application needs it.
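A minimal cache-aside sketch using the DataCache API might look like this; the cache key scheme, the Product type, and the LoadProductFromDatabase helper are illustrative assumptions, not part of the service:

```csharp
// Requires the Microsoft.ApplicationServer.Caching client assemblies,
// with connection details supplied in the dataCacheClient config section.
DataCache cache = new DataCacheFactory().GetDefaultCache();

// Cache-aside: check the cache first, fall back to the database on a miss.
Product product = (Product)cache.Get("Product:42");
if (product == null)
{
    // Miss: load from the database and prime the cache for next time.
    product = LoadProductFromDatabase(42);   // hypothetical data-access call
    cache.Put("Product:42", product, TimeSpan.FromMinutes(10));
}
```

Note that Put with a TimeSpan gives the item an explicit time-to-live, which is a simple way to bound staleness in a cache-aside design.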

The delivery of Caching as a service can be seen as our first installment on the promise of AppFabric – to provide a consistent infrastructure for building and running applications whether they are running on-premises or in the cloud. You can expect more cross-pollination between Windows Server AppFabric and Windows Azure AppFabric in the future.

We invite you to play with the Caching CTP and give us feedback on how it works for you and what features you would like to see added.  One great way to give feedback is to fill out this survey on your distributed caching usage and needs.

As we move towards commercial launch, we’ll look to add many of the features that make Windows Server AppFabric Caching extremely popular, such as High Availability, the ability to emit notifications to clients when they need to refresh their local cache, and more.

 

New Services and Enhancements with the Windows Azure AppFabric

Today’s an exciting day!  During the keynote this morning at PDC10, Bob Muglia announced a wave of new building block services and capabilities for the Windows Azure AppFabric.  The purpose of the Windows Azure AppFabric is to provide a comprehensive cloud platform for developing, deploying and managing applications, extending the way you build Windows Azure […]

Changing the game: BizTalk Server 2010 and the Road Ahead

What’s next for BizTalk?  As excited as we are about the recent announcement that we shipped BizTalk Server 2010 (see blog post), we know that customers depend upon us to give them visibility into the longer-term roadmap; given the lifespan of their enterprise systems, an investment in BizTalk Server represents a significant bet on, and commitment to, the Microsoft platform.  While we are currently working through product planning for BizTalk vNext, we wanted to share some of the early direction to date. 

  • At PDC’09 last year, we discussed at a high level our strategy of betting deeply on AppFabric architecturally for BizTalk, so that it can benefit from the application platform-level investments we are making both on-premises and in the cloud.  This strategy has not changed, and in fact we are accelerating some of our investments; we started this journey in BizTalk Server 2010 with built-in integration with Windows Server AppFabric for maps and LOB connectivity (a feature called AppFabric Connect).
  • At PDC’10 this week we released-to-web a new innovative BizTalk capability which will allow you to bridge your existing BizTalk Server investments (services, orchestrations) with the Windows Azure AppFabric Service Bus – this new set of simplified tooling will help accelerate hybrid on/off premises composite application scenarios which we believe are critical to enable our customers to start taking advantage of the benefits of cloud computing (see blog post on this capability).
  • Also this week, we disclosed an early peek into our strategy of “Integration as a Service” which begins to shed light on how we will be taking the integration workload to the cloud.  This is a transition we have already made with Windows Server and SQL Server (as we have released Azure flavors of these server products); and we are committed to following this same path with integration. Link to recorded Integration session.

Our plans to deliver a true Integration service – a multi-tenant, highly scalable cloud service built on AppFabric and running on Windows Azure – will be an important and game-changing step for BizTalk Server, giving customers a way to consume integration easily without having to deploy extensive infrastructure and systems integration. Thanks to the agile delivery model afforded by cloud services, we are able to bring early CTPs out to customers much more rapidly than with traditional server software. We intend to offer a preview release of this Azure-based integration service during CY11, and to update it on a regular cadence of roughly six-month cycles (similar to how Windows Azure and SQL Azure deliver updates). This will give us the opportunity to respond rapidly to customer feedback and incorporate changes quickly.

However, regardless of the innovative investments we are making in the cloud, we know our BizTalk customers will want to know that these advantages can be applied on-premises (for either existing or new applications).  We are committed to delivering this new “Integration as a Service” capability on-premises on an AppFabric server-based architecture.  It will be available on the two-year cadence that is consistent with previous major releases of BizTalk Server and other Microsoft enterprise server products.

Additionally, knowing well that our existing 10,000+ customers will move to a new version only at their own pace and on their own terms, we are committed to not breaking our customers’ existing applications by providing side-by-side support for the current BizTalk Server 2010 architecture.  We will also continue to provide enhanced integration between BizTalk and AppFabric to enable them to compose well together as part of an end-to-end solution. This will preserve the investments you have made in building on BizTalk Server and enable easy extension into AppFabric (as we have delivered today with pre-built integration with both Windows Server AppFabric and Windows Azure AppFabric).

Another critical element is providing guidance to our customers on how best to deploy BizTalk and AppFabric together, in order to prepare for the future. At PDC this week we delivered the first CTP of the Patterns and Practices Composite Application Guidance, which provides practices and guidance for using BizTalk Server 2010, Windows Server AppFabric, and Windows Azure AppFabric together as part of an overall composite application solution. We will also soon be delivering a companion offering from Microsoft Services that provides expertise and strategic consulting on architecture and implementation for BizTalk Server and AppFabric, and we will work closely with our Virtual-TS community and partners to extend similar offerings. We will continue to update both the Composite Application Guidance and the consulting offering as we release our next-generation integration offerings, to help guide customers as they move to newer versions of our products and take advantage of our next-generation integration platform built natively on the AppFabric architecture.

We are excited to share these plans for the first time and prove our commitment to continue to innovate in the integration space. As BizTalk Server takes a bold step forward in its journey to harness the benefits of a new middleware platform, which will provide cloud and on-premises symmetry, we will make it a lot easier for our customers to build applications targeting cloud and hybrid scenarios. We look forward to delivering the first CTP of integration as a service to market next year!

Balasubramanian Sriram, General Manager, BizTalk Server & Integration

Burley Kawasaki, Director, Product Management

Azure in Action: Large File Transfer using Azure Storage

This webcast looks at the implementation of a source control plug-in for Visual Studio that uses Azure storage as its repository. It uses Azure table storage and blob storage to hold the source files along with project, file, and check-in information. The implementation provides a very cost-effective solution: a source control system without any infrastructure of your own.
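For flavor, storing a file in blob storage with the StorageClient library of that era looks roughly like this; the connection-string setting, container, and blob names are made-up examples:

```csharp
// Sketch: upload a source file to blob storage (v1.x Microsoft.WindowsAzure.StorageClient API).
var account = CloudStorageAccount.Parse(
    ConfigurationManager.AppSettings["StorageConnectionString"]);
var blobClient = account.CreateCloudBlobClient();

var container = blobClient.GetContainerReference("sources");
container.CreateIfNotExist();                       // no-op if it already exists

var blob = container.GetBlobReference("myproject/Program.cs");
blob.UploadFile(@"C:\src\myproject\Program.cs");    // check-in metadata would go to table storage
```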
The webcast is here.


Updating AppFabric Cache via SQL Service Broker External Activator

 Event Driven Updates


If I can’t have magic then I want something easy. I’d like changes that occur on certain tables to be reflected in AppFabric Cache, and I’d like this to happen in an event-driven manner.  “Use SqlDependency and be done with it,” you say? That’s certainly one answer, but it either couples your application to managing cache refreshes or means you’re deploying yet another domain-specific service.


External Activator is an engine designed to invoke external code in response to an event in SQL Server. This is exactly what I want. In this specific case I am using it to update my cache data, but it’s not hard to imagine many other use cases where this is useful, and being able to do it with a simple executable rather than a full-blown service has real advantages in terms of xcopy deployment and the like.


I want to be clear up front that this idea is in the investigatory stage. My results are encouraging but you should expect to do some analysis and tweaking if you attempt the technique for a production system.


Finally, before I jump in, this example assumes you have AppFabric Cache installed and working and that you have at least read about External Activator.
If you need more information about External Activator and Service Broker before starting, see the SQL Service Broker Team Blog.

 

Creating the Service Broker Queues


The first step was to download the External Activator from the feature pack page and diligently follow all of the directions.  Be sure to download the appropriate 64-bit or 32-bit MSI for your platform.


Next, I created a database called CacheUpdateSample, made sure to enable Service Broker, and granted the necessary access to the service account that the activator would run under.


Once that was done I issued the following commands:

use CacheUpdateSample
GO

CREATE MESSAGE TYPE GenericXml VALIDATION = WELL_FORMED_XML
GO

CREATE CONTRACT GenericContract
(
      GenericXml SENT BY ANY
)
GO

CREATE QUEUE MessageQueue
GO

CREATE QUEUE UpdateCacheQueue
GO

CREATE SERVICE MessageQueueService ON QUEUE MessageQueue (GenericContract)
GO

CREATE SERVICE UpdateCacheService
ON QUEUE UpdateCacheQueue
(
      [http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]
);
GO

CREATE EVENT NOTIFICATION UpdateCacheNotification
ON QUEUE MessageQueue
FOR QUEUE_ACTIVATION
TO SERVICE 'UpdateCacheService', 'current database'
GO


This set up the service broker infrastructure I needed. Now I had to figure out a way to get messages flowing.
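For completeness, the piece that ties the notification to an executable is External Activator's EAService.config. A sketch follows; the server name, paths, and application name are my placeholders, and the queue names match the SQL above (notifications about MessageQueue arrive via UpdateCacheService):

```xml
<!-- EAService.config sketch: map the watched queue to the executable to launch. -->
<Activator>
  <NotificationServiceList>
    <NotificationService name="UpdateCacheService" id="100" enabled="true">
      <Description>Receives QUEUE_ACTIVATION event notifications</Description>
      <ConnectionString>
        <!-- placeholder server name -->
        <Unencrypted>server=MYSERVER;database=CacheUpdateSample;Application Name=External Activator;Integrated Security=true;</Unencrypted>
      </ConnectionString>
    </NotificationService>
  </NotificationServiceList>
  <ApplicationServiceList>
    <ApplicationService name="CacheUpdater" enabled="true">
      <OnNotification>
        <ServerName>MYSERVER</ServerName>
        <DatabaseName>CacheUpdateSample</DatabaseName>
        <SchemaName>dbo</SchemaName>
        <QueueName>MessageQueue</QueueName>
      </OnNotification>
      <LaunchInfo>
        <ImagePath>C:\CacheUpdater\CacheUpdater.exe</ImagePath>
        <CmdLineArgs></CmdLineArgs>
        <WorkDir>C:\CacheUpdater</WorkDir>
      </LaunchInfo>
      <ConcurrencyInfo>
        <Min>1</Min>
        <Max>1</Max>
      </ConcurrencyInfo>
    </ApplicationService>
  </ApplicationServiceList>
</Activator>
```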

 

Service Broker Shenanigans

The vehicle I chose to drive the updates within the database might be viewed as slightly unconventional:

create procedure [dbo].[uspSendGeneric]
(@xml xml)
as
begin
      DECLARE @dh UNIQUEIDENTIFIER;

      BEGIN DIALOG CONVERSATION @dh
      FROM SERVICE [MessageQueueService]
      TO SERVICE 'MessageQueueService', 'current database'
      ON CONTRACT GenericContract
      WITH ENCRYPTION = OFF;

      SEND ON CONVERSATION @dh MESSAGE TYPE GenericXml(@xml);
end
GO


If you look closely you’ll see that it’s talking to itself!


I did this because it was easy to make the updates fire the way I wanted them to and it was easy to make sure I cleaned up my conversations. There is a lot of material out there on conversation patterns, serializing conversation handles, explaining why both sides should end the conversation and all kinds of things that are very interesting but I just wanted the thing to fire when I wanted it to and go away when I was done and this worked well in my testing.


Once I had my narcissistic procedure done, I hooked it up to a simple trigger:

create trigger [dbo].[sendGenericTrigger]
on [dbo].[TestTable] FOR Insert, Update
as
begin
      declare @xml xml;
      set @xml = (select * from inserted for xml auto, root('SSBData'), elements);
      exec uspSendGeneric @xml;
end
GO

 

That Which Is Invoked

Once the database side is all hooked up it’s time to create something for the External Activator to activate. So, I quickly created a small application to drive the cache updates.


The heart of main looks like this:

using (var con = new SqlConnection(ConfigurationManager.ConnectionStrings[dbKey].ConnectionString))
{
    con.Open();
    var clean = ProcessMessages(con);
    EndConversations(con, clean);
    con.Close();
}


The ProcessMessages function retrieves the messages, populates the cache, and returns the conversation handles. The EndConversations routine spins through the handles and closes them.
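A bare-bones version of those two routines might look like the sketch below. The cache key and the way each row is pushed into the cache are simplifying assumptions on my part; a real implementation would parse the XML into your actual cache entries:

```csharp
// Sketch: drain MessageQueue, push rows into the cache, and collect handles to close.
static List<Guid> ProcessMessages(SqlConnection con)
{
    var handles = new List<Guid>();
    var cache = new DataCacheFactory().GetDefaultCache();

    using (var cmd = new SqlCommand(
        "RECEIVE conversation_handle, message_type_name, " +
        "CAST(message_body AS XML) AS body FROM MessageQueue", con))
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            handles.Add(reader.GetGuid(0));
            if (reader.GetString(1) == "GenericXml")
            {
                var row = XElement.Parse(reader.GetSqlXml(2).Value);
                cache.Put("TestTable", row.ToString());   // naive key; adapt to your schema
            }
        }
    }
    return handles;
}

static void EndConversations(SqlConnection con, IEnumerable<Guid> handles)
{
    foreach (var h in handles)
    {
        using (var cmd = new SqlCommand("END CONVERSATION @h", con))
        {
            cmd.Parameters.AddWithValue("@h", h);
            cmd.ExecuteNonQuery();
        }
    }
}
```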


When you read the sample code that contains the body of the routines, you’ll see they are very simple for demo purposes. In production you *could* develop an elaborate dispatching system using dynamic assembly loading, various validation checks, and so on. The one thing to keep in mind, however, is not to compromise the simplicity of the design. Prefer creating another event and executable that you map in the config file over a monolithic solution. By doing this you’ll be better able to take advantage of the flexibility of this approach.

Impressions


There were several things I liked about this technique:

  • It was faster than I expected, even though External Activator invokes the executable each time the event fires.
  • I was able to edit and recompile code without restarts or file locking.
  • Using the log file produced by External Activator was easy.


One facet I will investigate further, to see if there is a more elegant way to do things, is having the queue talk to itself. While this makes conversation cleanup easy, it does result in an extra invocation of the executable. Another is a detailed measurement of the cache hit/miss percentage using this method versus deploying a dedicated service.

Code for the sample can be found here.

 

 

 

Thanks to teammates Mark Simms, Emil Velinov, Jaime Alva Bravo and James Podgorski for their review

Windows Azure AppFabric SDK October Release available for download

A new version of the Windows Azure AppFabric SDK V1.0 is available for download starting 10/27. The new version introduces a fix for an issue that causes the SDK installation to roll back on 64-bit Windows Server machines with BizTalk Server installed.

The fix is included in the updated SDK download.  If you are not experiencing this issue, there is no need to download the new version of the SDK.

The Windows Azure AppFabric Team

AppFabric’s AutoStart feature: great news for BizTalk

Any of you who have been writing WCF front ends for BizTalk services will know about one of ASP.NET’s failings: first-request latency. A service (or application) hosted in IIS doesn’t start up and JIT itself fully until the first request is received.

And if low latency is important to you, this isn’t acceptable, as the first request can incur delays ranging anywhere from 2 to 60 seconds!

There are many common ways to…
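AppFabric's AutoStart feature addresses this by spinning services up when the application pool starts, before any request arrives. A rough sketch of the IIS 7.5 applicationHost.config wiring follows; the site name and application path are placeholders, and these are excerpts from two different sections of that file:

```xml
<!-- Sketch: excerpts from applicationHost.config enabling auto-start. -->
<sites>
  <site name="MySite">
    <application path="/OrderService"
                 serviceAutoStartEnabled="true"
                 serviceAutoStartProvider="Service" />
  </site>
</sites>

<serviceAutoStartProviders>
  <add name="Service"
       type="System.ServiceModel.Activation.ServiceAutoStartProvider, System.ServiceModel.Activation, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
</serviceAutoStartProviders>
```

The application pool itself should also be set to startMode="AlwaysRunning" so the warm-up happens without waiting for traffic.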

Server AppFabric for the BizTalk Developer Video Session

If you consider yourself a hard core BizTalk Developer, then this session is for you!

A few weeks ago I presented a session on Server AppFabric to the BizTalk User Group in Sweden.  The session was geared toward showing how similar Workflow 4.0 and AppFabric are to BizTalk, conceptually at least.  The goal was to show how we BizTalk folks can quickly pick up Workflow 4 and AppFabric because we already understand the concepts.

I would highly recommend this 60-minute session to all fellow BizTalkers out there!

This session is available on Channel 9 – http://channel9.msdn.com/Blogs/MSCOMSWE/Tech-Overview-WCFWF-Server-AppFabric-BizTalk-Conference-Stockholm

The code and slides are available for download at http://www.biztalkgurus.com/media/p/29973.aspx.

At the same conference, I gave a session covering the Windows Azure Platform AppFabric.  This 30-minute session covers a real-world Service Bus solution and a walkthrough of the code behind it.  It is available at http://channel9.msdn.com/Blogs/MSCOMSWE/Pattern-5–Remote-Message-Broadcast-BizTalk-Conference-Stockholm and the code and slides can be downloaded at http://www.biztalkgurus.com/media/p/29975.aspx.

 

If you are looking for other sessions from the multi-day conference in Sweden, below is a full list.  We covered a wide-range of technologies from AppFabric to StreamInsight.

Day 1 (Sessions from September 8th, 2010)

Welcome and Introduction

Choosing The Right Tool in the Application Platform
Discuss the challenge of choosing the right technology for a given situation and present a decision framework for guiding evaluation.

Tech Overview: SQL Server
Look at the core components of SQL Server that are used to build applications (e.g. SSIS) and when to use them.

Tech Overview: BizTalk Server
Discuss what BizTalk is and when to use it.

Tech Overview: WCF/WF, Server AppFabric
Highlight key capabilities in WCF and WF and benefits offered by Windows Server AppFabric.

Tech Overview: Windows Azure Platform
Discuss Microsoft’s cloud offering and best usage scenarios.

Pattern #1 – Simple Workflow
Evaluate scenario that involves aggregating data from multiple sources and presenting a unified response.

Day 2 (Sessions from September 9th, 2010)

Pattern #2 – Content Based Routing
Consider options for effectively transmitting data to multiple systems that perform similar functions.

Pattern #3 – Human Workflow with Repair and Resubmit
Showcase using Workflow 3.5 to send customer details to an AppFabric-hosted Workflow 4.0 workflow service.  This workflow service controls the payment collection process and allows updated information about a user to be sent back into the same running workflow instance from SharePoint.

Pattern #4 – Cross Organization Supply Chain
Evaluate how to build a supply chain to integrate systems in a PO scenario.

Pattern #5 – Remote Message Broadcast
Demonstrates a scenario where a traditional polling solution is augmented to support real-time updates.

Pattern #6 – Complex Event Processing
Addresses click stream analysis and creating actionable events from user and system behavior.