Sample on using SAML CredentialType to Authenticate to the Service Bus

To answer a popular request we’ve been getting from our customers, Wade Wegner, our AppFabric evangelist, has written a blog post that explains how to use the SAML CredentialType to authenticate to the Service Bus.

Read Wade’s blog post to learn more on this topic: Using the SAML CredentialType to Authenticate to the Service Bus.

If you have other AppFabric related requests please visit our Windows Azure AppFabric Feature Voting Forum.

Basics on how to set up an AppFabric Cache server and use its APIs

At this year’s SQL PASS summit, I had the opportunity to talk with customers about AppFabric Cache, so I prepared a demo to show some of the characteristics of the available APIs. The following is an explanation of how to set up the demo and what to expect when running it (as if you had attended), along with notes based on the most frequently asked questions.

The setup

For simplicity, everything will be contained on one machine. Start by downloading the appropriate Windows Server AppFabric bits and installing them; make sure you include the client and administration features. Run the configuration wizard and set it up with an XML configuration provider (XML rather than SQL only for simplicity; SQL is the recommended setup) and a small cluster, leaving all ports at their defaults. For more details, see Install Windows Server AppFabric and Configure Windows Server AppFabric.

Run the “Caching Administration Windows PowerShell” as an Administrator. To start the cache host, run the following command.

>Start-CacheHost

You will be prompted for the hostname and cache port. Enter the machine name and port 22233 (since the defaults were taken when running the configuration wizard); you should see the following:

HostName : CachePort      Service Name              Service Status   Version Info
--------------------      ------------              --------------   ------------
jaimelap2:22233           AppFabricCachingService   UP               1 [1,1][1,1]

Grant your user account access as an allowed client; in my case:

>Grant-CacheAllowedClientAccount redmond\jaimeab

Create the cache for the demo application and look at the current statistics

>new-cache default

>get-cachestatistics default

You will see the following stats since nothing has yet taken place:

Size : 0

ItemCount : 0

RegionCount : 0

RequestCount : 0

MissCount : 0

Running the demos

Here is the demo project (built in VS 2010 Ultimate). Make sure the two following references resolve correctly; otherwise, search for them by name and add them as references (if they cannot be found, run the setup repair).

Running the solution will show the following windows form:

 

Demo #1, simple object addition, retrieval and removal

As per the message in the Status text, first start by clicking the “Add” button; this adds a single entry into the cache, and the status will show “>>Add-Object Added to Cache [key=LockingDemo]”. Clicking the “Get” button will show the content of the added object, retrieved by leveraging the key “LockingDemo”; the status will show “>> Get Object and reference retrieved from cache. [key=LockingDemo, Object=Object stored in cache]”

At this point go back to the “Caching Administration Windows PowerShell” console and type:

>Get-CacheStatistics default

The newly added object is shown; make note of the RequestCount being at 1.

Hit the “Remove” button and then the “Get” button; you will see a failure status indicating that no object was found. The same can be seen if an object is added and retrieved after 15 minutes, the default time an item is kept in the cache; this is explored further in Demo #4 below.

The Add API can be thought of as similar to an insert statement, except it takes only a key-value pair. Note that the Get API merely retrieved whatever it found for the key in an optimistic manner; it did not care whether any other process had changed the object, as it simply retrieved the latest version.
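For reference, here is a minimal sketch of what the three buttons do against the cache, assuming the “default” cache created during setup; the console wrapper is illustrative, not the demo project’s actual code.

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

class AddGetRemoveSketch
{
    static void Main()
    {
        // "default" is the cache created earlier with new-cache;
        // the parameterless factory assumes client settings in app.config.
        DataCache cache = new DataCacheFactory().GetCache("default");

        cache.Add("LockingDemo", "Object stored in cache");  // throws if the key already exists
        string value = (string)cache.Get("LockingDemo");     // optimistic read: latest version, no locking
        Console.WriteLine(value);                            // "Object stored in cache"

        cache.Remove("LockingDemo");                         // a subsequent Get now returns null
    }
}
```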

 

Demo #2, GetIfNewer and Put

When the Get API was executed (in line 134 of Form1.cs), a DataCacheItemVersion was also collected; this can be thought of as a version stamp for the retrieved item. See line 143 in Form1.cs:

RetrievedStrFromCache = (string)myDefaultCache.Get(myKey, out myVersionBeforeChange);

For simplicity, say the item is retrieved as Version1. If another process then does a Put against the same item (using the same key), its version changes to, say, Version2, and the DataCacheItemVersion is updated in the same way. GetIfNewer leverages this versioning: pass the DataCacheItemVersion retrieved by the Get API into the GetIfNewer API, and it will return a NULL object if the object in the cache still has the same DataCacheItemVersion; otherwise, it returns the new (updated) object.

Go back to the demo form and click the “Add” button (if a valid object is still present, an “object already exists” exception will be raised), then the “GetIfNewer” button; you will get a message saying that a NULL object was retrieved. Now click the “Put” button (which can be thought of as an update statement) and then click the “GetIfNewer” button again; this time you will get the object that the Put API just updated, the value string having changed from “Object stored in cache” to “MOD 1 – Object stored in cache”. Note that clicking the “GetIfNewer” button once more returns a NULL item, since the version has not changed since the last time the API was executed; this also shows that the DataCacheItemVersion gets updated with each execution of the GetIfNewer API.

Similarly, executing GetIfNewer after doing a Remove followed by an Add will also retrieve an object, since these two operations will have, in effect, modified the object.
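A minimal sketch of this versioned retrieval, assuming the same “default” cache; the wrapper is illustrative and the key/value strings mirror the demo.

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

class GetIfNewerSketch
{
    static void Main()
    {
        DataCache cache = new DataCacheFactory().GetCache("default");
        cache.Add("LockingDemo", "Object stored in cache");

        // Get hands back the item's version stamp alongside the value
        DataCacheItemVersion version;
        string original = (string)cache.Get("LockingDemo", out version);

        object unchanged = cache.GetIfNewer("LockingDemo", ref version); // null: same version

        cache.Put("LockingDemo", "MOD 1 - Object stored in cache");      // Put bumps the version

        object updated = cache.GetIfNewer("LockingDemo", ref version);   // non-null: newer version found
        Console.WriteLine(updated);                                      // "MOD 1 - Object stored in cache"
    }
}
```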

 

Demo #3, Contention APIs: GetAndLock, Unlock and PutAndUnlock

As explained, the Get API does an optimistic retrieval, but there are cases where a pessimistic retrieval may be preferred. This is a supported scenario for resource data (shared reads and writes across multiple users). For those cases, the GetAndLock API allows locking an object for a given period of time; the demo locks it for 5 seconds. Two other things to keep in mind are the key and the DataCacheLockHandle, as both are needed to handle the locking semantics.

Click the “Add” button; if an object already exists, an exception will be shown. Then click the “GetAndLock” button; this collects a DataCacheLockHandle object that will be required to gracefully unlock the object (the Put API may also unlock it without requiring a DataCacheLockHandle). Clicking the “GetAndLock” button again generates an exception explaining that the object is already locked. Before the 5-second lock expires, hit the “Unlock (Bad)” button; this generates an exception because, even though the correct key is used, an incorrect DataCacheLockHandle is employed. If the “GetAndLock” button is pressed right after (before the 5-second lock expires), the exception showing that the object is still locked will appear, since the previous unlock call was unsuccessful.

When the “Unlock (ok)” button is pressed, the object is gracefully unlocked, as this method uses the correct key and DataCacheLockHandle; pressing the “GetAndLock” button will then not show an exception but will instead lock the object again for 5 seconds.

The “PutAndUnlock” button will also correctly unlock the object but, as stated by its name, it will update the object at the same time.

NOTE that the “Put” button will override the lock semantics; that is, the object will be unlocked and updated without requiring a DataCacheLockHandle. This is because the intent of the lock APIs is for all applications to honor them cooperatively; for locking to be honored, it is incumbent upon the clients to use the lock methods, since bypassing them overrides the locking guarantees.
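A minimal sketch of the locking flow described above, assuming the same “default” cache; the wrapper is illustrative.

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

class LockingSketch
{
    static void Main()
    {
        DataCache cache = new DataCacheFactory().GetCache("default");
        cache.Add("LockingDemo", "Object stored in cache");

        // Pessimistic read: lock the item for 5 seconds
        DataCacheLockHandle lockHandle;
        object item = cache.GetAndLock("LockingDemo", TimeSpan.FromSeconds(5), out lockHandle);

        // A second GetAndLock here would throw until the lock expires or is released.

        cache.Unlock("LockingDemo", lockHandle);   // graceful release needs the matching handle

        // Or update and release in a single call:
        cache.GetAndLock("LockingDemo", TimeSpan.FromSeconds(5), out lockHandle);
        cache.PutAndUnlock("LockingDemo", "MOD 1 - Object stored in cache", lockHandle);

        // Caution: a plain Put ignores any outstanding lock; locking is cooperative.
    }
}
```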

 

Demo #4, Enable Local cache

This demo goes over the ability to cache objects locally, which saves a round trip to the cache cluster and, in essence, eliminates the network delay to and from the cache server.

To start the demo start the “Caching Administration Windows PowerShell” tool and run these commands

>restart-cachecluster

>get-cachestatistics default

The purpose is to purge all currently cached objects; the statistics should reflect this (all zeros).

Click the “Add” button; as seen in Demo #1, after adding an object the statistics show a RequestCount of 1. Now choose “Enabled” from the local cache drop-down and click the “Get” button a few times. Go back to the “Caching Administration Windows PowerShell” tool and run the “Get-CacheStatistics default” command again. The RequestCount should still be 1, because the requests were served locally and never registered with the cluster. Note that if you do a GetIfNewer, the RequestCount will increase, since that API cannot be served locally; it needs to check with the cache hosts.

To achieve this, two DataCacheFactories are leveraged – see lines 54~59 in Form1.cs. The DataCacheFactory named “myCacheFactory” is the one used for the non-local cache, and the one named “myCacheFactory_LC” is the one set up for the local cache; its configuration property is defined in line 57, as follows.

//Create the local cache cacheFactory constructor, kept for 5 min.

configuration.LocalCacheProperties = new DataCacheLocalCacheProperties(1000, new TimeSpan(0, 5, 0), DataCacheLocalCacheInvalidationPolicy.TimeoutBased);

Note that the timeout is set to 5 minutes, so if, after adding an object with the local cache enabled, the “Get” button is clicked after 5 minutes, the RequestCount will go up, as the local cache item will have expired.
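A hedged sketch of the two-factory setup the demo describes (the parameterless factory constructor assumes client settings in app.config; the wrapper is illustrative):

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

class LocalCacheSketch
{
    static void Main()
    {
        // Factory without local cache: every Get is a round trip to the cluster
        DataCacheFactory myCacheFactory = new DataCacheFactory();

        // Factory with local cache: up to 1000 objects, invalidated 5 minutes after load
        DataCacheFactoryConfiguration configuration = new DataCacheFactoryConfiguration();
        configuration.LocalCacheProperties = new DataCacheLocalCacheProperties(
            1000, new TimeSpan(0, 5, 0), DataCacheLocalCacheInvalidationPolicy.TimeoutBased);
        DataCacheFactory myCacheFactory_LC = new DataCacheFactory(configuration);

        DataCache remote = myCacheFactory.GetCache("default");
        DataCache local = myCacheFactory_LC.GetCache("default");

        remote.Put("key", "value");
        local.Get("key");   // first Get hits the cluster and seeds the local cache
        local.Get("key");   // now served from local memory: cluster RequestCount stays flat
    }
}
```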

What is Missing in BizTalk 2010? Best Idea Wins a Signed Copy of Applied Architecture Patterns on the Microsoft Platform

I have another copy of my book “Applied Architecture Patterns on the Microsoft Platform” to give away.  The book is a $60 value and I will cover the shipping costs to the winner any place in the world as long as the US Postal Service ships there.

Contest #2 – What is missing in BizTalk Server 2010?  What feature, concept, UI update, or tooling would make BizTalk a stronger, more robust, simpler, or easier product to work with?  

As a special bonus, I was able to get the book signed by co-author Richard Seroter.  So someday when he becomes Microsoft CEO maybe this will be worth some money :).

How to Enter:
Simply add a comment to this blog post with your idea to enter!  Ensure you are a registered BizTalkGurus.com member so I know your email address if you win, OR send an email to contest@biztalkgurus.com with your idea.  Please ensure you see your comment show up on this blog; due to the spam blocker and caching, it might take up to 24 hours.  If all else fails, just send me an email.

Entries must be received by 11 PM CST Tuesday, December 7th.  The winner will be announced a few days later.

Of course if you do not win a free copy of the book – the book is available on Amazon.com and PacktPub.com.

Looking for more information on our book?  Read a sample chapter online Chapter 12 Debatching Bulk Data or watch The Story Behind The Book on YouTube.

Congratulations to Rohit Sharma who won the Best Pattern Contest a few weeks ago!

Best of luck! 

Keyboard and the Windows Phone 7 emulator

By default, the WP7 emulator will not react to you typing on the keyboard; instead, you have to use the mouse to press keys on the software keyboard that pops up.

There is, however, a hidden feature that will let you use your hardware keyboard: just press Page Up when the software keyboard pops up.

And as an extra bonus, the three buttons at the bottom of the phone can be activated using F1, F2, and F3 for back, home, and search respectively.

 

A cool feature that makes development using the emulator a lot easier.

 

Enjoy!

www.TheProblemSolver.nl
Wiki.WindowsWorkflowFoundation.eu

New Azure SDK version 1.3 released

A big new release, Windows Azure SDK v1.3, is available. If you haven’t installed it before, follow the instructions at the bottom of the download link. One of the features I am interested in is the Virtual Machine (VM) Role, still in beta, which allows you to create a custom VHD image using Windows Server 2008 R2 and host it in the cloud. If you are thinking about hosting a VHD with BizTalk, please review these considerations recently posted by Sam Vanhoutte.

Technorati:microsoft azure

Hybrid Cloud Solutions With Windows Azure AppFabric Middleware

Abstract

Technical and commercial forces are causing Enterprise Architects to evaluate moving established on-premises applications into the cloud – the Microsoft Windows Azure Platform.

This blog post will demonstrate that there are established application architectural patterns that can get the best of both worlds: applications that continue to live on-premises while interacting with other applications that live in the cloud – the hybrid approach.  In many cases, such hybrid architectures are not just a transition point — but a requirement since certain applications or data is required to remain on-premises largely for security, legal, technical and procedural reasons.

Cloud is new, hybrid cloud even newer. There are a bunch of technologies that have just been released or announced so there is no one book or source for authentic information, especially one that compares, contrasts,  and ties it all together. This blog, and a few more that will follow, is an attempt to demystify and make sense of it all.

This blog begins with a brief review of two prevalent deployment paradigms and their influence on architectural patterns: On-premises and Cloud. From there, we will delve into a discussion around developing the hybrid architecture.

In authoring this blog posting we take an architect’s perspective and discuss the major building block components that compose this hybrid architecture. We will also match requirements against the capabilities of available and announced Windows Azure and Windows Azure AppFabric technologies. Our discussions will also factor in the usage costs and strategies for keeping these costs in check.

The blog concludes with a survey of interesting and relevant Windows Azure technologies announced at Microsoft PDC 2010 – Professional Developer’s Conference during October 2010.

On-Premises Architectural Pattern

On-premises solutions are usually designed so that applications (client or browser based) access data repositories and services hosted inside a corporate network. In the graphic below, the Web Server and Database Server are within a corporate firewall, and the user, via the browser, is authenticated and has access to data.

Figure 1: Typical On-Premises Solution (Source – Microsoft.com)

The Patterns and Practices Application Architecture Guide 2.0 provides an exhaustive survey of the on-premises Applications. This Architecture Guide is a great reference document and helps you understand the underlying architecture and design principles and patterns for developing successful solutions on the Microsoft application platform and the .NET Framework.

The Microsoft Application Platform is composed of products, infrastructure components, run-time services, and the .NET Framework, as detailed in the following table (source: .NET Application Architecture Guide, 2nd Edition).

Table 1: Microsoft Application Platform (Source – Microsoft.com)

The on-premises architectural patterns are well documented (above), so in this blog post we will not dive into prescriptive technology selection guidance; however, let’s briefly review some core concepts, since we will reference them while elaborating hybrid cloud architectural patterns.

Hosting The Application

Commonly* on-premises applications (with the business logic) run on IIS (as an IIS w3wp process) or as a Windows Service Application. The recent release of Windows Server AppFabric makes it easier to build, scale, and manage Web and Composite (WCF, SOAP, REST and Workflow Services) Applications that run on IIS. 

* As indicated in Table 1 (above), Microsoft BizTalk Server is capable of hosting on-premises applications; however, for the sake of this blog, we will focus on Windows Server AppFabric.

Accessing On-Premises Data

Your on-premises applications may access data from the local file system or network shares. They may also utilize databases hosted in Microsoft SQL Server or other relational and non-relational data sources. In addition, your applications hosted in IIS may well be leveraging Windows Server AppFabric Cache for your session state, as well as other forms of reference or resource data.

Securing Access

Authenticated access to these data stores is traditionally performed by inspecting certificates, user name and password values, or NTLM/Kerberos credentials. These credentials are defined either in the data sources themselves (such as SQL Logins), in local machine accounts, or in directory services (LDAP) such as Microsoft Windows Server Active Directory. They are generally verifiable within the same network, but typically not outside of it unless you are using Active Directory Federation Services (ADFS).

Management

Every system exposes a different set of APIs and a different administration console to change its configuration – which obviously adds to the complexity; e.g., Windows Server for configuring network shares, SQL Server Management Studio for managing your SQL Server Databases; and IIS Manager for the Windows Server AppFabric based Services.

Cloud Architectural Pattern

Cloud applications typically access local resources and services, but they can also interact with remote services running on-premises or in the cloud. Cloud applications are usually hosted by a ‘provider managed’ runtime environment that provides hosting services, computational services, storage services, queues, management services, and load balancers. In summary, cloud applications consist of two main components: those that execute application code and those that provide data used by the application. To quickly acquaint you with the cloud components, here is a contrast with popular on-premises technologies:

Table 2: Contrast key On-Premises and Cloud Technologies

Category                  On-Premises                       Cloud
Application Hosting       Windows Server                    Windows Azure
Message Queue             MSMQ                              Azure Queue
Unstructured Data Store   Windows Folder, Windows Drives    Azure Blobs, Azure Table, Azure Drive (XDrive)
Relational Data Store     SQL Server                        SQL Azure
Application Cache         AppFabric Cache                   Azure AppFabric Cache

NOTE: This table is not intended to compare and contrast all Azure technologies. Some of them may not have an equivalent on-premise counterpart.

The graphic below presents an architectural solution for ‘Content Delivery’. While content creation and its management happen via on-premises applications, content storage (Azure Blob Storage) and delivery happen via the cloud – the Azure Platform infrastructure. In the following sections we will review the components that enable this architecture.

Figure 2: Typical Cloud Solution

Running Applications in Windows Azure

Windows Azure is the underlying operating system for running your cloud services on the Windows Azure platform. The three core services of Windows Azure, in brief, are as follows:

  1. Compute: The compute or hosting service offers scalable hosting of services on the 64-bit Windows Server platform with Hyper-V support. The platform is virtualized and designed to scale dynamically based on demand. The Azure platform runs web roles on Internet Information Services (IIS) and worker roles as Windows Services.
  2. Storage: There are three types of storage supported in Windows Azure: Table Services, Blob Services, and Queue Services. Table Services provide storage capabilities for structured data, whereas Blob Services are designed to store large unstructured files, like videos, images, and batch files, in the cloud. Table Services are not to be confused with SQL Azure; typically, you can store high-volume data in low-cost Azure Storage and use the (relatively) expensive SQL Azure to store indexes to this data. Finally, Queue Services are asynchronous communication channels for connecting services and applications, not only within Windows Azure but also from on-premises applications. Caching Services, currently available via Windows Azure AppFabric LABS, is another strong storage option.
  3. Management: The management service brings automated infrastructure and service management capabilities to Windows Azure cloud services. These capabilities include automatic and transparent provisioning of virtual machines and deploying services in them, as well as configuring switches, access routers, and load balancers.

A detailed review on the above three core services of Windows Azure is available in the subsequent sections.

Application code execution on Windows Azure is facilitated by Web and Worker roles. Typically, you would host Websites (ASP.NET, MVC2, CGI, etc.) in a Web Role and host background or computational processes in a Worker role. The on-premises equivalent for a Worker role is a Windows Service.

This architecture leaves a gray area: where should you host WCF services? The answer is, it depends! When building on-premises services, you can host WCF services in a Windows Service (e.g., in BizTalk, WCF receive locations can be hosted within an in-process host instance that runs as a Windows Service), and in Windows Azure the equivalent would be a Worker role. While you can host a WCF service (REST or SOAP) in either a Worker role or a Web role, you will typically host these services in a Web role; the one exception is when your WCF service specifically needs to communicate via TCP/IP endpoints. Web roles are capable of exposing HTTP and HTTP(S) endpoints, while Worker roles add the ability to expose TCP endpoints.

Storing & Retrieving Cloud Hosted Data

Applications that require storing data in the cloud can leverage a variety of technologies, including SQL Azure, Windows Azure Storage, Azure AppFabric Cache, Virtual Machine (VM) instance local storage, and Azure Drive (XDrive). Determining the right technology is a balancing exercise in managing trade-offs, costs, and capabilities. The following sections attempt to provide prescriptive guidance on the usage scenarios each storage option is best suited for.

The key takeaway: whenever possible, co-locate data and the consuming application. Your data in the cloud, stored in SQL Azure or Windows Azure Storage, should be accessible to your applications via the ‘local’ Azure “fabric”. This has positive performance implications and a beneficial cost model when both the application and the data live within the same datacenter. Co-location is enabled via the ‘Region’ you choose for the Microsoft data center.

SQL Azure

SQL Azure provides an RDBMS in the cloud and functions, for all intents and purposes, like your on-premises version of SQL Server. This means your applications can access data in SQL Azure using the Tabular Data Stream (TDS) protocol; in other words, your application uses the same data access technologies (e.g., ADO.NET, LINQ to SQL, EF, etc.) used by on-premises applications to access information on a local SQL Server. The only thing you need to change is the SQL client connection string, pointing it at the server and database hosted by SQL Azure. Of course, I am glossing over some details: the SQL Azure connection string contains other specific settings, such as encryption and an explicit username (in the very strict format username@servername) and password – I am sure you get the drift here.
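For instance, here is a hedged sketch of such a connection (the server, database, and login names are placeholders; the rest is ordinary ADO.NET):

```csharp
using System.Data.SqlClient;

class SqlAzureConnectionSketch
{
    static void Main()
    {
        // Server, database, and login names below are placeholders;
        // note the username@servername format and explicit encryption.
        string connectionString =
            "Server=tcp:myserver.database.windows.net;Database=mydb;" +
            "User ID=myuser@myserver;Password=...;Encrypt=True;";

        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();  // from here on, ordinary ADO.NET code applies
        }
    }
}
```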

SQL Azure Labs demonstrates an alternative mechanism for allowing your application to interact with SQL Azure via OData. OData provides the ability to perform CRUD operations using REST and can retrieve query results formatted as AtomPub or JSON.

Via .NET programming, you can interact at a higher level using a DataServicesContext and LINQ.

Securing Access

When using TDS, access is secured using SQL accounts (i.e., your SQL login and password must be in the connection string); however, there is one additional layer of security you will need to be aware of. In most on-premises architectures, your SQL Server database lives behind a firewall, or at most in a DMZ, and is rarely exposed directly; no matter where they are located, these repositories are not directly accessible to applications living outside the corporate boundaries. Of course, even for on-premises solutions, you can use IPSEC to restrict the (range of) machines that can access the SQL Server machine. Typically, access to these databases is mediated and controlled by web services (SOAP, WCF).

What happens when your database lives on Azure, in an environment effectively available to the entire big bad world? SQL Azure also provides a firewall that you can configure to restrict access to a set of IP address ranges, which you would configure to include the addresses used by your on-premises applications, as well as any Microsoft Services (data center infrastructure which enable access from Azure hosted services like your Web Roles). In the graphic below you will notice the ‘Firewall Settings’ tab wherein the IP Address Range is specified.

Figure 3: SQL Azure Firewall Settings

When using OData, access is secured by configuration. You can enable anonymous access where the anonymous account maps to a SQL Login created on your database, or you can map Access Control Service (ACS) identities to specific SQL Logins, whereby ACS takes care of the authentication. In the OData approach, the Firewall access rules are effectively circumvented because only the OData service itself needs to have access to the database hosted by SQL Azure from within the datacenter environment.

Management

You manage your SQL Azure database the same way you might administer an on-premises SQL Server database, by using SQL Server Management Studio (SSMS). Alternately, and provided you allowed Microsoft Services in the SQL Azure firewall, you can use the Database Manager for SQL Azure (codename “Project Houston”) which provides the environment for performing many of the same tasks you would perform via SSMS.

The Database Manager for SQL Azure is a part of the Windows Azure platform developer portal refresh, and will provide a lightweight, web-based database management and querying capability for SQL Azure databases. This capability allows customers to have a streamlined experience within the web browser without having to download any tools.

Figure 4: Database Manager for SQL Azure (Source – PDC 10)

One of the primary technical differences between the traditional SSMS management option and “Project Houston” is that SSMS requires the use of port 1433, whereas “Project Houston” is web-based and leverages port 80. An awesome demo of the capabilities of the Database Manager is available here.

Typical Usage

SQL Azure is best suited for storing data that is relational, specifically where your applications expect the database server to perform the join computation and return only the processed results. SQL Azure is a good choice for scenarios with high transaction throughput (view case studies here), since it has a flat-rate pricing structure based on the size of data stored, and query evaluation is distributed across multiple nodes.

Windows Azure Storage (Blobs, Tables, Queues)

Windows Azure Storage provides storage services for data in the form of metadata-enriched blobs, dictionary-like tables, and simple persistent queues. You access data via HTTP or HTTP(S) by leveraging the StorageClient library, which provides three classes (CloudBlobClient, CloudTableClient, and CloudQueueClient, respectively) to access the storage services. For example, CloudTableClient provides a DataServices context enabling use of LINQ for querying table data. All data stored by Windows Azure Storage can also be accessed via REST. AppFabric CAT (team) plans to provide code samples via this blog site to demonstrate this functionality – stay tuned.
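As a hedged illustration of the StorageClient classes just mentioned (the account name, key, container, and blob names are placeholders, and the call shapes reflect the 1.x StorageClient library), a round trip through blob storage might look like this:

```csharp
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class BlobSketch
{
    static void Main()
    {
        // Account name and key are placeholders.
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...");

        CloudBlobClient blobClient = account.CreateCloudBlobClient();
        CloudBlobContainer container = blobClient.GetContainerReference("documents");
        container.CreateIfNotExist();

        CloudBlob blob = container.GetBlobReference("report.txt");
        blob.UploadText("hello from the cloud");    // write
        Console.WriteLine(blob.DownloadText());     // read back
    }
}
```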

Securing Access

Access to any of the Windows Azure Storage blobs, tables, or queues is provided through symmetric key authentication, whereby an account name and account key pair must be included with each request. In addition, access to Azure blobs can be secured via the mechanism known as Shared Access Signatures.

Management

Currently there is no single tool for managing Windows Azure Storage; there are quite a few ‘samples’ as well as third-party tools. The graphic below is from Azure Storage Explorer, a tool available for download via CodePlex.

Figure 5: Azure Storage Explorer (Source – Codeplex)

In general, a Windows Azure storage account, which wraps access to all three forms of storage services, is created through the Windows Azure Portal but is effectively managed using the APIs.

Typical Usage

Windows Azure storage provides unique value propositions that make it fit within your architecture for diverse reasons.

Although Blob storage is charged both for the amount of storage used and for the number of storage transactions, the pricing model is designed to scale transparently to any size. This makes it best suited to the task of storing files and larger objects (high-resolution videos, radiology scans, etc.) that can be cached or are otherwise not frequently accessed.

Table storage is billed the same way as Blob Storage, but its unique approach to indexing at large scale makes it useful in situations where you need to efficiently access data from datasets larger than the 50 GB per database maximum of SQL Azure, and where you don’t expect to be performing joins or are comfortable enabling the client to download a larger chunk of data and perform the joins and computations on the client side.

Queues are best suited for storing pointers to work that needs to be done, due to their limited storage capacity of 8 KB per message, in a manner that ensures ordered access. Often you will use queues to drive the work performed by your Worker roles, but they can also be used to drive the work performed by your on-premises services. Bottom line: queues can be used effectively by a Web role to exchange control messages asynchronously with a Worker role running within the same application, or to exchange messages with on-premises applications.
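A hedged sketch of that Web-role/Worker-role hand-off (the connection string, queue name, and message contents are placeholders; the calls reflect the 1.x StorageClient library):

```csharp
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class QueueSketch
{
    static void Main()
    {
        // Account name and key are placeholders.
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...");

        CloudQueueClient queueClient = account.CreateCloudQueueClient();
        CloudQueue queue = queueClient.GetQueueReference("workitems");
        queue.CreateIfNotExist();

        // Producer (e.g., a Web role) enqueues a pointer to the work
        queue.AddMessage(new CloudQueueMessage("process-order:12345"));

        // Consumer (e.g., a Worker role) dequeues, processes, then deletes
        CloudQueueMessage message = queue.GetMessage();
        if (message != null)
        {
            Console.WriteLine(message.AsString);
            queue.DeleteMessage(message);
        }
    }
}
```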

Azure AppFabric Cache

The Azure AppFabric Cache, currently in CTP/Beta as of November 2010, should give you (technology is pre-release and hence the caveat!) the same high-performance, in-memory, distributed cache available with Windows Server AppFabric, as a hosted service. Since this is an Azure AppFabric Service, you are not responsible for managing servers participating in the data cache cluster. Your Windows Azure applications can access the cached data through the client libraries in the Windows Azure AppFabric SDK.

Securing Access

Access to a cache is secured by a combination of generated authentication tokens and authorization rules (e.g., Read/Write versus Read-only) defined in the Access Control Service.

Figure 6: Access Control for Cache

Management

At this time, creating a cache, as well as securing access to it, is performed via the Windows Azure AppFabric LABS portal. Subsequent to its release, this will be available in the commercial Azure AppFabric portal.

Typical Usage

Clearly the cache is best suited for keeping data closer to your application, whether that data is stored in SQL Azure, Windows Azure Storage, the result of a call to another service, or a combination of all of them. Generally, this approach is called the cache-aside model: requests for data made by your application first check the cache and only query the actual data source, subsequently adding the result to the cache, if it’s not present (a minimal sketch follows the list below). Typically we see the cache used in the following scenarios:

  • A scalable session store for your web applications running on ASP.NET.
  • Output cache for your web application.
  • Objects created from resultsets from SQL or costly web service calls, stored in cache and called from your web or worker role.
  • Scratch pad in the cloud for your applications to use.
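Here is a minimal cache-aside sketch, assuming the DataCache client API; the Product type and the LoadProductFromSqlAzure helper are hypothetical stand-ins for your entity and data-access code:

```csharp
using Microsoft.ApplicationServer.Caching;

class CacheAsideSketch
{
    public Product GetProduct(DataCache cache, int productId)
    {
        string key = "product:" + productId;

        Product product = (Product)cache.Get(key);          // 1. try the cache first
        if (product == null)
        {
            product = LoadProductFromSqlAzure(productId);   // 2. miss: query the data source
            cache.Put(key, product);                        // 3. populate for subsequent readers
        }
        return product;
    }

    // Hypothetical data-access helper; in reality this would query SQL Azure.
    Product LoadProductFromSqlAzure(int id) { return new Product(); }
}

class Product { }
```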

Using VM local storage & Azure Drives

When you create an instance in Windows Azure, you are creating an instance of a Virtual Machine (VM). The VM, just like its on-premises counterpart, can make use of locally attached drives for storage, or of network shares. Windows Azure Drives are similar to, but not exactly the same as, network shares. They are NTFS-formatted drives, stored as page blobs in Windows Azure Storage, and accessed from your VM instance by drive letter. These drives can be mounted exclusively to a single VM instance as read/write, or to multiple VM instances as read-only. When such a drive is mounted, the data it reads is cached in local VM storage, which enhances the read performance of subsequent accesses to the same data.

Securing Access

An Azure Cloud Drive is really just a façade on top of a page blob, so access to the drive effectively amounts to having access to the blob store via the Windows Azure Storage account credentials (e.g., account name and account key).

Management

As with blobs, there is limited tooling for managing Azure drives outside of the StorageClient APIs. In fact, there is no portal for creating Azure Cloud Drives; however, there are samples on CodePlex that help you create and manage them.

The Windows Azure MMC Snap-In is available and can be downloaded from here – and the graphic below provides you a quick peek into the familiar look/feel of the MMC.

Figure 7: Windows Azure MMC Snap-In (Source – MSDN)

Typical Usage

Cloud drives have a couple of unique attributes that make them interesting choices in a hybrid architecture. To begin with, if you need drive-letter-based access to your data from your Windows Azure application and you want it to survive VM instance failure, a cloud drive is your best option. Beyond that, you can mount a cloud drive to instances as read-only, which enables sharing large reference files across multiple Windows Azure roles. What’s more, you can create a VHD (for example, using Windows 7’s Disk Management MMC) and load this VHD into blob storage for use by your VM instances, effectively cloning an on-premises drive and extending its use into the cloud.

Hybrid Architectural Pattern: Exposing On-premises Services To Azure Hosted Services

Hybrid is very different: unlike a ‘pure’ cloud solution, a hybrid solution keeps significant business processes, data stores, and services as on-premises applications, often due to compliance or deployment restrictions. A hybrid solution is one in which some applications are deployed in the cloud while other applications remain deployed on-premises.

For the purposes of this article, and as illustrated in the diagram, we will focus on the scenario where your on-premises applications are services hosted in Windows Server AppFabric that communicate with the other portions of your hybrid solution running in the cloud.

Figure 8: Typical Hybrid Cloud Solution

Traditionally, to connect your on-premises applications to the off-premises applications (cloud or otherwise), you would enable this scenario by “poking” a hole in your firewall and configuring NAT routing so that Internet clients can talk to your services directly. This approach has numerous issues and limitations, not the least of which is the management overhead, security concerns, and configuration challenges.

Connectivity

So the big question here is:  How do I get my on-premises services to talk to my Azure hosted services?

There are two approaches you can take: You can use the Azure AppFabric Service Bus, or you can use Windows Azure Connect.

Using Azure AppFabric Service Bus

If your on-premises solution includes WCF services, WCF Workflow Services, SOAP services, or REST services that communicate via HTTP(S) or TCP, you can use the Service Bus to create an externally accessible endpoint in the cloud through which your services can be reached. Clients of your solution, whether they are other Windows Azure hosted services or Internet clients, simply communicate with that endpoint, and the AppFabric Service Bus takes care of relaying traffic securely to your service and returning replies to the client.

The graphic below (from http://www.microsoft.com/en-us/appfabric/azure/middleware-services.aspx#ServiceBus) demonstrates the Service Bus functionality.

Figure 9: Windows Azure AppFabric Service Bus (Source – Microsoft.com)

The key value proposition of leveraging the Service Bus is that it is designed to communicate transparently across firewalls, NAT gateways, and other challenging network boundaries that exist between the client and the on-premises service. You get the following additional benefits (a hosting sketch follows the list):

  • The actual endpoint address of your services is never made available to clients.
  • You can move your services around, because clients are bound only to the Service Bus endpoint address, which is a virtual rather than a physical address.
  • If both the client and the service happen to be on the same LAN and could therefore communicate directly, the Service Bus can set them up with a direct link that removes the hop out to the cloud and back and thereby improves throughput and latency.
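To make the relay concrete, here is a hedged sketch of hosting an on-premises WCF service on a Service Bus endpoint; the namespace, issuer credentials, and the echo contract/service types are placeholders, and the call shapes reflect the 2010-era Microsoft.ServiceBus SDK:

```csharp
using System;
using System.ServiceModel;
using Microsoft.ServiceBus;

[ServiceContract]
interface IEchoContract
{
    [OperationContract]
    string Echo(string text);
}

class EchoService : IEchoContract
{
    public string Echo(string text) { return text; }
}

class RelaySketch
{
    static void Main()
    {
        ServiceHost host = new ServiceHost(typeof(EchoService));

        // Shared secret credentials issued via the Access Control Service (placeholders).
        TransportClientEndpointBehavior credentials = new TransportClientEndpointBehavior();
        credentials.CredentialType = TransportClientCredentialType.SharedSecret;
        credentials.Credentials.SharedSecret.IssuerName = "owner";
        credentials.Credentials.SharedSecret.IssuerSecret = "...";

        // Clients bind to this virtual address, never to the on-premises machine.
        Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "mynamespace", "echo");

        host.AddServiceEndpoint(typeof(IEchoContract), new NetTcpRelayBinding(), address)
            .Behaviors.Add(credentials);

        host.Open();   // the on-premises service is now reachable through the relay
        Console.ReadLine();
        host.Close();
    }
}
```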

Securing Access to Service Bus Endpoints

Access to the Service Bus is controlled via the Access Control Service. Applications that use the Windows Azure AppFabric Service Bus are required to perform security tasks for configuration/registration or for invoking service functionality. Security tasks include both authentication and authorization using tokens from the Windows Azure AppFabric Access Control service. When permission to interact with the service has been granted by the AppFabric Service Bus, the service has its own security considerations that are associated with the authentication, authorization, encryption, and signatures required by the message exchange itself. This second set of security issues has nothing to do with the functionality of the AppFabric Service Bus; it is purely a consideration of the service and its clients.

There are four kinds of authentication currently available to secure access to Service Bus:

  • SharedSecret, a slightly more complex but easy-to-use form of username/password authentication.
  • SAML, which can be used to interact with SAML 2.0 authentication systems.
  • SimpleWebToken, uses the OAuth Web Resource Authorization Protocol (WRAP) and Simple Web Tokens (SWT).
  • Unauthenticated, enables interaction with the service endpoint without any authentication behavior.

Selection of the authentication mode is generally dictated by the application connecting to the Service Bus. You can read more on this topic here.

Cost considerations

Service Bus usage is charged by the number of concurrent connections to the Service Bus endpoint. When an on-premises or cloud-hosted service registers with the Service Bus and opens its listening endpoint, that counts as one connection; when a client subsequently connects to that Service Bus endpoint, it counts as a second connection. One very important point falls out of this that may affect your architecture: you will want to minimize concurrent connections to the Service Bus in order to keep subscription costs down. You are therefore more likely to use the Service Bus in the middle tier, more like a VPN to on-premises services, rather than allowing unlimited clients to connect through the Service Bus to your on-premises service.

To reiterate, the key value proposition of the Service Bus is Service Orientation; it makes it possible to expose application Services using interoperable protocols with value-added virtualization, discoverability, and security.

Using Windows Azure Connect

Recently announced at PDC 10, and expected for release by the end of 2010, is an alternative means of connecting your cloud services to your on-premises services. Windows Azure Connect effectively offers IP-level, secure, VPN-like connections from your Windows Azure hosted roles to your on-premises services. The service is not yet available, and pricing details have not yet been released. From the information available to date, you can conclude that Windows Azure Connect would be used for connecting middle-tier services, rather than public clients, to your on-premises solutions. While the Service Bus is focused on exposing on-premises services as Azure endpoints without having to deal with firewall and NAT setup, Windows Azure Connect provides broad connectivity between your Web/Worker roles and on-premises systems like SQL Server, Active Directory, or LOB applications.

Securing the solution

Given the distributed nature of a hybrid (on-premises and cloud) solution, your approach to security should match it: your architecture should leverage federated identity. This essentially means that you are outsourcing authentication and possibly authorization.

If you want to flow your authenticated on-premises identities, such as domain credentials, into Azure hosted web sites or services, you will need an on-premises Identity Provider Security Token Service (IP-STS), such as Active Directory Federation Services 2.0 (ADFS 2.0), that can issue claims for a local identity. Your services, whether on-premises or in the cloud, can then be configured to trust credentials, in the form of claims, issued by the IP-STS. Think of the IP-STS as simply the component that can tell whether username and password credentials are valid. In this approach, clients authenticate against the IP-STS (for example, by sending their Windows credentials to ADFS) and, if valid, they receive claims they can subsequently present to your websites or services for access. Your websites or services only have to evaluate these claims when executing authorization logic; Windows Identity Foundation (WIF) provides these facilities.

For additional information, review this session – SIA305 Windows Identity Foundation and Windows Azure for Developers. In this session you can learn how Windows Identity Foundation can be used to secure your Web Roles hosted in Windows Azure, how you can take advantage of existing on-premises identities, and how to make the most of the features such as certificate management and staged environments.

In the next release (ACS v2), a similar approach can be taken by using the Windows Azure AppFabric Access Control Service (ACS) as your IP-STS, thereby replacing ADFS. With ACS, you can define service identities, which are essentially logins secured by either a username and password or a certificate. The client calls out to ACS first to authenticate, and thereafter presents the claims received from ACS in the call to your service or website.

Finally, more advanced scenarios can be put into place that use ACS as your Relying-Party Security Token Service (RP-STS), the authority for all your apps, both on-premises and in the cloud. ACS is then configured to trust identities issued by other identity providers and to convert them to the claims expected by your applications. For example, you can take this approach to enable clients to authenticate using Windows Live ID, Google, and Yahoo, while still working with the same set of claims you built prior to supporting those services.

Port Bridge, a point-to-point tunneling solution, is perfect when the application is not exposed as a WCF service or doesn’t speak HTTP. Port Bridge suits protocols such as SMTP, SNMP, POP, IMAP, RDP, TDS, and SSH. You can read more about it here.

Optimizing Bandwidth – Data transfer Costs

One huge issue to reconcile in hybrid solutions is bandwidth – this concern is unique to hybrid.

The amount of data transferred in and out of a datacenter is billable, so an approach to optimizing your bandwidth cost is to keep as much traffic from leaving the datacenter as possible. This translates into architecting designs that transfer most data across the Azure “fabric” within affinity groups and minimize traffic between disparate data centers and externally. An affinity group ensures that your cloud services and storage are hosted together on the Windows Azure infrastructure. Windows Azure roles, Windows Azure storage, and SQL Azure services can be configured at creation time to live within a specific datacenter. By ensuring the Azure hosted components within your hybrid solution that communicate with each other live in the same data center, you effectively eliminate the data transfer costs.  

The second approach to optimizing bandwidth costs is to use caching. This means leveraging Windows Server AppFabric Cache on premise to minimize calls to SQL Azure or Azure Storage. Similarly, it also means utilizing the Azure AppFabric Cache from Azure roles to minimize calls to SQL Azure, Azure Storage, or to on-premises services via Service Bus.

Optimizing Resources

One of the most significant cost-optimization features of Windows Azure roles is their support for dynamic scale-up and scale-down: adding more instances as your load increases and removing them as the load decreases. Load-based dynamic scaling can be accomplished using the Windows Azure management API and SQL Azure storage. While there is no out-of-the-box support for this, there is a fair amount of guidance on how to implement it within your solutions (see the Additional References section for one such example). The typical approach is to configure your Azure Web and Worker roles to log health metrics (e.g., CPU load, memory usage, etc.) to a table in SQL Azure. Another Worker role is then responsible for periodically polling this table and, when certain thresholds are reached or exceeded, increasing or decreasing the instance count for the monitored service using the Windows Azure management APIs.
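The loop below is a conceptual sketch of that polling approach only: MetricsStore and ManagementClient are hypothetical stand-ins (stubbed here so the sketch compiles), since the real implementation would read the health-metrics table in SQL Azure and call the Windows Azure Service Management API to change the instance count.

```csharp
using System;
using System.Threading;

class AutoScaleSketch
{
    static void Main()
    {
        while (true)
        {
            double avgCpu = MetricsStore.AverageCpuPercent("MyWebRole", TimeSpan.FromMinutes(5));
            int current = ManagementClient.GetInstanceCount("MyWebRole");

            if (avgCpu > 75)
                ManagementClient.SetInstanceCount("MyWebRole", current + 1);  // scale out
            else if (avgCpu < 25 && current > 2)
                ManagementClient.SetInstanceCount("MyWebRole", current - 1);  // scale in

            Thread.Sleep(TimeSpan.FromMinutes(5));   // poll on a fixed interval
        }
    }
}

// Hypothetical stubs: the real versions would query the health-metrics table in
// SQL Azure and call the Windows Azure Service Management API, respectively.
static class MetricsStore
{
    public static double AverageCpuPercent(string role, TimeSpan window) { return 50; }
}

static class ManagementClient
{
    public static int GetInstanceCount(string role) { return 2; }
    public static void SetInstanceCount(string role, int count) { }
}
```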

Monitoring & Diagnostics

For on-premises services, you can use Windows Server AppFabric Monitoring which logs tracking events into your on-premises instance of SQL Server, and this data can be viewed both through IIS Manager and by querying the monitoring database.

For Azure roles in production you will leverage Windows Azure Diagnostics, wherein your instances periodically transfer batches of diagnostic data to Azure storage. For SQL Azure, you can examine query performance by querying the related Dynamic Management Views.

IntelliTrace is very useful for debugging solutions. With IntelliTrace debugging, you can log extensive debugging information for a Role instance while it is running in Windows Azure. If you need to track down a problem, you can then use the IntelliTrace logs to step through your code from within Visual Studio as though it were running in Windows Azure. In effect, IntelliTrace records key code execution and environment data while your service is running, and allows you to replay the recorded data from within Visual Studio. For more information on IntelliTrace, see Debugging with IntelliTrace.

Summary

Based on the discussions above, let’s consolidate our findings into this easy-to-read table.

Table 3: In Summary

Each entry below lists a typical scenario with its architectural pattern, followed by the application host, data store(s), access control, and management options.

EAI and Process Control – On-Premises
  • Application Host: IIS, Windows Server, BizTalk Server
  • Data Store(s): AppFabric Cache, SQL Server
  • Access Control: Active Directory, SQL Login
  • Management: SQL Server Management Studio, IIS Manager

Web content delivery – Cloud
  • Application Host: Windows Azure – Web or Worker Roles
  • Data Store(s): SQL Azure, Azure Storage, AppFabric Cache
  • Access Control: Account name/key pair (limited IP address access), SQL Login, Access Control Services
  • Management: SQL Server Management Studio, Database Manager, Windows Azure Portal, Management API

Multi-location devices / Appliance Management – Hybrid
  • Application Host: IIS, Windows Server, BizTalk Server, Windows Azure – Web or Worker Roles, Service Bus, Connect
  • Data Store(s): AppFabric Cache, SQL Server, SQL Azure, Azure Storage
  • Access Control: ADFS 2.0, Access Control Services
  • Management: SQL Server Management Studio, Database Manager, Windows Azure Portal, IIS Manager

It’s amply clear that the hybrid cloud architectural pattern is ideally suited for connecting on-premises appliances and applications with their cloud counterparts. This hybrid pattern is ideal because it leverages the best of both worlds: traditional on-premises applications and new distributed/multi-tenant cloud applications.

Let’s Make Plans To Build This Out Now!

In the sections above we presented a survey of the technologies available to build out hybrid applications; recent announcements at PDC 10 are worth considering before you move forward.

First, you should consider the Composition Model within the Windows Azure Composite Applications Environment. The Composition Model ties together all your Azure hosted services (roles, storage, SQL Azure, etc.) as one composite application, enabling you to see the end-to-end picture in a single designer in Visual Studio. The Composition Model is a set of .NET Framework extensions; it builds on the familiar Azure Service Model concepts and adds new capabilities for describing the application infrastructure, its components, and the relationships among the components, in order to treat the composite application as a single logical identity.

Second, if you were making calls back to an on-premises instance of Windows Server AppFabric, note that Windows Workflow is fully supported on Azure as part of the Composition Model, so you may choose to move the relevant workflows directly into the cloud. This is huge, very huge, since it enables develop-once, deploy-as-needed!

Third, there is a new role in town: the Windows Azure Compute VM Role. This enables third-party and legacy line-of-business applications to fully participate (wow – without a major code do-over!) in the hybrid cloud solution.

And last but not least, Windows Azure now provides Remote Desktop functionality, which enables you to connect to a running instance of your application or service to monitor activity and troubleshoot common problems. Remote Desktop access, while super critical for the VM Role, is also available to Web and Worker roles. This in turn enables another related new feature: the availability of full IIS instead of just the core that was previously available to web roles.

Additional References

While web links for additional/background information are embedded into the text; the following additional references are provided as resources to move forward on your journey.

  • Windows Azure, SQL Azure & Azure AppFabric
  • SQL Azure Labs
  • Windows Azure AppFabric Service Bus
  • Windows Azure AppFabric Access Control Service
  • Windows Identity Foundation and Windows Azure for Developers
  • Windows Azure Connect talk at PDC 10
  • MSDN Magazine Article on Health Based Scaling
  • Azure AppFabric Cache Talk at PDC 10
  • Identity & Access Control in the Cloud at PDC 10

Stay Tuned!

This is the beginning; our team is currently working on the next blog that builds on this topic. The blog will take a scenario driven approach in building a hybrid Windows Azure Application.

Acknowledgements

Significant contributions from Paolo Salvatori, Valery Mizonov, Keith Bauer and Rama Ramani are acknowledged.

Namaste!

 The PDF Version of this document is attached.

BizTalk Server-distinguished field

A distinguished field is used to expose the value of a node of a message instance to the BizTalk Server system. However, if a record has a parent node that can occur multiple times, that record cannot be marked as distinguished; an error message will appear: “The node can occur potentially multiple times in the instance. Only nodes which are guaranteed to be unique can be promoted.”

This constraint was purposely designed to remove any ambiguity in accessing the data. Should the developer still need the node’s value, an XPath query against the record can be used instead. The article Using XPaths in Message Assignments describes how XPath queries can be leveraged in BizTalk Server.
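As a hedged illustration (the schema, node, and message names below are hypothetical), an orchestration Message Assignment shape can read a node from a repeating record with the xpath function, indexing the desired instance explicitly:

```csharp
// Message Assignment shape: read Total from the first Order record,
// even though Order repeats and therefore cannot be distinguished.
myTotal = xpath(myMessage,
    "string(/*[local-name()='Orders']/*[local-name()='Order'][1]/*[local-name()='Total'])");
```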

Technorati Tags: BizTalk Server,Distinguished,fields,XPath,queries

Exploring the new Azure property pages in Visual Studio

Great news for the Azure community.  The Windows Azure SDK and Windows Azure Tools for Microsoft Visual Studio (November 2010) have been released!  This release contains a lot of new features and most of them were announced at PDC.

The release can be downloaded here.

The overview of new features:

  • Virtual Machine (VM) Role (Beta): Allows you to create a custom VHD image using Windows Server 2008 R2 and host it in the cloud.
  • Remote Desktop Access: Enables connecting to individual service instances using a Remote Desktop client.
  • Full IIS Support in a Web role: Enables hosting Windows Azure web roles in an IIS hosting environment.
  • Elevated Privileges: Enables performing tasks with elevated privileges within a service instance.
  • Virtual Network (CTP): Enables support for Windows Azure Connect, which provides IP-level connectivity between on-premises and Windows Azure resources.
  • Diagnostics: Enhancements to Windows Azure Diagnostics enable collection of diagnostic data in more error conditions.
  • Networking Enhancements: Enables roles to restrict inter-role traffic and supports fixed ports on InputEndpoints.
  • Performance Improvement: Significant performance improvement for local machine deployment.

This blog shows a first highlight of the new features that are part of the new SDK.

Extra small instance size

One of the nice features that many people have been asking for is the introduction of a new, lightweight VM size: the Extra Small instance. It comes with a small amount of memory (768 MB) and a low CPU speed (1 GHz), but at a very low cost ($0.05 per compute hour). Configuring this is very easy, through the Role Configuration tab.

Remote desktop features

It is now possible to connect to an Azure instance using a Remote Desktop Connection. The publishing wizard has been updated to enable this easily. When deploying an application, you can provide a user name and password for a remote desktop user, together with a certificate that has to be uploaded to the Azure portal after exporting it with the private key. (Export the certificate – browse to the portal – your project – Installed Certificates – Manage – upload the certificate)

Public ports in endpoints

In previous SDKs, it was unknown until runtime on what actual port the role endpoints were hosted. This can now be fixed and configured in the Endpoints tab page of an Azure role:

Using Azure Connect

The Azure Connect feature is a very interesting one that allows you to set up a virtual network between Azure roles and the local network. The property page of the Azure role allows you to configure the specific token that is needed for the Azure Connect feature.

Sam Vanhoutte, Codit

Windows Azure AppFabric Caching Service – soup to nuts primer

Introduction

One of the important reasons to move an application to the cloud is scalability (alongside the other benefits that the cloud provides). When the application is deployed to the cloud, it is critical to maintain performance: the system needs to handle the increase in load while maintaining low-latency response times. This is needed for most applications today, be it a website selling books, a large social networking website, or a complex map-reduce algorithm. Distributed caching as a technology enables applications to scale elastically while maintaining application performance. Previously, Windows Server AppFabric enabled you to use distributed caching on-premises. Now, distributed caching is available in the cloud for Azure applications!

Here are some scenarios to leverage the Windows Azure AppFabric Caching Service:

  • ASP.NET application running as a web role needing a scalable session repository
  • Windows Azure hosted application frequently accessing reference data
  • Windows Azure hosted application aggregating objects from various services and data stores

Windows Azure AppFabric Caching is Microsoft’s distributed caching platform-as-a-service that can be easily configured and leveraged by your applications. It provides elastic scale, agile application development, and a familiar programming model; best of all, it is managed and maintained by Microsoft.

At this time, this service is in labs release and can be used for development & test purposes only.

Configuring the Cache endpoint

  1. Go to https://portal.appfabriclabs.com/ and login using your Live ID
  2. Create a Project and then add a unique Service Namespace to it, for instance ‘TestAzureCache1’.

  3. When you click on the Cache endpoint, you should see the service URL and the authentication access token as follows:

At this point, you are done setting up the cache endpoint. Comparing this to installing and configuring the cache cluster on-premises shows the agility that cloud platforms truly provide.

Developing an application

  1. Download the Windows Azure AppFabric SDK to get started.
  2. Copy the app.config or web.config from the portal shown above and use it from your client application

  3. In your solution, add references to Microsoft.ApplicationServer.Caching.Client and Microsoft.ApplicationServer.Caching.Core from C:\Program Files\Windows Azure AppFabric SDK\V2.0\Assemblies\Cache
  4. Add the namespace to the beginning of your source code and develop the application in a similar manner to an on-premises application:

    using Microsoft.ApplicationServer.Caching;

* At this point, the Azure AppFabric caching service does not have all the features that Windows Server AppFabric Cache supports. For more information, see http://msdn.microsoft.com/en-us/library/gg278356.aspx.

  5. If you are developing the application via code instead of config, here is the code for accessing the caching service. Only the highlighted statements need to be modified to match your configured endpoint.

private void PrepareClient()
{
    // Insert the Service URL from the labs portal
    string hostName = "TestAzureCache1.cache.appfabriclabs.com";
    int cachePort = 22233;

    List<DataCacheServerEndpoint> server = new List<DataCacheServerEndpoint>();
    server.Add(new DataCacheServerEndpoint(hostName, cachePort));

    DataCacheFactoryConfiguration config = new DataCacheFactoryConfiguration();

    // Insert the Authentication Token from the labs portal
    string authenticationToken = <To Be Filled Out>;
    config.SecurityProperties = new DataCacheSecurity(authenticationToken);

    config.Servers = server;
    config.IsRouting = false;
    config.RequestTimeout = new TimeSpan(0, 0, 45);
    config.ChannelOpenTimeout = new TimeSpan(0, 0, 45);
    config.MaxConnectionsToServer = 5;
    config.TransportProperties = new DataCacheTransportProperties() { MaxBufferSize = 100000 };
    config.LocalCacheProperties = new DataCacheLocalCacheProperties(10000, new TimeSpan(0, 5, 0), DataCacheLocalCacheInvalidationPolicy.TimeoutBased);

    DataCacheFactory myCacheFactory = new DataCacheFactory(config);
    myDefaultCache = myCacheFactory.GetCache("default");
}
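Once PrepareClient has run, usage is identical to the on-premises API; a quick smoke test (the key and value are illustrative) might be:

```csharp
myDefaultCache.Put("greeting", "Hello from the Azure cache");  // write to the cloud cache
string greeting = (string)myDefaultCache.Get("greeting");      // read back (may be served from local cache)
```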

Performance data

By now, you must be thinking: if this is indeed this easy, what is the catch? 🙂 🙂 There is none. Here are some performance numbers from my runs; hopefully this makes you feel at ease.

This is an ASP.NET application running as a web role in Windows Azure accessing the cache service. The ‘Object count’ parameter is used to instantiate an object that has an int[] array and a string[] array. For example, when Object count is set to 100, the application instantiates int[100] and string[100], with each array item initialized to a string of ~6 bytes. So, approximately, ‘Object count’ * 10 is the pre-serialized object size. When storing the object in the caching service, the KEY is generated using RAND; then, depending on the Operations drop-down, the workload is run for a number of iterations based on the ‘Iterations’ parameter. Finally, the average latency is computed. The web role runs in the South Central US data center, the same DC where the caching service is deployed, in order to avoid any external network latencies. For all these tests, the local cache feature is disabled.

Operation   Object Size (pre-serialized)   Avg Latency (ms)
PUT         ~1 KB                          4.5 – 6.5
GET         ~1 KB                          4.8 – 5.1
PUT         ~10 KB                         7.4 – 8.9
GET         ~10 KB                         7.6 – 10.3
PUT         ~100 KB                        38.9 – 45.6
GET         ~100 KB                        4.6 – 7.3

Now that you have some cache in the cloud, get your apps going!

Authored by: Rama Ramani
Reviewed by: Christian Martinez, Jason Roth