Abstract

Technical and commercial forces are causing Enterprise Architects to evaluate moving established on-premises applications into the cloud – the Microsoft Windows Azure Platform.

This blog post will demonstrate that there are established application architectural patterns that give you the best of both worlds: applications that continue to live on-premises while interacting with other applications that live in the cloud – the hybrid approach. In many cases, such hybrid architectures are not just a transition point but a requirement, since certain applications or data are required to remain on-premises, largely for security, legal, technical, and procedural reasons.

Cloud is new, and hybrid cloud is even newer. Many of the relevant technologies have only just been released or announced, so there is no single authoritative book or source of information, especially one that compares, contrasts, and ties it all together. This blog, and a few more that will follow, is an attempt to demystify and make sense of it all.

This blog begins with a brief review of two prevalent deployment paradigms and their influence on architectural patterns: on-premises and cloud. From there, we will delve into developing the hybrid architecture.

In authoring this blog post we take an architect's perspective and discuss the major building-block components that compose this hybrid architecture. We will also match requirements against the capabilities of available and announced Windows Azure and Windows Azure AppFabric technologies. Our discussion will also factor in usage costs and strategies for keeping those costs in check.

The blog concludes with a survey of interesting and relevant Windows Azure technologies announced at the Microsoft Professional Developers Conference (PDC) in October 2010.

On-Premises Architectural Pattern

On-premises solutions are usually designed so that client or browser applications access data repositories and services hosted inside the corporate network. In the graphic below, the Web Server and Database Server sit within the corporate firewall, and the user, via the browser, is authenticated and granted access to data.

Figure 1: Typical On-Premises Solution (Source – Microsoft.com)

The Patterns and Practices Application Architecture Guide 2.0 provides an exhaustive survey of on-premises applications. The guide is a great reference document: it helps you understand the underlying architecture and the design principles and patterns for developing successful solutions on the Microsoft application platform and the .NET Framework.

The Microsoft Application Platform is composed of products, infrastructure components, run-time services, and the .NET Framework, as detailed in the following table (source: .NET Application Architecture Guide, 2nd Edition).

Table 1: Microsoft Application Platform (Source – Microsoft.com)

The on-premises architectural patterns are well documented (above), so this blog post will not dive into prescriptive technology-selection guidance. However, let's briefly review some core concepts, since we will reference them while elaborating the hybrid cloud architectural patterns.

Hosting The Application

Commonly*, on-premises applications (with their business logic) run on IIS (as an IIS w3wp process) or as a Windows Service application. The recent release of Windows Server AppFabric makes it easier to build, scale, and manage Web and composite (WCF, SOAP, REST, and Workflow Services) applications that run on IIS.

* As indicated in Table 1 (above), Microsoft BizTalk Server is capable of hosting on-premises applications; however, for the sake of this blog, we will focus on Windows Server AppFabric.

Accessing On-Premises Data

Your on-premises applications may access data from the local file system or network shares. They may also utilize databases hosted in Microsoft SQL Server or other relational and non-relational data sources. In addition, your applications hosted in IIS may well be leveraging the Windows Server AppFabric Cache for session state, as well as other forms of reference or resource data.

Securing Access

Authenticated access to these data stores is traditionally performed by inspecting certificates, user name and password values, or NTLM/Kerberos credentials. These credentials are defined either in the data sources themselves (such as SQL logins), in local machine accounts, or in directory services (LDAP) such as Microsoft Windows Server Active Directory. They are generally verifiable within the same network, but typically not outside of it unless you are using Active Directory Federation Services (ADFS).

Management

Every system exposes a different set of APIs and a different administration console to change its configuration, which obviously adds to the complexity; e.g., Windows Server for configuring network shares, SQL Server Management Studio for managing your SQL Server databases, and IIS Manager for Windows Server AppFabric-based services.

Cloud Architectural Pattern

Cloud applications typically access local resources and services, but they can also interact with remote services running on-premises or in the cloud. Cloud applications are usually hosted by a provider-managed runtime environment that provides hosting services, computational services, storage services, queues, management services, and load balancers. In summary, cloud applications consist of two main components: those that execute application code and those that provide data used by the application. To quickly acquaint you with the cloud components, here is a contrast with popular on-premises technologies:

Table 2: Contrasting key On-Premises and Cloud Technologies

| Category | On-Premises | Cloud |
| --- | --- | --- |
| Application Hosting | Windows Server | Windows Azure |
| Message Queue | MSMQ | Azure Queue |
| Unstructured Data Store | Windows Folders, Windows Drives | Azure Blobs, Azure Tables, Azure Drive (XDrive) |
| Relational Data Store | SQL Server | SQL Azure |
| Application Cache | AppFabric Cache | Azure AppFabric Cache |

NOTE: This table is not intended to compare and contrast all Azure technologies. Some of them may not have an equivalent on-premises counterpart.

The graphic below presents an architectural solution for content delivery. While content creation and management are handled by on-premises applications, content storage (Azure Blob Storage) and delivery are handled by the cloud – the Azure platform infrastructure. In the following sections we will review the components that enable this architecture.

Figure 2: Typical Cloud Solution

Running Applications in Windows Azure

Windows Azure is the underlying operating system for running your cloud services on the Windows Azure platform. The three core services of Windows Azure, in brief, are as follows:

  1. Compute: The compute (hosting) service offers scalable hosting of services on a 64-bit Windows Server platform with Hyper-V support. The platform is virtualized and designed to scale dynamically based on demand. The Azure platform runs web roles on Internet Information Services (IIS) and worker roles as Windows Services.
  2. Storage: There are three types of storage in Windows Azure: Table services, Blob services, and Queue services. Table services provide storage for structured data, whereas Blob services are designed to store large unstructured files, such as videos, images, and batch files, in the cloud. Table services are not to be confused with SQL Azure; typically you store high-volume data in low-cost Azure Storage and use the (relatively) expensive SQL Azure to store indexes to that data. Finally, Queue services are the asynchronous communication channels for connecting services and applications, not only within Windows Azure but also from on-premises applications. Caching services, currently available via Windows Azure AppFabric LABS, are another strong storage option.
  3. Management: The management service provides automated infrastructure and service management capabilities to Windows Azure cloud services. These capabilities include automatic and transparent provisioning of virtual machines and deploying services in them, as well as configuring switches, access routers, and load balancers.

A detailed review of the above three core services of Windows Azure is available in the subsequent sections.

Application code execution on Windows Azure is facilitated by Web and Worker roles. Typically, you would host Websites (ASP.NET, MVC2, CGI, etc.) in a Web Role and host background or computational processes in a Worker role. The on-premises equivalent for a Worker role is a Windows Service.
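For orientation, here is a minimal sketch of a Worker role entry point, assuming the SDK's Microsoft.WindowsAzure.ServiceRuntime assembly; the body of the loop is a placeholder for whatever background work your solution performs:

```csharp
using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

// Minimal Worker role skeleton: Run() plays the part that a Windows
// Service's worker thread plays on-premises.
public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Initialize connections, diagnostics, etc., before Run() is called.
        return base.OnStart();
    }

    public override void Run()
    {
        while (true)
        {
            // Placeholder: poll a queue, run a computation, etc.
            Thread.Sleep(TimeSpan.FromSeconds(10));
        }
    }
}
```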

This architecture leaves a gray area: where should you host WCF services? The answer is – it depends! Let me elaborate. When building on-premises services, you can host WCF services in a Windows Service (e.g., in BizTalk, WCF receive locations can be hosted within an in-process host instance that runs as a Windows Service). In Windows Azure, you can host a WCF service (REST or SOAP) in either a Web role or a Worker role, but you will typically host these services in a Web role; the one exception is when your WCF service specifically needs to communicate via TCP/IP endpoints. Web roles are capable of exposing HTTP and HTTP(S) endpoints, while Worker roles add the ability to expose TCP endpoints.

Storing & Retrieving Cloud Hosted Data

Applications that require storing data in the cloud can leverage a variety of technologies, including SQL Azure, Windows Azure Storage, the Azure AppFabric Cache, Virtual Machine (VM) instance local storage, and Azure Drive (XDrive). Determining the right technology is a balancing exercise in managing trade-offs, costs, and capabilities. The following sections attempt to provide prescriptive guidance on the usage scenario each storage option is best suited for.

The key takeaway is: whenever possible, co-locate the data and the consuming application. Your data in the cloud, stored in SQL Azure or Windows Azure Storage, should be accessible to your applications via the 'local' Azure "fabric". This has positive performance implications and a beneficial cost model when both the application and the data live within the same datacenter. Co-location is enabled via the 'Region' you choose for the Microsoft Data Center.

SQL Azure

SQL Azure provides an RDBMS in the cloud and functions, for all intents and purposes, like your on-premises version of SQL Server. This means your applications can access data in SQL Azure using the Tabular Data Stream (TDS) protocol; in other words, your application uses the same data access technologies (e.g., ADO.NET, LINQ to SQL, Entity Framework) used by on-premises applications to access information on a local SQL Server. The only thing you need to change is the SQL client connection string, so that it points to the database hosted by SQL Azure. Of course, I am glossing over some details: the SQL Azure connection string contains other specific settings, such as encryption and an explicit username (in the very strict format username@servername) and password – I am sure you get the drift here.
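To make the point concrete, here is a minimal ADO.NET sketch; the server, database, login, and query below are hypothetical placeholders, and only the connection string differs from what you would use on-premises:

```csharp
using System.Data.SqlClient;

public class SqlAzureExample
{
    public static int CountCustomers()
    {
        // Hypothetical server/database/login values; note the
        // username@servername format and the Encrypt=True setting.
        const string connectionString =
            "Server=tcp:myserver.database.windows.net,1433;" +
            "Database=MyHybridAppDb;" +
            "User ID=myuser@myserver;" +
            "Password=myPassword;" +
            "Encrypt=True;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM Customers", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}
```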

SQL Azure Labs demonstrates an alternative mechanism for allowing your application to interact with SQL Azure via OData. OData provides the ability to perform CRUD operations using REST and can retrieve query results formatted as AtomPub or JSON.

Via .NET programming, you can interact at a higher level using a DataServiceContext and LINQ.
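As a sketch of what that looks like (the service URI, entity set, and Customer type below are hypothetical illustrations, not part of any shipped sample):

```csharp
using System;
using System.Data.Services.Client;
using System.Linq;

// Hypothetical client-side type matching an entity set exposed via OData.
public class Customer
{
    public int CustomerID { get; set; }
    public string CompanyName { get; set; }
}

public class ODataQueryExample
{
    public static void Run()
    {
        // Hypothetical SQL Azure Labs OData endpoint.
        var context = new DataServiceContext(
            new Uri("https://odata.sqlazurelabs.com/OData.svc/v0.1/myserver/mydatabase"));

        // LINQ over CreateQuery<T> is translated into an OData $filter on the wire.
        var customers = context.CreateQuery<Customer>("Customers")
                               .Where(c => c.CustomerID > 100)
                               .ToList();
    }
}
```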

Securing Access

When using TDS, access is secured using SQL accounts (i.e., your SQL login and password must be in the connection string); however, there is one additional layer of security you need to be aware of. In most on-premises architectures, your SQL Server database lives behind a firewall, or at most in a DMZ, and is rarely exposed. No matter where they are located, these repositories are not directly accessible to applications living outside the corporate boundaries; even for on-premises solutions, you can use IPSEC to restrict the (range of) machines that can access the SQL Server machine. Instead, access to these databases is typically mediated and controlled by web services (SOAP, WCF).

What happens when your database lives on Azure, in an environment effectively available to the entire big bad world? SQL Azure provides a firewall that you can configure to restrict access to a set of IP address ranges; you would configure it to include the addresses used by your on-premises applications, as well as Microsoft Services (the data center infrastructure that enables access from Azure hosted services, such as your Web roles). In the graphic below you will notice the 'Firewall Settings' tab wherein the IP address range is specified.

Figure 3: SQL Azure Firewall Settings

When using OData, access is secured by configuration. You can enable anonymous access, where the anonymous account maps to a SQL login created on your database, or you can map Access Control Service (ACS) identities to specific SQL logins, whereby ACS takes care of the authentication. In the OData approach, the firewall access rules are effectively circumvented, because only the OData service itself needs access to the SQL Azure-hosted database from within the datacenter environment.

Management

You manage your SQL Azure database the same way you would administer an on-premises SQL Server database: by using SQL Server Management Studio (SSMS). Alternatively, provided you allowed Microsoft Services in the SQL Azure firewall, you can use the Database Manager for SQL Azure (codename “Project Houston”), which provides an environment for performing many of the same tasks you would perform via SSMS.

The Database Manager for SQL Azure is a part of the Windows Azure platform developer portal refresh, and will provide a lightweight, web-based database management and querying capability for SQL Azure databases. This capability allows customers to have a streamlined experience within the web browser without having to download any tools.

Figure 4: Database Manager for SQL Azure (Source – PDC 10)

One of the primary technical differences lies in connectivity: the traditional SSMS management option requires port 1433, whereas “Project Houston” is web-based and leverages port 80. An awesome demo of the capabilities of the Database Manager is available here.

Typical Usage

SQL Azure is best suited for storing data that is relational, specifically where your applications expect the database server to perform the join computation and return only the processed results. SQL Azure is also a good choice for scenarios with high transaction throughput (view case studies here), since it has a flat-rate pricing structure based on the size of data stored; additionally, query evaluation is distributed across multiple nodes.

Windows Azure Storage (Blobs, Tables, Queues)

Windows Azure Storage provides storage services for data in the form of metadata-enriched blobs, dictionary-like tables, and simple persistent queues. You access data via HTTP or HTTP(S) by leveraging the StorageClient library, which provides three classes (CloudBlobClient, CloudTableClient, and CloudQueueClient, respectively) to access the storage services. For example, CloudTableClient provides a DataServices context enabling the use of LINQ for querying table data. All data stored by Windows Azure Storage can also be accessed via REST. The AppFabric CAT team plans to provide code samples via this blog site to demonstrate this functionality – stay tuned.
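For illustration, here is a minimal blob upload sketch using the StorageClient library (SDK 1.x era); the account credentials, container, and file names are placeholders:

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public class BlobExample
{
    public static void Upload()
    {
        // Hypothetical account credentials.
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");

        CloudBlobClient blobClient = account.CreateCloudBlobClient();
        CloudBlobContainer container = blobClient.GetContainerReference("videos");
        container.CreateIfNotExist(); // idempotent: creates only if missing

        CloudBlob blob = container.GetBlobReference("clip42.wmv");
        blob.UploadFile(@"C:\media\clip42.wmv");
    }
}
```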

Securing Access

Access to any of the Windows Azure Storage blobs, tables, or queues is secured through symmetric key authentication, whereby an account name and account key pair must be included with each request. In addition, access to Azure blobs can be secured via a mechanism known as a Shared Access Signature.
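For example, a time-limited, read-only Shared Access Signature for a container can be produced roughly like this (the one-hour window is illustrative, and the container is assumed to have been obtained as in the earlier blob sketch):

```csharp
using System;
using Microsoft.WindowsAzure.StorageClient;

public static class SasExample
{
    // Grants read access without sharing the account key.
    public static string CreateReadOnlySas(CloudBlobContainer container)
    {
        string sasToken = container.GetSharedAccessSignature(new SharedAccessPolicy
        {
            Permissions = SharedAccessPermissions.Read,
            SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1)
        });

        // Append sasToken to a blob URI to form a shareable URL.
        return sasToken;
    }
}
```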

Management

Currently there is no single tool for managing Windows Azure Storage, though there are quite a few samples as well as third-party tools. The graphic below is from Azure Storage Explorer, a tool available for download via CodePlex.

Figure 5: Azure Storage Explorer (Source – Codeplex)

In general, a Windows Azure storage account, which wraps access to all three forms of storage services, is created through the Windows Azure Portal but is effectively managed using the APIs.

Typical Usage

Windows Azure storage provides unique value propositions that make it fit within your architecture for diverse reasons.

Although Blob storage is charged both for the amount of storage used and for the number of storage transactions, the pricing model is designed to scale transparently to any size. This makes it best suited to storing files and larger objects (high-resolution videos, radiology scans, etc.) that can be cached or are otherwise not frequently accessed.

Table storage is billed the same way as Blob storage, but its unique approach to indexing at large scale makes it useful where you need to efficiently access data from datasets larger than SQL Azure's 50 GB per-database maximum, and where you don't expect to perform joins or are comfortable letting the client download a larger chunk of data and perform the joins and computations client-side.

Queues, due to their limited capacity of 8 KB per message, are best suited to storing pointers to work that needs to be done, in a manner that ensures ordered access. Often you will use queues to drive the work performed by your Worker roles, but they can also drive the work performed by your on-premises services. Bottom line: a Web role can effectively use queues to exchange control messages asynchronously with a Worker role running within the same application, or to exchange messages with on-premises applications.
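A minimal producer/consumer sketch with the StorageClient library follows; the queue name and message content are placeholders:

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public class QueueExample
{
    public static void ProduceAndConsume(CloudStorageAccount account)
    {
        CloudQueueClient queueClient = account.CreateCloudQueueClient();
        CloudQueue queue = queueClient.GetQueueReference("workitems");
        queue.CreateIfNotExist();

        // Producer (e.g., a Web role): enqueue a pointer to the work, not the payload.
        queue.AddMessage(new CloudQueueMessage("process-blob:videos/clip42.wmv"));

        // Consumer (e.g., a Worker role): dequeue, do the work, then delete.
        CloudQueueMessage message = queue.GetMessage();
        if (message != null)
        {
            // ... perform the work referenced by message.AsString ...
            queue.DeleteMessage(message);
        }
    }
}
```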

Azure AppFabric Cache

The Azure AppFabric Cache, currently in CTP/beta as of November 2010, should give you (the technology is pre-release, hence the caveat!) the same high-performance, in-memory, distributed cache available with Windows Server AppFabric, as a hosted service. Since this is an Azure AppFabric service, you are not responsible for managing the servers participating in the cache cluster. Your Windows Azure applications can access the cached data through the client libraries in the Windows Azure AppFabric SDK.

Securing Access

Access to a cache is secured by a combination of a generated authentication token and authorization rules (e.g., Read/Write versus Read-only), as defined in the Access Control Service.

Figure 6: Access Control for Cache

Management

At this time, creating a cache, as well as securing access to it, is performed via the Windows Azure AppFabric LABS portal. Subsequent to its release, this will be available in the commercial Azure AppFabric portal.

Typical Usage

Clearly, the cache is best suited for keeping data closer to your application, whether that data is stored in SQL Azure, Windows Azure Storage, the result of a call to another service, or a combination of all of them. Generally, this approach is called the cache-aside model: requests for data made by your application first check the cache, and only query the actual data source (subsequently adding the result to the cache) if the data is not present. Typically we are seeing the cache used in the following scenarios (a cache-aside sketch follows the list):

  • A scalable session store for your web applications running on ASP.NET.
  • Output cache for your web application.
  • Objects created from resultsets from SQL or costly web service calls, stored in cache and called from your web or worker role.
  • Scratch pad in the cloud for your applications to use.
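Here is a minimal cache-aside sketch against the AppFabric caching client (Microsoft.ApplicationServer.Caching); the Product type and LoadProductFromSqlAzure are hypothetical stand-ins for your own entity and data access code, and the cache endpoint is assumed to be configured in the application configuration file:

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

[Serializable] // cached objects must be serializable
public class Product
{
    public int ProductId { get; set; }
    public string Name { get; set; }
}

public class CacheAsideExample
{
    // Reads endpoint/token settings from the dataCacheClient config section.
    private static readonly DataCacheFactory Factory = new DataCacheFactory();

    public static Product GetProduct(int productId)
    {
        DataCache cache = Factory.GetDefaultCache();
        string key = "product:" + productId;

        // Check the cache first; fall back to the data source on a miss.
        var product = (Product)cache.Get(key);
        if (product == null)
        {
            product = LoadProductFromSqlAzure(productId); // hypothetical data access
            cache.Put(key, product);
        }
        return product;
    }

    private static Product LoadProductFromSqlAzure(int productId)
    {
        // Hypothetical: query SQL Azure as in the earlier ADO.NET sketch.
        return new Product { ProductId = productId, Name = "sample" };
    }
}
```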

Using VM local storage & Azure Drives

When you create an instance in Windows Azure, you are creating an instance of a Virtual Machine (VM). The VM, just like its on-premises counterpart, can make use of locally attached drives for storage, or of network shares. Windows Azure Drives are similar to, but not exactly the same as, network shares. A drive is an NTFS-formatted volume stored as a page blob in Windows Azure Storage and accessed from your VM instance by drive letter. These drives can be mounted exclusively by a single VM instance as read/write, or by multiple VM instances as read-only. When such a drive is mounted, data reads are cached in local VM storage, which enhances the read performance of subsequent accesses to the same data.
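A sketch of the create-and-mount sequence follows, assuming the SDK's CloudDrive API; the page blob path and the "InstanceDriveCache" LocalStorage resource (which would be declared in the service definition) are hypothetical names:

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class CloudDriveExample
{
    public static string MountDataDrive(CloudStorageAccount account)
    {
        // Reserve local VM storage for the drive's read cache.
        // "InstanceDriveCache" is a hypothetical LocalStorage resource name.
        LocalResource localCache = RoleEnvironment.GetLocalResource("InstanceDriveCache");
        CloudDrive.InitializeCache(localCache.RootPath, localCache.MaximumSizeInMegabytes);

        // The drive is just a page blob; create it once, then mount by drive letter.
        CloudDrive drive = account.CreateCloudDrive("drives/data.vhd");
        try
        {
            drive.Create(1024); // size in MB; throws if the drive already exists
        }
        catch (CloudDriveException)
        {
            // Already created on a previous run.
        }

        // Returns the path (e.g., "x:\") where the drive is now available.
        return drive.Mount(localCache.MaximumSizeInMegabytes, DriveMountOptions.None);
    }
}
```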

Securing Access

An Azure cloud drive is really just a façade on top of a page blob, so access to the drive effectively amounts to having access to the blob store via the Windows Azure storage account credentials (i.e., account name and account key).

Management

As with blobs, there is limited tooling for managing Azure drives outside of the StorageClient APIs. In fact, there is no portal for creating Azure cloud drives. However, there are samples on CodePlex that help you with creating and managing them.

The Windows Azure MMC Snap-In can be downloaded from here; the graphic below provides a quick peek at the familiar look and feel of the MMC.

Figure 7: Windows Azure MMC Snap-In (Source – MSDN)

Typical Usage

Cloud drives have a couple of unique attributes that make them interesting choices in a hybrid architecture. To begin with, if you need drive-letter-based access to your data from your Windows Azure application and you want the data to survive VM instance failure, a cloud drive is your best option. Beyond that, you can mount a cloud drive to multiple instances as read-only, which enables sharing of large reference files across multiple Windows Azure roles. What's more, you can create a VHD (for example, using Windows 7's Disk Management MMC) and load it into blob storage for use by your VM instances, effectively cloning an on-premises drive and extending its use into the cloud.

Hybrid Architectural Pattern: Exposing On-premises Services To Azure Hosted Services

Hybrid is very different. Unlike a 'pure' cloud solution, a hybrid solution keeps significant business processes, data stores, and services on-premises, possibly due to compliance or deployment restrictions. A hybrid solution is one in which some applications of the solution are deployed in the cloud while other applications remain deployed on-premises.

For the purposes of this article, and as illustrated in the diagram, we will focus on the scenario where your on-premises applications are services hosted in Windows Server AppFabric that communicate with the other portions of your hybrid solution running in the cloud.

Figure 8: Typical Hybrid Cloud Solution

Traditionally, to connect your on-premises applications to off-premises applications (cloud or otherwise), you would "poke" a hole in your firewall and configure NAT routing so that Internet clients can talk to your services directly. This approach has numerous issues and limitations, not least of which are the management overhead, security concerns, and configuration challenges.

Connectivity

So the big question here is:  How do I get my on-premises services to talk to my Azure hosted services?

There are two approaches you can take: You can use the Azure AppFabric Service Bus, or you can use Windows Azure Connect.

Using Azure AppFabric Service Bus

If your on-premises solution includes WCF services, WCF Workflow services, SOAP services, or REST services that communicate via HTTP(S) or TCP, you can use the Service Bus to create an externally accessible endpoint in the cloud through which your services can be reached. Clients of your solution, whether they are other Windows Azure hosted services or Internet clients, simply communicate with that endpoint, and the AppFabric Service Bus takes care of relaying traffic securely to your service and returning replies to the client (a hosting sketch follows the list of benefits below).

The graphic below (from http://www.microsoft.com/en-us/appfabric/azure/middleware-services.aspx#ServiceBus) demonstrates the Service Bus functionality.

Figure 9: Windows Azure AppFabric Service Bus (Source – Microsoft.com)

The key value proposition of leveraging the Service Bus is that it is designed to transparently communicate across firewalls, NAT gateways, or other challenging network boundaries that exist between the client and the on-premises service. You get the following additional benefits:

  • The actual endpoint address of your services is never made available to clients.
  • You can move your services around because clients are bound only to the Service Bus endpoint address, which is a virtual rather than a physical address.
  • If both the client and the service happen to be on the same LAN and could therefore communicate directly, the Service Bus can set them up with a direct link that removes the hop out to the cloud and back and thereby improves throughput and latency.
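To ground this, here is a sketch of how an on-premises WCF service might register a relay endpoint using the AppFabric SDK's NetTcpRelayBinding (v1.0-era API); the service namespace, issuer credentials, and the IOrderService/OrderService contract and implementation are all hypothetical:

```csharp
using System;
using System.ServiceModel;
using Microsoft.ServiceBus;

// Hypothetical WCF contract and implementation for an on-premises service.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string GetOrderStatus(int orderId);
}

public class OrderService : IOrderService
{
    public string GetOrderStatus(int orderId) { return "Shipped"; }
}

public class RelayHostExample
{
    public static void Main()
    {
        // Hypothetical service namespace and issuer credentials from the portal.
        Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "mynamespace", "OrderService");

        var relayCredentials = new TransportClientEndpointBehavior
        {
            CredentialType = TransportClientCredentialType.SharedSecret
        };
        relayCredentials.Credentials.SharedSecret.IssuerName = "owner";
        relayCredentials.Credentials.SharedSecret.IssuerSecret = "<issuer-key>";

        var host = new ServiceHost(typeof(OrderService));
        var endpoint = host.AddServiceEndpoint(
            typeof(IOrderService), new NetTcpRelayBinding(), address);
        endpoint.Behaviors.Add(relayCredentials);

        host.Open(); // the on-premises service is now reachable via the cloud relay
        Console.WriteLine("Listening on {0}; press Enter to exit.", address);
        Console.ReadLine();
        host.Close();
    }
}
```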

Securing Access to Service Bus Endpoints

Access to the Service Bus is controlled via the Access Control Service. Applications that use the Windows Azure AppFabric Service Bus are required to perform security tasks for configuration/registration or for invoking service functionality. Security tasks include both authentication and authorization using tokens from the Windows Azure AppFabric Access Control service. When permission to interact with the service has been granted by the AppFabric Service Bus, the service has its own security considerations that are associated with the authentication, authorization, encryption, and signatures required by the message exchange itself. This second set of security issues has nothing to do with the functionality of the AppFabric Service Bus; it is purely a consideration of the service and its clients.

There are four kinds of authentication currently available to secure access to Service Bus:

  • SharedSecret, a slightly more complex but easy-to-use form of username/password authentication.
  • SAML, which can be used to interact with SAML 2.0 authentication systems.
  • SimpleWebToken, which uses the OAuth Web Resource Authorization Protocol (WRAP) and Simple Web Tokens (SWT).
  • Unauthenticated, which enables interaction with the service endpoint without any authentication behavior.

Selection of the authentication mode is generally dictated by the application connecting to the Service Bus. You can read more on this topic here.

Cost considerations

Service Bus usage is charged by the number of concurrent connections to the Service Bus endpoint. When an on-premises or cloud-hosted service registers with the Service Bus and opens its listening endpoint, that counts as one connection; when a client subsequently connects to that endpoint, it counts as a second connection. One very important point falls out of this that may affect your architecture: you will want to minimize concurrent connections to the Service Bus in order to keep subscription costs down. You will more likely want to use the Service Bus in the middle tier, more like a VPN to on-premises services, rather than allowing unlimited clients to connect through the Service Bus to your on-premises service.

To reiterate, the key value proposition of the Service Bus is Service Orientation; it makes it possible to expose application Services using interoperable protocols with value-added virtualization, discoverability, and security.

Using Windows Azure Connect

Recently announced at PDC 10, and expected for release by the end of 2010, Windows Azure Connect is an alternative means of connecting your cloud services to your on-premises services. Windows Azure Connect effectively offers IP-level, secure, VPN-like connections from your Windows Azure hosted roles to your on-premises services. The service is not yet available, and of course pricing details have not yet been released. From the information available to date, you can conclude that Windows Azure Connect would be used for connecting middle-tier services, rather than public clients, to your on-premises solutions. While the Service Bus is focused on exposing on-premises services as Azure endpoints without having to deal with firewall and NAT setup, Windows Azure Connect provides broad connectivity between your Web/Worker roles and on-premises systems such as SQL Server, Active Directory, or LOB applications.

Securing the solution

Given the distributed nature of a hybrid (on-premises and cloud) solution, your approach to security should match it: your architecture should leverage federated identity. This essentially means that you are outsourcing authentication, and possibly authorization.

If you want to flow your authenticated on-premises identities, such as domain credentials, into Azure hosted web sites or services, you will need an Identity Provider Security Token Service (IP-STS), such as Active Directory Federation Services 2.0 (ADFS 2.0), that issues claims for those local identities. Your services, whether on-premises or in the cloud, can then be configured to trust credentials, in the form of claims, issued by the IP-STS. Think of the IP-STS simply as the component that can tell whether a username and password are valid. In this approach, clients authenticate against the IP-STS (for example, by sending their Windows credentials to ADFS) and, if valid, receive claims they can subsequently present to your websites or services for access. Your websites or services only have to evaluate these claims when executing authorization logic – Windows Identity Foundation (WIF) provides these facilities.
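As a small illustration of that last authorization step, here is a sketch using the WIF (Microsoft.IdentityModel) object model; the "OrderApprover" role value is a hypothetical claim:

```csharp
using System.Linq;
using System.Threading;
using Microsoft.IdentityModel.Claims;

public static class ClaimsAuthorization
{
    public static bool CallerIsOrderApprover()
    {
        // After WIF validates the incoming token, the thread principal
        // carries a claims-based identity issued by the IP-STS.
        var identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
        if (identity == null)
        {
            return false;
        }

        // "OrderApprover" is a hypothetical role claim value.
        return identity.Claims.Any(c =>
            c.ClaimType == ClaimTypes.Role && c.Value == "OrderApprover");
    }
}
```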

For additional information, review this session – SIA305 Windows Identity Foundation and Windows Azure for Developers. In this session you can learn how Windows Identity Foundation can be used to secure your Web Roles hosted in Windows Azure, how you can take advantage of existing on-premises identities, and how to make the most of the features such as certificate management and staged environments.

In the next release (ACS v2), a similar approach can be taken by using the Windows Azure AppFabric Access Control Service (ACS) to act as your IP-STS, thereby replacing ADFS. With ACS, you can define service identities that are essentially logins secured by either a username and password or a certificate. The client calls out to ACS first to authenticate, and thereafter presents the claims received from ACS in the call to your service or website.

Finally, more advanced scenarios can be put into place that use ACS as your Relying-Party Security Token Service (RP-STS), the authority for all your apps, both on-premises and in the cloud. ACS is then configured to trust identities issued by other identity providers and convert them to the claims expected by your applications. For example, you can take this approach to enable clients to authenticate using Windows Live ID, Google, and Yahoo, while still working with the same set of claims you built prior to supporting those services.

Port Bridge, a point-to-point tunneling solution built on the Service Bus, is perfect when the application is not exposed as a WCF service or doesn't speak HTTP. Port Bridge suits protocols such as SMTP, SNMP, POP, IMAP, RDP, TDS, and SSH. You can read more about this here.

Optimizing Bandwidth – Data transfer Costs

One huge issue to reconcile in hybrid solutions is bandwidth – this concern is unique to hybrid.

The amount of data transferred into and out of a datacenter is billable, so one approach to optimizing your bandwidth cost is to keep as much traffic as possible from leaving the datacenter. This translates into designs that transfer most data across the Azure "fabric" within affinity groups and minimize traffic between disparate data centers and the outside world. An affinity group ensures that your cloud services and storage are hosted together on the Windows Azure infrastructure. Windows Azure roles, Windows Azure storage, and SQL Azure services can be configured at creation time to live within a specific datacenter. By ensuring that the Azure hosted components of your hybrid solution that communicate with each other live in the same data center, you effectively eliminate those data transfer costs.

The second approach to optimizing bandwidth costs is to use caching. This means leveraging Windows Server AppFabric Cache on premise to minimize calls to SQL Azure or Azure Storage. Similarly, it also means utilizing the Azure AppFabric Cache from Azure roles to minimize calls to SQL Azure, Azure Storage, or to on-premises services via Service Bus.

Optimizing Resources

One of the most significant cost-optimization features of Windows Azure roles is their support for dynamic scale-up and scale-down: adding more instances as your load increases and removing them as the load decreases. Supporting load-based dynamic scaling can be accomplished using the Windows Azure Service Management API and SQL Azure. While there is no out-of-the-box support for this, there is a fair amount of guidance on how to implement it within your solutions (see the Additional References section for one such example). The typical approach is to configure your Azure Web and Worker roles to log health metrics (e.g., CPU load, memory usage, etc.) to a table in SQL Azure. Another Worker role is then responsible for periodically polling this table; when certain thresholds are reached or exceeded, it increases or decreases the instance count for the monitored service using the Windows Azure Service Management API. A sketch of such a control loop follows.
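In this sketch, ReadAverageCpuLoad, GetCurrentInstanceCount, and SetInstanceCount are hypothetical helpers: the first would query the SQL Azure metrics table, and the latter two would wrap the Get/Change Deployment Configuration operations of the Service Management REST API. The thresholds and instance bounds are illustrative:

```csharp
using System;
using System.Threading;

public class ScalingWorker
{
    public void Run()
    {
        while (true)
        {
            // Average CPU over the last 15 minutes, from the SQL Azure metrics table.
            double avgCpu = ReadAverageCpuLoad(TimeSpan.FromMinutes(15));
            int instances = GetCurrentInstanceCount("WebRole1");

            if (avgCpu > 75 && instances < 8)
                SetInstanceCount("WebRole1", instances + 1); // scale up
            else if (avgCpu < 25 && instances > 2)
                SetInstanceCount("WebRole1", instances - 1); // scale down

            Thread.Sleep(TimeSpan.FromMinutes(5));
        }
    }

    // Hypothetical helpers; implementations would use ADO.NET and the
    // Windows Azure Service Management REST API, respectively.
    private double ReadAverageCpuLoad(TimeSpan window) { throw new NotImplementedException(); }
    private int GetCurrentInstanceCount(string roleName) { throw new NotImplementedException(); }
    private void SetInstanceCount(string roleName, int count) { throw new NotImplementedException(); }
}
```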

Monitoring & Diagnostics

For on-premises services, you can use Windows Server AppFabric Monitoring which logs tracking events into your on-premises instance of SQL Server, and this data can be viewed both through IIS Manager and by querying the monitoring database.

For Azure roles in production you will leverage Windows Azure Diagnostics, wherein your instances periodically transfer batches of diagnostic data to Azure storage. For SQL Azure, you can examine query performance by querying the related Dynamic Management Views. A minimal diagnostics configuration sketch follows.
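This sketch wires up a performance counter with the Windows Azure Diagnostics API (typically called from the role's OnStart); the counter, sample rate, and transfer period are illustrative, and the "DiagnosticsConnectionString" setting name is the SDK 1.x template default:

```csharp
using System;
using Microsoft.WindowsAzure.Diagnostics;

public class DiagnosticsSetup
{
    public static void Start()
    {
        DiagnosticMonitorConfiguration config =
            DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Sample CPU load every 30 seconds...
        config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
        {
            CounterSpecifier = @"\Processor(_Total)\% Processor Time",
            SampleRate = TimeSpan.FromSeconds(30)
        });

        // ...and push the collected batch to Azure storage every 5 minutes.
        config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

        DiagnosticMonitor.Start("DiagnosticsConnectionString", config);
    }
}
```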

IntelliTrace is very useful for debugging solutions. With IntelliTrace debugging, you can log extensive debugging information for a Role instance while it is running in Windows Azure. If you need to track down a problem, you can then use the IntelliTrace logs to step through your code from within Visual Studio as though it were running in Windows Azure. In effect, IntelliTrace records key code execution and environment data while your service is running, and allows you to replay the recorded data from within Visual Studio. For more information on IntelliTrace, see Debugging with IntelliTrace.

Summary

Based on the discussion above, let's consolidate our findings into this easy-to-read table.

Table 3: In Summary

| Typical Scenario | Architectural Pattern | Application Host | Data Store(s) | Access Control | Management |
| --- | --- | --- | --- | --- | --- |
| EAI and Process Control | On-Premises | IIS, Windows Server, BizTalk Server | AppFabric Cache, SQL Server | Active Directory, SQL Login | SQL Server Management Studio, IIS Manager |
| Web content delivery | Cloud | Windows Azure – Web or Worker Roles | SQL Azure, Azure Storage, Azure AppFabric Cache | Account name/key pair (limited IP address access), SQL Login, Access Control Service | SQL Server Management Studio, Database Manager, Windows Azure Portal, Management API |
| Multi-location device/appliance management | Hybrid | IIS, Windows Server, BizTalk Server, Windows Azure – Web or Worker Roles, Service Bus, Windows Azure Connect | AppFabric Cache, SQL Server, SQL Azure, Azure Storage, Azure AppFabric Cache | ADFS 2.0, Access Control Service | SQL Server Management Studio, Database Manager, Windows Azure Portal, IIS Manager |

It is amply clear that the hybrid cloud architectural pattern is ideally suited for connecting on-premises appliances and applications with their cloud counterparts. The hybrid pattern is ideal because it leverages the best of both worlds: traditional on-premises applications and new distributed/multi-tenant cloud applications.

Let’s Make Plans To Build This Out Now!

In the sections above we have presented a survey of the technologies available for building out hybrid applications; the recent announcements at PDC 10 are worth considering before you move forward.

First, you should consider the Composition Model within the Windows Azure Composite Applications Environment. The Composition Model ties together all of your Azure hosted services (roles, storage, SQL Azure, etc.) as one composite application, enabling you to see the end-to-end picture in a single designer in Visual Studio. The Composition Model is a set of .NET Framework extensions that builds on the familiar Azure Service Model concepts and adds new capabilities for describing the application infrastructure, its components, and the relationships among the components, in order to treat the composite application as a single logical identity.

Second, if you were making calls back to an on-premises instance of Windows Server AppFabric, note that Windows Workflow is fully supported on Azure as part of the Composition Model, so you may choose to move the relevant workflows directly into the cloud. This is huge, very huge, since it enables you to develop once and deploy as needed!

Third, there is a new role in town: the Windows Azure VM Role. It enables third-party and legacy line-of-business applications to fully participate (wow – without a major code do-over!) in the hybrid cloud solution.

And last but not least, Windows Azure now provides Remote Desktop functionality, which enables you to connect to a running instance of your application or service to monitor activity and troubleshoot common problems. Remote Desktop access, while super critical for the VM Role, is available to Web and Worker roles too. This in turn enables another related new feature: the availability of full IIS instead of just the core that was previously available to Web roles.

Additional References

While web links for additional and background information are embedded in the text, the following references are provided as resources to move forward on your journey.

  • Windows Azure, SQL Azure & Azure AppFabric
  • SQL Azure Labs
  • Windows Azure AppFabric Service Bus
  • Windows Azure AppFabric Access Control Service
  • Windows Identity Foundation and Windows Azure for Developers
  • Windows Azure Connect talk at PDC 10
  • MSDN Magazine Article on Health-Based Scaling
  • Azure AppFabric Cache Talk at PDC 10
  • Identity & Access Control in the Cloud at PDC 10

Stay Tuned!

This is just the beginning; our team is currently working on the next blog, which builds on this topic and will take a scenario-driven approach to building a hybrid Windows Azure application.

Acknowledgements

Significant contributions from Paolo Salvatori, Valery Mizonov, Keith Bauer and Rama Ramani are acknowledged.

Namaste!

The PDF version of this document is attached.