A Brief History of Cloud-Based Integration in Microsoft Azure

In conversations with students and other integration specialists, I’m discovering more and more how confused some people are about the evolution of cloud-based integration technologies. I suspect that cloud-based integration is going to be big business in the coming years, but this confusion will be an impediment to us all.

To address this, I want to write a less technical, very casual blog post explaining where we are today (November of 2015) and, generally, how we got here. I’ll try to refrain from passing judgement on the technologies that came before, and I’ll avoid theorizing on what may come in the future. I simply want to give a timeline that anyone can use to understand this evolution, along with a high-level description of each technology.

I’ll only speak to Microsoft technologies because that’s where my expertise lies, but it’s worth acknowledging that there are alternatives in the marketplace.

If you’d like a more technical write-up of these technologies and how to use them, Richard Seroter has a good article on his blog that can be found here.

Way, way back in October of 2008, Microsoft unveiled Windows Azure (although it wouldn’t be until February of 2010 that Azure went “live”). On that first day, Azure wasn’t nearly the monster it has become.

It provided a service platform for .NET Services, SQL Services, and Live Services. Many people were still very skeptical about “the cloud” (if they even knew what that meant). As an industry we were entering a brave new world with many possibilities.

From an integration perspective, Windows Azure .NET Services offered Service Bus as a secure, standards-based messaging infrastructure.

Over the years, Service Bus has been rebranded several times but the core concepts have stayed the same: reduce the barriers for building composite applications, even when their components have to communicate across organizational boundaries. Initially, Service Bus offered Topics/Subscriptions and Queues as a means for systems and services to exchange data reliably through the cloud.

Service Bus Queues are just like any other queueing technology. We have a queue to which any number of clients can post messages. These messages can be received from the queue later by some process. Transactional delivery, message expiry, and ordered delivery are all built-in features.

Sample Service Bus queue
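If you want a feel for what this looks like in code, here’s a minimal sketch using the .NET client of the era (the Microsoft.ServiceBus.Messaging namespace); the connection string and the “orders” queue name are placeholders of my own invention:

    using System;
    using Microsoft.ServiceBus.Messaging;

    class QueueSketch
    {
        static void Main()
        {
            // Placeholder connection string from the Azure portal; "orders" is a made-up queue.
            string connectionString = "Endpoint=sb://<your-namespace>...";
            QueueClient client = QueueClient.CreateFromConnectionString(connectionString, "orders");

            // Any number of clients can post messages to the queue...
            client.Send(new BrokeredMessage("Hello, Service Bus!"));

            // ...and some process receives them later (PeekLock is the default mode).
            BrokeredMessage message = client.Receive();
            if (message != null)
            {
                Console.WriteLine(message.GetBody<string>());
                message.Complete();  // tell Service Bus the message was handled
            }
        }
    }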

I like to call Topics/Subscriptions “smart queues.” We have concepts similar to queues with the addition of message routing logic. That is, within a Topic I can define one or more Subscription(s). Each Subscription is used to identify messages that meet certain conditions and “grab” them. Clients don’t pick up messages from the Topic, but rather from a Subscription within the Topic. A single message can be routed to multiple Subscriptions once published to the Topic.

Sample Service Bus Topic and Subscriptions
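Here’s a similar sketch of that routing behavior; the topic name, subscription name, and filter are again entirely made up for illustration:

    using System;
    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    class TopicSketch
    {
        static void Main()
        {
            string connectionString = "Endpoint=sb://<your-namespace>...";  // placeholder

            // A Subscription "grabs" messages that satisfy its filter.
            var ns = NamespaceManager.CreateFromConnectionString(connectionString);
            if (!ns.SubscriptionExists("orders", "big-orders"))
            {
                ns.CreateSubscription("orders", "big-orders", new SqlFilter("Amount > 1000"));
            }

            // Publish once to the Topic; Service Bus routes to every matching Subscription.
            var topic = TopicClient.CreateFromConnectionString(connectionString, "orders");
            var message = new BrokeredMessage("a big order");
            message.Properties["Amount"] = 5000;
            topic.Send(message);

            // Clients receive from a Subscription, never from the Topic itself.
            var sub = SubscriptionClient.CreateFromConnectionString(connectionString, "orders", "big-orders");
            BrokeredMessage received = sub.Receive();
            if (received != null)
            {
                Console.WriteLine(received.GetBody<string>());
                received.Complete();
            }
        }
    }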

If you have a BizTalk Server background, you can essentially think of each Service Bus Topic as a MessageBox database.

Interacting with Service Bus is easy to do across a variety of clients using the .NET or REST APIs. With the ability to connect on-premises applications to cloud-based systems and services, or even connect cloud services to each other, Service Bus offered the first real “integration” features to Azure.

Since its release, Service Bus has grown to include other messaging features such as Relays, Event Hubs, and Notification Hubs, but at its heart it has remained the same and continues to provide a rock-solid foundation for exchanging messages between systems in a reliable and programmable way. In June of 2015, Service Bus processed over 1 trillion (1,000,000,000,000) messages!

As integration specialists we know that integration problems are more complex than simply grabbing some data from System A and dumping it in System B.

Message transport is important but it’s not the full story. For us, and the integration applications we build, VETRO (Validate, Enrich, Transform, Route, and Operate) is a way of life. I want to validate my input data. I may need to enrich the data with alternate values or contextual information. I’ll most likely need to transform the data from one format or schema to another. Identifying and routing the message to the correct destination is certainly a requirement. Any integration solution that fails to deliver all of these capabilities probably won’t interest me much.

VETRO Diagram
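If it helps to see the pattern in code, here’s a tiny, entirely hypothetical sketch of a VETRO-style pipeline; none of these types belong to any real product:

    using System.Collections.Generic;

    // Hypothetical types for illustration only; no real product API is implied.
    class Message
    {
        public string Body;
        public Dictionary<string, object> Context = new Dictionary<string, object>();
    }

    interface IVetroStage
    {
        Message Process(Message message);
    }

    class VetroPipeline
    {
        private readonly IVetroStage[] stages;

        // Stages are supplied in VETRO order: Validate, Enrich, Transform, Route, Operate.
        public VetroPipeline(params IVetroStage[] stages) { this.stages = stages; }

        public Message Execute(Message input)
        {
            Message current = input;
            foreach (IVetroStage stage in stages)
            {
                current = stage.Process(current);  // each stage owns exactly one responsibility
            }
            return current;
        }
    }

The point is simply decomposition: each stage owns exactly one of those responsibilities, and the pipeline strings them together.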

So, in a world where Service Bus is the only integration tool available to me, do I have VETRO? Not really.

I have a powerful, scalable, reliable messaging infrastructure that I can use to transport messages, but I cannot transform that data, nor can I manipulate it in a meaningful way, so I need something more.

I need something that works in conjunction with this messaging engine.

Microsoft’s first attempt at a more traditional integration platform with VETRO-esque capabilities was Microsoft Azure BizTalk Services (MABS) (to confuse things further, this was originally branded as Windows Azure BizTalk Services, or WABS). You’ll notice that Azure itself has changed its name from Windows Azure to Microsoft Azure, but I digress.

MABS was announced publicly at TechEd 2013.

Despite the name, Microsoft Azure BizTalk Services DOES NOT have a common code-base with Microsoft BizTalk Server (on second thought, perhaps the EDI pieces share some code with BizTalk Server, but that’s about all). In the MABS world we could create itineraries. These itineraries contained connections to source and destination systems (on-premises & cloud) and bridges. Bridges were processing pipelines made up of stages. Each stage could be configured to provide a particular type of VETRO function. For example, the Enrich stage could be used to add properties to the context of the message travelling through the bridge/itinerary.

Stages of a MABS Bridge

Complex integration solutions could be built by chaining multiple bridges together using a single itinerary.

MABS message flow

MABS was our first real shot at building full integration solutions in the cloud, and it was pretty good, but Microsoft wasn’t fully satisfied, and the industry was changing its approach to service-based architectures. Now we want Microservices (more on that in the next section).

The MABS architecture had some shortcomings of its own. For example, there was little or no ability to incorporate custom components into the bridges, and a lack of connectors to source and destination systems.

Over the past couple of years the trending design architecture has been Microservices. For those of you who aren’t already familiar with it, or don’t want to read pages of theory, it boils down to this:

“Architect the application by applying the Scale Cube (specifically y-axis scaling) and functionally decompose the application into a set of collaborating services. Each service implements a set of narrowly related functions. For example, an application might consist of services such as the order management service, the customer management service etc.

Services communicate using either synchronous protocols such as HTTP/REST or asynchronous protocols such as AMQP.

Services are developed and deployed independently of one another.

Each service has its own database in order to be decoupled from other services. When necessary, consistency between databases is maintained using either database replication mechanisms or application-level events.”

So the shot-callers at Microsoft see this growing trend and want to ensure that the Azure platform is suited to enable this type of application design. At the same time, MABS has been in the wild for just over a year and the team needs to address the issues that exist there. MABS itineraries are deployed as one big chunk of code, and that does not align well to the Microservices way of doing things. Therefore, we need something new but familiar!

Azure App Service is a cloud platform for building powerful web and mobile apps that connect to data anywhere, in the cloud or on-premises. Under the App Service umbrella we have Web Apps, Mobile Apps, API Apps, and Logic Apps.

Azure App Service

I don’t want to get into Web and Mobile Apps. I want to get into API Apps and Logic Apps.

API Apps and Logic Apps were publicly unveiled in March of 2015 and are currently still in preview.

API Apps provide capabilities for developing, deploying, publishing, consuming, and managing RESTful web APIs. The simple, less sales-pitch-sounding version of that is that I can put RESTful services in the Azure cloud so I can easily use them in other Azure App Service-hosted things, or call the API (you know, since it’s an HTTP service) from anywhere else. Not only is the service hosted in Azure and highly scalable, but Azure App Service also provides security and client consumption features.

So, API Apps are HTTP / RESTful services running in the cloud. These API Apps are intended to enable a Microservices architecture. Microsoft offers a bunch of API Apps in Azure App Service already and I have the ability to create my own if I want. Furthermore, to address the integration needs that exist in our application designs, there is a special set of BizTalk API Apps that provide MABS/BizTalk Server style functionality (i.e., VETRO).

What are API Apps?
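To make that a little more concrete, here’s a minimal sketch of the kind of RESTful service you might publish as an API App; the controller, route, and payload are all hypothetical (real API Apps also expose Swagger metadata that App Service and the Logic App designer use):

    using System.Web.Http;

    // A trivial RESTful endpoint of the sort you might publish as an API App.
    public class TemperatureController : ApiController
    {
        [HttpGet]
        [Route("api/temperature/{city}")]
        public IHttpActionResult Get(string city)
        {
            // A real connector would call out to a backing system here.
            return Ok(new { City = city, Celsius = 21 });
        }
    }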

This is all pretty cool, but I want more. That’s where Logic Apps come in.

Logic Apps are cloud-hosted workflows made up of API Apps. I can use Logic Apps to design workflows that start from a trigger and then execute a series of steps, each invoking an API App whilst the Logic App run-time deals with pesky things like authentication, checkpoints, and durable execution. Plus it has a cool rocket ship logo.

What are Logic Apps?

What does all this mean? How can I use these Azure technologies together to build awesome things today?

Service Bus review

Service Bus provides an awesome way to get messages from one place to another using either Queues or Topics/Subscriptions.

API Apps are cloud-hosted services that do work for me. For example, hit a SaaS provider or talk to an on-premises system (we call these connectors), transform data, change an XML payload to JSON, etc.

Logic Apps are workflows composed of multiple API Apps. So I can create a composite process from a series of Microservices.

Logic App review

If I were building an entire integration solution, though, breaking the process across multiple Logic Apps might make great sense. In that case, I can use Service Bus to connect those workflows to each other in a loosely-coupled way.

Logic Apps and Service Bus working together

And as my integration solution becomes more sophisticated, perhaps I have need for more Logic Apps to manage each “step” in the process. I further use the power of Topics to control the workflow to which a message is delivered.

More Logic Apps and Service Bus Topics provide a sophisticated integration solution

In the purest of integration terms, each Logic App serves as its own VETRO (or subset of VETRO features) component. Decomposing a process into several different Logic Apps and then connecting them to each other using Service Bus gives us the ability to create durable, long-running composite processes that remain loosely-coupled.

Doing VETRO using Service Bus and Logic Apps

Today Microsoft Azure offers the most complete story to date for cloud-based integration, and it’s a story that is only getting better and better. The Azure App Service team and the BizTalk Server team are working together to deliver amazing integration technologies. As an integration specialist, you may have been able to ignore the cloud for the past few years, but in the coming years you won’t be able to get away with it.

We’ve all endeavored to eliminate those nasty data islands. We’ve worked to tear down the walls dividing our systems. Today, a new generation of technologies is emerging to solve the problems of the future. We need people like you, the seasoned integration professional, to help direct the technology, and lead the developers using it.

If any of this has gotten you at all excited to dig in and start building great things, you might want to check out QuickLearn Training’s 5-day instructor-led course detailing how to create complete integration solutions using the technologies discussed in this article. Please come join us in class so we can work together to build magical things.

Integration Monday Recap and Push-Button Push Trigger Introduction

This blog post serves as a quick recap of and expansion on my October 19th Integration Monday talk titled Building Push Triggers for Logic Apps. You can view the session and look through the slides over at integrationusergroup.com.

In the talk, I explored the bare minimum requirements for building push triggers, expanding on my AzureCon 2015 talk about a specific push trigger for dealing with NFC tag reads. I showed how you could use the QuickLearn Push Trigger Tools and QuickLearn Push Trigger Client Tools to implement a simple interface for storing callbacks, and build a re-usable set of callback storage mechanisms.
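To give you a rough idea of the shape of that interface (the names below are illustrative, not the actual surface of the QuickLearn libraries), the callback-storage abstraction boils down to something like this:

    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;

    // Illustrative only; not the actual surface of the QuickLearn Push Trigger Tools.
    public interface ICallbackStore<TConfig>
    {
        // Remember a Logic App's callback (its URI plus the trigger's configuration).
        Task StoreAsync(string triggerId, Uri callbackUri, TConfig config);

        // Hand back every registration so the event source can fire them later.
        Task<IEnumerable<CallbackRegistration<TConfig>>> ReadAllAsync();
    }

    public class CallbackRegistration<TConfig>
    {
        public string TriggerId { get; set; }
        public Uri CallbackUri { get; set; }
        public TConfig Config { get; set; }
    }

Implement that interface once per storage mechanism (Azure Table Storage, in-memory, etc.), and your push triggers can reuse whichever store fits the situation.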

I also introduced the Push-Button Push Trigger: a push trigger that responds to a button press on a Windows 10 IoT device (in this case, a Raspberry Pi 2), relying on Azure Storage for callback storage. In the remainder of this post, I’m going to show you how to get your own Push-Button Push Trigger up and running.

Where Do I Get a Push-Button Push Trigger?

At the moment, there are two ways you can get one. You can come out to QuickLearn’s 5-day Cloud-Based Integration using Azure App Service course (or attend remotely), or you can build one for yourself!

Even if you’ve never worked with anything like this before — don’t panic. You can’t really get something simpler than this.

Essentially you’ll need a Windows 10 IoT device (Raspberry Pi 2, DragonBoard 410C, MinnowBoard Max, etc…), a momentary switch (button), some wiring to connect the button to a GPIO port and ground, and optionally a breadboard for even more fun later. I went with a Raspberry Pi 2 for mine, but if I could do it again, I would choose a DragonBoard 410C, given its built-in Wi-Fi capabilities that don’t require an additional accessory or external module.

To get started with developing, you will need some software on your own machine (Visual Studio, Windows 10 IoT Project Templates, etc…) and you will need to get Windows 10 IoT Core onto your device. Microsoft has provided a pretty decent write-up of that part of the procedure over here.

Assembling Your Push-Button Push Trigger

You may or may not have a case to go along with your Push-Button Push Trigger. I ended up buying the cheapest case I could find on the internet. This was likely a poor choice as it quickly disintegrated and pieces chipped off. Since then, I’ve had a lot better luck with this one. Of course, you could always print/build your own custom enclosure as well.

First things first, make sure your device has Windows IoT by inserting a prepared MicroSD card into the device.

Next, place your device in its case (if applicable). Mine has a mini-breadboard mounted on top for easier portability of the device.

Now, on to the wiring. We’re going to run one wire to GPIO pin 4 (chosen arbitrarily) and another wire to ground. Eventually we’re going to put a momentary switch in between so that we can quickly toggle that connection between high and low.

Let’s get that switch up on the breadboard (and make sure we put on a nice and colorful cover).

You can see in the image above how the posts are reaching down into the board. To connect wires to those pins, you will simply plug in one of the jumper wires to the same row as one of the pins.

One connection down, one to go.

At this point, everything is wired up, and it’s time to get power and internet to the device.

How Do I Make The Sample Code Work?

First of all, you can find the sample code over at https://github.com/nihaue/PushButtonPushTrigger. You can either download it as a ZIP file if you’re not comfortable with Git, or if you are comfortable with Git, you can clone it directly from here: https://github.com/nihaue/PushButtonPushTrigger.git

Once you have the sample downloaded, you should immediately Build the code to restore NuGet packages and make all of the references happy. Next, you should take some time to look through the CallbacksController class for the Push Trigger (the part actually hosted in Azure with which the Logic App registers its interest in certain data), and the StartupTask class for the Universal Windows App (the part that actually looks for and handles the button press):

After you have a decent understanding of what’s going on, you’ll realize that the CallbacksController is storing callbacks from interested Logic Apps in Azure Table Storage, and the StartupTask (think background service on the device) is reading callbacks from Azure Table Storage when the button is pressed (moving this code to initialization and caching/polling for updates would be a better choice – and something you’re free to implement). So in order to get this thing working, you’re going to need an Azure Storage account.
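As a simplified sketch of the device side (assuming the button wired between pin 4 and ground, a table named callbacks, and a CallbackUri column; the actual sample’s names may differ), the flow looks roughly like this:

    using System;
    using System.Net.Http;
    using System.Text;
    using Windows.ApplicationModel.Background;
    using Windows.Devices.Gpio;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Table;

    // Simplified sketch: watch GPIO pin 4 and, on each press, read the registered
    // callbacks from Azure Table Storage and POST to each one.
    public sealed class StartupTask : IBackgroundTask
    {
        private BackgroundTaskDeferral deferral;
        private GpioPin buttonPin;

        public void Run(IBackgroundTaskInstance taskInstance)
        {
            deferral = taskInstance.GetDeferral();  // keep the background task alive

            buttonPin = GpioController.GetDefault().OpenPin(4);
            buttonPin.SetDriveMode(GpioPinDriveMode.InputPullUp);  // pressed == low
            buttonPin.ValueChanged += async (pin, args) =>
            {
                if (args.Edge != GpioPinEdge.FallingEdge) return;  // react to the press only
                                                                   // (debouncing omitted)
                var account = CloudStorageAccount.Parse("<your connection string>");
                var table = account.CreateCloudTableClient().GetTableReference("callbacks");
                var segment = await table.ExecuteQuerySegmentedAsync(
                    new TableQuery<DynamicTableEntity>(), null);

                using (var http = new HttpClient())
                {
                    foreach (DynamicTableEntity entity in segment.Results)
                    {
                        string callbackUri = entity.Properties["CallbackUri"].StringValue;
                        await http.PostAsync(callbackUri,
                            new StringContent("{ \"pressed\": true }", Encoding.UTF8, "application/json"));
                    }
                }
            };
        }
    }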

If you don’t already have an Azure Storage account, head over to the Azure Portal and create one.

The only thing you need from this storage account will be the connection string, which you can find after it’s created over here:

With those credentials in hand, you’ll need to visit the two files in the solution responsible for storing configuration. They’re both named AzureStorageConfig.cs.

Inside each file, you will see a line of code with a TODO comment indicating that you should paste the connection string for your Azure Storage account in that location. This is indeed your next step (make sure to do it both in the code for the API App that lives in Azure and in the code for the device itself).

Ultimately, this is a terrible way to handle configuration. You can get the sample working with a simple copy/paste in that file, but the intent is that you would simply decide for yourself how you’d like to manage the configuration and creation of the CloudStorageAccount instance, and make that instance available through the StorageAccount property of the AzureStorageConfig class. This instance is used in both the AzureStorageCallbackStore and the AzureStorageClientCallbackStore classes.
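For the API App side, one hedged alternative would be reading the connection string from an app setting, which App Service surfaces to the process as an environment variable; the “StorageConnection” setting name here is made up:

    using System;
    using Microsoft.WindowsAzure.Storage;

    // A sketch, assuming the API App host exposes an app setting named "StorageConnection".
    public static class AzureStorageConfig
    {
        private static readonly Lazy<CloudStorageAccount> account =
            new Lazy<CloudStorageAccount>(() =>
                CloudStorageAccount.Parse(
                    Environment.GetEnvironmentVariable("StorageConnection")));

        public static CloudStorageAccount StorageAccount
        {
            get { return account.Value; }
        }
    }

The device side would need its own mechanism, of course, since there’s no App Service configuration store sitting on the Raspberry Pi.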

Publishing the Code to Azure

We’re now ready to get this code all in place and running. The first step toward that goal will be to publish the API App project. You can do this by right-clicking the QuickLearn.ButtonPress.PushTrigger project, and then clicking Publish.

Make sure to select Microsoft Azure API Apps (Preview) as the target.

In the Microsoft Azure API Apps window, select your Azure Subscription, and then click New… Fill out the form to create a new API App container into which you can deploy your code.

Once the creation of the container is complete, you will see the following status message appear in Visual Studio.

Then, you will once again right-click the project and then click Publish… This time, the form will be pre-filled with the settings from the publish profile of the Azure API App container that you just provisioned. You might find it helpful to deploy the Debug configuration of your API App (Settings > Configuration > Debug – Any CPU), but that is entirely up to you. Once you click Publish, your code will be deployed to the API App container, and the API App will be usable within a Logic App.

Next up, we will deploy code to the device, and configure it to run in the background.

Deploy the Code to the Device

First of all, you will need to edit the project properties for the QuickLearn.ButtonPress.App project so that it attempts deployment to the correct device. In this case, that will mean navigating to the Debug tab, setting the Target device to Remote machine, setting the Remote machine to the name of your Windows 10 IoT Core device (default: minwinpc), and then unchecking the Use authentication box.

You will want to make sure to save the project properties and get your device connected to the same network as your development machine (a laptop in my case). Next, you can right-click QuickLearn.ButtonPress.App, and click Deploy.

Once deployment is complete, head over to the Windows IoT Core Watcher utility that ended up on your system after installing everything you needed to get your device set up initially. If you can’t find it, reboot your system and it will be there waiting for you. The Windows IoT Core Watcher utility finds IoT Core devices on your network and provides quick links to access and configure them.

In the utility, right-click your device, and then click Web Browser here.

Log in using your username and password (the default is Administrator / p@ssw0rd).

Next, head over to the Apps tab, and verify that QuickLearn.ButtonPress shows up in the list. You will want to note the full name as it appears here because you will need it in a few minutes.

Since this app was created as a Startup Task rather than as a graphical application, you will need to register it with the device to be run on startup. At the moment, this is not something that you can accomplish in the browser. Instead, you will need to fire up PowerShell for this next bit.

In PowerShell, you will need to enter a remote session on your device. You can do this using the Enter-PSSession cmdlet like this:
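The commands look something like the following (assuming the default device name of minwinpc; on your local machine the WinRM service needs to be running and the device added to your trusted hosts first):

    net start WinRM
    Set-Item WSMan:\localhost\Client\TrustedHosts -Value minwinpc
    Enter-PSSession -ComputerName minwinpc -Credential minwinpc\Administrator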

The connection process will take a while. Just get a cup of coffee, and when you have it ready, the session should be connected. Once connected, you are in PowerShell on the device, and are executing commands against the device (not your own local machine).

On the device is a utility called iotstartup. This utility lets you configure which tasks run at device startup. In this case, you want the device to constantly run the Push-Button Push Trigger code in the background.

At the prompt, type iotstartup add headless “QuickLearn.ButtonPress.*”

Add Headless Startup Task

Verify that the app it added matches exactly what appeared in the list on the device web page that you examined earlier. Then, at the prompt, type shutdown /r /t 0

This will cause the device to reboot and your application to start up. It may take 60-90 seconds for the reboot to complete.

Building and Testing a Logic App using the Push-Button Push Trigger

In the Azure Portal, create a new empty Logic App in the same resource group in which you deployed the Push Button Push Trigger API App (otherwise the API App won’t be available to select in the designer). In the Logic App designer, in the API Apps pane (which you may have to expand in order to see), click QuickLearn.ButtonPress.PushTrigger.

Configure the Push Trigger as shown, and then click the green check mark to save the settings.

After the push trigger, add any other actions to your Logic App that you wish. Maybe this triggers a build in TFS, maybe it connects to a device that opens a door, maybe it brews you coffee remotely, maybe it posts a message in a chat service, maybe it closes out the latest support ticket that you were working on in your help desk system – it’s up to you. For me, I’m going to add a simple HTTP action (since it’s built into the runtime), and have it POST a message to a requestb.in indicating that the button has been pressed.

Save the Logic App, and it’s all ready to go!

PUSH THE BUTTON

There’s only one thing left to do – push the button. If everything has been set up correctly, the Logic App’s callback should be invoked and magic should happen in the cloud.

What If It Didn’t Work?

Well, there are a few troubleshooting steps you can take. Using the Cloud Explorer window (part of the Azure SDK) in Visual Studio, you can navigate to your API App, right-click it, and then click Attach Debugger. You can set breakpoints within the callback registration method of the controller class and step through looking for problems as the Logic App registers the callback. This only happens when the trigger is first added to the Logic App (after clicking Save), and then every hour or so after that (assuming it worked on the first try).

You can see past registrations of the callback by navigating to the trigger history for the Logic App. If you see a string of failures there, it’s likely a bug in the callback registration code or a problem with your storage account credentials.

Clicking any one of those line items will bring up the details (inputs/outputs) for debugging. If you want to attempt a callback registration manually (so that you can do it on demand), you can use the Swagger UI page for the API App, and manually fire the callback registration method.

The above screenshots were generated by replacing the configuration details for the Cloud Storage account with completely invalid data.

If everything looks good as far as the API App in Azure is concerned, you may want to debug the Windows IoT Core task from within Visual Studio. This can be done by right-clicking the project and then clicking Start Debugging (nothing special there).

The End

That’s all for now. Stay tuned for more samples and course updates!