by Howard Edidin | Mar 28, 2018 | BizTalk Community Blogs via Syndication
An Azure IoT Hub can store just about any type of data from a Device.
There is support for:
- Sending device-to-cloud messages
- Invoking direct methods on a device (see the sketch after this list)
- Uploading files from a device
- Managing device identities
- Scheduling jobs on single or multiple devices
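For example, a back-end service can invoke a direct method on a connected device through the service-side Microsoft.Azure.Devices SDK. The following is a minimal sketch, not taken from the article: the connection-string setting, the device id myDevice, and the reboot method are all hypothetical, and the device must be online with a handler registered for that method.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Devices; // service-side SDK (not the device-side Devices.Client)

public static class DirectMethodExample
{
    public static async Task InvokeRebootAsync()
    {
        // Hypothetical app setting holding an IoT Hub service connection string.
        var serviceClient = ServiceClient.CreateFromConnectionString(
            Environment.GetEnvironmentVariable("hubServiceConnectionString"));

        var method = new CloudToDeviceMethod("reboot") // hypothetical method name
        {
            ResponseTimeout = TimeSpan.FromSeconds(30)
        };
        method.SetPayloadJson("{ \"delaySeconds\": 5 }");

        // The target device must be online and have a handler registered for "reboot".
        CloudToDeviceMethodResult result =
            await serviceClient.InvokeDeviceMethodAsync("myDevice", method);

        Console.WriteLine($"Status: {result.Status}, Payload: {result.GetPayloadAsJson()}");
    }
}
```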
IoT Hub also provides a set of built-in endpoints, and custom endpoints can be created as well.
IoT Hub currently supports the following Azure services as additional endpoints:
- Azure Storage containers
- Event Hubs
- Service Bus Queues
- Service Bus Topics
Architecture
If we look through the documentation on the Azure Architecture Center, we can see a list of Architectural Styles.
If we were to design an IoT solution, we would want to follow best practices. We can do this by using the Azure architectural style of Event-Driven Architecture, since event-driven architectures are central to IoT solutions.
Merging Event-Driven Architecture with Microservices lets us separate the IoT business services.
These services include:
- Provisioning
- Management
- Software Updating
- Security
- Logging and Notifications
- Analytics
Creating our services
To create these services, we start by selecting our Compute Options.
App Services
The use of Azure Functions is becoming commonplace. They are an excellent replacement for API Apps, and they can be published to Azure API Management.
We can create a serverless API, or use Durable Functions, which let us create workflows and maintain state in a serverless environment.
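As a rough sketch of that last point (Functions v1 style, matching the SendEvent code later in this post), here is a minimal Durable Functions orchestrator; the RegisterDevice and ApplyConfiguration activities are hypothetical placeholders for provisioning steps:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs; // Durable Functions v1 attributes and context

public static class DeviceProvisioningWorkflow
{
    [FunctionName("ProvisionDeviceOrchestrator")]
    public static async Task<string> Run(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        var deviceId = context.GetInput<string>();

        // Each awaited activity is checkpointed, so the workflow state
        // survives process restarts without explicit persistence code.
        await context.CallActivityAsync("RegisterDevice", deviceId);
        return await context.CallActivityAsync<string>("ApplyConfiguration", deviceId);
    }

    [FunctionName("RegisterDevice")]
    public static Task RegisterDevice([ActivityTrigger] string deviceId) =>
        Task.CompletedTask; // register the device with the IoT Hub identity registry here

    [FunctionName("ApplyConfiguration")]
    public static Task<string> ApplyConfiguration([ActivityTrigger] string deviceId) =>
        Task.FromResult($"{deviceId} configured"); // push desired configuration here
}
```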
Logic Apps provide us with the capability of building automated scalable workflows.
Data Store
Having a single data store is usually not the best approach. Instead, it's often better to store different types of data in different data stores, each focused on a specific workload or usage pattern. These stores include key/value stores, document databases, graph databases, column-family databases, data analytics stores, search engine databases, time series databases, object storage, and shared files.
That may hold true for other architectural styles, but in our Event-Driven Architecture it is ideal to store all data related to IoT devices in the IoT Hub. This data includes the results of all events within the Logic Apps, Function Apps, and Durable Functions.
Which brings us back to our topic… Considering Software as an IoT Device
Since Azure IoT Hub supports the TransportType.Http1 transport, we can use the Microsoft.Azure.Devices.Client library to send event data to our IoT Hub from any type of software. We also have the capability of receiving configuration data from the IoT Hub.
The following is the source code for our SendEvent Function App.
SendEvent Function App
#region Information
//
// MIT License
//
// Copyright (c) 2018 Howard Edidin
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in all
// copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
// SOFTWARE.
#endregion
#region
using System;
using System.Collections.Generic;
using System.Configuration;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;
using Microsoft.Azure.Devices.Client.Exceptions;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;
using Newtonsoft.Json;
using TransportType = Microsoft.Azure.Devices.Client.TransportType;
#endregion
namespace IoTHubClient
{
public static class SendEvent
{
private static readonly string IotHubUri = ConfigurationManager.AppSettings["hubEndpoint"];
[FunctionName("SendEventToHub")]
public static async Task<HttpResponseMessage> Run(
[HttpTrigger(AuthorizationLevel.Function, "post", Route = "device/{id}/{key:guid}")]
HttpRequestMessage req, string id, Guid key, TraceWriter log)
{
log.Info("C# HTTP trigger function processed a request.");
// Get request body
dynamic data = await req.Content.ReadAsAsync<object>();
var deviceId = id;
var deviceKey = key.ToString();
if (string.IsNullOrEmpty(deviceKey) || string.IsNullOrEmpty(deviceId))
return req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a deviceId and deviceKey in the URL");
var telemetry = new Dictionary<Guid, object>();
foreach (var item in data.telemetryData)
{
var telemetryData = new TelemetryData
{
MetricId = item.metricId,
MetricValue = item.metricValue,
MetricDateTime = item.metricDateTime,
MetricValueType = item.metricValueType
};
telemetry.Add(Guid.NewGuid(), telemetryData);
}
var deviceData = new DeviceData
{
DeviceId = deviceId,
DeviceName = data.deviceName,
DeviceVersion = data.deviceVersion,
DeviceOperation = data.deviceOperation,
DeviceType = data.deviceType,
DeviceStatus = data.deviceStatus,
DeviceLocation = data.deviceLocation,
SubscriptionId = data.subscriptionId,
ResourceGroup = data.resourceGroup,
AzureRegion = data.azureRegion, // map the azureRegion value listed in the POST body properties
EffectiveDateTime = new DateTimeOffset(DateTime.Now),
TelemetryData = telemetry
};
var json = JsonConvert.SerializeObject(deviceData);
var message = new Message(Encoding.ASCII.GetBytes(json));
try
{
var client = DeviceClient.Create(IotHubUri,
new DeviceAuthenticationWithRegistrySymmetricKey(deviceId, deviceKey),
TransportType.Http1);
await client.SendEventAsync(message);
return req.CreateResponse(HttpStatusCode.OK);
}
catch (IotHubException e)
{
// IoT Hub failures (authentication, throttling, quota exceeded) surface as IotHubException
return req.CreateResponse(HttpStatusCode.InternalServerError, e.Message);
}
}
}
public class DeviceData
{
public string DeviceId { get; set; }
public string DeviceName { get; set; }
public string DeviceVersion { get; set; }
public string DeviceType { get; set; }
public string DeviceOperation { get; set; }
public string DeviceStatus { get; set; }
public DeviceLocation DeviceLocation { get; set; }
public string AzureRegion { get; set; }
public string ResourceGroup { get; set; }
public string SubscriptionId { get; set; }
public DateTimeOffset EffectiveDateTime { get; set; }
public Dictionary<Guid, object> TelemetryData { get; set; }
}
public class TelemetryData
{
public string MetricId { get; set; }
public string MetricValueType { get; set; }
public string MetricValue { get; set; }
public DateTime MetricDateTime { get; set; }
}
public enum DeviceLocation
{
Cloud,
Container,
OnPremise
}
}
Software Device Properties
The following values are required in the URL path (Route = "device/{id}/{key:guid}"):

| Name | Description |
| --- | --- |
| id | Device Id (string) |
| key | Device Key (GUID) |
The following are the properties to be sent in the POST body:

| Name | Description |
| --- | --- |
| deviceName | Device name |
| deviceVersion | Device version number |
| deviceType | Type of device |
| deviceOperation | Operation name or type |
| deviceStatus | Device status (default: Active) |
| deviceLocation | One of: Cloud, Container, OnPremise |
| subscriptionId | Azure subscription Id |
| resourceGroup | Azure resource group |
| azureRegion | Azure region |
| telemetryData | Array of telemetry items |
| telemetryData.metricId | Array item id |
| telemetryData.metricValueType | Array item value type |
| telemetryData.metricValue | Array item value |
| telemetryData.metricDateTime | Array item timestamp |
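To tie the route and the body together, here is a hedged sketch of a caller posting to the function above; the host name, function key, device id/key, and all payload values are placeholders:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class SendEventCaller
{
    private static readonly HttpClient Http = new HttpClient();

    public static async Task CallAsync()
    {
        // Matches Route = "device/{id}/{key:guid}"; ?code= carries the function key.
        var url = "https://myfunctionapp.azurewebsites.net/api/device/device001/" +
                  "11111111-2222-3333-4444-555555555555?code=<function-key>";

        var body = @"{
            ""deviceName"": ""OrderService"",
            ""deviceVersion"": ""1.0.3"",
            ""deviceType"": ""Software"",
            ""deviceOperation"": ""SubmitOrder"",
            ""deviceStatus"": ""Active"",
            ""deviceLocation"": ""Cloud"",
            ""subscriptionId"": ""<subscription-id>"",
            ""resourceGroup"": ""rg-iot"",
            ""azureRegion"": ""australiaeast"",
            ""telemetryData"": [
                { ""metricId"": ""latencyMs"", ""metricValueType"": ""double"",
                  ""metricValue"": ""42.7"", ""metricDateTime"": ""2018-03-28T10:15:00Z"" }
            ]
        }";

        var response = await Http.PostAsync(
            url, new StringContent(body, Encoding.UTF8, "application/json"));
        Console.WriteLine(response.StatusCode);
    }
}
```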
Summary
- We can easily add the capability of sending messages and events to our Function and Logic Apps.
- Optionally, we can send the data to an Event Grid.
- We have a single data store for all our IoT events.
- We can identify performance issues within our services.
- Having a single data store makes it easier to perform Analytics.
- We can use an Azure Function App to send device-to-cloud messages. In this case, our Function App will also be taking the role of a device.
by Dan Toomey | Feb 24, 2017 | BizTalk Community Blogs via Syndication
Last week I had the opportunity to attend Microsoft Ignite on the Gold Coast, Australia. Even better – I had a free ticket on account of agreeing to serve as a Technical Learning Guide (TLG) in the hands-on labs. This opportunity is only open to Microsoft Certified Trainers (MCTs) and competition was evidently keen this year – so I am glad to have been chosen. Catching up with fellow MCTs like Mark Daunt and meeting up with new ones such as Michael Schmitz was a real pleasure. Of course the down side was that I missed quite a few breakout sessions during the times I was rostered. Nevertheless, I still got to see some of the most important sessions to me, particularly those that centred around Azure and integration technologies. Please have a read of my summary of these on my employer’s blog.
By far, this was my best Australian Ignite/Tech-Ed event experience for many reasons, including:
- The Pro-Integration team from Redmond came all the way out to Australia to show everyone what the product group is doing with Logic Apps, Flow, Service Bus, and BizTalk Server
- I was chosen to present an Instructor-Led Lab in Service Fabric – my first ever speaking engagement at Ignite
- I had the rare opportunity to catch up with some fellow MVPs from Perth and Europe.
It was truly phenomenal to see enterprise integration properly represented at an Australian conference, as it is typically overlooked at these events. In addition to at least four breakout sessions on hybrid integration, Scott Guthrie actually performed a live demo of Logic Apps in his keynote! This was a good shout-out to the product team that has worked so hard to bring this technology up to the usability level it now enjoys. I’m glad that Jim Harrer, Jeff Holland, Jon Fancey and Kevin Lam were there to see it!
Teaching the lab in Service Fabric was a thrilling experience, but not without some challenges. The lab itself was broken and required a re-write of the second half, which I had pre-prepared and uploaded to OneDrive here so the students could progress. The main lab content is only available to Ignite attendees; however, if you want to have a go at a similar lab, you can try the similar ones available from Microsoft.
Despite the frustration that some attendees expressed about the lab errata and the poor performance of the environment, I was pleased that all the submitted feedback relating to the speaker was very positive!
Finally, perhaps the best part of events like these is the ability to catch up with old friends and meet some new ones. It was a pleasure to hang out with Azure MVP Martin Abbott from Perth and meet a few of his colleagues. It was also great to see Eldert Grootenboer and Steef-Jan Wiggers from the Netherlands, who happened to travel to Australia this month on holidays and to speak at some events. Steef-Jan also took time to include me in a V-Log series he’s been working on with various integration MVPs, recording his 3-minute interview with me at the top of Mount Coot-tha on a sunny Brisbane Saturday! And Mexia’s CEO Dean Robertson and I got to enjoy a nice dinner out with the Microsoft product group and the MVPs.
All good things must come to an end, but it was definitely a memorable week! Now it’s time to start getting ready for the Brisbane edition of the Global Integration Bootcamp on Saturday, 25th March, to be followed not long after by the Global Azure Bootcamp on Saturday 22nd April! I’ve got a few demos and presentations to prepare – but now with plenty of inspiration from Ignite!
by Rob Callaway | Nov 2, 2015 | BizTalk Community Blogs via Syndication
In conversations with students and other integration specialists, I’m discovering more and more how confused some people are about the evolution of cloud-based integration technologies. I suspect that cloud-based integration is going to be big business in the coming years, but this confusion will be an impediment to us all.
To address this I want to write a less technical, very casual, blog post explaining where we are today (November of 2015), and generally how we got here. I’ll try to refrain from passing judgement on the technologies that came before and I’ll avoid theorizing on what may come in the future. I simply want to give a timeline that anyone can use to understand this evolution, along with a high-level description of each technology.
I’ll only speak to Microsoft technologies because that’s where my expertise lies, but it’s worth acknowledging that there are alternatives in the marketplace.
If you’d like a more technical write-up of these technologies and how to use them, Richard Seroter has a good article on his blog that can be found here.
Way, way back in October of 2008 Microsoft unveiled Windows Azure (although it wouldn’t be until February of 2010 that Azure went “live”). On that first day, Azure wasn’t nearly the monster it has become.
It provided a service platform for .NET services, SQL Services, and Live Services. Many people were still very skeptical about “the cloud” (if they even knew what that meant). As an industry we were entering a brave new world with many possibilities.
From an integration perspective, Windows Azure .NET Services offered Service Bus as a secure, standards-based messaging infrastructure.
Over the years, Service Bus has been rebranded several times but the core concepts have stayed the same: reduce the barriers for building composite applications, even when their components have to communicate across organizational boundaries. Initially, Service Bus offered Topics/Subscriptions and Queues as a means for systems and services to exchange data reliably through the cloud.
Service Bus Queues are just like any other queueing technology. We have a queue to which any number of clients can post messages. These messages can be received from the queue later by some process. Transactional delivery, message expiry, and ordered delivery are all built-in features.
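As a minimal sketch of that send/receive pattern, using the classic Microsoft.ServiceBus.Messaging client (the WindowsAzure.ServiceBus NuGet package of that era); the connection string and the orders queue are placeholders:

```csharp
using System;
using Microsoft.ServiceBus.Messaging; // WindowsAzure.ServiceBus NuGet package

public static class QueueExample
{
    public static void Run()
    {
        // Placeholder connection string; the "orders" queue must already exist.
        var connectionString = "Endpoint=sb://mynamespace.servicebus.windows.net/;...";
        var client = QueueClient.CreateFromConnectionString(connectionString, "orders");

        // Any number of clients can post messages to the queue.
        client.Send(new BrokeredMessage("New order: 12345"));

        // A message stays on the queue until a receiver settles it.
        BrokeredMessage received = client.Receive(TimeSpan.FromSeconds(10));
        if (received != null)
        {
            Console.WriteLine(received.GetBody<string>());
            received.Complete(); // complete the message so it is removed
        }
    }
}
```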
I like to call Topics/Subscriptions “smart queues.” We have concepts similar to queues with the addition of message routing logic. That is, within a Topic I can define one or more Subscription(s). Each Subscription is used to identify messages that meet certain conditions and “grab” them. Clients don’t pick up messages from the Topic, but rather from a Subscription within the Topic. A single message can be routed to multiple Subscriptions once published to the Topic.
Sample Service Bus Topic and Subscriptions
If you have a BizTalk Server background, you can essentially think of each Service Bus Topic as a MessageBox database.
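To make the routing concrete, here is a hedged sketch with the same classic client: one message is published to a topic, and a subscription defined with a SqlFilter grabs it; the topic, subscription, and filter expression are all illustrative:

```csharp
using Microsoft.ServiceBus;           // NamespaceManager
using Microsoft.ServiceBus.Messaging; // TopicClient, SubscriptionClient, SqlFilter

public static class TopicExample
{
    public static void Run()
    {
        var connectionString = "Endpoint=sb://mynamespace.servicebus.windows.net/;..."; // placeholder

        // Define the topic and a subscription whose filter "grabs" matching messages.
        var ns = NamespaceManager.CreateFromConnectionString(connectionString);
        if (!ns.TopicExists("orders"))
            ns.CreateTopic("orders");
        if (!ns.SubscriptionExists("orders", "HighValue"))
            ns.CreateSubscription("orders", "HighValue", new SqlFilter("Amount > 1000"));

        // Publish once to the topic; routing happens inside Service Bus.
        var topicClient = TopicClient.CreateFromConnectionString(connectionString, "orders");
        var message = new BrokeredMessage("Order 12345");
        message.Properties["Amount"] = 2500; // the property the SqlFilter evaluates
        topicClient.Send(message);

        // Clients receive from the subscription, not from the topic itself.
        var subClient = SubscriptionClient.CreateFromConnectionString(
            connectionString, "orders", "HighValue");
        var received = subClient.Receive();
        received?.Complete();
    }
}
```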
Interacting with Service Bus is easy to do across a variety of clients using the .NET or REST APIs. With the ability to connect on-premises applications to cloud-based systems and services, or even connect cloud services to each other, Service Bus offered the first real “integration” features to Azure.
Since its release, Service Bus has grown to include other messaging features such as Relays, Event Hubs, and Notification Hubs, but at its heart it has remained the same and continues to provide a rock-solid foundation for exchanging messages between systems in a reliable and programmable way. In June of 2015, Service Bus processed over 1 trillion (1,000,000,000,000) messages!
As integration specialists we know that integration problems are more complex than simply grabbing some data from System A and dumping it in System B.
Message transport is important but it’s not the full story. For us, and the integration applications we build, VETRO (Validate, Enrich, Transform, Route, and Operate) is a way of life. I want to validate my input data. I may need to enrich the data with alternate values or contextual information. I’ll most likely need to transform the data from one format or schema to another. Identifying and routing the message to the correct destination is certainly a requirement. Any integration solution that fails to deliver all of these capabilities probably won’t interest me much.
So, in a world where Service Bus is the only integration tool available to me, do I have VETRO? Not really.
I have a powerful, scalable, reliable, messaging infrastructure that I can use to transport messages, but I cannot transform that data, nor can I manipulate that data in a meaningful way, so I need something more.
I need something that works in conjunction with this messaging engine.
Microsoft’s first attempt at providing a more traditional integration platform that provided VETRO-esque capabilities was Microsoft Azure BizTalk Services (MABS) (to confuse things further, this was originally branded as Windows Azure BizTalk Services, or WABS). You’ll notice that Azure itself has changed its name from Windows Azure to Microsoft Azure, but I digress.
MABS was announced publicly at TechEd 2013.
Despite the name, Microsoft Azure BizTalk Services DOES NOT have a common code-base with Microsoft BizTalk Server (on second thought, perhaps the EDI pieces share some code with BizTalk Server, but that’s about all). In the MABS world we could create itineraries. These itineraries contained connections to source and destination systems (on-premises & cloud) and bridges. Bridges were processing pipelines made up of stages. Each stage could be configured to provide a particular type of VETRO function. For example, the Enrich stage could be used to add properties to the context of the message travelling through the bridge/itinerary.
Complex integration solutions could be built by chaining multiple bridges together using a single itinerary.
MABS was our first real shot at building full integration solutions in the cloud, and it was pretty good, but Microsoft wasn’t fully satisfied, and the industry was changing the approach for service-based architectures. Now we want Microservices (more on that in the next section).
The MABS architecture had some shortcomings of its own. For example, there was little or no ability to incorporate custom components into the bridges, and a lack of connectors to source and destination systems.
Over the past couple of years the trending design architecture has been Microservices. For those of you who aren’t already familiar with it, or don’t want to read pages of theory, it boils down to this:
“Architect the application by applying the Scale Cube (specifically y-axis scaling) and functionally decompose the application into a set of collaborating services. Each service implements a set of narrowly related functions. For example, an application might consist of services such as the order management service, the customer management service etc.
Services communicate using either synchronous protocols such as HTTP/REST or asynchronous protocols such as AMQP.
Services are developed and deployed independently of one another.
Each service has its own database in order to be decoupled from other services. When necessary, consistency between databases is maintained using either database replication mechanisms or application-level events.”
So the shot-callers at Microsoft see this growing trend and want to ensure that the Azure platform is suited to enable this type of application design. At the same time, MABS has been in the wild for just over a year and the team needs to address the issues that exist there. MABS itineraries are deployed as one big chunk of code, and that does not align well to the Microservices way of doing things. Therefore, we need something new but familiar!
Azure App Service is a cloud platform for building powerful web and mobile apps that connect to data anywhere, in the cloud or on-premises. Under the App Service umbrella we have Web Apps, Mobile Apps, API Apps, and Logic Apps.
I don’t want to get into Web and Mobile Apps. I want to get into API Apps and Logic Apps.
API Apps and Logic Apps were publicly unveiled in March of 2015, and are currently still in preview.
API Apps provide capabilities for developing, deploying, publishing, consuming, and managing RESTful web APIs. The simple, less sales-pitch sounding version of that is that I can put RESTful services in the Azure cloud so I can easily use them in other Azure App Service-hosted things, or call the API (you know, since it’s an HTTP service) from anywhere else. Not only is the service hosted in Azure and infinitely scalable, but Azure App Service also provides security and client consumption features.
So, API Apps are HTTP / RESTful services running in the cloud. These API Apps are intended to enable a Microservices architecture. Microsoft offers a bunch of API Apps in Azure App Service already and I have the ability to create my own if I want. Furthermore, to address the integration needs that exist in our application designs, there is a special set of BizTalk API Apps that provide MABS/BizTalk Server style functionality (i.e., VETRO).
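For a feel of what such an API App exposes, here is a minimal ASP.NET Web API 2 controller of the kind one might host there; the route and the order resource are invented for illustration:

```csharp
using System.Web.Http; // ASP.NET Web API 2

// A minimal RESTful endpoint of the sort an API App hosts in Azure App Service.
// Assumes attribute routing is enabled via config.MapHttpAttributeRoutes().
public class OrdersController : ApiController
{
    [HttpGet]
    [Route("api/orders/{id:int}")]
    public IHttpActionResult GetOrder(int id)
    {
        // Look the order up in a real data store here.
        return Ok(new { Id = id, Status = "Shipped" });
    }
}
```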
This is all pretty cool, but I want more. That’s where Logic Apps come in.
Logic Apps are cloud-hosted workflows made up of API Apps. I can use Logic Apps to design workflows that start from a trigger and then execute a series of steps, each invoking an API App whilst the Logic App run-time deals with pesky things like authentication, checkpoints, and durable execution. Plus it has a cool rocket ship logo.
What does all this mean? How can I use these Azure technologies together to build awesome things today?
Service Bus provides an awesome way to get messages from one place to another using either Queues or Topics/Subscriptions.
API Apps are cloud-hosted services that do work for me. For example, hit a SaaS provider or talk to an on-premises system (we call these connectors), transform data, change an XML payload to JSON, etc.
Logic Apps are workflows composed of multiple API Apps. So I can create a composite process from a series of Microservices.
But if I were building an entire integration solution, breaking the process across multiple Logic Apps might make great sense. So I use Service Bus to connect the two workflows to each other in a loosely-coupled way.
Logic Apps and Service Bus working together
And as my integration solution becomes more sophisticated, perhaps I have need for more Logic Apps to manage each “step” in the process. I further use the power of Topics to control the workflow to which a message is delivered.
More Logic Apps and Service Bus Topics provide a sophisticated integration solution
In the purest of integration terms, each Logic App serves as its own VETRO (or subset of VETRO features) component. Decomposing a process into several different Logic Apps and then connecting them to each other using Service Bus gives us the ability to create durable, long-running composite processes that remain loosely-coupled.
Doing VETRO using Service Bus and Logic Apps
Today Microsoft Azure offers the most complete story to date for cloud-based integration, and it’s a story that is only getting better and better. The Azure App Service team and the BizTalk Server team are working together to deliver amazing integration technologies. As an integration specialist, you may have been able to ignore the cloud for the past few years, but in the coming years you won’t be able to get away with it.
We’ve all endeavored to eliminate those nasty data islands. We’ve worked to tear down the walls dividing our systems. Today, a new generation of technologies is emerging to solve the problems of the future. We need people like you, the seasoned integration professional, to help direct the technology, and lead the developers using it.
If any of this has gotten you at all excited to dig in and start building great things, you might want to check out QuickLearn Training’s 5-day instructor-led course detailing how to create complete integration solutions using the technologies discussed in this article. Please come join us in class so we can work together to build magical things.