BizTalk Server: Teach me something new about Flat Files (or not) video and slides are available at Integration Monday

Last Monday I presented, once again, a session in the Integration Monday series. This time the topic was BizTalk Server: Teach me something new about Flat Files (or not). This was the fifth session I have delivered:

And I think it will not be the last! However, this time was different in many aspects, and in a certain way it was a crazy session… Despite having some posts about BizTalk Server: Teach me something new about Flat Files on my blog, I didn't have time to prepare this session (I was sent on a crazy mission for a client, and I also had to organize the integration track at the TUGA IT event). I also had a small problem with my BizTalk Server 2016 machine that forced me to switch to my BizTalk Server 2013 R2 VM, and I was interrupted by the kids in the middle of the session because the girls wanted me to have dinner with them (worthy of being in this series)… But it all ended well, and I think it was a very nice session with two great real case samples:

  • Removing headers from a flat file (CSV) using only the schema (without any custom pipeline component)
  • And removing empty lines from a delimited flat file, again, using only the schema (without any custom pipeline component)

For those who were online, I hope you enjoyed it, and sorry for all the confusion. For those who did not have the chance to be there, the session was recorded and is now available on the Integration Monday website. I hope you like it!

Session Name: BizTalk Server: Teach me something new about Flat Files (or not)

Session Overview: Over the years, new protocols, formats and patterns have emerged, like Web Services, WCF RESTful services, XML, JSON, among others. Nevertheless, the use of text files (Flat Files), such as CSV (Comma Separated Values) or TXT, one of the oldest common patterns for exchanging messages, still remains today one of the most used standards in systems integration and/or communication with business partners.

While tools like Excel can help us interpret such files, this type of process is always interactive and requires a few user hints so that the software can determine where the fields/columns need to be separated, as well as the data type of each field. But for a system integration (Enterprise Application Integration) platform like BizTalk Server, you must remove any ambiguity, so that these kinds of operations can be performed thousands of times with confidence and without recourse to a manual operator.

In this session we will first address how we can easily implement a robust file transfer integration in BizTalk Server (using Content-Based Routing in BizTalk with retries, a backup channel and so on).
And second, how to process flat file documents (TXT, CSV…) in BizTalk Server: What types of flat files are supported? How does the process of transforming text files (also called Flat Files) into XML documents (syntax transformations) work, where does it happen and which components are needed? How can I perform flat file validation?

Integration Monday is full of great sessions that you can watch and I will also take this opportunity to invite you all to join us next Monday.

Author: Sandro Pereira

Sandro Pereira lives in Portugal and works as a consultant at DevScope. In the past years, he has been working on implementing Integration scenarios both on-premises and cloud for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO etc. He is a regular blogger, international speaker, and technical reviewer of several BizTalk books all focused on Integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices”. He has been awarded MVP since 2011 for his contributions to the integration community.

Techorama 2017 Keynote – Recap

Techorama is a yearly international technology conference which takes place at Metropolis, Antwerp. With 1500+ participants attending in person from across the globe, the stage was all set to witness the intelligence of Azure. As one of the thousands of virtual participants, I am happy to document the keynote on developing with the cloud, presented by Scott Guthrie, Executive Vice President of the Cloud and Enterprise Group at Microsoft. The most interesting aspect of this keynote is that Scott built the whole demo around a scenario-driven approach, from the perspective of a common developer. Let me take you through this keynote quickly.

Azure Mobile App

The inception of the cloud inside a mobile! Yes, you heard it right. The Microsoft team has come up with the Azure app for iOS/Android/Windows to manage all your cloud services. You can now easily manage all your cloud functionality from your mobile device.

Azure Mobile App

Integrated Bash Shell Client

The Bash shell is now integrated into the Azure cloud, letting you manage and retrieve all your Azure services by simply typing a command. The Bash shell client opens in a browser pop-up and connects to the cloud without any keys. With this Bash client in place, automation scripts can be executed easily in the future. It also provides CLI documentation for the list of commands/arguments. You can expect a PowerShell client soon!

Azure Cloud Shell

Application Map

The flow between the different cloud services and their status, with all diagnostic logs and charts, is displayed at the dashboard level. In a top-down approach, you can drill down to per-instance tracking based on failure/success/slow-response scenarios, with all diagnostics, stack traces, and the creation of a work item from the failure stack traces. From the admin/operations perspective, this feature is a great value add.

Application Map

Stack trace with Work item creation

Stack Trace with work item creation

Security Center

Managing the security of a cloud system can be a complex task. With the Security Center in place, we can easily manage all the VMs and other cloud services. Machine learning algorithms at the backend fetch all the possible recommendations for an environment or its services.

Security Center

Recommendations

Possible recommendations for virtual machines are provided with the help of machine learning algorithms.

Security Recommendation

Essentials for Mobile success

To deliver a seamless mobile experience to the user, you need an interactive, user-friendly UI, BTD (build, test, deploy automation) and scalability with the cloud infrastructure. These are the essentials for mobile success, and Microsoft, with the Xamarin platform, has nailed it.

Mobile Essentials

A favorite area of mine has gained a much-needed intelligent feature: the Xamarin and VS2017 combo is now making its step into real-time debugging!

You can pair your iPhone or any mobile device with Visual Studio using the Xamarin Live Player, which allows you to perform live debugging. DevOps support for Xamarin has now been extended: you can build, test and deploy to any device connected to the cloud, just like a continuous integration build. Automation of testing and deployment for the mobile framework is the best part. You can get real-time memory usage statistics for your application in a single window. Also, you can now run VS2017 on iOS as well. 🙂

Xamarin Live Player

The mobile features do not stop there. The VS Mobile Center is also integrated here, letting you run a staging test with your friends' community to get feedback on your mobile application before submitting it to any mobile store. Cool, isn't it?

SQL Server 2017

Scott also revealed some features of the upcoming SQL Server 2017, which is capable of running on Linux and Docker, apart from Windows.

The new SQL Server 2017 gets Adaptive Query Processing and advanced machine learning features, and can offer in-memory support for advanced analytics. SQL Server is also capable of seamless failover between on-premises and cloud SQL with no downtime, along with the Azure Database Migration Service.

SQL Server 2017

Azure Database- SQL Injection Alerts

SQL injection may be the most common attack an application faces. As a remedy, Azure SQL Database can now detect SQL injection using machine learning algorithms, and it can send you an alert when an abnormal query gets executed.

Security Alert

Showing the vulnerability in the query

SQL Injection

New Relational Database service

The relational database service is now extended with PostgreSQL as a service and MySQL as a service, which can seamlessly integrate with your application.

Relational Database

Data at Planet Scale: Cosmos DB

"Data at planet scale" could be the right statement to explain Cosmos DB. Azure has come up with a globally distributed, multi-model database service for higher scalability and geographical access. You can easily replicate/mirror/clone the database to any geographical location, based on your user base. To give you an example, you can scale from gigabytes to petabytes of data and from hundreds to millions of transactions, with all metrics in place. And this is what makes the name Cosmos!

Scott also showed us a video on how the online retailer Jet is using Cosmos DB, and a chat bot which runs on Cosmos DB to answer intelligent human queries. With Cosmos DB and the Gremlin API you can retrieve a comprehensive graph analysis of the data. Here, he showed us the Marvel comics characters and the friends graph of Mr. Stark, quite cool!

Replicate Data Globally

Convert existing apps to a container-based microservice architecture

You may wonder how to move your existing application to the Azure container-based architecture, and here is a solution with the support of Docker. In your existing application project, you can easily add Docker support, which runs your application on an ASP.NET image, and from there it can easily plug into the cloud build-deploy-test framework of continuous integration. The simple addition of a Docker metadata file has made DevOps much easier.

Azure stack

There are a lot of case studies which indicate the love for Azure functionality, but enterprises have not been able to use it for tailor-made solutions. Here comes Azure Stack, a private cloud hosting capability for your data center, letting you privatize and use all that cloud expertise on your own ground.

Azure Stack

Conclusion

With more features, including Azure Functions, Service Fabric, etc., being introduced, this gist of the keynote should have given you an overall view of the Intelligent Cloud, with much more to come on the floor: tune in to Techorama on Channel 9 for more updates from the second-day events. With the cloud scaling out with new capabilities, there will never be an application in the future that does not rely on cloud services.

Happy Cloud Engineering!!!

Author: Vignesh Sukumar

Vignesh, a Senior BizTalk Developer @BizTalk360, has crossed half a decade of BizTalk experience. He is passionate about evolving integration technologies. Vignesh has worked on several BizTalk projects involving various integration patterns and has expertise in BAM. His hobbies include training, mentoring and travelling.

Step by step configuration to publish BizTalk operational data on Power BI whitepaper

After some requests from the community, and after publishing it on my blog as a series of blog posts, Step by step configuration to publish BizTalk operational data on Power BI is now available as a whitepaper!

Recently, the Microsoft product team released the first feature pack for BizTalk Server 2016 (only available for the Enterprise and Developer editions). This whitepaper will help you understand how to install and configure one of the new features of BizTalk Server 2016:

  • Leverage operational data: view operational data from anywhere and with any device using Power BI; how it works and how we can configure it.

BizTalk operational data on Power BI whitepaper

What to expect from Step by step configuration to publish BizTalk operational data on Power BI

This whitepaper gives a step-by-step explanation of which components and tools you need to install and configure to enable BizTalk operational data to be published in a Power BI report.

Table of Contents

  1. About the Author
  2. Introduction
  3. What is Operational Data?
  4. System Requirements to Enable BizTalk Server 2016 Operational Data
  5. Step-by-step Configuration to Enable BizTalk Server 2016 Operational Data Feed
    1. First step: Install Microsoft Power BI Desktop
    2. Second step: Enable operational data feed
    3. Third step: Use the BizTalk Server Operational Data Power BI template to publish the report to Power BI
    4. Fourth step: Connect the Power BI BizTalkOperationalData dataset with your on-premises BizTalk environment

Where you can download it

You can download the whitepaper here:

I would also like to take this opportunity to say thanks to my amazing team at BizTalk360 for the proofreading, and for once again joining forces with me to publish another whitepaper for free.

I hope you enjoy reading this paper and any comments or suggestions are welcome.

Author: Sandro Pereira

Sandro Pereira lives in Portugal and works as a consultant at DevScope. In the past years, he has been working on implementing Integration scenarios both on-premises and cloud for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO etc. He is a regular blogger, international speaker, and technical reviewer of several BizTalk books all focused on Integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices”. He has been awarded MVP since 2011 for his contributions to the integration community.

Customer story: BizTalk management through PowerApps

It’s been a month since Feature Pack 1 was released for the BizTalk Server 2016 Enterprise and Developer editions. The feature pack introduced a set of new features and helped customers leverage new technologies, as well as take advantage of tools they already use in their organization, in this case to enable more streamlined management of their BizTalk installation.

One of these customers was FortisAlberta, an energy company located in Alberta, Canada. With the new management REST APIs, which come with full Swagger support, they were able to take parts of the operational management of their environments out of the BizTalk servers and the Administration Console, and build a PowerApp to create and maintain their applications, making 24/7 support of their environment easier for the operational teams.

Having the option to add, update and even start existing artifacts directly from the PowerApp has helped FortisAlberta enhance their productivity and the speed at which they resolve live incidents.

“FortisAlberta has been using BizTalk since 2006 and is currently migrating to BizTalk 2016 due to its versatility, adaptability and ability to integrate disparate systems with ease.”

Anthony See, FortisAlberta

SendPort is not showing in BizTalk360

BizTalk360 v8.4 has now been released to the public, with lots of exciting new features and enhancements. Many of our customers have upgraded to the latest version and started enjoying the new features. We at BizTalk360 support get a lot of queries in the form of tickets: many customers ask about the installation path, a few raise requests for clarification, and others report issues. We categorize the tickets as clarifications, feature requests, and bugs.

Our support team often gets strange issues which do not belong to any of these categories. I am here to explain one such interesting case, and how we identified the root cause and resolved it. As per the quote below,

“The job isn’t to just fix the problem. It is also to restore the customer’s confidence. DO BOTH!” – Shep Hyken

we, the BizTalk360 support team, always work hard to resolve the customers’ issues and achieve customer satisfaction.

The original case stated by the customer

There was a ticket from a customer stating that “Send port is not showing on BT360 Application portal”. In the BizTalk360 console, the artifacts get listed when we navigate to Operations -> Application Support -> Applications. The case was that a send port was not getting listed here, although all the send ports were listed when assigning an alarm for monitoring. There was no issue with the other artifacts; they were all listed properly.

Send port information missing in BizTalk360

Backtracking and Analyzing the case with the Network Response

Generally, when there is any issue related to the UI, we ask for the JSON response from the Network tab in the browser's developer console. By pressing F12, we can open the browser console and check for exceptions in the service calls. In the Network tab of the console, the service calls for each operation are listed, and from these we can get the request headers and the JSON response. This way we can check the exception details and work on them. So, we replied to the ticket asking for the network response, but we did not get the required information from the JSON response. The next step was to set up a web meeting with the customer, involving one of our technical team members, with a screen sharing session.

In the web meeting, we tried different scenarios to check the send ports. We tried the Search Artifacts section, and the send port was listed there without any problem. But we noticed something weird: there were multiple entries for the same send port, each with a different URI configured. This was the first time we had come across such an issue. But could this be the reason the send port was not getting listed? Let's see what was happening.

Discrepancy in the Send Port information

We exported the send port data from the customer and checked it. There were multiple entries for the same send port, but with different transport types and protocols configured. BizTalk Server, however, does not allow us to create send ports with duplicate names, so how could this happen at the customer's end? We investigated further and found that the multiple entries were due to the backup transport configured for the send ports. But this was not the cause of the issue either. What a strange issue! Let's move further with the analysis.

Discrepancy in send port information pattern

Was the DB2 Adapter causing the real problem?

On further analysis of this case, we found that the DB2 adapter was being used in one of the send ports, and it is not a standard BizTalk adapter. The BizTalk Adapter for DB2 is a send and receive adapter that enables BizTalk orchestrations to interact with host systems. Specifically, the adapter enables send and receive operations over TCP/IP and APPC connections to DB2 databases running on mainframe, AS/400 and UDB platforms. Based on Host Integration Server (HIS) technology, the adapter uses the Data Access Library to configure DB2 connections, and the Managed Provider for DB2 to issue SQL commands and stored procedures.

The trace logs also indicated a NULL assignment for the transport type of the send port using this adapter. For a standalone BizTalk360 installation, it is a prerequisite that the BizTalk administration components are installed on the BizTalk360 server. Since the DB2 adapter comes with HIS, we suggested that the customer install HIS on the BizTalk360 server and observe the send port listing. But even after installing HIS, the same issue persisted. We also tried to replicate the scenario by installing HIS with the DB2 adapter on a standalone BizTalk360 server, and created and tested send ports with different combinations of adapters in the transport types. But the issue was not reproducible. So, we concluded that the DB2 adapter was not the real cause of the problem.

The Console App did the trick

Sometimes an issue may seem simple, but identifying its root cause can be very difficult, and for strange issues like this one it becomes extremely difficult when the issue cannot be reproduced. It is also not good to disturb customers too often, since they might be busy. Our next plan was therefore to provide a console app that retrieves the complete details of the configured send ports. This app was quite helpful in finding the root cause. Read on to learn the real cause.

The console app was given to the customer to collect the complete details of the send ports. The app returns the result in JSON format, with all the details such as the name, the configured URI, the transport type, the send handlers, etc. The BizTalk application which contains the send port, and the database details, must be entered in the app to fetch the response.

JSON response with the send port details from the console app

From the screenshot, we can see that the sendHandler for the secondaryTransport does not contain a value for the transport type. This was the cause of the send port not being displayed: it was causing the exception.

Finally, the Backup Transport configuration was the cause

We probed further into why the sendHandler details were not coming up. The backup transport was configured as “None” in the BizTalk admin console for that send port. Even though it was already configured as None, we asked the customer to set it to None once more and save it. This time, the issue was resolved and the send port got listed in the BizTalk360 UI. It might have happened that, when the send port configuration was imported, the backup transport type was set to something other than “None” (the type can be empty or NULL).

If the transport type is anything other than None, the code will generate the send handler and look up the transport type. Since it could not find the transport type, it threw an error. The same issue happened in the production environment as well, and was resolved in the same way.

Configuring the backup transport for a send port in BizTalk Server

Conclusion

When we import a send port configuration, we must make sure that the backup transport type is properly set to None; it should not be NULL or empty. This way we can make sure that all the send ports get listed in the BizTalk360 UI without any problem. We were able to identify this with the help of the console app.

Author: Praveena Jayanarayanan

I am working as a Senior Support Engineer at BizTalk360. I always believe in teamwork leading to success, because “We all cannot do everything or solve every issue. ‘It’s impossible’. However, if we each simply do our part, make our own contribution, regardless of how small we may think it is…. together it adds up and great things get accomplished.”

Microsoft Integration Weekly Update: May 22

Do you find it difficult to keep up to date with all the frequent updates and announcements in the Microsoft Integration platform?

The Integration weekly update can be your solution. It's a weekly update on topics related to Integration: enterprise integration, robust & scalable messaging capabilities and citizen integration capabilities, empowered by the Microsoft platform to deliver value to the business.

If you want to receive these updates weekly, then don’t forget to Subscribe!

On-Premise Integration:

Cloud and Hybrid Integration:

Feedback

Hope this is helpful. Please feel free to let me know your feedback on the Integration weekly series.

Reliably receive SQL data in Logic Apps

Scenario

Let’s discuss the scenario briefly.  We need to consume data from the following table.  All orders with the status New must be processed!

The table can be created with the following SQL statement:
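A minimal sketch of what such a table could look like, assuming an Id, a Description and a Status column (defaulting to New), plus an assumed PeekedOn helper column that will come in handy for the peek-lock pattern later in this post:

CREATE TABLE [dbo].[Orders] (
    [Id]          INT IDENTITY (1, 1) NOT NULL PRIMARY KEY,
    [Description] NVARCHAR (100)      NOT NULL,
    [Status]      NVARCHAR (20)       NOT NULL DEFAULT 'New', -- New / Processed / Peeked / Completed
    [PeekedOn]    DATETIME            NULL                    -- assumed helper column for the peek-lock pattern
);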

First Attempt

Solution

To receive the data, I prefer to create a stored procedure. This avoids maintaining potentially complex SQL queries within your Logic App. The following stored procedure selects the first order with status New and updates its status to Processed within the same statement. Note that it also returns the @@ROWCOUNT, as this will come in handy in the next steps.
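A minimal sketch of what such a procedure could look like, assuming the Orders table sketched above and a hypothetical GetNewOrder name:

CREATE PROCEDURE [dbo].[GetNewOrder]
AS
BEGIN
    -- Select a New order and mark it as Processed in one atomic statement
    -- (TOP (1) without ORDER BY picks an arbitrary qualifying row)
    UPDATE TOP (1) [dbo].[Orders]
    SET [Status] = 'Processed'
    OUTPUT [inserted].[Id], [inserted].[Description]
    WHERE [Status] = 'New';

    -- Expose the number of affected rows as the ReturnCode,
    -- so the Logic App can tell whether an order was retrieved
    RETURN @@ROWCOUNT;
END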

The Logic App fires with a Recurrence trigger.  The stored procedure gets executed and via the ReturnCode we can easily determine whether it returned an order or not.  In case an order is retrieved, its further processing can be performed, which will not be covered in this post.

Evaluation

If you have a BizTalk background, this is a similar approach to using a polling SQL receive location. One very important difference: the BizTalk receive adapter executes the stored procedure within the same distributed transaction in which it persists the data in the MessageBox, whereas Logic Apps is completely built on APIs that have no notion of MSDTC at all.

In failure situations, when the database shuts down or the network connection drops, it could be that the order is already marked as Processed, but never reaches the Logic App. Depending on the returned error code, your Logic App will either end up in a Failed state without a clear description, or retry automatically (for error codes 429 and 5xx). In both situations you're facing data loss, which is not acceptable for our scenario.

Second attempt

Solution

We need to come up with a reliable way of receiving the data. Therefore, I suggest implementing a pattern similar to the Azure Service Bus Peek-Lock. Data is received in two phases:

  1. You mark the data as Peeked, which means it has been assigned to a receiving process
  2. You mark the data as Completed, which means it has been received by the receiving process

Next to these two explicit processing steps, there must be a background task which reprocesses messages that have been in the Peeked status for too long. This makes our solution more resilient.

Let’s create the first stored procedure that marks the order as Peeked.
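A sketch of this first procedure, assuming the PeekNewOrder name that the Logic App expression below refers to, and the assumed PeekedOn column:

CREATE PROCEDURE [dbo].[PeekNewOrder]
AS
BEGIN
    -- Phase 1: assign the first New order to a receiving process
    UPDATE TOP (1) [dbo].[Orders]
    SET [Status] = 'Peeked', [PeekedOn] = GETUTCDATE()
    OUTPUT [inserted].[Id], [inserted].[Description]
    WHERE [Status] = 'New';

    RETURN @@ROWCOUNT;
END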

The second stored procedure accepts the OrderId and marks the order as Completed.
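A sketch of the second procedure, assuming a hypothetical CompleteOrder name:

CREATE PROCEDURE [dbo].[CompleteOrder]
    @OrderId INT
AS
BEGIN
    -- Phase 2: mark the order as received by the receiving process
    UPDATE [dbo].[Orders]
    SET [Status] = 'Completed'
    WHERE [Id] = @OrderId AND [Status] = 'Peeked';

    RETURN @@ROWCOUNT;
END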

The third stored procedure should be executed by a background process, as it sets the status back to New for all orders that have the Peeked status for more than 1 hour.
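A sketch of this third procedure, assuming a hypothetical ReleaseExpiredOrders name; scheduling it via a SQL Agent Job or an hourly Logic App, as mentioned below, makes the locks expire:

CREATE PROCEDURE [dbo].[ReleaseExpiredOrders]
AS
BEGIN
    -- Background task: make orders that have been Peeked for
    -- more than 1 hour available again for processing
    UPDATE [dbo].[Orders]
    SET [Status] = 'New', [PeekedOn] = NULL
    WHERE [Status] = 'Peeked'
      AND [PeekedOn] < DATEADD(HOUR, -1, GETUTCDATE());
END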

Let's now consume the two stored procedures from within our Logic App. First we peek for a new order and, once we have received it, the order gets completed. The OrderId is retrieved via this expression: @body('Execute_PeekNewOrder_stored_procedure')?['ResultSets']['Table1'][0]['Id']

The background task could be executed by a SQL Agent Job (SQL Server only) or by another Logic App that is fired every hour.

Evaluation

Happy with the result? Not 100%! What if something goes wrong during further downstream processing of the order? The only way to reprocess the message is by changing its status in the origin database, which can be quite a cumbersome experience for operators. Why can't we just resume the Logic App in case of an issue?

Third Attempt

Solution

As explained over here, Logic Apps has an extremely powerful mechanism for resubmitting workflows. Because Logic Apps has, at the time of writing, no triggers for SQL Server, a resubmit of the Recurrence trigger is quite useless. Therefore I only want to complete my order when I'm sure that I'll be able to resubmit it if something fails during its further processing. This can be achieved by splitting the Logic App into two separate workflows.

The first Logic App peeks for the order and parses the result into a JSON representation. This JSON is passed to the next Logic App.

The second Logic App gets invoked by the first one. This Logic App first completes the order and afterwards performs the further processing. In case something goes wrong, a resubmit of the second Logic App can be initiated.

Evaluation

Very happy with the result as:

  • The data is received from the SQL table in a reliable fashion
  • The data can be resumed in case further processing fails

Conclusion

Don't forget that every action is HTTP based, which can have an impact on reliability. Consider a two-phased approach for receiving data in case you cannot afford message loss. The same principle can also be applied to receiving files: read the file content in one action and delete the file in another action. Always think upfront about resume/resubmit scenarios. Triggers are better suited for resubmit than actions, so if there are triggers available: always use them!

This may sound like overkill to you, as these considerations will require some additional effort. My advice is to first determine whether your business scenario must cover such edge-case failure situations. If yes, this post can be a starting point for your final solution design.

Liked this post? Feel free to share with others!
Toon

Using IoT Hub for Cloud to Device Messaging

In the previous blog posts of this IoT Hub series, we have seen how we can use IoT Hub to administer our devices, and how to do device to cloud messaging. In this post we will see how we can do cloud to device messaging, something which is much harder without Azure IoT Hub. IoT devices are normally low-power, low-performance devices, like small-footprint and purpose-specific devices. This means they are not meant to (and most often won't be able to) run antivirus applications, firewalls, and other types of protection software. We want to minimize the attack surface they expose, meaning we can't expose any open ports or other means of remoting into them. IoT Hub uses Service Bus technologies to make sure no inbound traffic toward the device is needed; instead it uses per-device topics, allowing us to send commands and messages to our devices without making them vulnerable to attacks.

IoT Hub For Cloud To Device Messaging

Send Message To Device

When we want to send one-way notifications or commands to our devices, we can use cloud to device messages. To do this, we will expand on the EngineManagement application we created in our earlier posts, by adding the following controls, which, in our scenario, will allow us to start the fans of the selected engine.

IoT Hub For Cloud To Device Messaging

To be able to communicate to our devices, we will first implement a ServiceClient in our class.

private readonly ServiceClient serviceClient = ServiceClient.CreateFromConnectionString("HostName=youriothubname.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=yoursharedaccesskey"); 

Next we implement the event handler for the Start Fans button. This type of communication targets a specific device by using the DeviceID from the device twin.

private async void ButtonStartFans_Click(object sender, EventArgs e)
{
    var message = new Microsoft.Azure.Devices.Message();
    message.Properties.Add(new KeyValuePair<string, string>("StartFans", "true"));
    message.Ack = DeliveryAcknowledgement.Full; // Used for getting delivery feedback
    await serviceClient.SendAsync(comboBoxSerialNumber.Text, message);
}

Process Message On Device

Once we have sent our message, we will need to process it on our device. For this, we are going to update the client application of our simulated engine (which we also created in the previous blog posts) by adding the following method.

private static async void ReceiveMessageFromCloud(object sender, DoWorkEventArgs e)
{
    // Continuously wait for messages
    while (true)
    {
        var message = await client.ReceiveAsync();

        // Check if message was received
        if (message == null)
        {
            continue;
        }

        try
        {
            if (message.Properties.ContainsKey("StartFans") && message.Properties["StartFans"] == "true")
            {
                // This would start the fans
                Console.WriteLine("Fans started!");

            }

            await client.CompleteAsync(message);
        }
        catch (Exception)
        {
            // Send to deadletter
            await client.RejectAsync(message);
        }
    }
}

We will run this method in the background, so update the Main method, and insert the following code after the call for updating the firmware.

// Wait for messages in background
var backgroundWorker = new BackgroundWorker();
backgroundWorker.DoWork += ReceiveMessageFromCloud;
backgroundWorker.RunWorkerAsync();

Message Feedback

Although cloud to device messages are a one-way communication style, we can request feedback on the delivery of the message, allowing us to invoke retries or start compensation when the message fails to be delivered. To do this, implement the following method in our EngineManagement backend application.

private async void ReceiveFeedback(object sender, DoWorkEventArgs e)
{
    var feedbackReceiver = serviceClient.GetFeedbackReceiver();
    
    while (true)
    {
        var feedbackBatch = await feedbackReceiver.ReceiveAsync();

        // Check if feedback messages were received
        if (feedbackBatch == null)
        {
            continue;
        }

        // Loop through feedback messages
        foreach(var feedback in feedbackBatch.Records)
        {
            if(feedback.StatusCode != FeedbackStatusCode.Success)
            {
                // Handle compensation here
            }
        }

        await feedbackReceiver.CompleteAsync(feedbackBatch);
    }
}

And add the following code to the constructor.

var backgroundWorker = new BackgroundWorker();
backgroundWorker.DoWork += ReceiveFeedback;
backgroundWorker.RunWorkerAsync();

Call Remote Method

Another feature for sending messages from the cloud to our devices is calling a remote method on the device, which is known as invoking a direct method. This type of communication is used when we want immediate confirmation of the outcome of the command (unlike setting the desired state and communicating back reported properties, which was explained in the previous two blog posts). Let's update the EngineManagement application by adding the following controls, which will allow us to send an alarm message to the engine, sounding the alarm and displaying a message.

IoT Hub For Cloud To Device Messaging

Now add the following event handler for clicking the Send Alarm button.

private async void ButtonSendAlarm_Click(object sender, EventArgs e)
{
    var methodInvocation = new CloudToDeviceMethod("SoundAlarm") { ResponseTimeout = TimeSpan.FromSeconds(300) };
    methodInvocation.SetPayloadJson(JsonConvert.SerializeObject(new { message = textBoxMessage.Text }));

    CloudToDeviceMethodResult response = null;

    try
    {
        response = await serviceClient.InvokeDeviceMethodAsync(comboBoxSerialNumber.Text, methodInvocation);
    }
    catch (IotHubException)
    {
        // Do nothing
    }

    if (response != null && JObject.Parse(response.GetPayloadAsJson()).GetValue("acknowledged").Value<bool>())
    {
        MessageBox.Show("Message was acknowledged.", "Information", MessageBoxButtons.OK, MessageBoxIcon.Information);
    }
    else
    {
        MessageBox.Show("Message was not acknowledged!", "Warning", MessageBoxButtons.OK, MessageBoxIcon.Warning);
    }
}

And in our simulated device, implement the SoundAlarm remote method which is being called.

 
private static Task<MethodResponse> SoundAlarm(MethodRequest methodRequest, object userContext)
{
    // On a real engine this would sound the alarm as well as show the message
    Console.ForegroundColor = ConsoleColor.Red;
    Console.WriteLine($"Alarm sounded with message: {JObject.Parse(methodRequest.DataAsJson).GetValue("message").Value<string>()}! Type yes to acknowledge.");
    Console.ForegroundColor = ConsoleColor.White;
    var response = JsonConvert.SerializeObject(new { acknowledged = Console.ReadLine() == "yes" });
    return Task.FromResult(new MethodResponse(Encoding.UTF8.GetBytes(response), 200));
}

And finally, we need to map the SoundAlarm method to the incoming remote method call. To do this, add the following line in the Main method.

client.SetMethodHandlerAsync("SoundAlarm", SoundAlarm, null);

Call Remote Method On Multiple Devices

When invoking direct methods on devices, we can also use jobs to send the command to multiple devices. We can use our custom tags here to broadcast our message to a specific set of devices.
In this case, we will add a filter on the engine type and manufacturer, so we can, for example, send a message to all main engines manufactured by Caterpillar. In our first blog post, we added these properties as tags on the device twin, so we now use these in our filter. Start by adding the following controls to our EngineManagement application.

IoT Hub For Cloud To Device Messaging

Now add a JobClient to the application, which will be used to broadcast and monitor our messages.

 
private readonly JobClient jobClient = JobClient.CreateFromConnectionString("HostName=youriothubname.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=yoursharedaccesskey");

To broadcast our message, update the event handler for the Send Alarm button to the following.

 
private async void ButtonSendAlarm_Click(object sender, EventArgs e)
{
    var methodInvocation = new CloudToDeviceMethod("SoundAlarm") { ResponseTimeout = TimeSpan.FromSeconds(300) };

    methodInvocation.SetPayloadJson(JsonConvert.SerializeObject(new { message = textBoxMessage.Text }));

    if (checkBoxBroadcast.Checked)
    {
        try
        {
            var jobResponse = await jobClient.ScheduleDeviceMethodAsync(Guid.NewGuid().ToString(), $"tags.engineType = '{comboBoxEngineTypeFilter.Text}' and tags.manufacturer = '{textBoxManufacturerFilter.Text}'", methodInvocation, DateTime.Now, 10);
            
            await MonitorJob(jobResponse.JobId);
        }
        catch (IotHubException)
        {
            // Do nothing
        }
    }
    else
    {
        CloudToDeviceMethodResult response = null;

        try
        {
            response = await serviceClient.InvokeDeviceMethodAsync(comboBoxSerialNumber.Text, methodInvocation);
        }
        catch (IotHubException)
        {
            // Do nothing
        }

        if (response != null && JObject.Parse(response.GetPayloadAsJson()).GetValue("acknowledged").Value<bool>())
        {
            MessageBox.Show("Message was acknowledged.", "Information", MessageBoxButtons.OK, MessageBoxIcon.Information);
        }
        else
        {
            MessageBox.Show("Message was not acknowledged!", "Warning", MessageBoxButtons.OK, MessageBoxIcon.Warning);
        }
    }
}

And finally, add the MonitorJob method with the following implementation.

 
public async Task MonitorJob(string jobId)
{
    JobResponse result;

    do
    {
        result = await jobClient.GetJobAsync(jobId);
        Thread.Sleep(2000);
    }
    while (result.Status != JobStatus.Completed && result.Status != JobStatus.Failed);

    // Check if all devices successful
    if (result.DeviceJobStatistics.FailedCount > 0)
    {
        MessageBox.Show("Not all engines reported success!", "Warning", MessageBoxButtons.OK, MessageBoxIcon.Warning);
    }
    else
    {
        MessageBox.Show("All engines reported success.", "Information", MessageBoxButtons.OK, MessageBoxIcon.Information);
    }
}

Conclusion

By using IoT Hub we have a safe and secure way of communicating from the cloud and our backend to devices out in the field. We have seen how we can use the cloud to device messages in case we want to send one-way messages to our device or use direct methods when we want to be informed of the outcome from our invocation. By using jobs, we can also call out to multiple devices at once, limiting the devices being called by using (custom) properties of the device twin. The code for this post can be found here.

IoT Hub Blog Series

In case you missed the other articles from this IoT Hub series, take a look here.

Blog 1: Device Administration Using Azure IoT Hub
Blog 2: Implementing Device To Cloud Messaging Using IoT Hub
Blog 3: Using IoT Hub for Cloud to Device Messaging

Author: Eldert Grootenboer

Eldert is a Microsoft Integration Architect and Azure MVP from the Netherlands, currently working at Motion10, mainly focused on IoT and on BizTalk Server and Azure integration. He comes from a .NET background, and has been in IT since 2006. He has been working with BizTalk since 2010 and has since expanded into Azure and surrounding technologies as well. Eldert loves working on integration projects, as each project brings new challenges and there is always something new to learn. In his spare time Eldert likes to be active in the integration community and get his hands dirty on new technologies. He can be found on Twitter at @egrootenboer and has a blog at http://blog.eldert.net/.

Microsoft Integrate 2017 Event

Don’t miss the greatest integration event in the world.

Name: Integrate 2017

When: 26-28 June 2017

Where: Kings Place, London

Discount Code: MVPSPEAK2017REF

Link: https://www.biztalk360.com/integrate-2017/

Topics: Microsoft BizTalk, Host Integration Server, Logic Apps, IBM Legacy Integration, and much more

Speakers: Product Group, MVPs (Most Valuable Professionals)

Audience: From all over the world
