Logic Apps sessions on Integrate 2017 Day 2 – Part 1

What a day it was at ‘Integrate 2017’ today. For Logic Apps enthusiasts, it was a treat. Have you missed the sessions? Don’t worry, I am going to write about everything that was discussed today on Logic Apps.

Azure Logic Apps – Microsoft IT journey with Azure Logic Apps – By Divya Swarnkar and Mayank Sharma

Microsoft has a large IT wing, called ‘MSIT’, that serves its business. This team is well known for ‘eating its own dog food’. Mayank and Divya are from MSIT’s integration team. When they started their session by describing the scale of the business their team serves, we were all blown away: around 170 million messages flow through their 175 BizTalk servers, serving 1000-plus trading partners across various business entities.

“We are moving all of this Integration to Logic Apps.”

MSIT is modernizing their integration landscape completely. Divya and Mayank made it very clear that they are moving all the BizTalk interfaces to Logic Apps, and BizTalk will only be used as a proxy to serve existing partner requests. So far they have delivered three releases.

  1. In Release 1.0 they moved most of the interfaces relying on X12 and AS2 to Logic Apps.
  2. In Release 1.5 they moved the EDIFACT-based interfaces to Logic Apps.
  3. In Release 2.0 they moved many of the XML-oriented interfaces.

All these interfaces helped them achieve the following goals.

  • Enable order-to-cash flow for digital supply chain management.
  • Run trade integrations and all customs declaration transactions.
  • Be ready to retire their “Microsoft BizTalk Services” instances by the end of July.

Solution Architecture

They then went on to explain their solution architecture; below is the slide they presented. The following are some of the important aspects of their solution architecture.

  • Azure API Management: All trading partners send their messages (X12/EDIFACT/XML) through Microsoft’s gateway store. The Azure API Management service then routes each message to the appropriate Logic App.
  • Integration Account: The Logic Apps they have built make full use of Integration Account artefacts such as trading partner agreements, certificates, schemas, transformations, etc.
  • On-premises BizTalk: On-premises BizTalk is used merely as a proxy for line-of-business applications. This makes sense, as they may not want to change all the connections that already exist for line-of-business applications, and they also need to support the continuity of other interfaces. This is a perfect example of how other organizations can start migrating their interfaces to Logic Apps.
  • Logic App Flow: The Logic Apps use a typical VETER pipeline involving the AS2 connector, X12 connector, transformation, encoding and HTTP connectors, as shown below.
  • OMS for Diagnostics and Monitoring: Operations Management Suite (OMS) is used to collect diagnostic logs from the Integration Accounts, Logic Apps and Azure Functions that are part of their solution. Once all the diagnostic data is collected, they can query it and build dashboards for analytics on their interfaces. Currently, Integration Accounts have a built-in solution for OMS. Please refer to the video http://www.integrationusergroup.com/business-activity-tracking-monitoring-logic-apps/ to learn more about diagnostic logs in Logic Apps and Integration Accounts.

Fall-back and Production Testing Using APIM

They have scenarios where they want to test Logic Apps in production and also want to fall back to previous stable versions of a Logic App. They use APIM to meet this requirement: APIM is configured with rules to switch between the Logic App endpoints.

Disaster Recovery

Business continuity is very important, especially for MSIT given the scale of messaging they handle. To ensure business continuity, they make use of the disaster recovery feature that comes with the Integration Account.

Disaster recovery is achieved by creating similar copies of the Logic Apps, Integration Accounts and Azure Functions in two different regions. As you can see from the picture, they have this replication in both the Central US and West US regions. Visit the documentation https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-enterprise-integration-b2b-business-continuity to learn more about the disaster recovery feature.

This session was a huge confidence boost for customers who are contemplating moving to Logic Apps.

Azure Logic Apps – Advanced integration patterns – By Jeff Hollan and Derek Li

I am a big fan of Jeff Hollan. When he is on the stage, it is a treat to listen to him; he brings technical talks to life and involves the audience, leaving a lasting impression. Enough said about him. Jeff Hollan and Derek Li took the stage to talk about advanced integration patterns in Logic Apps.

Internals of Logic Apps Platform

Jeff arrived on stage with the clear intention of explaining the internal architecture of the Logic Apps platform. You might wonder why we should know about the internals of Logic Apps at all, since it is a PaaS offering that we generally treat as a black box from the end-user perspective. However, he gave several powerful reasons why we should understand the internals.

  • There are published limits for Logic Apps, and we need to understand them in order to design enterprise-grade solutions.
  • It helps us understand the nature of the workflows.
  • The internals help us clearly understand the impact of design on throughput, especially when we are working with long-running flows.
  • We will be able to leverage the platform as much as possible for concurrency.
  • It helps us understand the structure and behaviour of our data.

Agenda

The agenda covered not just the internal architecture of Logic Apps but also parallel actions, exception handling and workflow expressions.

Logic Apps Designer

The Logic Apps designer is apparently a TypeScript/React application. All the functionality we observe in the designer is self-contained in this application, which is how they are able to host it in Visual Studio as well. The designer uses Swagger to render the inputs and outputs and, as we already know, generates the workflow definition in JSON.

Logic Apps Runtime

As we know, Logic Apps consist of triggers and actions. When we create a Logic App, all of these are defined in a JSON file. When we click the Save button, the Logic Apps runtime handles it as follows.

  • The runtime engine reads the workflow definition, breaks it down into individual tasks and identifies the dependencies between them. A task will not be executed until its dependencies have been satisfied.
  • It spins up distributed workers which coordinate to complete the execution of the tasks. It is revealing to know that all the workers are distributed, which makes Logic Apps more resilient.
  • The runtime engine ensures that every task inside the flow is executed at least once. He mentioned that in the history of Logic Apps he has not seen a single instance where a task was left unexecuted.
  • There is no limit on the number of threads executing these tasks, and hence there is no overhead of managing active threads.

Example logic App

He gave an example of a Logic App with a Service Bus trigger that receives a list of products and writes each product to a SQL database.

His main intention was to show how the runtime identifies the tasks that can be executed in parallel. In this example, a for-each loop lets the runtime spin up parallel tasks to execute the SQL action. The workflow orchestrator then completes the message by calling the Service Bus complete connector and ends the workflow.

Parallel action

Having covered the runtime’s ability to spin up parallel tasks, he showed us how to use a parallel action in a Logic App definition.

From the above picture, it is clear that we can add as many parallel actions as we want simply by clicking the plus symbol on the branches.

Exception handling

At this point, Derek Li took over the stage to show some geeky stuff. He started off by creating a Logic App in which one of the actions fails, and when it fails an email is sent to Jeff. To achieve this he placed the required actions inside a scope and, after the scope, configured the run-after settings for an action. I do not have an exact snapshot of his slide, but it was something like below.

With the run-after configuration on an action, it is easy to handle error conditions; in the workflow definition this corresponds to the runAfter property, which accepts statuses such as Succeeded, Failed, Skipped and TimedOut. He also showed how we can set the timeout configuration for an action.

When the timeout expires, we can take a follow-up action by setting its run-after configuration to “has timed out”.

Workflow expressions

He spoke about important aspects of workflow expressions. Following are the highlights.

  • Any input that changes for every run is an expression. He showed some example expressions.
  • He explained the difference between constructs such as “@”, “{}”, “[]” and “()”.

@ is used to refer to a JSON node, {} denotes a string, [] is used as a JSON path and () encloses the expression to be evaluated; for instance, an expression like @{body('SomeAction')['id']} (with an illustrative action name) combines all four constructs. He also showed the order in which the elements of an expression are evaluated.

Summary

As I said earlier, it was a real treat for all Logic Apps enthusiasts and gave a lot of insight into the Logic Apps platform.

  • The first session, from Mayank and Divya, gave the audience a great deal of confidence about going ahead with Logic Apps implementations.
  • The session from Jeff and Derek brought an understanding of Logic Apps internals and patterns.

Setting unique Tracking Id in BizTalk Logic Apps send port

I was working on a POC which involved sending a message from a BizTalk send port to a Logic App, with the message’s HTTP headers enriched with a unique tracking id. Achieving this was not straightforward. In this article, I will explain the issue I faced and its resolution.

Problem explained

I have a simple Logic App with an HTTP request trigger that dumps the received message into a Google Drive folder.

My BizTalk application has an FTP receive port and a send port configured to use the Logic Apps adapter. The send port subscribes to messages from the FTP receive port and sends them out to the Logic App.

As we are aware, Logic Apps provide an option to send a client tracking id in the form of the custom HTTP header x-ms-client-tracking-id. Refer to the article https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-monitor-your-logic-apps to learn more about monitoring and tracking in Logic Apps.

The static Logic Apps send adapter provides an option to configure custom HTTP headers in the port configuration, as shown below.

Since I want to send a unique tracking id per message, I cannot set a static value in the port configuration. Hence I did what any other BizTalk developer would do: I looked for a property schema specific to the Logic Apps adapter. However, I could not find one in the list of property schemas deployed in my BizTalk environment. This left me with no option to send a unique tracking id per message.

Solution

I started to think about how a dynamic send port manages to send messages to Logic Apps without any Logic Apps adapter property schema being deployed. With a little research, I learned that the Logic Apps adapter internally leverages the WCF WebHttp binding. This directed me towards the WCF property schema.

So I wrote the HttpHeaders context property in a custom pipeline component on the send port.


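// Write the x-ms-client-tracking-id HTTP header through the WCF adapter's HttpHeaders context property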
inMsg.Context.Write("HttpHeaders", "http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties", "x-ms-client-tracking-id: " + trackingId);

This did the trick! Now I was able to view the tracking id in my Logic App’s run history.

However, when I sent another message, I saw that the Google Drive connector failed due to a duplicate file name (I used the tracking id as the file name). This meant that the tracking id I had set was somehow the same for subsequent runs. This was another setback, as I was still not able to send a unique tracking id per message.

Again with a little research, I understood that it is normal behaviour for a static WCF send port to cache the headers set using context properties. One option was to create a dynamic send port. Since I did not want to create a dynamic port, I simply tried setting a context property related to dynamic sends. So I added an additional line to my pipeline component code.

inMsg.Context.Write("HttpHeaders", "http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties", "x-ms-client-tracking-id: " + trackingId);

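// IsDynamicSend forces the adapter to rebuild the WCF headers for every message instead of reusing the cached ones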
inMsg.Context.Write("IsDynamicSend", "http://schemas.microsoft.com/BizTalk/2003/system-properties", true);

This solved the issue! I am now able to send a unique tracking id per message using the Logic Apps adapter on a BizTalk static send port.

Summary

In summary, we need to remember the following points:

  • Logic Apps provide an option to send a client tracking id using the custom HTTP header “x-ms-client-tracking-id”.
  • The Logic Apps send adapter leverages the WCF WebHttp binding behind the scenes.
  • If you want a static Logic Apps send port to send a unique tracking id per message, you need to write two context properties: HttpHeaders and IsDynamicSend.

BizTalk Application Insights in depth – Part 1

In my previous blog, I explained how to install BizTalk Server 2016 Feature Pack 1 and configure it for Application Insights integration. In this article, I want to go a bit deeper and try to demonstrate:

  • The nature of the tracking data sent to Application Insights
  • The structure of the data
  • Querying the data in Application Insights
  • Some practical examples of getting sensible analytics for BizTalk interfaces

I am hoping to give a jump start to anyone who wants to use Application Insights with BizTalk Server 2016.

Tracking data

As you know, the term ‘tracking data’ in BizTalk covers different types of data emitted from different artifacts: in/out events from ports, orchestrations and pipeline components, system context properties, custom properties tracked through custom property schemas, message bodies from various artifacts, events fired from the rules engine, and so on. So we would like to know whether we can get all of this data in Application Insights, or only a subset. I will try to answer this question based on the POC I created.

The POC I created is pretty simple. It has one receive port which receives an order XML file, processes it in an orchestration and sends it to two different send ports. It can be represented pictorially as below.

  • I have enabled pipeline tracking on the XML receive and XML transmit pipelines.
  • Enabled Track Message Bodies and Analytics on the receive port (if you want to know about the Analytics option, please refer to my first article).
  • Enabled Track Events, Track Message Bodies and Analytics on the orchestration.
  • Enabled Analytics on the send ports.

Note: I enabled different levels of tracking on different artifacts to see whether this had an impact on the analytics data sent to Application Insights. I later realized that the different tracking levels do not have any impact on the analytics data.

Analytics Data in Application Insights

I placed a single file into the receive location and started observing the events pushed to Application Insights. In general, applications integrated with Application Insights can send data belonging to various categories, such as traces, customEvents, pageViews, requests, dependencies, exceptions, availabilityResults, customMetrics and browserTimings. With BizTalk, I observed that the data belongs to the customEvents category. The following are the custom events ingested from my BizTalk interface.

  • There are two events for the receive port.
  • There is an event for every logical port inside the orchestration, and hence we can see three events in total for the orchestration.
  • There are two events for each send port.

All these events can be related to the events logged in the “Tracked Events” query results shown below.

Structure of a BizTalk custom event

In the previous section, we saw that our BizTalk interface emitted various custom events for the ports and the orchestration. In this section, we will look at the structure of the data captured in a custom event.

Event Metadata

Event metadata is the set of values that defines an event. The following is the event metadata from one of the custom events.

Custom Dimensions

Custom dimensions consist of the service instance details and the context properties promoted on the message instance. Hence we can observe two different kinds of data under custom dimensions.

Service instance properties: These are the values specific to the service instance associated with the messaging event.

Context properties: All context properties of non-integer types are listed under custom dimensions.

Custom Measurements

As per my observation, custom measurements contain only the context properties of integer type.

Since there is no proper documentation on this, I tried to prove the theory by creating three custom properties in a property schema and promoting the corresponding fields in the incoming message. The following is the property schema that I defined.

I observed that the PartID and AskPrice properties, which are of type string and decimal respectively, were moved to the custom dimensions section, while the Quantity property, which is of type integer, was moved to custom measurements.

Querying data

As discussed in the above section, all the BizTalk events are tracked under the customEvents category. Hence our queries will start with customEvents.
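For instance, a minimal query (just pulling back a handful of recent BizTalk events to get a feel for their shape) could look like the sketch below.

customEvents
| where timestamp > ago(1d)
| take 10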

The query language in Application Insights is very straightforward and yet very powerful. If you want to learn all the constructs of this query language, please refer to this link: Application Insights Analytics Reference.

In this section, I would like to cover some concepts or techniques which are relevant for querying BizTalk events.

Convert the context property values to specific types

In Application Insights, the context property values are stored as dynamic types. When you use them directly in queries, especially in aggregations, you will receive a type-casting error as shown below.

To overcome this error, you will need to convert the context property to a specific type as shown below.
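For example, a query along the following lines (a sketch reusing the Quantity property from my sample property schema) wraps the dynamic value in toint() before aggregating it:

customEvents
| summarize sum(toint(customMeasurements.["Quantity (https_//SampleBizTalkApplication.PropertySchema)"]))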

Easy way to bring a context property key into the query

Since context properties are a combination of a namespace and a property name, it takes a bit of effort to type them into the queries we create. To bring a context property onto the query page easily, follow the steps below.

  • Query for custom events and navigate to the property you are interested in within the results section.
  • When you hover the mouse over the desired property, you will get two buttons: ‘+’ for inclusion and ‘-’ for exclusion, which add the corresponding filter to the query (see the sketch after this list).
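Clicking ‘+’ inserts a where clause with the fully qualified property name into the query; the result is something like the sketch below (the port name value is purely illustrative).

customEvents
| where customDimensions.["PortName (http_//schemas.microsoft.com/BizTalk/2003/messagetracking-properties)"] == "ReceivePort1"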

Selecting specific fields

If you already know the Application Insights query language, this tip is nothing special. But if you are new to it and trying to find out how to select a column, you will face some difficulty, as I did. The main reason is that there is no construct called “select”; instead, you have to use “project”. Below is an example query.
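As a minimal sketch, the following query projects the timestamp, the event name and the PortName context property used in the earlier samples:

customEvents
| project timestamp, name, PortName = tostring(customDimensions.["PortName (http_//schemas.microsoft.com/BizTalk/2003/messagetracking-properties)"])
| take 10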

Some useful sample queries.

In this section, I will try to list some queries which I found useful.

Message count by port names

Query


customEvents
| where customDimensions.Direction == "Receive" 
| summarize count() by tostring(customDimensions.["PortName (http_//schemas.microsoft.com/BizTalk/2003/messagetracking-properties)"])

Chart

Messaging Volume by schema

Query

customEvents
| where customDimensions.Direction == "Receive" 
| summarize count() by tostring(customDimensions.["MessageType (http_//schemas.microsoft.com/BizTalk/2003/system-properties)"])

Chart

Analytics with custom context properties.

The ability to generate analytics reports based on custom promoted properties is a very powerful feature which really makes using Application Insights interesting. As I explained in the previous sections, I created a custom property schema to track the PartID, Quantity and AskPrice fields. Now we will see some example reports based on this.

Total quantity by part id

Query

customEvents
| where customDimensions.PortType == "ReceivePort" 
| where customDimensions.Direction == "Send" 
| summarize sum(toint(customMeasurements.["Quantity (https_//SampleBizTalkApplication.PropertySchema)"])) by PartId = tostring(customDimensions.["PartID (https_//SampleBizTalkApplication.PropertySchema)"])

Chart

Total sales over period of time

Query

customEvents
| where customDimensions.PortType == "ReceivePort" 
| where customDimensions.Direction == "Send" 
| summarize sum(todouble(customDimensions.["AskPrice (https_//SampleBizTalkApplication.PropertySchema)"])) by bin(timestamp, 10m)

Chart

Pinning charts to Azure dashboard

All the charts you have created can be pinned to an Azure dashboard, and you can combine them with charts from other applications as well. My dashboard with the charts we created looks as below.

Summary

In summary, the BizTalk analytics option introduced in BizTalk Server 2016 Feature Pack 1 is useful for getting analytics out of tracking data. I would like to conclude with the following points.

  • Only the tracked messaging events, service instance information and context properties of the associated service instance are sent to Application Insights by the analytics feature. Message bodies, pipeline events, business rules engine events, etc. are not pushed out.
  • Under messaging events, I was unable to find the transmission failure events for send ports. These would be useful for getting metrics on failure rates. If you agree with this observation, please vote here.
  • The different levels of tracking on ports and orchestrations do not have an impact on the data transmitted to Application Insights.
  • Orchestration failure/suspend events are not pushed to Application Insights. It would be good if Microsoft provided an extensible feature to push exceptions from orchestrations. If you agree, please vote here.
  • There is no control over which context properties are published to Application Insights; it is an all-or-nothing scenario. It would be good to have control over this, especially when you are promoting and tracking business data. If you agree, please vote here.
  • The ability to perform analytics based on context property values can turn out to be a powerful feature for BizTalk implementations.
Author: Srinivasa Mahendrakar

Technical Lead at BizTalk360 UK – I am an Integration consultant with more than 11 years of experience in design and development of On-premises and Cloud based EAI and B2B solutions using Microsoft Technologies.