Reliably receive SQL data in Logic Apps

Scenario

Let’s discuss the scenario briefly.  We need to consume data from the following table.  All orders with the status New must be processed!

The table can be created with the following SQL statement:
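As a sketch, such a table could look like this; apart from Id and Status, the column names are assumptions (the PeekedOn column only becomes relevant in the second attempt):

    -- Illustrative table definition; adjust names and types to your own scenario
    CREATE TABLE [dbo].[Orders]
    (
        [Id]       INT IDENTITY (1, 1) PRIMARY KEY,
        [Content]  NVARCHAR (MAX) NOT NULL,          -- assumed payload column
        [Status]   NVARCHAR (50)  NOT NULL DEFAULT 'New',
        [PeekedOn] DATETIME       NULL               -- assumed; used in the second attempt
    );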

First Attempt

Solution

To receive the data, I prefer to create a stored procedure. This avoids maintaining potentially complex SQL queries within your Logic App. The following stored procedure selects the first order with status New and updates its status to Processed in the same statement. Note that it also returns @@ROWCOUNT, which will come in handy in the next steps.
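A minimal sketch of such a stored procedure, assuming the table above and leaving out ordering and locking hints:

    CREATE PROCEDURE [dbo].[GetNewOrder]
    AS
    BEGIN
        -- Select one New order and mark it as Processed in a single atomic statement
        UPDATE TOP (1) [dbo].[Orders]
        SET [Status] = 'Processed'
        OUTPUT inserted.[Id], inserted.[Content]
        WHERE [Status] = 'New';

        -- The row count (0 or 1) surfaces as the ReturnCode in the Logic Apps SQL connector
        RETURN @@ROWCOUNT;
    END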

The Logic App fires with a Recurrence trigger.  The stored procedure gets executed and via the ReturnCode we can easily determine whether it returned an order or not.  In case an order is retrieved, its further processing can be performed, which will not be covered in this post.

Evaluation

If you have a BizTalk background, this is similar to using a polling SQL receive location. One very important difference: the BizTalk receive adapter executes the stored procedure within the same distributed transaction in which it persists the data in the MessageBox, whereas Logic Apps is completely built on APIs that have no notion of MSDTC at all.

In failure situations, when a database shuts down or the network connection drops, it could be that the order is already marked as Processed, but never reaches the Logic App. Depending on the returned error code, your Logic App will end up in a Failed state without a clear description, or the Logic App will retry automatically (for error codes 429 and 5xx). In both situations you’re facing data loss, which is not acceptable for our scenario.

Second attempt

Solution

We need to come up with a reliable way of receiving the data. Therefore, I suggest implementing a pattern similar to the Azure Service Bus peek-lock. Data is received in two phases:

  1. You mark the data as Peeked, which means it has been assigned to a receiving process
  2. You mark the data as Completed, which means it has been received by the receiving process

Next to these two explicit processing steps, there must be a background task that reprocesses messages that have had the Peeked status for too long. This makes our solution more resilient.

Let’s create the first stored procedure that marks the order as Peeked.
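A sketch of this PeekNewOrder procedure, following the same assumptions:

    CREATE PROCEDURE [dbo].[PeekNewOrder]
    AS
    BEGIN
        -- Assign one New order to the receiving process by marking it as Peeked
        UPDATE TOP (1) [dbo].[Orders]
        SET [Status] = 'Peeked',
            [PeekedOn] = GETUTCDATE()   -- assumed column, used to detect expired peek locks
        OUTPUT inserted.[Id], inserted.[Content]
        WHERE [Status] = 'New';

        RETURN @@ROWCOUNT;
    END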

The second stored procedure accepts the OrderId and marks the order as Completed.
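Sketched along the same lines (CompleteOrder is an assumed name):

    CREATE PROCEDURE [dbo].[CompleteOrder]
        @OrderId INT
    AS
    BEGIN
        -- Confirm that the order has been received by the Logic App
        UPDATE [dbo].[Orders]
        SET [Status] = 'Completed'
        WHERE [Id] = @OrderId;
    END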

The third stored procedure should be executed by a background process, as it sets the status back to New for all orders that have the Peeked status for more than 1 hour.
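A sketch of this clean-up procedure, relying on the assumed PeekedOn column:

    CREATE PROCEDURE [dbo].[UnlockExpiredOrders]
    AS
    BEGIN
        -- Make orders that were peeked more than 1 hour ago available again
        UPDATE [dbo].[Orders]
        SET [Status] = 'New',
            [PeekedOn] = NULL
        WHERE [Status] = 'Peeked'
          AND [PeekedOn] < DATEADD(HOUR, -1, GETUTCDATE());
    END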

Let’s now consume the two stored procedures from within our Logic App. First we Peek for a new order and, once we have received it, the order gets Completed. The OrderId is retrieved via this expression: @body('Execute_PeekNewOrder_stored_procedure')?['ResultSets']['Table1'][0]['Id']

The background task could be executed by a SQL Agent Job (SQL Server only) or by another Logic App that is fired every hour.

Evaluation

Happy with the result? Not 100%! What if something goes wrong during further downstream processing of the order? The only way to reprocess the message is by changing its status in the origin database, which can be quite a cumbersome experience for operators. Why can’t we just resume the Logic App in case of an issue?

Third Attempt

Solution

As explained over here, Logic Apps has an extremely powerful mechanism for resubmitting workflows. Because Logic Apps has, at the time of writing, no triggers for SQL Server, a resubmit of the Recurrence trigger is quite useless. Therefore I only want to complete my order when I’m sure that I’ll be able to resubmit it if something fails during its further processing. This can be achieved by splitting the Logic App into two separate workflows.

The first Logic App peeks for the order and parses the result into a JSON representation. This JSON is passed to the next Logic App.

The second Logic App gets invoked by the first one. This Logic App completes the order first and afterwards performs the further processing. In case something goes wrong, a resubmit of the second Logic App can be initiated.

Evaluation

Very happy with the result as:

  • The data is received from the SQL table in a reliable fashion
  • The data can be resumed in case further processing fails

Conclusion

Don’t forget that every action is HTTP based, which can have an impact on reliability. Consider a two-phased approach for receiving data in case you cannot afford message loss. The same principle can also be applied to receiving files: read the file content in one action and delete the file in another action. Always think upfront about resume / resubmit scenarios. Triggers are better suited for resubmit than actions, so if there are triggers available: always use them!

This may sound like overkill to you, as these considerations will require some additional effort. My advice is to first determine whether your business scenario must cover such edge case failure situations. If yes, this post can be a starting point for your final solution design.

Liked this post? Feel free to share with others!
Toon

BTS 2016 Feature Pack I – Continuous Deployment

In case you are interested in a detailed walk-through on how to set up continuous deployment, please check out this blog post on Continuous Deployment in BizTalk 2016, Feature Pack 1.

What is included?

Below, you can find a bullet point list of features included in this release.

  • An application version has been added and can be easily specified.
  • Automated deployment from VSTS, using a local deploy agent.
  • Automated deployment of schemas, maps, pipelines and orchestrations.
  • Automated import of multiple binding files.
  • Binding file management through VSTS environment variables.
  • Update of specific assemblies in an existing BizTalk application (with downtime)

What is not included?

This is a list of features that are currently not supported by the new VSTS release task:

  • Build BizTalk projects in VSTS hosted build servers.
  • Deployment to a remote BizTalk server (local deploy agent required)
  • Deployment to a multi-server BizTalk environment.
  • Deployment of shared artifacts (e.g. a schema that is used by several maps)
  • Deployment of more advanced artifacts: BAM, BRE, ESB Toolkit…
  • Control of which host instances / ports / orchestrations should be (re)started
  • Undeploy a specific BizTalk application, without redeploying it again.
  • Use the deployment task in TFS 2015 Update 2+ (no download supported)
  • Execute the deployment without the dependency of VSTS.

Conclusion!

Microsoft released this VSTS continuous deployment service into the wild, clearly stating that this is a first step in the BizTalk ALM story. That sounds very promising to me, as we can expect more functionality to be added in future feature packs!

After intensively testing the solution, I must conclude that there is a stable and solid foundation to build upon. I really like the design and how it is integrated with VSTS. This foundation can now be extended with the missing pieces, so we end up with great release management!

At the moment, this functionality can be used by BizTalk Server 2016 Enterprise customers that have a single server environment and only use the basic BizTalk artifacts. Other customers should still rely on the incredibly powerful BizTalk Deployment Framework (BTDF), until the next BizTalk Feature Pack release. At that moment in time, we can re-evaluate! I’m quite confident that we’re heading in the right direction!

Looking forward to more on this topic!

Toon

BTS 2016 Feature Pack I – Management & Operational API

The documentation of the Management API can be found here. In short: almost everything you can access in the BizTalk Administration Console is now available in the BizTalk Management API. The API is very well documented with Swagger, so it’s pretty much self-explanatory.

What is included?

A complete list of available operations can be found here.

Deployment

There are new opportunities on the deployment side. Here are some ideas that popped into my mind:

  • Dynamically create ports. Some messaging solutions are very generic. Adding new parties is sometimes just a matter of creating a new set of receive and send ports. This can now be done through this Management API, so you don’t need to do the plumbing yourself anymore.
  • Update tracking settings. We all know it’s quite difficult to keep your tracking settings consistent across all applications and binding files. The REST API can now be leveraged to change the tracking settings on the fly to their desired state.

Runtime

Also the runtime processing might benefit from this new functionality. Some scenarios:

  • Start and stop processes on demand. In situations where the business wants to control when certain processes should be active, you can start/stop receive/send ports on demand. Just put a small UI on top of the Management API, including the appropriate security measures, and you’re good to go!
  • Maintenance windows. BizTalk is in the middle of your application landscape. Deployments on backend applications can have a serious impact on running integrations. That’s why stopping certain ports during maintenance windows is a good approach. This can now be easily automated or controlled by non-BizTalk experts.

Monitoring

Most new opportunities reside on the monitoring side. A couple of potential use cases:

  • Simplified and short-lived BAM. It’s possible to create some simple reports with basic statistics of your BizTalk environment. You can leverage the Management API or the Operational OData Service. You can easily visualize the number of messages per port and for example the number of suspended instances. All of this is built on top of the data in your MessageBox and DTA database, so there’s no long term reporting out-of-the-box.
  • Troubleshooting. There are very easy-to-use operations available to get a list of service instances with a specific status. In that way, you can easily create a dashboard that gives an overview of all instances that require intervention. Suspended instances can be resumed and terminated through the Management API, without the need to access your BizTalk Server.

This is an example of the basic Power BI reports that are shipped with this feature pack.

What is not included?

This brand new BizTalk Management API is quite complete; I’m very excited about the result! As always, I looked at it with a critical mindset and tried to identify missing elements that would add even more value. Here are some aspects that are currently not exposed by the API, but would be handy in future releases:

  • Host Instances: it would be great to have the opportunity to also check the state of the host instances and to even start / stop / restart them. Currently, only a GET operation on the hosts is available.
  • Tracked Context Properties: I’m quite fond of these, as they enable you to search for particular message events, based on functional search criteria (e.g. OrderId, Domain…). Would be a nice addition to this API!
  • Real deployment: first I thought that the new deployment feature was built on top of this API, but that was wrong. The API exposes functionality to create and manage ports, but no real option to update / deploy a schema, pipeline, orchestration or map. It could be nice to have, but on the other hand, we have a new deployment feature that we need to take advantage of!
  • Business Activity Monitoring: I really like the idea of the Operational OData Service, which smoothly integrates with Power BI. It would be great to have a similar and generic approach for BAM, so we can easily consume the business data without creating custom dashboards. The old BAM portal is really not an option anymore nowadays. You can vote here.

Conclusion!

Very happy to see more commitment from Microsoft towards BizTalk Server. This emphasises their “better together” integration vision on BizTalk Server and Logic Apps! Check out the BizTalk User Voice page if you want to influence the BizTalk roadmap!

The exposure of BizTalk as a REST API opens up a new range of great opportunities. Don’t forget to apply the required security measures when exposing this API! By introducing this API, the need for auditing all activity becomes even more important!

Thanks BizTalk for this great addition! Thank you for reading!

Cheers,
Toon

BTS 2016 Feature Pack I – Scheduling capabilities

The documentation of this scheduling feature can be found on MSDN.

What is included?

Support for time zones

The times provided within the schedule tab of receive locations are now accompanied by a time zone. This ensures your solution no longer depends on the local computer settings. There’s also a checkbox to automatically adjust for daylight saving time.

This is a small, but handy addition to the product! It avoids unpleasant surprises when rolling out your BizTalk solutions throughout multiple environments or even multiple customers!

Service window recurrence

The configuration of service windows is now a lot more advanced. You have multiple recurrence options available:

  • Daily: used to run the receive location every x number of days
  • Weekly: used to run the receive location on specific days of the week
  • Monthly: used to run the receive location on specific dates or specific days of the month

Until now, I didn’t use the service window that much. These new capabilities enable some new scenarios. As an example, this comes in handy to schedule the release of batch messages at a specific time of the day, which is often required in EDI scenarios!

What is not included?

This is not a replacement for the BizTalk Scheduled Task Adapter, which is a great community adapter! There is a fundamental difference between an advanced service window configuration and the Scheduled Task Adapter. A service window configures the time during which a receive location is active, whereas the Scheduled Task Adapter executes a pre-defined task on the configured recurrence cadence.

For the following scenarios, we still need the Scheduled Task Adapter:

  • Send a specific message every x seconds / minutes.
  • Trigger a process every x seconds / minutes.
  • Poll a REST endpoint every x seconds / minutes. Read more about it here.

Conclusion!

Very happy to see more commitment from Microsoft towards BizTalk Server. This emphasises their “better together” integration vision on BizTalk Server and Logic Apps! Check out the BizTalk User Voice page if you want to influence the BizTalk roadmap!

These new scheduling capabilities are a nice addition to BizTalk’s toolbelt! In future feature packs, I hope to see similar capabilities as the Scheduled Task Adapter. Many customers are still reluctant to use community adapters, so a supported adapter would be very nice! You can vote here!

Thanks for reading!
Toon

BTS 2016 Feature Pack I – Continuous Deployment Walk-Through

Introduction

I’ve created this walkthrough mainly because I had difficulties fully understanding how it works. The documentation does not seem 100% complete and some blog posts I’ve read created some confusion for me. This is a high-level overview of how it works:

  1. The developer must configure what assemblies and bindings should be part of the BizTalk application. Also, the order of deployment must be specified. This is done in the new BizTalk Application Project.
  2. The developer must check-in the BizTalk projects, including the configured BizTalk Application Project. Also, the required binding files must be added to the chosen source control system.
  3. A build is triggered (automatically or manually). A local build agent compiles the code. By building the BizTalk Application Project, a deployment package (.zip) is automatically generated with all required assemblies and bindings. This deployment package (.zip) is published to the drop folder.
  4. After the build completed, the release can be triggered (automatically or manually). A local deploy agent, installed on the BizTalk server, takes the deployment package (.zip) from the build’s drop folder and performs the deployment, based on the configurations done in step 1. Placeholders in the binding files are replaced by VSTS environment variables.

Some advice:

  • Make a clear distinction between build and release pipelines!
  • Do not create and check-in the deployment package (.zip) yourself!

You can follow the steps below to set up full continuous deployment of BizTalk applications. Make sure you check the prerequisites documented over here.

Create a build agent

As VSTS does not support building BizTalk projects out-of-the-box, we need to create a local build agent that performs the job.

Create Personal Access Token

For the build agent to authenticate, a Personal Access Token is required.

  • Browse to your VSTS home page. In my case this is https://toonvanhoutte.visualstudio.com
  • Click on the profile icon and select Security.

 

  • Select Personal access tokens and click Add

 

  • Provide a meaningful name, expiration time and select the appropriate account. Ensure you allow access to Agent Pools (read, manage).

 

  • Click Create Token

 

  • Ensure you copy the generated access token, as we will need this later.

Install local build agent

The build agent should be installed on the server that has Visual Studio, the BizTalk Project Build Component and BizTalk Developer Tools installed.

  • Select the Settings icon and choose Agent queues.

  • Select the Default agent queue. As an alternative, you could also create a new queue.

  • Click on Download agent
  • Click Download. Remark that the required PowerShell scripts to install the agent are provided.

  • Open PowerShell as administrator on the build server.
    Run the following command to unzip and launch the installation:
    mkdir agent ; cd agent
    Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory("$HOME\Downloads\vsts-agent-win7-x64-2.115.0.zip", "$PWD")

  • Execute this command to launch the configuration:
    .\config.cmd
  • Provide the requested information:
    > Server URL: https://toonvanhoutte.visualstudio.com
    > Authentication: PAT
    > PAT: The personal access token copied in the previous step

 

  • Press enter for default pool
  • Press enter for default name
  • Press enter for default work folder
  • Provide Y to run as a service
  • Provide user
  • Provide password

  • Double check that the local build service is created and running.

  • If everything went fine, you should see the build agent online!

Create a build definition

Let’s now create and configure the required build definition.

  • In the Builds tab, click on New to create a new build definition.

  • Select Visual Studio to start with a pre-configured build definition. Click Next to continue.

  • Select your Team Project as the source, enable continuous integration, select the Default queue agent and click Create.

  • Delete the following build steps, so the build pipeline looks like this:
    > NuGet Installer
    > Visual Studio Test
    > Publish Symbols

  • Configure the Visual Studio Build step. Select the BizTalk solution that contains all required artifacts. Make sure Visual Studio 2015 is picked and verify that MSBuild architecture is set to MSBuild x86.

  • The other build steps can remain as-is. Click Save.

  • Provide a clear name for the build definition and click OK.

  • Queue a new build.

  •  Confirm with OK.

  • Hopefully your build finishes successfully. Solve any potential issues in case the build fails.

Configure BizTalk Application

In this chapter, we need to create and configure the definition of our BizTalk application. The BizTalk Server 2016 Feature Pack 1 introduces a new BizTalk project type: BizTalk Server Application Project. Let’s have a look how we can use this to kick off an automated deployment.

  • On your solution, click Add, Add New Project.
  • Ensure you select .NET Framework 4.6.1 and you are in the BizTalk Projects tab. Choose BizTalk Server Application Project and provide a descriptive name.

  • Add references to all projects that need to be included in this BizTalk application and click OK.

  • Add all required binding files to the project. Make sure that every binding file has Copy to Output Directory set to Copy Always. This way, the bindings will be included in the generated deployment package (.zip).

  • In case you want to replace environment specific settings in your binding file, such as connection strings and passwords, you must add placeholders with the $(placeholder) notation.

  • Open the BizTalkServerInventory.json file and configure the following items:
    > Name and path of all assemblies that must be deployed in the BizTalk application
    > Name and path of all binding files that must be imported into the BizTalk application
    > The deployment sequence of assemblies to be deployed and bindings to be imported.
  • Right click on the BizTalk Application Project and choose Properties. Here you can specify the desired version of the BizTalk Application. Be aware that this version can differ, depending on whether you’re building in debug or release mode. Click OK to save the changes.

 

  • Build the application project locally and fix any errors that might occur. If the build succeeds, you should see a deployment package (.zip) in the bin folder. This package will be used to deploy the BizTalk application.

  • Check-in the new BizTalk Application Project. This should automatically trigger a new build. Verify that the deployment package (.zip) is also available in the drop folder of the build. This can be done by navigating to the Artifacts tab and clicking on Explore.

  • You should see the deployment package (.zip) in the bin folder of the BizTalk Application Project.

Create a release definition

We’ve created a successful build that generates the required deployment package (.zip). Now it’s time to configure a release pipeline that takes this deployment package as input and deploys it automatically on our BizTalk Server.

  • Navigate to the Releases tab and click Create release definition.

  • Select Empty to start with an empty release definition and click Next to continue.

  • Choose Build as the source for the release, as the build output contains the deployment package (.zip). Make sure you select the correct build definition. If you want to setup continuous deployment, make sure you check the option. Click Create to continue.

  • Change the name of the Release to a more meaningful name.

  • Change the name of the Environment to a more meaningful name.

  • Click on the “…” icon and choose Configure variables.

  • Add an environment variable, named Environment. This will ensure that every occurrence of $(Environment) in your binding file, will be replaced with the configured value (DEV). Click OK to confirm.

  • Click Add Tasks to add a new task. In the Deploy tab, click Add next to the BizTalk Server Application Deployment task. Click Close to continue.

  • Provide the Application Name in the task properties.

  • For the Deployment package path, navigate to the deployment package (.zip) that is in the drop folder of the linked build artefact. Click OK to confirm.

  • Specify, in the Advanced Options, the applications to reference, if any.

  • Select Run on agent and select the previously created agent queue to perform the deployment. In a real scenario, this will need to be a deployment agent per environment.

  • Save the release definition and provide a comment to confirm.

Test continuous deployment

  • Now trigger a release by selecting Create Release.

  • Keep the default settings and click Create.

  • In the release logs, you can see all details. The BizTalk deployment task has very good log statements, so in case of an issue you can easily pinpoint the problem. Hopefully you encounter a successful deployment!

  • On the BizTalk Server, you’ll notice that the BizTalk application has been created and started. Notice that the application version is applied and the application references are created!

 

In case you selected the continuous integration options, there will now be an automated deployment each time you check in a change in source control. Continuous deployment has been set up!

Wrap-up

Hope you’ve enjoyed this detailed, but basic walkthrough. For real scenarios, I highly encourage you to extend this continuous integration approach with:

  • Automated unit testing and optional integration testing
  • Automated versioning of the assembly files
  • Include the version dynamically in the build and release names

Cheers,
Toon

Logic Apps Batching

Scenario

For this blog post, I decided to try to batch the following XML message. As Logic Apps supports JSON natively, we can assume that a similar setup will work quite easily for JSON messages. Note that the XML snippet below contains an XML declaration, so pure string appending won’t work. Namespaces are included as well.

Requirements

I came up with the following requirements for my batching solution:

  • External message store: in integration I like to avoid long-running workflow instances at all times. Therefore I prefer messages to be stored somewhere out of the process, waiting to be batched, instead of keeping them active in a singleton workflow instance (e.g. a BizTalk sequential convoy).
  • Message and metadata together: I want to avoid storing the message in one place and its metadata in another. Keeping them together simplifies development and maintenance.
  • Native Logic Apps integration: preferably I can leverage an Azure service that has native and smooth integration with Azure Logic Apps. It must ensure we can reliably assign messages to a specific batch and we must be able to remove them easily from the message store.
  • Multiple batch release triggers: I want to support multiple ways to decide when a batch can be released.
    > # Messages: send out batches containing X messages each
    > Time: send out a batch at a specific time of the day
    > External Trigger: release the batch when an external trigger is received

Solution

After some analysis, I was convinced that Azure Service Bus queues are a good fit:

  • External message store: the messages can be queued for a long time in an Azure Service Bus queue.
  • Message and metadata together: the message is placed together with its properties on the queue. Each batch configuration can have its own queue assigned.
  • Native Logic Apps integration: there is a Service Bus connector to receive multiple messages inside one Logic App instance. With the peek-lock pattern, you can reliably assign messages to a batch and remove them from the queue.
  • Multiple batch release triggers:
    > # Messages: In the Service Bus connector, you can choose how many messages you want to receive in one Logic App instance

    > Time: Service Bus has a great property ScheduledEnqueueTimeUtc, which ensures that a message only becomes visible on the queue from a specific moment in time. This is a great way to schedule messages to be released at a specific time, without the need for an external scheduler.

    > External Trigger: The Logic App can easily be instantiated via the native HTTP Request trigger

 

Implementation

Batching Store

The goal of this workflow is to put the message on a specific queue for batching purposes. This Logic App is very straightforward to implement. Add a Request trigger to receive the messages that need to be batched and use the Send Message Service Bus connector to send the message to a specific queue.

In case you want to release the batch only at a specific moment in time, you must provide a value for the ScheduledEnqueueTimeUtc property in the advanced settings.

Batching Release

This is the more complex part of the solution. The first challenge is to receive, for example, 3 messages in one Logic App instance. My first attempt failed, because the Service Bus receive trigger and the receive action apparently behave differently:

  • When one or more messages arrive in a queue: this trigger receives messages in a batch from a Service Bus queue, but it creates a separate Logic App instance for every message. This is not desired for our scenario, but can be very useful in high throughput scenarios.
  • Get messages from a queue: this action can receive multiple messages in batch from a Service Bus queue. This results in an array of Service Bus messages, inside one Logic App instance. This is the result that we want for this batching exercise!

Let’s use the peek-lock pattern to ensure reliability and receive 3 messages in one batch:

As a result, we get this JSON array back from the Service Bus connector:

The challenge is to parse this array, decode the base64 content in the ContentData property and create a valid XML batch message from it. I tried several complex Logic App expressions, but soon realized that Azure Functions is better suited to take care of this complicated parsing. I created the following Azure Function, as a Generic Webhook C# type:
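As a rough sketch, the GetBatchMessage function (a Generic Webhook C# function, Functions v1 script style) could look like the code below; the batch root element, its namespace and the exact shape of the incoming JSON are assumptions:

    #r "Newtonsoft.Json"
    #r "System.Xml.Linq"

    using System;
    using System.Net;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;
    using System.Xml.Linq;
    using Microsoft.Azure.WebJobs.Host;
    using Newtonsoft.Json.Linq;

    public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
    {
        // The Logic App posts the array returned by the 'Get messages from a queue' action
        string jsonContent = await req.Content.ReadAsStringAsync();
        JArray messages = JArray.Parse(jsonContent);

        // Assumed batch root element and namespace
        XNamespace ns = "http://namespace";
        XElement batch = new XElement(ns + "Orders");

        foreach (JToken message in messages)
        {
            // The Service Bus connector returns the payload base64 encoded in ContentData
            byte[] rawContent = Convert.FromBase64String((string)message["ContentData"]);
            string xmlContent = Encoding.UTF8.GetString(rawContent);

            // XDocument.Parse copes with the XML declaration; only the root element is added to the batch
            batch.Add(XDocument.Parse(xmlContent).Root);
        }

        var response = req.CreateResponse(HttpStatusCode.OK);
        response.Content = new StringContent(batch.ToString(), Encoding.UTF8, "application/xml");
        return response;
    }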

Let’s consume this function now from within our Logic App.  There is seamless integration with Logic Apps, which is really great!


As an output of the GetBatchMessage Azure Function, I get the following XML 🙂

Large Messages

This solution is very nice, but what about large messages? Recently, I wrote a Service Bus connector that uses the claim check pattern, which exchanges large payloads via Blob Storage. In this batching scenario we can also leverage this functionality. Once I have open-sourced this project, I’ll update this blog with a working example. Stay tuned for more!

Conclusion

This is a great and flexible way to perform batching within Logic Apps. It really demonstrates the power of the Better Together story with Azure Logic Apps, Service Bus and Functions. I’m sure this is not the only way to perform batching in Logic Apps, so do not hesitate to share your solution for this common integration challenge in the comments section below!

I hope this gave you some fresh insights in the capabilities of Azure Logic Apps!
Toon

Logic Apps Debatching

SplitOn Command

Logic Apps offers the splitOn command, which can only be added to the trigger of a Logic App. In this splitOn command, you provide an expression that results in an array. For each item in that array, a new instance of the Logic App is fired.

Debatching JSON Messages

Logic Apps are completely built on APIs, so they natively support JSON messages. Let’s have a look at how we can debatch the JSON message below, by leveraging the splitOn command.

Create a new Logic App and add the Request trigger. In the code view, add the splitOn command to the trigger. Specify the following expression: @triggerBody()['OrderBatch']['Orders']
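In code view, the trigger then looks roughly like this sketch (the request schema is left empty for brevity):

    "triggers": {
        "manual": {
            "type": "Request",
            "kind": "Http",
            "inputs": {
                "schema": {}
            },
            "splitOn": "@triggerBody()['OrderBatch']['Orders']"
        }
    }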

Use Postman to send the JSON message to the HTTP trigger. You’ll notice that one input message triggers 3 workflow runs. A very easy way to debatch a message!

Debatching XML Messages

In old-school integration, XML is still widespread. When dealing with flat file or EDI messages, they are also converted into XML. So, it’s required to have this working for XML messages as well. Let’s consider the following example.

Update the existing Logic App with the following expression for the splitOn command: @xpath(xml(triggerBody()), '//*[local-name()="Order" and namespace-uri()="http://namespace"]'). In order to visualize the result, add a Terminate shape that contains the trigger body as the message.

Trigger the workflow again.  The result is as expected and the namespaces are nicely preserved!

Exception Handling

The advantage of this approach is that every child message immediately starts processing independently from the others. If one message fails during further processing, it does not impact the others and exception handling can be done on the level of the child message. This is comparable to recoverable interchange processing in BizTalk Server. In this way, you can better make use of the resubmit functionality. Read more about it here.

Let’s have a look at what happens if the XPath expression is invalid. The following exception is returned: The template language expression evaluation failed: 'The template language function 'xpath' parameters are invalid: the 'xpath' parameter must be a supported, well-formed XPath expression. Please see https://aka.ms/logicexpressions#xpath for usage details.' This is the desired behavior.

What happens if the splitOn command does not find a match within the incoming trigger message? Just change the XPath for example to @xpath(xml(triggerBody()), '//*[local-name()="XXX" and namespace-uri()="http://namespace"]'). In this case, no workflow instance gets triggered. The trigger has the Succeeded status, but did not fire. The consumer of the Logic App receives an HTTP 202 Accepted, and thus assumes everything went fine.

This is important to bear in mind, as you might lose invalid messages in this way. The advice is to perform schema validation before consuming a nested Logic App with the splitOn trigger.

Monitoring

Within the standard overview blade, you cannot see that the three instances relate to each other. However, if you look into the Run Details, you notice that they share the same Correlation ID. It’s good to see that in the backend, these workflow instances can be correlated. Let’s hope that such functionality also makes it to the portal in a user-friendly way!  

For the time being, you can leverage the Logic Apps Management REST API to build your custom monitoring solution.

For Each Command

Another way to achieve debatching-like behavior is by leveraging the forEach command. It’s very straightforward to use.

Debatching JSON Messages

Let’s use the same JSON message as in the splitOn example. Add a forEach command to the Logic App and provide the same expression: @triggerBody()['OrderBatch']['Orders'].

If we now send the JSON message to this Logic App, we get the following result. Note that the forEach results in 3 loops, one for each child message.

Debatching XML Messages

Let’s have a look whether the same experience applies to XML messages. Modify the Logic App to perform the looping based on this expression: @xpath(xml(triggerBody()), '//*[local-name()="Order" and namespace-uri()="http://namespace"]')

Now use the XML message from the first example to trigger the Logic App. Again, the forEach performs 3 iterations. Great!

Exception Handling

I want to see what happens if one child message fails processing. Therefore, I take the JSON Logic App and add the Parse JSON action that validates against the schema below. Note that all fields are required.

Take the JSON message from the previous example and remove a required field from the second order. This will cause the Logic App to fail for the second child message, but to succeed for the first and third one.

Trigger the Logic App and investigate the run history. This is a great result! Each iteration processes independently from the others. This is quite similar to the behavior of the splitOn command; however, it’s more difficult to use the resubmit function.

You must understand that, by default, the forEach branches are executed in parallel. You can modify this to sequential execution. Dive into the code view and add "operationOptions": "Sequential" to the forEach.
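Sketched in code view, with placeholder names and the inner actions omitted:

    "For_each_order": {
        "type": "Foreach",
        "foreach": "@triggerBody()['OrderBatch']['Orders']",
        "actions": {},
        "runAfter": {},
        "operationOptions": "Sequential"
    }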

Redo the test and you will see that this has no influence on the exception behavior. Every loop gets invoked, regardless of whether the previous iteration failed.

Monitoring

The monitoring experience is great! You can easily scroll through all iterations to see which iteration succeeded and which one failed. If one of the actions fails within a forEach, the Logic App gets the Failed status assigned.

What should we use?

In order to have a real debatching experience, I recommend using the splitOn command within enterprise integration scenarios. The fact that each child message immediately gets its own workflow instance assigned makes the exception handling strategy easier and operational interventions more straightforward.

Do not forget to perform schema validation first and then invoke a nested workflow with the Request trigger, configured with the splitOn command. This will ensure that no invalid message disappears. Calling a nested workflow also offers the opportunity to pass the batch header information via the HTTP headers, so you can preserve header information in the child message. Another way to achieve this is by executing a Transformation in the first Logic App that adds header information to every child message.

The nested workflow cannot have a Response action, because it’s decorated with a splitOn trigger. If you want to invoke such a Logic App, you need to update the consuming Logic App action with the following setting: "operationOptions": "DisableAsyncPattern".
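A sketch of such a consuming action in code view, with a placeholder workflow resource id:

    "Invoke_debatching_logic_app": {
        "type": "Workflow",
        "inputs": {
            "host": {
                "triggerName": "manual",
                "workflow": {
                    "id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Logic/workflows/<nested-logic-app>"
                }
            },
            "body": "@triggerBody()"
        },
        "runAfter": {},
        "operationOptions": "DisableAsyncPattern"
    }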

If we run the setup, explained above, we get the following debatching experience with header information preserved!

Conclusion

Logic Apps provides all required functionality to debatch XML and JSON messages. As always, it’s highly encouraged to investigate all options in depth and to conclude which approach suits your scenario best.

Thanks for reading!
Toon

Cascading effects of nested Logic Apps

Scenario

Let’s take the following scenario as a base to start from. Logic App 1 is exposed as a web service. It consumes Logic App 2, which in turn calls Logic App 3. Logic App 2 and 3 are considered to be reusable building blocks of the solution design. As an example, Logic App 3 puts a request message on a particular queue. Below you can find the outcome of a successful run that finishes within an acceptable timeframe for a web service.

Exception Scenario

If you stick to the above design, you’ll discover unpleasant behavior in case you need to cope with failure. Building cloud-based solutions means dealing with failure in your design, even in this basic scenario. Let’s simulate an exception in Logic App 3, by trying to put a message on a non-existing queue. As a result, Logic App 1 fails after 6 minutes of processing!

I expected a long delay and potentially a timeout, but those 6 minutes were a real surprise to me. The reason for this behavior is the default retry policy that is applied to Logic Apps. I consulted the documentation and that explains everything. Logic App 1 was fired once. Logic App 2 got retried 4 times, which results in 5 failed instances. The third workflow even got executed 25 (5×5) times.

The retry interval is specified in the ISO 8601 format. Its default value is 20 seconds, which is also the minimum value. The maximum value is 1 hour. The default retry count is 4, which is also the maximum retry count. If the retry policy definition is not specified, a fixed strategy is used with the default retry count and interval values. To disable the retry policy, set its type to None.

Optimize the retry policies

Time to override those default retry policies. For Logic App 1, I do not want any retry in case Logic App 2 fails. This is achieved by updating the code view:
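Sketched in code view, this boils down to adding a retryPolicy of type none to the inputs of the action that invokes Logic App 2 (names and the workflow id are placeholders):

    "Logic_App_2": {
        "type": "Workflow",
        "inputs": {
            "host": {
                "triggerName": "manual",
                "workflow": {
                    "id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Logic/workflows/logic-app-2"
                }
            },
            "body": "@triggerBody()",
            "retryPolicy": {
                "type": "none"
            }
        },
        "runAfter": {}
    }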

In Logic App 2, I configure the retry policy to retry once after 20 seconds:
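This can be sketched as the following fragment inside the inputs of the action that invokes Logic App 3:

    "retryPolicy": {
        "type": "fixed",
        "count": 1,
        "interval": "PT20S"
    }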

The result is acceptable from a timing perspective:

On the other hand, the exception message we receive is completely meaningless.  Check out this post to learn more about exception handling in such a situation.

Implement fire and forget

In the previous examples, we invoked the underlying Logic App in a synchronous way: call the Logic App and only continue when it has completed its processing. For those with a BizTalk background: this is comparable to the Call Orchestration shape. As Logic Apps gives you complete freedom on where to put the Response action in your workflow, you can also go for a fire-and-forget pattern, comparable to the Start Orchestration shape. This can be achieved by placing the Response action right after the Request trigger. This way, these reusable Logic Apps execute independently from their calling process.

This eventual consistency can have an impact on the way user applications are built and it also requires good operational monitoring in case asynchronous processes fail. Note in the example below that the consuming application is not aware that Logic App 3 failed.

Update: Recently I discovered that it’s even possible to leave out the Response action within the nested workflows. Just ensure to update the consuming Logic App action with the following setting: "operationOptions": "DisableAsyncPattern". This is even more fire-and-forget style and will improve performance a little bit.

This solution reduces processing dependencies between the reusable Logic Apps. Unfortunately, the design is still not bullet-proof. Under a high load, throttling could occur in the underlying Logic Apps, which could result in time-outs when calling the nested workflows. A more convenient design is to put a Service Bus queue in between. On the other hand, this increases the complexity of development, maintenance and operations. It’s important to assess this potential throttling issue within the context of your business case. Is it really worth the effort? It depends on so many factors…

Monitoring

As a final topic, I want to demonstrate that the nested workflows all share a common identifier. The parent workflow has a specific ID for its instance.

This ID appears in every involved Logic App run execution, in the form of a Correlation ID. This ID can be used to link / correlate the Logic App instances with each other.

This ID is handed over to the underlying workflow, via the x-ms-client-tracking-id HTTP Header.

Feedback to the product team

It’s fantastic that you get full control over the retry policies. The minimal retry interval of 20 seconds seems quite long to me if you need to deal with a synchronous web service scenario. I also found a nice suggestion to include an exponential back-off retry mechanism. Implementing circuit breakers would also be nice to have!

The monitoring experience for retried instances could be improved. In the Azure portal, they just show up as individual instances. There’s no easy way to find out that they are all related to each other. It would be a great feature if all runs with the same Correlation ID were grouped together by default. Like it? Vote here!

Conclusion

Nested Logic Apps workflows are very powerful for reusing functionality in a user-friendly way. Think about the location of the Response action within the underlying Logic App, as this greatly impacts the runtime dependencies. Implement fire and forget if your business scenario allows it and consider a queuing system in case you need a scalable solution that must handle a high load.

Thanks for reading!
Toon

The importance of idempotent receivers

What?

Let’s first have a closer look at the definition of idempotence, according to Wikipedia. “Idempotence is the property of certain operations in mathematics and computer science, that can be applied multiple times without changing the result beyond the initial application.” The meaning of this definition is explained as: “a function is idempotent if, whenever it is applied twice to any value, it gives the same result as if it were applied once; i.e., ƒ(ƒ(x)) ≡ ƒ(x)“.

If we apply this on integration, it means that a system is idempotent when it can process a specific message multiple times, while still retaining the same end-result. As a real-life example, an ERP system is idempotent if only one sales order is created, even if the CreateSalesOrder command message was submitted multiple times by the integration layer.

Why?

Often, customers request the integration layer to perform duplicate detection, so that the receiving systems do not need to be idempotent. This statement is only partially true. Duplicate detection in the middleware layer can discard messages that are received more than once. However, even in case a message is only received once by the middleware, it may still end up multiple times in the receiving system. Below you can find two examples of such edge cases.

Web service communication

Nowadays, integration leverages the power of APIs more and more. APIs are built on top of the HTTP protocol, which can cause issues due to its nature. Let’s consider the following situations:

  1. In this case, all is fine. The service processed the request successfully and the client is aware of this.

  2. Here there is also no problem. The service failed processing the request and the client knows about it. The client will retry. Eventually the service will only process the message once.

  3. This is a dangerous situation in which client and service are misaligned on the status. The service successfully processed the message, however the HTTP 200 response never reached the client. The client times out and will retry. In this case the message is processed twice by the server, so idempotence might be needed.

Asynchronous queueing

In case a message queueing system is used, idempotence is required if the queue guarantees at-least-once delivery. Let’s take Azure Service Bus queues as an example. Service Bus queues support the PeekLock mode. When you peek a message from the queue, it becomes invisible for other receivers during a specific time window. You can explicitly remove the message from the queue by executing a Complete command.

In the example below, the client peeks the message from the queue and sends it to the service. Server side processing goes fine and the client receives the confirmation from the service. However, the client is not able to complete the message because of an application crash or network interference. In this case, the message will become visible again on the queue and will be presented a second time to the service. As a consequence, idempotence might be required.

How?

The above scenarios showcase that duplicate data entries can be avoided most of the time; however, in specific edge cases a message might be processed twice. Within the business context of your project, you need to determine whether this is an issue. If 1 out of 1000 emails is sent twice, this is probably not a problem. If, however, 1 out of 1000 sales orders is created twice, this can have a huge business impact. The problem can be resolved by implementing exactly-once delivery or by introducing idempotent receivers.

Exactly-once delivery

The options to achieve exactly-once delivery on a protocol level are rather limited. Exactly-once delivery is very difficult to achieve between systems of different technologies. Attempts to provide an interoperable exactly-once protocol, such as SOAP WS-ReliableMessaging, ended up very complex and often not interoperable in practice. In case the integration remains within the same technology stack, some alternative protocols can be considered. On a Windows platform, Microsoft Distributed Transaction Coordinator can ensure exactly-once delivery (or maybe better exactly-once processing). The BizTalk Server SQL adapter and the NServiceBus MSMQ and SQL transport are examples that leverage this transactional message processing.

On the application level, the integration layer could be made responsible for first checking with the service whether the message was already processed. If this turns out to be true, the message can be discarded; otherwise the message must be delivered to the target system. Be aware that this results in chatty integrations, which may influence performance in case of a high message throughput.

Idempotent receiver

Idempotence can be established within the message itself. A classic example to illustrate this is a financial transaction. A non-idempotent message contains a command to increase your bank balance by € 100. If this message gets processed twice, that’s positive for you, but the bank won’t like it. It’s better to create a command message that states that the resulting bank balance must be € 12100. This example clearly achieves idempotence, but is not built for concurrent transactions.

An idempotent message is not always an option. In such cases the receiving application must take responsibility for ensuring idempotence. This can be done by maintaining a list of message IDs that have already been processed. If a message arrives with an ID that is already on the list, it gets discarded. When it’s not possible to have a message ID within the message, you can keep a list of processed hash values. Another way of achieving idempotence is to set a unique constraint on the ID of the data entity. Instead of pro-actively checking whether a message was already processed, you can just give it a try and handle the unique constraint exception.

Lately, I see more and more SaaS providers that publish idempotent upsert services, which I can only encourage! The term upsert means that the service itself can determine whether it needs to perform an insert or an update of the data entity. Preferably, this upsert is performed based on a functional id (e.g. customer number) and not based on an internal GUID, as otherwise you do not benefit from the performance gains.
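As an illustration, such an upsert keyed on a functional id could be sketched in T-SQL as follows, inside a stored procedure that accepts @CustomerNumber and @Name (table and column names are hypothetical):

    -- Idempotent upsert: the functional id (CustomerNumber) decides between insert and update
    MERGE [dbo].[Customers] AS target
    USING (SELECT @CustomerNumber AS CustomerNumber, @Name AS Name) AS source
        ON target.CustomerNumber = source.CustomerNumber
    WHEN MATCHED THEN
        UPDATE SET target.Name = source.Name
    WHEN NOT MATCHED THEN
        INSERT (CustomerNumber, Name) VALUES (source.CustomerNumber, source.Name);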

Conclusion

For each integration you set up, it’s important to think about idempotence. That’s why Codit has it by default on its Integration Analysis Checklist. Investigate the probability of duplicate data entries, based on the protocols used. If your business case needs to avoid duplicates at all times, check whether the integration layer takes responsibility for this or whether the receiving system provides idempotent service endpoints. The latter is mostly the best-performing choice.

Do you have other ways to deal with this? Do not hesitate to share your experience via the comments section!

Thanks for reading!

Toon

AS4 for Dummies – Part V: Are you ready for AS4?

If you are active in B2B/B2C messaging, then AS4 is definitely coming your way! Make sure you are prepared.

AS2 Comparison

As AS4 is inspired by AS2, both are often considered for message exchange over the internet. On a high level, they are very similar; however, there are some key differences that you should be aware of in order to make an informed decision on this matter. Let’s have a look!

Common Characteristics

These are the most important common characteristics of AS2 and AS4:

  • Payload agnostic: both messaging protocols are payload agnostic, so they support any kind of payload to be exchanged: XML, flat file, EDI, HL7, PDF, binary…
  • Payload compression: AS2 and AS4 both support compression of the exchanged messages, in order to reduce bandwidth. It is, however, done via totally different algorithms.
  • Signing and encryption: the two protocols support both signing and encryption of the exchanged payloads. It’s a trading partner agreement whether to apply it or not.
  • Non-repudiation: the biggest similarity is the way non-repudiation of origin and receipt are achieved. This is done by applying signing and using acknowledgement mechanisms.

Technology Differences

The common characteristics are established by using totally different technologies:

  • Message packaging: within AS2, the message packaging is purely MIME based. In AS4, this is governed by SOAP with Attachments, a combination of MIME and SOAP.
  • Security: AS2 applies security via the S/MIME specifications, whereas AS4’s security model is based on the well-known WS-Security standard.
  • Acknowledgements: in AS2 and AS4, acknowledgements are introduced to support reliable messaging and non-repudiation of receipt. In AS2 this is done by so-called Message Disposition Notifications (MDN), whereas AS4 uses signed Receipts.

AS4 Differentiators

These are the main reasons why AS4 could be chosen instead of AS2. If none of these features are applicable to your scenario, AS2 might be the better option, as it is currently better known and more widely adopted.

  • Support for multiple payloads: AS4 has full support for exchanging more than one business payload. Custom key/value properties for each payload are available.
  • Support for native web services: being built on top of SOAP with Attachments, AS4 offers native web service support. It’s a drawback that SwA is not supported out-of-the-box by .NET.
  • Support for pulling: this feature is very important if message exchange is required with a trading partner that cannot offer high availability or static addressing.
  • Support for lightweight client implementations: three conformance clauses are defined within AS4, so client applications do not have to support the full AS4 feature stack immediately.
  • Support for modern crypto algorithms: in case data protection is an important aspect, AS4 can offer more modern and less vulnerable crypto algorithms.
  • Support for more authentication types: AS4 supports username-password and X.509 authentication. There is also a conformance clause on SAML authentication within AS4.

Getting Started

Are you interested to learn more on AS4?

From an architecture / specifications perspective, it’s good to have a look at the related OASIS standards. This includes the ebMS 3.0 Core Specifications and the AS4 profile of ebMS 3.0. In case you are more interested in how an AS4 usage profile could be described between trading partners, this ENTSOG AS4 Usage Profile is a great example.

If you’re more developer oriented, it’s good to have a look at Holodeck B2B. It is java-based open-source software that fully supports the AS4 profile. Some sample files give you a head start in creating your first AS4 messages. Unfortunately, SOAP with Attachments and AS4 are not supported out-of-the-box within Microsoft .NET.

Can we help you?

Codit is closely involved with AS4. It is represented in the OASIS ebXML Messaging Services TC by Toon Vanhoutte, as a contributing member. In this way, Codit keeps a close eye on the evolution of the AS4 messaging standard. Throughout several projects, Codit has gained an extended expertise in AS4. Do not hesitate to contact us if you need any assistance on architecture, analysis or development of your AS4 implementation.

Within our R&D department, we have developed a base .NET library with support for the main AS4 features. In the demo for Integration Monday, this library was plugged into the BizTalk Server engine in order to create AS4 compatible messages. At the time of writing, Codit is defining its AS4 offering. If you would like to be informed about the strategy of Codit when it comes to AS4, please reach out to us. We will be more than happy to exchange ideas.