A routing slip is a configuration that specifies a sequence of processing steps (services). This routing slip must be attached to the message to be processed. Each service (processing step) is designed to receive the message, perform its functionality (based on the configuration) and invoke the next service. In that way, a message gets processed sequentially by multiple services, without the need for a central coordinating component. The diagram below is taken from Enterprise Integration Patterns.
Some examples of this pattern are:
Routing slips can be configured in any format; JSON and XML are quite popular. An example of a simple routing slip can be found below. The header contains the name of the routing slip and a counter that carries the current step number. Each service is represented by a routing step. A step has its own name to identify the service to be invoked and has specific key-value configuration pairs.
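A minimal JSON example matching that description (all field names are illustrative, not a fixed standard):

```json
{
  "routingSlip": {
    "name": "OrderToSap",
    "currentStep": 0,
    "steps": [
      { "name": "Decode",    "config": { "format": "EDIFACT" } },
      { "name": "Transform", "config": { "mapName": "Order_to_SapOrder" } },
      { "name": "Send",      "config": { "destination": "SAP", "timeoutSeconds": "30" } }
    ]
  }
}
```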
Note that this is just one way to represent a routing slip. Feel free to add your personal flavor…
Assign Routing Slip
There are multiple ways to assign a routing slip to a message. Let’s have a look:
- External: the source system already attaches the routing slip to the message
- Static: when a message is received, a fixed routing slip is attached to it
- Dynamic: when a message is received, a routing slip is attached, based on some business logic
- Scheduled: the integration layer has routing slips scheduled that also contain a command to retrieve a message
A service is considered a “step” within your routing slip. When defining a service, you need to design it to be generic. The executed logic within the service must be based on the configuration, if any is required. Ensure your service has a single responsibility and a clear boundary to its scope.
A service must consist of three steps:
- Receive the message
- Process the message, based on the routing slip configuration
- Invoke the next service, based on the routing slip configuration
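The three steps above can be sketched in a few lines of Python. This is purely illustrative: the handler names and the direct function call standing in for service invocation are assumptions, not part of any framework; a real implementation would receive and forward messages over a queue or HTTP.

```python
# Illustrative sketch of a generic routing-slip service (synchronous variant).
# Handlers represent the configurable logic of each generic service.
HANDLERS = {
    "uppercase": lambda body, cfg: body.upper(),
    "suffix":    lambda body, cfg: body + cfg["value"],
}

def run_step(message):
    """Receive the message, process it per the current step's config,
    then invoke the next service (here: a recursive call)."""
    slip = message["routingSlip"]
    step_no = slip["currentStep"]
    if step_no >= len(slip["steps"]):
        return message  # slip exhausted: processing finished
    step = slip["steps"][step_no]
    message["body"] = HANDLERS[step["name"]](message["body"], step.get("config", {}))
    slip["currentStep"] = step_no + 1          # advance the counter in the slip
    return run_step(message)                   # synchronous invocation of the next service

message = {
    "body": "order",
    "routingSlip": {
        "currentStep": 0,
        "steps": [
            {"name": "uppercase"},
            {"name": "suffix", "config": {"value": "-processed"}},
        ],
    },
}
result = run_step(message)
print(result["body"])  # ORDER-processed
```

Note that no service knows about any other service: the sequencing lives entirely in the slip.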
There are multiple ways to invoke services:
- Synchronous: the next service is invoked without any persistence in between (e.g. in memory). This has the advantage that it will perform faster.
- Asynchronous: the next service is invoked with persistence in between (e.g. a queue). This has the advantage that reliability increases, but performance degrades.
Think about the desired way to invoke services. If required, a combination of sync and async can be supported.
Integrations are composed of reusable and configurable building blocks. The routing slip pattern forces you to analyze, develop and operate in a streamlined manner. Reuse is heavily encouraged on different levels: the way analysis is performed, how patterns are implemented, the way releases are rolled out and how operational tasks are performed. One unified way of working, built on reusability.
Your integration is completely driven by the assigned routing slip. There are no hard-coded links between components. This allows you to change its behavior without the need of a re-deployment. This configuration also serves as a great source of documentation, as it explains exactly what message exchanges are running on your middleware and what they exactly do.
Faster release cycles
Once you have set up a solid routing slip framework, you can increase your release cadence. By leveraging your catalogue of reusable services, you heavily benefit from previous development efforts. The focus is only on the specifics of a new message exchange, which are mostly data bound (e.g. mapping). There’s also a tremendous increase of agility, when it comes to small changes. Just update the routing slip configuration and it has an immediate effect on your production workload.
A routing slip is agnostic to the underlying technology stack. The way the routing slip is interpreted is, of course, specific to the technology used. This introduces ways to have a unified integration solution, even if it is composed of several different technologies. It also enables cross-technology message exchanges. As an example, you can have an order that is received via an AS2 Logic App, being transformed and sent to an on premise BizTalk Server that inserts it into the mainframe, all governed by a single routing slip config.
A routing slip can introduce more visibility into the message exchanges, certainly from an operational perspective. If a message encounters an issue, operations personnel can immediately consult the routing slip to see where the message comes from, which steps have already been executed and where it is heading. This visibility can be improved by updating the routing slip with some extra historical information, such as the service start and end time. Why not even include a URL in the routing slip that points to a wiki page or knowledge base about that interface type?
Not enough reusability
Not every integration project is well-suited to the routing slip pattern. During the analysis phase, it’s important to identify the integration needs and to see if there are a lot of similarities between all message exchanges. When a high level of reusability is detected, the routing slip pattern might be a good fit. If all integrations are too heterogeneous, you’ll introduce more overhead than benefits.
Too complex logic
A common pitfall is adding too much complexity into the routing slip. Try to stick as much as possible to a sequential series of steps (services) that are executed. Some conditional decision logic inside a routing slip might be acceptable, but define clear boundaries for such logic. Do not start writing your own workflow engine, with its own workflow language. Keep the routing slip logic clean and simple, to stick to the purpose of a routing slip.
In case of maintenance of the surrounding systems, you often need to stop a message flow. Let’s take the scenario where you face the following requirement: “Do not send orders to SAP for the coming 2 hours”. One option is to stop a message exchange at its source, e.g. stop receiving messages from an SFTP server. If this is not acceptable, because these orders are also sent to other systems that should not be impacted, things get more complicated. You can stop the generic service that sends a message to SAP, but then you also stop sending other message types… Think about this upfront!
A very common pain point of a high level of reuse is the impact of upgrading a generic service that is used all over the place. There are different ways to reduce the risks of such upgrades, of which automated system testing is an important one. Within the routing slip, you can explicitly specify the version of a service you want to invoke. In that way, you can upgrade services gradually to the latest version, without the risk of a big-bang deployment. Define a clear upgrade policy, to avoid too many different versions of a service running side-by-side.
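Inside the routing slip, that can be as simple as adding a version field to each step (illustrative field names):

```json
{ "name": "Transform", "version": "1.2.0", "config": { "mapName": "Order_to_SapOrder" } }
```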
A message exchange is spread across multiple loosely coupled service instances, which could impose a monitoring challenge. Many technologies offer great monitoring insights for a single service instance, but lack an overall view across multiple service instances. Introducing a correlation ID into your routing slip can greatly improve the monitoring experience. This ID can be generated the moment you initialize a routing slip.
Routing slips are a very powerful mechanism to deliver unified and robust integrations in a fast way. The key takeaways of this blog are:
- Analyze in depth whether you can benefit from the routing slip pattern
- Limit the complexity that the routing slip has to handle
- Have explicit versioning of services inside the routing slip
- Include a unique correlation ID into the routing slip
- Add historical data to the routing slip
Hope this was a useful read!
Extension objects are used to consume external .NET libraries from within XSLT maps. This is often required to perform database lookups or complex functions during a transformation. Read more about extension objects in this excellent blog.
We are facing two big challenges:
- We must execute the existing XSLTs with extension objects in Logic App maps
- On premises Oracle and SQL databases must be accessed from within these maps
It’s clear that we should extend Logic Apps with non-standard functionality. This can be done by leveraging Azure Functions or Azure API Apps. Both allow custom coding, integrate seamlessly with Logic Apps and offer the following hybrid network options (when using App Service Plans):
- Hybrid Connections: most applicable for lightweight integrations and development / demo purposes
- VNET Integration: if you want to access a number of on premise resources through your Site-to-Site VPN
- App Service Environment: if you want to access a high number of on premise resources via ExpressRoute
As the pricing is nearly identical, since both options require an App Service Plan, the choice for Azure API Apps was made. The main reason was the existing WebAPI knowledge within the organization.
A Site-to-Site VPN is used to connect to the on-premise SQL and Oracle databases. By using a standard App Service Plan, we can enable VNET integration on the custom Transform API App. Behind the scenes, this creates a Point-to-Site VPN between the API App and the VNET, as described here. The Transform API App can be consumed easily from the Logic App, while being secured with Active Directory authentication.
The following steps were needed to build the solution. More details can be found in the referenced documentation.
- Create a VNET in Azure. (link)
- Setup a Site-to-Site VPN between the VNET and your on-premises network. (link)
- Develop an API App that executes XSLTs with corresponding extension objects. (link)
- Provide Swagger documentation for the API App. (link)
- Deploy the API App. Expose the Swagger metadata and configure CORS policy. (link)
- Configure VNET Integration to add the API App to the VNET. (link)
- Add Active Directory authentication to the API App. (link)
- Consume the API App from within Logic Apps.
The source code of the Transform API can be found here. It leverages Azure Blob Storage to retrieve the required files. The Transform API must be configured with the required app settings that define the blob storage connection string and the containers where the artefacts will be uploaded.
The Transform API offers one Transform operation, that requires 3 parameters:
- InputXml: the byte array that needs to be transformed
- MapName: the blob name of the XSLT map to be executed
- ExtensionObjectName: the blob name of the extension object to be used
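A request body for this operation might look as follows. The field names come from the parameter list above; the values are illustrative and the Base64-encoded input is shortened:

```json
{
  "InputXml": "PD94bWwg...",
  "MapName": "Order_to_SapOrder.xslt",
  "ExtensionObjectName": "TVH.Sample.ExtensionObjects.xml"
}
```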
You can run this sample to test the Transform API with custom extension objects.
This is a sample input that can be provided as input for the Transform action.
This XSLT must be uploaded to the right blob storage container and will be executed during the Transform action.
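A minimal XSLT of that kind could look like this. The namespace URI and method name are illustrative; they must match the extension object configuration:

```xml
<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:ext="http://tvh.sample/common"
    exclude-result-prefixes="ext">
  <xsl:template match="/Order">
    <Order>
      <!-- Call into the external .NET assembly via the extension object -->
      <TrackingId><xsl:value-of select="ext:GenerateGuid()"/></TrackingId>
      <xsl:copy-of select="*"/>
    </Order>
  </xsl:template>
</xsl:stylesheet>
```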
Extension Object XML
This extension object must be uploaded to the right blob storage container and will be used to load the required assemblies.
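The extension object file links the namespace prefix used in the XSLT to a concrete assembly and class. Its typical shape is shown below; the exact schema depends on how the Transform API loads it:

```xml
<ExtensionObjects>
  <ExtensionObject
      Namespace="http://tvh.sample/common"
      AssemblyName="TVH.Sample"
      ClassName="TVH.Sample.Common" />
</ExtensionObjects>
```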
Create an assembly named TVH.Sample.dll that contains the class Common.cs. This class contains a simple method to generate a GUID. Upload this assembly to the right blob storage container, so it can be loaded at runtime.
Deploy the Transform API, using the instructions on GitHub. You can easily test it using the Request / Response actions:
As a response, you should get the following output XML, that contains the generated GUID.
Important remark: Do not forget to add security to your Transform API (Step 7), as it is accessible on the public internet by default!
Thanks to the Logic Apps extensibility through API Apps and their VNET integration capabilities, we were able to build this solution in a very short time span. The solution offers an easy way to migrate BizTalk maps as-is towards Logic Apps, which is a big time saver! Access to resources that remain on premises is also a big plus nowadays, as many organizations have a hybrid application landscape.
Hope to see this functionality out-of-the-box in the future, as part of the Integration Account!
Thanks for reading. Sharing is caring!
Democratization of integration
Before we dive into the details, I want to provide some reasoning behind this post. With the rise of cloud technology, integration takes a more prominent role than ever before. In Microsoft’s integration vision, democratization of integration is on top of the list.
Microsoft aims to take integration out of its niche market and offers it as an intuitive and easy-to-use service to everyone. The so-called Citizen Integrators are now capable of creating light-weight integrations without the steep learning curve that for example BizTalk Server requires. Such integrations are typically point-to-point, user-centric and have some accepted level of fault tolerance.
As an Integration Expert, you must be aware of this. Enterprise integration faces completely different requirements than light-weight citizen integration: loose coupling is required, no message loss is accepted because it’s mission-critical interfacing, integrations must be optimized for operations personnel (monitoring and error handling), etc…
Keep this in mind when designing Logic App solutions for enterprise integration! Make sure you know your cloud and integration patterns. Ensure you understand the strengths and limits of Logic Apps. The advice below can give you a jump start in designing reliable interfaces within Logic Apps!
Design enterprise integration solutions
1. Decouple protocol and message processing
Once you have created a Logic App that receives a message via a specific transport protocol, it’s extremely difficult to change the protocol afterwards. This is because the subsequent actions of your Logic App often have a hard dependency on your protocol trigger / action. The advice is to perform the protocol handling in one Logic App and hand over the message to another Logic App to perform the message processing. This decoupling will allow you to change the receiving transport protocol in a flexible way, in case the requirements change or in case a certain protocol (e.g. SFTP) is not available in your DEV / TEST environment.
2. Establish reliable messaging
You must realize that every action you execute is performed by an underlying HTTP connection. By its nature, an HTTP request/response is not reliable: the service is not aware if the client disconnects during request processing. That’s why receiving messages must always happen in two phases: first you mark the data as returned by the service; second you label the data as received by the client (in our case the Logic App). The Service Bus Peek-Lock pattern is a great example that provides such at-least-once reliability. Another example can be found here.
3. Design for reuse
Real enterprise integration is composed of several common integration tasks such as: receive, decode, transform, debatch, batch, enrich, send, etc… In many cases, each task is performed by a combination of several Logic App actions. To avoid reconfiguring these tasks over and over again, you need to design the solution upfront to encourage reuse of these common integration tasks. You can for example use the Process Manager pattern that orchestrates the message processing by reusing nested Logic Apps or introduce the Routing Slip pattern to build integration on top of generic Logic Apps. Reuse can also be achieved on the deployment side, by having some kind of templated deployments of reusable integration tasks.
4. Secure your Logic Apps
From a security perspective, you need to take into account both role-based access control to your Logic App resources and runtime security considerations. RBAC can be configured in the Access Control (IAM) tab of your Logic App or on a Resource Group level. The runtime security really depends on the triggers and actions you’re using. As an example: Request endpoints are secured via a Shared Access Signature that must be part of the URL, IP restrictions can be applied. Azure API Management is the way to go if you want to govern API security centrally, on a larger scale. It’s a good practice to assign the minimum required privileges (e.g. read only) to your Logic Apps.
5. Think about idempotence
Logic Apps can be considered composite services, built on top of several APIs. APIs leverage the HTTP protocol, which can cause data consistency issues due to its nature. As described in this blog, there are multiple ways the client and server can get misaligned about the processing state. In such situations, clients will mostly retry automatically, which could result in the same data being processed twice at server side. Idempotent service endpoints are required in such scenarios, to avoid duplicate data entries. Logic Apps connectors that provide Upsert functionality are very helpful in these cases.
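The effect of an idempotent, upsert-style endpoint can be illustrated with a small Python sketch (the in-memory store and function names are hypothetical): replaying the same message leaves the data store unchanged.

```python
# Sketch: an upsert keyed on a business identifier makes retries harmless.
store = {}

def upsert_order(order):
    """Insert or update the order; processing the same message twice
    results in exactly one entry instead of a duplicate."""
    store[order["orderId"]] = order

message = {"orderId": "ORD-001", "status": "New"}
upsert_order(message)
upsert_order(message)  # client retry after a lost HTTP response
print(len(store))  # 1
```

A plain insert endpoint, by contrast, would create a duplicate row on every retry.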
6. Have a clear error handling strategy
With the rise of cloud technology, exception and error handling become even more important. You need to cope with failure when connecting to multiple on premise systems and cloud services. With Logic Apps, retry policies are your first resort to build resilient integrations. You can configure a retry count and interval on every action, but there’s no support for exponential back-off or the circuit breaker pattern. In case the retry policy doesn’t solve the issue, it’s advised to return a clear error description within sync integrations and to ensure a resumable workflow within async integrations. Read here how you can design a good resume / resubmit strategy.
7. Ensure decent monitoring
Every IT solution benefits from good monitoring. It provides visibility and improves the operational experience for your support personnel. If you want to expose business properties within your monitoring, you can use Logic Apps custom outputs or tracked properties. These can be consumed via the Logic Apps Workflow Management API or via OMS Log Analytics. From an operational perspective, it’s important to be aware that there is an out-of-the-box alerting mechanism that can send emails or trigger Logic Apps in case a run fails. Unfortunately, Logic Apps has no built-in support for Application Insights, but you can leverage extensibility (custom API App or Azure Function) to achieve this. If your integration spans multiple Logic Apps, you must provide correlation in your monitoring / tracing! Find more details about monitoring in Logic Apps here.
8. Use async wherever possible
Solid integrations are often characterized by asynchronous messaging. Unless the business requirements really demand request/response patterns, try to implement your integrations asynchronously. It comes with the advantage that you introduce real decoupling, both from a design and runtime perspective. Introducing a queuing system (e.g. Azure Service Bus) in fire-and-forget integrations results in highly scalable solutions that can handle an enormous amount of messages. Retry policies in Logic Apps must have different settings depending on whether you’re dealing with async or sync integration. Read more about it here.
9. Don’t forget your integration patterns
Whereas BizTalk Server forces you to design and develop in specific integration patterns, Logic Apps is more intuitive and easier to use. This could come with a potential downside that you forget about integration patterns, because they are not suggested by the service itself. As an integration expert, it’s your responsibility to determine which integration patterns should be applied on your interfaces. Loose coupling is common for enterprise integration. You can for example introduce Azure Service Bus that provides a Publish/Subscribe architecture. Its message size limitation can be worked around by leveraging the Claim Check pattern, with Azure Blob Storage. This is just one example of introducing enterprise integration patterns.
10. Apply application lifecycle management (ALM)
The move to a PaaS architecture, should be done carefully and must be governed well, as described here. Developers should not have full access to the production resources within the Azure portal, because the change of one small setting can have an enormous impact. Therefore, it’s very important to setup ALM, to deploy your Logic App solutions throughout the DTAP-street. This ensures uniformity and avoids human deployment errors. Check this video to get a head start on continuous integration for Logic Apps and read this blog on how to use Azure Key Vault to retrieve passwords within ARM deployments. Consider ALM as an important aspect within your disaster recovery strategy!
Yes, we can! Logic Apps really is a fit for enterprise integration, if you know what you’re doing! Make sure you know your cloud and integration patterns. Ensure you understand the strengths and limits of Logic Apps. The Logic App framework is a truly amazing and stable platform that brings a whole range of new opportunities to organizations. The way you use it, should be depending on the type of integration you are facing!
Interested in more? Definitely check out this session about building loosely coupled integrations with Logic Apps!
Any questions or doubts? Do not hesitate to get in touch!
Let’s discuss the scenario briefly. We need to consume data from the following table. All orders with the status New must be processed!
The table can be created with the following SQL statement:
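The original listing is not reproduced here; a table definition along these lines fits the scenario (column names are assumptions):

```sql
CREATE TABLE dbo.[Order] (
    Id         INT IDENTITY(1,1) PRIMARY KEY,
    Content    XML          NOT NULL,
    Status     VARCHAR(20)  NOT NULL DEFAULT 'New',   -- e.g. New / Peeked / Processed
    ModifiedOn DATETIME     NOT NULL DEFAULT GETDATE()
);
```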
To receive the data, I prefer to create a stored procedure. This avoids maintaining potentially complex SQL queries within your Logic App. The following stored procedure selects the first order with status New and updates its status to Processed in the same statement. Note that it also returns the @@ROWCOUNT, as this will come in handy in the next steps.
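A sketch of such a stored procedure, assuming a dbo.[Order] table with Id, Content and Status columns:

```sql
CREATE PROCEDURE dbo.GetNewOrder
AS
BEGIN
    SET NOCOUNT ON;
    -- Select the first New order and mark it Processed in one atomic
    -- statement, so no two readers can pick up the same row.
    UPDATE TOP (1) dbo.[Order]
    SET    Status = 'Processed'
    OUTPUT inserted.Id, inserted.Content
    WHERE  Status = 'New';

    -- @@ROWCOUNT tells the caller whether an order was returned.
    SELECT @@ROWCOUNT AS ReturnCode;
END
```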
The Logic App fires with a Recurrence trigger. The stored procedure gets executed and via the ReturnCode we can easily determine whether it returned an order or not. In case an order is retrieved, its further processing can be performed, which will not be covered in this post.
If you have a BizTalk background, this is similar to using a polling SQL receive location. One very important difference: the BizTalk receive adapter executes the stored procedure within the same distributed transaction as it persists the data in the MessageBox, whereas Logic Apps is completely built on APIs that have no notion of MSDTC at all.
In failure situations, when a database shuts down or the network connection drops, it could be that the order is already marked as Processed, but it never reaches the Logic App. Depending on the returned error code, your Logic App will either end up in a Failed state without a clear description, or it will retry automatically (for error codes 429 and 5xx). In both situations you’re facing data loss, which is not acceptable for our scenario.
We need to come up with a reliable way of receiving the data. Therefore, I suggest implementing a pattern similar to the Azure Service Bus Peek-Lock. Data is received in two phases:
- You mark the data as Peeked, which means it has been assigned to a receiving process
- You mark the data as Completed, which means it has been received by the receiving process
Besides these two explicit processing steps, there must be a background task which reprocesses messages that have had the Peeked status for too long. This makes our solution more resilient.
Let’s create the first stored procedure that marks the order as Peeked.
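A sketch of that procedure, assuming the same dbo.[Order] table with a ModifiedOn column to remember when the row was peeked (the procedure name matches the Logic App expression later in this post):

```sql
CREATE PROCEDURE dbo.PeekNewOrder
AS
BEGIN
    SET NOCOUNT ON;
    -- Atomically assign the first New order to this receiver.
    UPDATE TOP (1) dbo.[Order]
    SET    Status     = 'Peeked',
           ModifiedOn = GETDATE()      -- remember when it was peeked
    OUTPUT inserted.Id, inserted.Content
    WHERE  Status = 'New';

    SELECT @@ROWCOUNT AS ReturnCode;
END
```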
The second stored procedure accepts the OrderId and marks the order as Completed.
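Along these lines (column names assumed as before):

```sql
CREATE PROCEDURE dbo.CompleteOrder
    @OrderId INT
AS
BEGIN
    SET NOCOUNT ON;
    -- The order has safely arrived at the receiver: mark it Completed.
    UPDATE dbo.[Order]
    SET    Status = 'Completed'
    WHERE  Id = @OrderId
      AND  Status = 'Peeked';
END
```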
The third stored procedure should be executed by a background process, as it sets the status back to New for all orders that have the Peeked status for more than 1 hour.
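A sketch of the reset procedure, assuming a ModifiedOn column that records when the order was peeked:

```sql
CREATE PROCEDURE dbo.ReleaseExpiredPeekedOrders
AS
BEGIN
    SET NOCOUNT ON;
    -- Orders that stayed Peeked for over an hour were probably lost
    -- by their receiver; make them available again.
    UPDATE dbo.[Order]
    SET    Status = 'New'
    WHERE  Status = 'Peeked'
      AND  ModifiedOn < DATEADD(HOUR, -1, GETDATE());
END
```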
Let’s now consume the two stored procedures from within our Logic App. First we Peek for a new order and when we received it, the order gets Completed. The OrderId is retrieved via this expression: @body('Execute_PeekNewOrder_stored_procedure')?['ResultSets']['Table1']['Id']
The background task could be executed by a SQL Agent Job (SQL Server only) or by another Logic App that is fired every hour.
Happy with the result? Not 100%! What if something goes wrong during further downstream processing of the order? The only way to reprocess the message is by changing its status in the origin database, which can be a quite cumbersome experience for operators. Why can’t we just resume the Logic App in case of an issue?
As explained over here, Logic Apps has an extremely powerful mechanism of resubmitting workflows. Because Logic Apps has – at the time of writing – no triggers for SQL Server, a resubmit of the Recurrence trigger is quite useless. Therefore I only want to complete my order when I’m sure that I’ll be able to resubmit it if something fails during its further processing. This can be achieved by splitting the Logic App in two separate workflows.
The first Logic App peeks for the order and parses the result into a JSON representation. This JSON is passed to the next Logic App.
The second Logic App gets invoked by the first one. This Logic App first completes the order and afterwards performs the further processing. In case something goes wrong, a resubmit of the second Logic App can be initiated.
Very happy with the result as:
- The data is received from the SQL table in a reliable fashion
- The data can be resumed in case further processing fails
Don’t forget that every action is HTTP based, which can have an impact on reliability. Consider a two-phased approach for receiving data, in case you cannot afford message loss. The same principle can also be applied to receiving files: read the file content in one action and delete the file in another action. Always think upfront about resume / resubmit scenarios. Triggers are better suited for resubmit than actions, so if there are triggers available: always use them!
This may sound like overkill to you, as these considerations will require some additional effort. My advice is to determine first if your business scenario must cover such edge case failure situations. If yes, this post can be a starting point for your final solution design.
Liked this post? Feel free to share with others!
In case you are interested in a detailed walk-through on how to set up continuous deployment, please check out this blog post on Continuous Deployment in BizTalk 2016, Feature Pack 1.
What is included?
Below, you can find a bullet point list of features included in this release.
- An application version has been added and can be easily specified.
- Automated deployment from VSTS, using a local deploy agent.
- Automated deployment of schemas, maps, pipelines and orchestrations.
- Automated import of multiple binding files.
- Binding file management through VSTS environment variables.
- Update of specific assemblies in an existing BizTalk application (with downtime)
What is not included?
This is a list of features that are currently not supported by the new VSTS release task:
- Build BizTalk projects in VSTS hosted build servers.
- Deployment to a remote BizTalk server (local deploy agent required)
- Deployment to a multi-server BizTalk environment.
- Deployment of shared artifacts (e.g. a schema that is used by several maps)
- Deployment of more advanced artifacts: BAM, BRE, ESB Toolkit…
- Control of which host instances / ports / orchestrations should be (re)started
- Undeploy a specific BizTalk application, without redeploying it again.
- Use the deployment task in TFS 2015 Update 2+ (no download supported)
- Execute the deployment without the dependency of VSTS.
Microsoft released this VSTS continuous deployment service into the wild, clearly stating that this is a first step in the BizTalk ALM story. That sounds very promising to me, as we can expect more functionality to be added in future feature packs!
After intensively testing the solution, I must conclude that there is a stable and solid foundation to build upon. I really like the design and how it is integrated with VSTS. This foundation can now be extended with the missing pieces, so we end up with great release management!
At the moment, this functionality can be used by BizTalk Server 2016 Enterprise customers that have a single server environment and only use the basic BizTalk artifacts. Other customers should still rely on the incredibly powerful BizTalk Deployment Framework (BTDF), until the next BizTalk Feature Pack release. At that moment in time, we can re-evaluate again! I’m quite confident that we’re heading in the good direction!
Looking forward to more on this topic!
The documentation of the Management API can be found here. In short: almost everything you can access in the BizTalk Administration Console is now available in the BizTalk Management API. The API is very well documented with Swagger, so it’s pretty much self-explaining.
What is included?
A complete list of available operations can be found here.
There are new opportunities on the deployment side. Here are some ideas that popped into my mind:
- Dynamically create ports. Some messaging solutions are very generic. Adding new parties is sometimes just a matter of creating a new set of receive and send ports. This can now be done through this Management API, so you don’t need to do the plumbing yourself anymore.
- Update tracking settings. We all know it’s quite difficult to keep your tracking settings consistent through all applications and binding files. The REST API can now be leveraged to change the tracking settings on the fly to their desired state.
Also the runtime processing might benefit from this new functionality. Some scenarios:
- Start and stop processes on demand. In situations where the business wants control over when certain processes should be active, you can start/stop receive/send ports on demand. Just a small UI on top of the Management API, including the appropriate security measures, and you’re good to go!
- Maintenance windows. BizTalk is in the middle of your application landscape. Deployments on backend applications can have a serious impact on running integrations. That’s why stopping certain ports during maintenance windows is a good approach. This can now be easily automated or controlled by non-BizTalk experts.
Most new opportunities reside on the monitoring side. A couple of potential use cases:
- Simplified and short-lived BAM. It’s possible to create some simple reports with basic statistics of your BizTalk environment. You can leverage the Management API or the Operational OData Service. You can easily visualize the number of messages per port and for example the number of suspended instances. All of this is built on top of the data in your MessageBox and DTA database, so there’s no long term reporting out-of-the-box.
- Troubleshooting. There are very easy-to-use operations available to get a list of services instances with a specific status. In that way, you can easily create a dashboard that gives an overview of all instances that require intervention. Suspended instances can be resumed and terminated through the Management API, without the need to access your BizTalk Server.
This is an example of the basic Power BI reports that are shipped with this feature pack.
What is not included?
This brand new BizTalk Management API is quite complete; I’m very excited about the result! As always, I looked at it with a critical mindset and tried to identify missing elements that would enable even more additional value. Here are some aspects that are currently not exposed by the API, but would be handy in future releases:
- Host Instances: it would be great to have the opportunity to also check the state of the host instances and to even start / stop / restart them. Currently, only a GET operation on the hosts is available.
- Tracked Context Properties: I’m quite fond of these, as they enable you to search for particular message events, based on functional search criteria (e.g. OrderId, Domain…). Would be a nice addition to this API!
- Real deployment: first I thought that the new deployment feature was built on top of this API, but that was wrong. The API exposes functionality to create and manage ports, but no real option to update / deploy a schema, pipeline, orchestration or map. Could be nice to have, but on the other hand, we have a new deployment feature of which we need to take advantage of!
- Business Activity Monitoring: I really like the idea of the Operational OData Service, which smoothly integrates with Power BI. Would be great to have a similar and generic approach for BAM, so we can easily consume the business data without creating custom dashboards. The old BAM portal is really no option anymore nowadays. You can vote here.
Very happy to see more commitment from Microsoft towards BizTalk Server. This emphasises their “better together” integration vision on BizTalk Server and Logic Apps! Check out the BizTalk User Voice page if you want to influence the BizTalk roadmap!
The exposure of BizTalk as a REST API opens up a new range of great opportunities. Don’t forget to apply the required security measures when exposing this API! With the introduction of this API, auditing all activity becomes even more important!
Thanks to the BizTalk team for this great addition! Thank you for reading!
The documentation of this scheduling feature can be found on MSDN.
What is included?
Support for time zones
The times provided within the schedule tab of receive locations are now accompanied by a time zone. This ensures your solution no longer depends on the local computer settings. There’s also a checkbox to automatically adjust for daylight saving time.
This is a small, but handy addition to the product! It avoids unpleasant surprises when rolling out your BizTalk solutions throughout multiple environments or even multiple customers!
Service window recurrence
The configuration of service windows is now a lot more advanced. You have multiple recurrence options available:
- Daily: used to run the receive location every x number of days
- Weekly: used to run the receive location on specific days of the week
- Monthly: used to run the receive location on specific dates or specific days of the month
Up till now, I didn’t use the service window that much. These new capabilities enable some new scenarios. As an example, this would come in handy to schedule the release of batch messages at a specific time of day, which is often required in EDI scenarios!
What is not included?
This is not a replacement for the BizTalk Scheduled Task Adapter, which is a great community adapter! There is a fundamental difference between an advanced service window configuration and the Scheduled Task Adapter. A service window configures the time during which a receive location is active, whereas the Scheduled Task Adapter executes a pre-defined task on the configured recurrence cadence.
For the following scenarios, we still need the Scheduled Task Adapter:
- Send a specific message every x seconds / minutes.
- Trigger a process every x seconds / minutes.
- Poll a REST endpoint every x seconds / minutes. Read more about it here.
Very happy to see more commitment from Microsoft towards BizTalk Server. This emphasises their “better together” integration vision on BizTalk Server and Logic Apps! Check out the BizTalk User Voice page if you want to influence the BizTalk roadmap!
These new scheduling capabilities are a nice addition to BizTalk’s toolbelt! In future feature packs, I hope to see similar capabilities as the Scheduled Task Adapter. Many customers are still reluctant to use community adapters, so a supported adapter would be very nice! You can vote here!
Thanks for reading!
I’ve created this walkthrough mainly because I had difficulty fully understanding how it works. The documentation does not seem 100% complete and some blog posts I’ve read added to my confusion. This is a high-level overview of how it works:
- The developer must configure what assemblies and bindings should be part of the BizTalk application. Also, the order of deployment must be specified. This is done in the new BizTalk Application Project.
- The developer must check in the BizTalk projects, including the configured BizTalk Application Project. The required binding files must also be added to the chosen source control system.
- A build is triggered (automatically or manually). A local build agent compiles the code. By building the BizTalk Application Project, a deployment package (.zip) is automatically generated with all required assemblies and bindings. This deployment package (.zip) is published to the drop folder.
- After the build completes, the release can be triggered (automatically or manually). A local deploy agent, installed on the BizTalk server, takes the deployment package (.zip) from the build’s drop folder and performs the deployment, based on the configuration done in step 1. Placeholders in the binding files are replaced by VSTS environment variables.
- Make a clear distinction between build and release pipelines!
- Do not create and check-in the deployment package (.zip) yourself!
You can follow the steps below to set up full continuous deployment of BizTalk applications. Make sure you check the prerequisites documented over here.
Create a build agent
As VSTS does not support building BizTalk projects out-of-the-box, we need to create a local build agent that performs the job.
Create Personal Access Token
For the build agent to authenticate, a Personal Access Token is required.
- Browse to your VSTS home page. In my case this is https://toonvanhoutte.visualstudio.com
- Click on the profile icon and select Security.
- Select Personal access tokens and click Add
- Provide a meaningful name, expiration time and select the appropriate account. Ensure you allow access to Agent Pools (read, manage).
- Ensure you copy the generated access token, as we will need this later.
Install local build agent
The build agent should be installed on the server that has Visual Studio, the BizTalk Project Build Component and BizTalk Developer Tools installed.
- Select the Settings icon and choose Agent queues.
- Select the Default agent queue. As an alternative, you could also create a new queue.
- Click on Download agent
- Click Download. Remark that the required PowerShell scripts to install the agent are provided.
- Open PowerShell as administrator on the build server.
Run the following command to unzip and launch the installation:
mkdir agent ; cd agent
Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory("$HOME\Downloads\vsts-agent-win7-x64-2.115.0.zip", "$PWD")
- Execute the .\config.cmd command to launch the configuration:
- Provide the requested information:
> Server URL: https://toonvanhoutte.visualstudio.com
> Authentication: PAT
> PAT: The personal access token copied in the previous step
- Press enter for default pool
- Press enter for default name
- Press enter for default work folder
- Provide Y to run as a service
- Provide user
- Provide password
- Double check that the local build service is created and running.
- If everything went fine, you should see the build agent online!
Create a build definition
Let’s now create and configure the required build definition.
- In the Builds tab, click on New to create a new build definition.
- Select Visual Studio to start with a pre-configured build definition. Click Next to continue.
- Select your Team Project as the source, enable continuous integration, select the Default queue agent and click Create.
- Delete the following build steps, so the build pipeline looks like this:
> NuGet Installer
> Visual Studio Test
> Publish Symbols
- Configure the Visual Studio Build step. Select the BizTalk solution that contains all required artifacts. Make sure Visual Studio 2015 is picked and verify that MSBuild architecture is set to MSBuild x86.
- The other build steps can remain as-is. Click Save.
- Provide a clear name for the build definition and click OK.
- Hopefully your build finishes successfully. Solve potential issues in case the build fails.
Configure BizTalk Application
In this chapter, we need to create and configure the definition of our BizTalk application. The BizTalk Server 2016 Feature Pack 1 introduces a new BizTalk project type: BizTalk Server Application Project. Let’s have a look how we can use this to kick off an automated deployment.
- On your solution, click Add, Add New Project.
- Ensure you select .NET Framework 4.6.1 and you are in the BizTalk Projects tab. Choose BizTalk Server Application Project and provide a descriptive name.
- Add references to all projects that need to be included in this BizTalk application and click OK.
- Add all required binding files to the project. Make sure that every binding has Copy to Output Directory set to Copy Always. This way, the bindings will be included in the generated deployment package (.zip).
- In case you want to replace environment specific settings in your binding file, such as connection strings and passwords, you must add placeholders with the $(placeholder) notation.
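For illustration, a file send port address in a binding file could carry such a placeholder; the share and folder names below are invented, only the $(Environment) placeholder notation is the mechanism described above:

```xml
<PrimaryTransport>
  <!-- $(Environment) is replaced at release time by the VSTS environment variable -->
  <Address>\\fileshare\$(Environment)\orders\out\%MessageID%.xml</Address>
</PrimaryTransport>
```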
- Open the BizTalkServerInventory.json file and configure the following items:
> Name and path of all assemblies that must be deployed in the BizTalk application
> Name and path of all binding files that must be imported into the BizTalk application
> The deployment sequence of assemblies to be deployed and bindings to be imported.
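As a sketch, the inventory file could look like the fragment below. The three collections mirror the three bullets above; the assembly and binding names are invented, and the exact property names should be verified against the template file generated by the BizTalk Server Application Project:

```json
{
  "BizTalkAssemblies": [
    { "Name": "Contoso.Orders.Schemas.dll", "Path": "..\\Contoso.Orders.Schemas\\bin\\Release\\Contoso.Orders.Schemas.dll" }
  ],
  "BindingsFiles": [
    { "Name": "OrdersBindings.xml", "Path": "OrdersBindings.xml" }
  ],
  "DeploymentSequence": [
    "Contoso.Orders.Schemas.dll",
    "OrdersBindings.xml"
  ]
}
```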
- Right click on the BizTalk Application Project and choose Properties. Here you can specify the desired version of the BizTalk Application. Be aware that this version is different, depending on whether you’re building in debug or release mode. Click OK to save the changes.
- Build the application project locally. Fix any errors that might occur. If the build succeeds, you should see a deployment package (.zip) in the bin folder. This package will be used to deploy the BizTalk application.
- Check in the new BizTalk Application Project. This should automatically trigger a new build. Verify that the deployment package (.zip) is also available in the drop folder of the build. This can be done by navigating to the Artifacts tab and clicking on Explore.
- You should see the deployment package (.zip) in the bin folder of the BizTalk Application Project.
Create a release definition
We’ve created a successful build that generates the required deployment package (.zip). Now it’s time to configure a release pipeline that takes this deployment package as an input and deploys it automatically on our BizTalk Server.
- Navigate to the Releases tab and click Create release definition.
- Select Empty to start with an empty release definition and click Next to continue.
- Choose Build as the source for the release, as the build output contains the deployment package (.zip). Make sure you select the correct build definition. If you want to set up continuous deployment, make sure you check the option. Click Create to continue.
- Change the name of the Release to a more meaningful name.
- Change the name of the Environment to a more meaningful name.
- Click on the “…” icon and choose Configure variables.
- Add an environment variable, named Environment. This will ensure that every occurrence of $(Environment) in your binding file, will be replaced with the configured value (DEV). Click OK to confirm.
- Click Add Tasks to add a new task. In the Deploy tab, click Add next to the BizTalk Server Application Deployment task. Click Close to continue.
- Provide the Application Name in the task properties.
- For the Deployment package path, navigate to the deployment package (.zip) that is in the drop folder of the linked build artifact. Click OK to confirm.
- Specify, in the Advanced Options, the applications to reference, if any.
- Select Run on agent and select the previously created agent queue to perform the deployment. In a real scenario, this will need to be a deployment agent per environment.
- Save the release definition and provide a comment to confirm.
Test continuous deployment
- Now trigger a release by selecting Create Release.
- Keep the default settings and click Create.
- In the release logs, you can see all details. The BizTalk deployment task has very good log statements, so in case of an issue you can easily pinpoint the problem. Hopefully you encounter a successful deployment!
- On the BizTalk Server, you’ll notice that the BizTalk application has been created and started. Notice that the application version is applied and the application references are created!
In case you selected the continuous integration options, there will now be an automated deployment each time you check in a change in source control. Continuous deployment has been set up!
Hope you’ve enjoyed this detailed, but basic walkthrough. For real scenarios, I highly encourage you to extend this continuous integration approach with:
- Automated unit testing and optional integration testing
- Automated versioning of the assembly files
- Include the version dynamically in the build and release names
For this blog post, I decided to try to batch the following XML message. As Logic Apps supports JSON natively, we can assume that a similar setup will work quite easily for JSON messages. Remark that the XML snippet below contains an XML declaration, so pure string appending won’t work. Also namespaces are included.
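The message itself is not reproduced in this text; a representative child message, with the XML declaration and namespace that make plain string appending problematic, could look like this (the namespace and element names are invented for illustration):

```xml
<?xml version="1.0" encoding="utf-8"?>
<Order xmlns="http://namespace">
  <OrderId>1</OrderId>
</Order>
```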
I came up with the following requirements for my batching solution:
- External message store: in integration I like to avoid long-running workflow instances at all times. Therefore I prefer messages to be stored somewhere out of the process, waiting to be batched, instead of keeping them active in a singleton workflow instance (e.g. BizTalk sequential convoy).
- Message and metadata together: I want to avoid storing the message in one place and its metadata in another. Keep them together, to simplify development and maintenance.
- Native Logic Apps integration: preferably I can leverage an Azure service, that has native and smooth integration with Azure Logic Apps. It must ensure we can reliably assign messages to a specific batch and we must be able to remove them easily from the message store.
- Multiple batch release triggers: I want to support multiple ways to decide when a batch can be released.
> # Messages: send out batches containing X messages each
> Time: send out a batch at a specific time of the day
> External Trigger: release the batch when an external trigger is received
After some analysis, I was convinced that Azure Service Bus queues are a good fit:
- External message store: the messages can be queued for a long time in an Azure Service Bus queue.
- Message and metadata together: the message is placed together with its properties on the queue. Each batch configuration can have its own queue assigned.
- Native Logic Apps integration: there is a Service Bus connector to receive multiple messages inside one Logic App instance. With the peek-lock pattern, you can reliably assign messages to a batch and remove them from the queue.
- Multiple batch release triggers:
> # Messages: In the Service Bus connector, you can choose how many messages you want to receive in one Logic App instance
> Time: Service Bus has a great property, ScheduledEnqueueTimeUtc, which ensures that a message only becomes visible on the queue from a specific moment in time. This is a great way to schedule messages to be released at a specific time, without the need for an external scheduler.
> External Trigger: The Logic App can be easily instantiated via the native HTTP Request trigger
The goal of this workflow is to put the message on a specific queue for batching purposes. This Logic App is very straightforward to implement. Add a Request trigger to receive the messages that need to be batched and use the Send Message Service Bus connector to send the message to a specific queue.
In case you want to release the batch only at a specific moment in time, you must provide a value for the ScheduledEnqueueTimeUtc property in the advanced settings.
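To make the scheduling concrete, here is a small Python sketch (not part of the Logic App itself) of how a sender could compute the next occurrence of a fixed daily release time to use as the ScheduledEnqueueTimeUtc value. The 22:00 release hour is an arbitrary example:

```python
from datetime import datetime, timedelta, timezone

def next_release_time(release_hour, now=None):
    """Return the next occurrence (UTC) of release_hour:00,
    suitable as a ScheduledEnqueueTimeUtc value."""
    now = now or datetime.now(timezone.utc)
    candidate = now.replace(hour=release_hour, minute=0, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(days=1)  # today's slot already passed, schedule tomorrow
    return candidate

# Example: at 23:30 UTC, the next 22:00 release falls on the next day.
now = datetime(2017, 6, 1, 23, 30, tzinfo=timezone.utc)
print(next_release_time(22, now).isoformat())  # → 2017-06-02T22:00:00+00:00
```

Messages scheduled this way sit invisibly on the queue until the computed moment, so no external scheduler is needed.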
This is the more complex part of the solution. The first challenge is to receive, for example, 3 messages in one Logic App instance. My first attempt failed, because there is apparently different behaviour between the Service Bus receive trigger and action:
- When one or more messages arrive in a queue: this trigger receives messages in a batch from a Service Bus queue, but it creates a separate Logic App instance for every message. This is not desired for our scenario, but can be very useful in high-throughput scenarios.
- Get messages from a queue: this action can receive multiple messages in batch from a Service Bus queue. This results in an array of Service Bus messages, inside one Logic App instance. This is the result that we want for this batching exercise!
Let’s use the peek-lock pattern to ensure reliability and receive 3 messages in one batch:
As a result, we get this JSON array back from the Service Bus connector:
The challenge is to parse this array, decode the base64 content in the ContentData and create a valid XML batch message from it. I tried several complex Logic App expressions, but soon realized that Azure Functions is better suited to take care of this complicated parsing. I created the following Azure Function, as a Generic Webhook C# type:
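The original function is written in C#; as an illustration of the same parsing logic, here is a Python sketch. It assumes the connector returns an array of objects with a base64-encoded ContentData property; the Batch root element name is invented:

```python
import base64
import re

def build_batch(messages, root="Batch"):
    """Combine base64-encoded XML messages from the Service Bus
    connector output into one XML batch document."""
    parts = []
    for msg in messages:
        xml = base64.b64decode(msg["ContentData"]).decode("utf-8")
        # Strip the per-message XML declaration; it may only appear once per document.
        xml = re.sub(r"^\s*<\?xml[^?]*\?>\s*", "", xml)
        parts.append(xml.strip())
    return ('<?xml version="1.0" encoding="utf-8"?>'
            f"<{root}>" + "".join(parts) + f"</{root}>")

# Hypothetical connector output with one base64-encoded order message.
sample = [{"ContentData": base64.b64encode(
    b'<?xml version="1.0"?><Order xmlns="http://namespace"><Id>1</Id></Order>'
).decode("ascii")}]
print(build_batch(sample))
```

Because each child keeps its own xmlns attribute, the namespaces survive the concatenation, which is exactly what plain string appending of full documents would break.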
Let’s consume this function now from within our Logic App. There is seamless integration with Logic Apps, which is really great!
As an output of the GetBatchMessage Azure Function, I get the following XML 🙂
This solution is very nice, but what about large messages? Recently, I wrote a Service Bus connector that uses the claim check pattern, which exchanges large payloads via Blob Storage. In this batching scenario, we can also leverage this functionality. Once I have open sourced this project, I’ll update this blog with a working example. Stay tuned for more!
This is a great and flexible way to perform batching within Logic Apps. It really demonstrates the power of the Better Together story with Azure Logic Apps, Service Bus and Functions. I’m sure this is not the only way to perform batching in Logic Apps, so do not hesitate to share your solution for this common integration challenge in the comments section below!
I hope this gave you some fresh insights in the capabilities of Azure Logic Apps!
Logic Apps offer the splitOn command, which can only be added to the trigger of a Logic App. In this splitOn command, you can provide an expression that results in an array. For each item in that array, a new instance of the Logic App is fired.
Debatching JSON Messages
Logic Apps are completely built on APIs, so they natively support JSON messages. Let’s have a look at how we can debatch the JSON message below, by leveraging the splitOn command.
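The JSON message itself is not included in this text; a representative shape matching the OrderBatch/Orders path used in this example, with invented order fields, could be:

```json
{
  "OrderBatch": {
    "Orders": [
      { "OrderId": 1 },
      { "OrderId": 2 },
      { "OrderId": 3 }
    ]
  }
}
```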
Create a new Logic App and add the Request trigger. In the code view, add the splitOn command to the trigger. Specify the following expression: @triggerBody()['OrderBatch']['Orders']
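In code view, the trigger then looks roughly like the fragment below. The trigger name manual is the default the designer generates, and the empty schema is a placeholder; only the splitOn expression comes from the text above:

```json
"triggers": {
  "manual": {
    "type": "Request",
    "kind": "Http",
    "splitOn": "@triggerBody()['OrderBatch']['Orders']",
    "inputs": { "schema": {} }
  }
}
```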
Use Postman to send the JSON message to the HTTP trigger. You’ll notice that one input message triggers 3 workflow runs. Very easy way to debatch a message!
Debatching XML Messages
In old-school integration, XML is still widespread. When dealing with flat file or EDI messages, they are also converted into XML. So, it’s required to have this working for XML messages as well. Let’s consider the following example.
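The example message is not reproduced in this text; a representative batch, using the Order element in the http://namespace namespace that the XPath in this example targets (the other element names are invented), could be:

```xml
<?xml version="1.0" encoding="utf-8"?>
<OrderBatch xmlns="http://namespace">
  <Order><OrderId>1</OrderId></Order>
  <Order><OrderId>2</OrderId></Order>
  <Order><OrderId>3</OrderId></Order>
</OrderBatch>
```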
Update the existing Logic App with the following expression for the splitOn command: @xpath(xml(triggerBody()), '//*[local-name()="Order" and namespace-uri()="http://namespace"]'). In order to visualize the result, add a Terminate shape that contains the trigger body as the message.
Trigger the workflow again. The result is as expected and the namespaces are nicely preserved!
The advantage of this approach is that every child message immediately starts processing independently from the others. If one message fails during further processing, it does not impact the others and exception handling can be done on the level of the child message. This is comparable to recoverable interchange processing in BizTalk Server. In this way, you can better make use of the resubmit functionality. Read more about it here.
Let’s have a look at what happens if the xPath expression is invalid. The following exception is returned: The template language expression evaluation failed: 'The template language function 'xpath' parameters are invalid: the 'xpath' parameter must be a supported, well-formed XPath expression. Please see https://aka.ms/logicexpressions#xpath for usage details.' This behavior is as desired.
What happens if the splitOn command does not find a match within the incoming trigger message? Just change the xPath, for example to @xpath(xml(triggerBody()), '//*[local-name()="XXX" and namespace-uri()="http://namespace"]'). In this case, no workflow instance gets triggered. The trigger has the Succeeded status, but did not fire. The consumer of the Logic App receives an HTTP 202 Accepted, so it assumes everything went fine.
This is important to bear in mind, as you might lose invalid messages in this way. The advice is to perform schema validation before consuming a nested Logic App with the splitOn trigger.
Within the standard overview blade, you cannot see that the three instances relate to each other. However, if you look into the Run Details, you notice that they share the same Correlation ID. It’s good to see that in the backend, these workflow instances can be correlated. Let’s hope that such functionality also makes it to the portal in a user-friendly way!
For the time being, you can leverage the Logic Apps Management REST API to build your custom monitoring solution.
For Each Command
Another way to achieve debatching-like behavior is by leveraging the forEach command. It’s very straightforward to use.
Debatching JSON Messages
Let’s use the same JSON message as in the splitOn example. Add a forEach command to the Logic App and provide the same expression: @triggerBody()['OrderBatch']['Orders'].
If we now send the JSON message to this Logic App, we get the following result. Remark that the forEach results in 3 loops, one for each child message.
Debatching XML Messages
Let’s have a look at whether the same experience applies to XML messages. Modify the Logic App to perform the looping based on this expression: @xpath(xml(triggerBody()), '//*[local-name()="Order" and namespace-uri()="http://namespace"]')
Now use the XML message from the first example to trigger the Logic App. Again, the forEach includes 3 iterations. Great!
I want to see what happens if one child message fails processing. Therefore, I take the JSON Logic App and add the Parse JSON action that validates against the schema below. Remark that all fields are required.
Take the JSON message from the previous example and remove a required field from the second order. This will cause the Logic App to fail for the second child message, but to succeed for the first and third one.
Trigger the Logic App and investigate the run history. This is a great result! Each iteration processes independently of the others. Quite similar behavior as with the splitOn command; however, it’s more difficult to use the resubmit function.
You must understand that, by default, the forEach branches are executed in parallel. You can modify this to sequential execution. Dive into the code view and add "operationOptions": "Sequential" to the forEach.
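The resulting forEach action in code view would look roughly like this; the action name, runAfter and the empty actions collection are placeholders, and the foreach expression mirrors the JSON example in this post:

```json
"For_each": {
  "type": "Foreach",
  "foreach": "@triggerBody()['OrderBatch']['Orders']",
  "operationOptions": "Sequential",
  "actions": { },
  "runAfter": { }
}
```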
Redo the test and you will see that this has no influence on the exception behavior. Every loop gets invoked, regardless whether the previous run failed.
The monitoring experience is great! You can easily scroll through all iterations to see which iteration succeeded and which one failed. If one of the actions fails within a forEach, the Logic App gets the Failed status assigned.
What should we use?
In order to have a real debatching experience, I recommend using the splitOn command within enterprise integration scenarios. The fact that each child message immediately gets its own workflow instance assigned makes the exception handling strategy easier and operational interventions more straightforward.
Do not forget to perform schema validation first and then invoke a nested workflow with the Request trigger, configured with the splitOn command. This will ensure that no invalid message disappears. Calling a nested workflow also offers the opportunity to pass the batch header information via the HTTP headers, so you can preserve header information in the child message. Another way to achieve this is by executing a Transformation in the first Logic App that adds header information to every child message.
The nested workflow cannot have a Response action, because it’s decorated with a splitOn trigger. If you want to invoke such a Logic App, you need to update the consuming Logic App action with the following expression: "operationOptions": "DisableAsyncPattern".
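In the consuming Logic App, that option goes on the action that invokes the nested workflow, roughly as sketched below; the action name, body and the workflow resource id are placeholders:

```json
"Invoke_nested_workflow": {
  "type": "Workflow",
  "inputs": {
    "host": {
      "triggerName": "manual",
      "workflow": { "id": "<resource id of the nested Logic App>" }
    },
    "body": "@triggerBody()"
  },
  "runAfter": { },
  "operationOptions": "DisableAsyncPattern"
}
```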
If we run the setup, explained above, we get the following debatching experience with header information preserved!
Logic Apps provide all required functionality to debatch XML and JSON messages. As always, it’s highly encouraged to investigate all options in depth and to conclude which approach suits your scenario best.
Thanks for reading!