10 Differences between Azure Functions and Logic Apps

Comparison

Developer experience

A popular comparison states that Azure Functions is code being triggered by an event, whereas Logic Apps is a workflow triggered by an event. This is reflected in the developer experience. Azure Functions are written entirely in code, which currently supports JavaScript (Node.js), C#, F#, Python, PHP, Batch, Bash and PowerShell. In Logic Apps, workflows are created with an easy-to-use visual designer, combined with a simple workflow definition language in the code view. Every developer has, of course, a personal preference. Logic Apps is much simpler to use, but that simplicity can become a limitation in complex scenarios. Azure Functions gives the developer a lot more flexibility and responsibility.
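
To make the "code triggered by an event" side concrete, here is a minimal sketch of an HTTP-triggered function, assuming the C# class-library programming model; the function name and authorization level are illustrative choices, not taken from the original post.

```csharp
using System;
using System.Net;
using System.Net.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class GreetingFunction
{
    // The code below only runs when the HTTP trigger fires.
    [FunctionName("Greeting")]
    public static HttpResponseMessage Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestMessage req)
    {
        // Any custom logic goes here; the developer is in full control.
        return req.CreateResponse(HttpStatusCode.OK, $"Hello, it is {DateTime.UtcNow:O}");
    }
}
```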

Connectivity

Logic Apps connects to an enormous variety of cloud and on-premises applications, ranging from Azure and Microsoft services, over SaaS applications and social media, to LOB systems. You can find the impressive list of connectors here. Each connector comes with an API connection, which stores the required credentials in a secure way. These API connections can be reused across multiple Logic Apps, which is great! Azure Functions has the concept of triggers, input bindings and output bindings. Most of these bindings connect your Azure Functions to other Azure services, such as Event Hubs, Storage, DocumentDB, etc. Consult the complete list here. The HTTP binding is probably the most popular one, as it allows the creation of serverless APIs. At the moment, there are no signs that Azure Functions aims to support as many bindings as Logic Apps offers.
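
As an illustration of those bindings, the sketch below lets the runtime handle all the storage plumbing: the function is triggered by a queue message and the output binding writes the payload to blob storage. The queue and container names are assumptions for the example.

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;

public static class OrderArchiver
{
    // Triggered by a message on the 'orders' queue; the blob output binding
    // persists the payload without any explicit storage SDK code.
    [FunctionName("ArchiveOrder")]
    public static void Run(
        [QueueTrigger("orders")] string orderMessage,
        [Blob("archive/{rand-guid}.json", FileAccess.Write)] out string archivedOrder)
    {
        archivedOrder = orderMessage;
    }
}
```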

Exception handling

Cloud solutions need to deal with transient fault handling. Logic Apps provides out-of-the-box functionality that allows you to configure automatic retries on every action. In case this doesn't solve the problem, the workflow gets a failed status and can be resubmitted after human intervention. This guarantees an at-least-once execution model, which is pretty reliable! In Azure Functions, you have the typical try/catch options available. If you want retries, you need to do the plumbing yourself, for example by introducing Polly. The way you can handle exceptions in the output binding depends on the language used and the type of output binding, which doesn't always give you the desired outcome. There are no resume / resubmit capabilities, unless you develop them yourself!
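
As a rough idea of that retry plumbing, the sketch below wraps an HTTP call in a Polly policy with exponential back-off; the retry count and back-off values are arbitrary choices for the example.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;

public static class ResilientBackendCaller
{
    private static readonly HttpClient Client = new HttpClient();

    // Retries transient failures (exceptions or 5xx responses) three times,
    // with an exponential back-off between the attempts.
    public static Task<HttpResponseMessage> CallBackendAsync(string url)
    {
        return Policy
            .Handle<HttpRequestException>()
            .OrResult<HttpResponseMessage>(response => (int)response.StatusCode >= 500)
            .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)))
            .ExecuteAsync(() => Client.GetAsync(url));
    }
}
```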

State

Until recently, Azure Functions always needed to be stateless and preferably idempotent. With the announcement of Azure Durable Functions, Microsoft brings state and long-running capabilities to Azure Functions, by leveraging the Durable Task Framework. This new framework allows sequential and parallel execution of several Functions, supports long-running tasks with pre-defined timeouts and provides stateful actors without the need for external storage. The state is automatically stored in Azure Storage queues, tables and blobs, which is disaster proof. I am looking forward to seeing how this will evolve. These long-running / stateful processes are inherently available in Logic Apps, except for the stateful actor model.
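
To give an impression of the programming model, here is a minimal orchestrator sketch based on the preview Durable Functions extension; the activity function names are invented for illustration.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class OrderOrchestration
{
    // Orchestrator function: calls three activity functions in sequence.
    // The framework checkpoints its progress to Azure Storage at every await.
    [FunctionName("ProcessOrder")]
    public static async Task<string> Run(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        var order = context.GetInput<string>();

        var validated = await context.CallActivityAsync<string>("ValidateOrder", order);
        var enriched = await context.CallActivityAsync<string>("EnrichOrder", validated);
        return await context.CallActivityAsync<string>("SendToErp", enriched);
    }
}
```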

Networking

Hybrid integration is a reality nowadays. Cloud services must be able to connect to on-premises resources in a secure and high-performing way. Azure Logic Apps performs this task via the On-premises Data Gateway, which needs to be installed on premises. Behind the scenes, it uses Azure Service Bus Relay to connect to the cloud in a firewall-friendly way, through encrypted channels. When using Azure Functions within an App Service Plan, you have more convenient hybrid connectivity options that reside on the network level. App Service Plans offer support for many networking options, such as Hybrid Connections, VNET Integration and App Service Environment. Via these options, you can integrate Azure Functions with your local network through a Site-to-Site VPN or ExpressRoute.

Deployment

Azure Resource Manager templates are the way to deploy resources across the Microsoft Azure platform. Fortunately, both Azure Functions and Logic Apps have built-in support for ARM deployments, for example through Visual Studio Release Management. Next to this, Azure Functions allows an easy setup of continuous deployment, triggered from sources like BitBucket, Dropbox, Git, GitHub, OneDrive and VSTS. This is ideal when multiple and frequent contributions need to be consolidated and tested. Additionally, Azure Functions now has deployment slots in preview. This allows you to deploy and test a vNext version first, before swapping that tested deployment slot with the current version in production.

Runtime

Logic Apps runs only in the cloud, as it has a dependency on Microsoft-managed connectors. As a consequence, you cannot debug, test or run Logic Apps locally. Azure Functions can easily be developed and debugged on your local workstation, which is a big plus for developer productivity. Via the Azure Functions Runtime (still in preview), you are able to deploy them on premises in Windows Containers, with SQL Server as a storage layer. Azure Functions is also supported on Azure Stack, and it has been announced as part of Azure IoT Edge to execute on small devices. This hosting flexibility is a big asset in phased migration scenarios towards the cloud.

Monitoring

Per Logic App, you have a nice overview of the previous runs and their corresponding outcomes. You can filter this history based on a time period and the resulting run status. The monitoring view of a workflow run is the same as the designer view, which makes it very intuitive. For each action, you can see the status and all inputs/outputs. With one click, you can enable integration with OMS, where you can search on tracked properties. A user-friendly, cross-Logic Apps dashboard on top of this OMS integration is on the roadmap. Each Azure Function comes with a Monitor tab, where you can see the execution history. There is also a live event stream that shows near real-time processing statistics in nice graphs. On top of that, there's full integration with Application Insights, where you can take advantage of the powerful Analytics queries.

Pricing Model

Logic Apps has a pure pay-per-usage billing model. You pay for each action that gets executed. It's important to be aware that you also pay for polling triggers, which can be a hidden cost. If you want to benefit from the capabilities of the Integration Account, you should be aware that it comes with a fixed monthly bill. With Azure Functions, you have two pricing options. You can opt for the fixed cost of an App Service Plan, where you reserve compute power on which you can run Azure Functions, but also Web, Mobile and API Apps. The second option is completely serverless, with a consumption plan based on resource consumption (memory per second) and the number of executions. Don't forget that the underlying Azure Storage account also comes with a rather small cost.

Security

Each particular binding or connector comes with its own security. In this section, I focus on the security of Logic Apps and Azure Functions exposed as an API. In order to access a Logic App with the HTTP trigger, the client must include a Shared Access Signature in the URL. The signature is generated via a secret key that can be regenerated at any time. There is also the ability to restrict access based on incoming IP addresses. To add more authorization logic, you can put Azure API Management in front of it. Azure Functions has a similar concept of API keys. The API key can be shared for the whole Function App (host key) or you can create a specific one for your Function. If you run your Azure Function in an App Service Plan, you can leverage its codeless authentication functionality with Active Directory, Google, Facebook, etc. Real authorization requires a small code change. Azure Functions Proxies can be a lightweight alternative to full-blown API Management, to add security on top of your HTTP-triggered Functions.

Conclusion

Based on the comparison above, you'll notice that a lot of factors are involved when deciding between the two technologies. First of all, it's important to see which technology supports the connectivity that you require. Do you want to write it yourself, or do you want to leverage out-of-the-box bindings / connectors? Next to that, my general guidance is as follows:

When dealing with synchronous request/response calls that execute more complex logic, Azure Functions is the preferred option. Logic Apps is better suited for asynchronous integration and fire-and-forget messaging that requires reliable processing. Logic Apps can be perfectly extended with Azure Functions to execute stateless tasks that cannot be fulfilled by the out-of-the-box Logic Apps capabilities.

Web APIs are often composed of both sync and async operations. If you follow the guidance stated above, you might end up with an API that uses both Azure Functions and Logic Apps. This is where Azure Functions Proxies has its value, as it can expose these separate microservices as a unified API. This will be discussed in another blog post.

Stay tuned for more!

10 ways to leverage scrum principles within integration!

Important note: this post is not intended to state that, when you do scrum, you should make your own interpretation of it. It explains how you can benefit from agile / scrum principles on integration projects that are not using the scrum methodology at all. It's a subtle, but important difference!

1. Prototype in an early stage 

I've been working on integration projects for more than 10 years, and every new assignment comes with its specific challenges: a new type of application to integrate with, a new protocol that is not supported out-of-the-box, and specific non-functional requirements that you have never faced before. Challenges can become risks if you do not tackle them soon. It's important to list them and to perform a short risk assessment.
 
Plan proof of concepts (PoCs) to overcome these challenges. Schedule these prototyping exercises early in the project, as they might influence the overall planning (e.g. extra development required) and budget (e.g. purchase of a third-party tool or plug-in). Perform them in an isolated sandbox environment (e.g. the cloud), so you do not lose time with organizational procedures and administration overhead. A PoC must have a clear scope and defined success criteria. Real-life examples where we introduced a PoC: validating the performance characteristics of the BizTalk MLLP adapter, determining the best design to integrate with the brand-new Dynamics 365 for Operations (AX), and testing the feature set of specific Logic Apps connectors against the requirements.

2. Create a Definition of Ready 

A Definition of Ready is a kind of prerequisite list that the development team and product owner agree on. This list contains the essential information that is required to kick off the development of a specific backlog item. It's important to agree on a complete, but not overly extended Definition of Ready. Typical items on an integration-focused Definition of Ready are: sample files, data contracts, transformation analysis and a single point of contact for each backend application involved.

This is a very important aspect in large integration projects. You want to avoid that your development team is constantly blocked by unclear dependencies, but on the other hand it's not advised to constantly postpone development, as this imposes a risk. It's a difficult balancing exercise that requires a pragmatic approach and a decent level of flexibility.
 
It's important to liberate your development team from the task of gathering these prerequisites, so they can focus on delivering business value. In large integration projects, it's a full-time occupation to chase the responsible people from the impacted teams to get the required specs or dependencies. The person taking up this responsibility has a crucial role in the success of the project. Excellent communication and people skills are a must.

3. Strive for a self-organized team

"The team lead gives direct orders to each individual team member." Get rid of this old-fashioned idea of "team work". First, the development team must be involved in estimating the effort for backlog items. That way, you get a realistic view of the expected development progress and you get the team motivated to meet their estimates. Secondly, it's highly advised to encourage the team to become self-organized. This means they decide how they organize themselves to get the maximum out of the team, to deliver high quality and to meet expectations. In the beginning you need to guide them in that direction, but it's amazing how quickly they adapt to that vision.

Trust is the basis of this kind of collaboration between the team lead (or product owner) and the team. I must admit that it wasn't easy for me in the beginning, as my natural tendency is to be in control. However, the advantages are incredible: team members become highly involved, take responsibility, are better motivated and show real dedication to the project.

One might think you lose control, but nothing could be further from the truth. Depending on the development progress, you can adjust the product backlog in collaboration with your stakeholders. It's also good to schedule regular demo sessions (with or without the customer) to provide your feedback to the development team.

Each team member has his or her own role and responsibilities within the team, even though no one ever told them to take it. Replacing one member of the team always has a drastic impact on the team's performance and behaviour. It's like the team loses part of its DNA and needs some time to adjust to the new situation. I'm blessed that I was always able to work together with highly motivated colleagues, but I can imagine it's a hell of a job to strive for a self-organized team that includes some unmotivated individuals.

4. Bridge the gap between teams

The agile vision encourages cross-functional teams, consisting of e.g. business analysts, developers and testers. Preferably, one person within the team can take multiple roles. However, if we face reality, many large organizations still have the mindset of teams per expertise (HR, Finance, .NET, Integration, Java, Testing…). Often there is no good interaction amongst these teams and they are even physically separated.

If you are part of the middleware team, you're stuck between two teams: the one that manages the source application and the one that develops the target system. Try to convince them to create cross-functional project teams that preferably work at the same place. If this is not an option, you can aim for at least a daily stand-up meeting with the most important key players (the main analysts and developers) involved. Avoid at all times that communication always goes via a management layer, as this is time consuming and a lot of context is lost. As a last resort, you can simply go to the floor where the team is situated on a daily basis and discuss the most urgent topics.

Throughout many integration projects, I've seen the importance of people and communication skills. These soft skills are a must to bridge the gap between different teams. Working full time behind your laptop on your own island is not the key to success within integration. Collaborate on all levels and across teams!

5. Leverage the power of mocking

In an ideal scenario, all backend services and modules we need to integrate with are already up and running. However, in reality, this is almost never the case. In a waterfall approach, integration would typically be scheduled in the last phase of the project, assuming all required prerequisites are ready at that moment in time. This puts a big risk on the integration layer. According to the scrum and agile principles, this must be avoided at all times.
 
This introduces a challenge for the development team. Developers need to make an abstraction of the external systems their solution relies on. They must get familiar with dependency injection and / or mocking frameworks that simulate backend applications. These techniques allow development of the integration layer to start with fewer prerequisites and ensure a fast delivery once the depending backend applications are ready. A great mocking framework for BizTalk Server is Transmock, definitely worth checking out if you face problems with mocking. Interesting blogs about this framework can be found here and here; I've also demonstrated its value in this presentation.
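
As a simple illustration of that abstraction (not tied to Transmock or any specific framework), the C# sketch below only depends on an interface, so a stub can stand in for the real backend until it becomes available; all names are invented for the example.

```csharp
// The integration logic depends on an abstraction of the backend system.
public interface IErpSystem
{
    string GetCustomer(string customerId);
}

public class OrderEnricher
{
    private readonly IErpSystem _erp;

    public OrderEnricher(IErpSystem erp)
    {
        _erp = erp;
    }

    public string Enrich(string order, string customerId)
    {
        return order + " | " + _erp.GetCustomer(customerId);
    }
}

// Stub used in tests or early development, before the real ERP connection exists.
public class FakeErpSystem : IErpSystem
{
    public string GetCustomer(string customerId) => "Customer " + customerId;
}
```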

6. Introduce spikes to check connectivity

Integration is all about connecting backend systems seamlessly with each other. The setup of a new connection with a backend system can often be a real hassle: exceptions need to be made in the corporate firewall, permissions must be granted on test environments, security should be configured correctly, valid test data sets must be available, etc…
 
In many organizations, these responsibilities are spread across multiple teams and the procedures to request such changes can cause a lot of administrative and time-consuming overhead. In order to avoid that your development team gets blocked by such organizational waste, it is advised to put these connectivity setups early on the product backlog as "spikes". When the real development work starts in a later iteration, the connectivity setup has already been given a green light.

7. Focus first on end-to-end

This flowchart explains in depth the rules you can apply to split user stories. Integration scenarios match best with Workflow Steps. This advice is really helpful: "Can you take a thin slice through the workflow first and enhance it with more stories later?". The first focus should be to get it working end-to-end, so that at least some data is exchanged between the source and target application. This can be done with a temporary data contract, within a simplified security model, and without more advanced features like caching, sequence controlling, duplicate detection, batching, etc.
 
As a real-life example, we recently had the request to expose an internal API that must consume an external API to calculate distances. There were some additional requirements: the responses from the external API must be stored for a period of 1 month, to save transaction costs on the external API; authentication must be performed with the identity of the requesting legal entity, so it can be billed separately; and both a synchronous and an asynchronous internal API must be exposed. The responsibility of the product owner is to find the Minimum Viable Product (MVP). In this case, it was a synchronous internal API, without caching and with one fixed identity for the whole organization. During later phases, this API was enhanced with caching, a dynamic identity and an async interface.
 
In some projects, requirements are written in stone upfront and are not subject to negotiation: the interface can only be released in production if all requirements are met. In such cases, it's also a good exercise to find the MVP required for acceptance testing. That way, you can release faster internally, which results in faster feedback from internal testing.

8. Put common non-functionals on the Definition of Done

In middleware solutions, there are often requirements on high performance, high throughput and large message handling. Most of these requirements can be tackled by applying best practices in your development: use a streaming design in order to avoid loading messages entirely into memory, reduce the number of persistence points, cache configuration values wherever applicable, etc.
 
It's a good practice to put such development principles on the Definition of Done, to ensure the overall quality of your product. Code reviews should check whether these best practices are applied. Only when specific measures need to be taken to meet exceptional performance criteria is it advised to list those requirements explicitly as user stories on the product backlog.

"Done" also means: it's tested and can be shipped at any moment. Agree on the required level of test automation: is unit testing (white box) sufficient, do you fully rely on manual acceptance testing, or is a minimal level of automated system testing (black box) required? Involve the customer in this decision, as it impacts the team composition, quality and budget. It's also a common practice to ensure automated deployment is in place, so you can release quickly, with minimal impact. It's fantastic to see team members challenging each other, during the daily stand-up, to verify whether the Definition of Done has been respected.

9. Aim for early acceptance (testing)

In quite a lot of ERP implementations, go-live is performed in a few big phases, preceded by several months of development. Mostly, acceptance testing is planned at the same pace. This means that flows developed at the beginning of the development stage will remain untouched for several months, until acceptance testing is executed. One important piece of advice here: acceptance testing should follow the iterative development approach and not the slow-paced go-live schedule.
 
One of the base principles of an agile approach is to get fast feedback: fail fast and cheap. Early acceptance testing will ensure your integrations are evaluated by the end users against the requirements. If possible, also involve operations in this acceptance process: they will be able to provide feedback on the monitoring, alerting and troubleshooting capabilities of your integration solution. This feedback is very useful to optimize the integration flows and to take these lessons learned into account for subsequent development efforts. This approach can avoid a lot of refactoring afterwards.
 
Testing is not the only way to get feedback. Try to schedule demos on a regular basis, to verify whether you are heading in the right direction. It's very important to adapt the demo to your stakeholders. A demo for operations can be done with technical tools, while explaining all details about reliability and security. When presenting to functional key users, keep the focus on the business process and the added value that integration brings. Try to include both the source and target application, so they can witness the end result without exactly knowing what is under the hood. If you can demonstrate that you create a customer in one application and it gets synchronised into two other applications within 10 seconds, you have them on your side!

10. Adapt to improve

Continuous improvement is a key to success. This improvement must be reflected on two levels: your product and your team. Let's first consider improvements to the product, of which there are two types. There are optimizations that are derived from direct feedback from your stakeholders. They provide immediate value to your product, which in this case is your integration project. These can be placed on the backlog. Secondly, there are adaptations that result in indirect value, such as refactoring. Refactoring is intended to stabilize the product, to improve its maintainability and to prepare it for change. It's advised to only refactor a codebase that is thoroughly tested, to ensure you do not introduce regression bugs.
 
Next to this, it's even more important to challenge the way the team is working and collaborating. Recurring retrospectives are the starting point, but they must result in real actions. Let the development team decide on the subjects they want to improve. Sometimes these are quick wins: making some working agreements about collaboration, communication, code reviews, etc. Other actions might take more time: improving the development experience, extending the unit testing platform, optimizing the ALM approach. All these actions result in better collaboration, higher productivity and faster release cycles.

I find it quite challenging to deal with such indirect improvements. I used to place them on the backlog as well, while the team decided on their priority. We mixed them with backlog items that result in direct business value, in a 90% (direct value) / 10% (indirect value) proportion. The drawback of this approach is that not everyone is involved in the indirect improvements. Another way to tackle this is to reserve one day every two weeks that is dedicated to such improvements. That way the whole team is involved in the process, which encourages the idea of having a self-organized development team.

Hope you’ve enjoyed this one!

Toon

The Routing Slip Pattern

The Pattern

Introduction

A routing slip is a configuration that specifies a sequence of processing steps (services). This routing slip must be attached to the message that is to be processed. Each service (processing step) is designed to receive the message, perform its functionality (based on the configuration) and invoke the next service. That way, a message gets processed sequentially by multiple services, without the need for a coordinating component. The schema below is taken from Enterprise Integration Patterns.

Routing Slip

Routing slips can be configured in any language; JSON and XML are quite popular. An example of a simple routing slip can be found below. The header contains the name of the routing slip and a counter that carries the current step number. Each service is represented by a routing step. A step has its own name, to identify the service to be invoked, and a specific set of key-value configuration pairs.
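
Since the original sample is not included in this version of the post, the snippet below is a minimal sketch of what such a routing slip could look like; all names and configuration keys are purely illustrative.

```json
{
  "header": {
    "routingSlipName": "OrderToErp",
    "currentStep": 0
  },
  "steps": [
    { "name": "Decode",    "config": { "encoding": "AS2" } },
    { "name": "Transform", "config": { "mapName": "Order_To_ErpOrder.xslt" } },
    { "name": "Send",      "config": { "destination": "erp-orders-queue" } }
  ]
}
```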

Note that this is just one way to represent a routing slip. Feel free to add your personal flavor…

Assign Routing Slip

There are multiple ways to assign a routing slip to a message. Let’s have a look:

  • External: the source system already attaches the routing slip to the message
  • Static: when a message is received, a fixed routing slip is attached to it
  • Dynamic: when a message is received, a routing slip is attached, based on some business logic
  • Scheduled: the integration layer has routing slips scheduled that also contain a command to retrieve a message

Service

A service is considered a "step" within your routing slip. When defining a service, you need to design it to be generic. The logic executed within the service must be driven by the configuration, if any is required. Ensure your service has a single responsibility and a clearly bounded scope.

A service must consist of three steps:

  • Receive the message
  • Process the message, based on the routing slip configuration
  • Invoke the next service, based on the routing slip configuration

There are multiple ways to invoke services:

  • Synchronous: the next service is invoked without any persistence in between (e.g. in memory). This has the advantage that it will perform faster.
  • Asynchronous: the next service is invoked with persistence in between (e.g. a queue). This has the advantage that reliability increases, but performance degrades.

Think about the desired way to invoke services. If required, a combination of sync and async can be supported.
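
The C# sketch below illustrates those three steps in a technology-agnostic way; all type and member names are invented for illustration, and the sync / async hand-over is hidden behind the abstract InvokeNext method.

```csharp
using System.Collections.Generic;

// Conceptual sketch only: the type names are not taken from any framework.
public class RoutingSlipStep
{
    public string Name { get; set; }
    public Dictionary<string, string> Config { get; set; }
}

public class RoutingSlip
{
    public int CurrentStep { get; set; }
    public List<RoutingSlipStep> Steps { get; set; }
}

public abstract class RoutingSlipService
{
    // 1. Receive the message together with its routing slip.
    public void Handle(string message, RoutingSlip slip)
    {
        var step = slip.Steps[slip.CurrentStep];

        // 2. Process the message, driven purely by this step's configuration.
        var result = Process(message, step.Config);

        // 3. Invoke the next service, if any step remains.
        slip.CurrentStep++;
        if (slip.CurrentStep < slip.Steps.Count)
        {
            InvokeNext(slip.Steps[slip.CurrentStep].Name, result, slip);
        }
    }

    // The actual functionality of this service (e.g. decode, transform, send).
    protected abstract string Process(string message, Dictionary<string, string> config);

    // Hand-over to the next service: an in-memory call (sync) or a queue message (async).
    protected abstract void InvokeNext(string serviceName, string message, RoutingSlip slip);
}
```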

Advantages

Encourages reuse

Integrations are composed of reusable and configurable building blocks. The routing slip pattern forces you to analyze, develop and operate in a streamlined manner. Reuse is heavily encouraged on different levels: the way analysis is performed, how patterns are implemented, the way releases are rolled out and how operational tasks are performed. One unified way of working, built on reusability.

Configuration based

Your integration is completely driven by the assigned routing slip. There are no hard-coded links between components. This allows you to change its behavior without the need for a redeployment. The configuration also serves as a great source of documentation, as it explains exactly what message exchanges are running on your middleware and what they do.

Faster release cycles

Once you have set up a solid routing slip framework, you can increase your release cadence. By leveraging your catalogue of reusable services, you heavily benefit from previous development efforts. The focus is only on the specifics of a new message exchange, which are mostly data bound (e.g. mapping). There's also a tremendous increase in agility when it comes to small changes. Just update the routing slip configuration and it has an immediate effect on your production workload.

Technology independent

A routing slip is agnostic to the underlying technology stack. The way the routing slip is interpreted is, of course, specific to the technology used. This makes it possible to have a unified integration solution, even if it is composed of several different technologies. It also enables cross-technology message exchanges. As an example, you can have an order that is received via an AS2 Logic App, transformed, and sent to an on-premises BizTalk Server that inserts it into the mainframe, all governed by a single routing slip configuration.

Provides visibility

A routing slip can introduce more visibility into the message exchanges, certainly from an operational perspective. If a message encounters an issue, operations personnel can immediately consult the routing slip to see where the message comes from, what steps have already been executed and where it is heading. This visibility can be improved by updating the routing slip with some extra historical information, such as the service start and end times. Why not even include a URL in the routing slip that points to a wiki page or knowledge base about that interface type?

Pitfalls

Not enough reusability

Not every integration project is well-suited for the routing slip pattern. During the analysis phase, it's important to identify the integration needs and to see if there are a lot of similarities between all message exchanges. When a high level of reusability is detected, the routing slip pattern might be a good fit. If all integrations are too heterogeneous, you'll introduce more overhead than benefits.

Too complex logic

A common pitfall is adding too much complexity to the routing slip. Try to stick as much as possible to a sequential series of steps (services) that are executed. Some conditional decision logic inside a routing slip might be acceptable, but define clear boundaries for such logic. Do not start writing your own workflow engine, with its own workflow language. Keep the routing slip logic clean and simple, to stay true to the purpose of a routing slip.

Limited control

In case of maintenance on the surrounding systems, you often need to stop a message flow. Let's take a scenario where you face the following requirement: "Do not send orders to SAP for the coming 2 hours". One option is to stop the message exchange at its source, e.g. stop receiving messages from an SFTP server. In case this is not accepted, because these orders are also sent to other systems that should not be impacted, things get more complicated. You can stop the generic service that sends a message to SAP, but then you also stop sending other message types… Think about this upfront!

Hard deployments

A very common pain point of a high level of reuse is the impact of upgrading a generic service that is used all over the place. There are different ways to reduce the risks of such upgrades, of which automated system testing is an important one. Within the routing slip, you can explicitly specify the version of a service you want to invoke. That way, you can upgrade services gradually to the latest version, without the risk of a big-bang deployment. Define a clear upgrade policy, to avoid too many different versions of a service running side-by-side.

Monitoring

A message exchange is spread across multiple loosely coupled service instances, which can pose a monitoring challenge. Many technologies offer great monitoring insights for a single service instance, but lack an overall view across multiple service instances. Introducing a correlation ID into your routing slip can greatly improve the monitoring experience. This ID can be generated the moment you initialize a routing slip.

Conclusion

Routing slips are a very powerful mechanism to deliver unified and robust integrations in a fast way. The main key takeaways of this blog post are:

  • Analyze in depth whether you can benefit from the routing slip pattern
  • Limit the complexity that the routing slip handles
  • Have explicit versioning of services inside the routing slip
  • Include a unique correlation ID into the routing slip
  • Add historical data to the routing slip

Hope this was a useful read!
Toon

Run BizTalk extension objects in Logic Apps

Extension objects are used to consume external .NET libraries from within XSLT maps. This is often required to perform database lookups or complex functions during a transformation. Read more about extension objects in this excellent blog.
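
For context, this is roughly how extension objects work in plain .NET: a helper class is registered on an XsltArgumentList and becomes callable from the XSLT. The namespace URI and class names below are illustrative, not taken from the solution described in this post.

```csharp
using System.IO;
using System.Xml;
using System.Xml.Xsl;

public static class MapRunner
{
    // Executes an XSLT map and exposes a .NET helper class to it as an
    // extension object, similar to what BizTalk does behind the scenes.
    public static string Transform(string inputXml, string xsltPath)
    {
        var xslt = new XslCompiledTransform();
        xslt.Load(xsltPath);

        var arguments = new XsltArgumentList();
        // The namespace URI must match the one referenced inside the XSLT.
        arguments.AddExtensionObject("http://schemas.example.org/helpers", new LookupHelper());

        using (var reader = XmlReader.Create(new StringReader(inputXml)))
        using (var writer = new StringWriter())
        {
            xslt.Transform(reader, arguments, writer);
            return writer.ToString();
        }
    }
}

// Example extension object: its public methods become callable from the XSLT.
public class LookupHelper
{
    public string GenerateGuid()
    {
        return System.Guid.NewGuid().ToString();
    }
}
```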

Analysis

Requirements

We are facing two big challenges:

  1. We must execute the existing XSLTs with extension objects in Logic App maps
  2. On-premises Oracle and SQL databases must be accessed from within these maps

Analysis

It’s clear that we should extend Logic Apps with non-standard functionality. This can be done by leveraging Azure Functions or Azure API Apps. Both allow custom coding, integrate seamlessly with Logic Apps and offer the following hybrid network options (when using App Service Plans):

  • Hybrid Connections: most applicable for lightweight integrations and development / demo purposes
  • VNET Integration: if you want to access a number of on-premises resources through your Site-to-Site VPN
  • App Service Environment: if you want to access a large number of on-premises resources via ExpressRoute

As the pricing models are nearly identical, because both require an App Service Plan, the choice was made for Azure API Apps. The main reason was the existing Web API knowledge within the organization.

Design

A Site-to-Site VPN is used to connect to the on-premises SQL and Oracle databases. By using a Standard App Service Plan, we can enable VNET Integration on the custom Transform API App. Behind the scenes, this creates a Point-to-Site VPN between the API App and the VNET, as described here. The Transform API App can easily be consumed from the Logic App, while being secured with Active Directory authentication.

Solution

Implementation

The following steps were needed to build the solution. More details can be found in the referenced documentation.

  1. Create a VNET in Azure. (link)
  2. Setup a Site-to-Site VPN between the VNET and your on-premises network. (link)
  3. Develop an API App that executes XSLTs with their corresponding extension objects. (link)
  4. Provide Swagger documentation for the API App. (link)
  5. Deploy the API App. Expose the Swagger metadata and configure CORS policy. (link)
  6. Configure VNET Integration to add the API App to the VNET. (link)
  7. Add Active Directory authentication to the API App. (link)
  8. Consume the API App from within Logic Apps.

Transform API

The source code of the Transform API can be found here. It leverages Azure Blob Storage to retrieve the required files. The Transform API must be configured with the required app settings, which define the blob storage connection string and the containers where the artefacts will be uploaded.

The Transform API offers one Transform operation, that requires 3 parameters:

  • InputXml: the byte[] that needs to be transformed
  • MapName: the blob name of the XSLT map to be executed
  • ExtensionObjectName: the blob name of the extension object to be used

Sample

You can run this sample to test the Transform API with custom extension objects.

Input XML

This is a sample message that can be provided as input for the Transform action.

Transformation XSLT

This XSLT must be uploaded to the right blob storage container and will be executed during the Transform action.

Extension Object XML

This extension object must be uploaded to the right blob storage container and will be used to load the required assemblies.

External Assembly

Create an assembly named TVH.Sample.dll that contains the class Common.cs. This class contains a simple method to generate a GUID. Upload this assembly to the right blob storage container, so it can be loaded at runtime.
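
As the sample code itself is not reproduced here, a minimal version of such a class could look like this; the method name is an assumption, the actual source is available in the referenced GitHub repository.

```csharp
using System;

namespace TVH.Sample
{
    // Compiled into TVH.Sample.dll and invoked from the XSLT via the extension object.
    public class Common
    {
        public string GetGuid()
        {
            return Guid.NewGuid().ToString();
        }
    }
}
```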

Output XML

Deploy the Transform API, using the instructions on GitHub. You can easily test it using the Request / Response actions:

As a response, you should get the following output XML, that contains the generated GUID.

Important remark: do not forget to add security to your Transform API (step 7), as it is accessible on the public internet by default!

Conclusion

Thanks to the Logic Apps extensibility through API Apps and their VNET integration capabilities, we were able to build this solution in a very short time span. The solution offers an easy way to migrate BizTalk maps as-is towards Logic Apps, which is a big time saver! Access to resources that remain on premises is also a big plus nowadays, as many organizations have a hybrid application landscape.

Hope to see this functionality out-of-the-box in the future, as part of the Integration Account!

Thanks for reading. Sharing is caring!
Toon

10 tips for enterprise integration with Logic Apps

Democratization of integration

Before we dive into the details, I want to provide some reasoning behind this post. With the rise of cloud technology, integration takes a more prominent role than ever before. In Microsoft’s integration vision, democratization of integration is on top of the list.

Microsoft aims to take integration out of its niche market and offer it as an intuitive and easy-to-use service to everyone. The so-called Citizen Integrators are now capable of creating lightweight integrations without the steep learning curve that, for example, BizTalk Server requires. Such integrations are typically point-to-point, user-centric and have some accepted level of fault tolerance.

As an Integration Expert, you must be aware of this. Enterprise integration faces completely different requirements than lightweight citizen integration: loose coupling is required, no message loss is accepted because it concerns mission-critical interfacing, integrations must be optimized for operations personnel (monitoring and error handling), etc.

Keep this in mind when designing Logic App solutions for enterprise integration! Make sure you know your cloud and integration patterns. Ensure you understand the strengths and limits of Logic Apps. The advice below can give you a jump start in designing reliable interfaces within Logic Apps!

Design enterprise integration solutions

1. Decouple protocol and message processing

Once you have created a Logic App that receives a message via a specific transport protocol, it's extremely difficult to change the protocol afterwards. This is because the subsequent actions of your Logic App often have a hard dependency on your protocol trigger / action. The advice is to perform the protocol handling in one Logic App and hand the message over to another Logic App to perform the message processing. This decoupling allows you to change the receiving transport protocol in a flexible way, in case the requirements change or in case a certain protocol (e.g. SFTP) is not available in your DEV / TEST environment.

2. Establish reliable messaging

You must realize that every action you execute is performed by an underlying HTTP connection. By its nature, an HTTP request/response is not reliable: the service is not aware if the client disconnects during request processing. That's why receiving messages must always happen in two phases: first you mark the data as returned by the service; second you label the data as received by the client (in our case the Logic App). The Service Bus Peek-Lock pattern is a great example that provides such at-least-once reliability. Another example can be found here.
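
For reference, this is what the peek-lock model looks like in plain C# with the classic Service Bus SDK; the connection string and queue name are placeholders, and the sketch is only meant to show the two phases described above.

```csharp
using Microsoft.ServiceBus.Messaging;

public static class PeekLockReceiver
{
    // Two-phase receive: the message is locked when received and only removed
    // from the queue once the client explicitly completes it.
    public static void ReceiveOne(string connectionString, string queueName)
    {
        var client = QueueClient.CreateFromConnectionString(
            connectionString, queueName, ReceiveMode.PeekLock);

        var message = client.Receive();   // phase 1: data returned by the service
        if (message == null) return;

        try
        {
            // ... process the message here ...
            message.Complete();           // phase 2: data confirmed as received
        }
        catch
        {
            message.Abandon();            // release the lock so the message can be retried
            throw;
        }
    }
}
```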

3. Design for reuse

Real enterprise integration is composed of several common integration tasks such as: receive, decode, transform, debatch, batch, enrich, send, etc. In many cases, each task is performed by a combination of several Logic App actions. To avoid reconfiguring these tasks over and over again, you need to design the solution upfront to encourage reuse of these common integration tasks. You can, for example, use the Process Manager pattern, which orchestrates the message processing by reusing nested Logic Apps, or introduce the Routing Slip pattern to build integrations on top of generic Logic Apps. Reuse can also be achieved on the deployment side, by having some kind of templated deployment of reusable integration tasks.

4. Secure your Logic Apps

From a security perspective, you need to take into account both role-based access control (RBAC) to your Logic App resources and runtime security considerations. RBAC can be configured in the Access Control (IAM) tab of your Logic App or on the Resource Group level. The runtime security really depends on the triggers and actions you're using. As an example: Request endpoints are secured via a Shared Access Signature that must be part of the URL, and IP restrictions can be applied. Azure API Management is the way to go if you want to govern API security centrally, on a larger scale. It's a good practice to assign the minimum required privileges (e.g. read-only) to your Logic Apps.

5. Think about idempotence

Logic Apps can be considered composite services, built on top of several APIs. APIs leverage the HTTP protocol, which can cause data consistency issues due to its nature. As described in this blog, there are multiple ways the client and server can get misaligned about the processing state. In such situations, clients will mostly retry automatically, which could result in the same data being processed twice at the server side. Idempotent service endpoints are required in such scenarios, to avoid duplicate data entries. Logic Apps connectors that provide upsert functionality are very helpful in these cases.

6. Have a clear error handling strategy

With the rise of cloud technology, exception and error handling become even more important. You need to cope with failure when connecting to multiple on-premises systems and cloud services. With Logic Apps, retry policies are your first resort to build resilient integrations. You can configure a retry count and interval on every action; there's no support for exponential retries or the circuit breaker pattern. In case the retry policy doesn't solve the issue, it's advised to return a clear error description within sync integrations and to ensure a resumable workflow within async integrations. Read here how you can design a good resume / resubmit strategy.

7. Ensure decent monitoring

Every IT solution benefits from good monitoring. It provides visibility and improves the operational experience for your support personnel. If you want to expose business properties within your monitoring, you can use Logic Apps custom outputs or tracked properties. These can be consumed via the Logic Apps Workflow Management API or via OMS Log Analytics. From an operational perspective, it's important to be aware that there is an out-of-the-box alerting mechanism that can send emails or trigger Logic Apps in case a run fails. Unfortunately, Logic Apps has no built-in support for Application Insights, but you can leverage extensibility (a custom API App or Azure Function) to achieve this. If your integration spans multiple Logic Apps, you must foresee correlation in your monitoring / tracing! Find more details about monitoring in Logic Apps here.

8. Use async wherever possible

Solid integrations are often characterized by asynchronous messaging. Unless the business requirements really demand request/response patterns, try to implement them asynchronously. This comes with the advantage that you introduce real decoupling, both from a design and a runtime perspective. Introducing a queuing system (e.g. Azure Service Bus) in fire-and-forget integrations results in highly scalable solutions that can handle an enormous amount of messages. Retry policies in Logic Apps must have different settings depending on whether you're dealing with async or sync integration. Read more about it here.

9. Don’t forget your integration patterns

Whereas BizTalk Server forces you to design and develop according to specific integration patterns, Logic Apps is more intuitive and easier to use. This comes with a potential downside: you may forget about integration patterns, because they are not suggested by the service itself. As an integration expert, it's your responsibility to determine which integration patterns should be applied on your interfaces. Loose coupling is common for enterprise integration. You can, for example, introduce Azure Service Bus, which provides a publish/subscribe architecture. Its message size limitation can be worked around by leveraging the Claim Check pattern, with Azure Blob Storage. This is just one example of introducing enterprise integration patterns.

10. Apply application lifecycle management (ALM)

The move to a PaaS architecture should be done carefully and must be governed well, as described here. Developers should not have full access to the production resources within the Azure portal, because the change of one small setting can have an enormous impact. Therefore, it's very important to set up ALM to deploy your Logic App solutions throughout the DTAP street. This ensures uniformity and avoids human deployment errors. Check this video to get a head start on continuous integration for Logic Apps and read this blog on how to use Azure Key Vault to retrieve passwords within ARM deployments. Consider ALM an important aspect of your disaster recovery strategy!

Conclusion

Yes, we can! Logic Apps really is a fit for enterprise integration, if you know what you're doing! Make sure you know your cloud and integration patterns. Ensure you understand the strengths and limits of Logic Apps. The Logic Apps framework is a truly amazing and stable platform that brings a whole range of new opportunities to organizations. The way you use it should depend on the type of integration you are facing!

Interested in more?  Definitely check out this session about building loosely coupled integrations with Logic Apps!

Any questions or doubts? Do not hesitate to get in touch!
Toon

Reliably receive SQL data in Logic Apps

Scenario

Let’s discuss the scenario briefly.  We need to consume data from the following table.  All orders with the status New must be processed!

The table can be created with the following SQL statement:
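
That statement is not included in this version of the post; a minimal sketch of such a table could look as follows, where all column names apart from Id and Status are assumptions.

```sql
CREATE TABLE dbo.[Order]
(
    Id          INT IDENTITY(1,1) PRIMARY KEY,
    Description NVARCHAR(100) NOT NULL,
    Status      NVARCHAR(20)  NOT NULL DEFAULT ('New'),   -- e.g. New / Processed / Peeked / Completed
    ModifiedOn  DATETIME      NOT NULL DEFAULT (GETDATE())
);
```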

First Attempt

Solution

To receive the data, I prefer to create a stored procedure. This avoids maintaining potentially complex SQL queries within your Logic App. The following stored procedure selects the first order with status New and updates its status to Processed in the same statement. Note that it also returns the @@ROWCOUNT, as this will come in handy in the next steps.
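
A sketch of what such a stored procedure could look like, reusing the assumed table definition above:

```sql
CREATE PROCEDURE dbo.GetNewOrder
AS
BEGIN
    SET NOCOUNT ON;

    -- Select the first new order and mark it as Processed in one statement.
    UPDATE TOP (1) dbo.[Order]
    SET    Status = 'Processed', ModifiedOn = GETDATE()
    OUTPUT inserted.Id, inserted.Description
    WHERE  Status = 'New';

    -- Surfaces as the ReturnCode mentioned below.
    RETURN @@ROWCOUNT;
END
```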

The Logic App fires with a Recurrence trigger. The stored procedure gets executed and, via the ReturnCode, we can easily determine whether it returned an order or not. In case an order is retrieved, its further processing can be performed, which will not be covered in this post.

Evaluation

If you have a BizTalk background, this is a similar approach to using a polling SQL receive location. One very important difference: the BizTalk receive adapter executes the stored procedure within the same distributed transaction in which it persists the data in the MessageBox, whereas Logic Apps is completely built on APIs that have no notion of MSDTC at all.

In failure situations, when a database shuts down or the network connection drops, it could be that the order is already marked as Processed, but never reaches the Logic App. Depending on the returned error code, your Logic App will either end up in a Failed state without a clear description, or retry automatically (for error codes 429 and 5xx). In both situations you're facing data loss, which is not acceptable for our scenario.

Second attempt

Solution

We need to come up with a reliable way of receiving the data. Therefore, I suggest implementing a pattern similar to the Azure Service Bus Peek-Lock. Data is received in 2 phases:

  1. You mark the data as Peeked, which means it has been assigned to a receiving process
  2. You mark the data as Completed, which means it has been received by the receiving process

Next to these two explicit processing steps, there must be a background task that reprocesses messages that have had the Peeked status for too long. This makes our solution more resilient.

Let’s create the first stored procedure that marks the order as Peeked.

The second stored procedure accepts the OrderId and marks the order as Completed.

The third stored procedure should be executed by a background process, as it sets the status back to New for all orders that have had the Peeked status for more than 1 hour.
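
Sketches of what these three stored procedures could look like, again based on the assumed table definition; the names and columns are illustrative, apart from PeekNewOrder, which is referenced by the Logic App expression below.

```sql
-- 1. Peek: assign the first new order to the receiving process.
CREATE PROCEDURE dbo.PeekNewOrder
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE TOP (1) dbo.[Order]
    SET    Status = 'Peeked', ModifiedOn = GETDATE()
    OUTPUT inserted.Id, inserted.Description
    WHERE  Status = 'New';
END
GO

-- 2. Complete: confirm that the order was received by the Logic App.
CREATE PROCEDURE dbo.CompleteOrder @OrderId INT
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE dbo.[Order]
    SET    Status = 'Completed', ModifiedOn = GETDATE()
    WHERE  Id = @OrderId;
END
GO

-- 3. Background task: release orders that have been Peeked for more than 1 hour.
CREATE PROCEDURE dbo.ResetPeekedOrders
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE dbo.[Order]
    SET    Status = 'New', ModifiedOn = GETDATE()
    WHERE  Status = 'Peeked'
      AND  ModifiedOn < DATEADD(HOUR, -1, GETDATE());
END
```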

Let's now consume the two stored procedures from within our Logic App. First we peek for a new order and, once we have received it, the order gets completed. The OrderId is retrieved via this expression: @body('Execute_PeekNewOrder_stored_procedure')?['ResultSets']['Table1'][0]['Id']

The background task could be executed by a SQL Agent Job (SQL Server only) or by another Logic App that is fired every hour.

Evaluation

Happy with the result? Not 100%! What if something goes wrong during further downstream processing of the order? The only way to reprocess the message is by changing its status in the source database, which can be quite a cumbersome experience for operators. Why can't we just resume the Logic App in case of an issue?

Third Attempt

Solution

As explained over here, Logic Apps has an extremely powerful mechanism for resubmitting workflows. Because Logic Apps has – at the time of writing – no triggers for SQL Server, a resubmit of the Recurrence trigger is quite useless. Therefore, I only want to complete my order when I'm sure that I'll be able to resubmit it if something fails during its further processing. This can be achieved by splitting the Logic App into two separate workflows.

The first Logic App peeks for the order and parses the result into a JSON representation. This JSON is passed to the next Logic App.

The second Logic App gets invoked by the first one. This Logic App completes the order first and afterwards performs the further processing. In case something goes wrong, a resubmit of the second Logic App can be initiated.

Evaluation

Very happy with the result as:

  • The data is received from the SQL table in a reliable fashion
  • The data can be resumed in case further processing fails

Conclusion

Don't forget that every action is HTTP-based, which can have an impact on reliability. Consider a two-phased approach for receiving data, in case you cannot afford message loss. The same principle can also be applied to receiving files: read the file content in one action and delete the file in another action. Always think upfront about resume / resubmit scenarios. Triggers are better suited for resubmit than actions, so if triggers are available: always use them!

This may sound like overkill to you, as these considerations require some additional effort. My advice is to first determine whether your business scenario must cover such edge-case failure situations. If yes, this post can be a starting point for your final solution design.

Liked this post? Feel free to share with others!
Toon

BTS 2016 Feature Pack I – Continuous Deployment

In case you are interested in a detailed walk-through on how to set up continuous deployment, please check out this blog post on Continuous Deployment in BizTalk 2016, Feature Pack 1.

What is included?

Below, you can find a bullet point list of features included in this release.

  • An application version has been added and can be easily specified.
  • Automated deployment from VSTS, using a local deploy agent.
  • Automated deployment of schemas, maps, pipelines and orchestrations.
  • Automated import of multiple binding files.
  • Binding file management through VSTS environment variables.
  • Update of specific assemblies in an existing BizTalk application (with downtime)

What is not included?

This is a list of features that are currently not supported by the new VSTS release task:

  • Build BizTalk projects in VSTS hosted build servers.
  • Deployment to a remote BizTalk server (local deploy agent required)
  • Deployment to a multi-server BizTalk environment.
  • Deployment of shared artifacts (e.g. a schema that is used by several maps)
  • Deployment of more advanced artifacts: BAM, BRE, ESB Toolkit…
  • Control of which host instances / ports / orchestrations should be (re)started
  • Undeploy a specific BizTalk application, without redeploying it again.
  • Use the deployment task in TFS 2015 Update 2+ (no download supported)
  • Execute the deployment without the dependency of VSTS.

Conclusion!

Microsoft released this VSTS continuous deployment service into the wild, clearly stating that this is a first step in the BizTalk ALM story. That sounds very promising to me, as we can expect more functionality to be added in future feature packs!

After intensively testing the solution, I must conclude that there is a stable and solid foundation to build upon. I really like the design and how it is integrated with VSTS. This foundation can now be extended with the missing pieces, so we end up with great release management!

At the moment, this functionality can be used by BizTalk Server 2016 Enterprise customers that have a single-server environment and only use the basic BizTalk artifacts. Other customers should still rely on the incredibly powerful BizTalk Deployment Framework (BTDF), until the next BizTalk Feature Pack release. At that moment, we can re-evaluate! I'm quite confident that we're heading in the right direction!

Looking forward to more on this topic!

Toon

BTS 2016 Feature Pack I – Management & Operational API

The documentation of the Management API can be found here. In short: almost everything you can access in the BizTalk Administration Console is now available in the BizTalk Management API. The API is very well documented with Swagger, so it's pretty much self-explanatory.

What is included?

A complete list of available operations can be found here.

Deployment

There are new opportunities on the deployment side. Here are some ideas that popped into my mind:

  • Dynamically create ports. Some messaging solutions are very generic. Adding new parties is sometimes just a matter of creating a new set of receive and send ports. This can now be done through this Management API, so you don’t need to do the plumbing yourself anymore.
  • Update tracking settings. We all know it is quite difficult to keep your tracking settings consistent across all applications and binding files. The REST API can now be leveraged to change the tracking settings on the fly to their desired state.

Runtime

Also the runtime processing might benefit from this new functionality. Some scenarios:

  • Start and stop processes on demand. In situations where the business wants control over when certain processes should be active, you can start/stop receive/send ports on demand. Just a small UI on top of the Management API, including the appropriate security measures, and you're good to go!
  • Maintenance windows. BizTalk is in the middle of your application landscape. Deployments on backend applications can have a serious impact on running integrations. That's why stopping certain ports during maintenance windows is a good approach. This can now be easily automated or controlled by non-BizTalk experts.

Monitoring

Most new opportunities reside on the monitoring side. A couple of potential use cases:

  • Simplified and short-lived BAM. It's possible to create some simple reports with basic statistics of your BizTalk environment. You can leverage the Management API or the Operational OData Service. You can easily visualize the number of messages per port and, for example, the number of suspended instances. All of this is built on top of the data in your MessageBox and DTA databases, so there's no long-term reporting out-of-the-box.
  • Troubleshooting. There are very easy-to-use operations available to get a list of service instances with a specific status. That way, you can easily create a dashboard that gives an overview of all instances that require intervention. Suspended instances can be resumed and terminated through the Management API, without the need to access your BizTalk Server, as sketched below.
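As an illustration, such a dashboard could query the API with a few lines of code. The route, query string and authentication scheme below are hypothetical; take the real ones from the Swagger documentation of your own environment.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class BizTalkManagementClient
{
    // Illustrative only: endpoint route and credentials handling are assumptions.
    public static async Task<string> GetSuspendedInstancesAsync(
        string baseUrl, string username, string password)
    {
        using (var client = new HttpClient { BaseAddress = new Uri(baseUrl) })
        {
            var credentials = Convert.ToBase64String(
                Encoding.UTF8.GetBytes($"{username}:{password}"));
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Basic", credentials);

            // Hypothetical route: query service instances with a Suspended status.
            var response = await client.GetAsync(
                "/BizTalkManagementService/ServiceInstances?status=Suspended");
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}
```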

This is an example of the basic Power BI reports that are shipped with this feature pack.

What is not included?

This brand new BizTalk Management API is quite complete; I'm very excited about the result! As always, I looked at it with a critical mindset and tried to identify missing elements that would add even more value. Here are some aspects that are currently not exposed by the API, but would be handy in future releases:

  • Host Instances: it would be great to have the opportunity to also check the state of the host instances and to even start / stop / restart them. Currently, only a GET operation on the hosts is available.
  • Tracked Context Properties: I’m quite fond of these, as they enable you to search for particular message events, based on functional search criteria (e.g. OrderId, Domain…). Would be a nice addition to this API!
  • Real deployment: at first I thought that the new deployment feature was built on top of this API, but that was wrong. The API exposes functionality to create and manage ports, but no real option to update / deploy a schema, pipeline, orchestration or map. Could be nice to have, but on the other hand, we have a new deployment feature that we need to take advantage of!
  • Business Activity Monitoring: I really like the idea of the Operational OData Service, which smoothly integrates with Power BI. It would be great to have a similar and generic approach for BAM, so we can easily consume the business data without creating custom dashboards. The old BAM portal is really not an option anymore nowadays. You can vote here.

Conclusion!

Very happy to see more commitment from Microsoft towards BizTalk Server. This emphasises their “better together” integration vision for BizTalk Server and Logic Apps! Check out the BizTalk User Voice page if you want to influence the BizTalk roadmap!

The exposure of BizTalk as a REST API opens up a new range of great opportunities. Don’t forget to apply the required security measures when exposing this API! By introducing this API, the need for auditing all activity becomes even more important!

Thanks BizTalk for this great addition! Thank you for reading!

Cheers,
Toon

BTS 2016 Feature Pack I – Scheduling capabilities

The documentation of this scheduling feature can be found on MSDN.

What is included?

Support for time zones

The times provided within the schedule tab of receive locations are now accompanied by a time zone. This ensures your solution no longer depends on the local computer settings. There's also a checkbox to automatically adjust for daylight saving time.

This is a small, but handy addition to the product! It avoids unpleasant surprises when rolling out your BizTalk solutions throughout multiple environments or even multiple customers!

Service window recurrence

The configuration of service windows is now a lot more advanced. You have multiple recurrence options available:

  • Daily: used to run the receive location every x number of days
  • Weekly: used to run the receive location on specific days of the week
  • Monthly: used to run the receive location on specific dates or specific days of the month

Until now, I didn't use the service window that much. These new capabilities enable some new scenarios. As an example, this comes in handy to schedule the release of batch messages at a specific time of day, which is often required in EDI scenarios!

What is not included?

This is not a replacement for the BizTalk Scheduled Task Adapter, which is a great community adapter! There is a fundamental difference between an advanced service window configuration and the Scheduled Task Adapter. A service window configures the time during which a receive location is active, whereas the Scheduled Task Adapter executes a pre-defined task on the configured recurrence cadence.

For the following scenarios, we still need the Scheduled Task Adapter:

  • Send a specific message every x seconds / minutes.
  • Trigger a process every x seconds / minutes.
  • Poll a REST endpoint every x seconds / minutes. Read more about it here.

Conclusion!

Very happy to see more commitment from Microsoft towards BizTalk Server. This emphasises their “better together” integration vision for BizTalk Server and Logic Apps! Check out the BizTalk User Voice page if you want to influence the BizTalk roadmap!

These new scheduling capabilities are a nice addition to BizTalk’s toolbelt! In future feature packs, I hope to see similar capabilities as the Scheduled Task Adapter. Many customers are still reluctant to use community adapters, so a supported adapter would be very nice! You can vote here!

Thanks for reading!
Toon

BTS 2016 Feature Pack I – Continuous Deployment Walk-Through

Introduction

I've created this walkthrough mainly because I had difficulties fully understanding how it works. The documentation does not seem 100% complete and some blog posts I've read caused some confusion for me. This is a high-level overview of how it works:

  1. The developer must configure which assemblies and bindings should be part of the BizTalk application. Also, the order of deployment must be specified. This is done in the new BizTalk Application Project.
  2. The developer must check in the BizTalk projects, including the configured BizTalk Application Project. Also, the required binding files must be added to the chosen source control system.
  3. A build is triggered (automatically or manually). A local build agent compiles the code. By building the BizTalk Application Project, a deployment package (.zip) is automatically generated with all required assemblies and bindings. This deployment package (.zip) is published to the drop folder.
  4. After the build completes, the release can be triggered (automatically or manually). A local deploy agent, installed on the BizTalk server, takes the deployment package (.zip) from the build's drop folder and performs the deployment, based on the configuration done in step 1. Placeholders in the binding files are replaced by VSTS environment variables.

Some advice:

  • Make a clear distinction between build and release pipelines!
  • Do not create and check-in the deployment package (.zip) yourself!

You can follow the steps below to set up full continuous deployment of BizTalk applications. Make sure you check the prerequisites documented over here.

Create a build agent

As VSTS does not support building BizTalk projects out-of-the-box, we need to create a local build agent that performs the job.

Create Personal Access Token

For the build agent to authenticate, a Personal Access Token is required.

  • Browse to your VSTS home page. In my case this is https://toonvanhoutte.visualstudio.com
  • Click on the profile icon and select Security.

 

  • Select Personal access tokens and click Add

 

  • Provide a meaningful name, expiration time and select the appropriate account. Ensure you allow access to Agent Pools (read, manage).

 

  • Click Create Token

 

  • Ensure you copy the generated access token, as we will need this later.

Install local build agent

The build agent should be installed on the server that has Visual Studio, the BizTalk Project Build Component and BizTalk Developer Tools installed.

  • Select the Settings icon and choose Agent queues.

  • Select the Default agent queue. As an alternative, you could also create a new queue.

  • Click on Download agent
  • Click Download. Note that the required PowerShell scripts to install the agent are provided.

  • Open PowerShell as administrator on the build server.
    Run the following commands to unzip the agent:
    mkdir agent ; cd agent
    Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory("$HOME\Downloads\vsts-agent-win7-x64-2.115.0.zip", "$PWD")

  • Execute this command to launch the configuration:
    .\config.cmd
  • Provide the requested information:
    > Server URL: https://toonvanhoutte.visualstudio.com
    > Authentication: PAT
    > PAT: The personal access token copied in the previous step

 

  • Press enter for default pool
  • Press enter for default name
  • Press enter for default work folder
  • Provide Y to run as a service
  • Provide user
  • Provide password

  • Double check that the local build service is created and running.

  • If everything went fine, you should see the build agent online!

Create a build definition

Let’s now create and configure the required build definition.

  • In the Builds tab, click on New to create a new build definition.

  • Select Visual Studio to start with a pre-configured build definition. Click Next to continue.

  • Select your Team Project as the source, enable continuous integration, select the Default queue agent and click Create.

  • Delete the following build steps, so the build pipeline looks like this:
    > NuGet Installer
    > Visual Studio Test
    > Publish Symbols

  • Configure the Visual Studio Build step. Select the BizTalk solution that contains all required artifacts. Make sure Visual Studio 2015 is picked and verify that the MSBuild architecture is set to MSBuild x86 (a rough local equivalent of this build step is sketched at the end of this section).

  • The other build steps can remain as-is. Click Save.

  • Provide a clear name for the build definition and click OK.

  • Queue a new build.

  •  Confirm with OK.

  • Hopefully your build finishes successfully. Solve potential issues in case the build fails.
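
If the build fails on the agent and you want to reproduce it locally, the Visual Studio Build step configured above is roughly equivalent to calling the 32-bit MSBuild 14.0 (Visual Studio 2015 toolset) against your solution. A hedged sketch, with the solution name and configuration as placeholders:

    # Rough local equivalent of the Visual Studio Build step (MSBuild x86, VS 2015 toolset).
    # The solution name and configuration are placeholders for your own setup.
    & "C:\Program Files (x86)\MSBuild\14.0\Bin\MSBuild.exe" `
        ".\MyBizTalkSolution.sln" `
        /t:Build `
        /p:Configuration=Release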

Configure BizTalk Application

In this chapter, we need to create and configure the definition of our BizTalk application. BizTalk Server 2016 Feature Pack 1 introduces a new BizTalk project type: the BizTalk Server Application Project. Let's have a look at how we can use this to kick off an automated deployment.

  • On your solution, click Add > New Project.
  • Ensure you select .NET Framework 4.6.1 and you are in the BizTalk Projects tab. Choose BizTalk Server Application Project and provide a descriptive name.

  • Add references to all projects that need to be included in this BizTalk application and click OK.

  • Add all required binding files to the project. Make sure that every binding has Copy to Output Directory set to Copy Always. This way, the bindings will be included in the generated deploy package (.zip).

  • In case you want to replace environment specific settings in your binding file, such as connection strings and passwords, you must add placeholders with the $(placeholder) notation (a conceptual sketch of this replacement is included at the end of this section).

  • Open the BizTalkServerInventory.json file and configure the following items:
    > Name and path of all assemblies that must be deployed in the BizTalk application
    > Name and path of all binding files that must be imported into the BizTalk application
    > The deployment sequence of assemblies to be deployed and bindings to be imported.
  • Right click on the BizTalk Application Project and choose Properties. Here you can specify the desired version of the BizTalk Application. Be aware that this version is different, depending on whether you're building in debug or release mode. Click OK to save the changes.

 

  • Build the application project locally. Fix any errors that might occur. If the build succeeds, you should see a deployment package (.zip) in the bin folder. This package will be used to deploy the BizTalk application.

  • Check-in the new BizTalk Application Project. This should automatically trigger a new build. Verify that the deployment package (.zip) is also available in the drop folder of the build. This can be done by navigating to the Artifacts tab and clicking on Explore.

  • You should see the deployment package (.zip) in the bin folder of the BizTalk Application Project.
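
Regarding the $(placeholder) notation mentioned earlier in this section: conceptually, the replacement performed during the release boils down to a plain token substitution with the release variables. The sketch below is purely illustrative (the BizTalk Server Application Deployment task handles this for you; file names are placeholders):

    # Illustrative only: roughly what happens to a binding file that contains $(Environment) tokens.
    # The deployment task performs this substitution for you at release time.
    $bindingPath = ".\PortBindings.xml"              # hypothetical binding file with tokens
    $content = Get-Content $bindingPath -Raw

    # Replace every $(Environment) token with the value of the release variable (e.g. DEV).
    $content = $content.Replace('$(Environment)', $env:Environment)

    Set-Content -Path ".\PortBindings.Processed.xml" -Value $content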

Create a release definition

We've created a successful build that generates the required deployment package (.zip). Now it's time to configure a release pipeline that takes this deployment package as an input and deploys it automatically on our BizTalk Server.

  • Navigate to the Releases tab and click Create release definition.

  • Select Empty to start with an empty release definition and click Next to continue.

  • Choose Build as the source for the release, as the build output contains the deployment package (.zip). Make sure you select the correct build definition. If you want to set up continuous deployment, make sure you check the option. Click Create to continue.

  • Change the name of the Release to a more meaningful name.

  • Change the name of the Environment to a more meaningful name.

  • Click on the “…” icon and choose Configure variables.

  • Add an environment variable, named Environment. This will ensure that every occurrence of $(Environment) in your binding file, will be replaced with the configured value (DEV). Click OK to confirm.

  • Click Add Tasks to add a new task. In the Deploy tab, click Add next to the BizTalk Server Application Deployment task. Click Close to continue.

  • Provide the Application Name in the task properties.

  • For the Deployment package path, navigate to the deployment package (.zip) that is in the drop folder of the linked build artefact. Click OK to confirm.

  • In the Advanced Options, specify the applications to reference, if any.

  • Select Run on agent and select the previously created agent queue to perform the deployment. In a real scenario, this will need to be a deployment agent per environment.

  • Save the release definition and provide a comment to confirm.

Test continuous deployment

  • Now trigger a release by selecting Create Release.

  • Keep the default settings and click Create.

  • In the release logs, you can see all details. The BizTalk deployment task has very good log statements, so in case of an issue you can easily pinpoint the problem. Hopefully you encounter a successful deployment!

  • On the BizTalk Server, you’ll notice that the BizTalk application has been created and started. Notice that the application version is applied and the application references are created!

 

In case you selected the continuous integration options, there will now be an automated deployment each time you check in a change in source control. Continuous deployment has been set up!

Wrap-up

Hope you've enjoyed this detailed, but basic walkthrough. For real scenarios, I highly encourage you to extend this continuous integration approach with:

  • Automated unit testing and optional integration testing
  • Versioning of the assembly files
  • Including the version dynamically in the build and release names

Cheers,
Toon