Last September 28, I had the pleasure of speaking at the BizTalk To Azure event, which focused on ‘The Migration Journey’ and was organized by Contica in Gothenburg, Sweden. I’d like, once again, to take this opportunity to thank Contica for the kind invitation and express my appreciation to all the attendees for their warm reception and valuable feedback.
My second presentation at the event (see my first presentation here), which I’m bringing you today, was entitled: Azure Integration in Action – BizTalk to Azure Transition Case Studies.
BizTalk To Azure The Migration Journey: Azure Integration in Action – BizTalk to Azure Transition Case Studies
In this session, we bring Azure Integration Services to life by migrating simple, real-world BizTalk Server sample cases.
Through these exercises, you’ll gain practical insights, strategies, and tips to ensure a smooth BizTalk Server migration while embracing Azure’s agility, scalability, and cost-efficiency. Take advantage of this opportunity to see Azure Integration Services in action, guiding your path toward a seamless and future-ready integration landscape.
Hope you find this helpful! So, if you liked the content or found it helpful and want to help me write more content, you can buy (or help buy) my son a Star Wars Lego!
Author: Sandro Pereira
Sandro Pereira lives in Portugal and works as a consultant at DevScope. Over the past years, he has been implementing integration scenarios both on-premises and in the cloud for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server, and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO, etc.
He is a regular blogger, international speaker, and technical reviewer of several BizTalk books, all focused on integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices” and has been awarded Microsoft MVP status since 2011 for his contributions to the integration community.
Last September 28, I had the pleasure of speaking at the BizTalk To Azure event, which focused on ‘The Migration Journey’ and was organized by Contica in Gothenburg, Sweden. I’d like to take this opportunity to thank Contica for the kind invitation and express my appreciation to all the attendees for their warm reception and valuable feedback.
My first presentation at the event, which I’m bringing you today, was entitled: Elevating Integration – The Roadmap from BizTalk Server to Azure.
BizTalk Server to Azure, The Migration Journey: Elevating Integration – Roadmap from BizTalk Server to Azure
If you are embarking on the journey of moving your current BizTalk Server environment to the cloud, this session will guide you through the steps, strategies, and best practices needed to successfully transition your integration solutions to Azure.
In this talk, we will address topics like:
What phases in your migration journey are crucial?
Which tools and technologies should you use?
What to do in this migration journey
And what not to do in this migration journey!
After this session, you’ll know the dos and don’ts, as well as the key considerations that will help you take full advantage of the agility and scalability of Azure Integration Services.
Hope you find this helpful! So, if you liked the content or found it helpful and want to help me write more content, you can buy (or help buy) my son a Star Wars Lego!
Azure Logic Apps (Consumption) is one of the most widely used integration services for orchestrating critical workflows, and an unhandled error in one of them can directly affect your business continuity.
By default, Logic Apps lets you handle errors using the Configure run after settings at the individual action level. For more complex scenarios, you can set up Scope actions and implement try-catch or try-catch-finally patterns.
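For reference, here is a minimal, hedged sketch of what such a try-catch-finally arrangement can look like in a Consumption workflow definition (code view). The scope names, the inner Compose placeholders, and the status lists are illustrative and should be adapted to your own workflow:

```json
{
  "Try": {
    "type": "Scope",
    "actions": {
      "Business_Logic": {
        "type": "Compose",
        "inputs": "Main processing actions go here",
        "runAfter": {}
      }
    },
    "runAfter": {}
  },
  "Catch": {
    "type": "Scope",
    "actions": {
      "Handle_Error": {
        "type": "Compose",
        "inputs": "@result('Try')",
        "runAfter": {}
      }
    },
    "runAfter": {
      "Try": [ "Failed", "TimedOut" ]
    }
  },
  "Finally": {
    "type": "Scope",
    "actions": {
      "Clean_Up": {
        "type": "Compose",
        "inputs": "Actions that must always run go here",
        "runAfter": {}
      }
    },
    "runAfter": {
      "Catch": [ "Succeeded", "Failed", "Skipped", "TimedOut" ]
    }
  }
}
```

The Catch scope only runs when the Try scope fails or times out, and the result('Try') expression returns the results (including error details) of the actions inside Try; the Finally scope runs regardless of the outcome because its run after setting accepts every possible status of Catch.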
This video aims to explain and discuss error handling within Logic Apps and explore the implementation of a try-catch-finally statement. Even though we are using Logic Apps Consumption for this proof of concept, the same principles apply to Logic Apps Standard.
Error Handling in Logic Apps by Luís Rigueira
Hope you find this helpful! So, if you liked the content or found it useful and want to help me write more content, you can buy (or help buy) my son a Star Wars Lego!
Big thanks to my team member Luís Rigueira for creating this video.
In a general and abstract way, we can say that Data Mapper maps are graphical representations of XSLT 3.0 (Extensible Stylesheet Language Transformation) documents that allow us to perform simple, visual transformations between XML and JSON messages (more formats will be added in the future). In reality, though, Data Mapper maps are composed of two files: a Map Definition file (.yml), which is basically an abstraction of the underlying implementation, and an XSLT file (.xslt), which contains all the transformation rules – this is the file that will be executed at runtime by the Logic Apps (Standard) engine.
What types of transformation can we apply in Logic Apps?
Similar to BizTalk Server, we can define two types of transformations:
Syntax Transformations: In BizTalk Server, this type of transformation occurs in receive or send pipelines and aims to transform a document into another representation, e.g., CSV to XML. In Logic Apps, it occurs inside the workflow by using an Integration Account or certain actions and expressions, depending on which syntax transformation we need to perform. Here the document keeps the same data (semantics) but changes the syntax in which it is represented; i.e., we translate the document, but typically we don’t modify its structure. Normally, this type of transformation is bidirectional: since we still have the same semantic content, we can apply the same transformation logic in reverse and obtain the document in its original format.
Semantic Transformations: In BizTalk Server, this type of transformation usually occurs only in BizTalk maps. Inside Logic Apps (Standard), it takes place in the Data Mapper. Here the document may keep the same syntax (XML or JSON) – although that is not mandatory, since the Data Mapper currently supports both JSON Schemas and XML Schemas – but its semantics (data content) change. These transformations are typically one-way: since we add and aggregate small pieces of the information that makes up the document into another document, we may lose details that are important for its reconstruction.
Introduction to the Data Mapper
The Data Mapper enables us to transform complex messages visually and effortlessly, using links that graphically define the relationships between the various elements of the messages.
These relationships between elements are internally implemented as XSL Transformations (XSLT – Extensible Stylesheet Language Transformation), the World Wide Web Consortium (W3C) standard recommended for performing message transformations.
The Data Mapper is a Visual Studio Code extension that allows you to perform mapping transformations. Currently, it ships as its own extension, but it will be merged into the main Azure Logic Apps (Standard) extension in the future.
Essentially the editor consists of three main modules:
Source Schema view: this is the data structure of the source message and is on the left side of the main window;
Destination Schema view: this is the data structure of the target message and is on the right side of the main window; the links that define the mapping lead into the destination schema tree view from the mapping area, and ultimately from the source schema tree view.
Mapping area view: this is in the middle of the main window, between the two data structures (source and target). This area plays a critical role in the definition of maps, containing the links and Functions that control how data in a source instance message is transformed into an instance message that conforms to the destination schema. The mapping area can have multiple layers, each associated with a specific record on the destination schema, allowing us to better organize complex maps.
Apart from these three modules, there are other important windows for the developer:
Function panel: on the left side of the source schema, providing access to all the Functions we can use in the Data Mapper.
Function properties panel: in this panel, we can see and modify the properties of a selected function inside the Data Mapper.
Task List and Output windows: hidden much of the time, these windows can and should be used to examine the results of the save and test operations we perform on our maps. They normally appear underneath the Mapping area or the Function properties panel.
Code view panel: in this panel, you can view the Map Definition rules (an abstraction of the underlying implementation). It is a read-only panel.
Basic map functionalities (Document mapping)
There are several mapping functionalities that you can perform inside maps, like data normalization, transform injection (XSLT injection), calculating values with math and scientific functions, and so on. Still, most of the time, transformations are quite simple, and when we transform a message, five basic functionalities typically arise:
Simple mapping of a given value (direct copy)
Concatenation of values
Conditional selection
Custom scripts
Add new values (data)
Here we will take a simple transformation problem that addresses each one of these functionalities or operations, providing a simple example of how to accomplish it inside the Data Mapper. Of course, there are plenty of other operations or transformation rules that we could apply that will not be addressed here.
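To make these operations a little more concrete, below is a hand-written sketch of the kind of XSLT a very simple Data Mapper map could generate for four of these functionalities (custom scripts are left out). The schemas and element names (Person, Employee, and so on) are invented for illustration and are not part of the actual Data Mapper sample:

```xml
<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="3.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xsl:output method="xml" indent="yes"/>

  <xsl:template match="/Person">
    <Employee>
      <!-- 1) Simple mapping of a given value (direct copy) -->
      <Id>
        <xsl:value-of select="Id"/>
      </Id>

      <!-- 2) Concatenation of values -->
      <FullName>
        <xsl:value-of select="concat(FirstName, ' ', LastName)"/>
      </FullName>

      <!-- 3) Conditional selection -->
      <Status>
        <xsl:value-of select="if (xs:integer(Age) ge 18) then 'Adult' else 'Minor'"/>
      </Status>

      <!-- 5) Adding a new value (data) that does not exist in the source -->
      <Source>DataMapper-POC</Source>
    </Employee>
  </xsl:template>
</xsl:stylesheet>
```

In the Data Mapper itself, each of these rules is, of course, drawn as a link or a Function between the source and destination schemas; the XSLT above is only a sketch of what that could end up looking like in the generated .xslt file.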
Hope you find this helpful! So, if you liked the content or found it useful and want to help me write more content, you can buy (or help buy) my son a Star Wars Lego!
Function Chain? What is a Function Chain? And how does it work inside the Data Mapper?
When we use more than one cascading function to implement a transformation rule, we call it a Function Chain: a chain of functions that are executed in order. This way, we can apply more complex mapping rules inside the Logic Apps (Standard) Data Mapper. By the way, we had the same concept inside BizTalk Server maps and Logic Apps Consumption maps (which are essentially BizTalk Server maps extracted and isolated).
The picture below shows a Function Chain inside the Data Mapper. In this case, we are determining whether a call is national or international based on the phone number:
In this video, we will learn what a Function Chain is inside the Logic Apps (Standard) Data Mapper and how it works.
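As a rough, hedged illustration of the same idea, a chain like the one described above ultimately collapses into nested function calls in the XSLT generated by the map. The '00' prefix rule and the element names below are purely illustrative and not taken from the video:

```xml
<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="3.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes"/>

  <xsl:template match="/Call">
    <ClassifiedCall>
      <PhoneNumber>
        <xsl:value-of select="PhoneNumber"/>
      </PhoneNumber>
      <!-- Chain: substring -> equality check -> if/else, expressed as nested calls -->
      <CallType>
        <xsl:value-of select="if (substring(PhoneNumber, 1, 2) = '00')
                              then 'International'
                              else 'National'"/>
      </CallType>
    </ClassifiedCall>
  </xsl:template>
</xsl:stylesheet>
```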
Hope you find this helpful! So, if you liked the content or found it useful and want to help me write more content, you can buy (or help buy) my son a Star Wars Lego!
Unfortunately, no Logic Apps connector can bridge to RabbitMQ, which makes this integration challenge a little more complicated. However, we can create an Azure Function that uses the RabbitMQ trigger for Azure Functions to overcome this limitation.
As we saw and explained in our last blog post, Azure Functions integrates with RabbitMQ via triggers and bindings. The Azure Functions RabbitMQ extension allows you to send and receive messages using the RabbitMQ API with Functions.
The purpose of this video is to explain how to create a POC in which a message arriving in a RabbitMQ queue triggers the Azure Function, which then routes the message to a Logic App.
This was a real problem presented by a client during one of our Logic Apps training courses: they have RabbitMQ on-premises and wanted to pull messages from a queue into a Logic App Consumption workflow to integrate them with other systems.
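To give an idea of what that bridge can look like, here is a minimal C# sketch of an in-process Azure Function using the RabbitMQ trigger that forwards each message to an HTTP (Request) triggered Logic App. The queue name, the connection string setting, and the LogicAppUrl app setting are illustrative and not taken from the video:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class RabbitMqToLogicApp
{
    // Reuse a single HttpClient instance across function invocations.
    private static readonly HttpClient HttpClient = new HttpClient();

    [FunctionName("RabbitMqToLogicApp")]
    public static async Task Run(
        // Fires whenever a message lands in the queue (names are illustrative).
        [RabbitMQTrigger("orders-queue", ConnectionStringSetting = "RabbitMQConnection")] string message,
        ILogger log)
    {
        log.LogInformation("RabbitMQ message received: {Message}", message);

        // Route the message to the Logic App's HTTP (Request) trigger callback URL,
        // stored in an app setting named LogicAppUrl (illustrative).
        var logicAppUrl = Environment.GetEnvironmentVariable("LogicAppUrl");
        var response = await HttpClient.PostAsync(
            logicAppUrl,
            new StringContent(message, Encoding.UTF8, "application/json"));

        response.EnsureSuccessStatusCode();
    }
}
```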
Hope you find this helpful! So, if you liked the content or found it useful and want to help me write more content, you can buy (or help buy) my son a Star Wars Lego!
Big thanks to my team member Luís Rigueira for creating this video.
In my last blog post, I explained in detail the String functions available in the new Data Mapper, and I ended up documenting each of them.
String functions are used to manipulate strings in standard ways, such as conversion to all uppercase or all lowercase, string concatenation, determination of string length, white space trimming, etc. If you come from a BizTalk Server background or are migrating BizTalk Server projects, they are the equivalent of the String Functoids inside the BizTalk Mapper editor.
The String functions are:
Codepoints to string: Converts the specified codepoints value to a string and returns the result.
Concat: Combines two or more strings and returns the combined string.
Contains: Returns true or false based on whether the string input contains the specified substring.
Ends with: Returns true or false based on whether the string input ends with the specified substring.
Length: Returns the number of items in the specified string or array.
Lowercase: Returns a string in lowercase format.
Name: Returns the local name of the selector node, which is useful when you want to retrieve the name of the incoming message component, not the value.
Regular expression matches: Returns true or false based on whether the string input matches the specified regular expression.
Regular expression replace: Returns a string created from the string input by using a given regular expression to find and replace matching substrings with the specified string.
Replace: Replaces a substring with the specified string and returns the new complete string.
Starts with: Returns true if the given string starts with the specified substring.
String to codepoints: Converts the specified string to codepoints.
Substring: Returns characters from the specified string, starting from the specified position.
Substring after: Returns the characters that follow the specified substring in the source string.
Substring before: Returns the characters that precede the specified substring in the source string.
Trim: Returns the specified string with all the leading and trailing white space characters removed.
Trim left: Returns the specified string with all the leading white space characters removed.
Trim right: Returns the specified string with all the trailing white space characters removed.
Uppercase: Returns a string in uppercase format.
In this video, we will see each of these String Functions in action. For each one, we will provide simple input data, and we will see what the expected output is.
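Since Data Mapper maps compile down to XSLT, most of these functions correspond to standard XPath string functions. As a quick, hedged preview of the kind of input and output pairs shown in the video, here are a few illustrative expressions with their expected results in the comments:

```xml
<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="3.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>

  <xsl:template match="/">
    <!-- Concat -->
    <xsl:value-of select="concat('BizTalk', ' ', 'Server')"/>              <!-- BizTalk Server -->
    <xsl:text>&#10;</xsl:text>
    <!-- Contains -->
    <xsl:value-of select="contains('Data Mapper', 'Map')"/>                <!-- true -->
    <xsl:text>&#10;</xsl:text>
    <!-- Substring before / Substring after -->
    <xsl:value-of select="substring-before('2023-09-28', '-')"/>           <!-- 2023 -->
    <xsl:text>&#10;</xsl:text>
    <xsl:value-of select="substring-after('first.last@example.com', '@')"/> <!-- example.com -->
    <xsl:text>&#10;</xsl:text>
    <!-- Uppercase -->
    <xsl:value-of select="upper-case('logic apps')"/>                      <!-- LOGIC APPS -->
    <xsl:text>&#10;</xsl:text>
    <!-- Length -->
    <xsl:value-of select="string-length('Azure')"/>                        <!-- 5 -->
  </xsl:template>
</xsl:stylesheet>
```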
Hope you find this helpful! So, if you liked the content or found it useful and want to help me write more, you can buy (or help buy) my son a Star Wars Lego!
This documentation is intended for clients who are considering, or have already decided, to move their entire on-premises BizTalk Server integration solution to Azure, or to move parts of it to Azure in a hybrid approach, and it aims to help them with this process.
Once again, I had the honor of being invited a few weeks ago by my friend Kent Weare, now a Microsoft Principal Product Manager for Azure Logic Apps, to record a special BizTalk Server to Azure Integration Services – Ask the Experts episode on his YouTube channel.
In this episode, we are going to discuss some important questions and concerns customers may have on this journey to migrate their BizTalk Server Solutions to Azure, like:
What are some examples of BizTalk migrations that I have been involved in? What is the biggest driver for these customers?
If I’m helping a customer migrate, what advice would I provide to the customer?
When considering BizTalk architectures, what is an anti-pattern or something that customers should avoid/re-think as they move to Azure?
And many other questions.
You can see the full episode here: https://www.youtube.com/watch?v=iYLFsUK5AmY
I hope you enjoy and find this topic interesting. Let me know what you think.
I had the pleasure of being invited a few weeks ago by my friend Kent Weare, now a Microsoft Principal Product Manager for Azure Logic Apps, to record a special episode on Logic Apps Development Tips and Tricks on his YouTube channel.
First of all, if you are not aware of his YouTube channel and you are interested in Azure Integration Services, I suggest you follow it; it is full of fantastic content. You can check and follow his channel here: https://www.youtube.com/@KentWeare
In this episode, we are going to discuss some of the most basic and important Logic Apps development best practices, tips, and tricks:
Naming Conventions, which will include Logic App, Action, and Connectors naming conventions
Error Handling and how to retrieve the error message inside Logic Apps
For Each Parallelism
Fixing API Connections and why you should care about this.
and comparing Logic Apps (Standard) and Azure Logic Apps (Consumption)
You can see the full episode here: https://www.youtube.com/watch?v=cLzplA1xVaM&t=479s
Let me know what you think about these best practices, tips, and tricks, or what you would like to see addressed in my series of blog posts on this topic.
You can check all my tips and tricks here:
And of course, stay tuned for more Logic Apps best practices, tips, and tricks.