Friday Fact: Azure Logic Apps Supports Both Dot and Bracket Notations

Azure Logic Apps offers flexibility in how you access data from JSON inputs, supporting both dot and bracket notations.

Azure Logic Apps offers a flexible and first-class experience for accessing data within JSON structures. The most common way is to use tokens, which you get when you tokenize a message with a JSON schema, or to use bracket notation. In fact, tokens use bracket notation behind the scenes to reference those values. But did you know that there is another way? Another convention?

Suppose we receive the following JSON input in a Logic App through a When an HTTP request is received trigger:

{
    "ID": {
        "name": "Luis",
        "lastName": "Rigueira",
        "age": 34
    }
}

To access the name using bracket notation, you would use the expression:

triggerBody()?['ID']?['name']

Now, what a lot of people don't know, and what isn't very common to see, is that Logic Apps also supports dot notation. Taking the sample above, the same field can be accessed in dot notation like this:

triggerBody().ID.name

Both methods return the same output.

Dot notation feels more natural and readable, so why don't we use it more often?

Dot notation is notably more streamlined and mirrors the property access seen in programming languages like JavaScript, enhancing readability. Bracket notation, however, brings a level of versatility to the table.

While both notations are valid and can be used interchangeably in many cases, bracket notation has the upper hand when it comes to accessing properties with dynamic names or names that include special characters, which dot notation can’t handle.
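
For example, if the property name is only known at runtime, say, held in a hypothetical variable called propertyName, bracket notation is the only way to access it:

triggerBody()?[variables('propertyName')]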

Dot Notation: Known for its simplicity and readability, dot notation is straightforward. It allows you to access the properties of a JSON object directly by their names. For instance, triggerBody().ID.name effortlessly fetches the name property from the trigger body of a Logic App that receives the previous input. This notation shines in its clarity, making code easier to read and understand at a glance.

Bracket Notation: Bracket notation, on the other hand, provides a level of flexibility unmatched by dot notation. It allows for the dynamic access of properties and the handling of property names that contain special characters or spaces.

Let’s take this JSON as an example:

{
  "user-info": {
    "first-name": "John",
    "last-name": "Doe"
  }
}

Here, bracket notation (triggerBody()['user-info']['first-name']) accesses properties that would be inaccessible with dot notation due to the hyphens in their names. This method is indispensable when dealing with dynamic property names or JSON objects with complex structures.

And, in case you were wondering: yes, you can combine both methods!

triggerBody().ID['name']

In conclusion, while Azure Logic Apps supports both dot and bracket notation for accessing JSON properties, each notation has its strengths and best use cases.

Ultimately, the choice between dot and bracket notation depends on the specific requirements of your Logic App and the nature of the JSON data you’re working with. By understanding the strengths of each notation, you can leverage them effectively to build robust and adaptable Logic Apps tailored to your needs.

Too lazy to read? We've got you covered! Check out our video version of this content!

Hope you find this helpful! If you enjoyed the content or found it useful and wish to support our efforts to create more, you can contribute towards purchasing a Star Wars Lego for my son!

Logic App Consumption deployment: The secret of KeyVault parameter cannot be retrieved. Http status code: ‘Forbidden’. Error message: ‘Access denied to first party service

Recently, a client asked me for help rectifying some existing Logic Apps in their environment because the resource responsible for them had left the company. The goal was not only to rectify the project but to put it in better shape and apply good best practices.

One of the tasks we decided to do was to reference Key Vault secrets in the deployment process, whether through CI/CD or directly through Visual Studio. We had administrator access to the Key Vault in the dev environment, so we could easily create those secrets and reference them in the Logic App parameter file, for example, an Azure Service Bus connection string. For those who are not aware, we can achieve that by using the code below:

"arm_servicebus_connectionString": {
      "reference": {
        "keyVault": {
          "id": "/subscriptions//resourceGroups//providers/Microsoft.KeyVault/vaults/"
        },
        "secretName": "KVS-SB-ConnectionString"
      }
    }

The problem was that when we tried to deploy the solution through Visual Studio, we got the following error:

Logic app visual studio deployment Multiple error occurred: Forbidden,Forbidden,Forbidden. Please see details.

Without any more detail. After some analysis, we realized that the number of Forbidden words in the message matched the number of Key Vault secrets we were trying to reference. When we commented them all out and left only one, we got an error message with more detail:

The secret of KeyVault parameter ‘name’ cannot be retrieved. Http status code: ‘Forbidden’.
Error message: ‘Access denied to first party service.
Caller: name=ARM;tid=;appid=…
Vault:;location=’. Please see https://aka.ms/arm-keyvault for usage details.

Initially, I thought it was a Key Vault access permission issue, even though I was a Key Vault administrator. However, sometimes we also need RBAC permissions. In the end, I ended up granting Administrator, Reader, and Secrets User access at the Key Vault, resource group, and subscription levels.

Still, I was getting the same error!

Cause

A Logic App Consumption project is, in fact, an ARM template project, and its deployment is an ARM template deployment. So, when we reference a Key Vault secret in the LogicApp.parameters.json file, we are referencing a secure parameter that will be used during the ARM template deployment.
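
For context, in the ARM template itself (the LogicApp.json file), that parameter is declared as a plain secure string; a minimal sketch, matching the parameters file above:

"parameters": {
    "arm_servicebus_connectionString": {
        "type": "securestring"
    }
}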

The problem is that, for the Resource Manager to be able to access the Key Vault, you need to change the vault's access configuration to allow Azure Resource Manager for template deployment.

You can see this in the official documentation:

When you need to pass a secure value (like a password) as a parameter during deployment, you can retrieve the value from an Azure Key Vault. To access the Key Vault when deploying Managed Applications, you must grant access to the Appliance Resource Provider service principal. The Managed Applications service uses this identity to run operations. To successfully retrieve a value from a Key Vault during deployment, the service principal must be able to access the Key Vault.

Solution

Solving this issue is quite simple:

  1. Sign in to the Azure Portal.
  2. Open your key vault. Enter key vaults in the search box or select Key vaults.
  3. On the Key Vault page, select Access configuration under the Settings section.
  4. Select Azure Resource Manager for template deployment under Resource access. Then, select Apply.
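
Alternatively, you can enable the same setting from PowerShell. A minimal sketch, assuming the Az.KeyVault module is installed and a vault named my-keyvault (a placeholder):

# Allow Azure Resource Manager to retrieve secrets during template deployments
Set-AzKeyVaultAccessPolicy -VaultName 'my-keyvault' -EnabledForTemplateDeployment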

Now, you will be able to successfully reference the Key Vault secure parameter and deploy the Logic App Consumption solution from Visual Studio.

Hope you find this helpful! So, if you liked the content or found it useful and want to help me write more, you can buy (or help me buy) my son a Star Wars Lego! 

Author: Sandro Pereira

Sandro Pereira lives in Portugal and works as a consultant at DevScope. In recent years, he has been implementing integration scenarios, both on-premises and in the cloud, for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server, and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO, etc.

He is a regular blogger, international speaker, and technical reviewer of several BizTalk books all focused on Integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices”. He has been awarded MVP since 2011 for his contributions to the integration community.

BizTalk Server Visual Studio project: attempted re-targeting of the project has been canceled.

Yesterday, while trying to configure a new BizTalk Server RosettaNet project at a client, I found a curious new error/issue while copying the BizTalk Server solution into the client's development environment. After I copied the project over, I tried to open the BizTalk Server Visual Studio solution and got the following warning:

The C# project “ is targeting “.NETFramework, Version=v4.7.2”, which is not installed on this machine. To proceed, select an option below.

Knowing that we didn't have .NET Framework 4.7.2, I chose the first option, but we ended up getting the following error:

Attempted re-targeting of the project has been canceled. BizTalk Server 2016 Developer Tools only supports targeting ‘.Net.Framework 4.6.x’ and above.

Cause

This error/issue happened, first of all, because I thought our environment (the client's environment) was already using BizTalk Server 2020, and I developed the project locally using that version. BizTalk Server 2020 targets .NET Framework 4.7.2 or above by default. This was not the case: our environment is still BizTalk Server 2016, which is why, when I tried to open the BizTalk Server Visual Studio solution, it asked me to re-target the project.

Normally, this is not a problem when we move from a previous version to a higher one, for example, from BizTalk Server 2016 to BizTalk Server 2020. In this case, though, it was a downgrade, and I don't really know why it presented me with 4.5.2 instead of 4.6, which is the "official" version for BizTalk Server 2016.

Nevertheless, I was not able to fix or work around this directly in Visual Studio.

Solution

Solving this issue is quite simple:

  • Go to the project folder and open the *.btproj file with Notepad or Notepad++.
  • Then change the TargetFrameworkVersion from v4.7.2 to v4.6, as shown below.
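
In the .btproj file, the change looks like this (a minimal sketch; TargetFrameworkVersion is the standard MSBuild property inside the main PropertyGroup):

<!-- before -->
<TargetFrameworkVersion>v4.7.2</TargetFrameworkVersion>
<!-- after -->
<TargetFrameworkVersion>v4.6</TargetFrameworkVersion>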

After these steps, I was able to open the BizTalk Server Visual Studio solution in our dev environment.

Hope you find this helpful! So, if you liked the content or found it useful and want to help me write more, you can buy (or help me buy) my son a Star Wars Lego! 

BizTalk Oracle Adapter error: The assembly required for type mapping not found.

Today, while developing a new BizTalk Server solution that communicates with an Oracle database, I encountered a familiar issue that I forgot to document in the past:

Microsoft.ServiceModel.Channels.Common.MetadataException: The assembly required for type mapping not found.

That forced me, once again, to lose time not only remembering the issue but also finding out how to solve it.

Cause

This error happens when you try to call an Oracle procedure, function, or package that contains User-Defined Types (UDTs). UDTs can be present in the following artifacts:

  • Interface tables and interface views containing UDT columns.
  • Database tables and views containing UDT columns.
  • Packages, stored procedures, and functions containing UDT parameters.

Oracle UDTs help represent complex entities as a “single” object that can be shared among the applications.
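
For instance, a hypothetical UDT on the Oracle side (the name and fields are made up) might look like this:

CREATE OR REPLACE TYPE CUSTOMER_TYPE AS OBJECT (
    NAME VARCHAR2(100),
    AGE  NUMBER
);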

BizTalk Server supports Oracle UDTs, but unlike SQL Server, which natively supports these types, with Oracle we need to configure a few more properties and generate a UDT DLL.

When this error occurs, one of two things is usually the reason:

  • You forgot to configure, in the Schema generation, the following properties:
    • GeneratedUserTypesAssemblyFilePath
    • GeneratedUserTypesAssemblyKeyFilePath
  • Or you forgot in runtime (aka receive location or send port) to configure the following property:
    • userAssembliesLoadPath

Solution

To solve this issue, we need to make sure we perform the following steps:

  • It is necessary to create a signed assembly (DLL) of the User-Defined Types (UDTs) created in Oracle that correspond to those interpreted by the WCF-based Oracle adapter. To do this, when creating the schemas from the Consume Adapter Service option, the assembly must be created by specifying:
    • On the GeneratedUserTypesAssemblyFilePath property, we need to provide a full path and name of the DLL that the wizard will create for us.
    • And on the GeneratedUserTypesAssemblyKeyFilePath property, the strong name key (.snk) path that the wizard will use to sign the DLL.

Both these properties are present in the UDT .NET Type Generation – Design Time section of the Binding Properties.
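
As a hypothetical example (both paths are placeholders), those two properties could be set like this:

GeneratedUserTypesAssemblyFilePath:    C:\BizTalk\Oracle\OracleUDTs.dll
GeneratedUserTypesAssemblyKeyFilePath: C:\BizTalk\Oracle\OracleUDTs.snk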

Once again, this will create a UDT DLL on the path we defined, which we then need to use at runtime.

Once we deploy our schemas and create the receive location or send port, we then need to make sure that we configure the following property with the path to the UDT DLL:

  • userAssembliesLoadPath 

Notice: even if you deploy the UDT DLL to the GAC (which is advisable), you still need to configure the path to the UDT DLL in this property.

After these steps, you can successfully communicate with Oracle using an Oracle procedure, function, or package that contains User-Defined Types.

Hope you find this helpful! So, if you liked the content or found it useful and want to help me write more, you can buy (or help me buy) my son a Star Wars Lego! 

PowerShell script to identify all SQL V1 actions and triggers inside Logic Apps Consumption

A few days ago, Luis Rigueira created a Kusto query to identify all Logic Apps Consumption that use the SQL V1 actions and triggers that will soon be deprecated (end of March 2024). The query also tries to identify all the actions that use those triggers and actions. It actually works decently if a Logic App has actions or triggers at the first level (not inside If, Switch, Scope, and so on), but if a Logic App has nested actions, which is quite common, the query identifies those actions only on a best-effort basis: it will identify all the Logic Apps, but it will not provide the names of the actions. The reason is that it is quite difficult to loop through the whole Logic App definition (JSON) with a Kusto query.

Don't get me wrong: that query is awesome for identifying all the Logic Apps we need to address. However, if we need to identify all the affected actions to better estimate the work involved, that Kusto query is not the best option. To solve this problem, we decided to create a PowerShell script that not only identifies all Logic Apps Consumption using SQL V1 actions and triggers but also identifies the names of those actions and triggers, providing a good report that you can use to plan and estimate your work.

# Function to recursively extract SQL V1 actions and triggers from a workflow definition node
function Get-ActionsAndTriggers {
    param (
        $node
    )
    $actionsAndTriggers = @()
    foreach ($key in $node.psobject.Properties.Name) {
        # SQL V1 operations are ApiConnection actions/triggers whose path contains
        # the /datasets/default segment and whose connection references a SQL connector
        if ($node.$key.type -eq "ApiConnection") {
            if ($node.$key.inputs.path -like "*/datasets/default*" -and $node.$key.inputs.host.connection.name -like "*sql*") {
                $actionsAndTriggers += $key
            }
        } elseif ($node.$key -is [System.Management.Automation.PSCustomObject]) {
            # Recurse into nested constructs (If, Switch, Scope, Until, and so on)
            $actionsAndTriggers += Get-ActionsAndTriggers -node $node.$key
        }
    }
    return $actionsAndTriggers
}
 
# Retrieve all Logic Apps within the subscription
$logicApps = Get-AzResource -ResourceType Microsoft.Logic/workflows
 
# Iterate through each Logic App and extract actions and triggers
foreach ($logicApp in $logicApps) {
    # Retrieve Logic App definition
    $logicAppDefinition = Get-AzResource -ResourceId $logicApp.ResourceId -ExpandProperties
 
    # Extract actions and triggers from the Logic App definition
    $allActionsAndTriggers = Get-ActionsAndTriggers -node $logicAppDefinition.Properties.Definition.triggers
    $allActionsAndTriggers += Get-ActionsAndTriggers -node $logicAppDefinition.Properties.Definition.actions
 
    # Display the Logic App name if filtered actions and triggers were found
    if ($allActionsAndTriggers.Count -gt 0) {
        Write-Host "Logic App: $($logicApp.Name) - RG: $($logicApp.ResourceGroupName)" -ForegroundColor Red
        # Display the list of filtered actions and triggers
        Write-Host "Filtered Actions and Triggers:"
        $allActionsAndTriggers
        Write-Host ""
    }
}
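
Before running the script, sign in and select the subscription you want to scan (a minimal sketch; the subscription name is a placeholder, and the Az PowerShell module is assumed to be installed):

Connect-AzAccount
Set-AzContext -Subscription "My-Subscription"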

The great thing about this script is that it also identifies all nested actions (actions inside other actions, like If, Scope, Switch, and so on).

If you are wondering whether we can do the same and identify everything using SQL V2 actions and triggers: don't worry, we have your back covered, check the download section. In fact, with small changes, you can use this script for all types of connectors.

Download

THESE COMPONENTS ARE PROVIDED “AS IS” WITHOUT WARRANTY OF ANY KIND.

You can download the PowerShell script to identify all SQL V1 actions and triggers inside Logic Apps Consumption from GitHub here:

If you want to identify SQL V2 actions and triggers, then download this script from GitHub:

Huge thanks to Luis Rigueira for working with me on these scripts.

Hope you find this helpful! So, if you liked the content or found it useful and want to help me write more, you can help us buy a Star Wars Lego for my son! 

Friday (funny) Fact: There is no size limit for the Logic App parameter name

In Azure Logic Apps, you can abstract values that might change in workflows across development, test, and production environments by defining parameters. A Logic App parameter stores values that can be reused throughout a Logic App workflow. These parameters allow for a more flexible and maintainable configuration of logic apps, making it easier to update values without changing the actual workflow’s logic.

Parameters can store various types of data, such as strings, secure strings, booleans, arrays, or any other data that might need to be used multiple times within the Logic App or may change based on the environment (development, test, production, etc.). They can also be defined at deployment time using CI/CD pipelines.

By using parameters, you can easily update these values in one place without needing to edit the logic in multiple actions or triggers throughout the app.

In practice, you define parameters in the Logic App’s definition and can then use them in expressions or directly in actions throughout the app. When the Logic App is deployed or executed, these parameters are evaluated and used accordingly. This approach helps you manage and deploy Logic Apps across different environments, making the workflows more dynamic and easier to configure.
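
As a quick illustration (the parameter name and values here are hypothetical), a parameter is declared in the workflow definition and then referenced with the parameters() function:

"parameters": {
    "p_environment": {
        "type": "String",
        "defaultValue": "dev"
    }
}

And inside an action or expression:

@parameters('p_environment')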

The funny fact about parameters is that they are probably the only Logic App "component" without a size limit on the name. For example:

  • Logic App Consumption name has a maximum limit of 80 characters.
  • Logic App Standard Workflow name has a maximum limit of 43 characters.
  • A trigger or action name has a maximum limit of 80 characters.
  • and so on.

But the Logic App parameter name is unlimited! To prove that and for fun, I have created a parameter with this name:

p_material_availability_changed_range_hours_assaasasasasasasasasassaassaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaasaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaassaasassaasasaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaassasasasasasaassasa_assaasasasasasasasasassaassaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaasaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaassaasassaasasaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaassasasasasasaassasa_assaasasasasasasasasassaassaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaasaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaassaasassaasasaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaassasasasasasaassasa_assaasasasasasasasasassaassaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaasaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaassaasassaasasaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaassasasasasasaassasa_assaasasasasasasasasassaassaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaasaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaassaasassaasasaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaassasasasasasaassasa

To be honest, I think this is crazy! They should fix this and set a limit, because giving that amount of power to developers (and I'm a developer, too) is insane; we can do some nasty stuff!

Too lazy to read? We've got you covered! Check out our video version of this content!

Hope you find this helpful! If you enjoyed the content or found it useful and wish to support our efforts to create more, you can contribute towards purchasing a Star Wars Lego for my son!

How to document an integration architecture

Mar 08, 2024

One could argue that creating good documentation for an integration solution is harder than for other types of software projects. The reason is that integration solutions almost always span many systems and services, and we often talk about hundreds of integrations, touching almost every part of your architecture.

Keeping track of such a complex system is practically impossible without the right documentation.

An example of bad documentation …

Basic principles of good documentation for an integration architecture

1. Visualizations and diagrams are more important than textual documentation.

Textual documentation has its place and is important. But when it comes to trying to describe complex scenarios, visualizations in the form of architectural diagrams are easier to quickly understand and therefore arguably the most important part of your documentation.

2. Focus on the high-level diagrams.

As developers, we’re good at documenting all the little technical details in our solutions. But we often document things we end up looking up in the live code and configurations. Focus on high-level documentation, the overall picture, and how the different parts of your solutions relate to each other.

3. Make it readable for everyone.

UML and complex notations are great, if you understand them fully. The C4 model formalized the idea of a much simpler notation, based on boxes and arrows, that is easier to understand and work with.

Use a dead simple notation that everybody can write and read.

4. Make it useful for everyone.

Besides a simple notation, the C4 model describes a few different diagrams with increasing levels of detail. Using different levels makes it possible to drill down from an overview into more detailed diagrams. Similar to zooming in and out of a digital map.

Utilizing the levels and the simple notation in the C4 model, diagrams can carry information useful for non-technical business users and describe large parts of an architecture without cluttering them with too many details.

The C4 levels in an integration context

Let’s look at the three levels that make up good integration documentation.

C4 model diagram levels with increasing detail.

Level 1 (Context) – The Overall picture

For quickly understanding an integration architecture, the most important part of the documentation is the high-level documentation. It shows all the systems and the data flows between them: not the nitty-gritty details, just which systems are involved and what data is transferred between them.

In smaller architectures, it's possible to describe all systems and data flows in a single level 1 diagram. In large ones, we need to choose a context to avoid diagrams that are too large and hard to read. In this example, the context is the e-commerce part of the architecture.

Level 1, Overall picture example – https://my.revision.app/diagram/6idZKQTeD8C9.

It's important to keep these diagrams as simple as possible. It's often tempting to include more details than necessary, but try to skip everything besides the concrete systems and the type of data transferred. Otherwise, these diagrams get too complicated, hard to work with, and hard to understand.

This part of the documentation is important: it is information that is often hard to understand by looking at the code and configuration.

This is also the part of the documentation that business users will understand. As we all know, getting the business users involved is critical for making the right decisions and in the end creating solutions that actually help the business in the right way.

Even though most people would agree that the overall picture is important to have documented, I’ve found that this is the part most developers tend to skip.

Level 2 (Container) – The Integration Processes

It's hard to define an integration. In the overall picture, product data moves between several systems. Is the integration the part that moves the products between InRiver PIM and Dynamics 365 PO? Or is the integration the overall process that makes sure the products end up in all the destination systems?

Leaving that discussion aside we can agree that the purpose of an integration is to support some sort of business process. In this case, the handling of product master data.

Documenting how we support a business process is the second level of diagrams.

The difference between this and the level 1 diagram is that we zoom in on a specific area and include more details: things that we didn't want to include in the overall picture to keep it easy to understand. In the example, we're showing that product master data is actually distributed using a product service. We're also including other details we didn't show in the level 1 diagram.

Level 2, Integration Process example – https://my.revision.app/diagram/bwoG8AXEhhxD. Shows how product master data flows between systems.

Again, this is information that is hard to get from reading code and configuration, and that is important for getting the business users to understand what we're actually building.

It's important to try to keep purely technical details out of this diagram and to focus on business-rule-like information that is useful for a business user.

Examples of information we might include at level 2:

  • what’s triggering the transfer
  • duplication checks
  • conversions
  • transformations of data, and so on

Level 3 – The Integration Details

This is probably the diagram that needs the least explanation. This is the part I often find well documented and maintained. Here we document all the technical components and implementation details.

This is, however, also the part that doesn't always require that much documentation. Much of the detail at this level can easily be found in code and configuration, and it's easy to spend time documenting things that developers will, in the end, look up in the code and configuration instead.

Level 3, Integration Details example – https://my.revision.app/diagram/RmXsxDtuAEHy. Shows a detailed view of how product master data is moved from one system to another.