A Comprehensive Guide For Logic Apps Standard REST APIs

Azure REST APIs are service endpoints that support sets of HTTP operations (methods), which provide create, retrieve, update, or delete access to all of an Azure service’s resources. Many of you may already be familiar with the Logic Apps Consumption REST APIs, which are very well documented by Microsoft here: Azure Logic Apps.

However, for those who didn’t know about them, I recommend you go there and have a look. You may find them very useful for achieving many things and automating certain tasks. One of these cases is the strategy I documented to get the error message from a particular failure inside a Logic App Consumption, where I invoke the Get Workflow Run method to retrieve the run history of a particular run in order to extract the correct error message. You can read more about it here: How to get the Error Message with Logic App Try-Catch (Part II) – Using an Azure Function.

Another great thing about Microsoft’s REST API documentation is the ability to try the operations directly from the documentation page. Unfortunately, these REST APIs apply only to Logic Apps Consumption. There is no official REST API documentation available for Logic Apps Standard, and yes, the APIs are different. A few months ago I decided to start documenting the new Logic Apps Standard REST APIs by publishing three blog posts:

But those covered only a few parts of the existing REST APIs.

Now I have created a comprehensive whitepaper or guide about Logic Apps Standard REST APIs that you can download for free here: Logic Apps Standard Rest API’s a Comprehensive Guide.

What’s in store for you?

This whitepaper will give you a detailed understanding of the following:

  • A short introduction to Logic App Consumption REST APIs.
  • Comprehensive details about Logic Apps Standard REST APIs:
    • Workflow operations: For example, it provides operations for creating and managing workflows.
    • Workflow Runs operations: For example, it provides operations for listing and canceling workflow runs.
    • Workflow Run Actions operations: For example, it provides operations for listing workflow run actions.
    • Workflow Versions operations: For example, it provides operations for listing workflow versions.
    • Workflow Triggers operations: For example, it provides operations for listing and running workflow triggers.
    • Workflow Trigger Histories operations: For example, it provides operations for listing workflow trigger histories.
    • Logic App operations: For example, it provides operations that you can apply at the Logic App Standard app level.
    • App Service Plans: For example, it lists the App Service Plan REST API operations that are interesting to use in combination with Logic Apps Standard.
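
As a quick taste of what the whitepaper covers, here is a minimal Python sketch that lists the workflows hosted inside a Logic App Standard resource through the Azure management endpoint. This is a sketch under assumptions: the hostruntime path shape and api-version shown here are assumptions based on the public management API and may differ from the exact operations documented in the whitepaper, so verify them there.

# Minimal sketch: list the workflows of a Logic App Standard resource.
# Assumes the azure-identity and requests packages and Reader access on the
# Logic App. The hostruntime path and api-version are assumptions to verify.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"      # hypothetical placeholders
RESOURCE_GROUP = "<resource-group>"
LOGIC_APP_NAME = "<logic-app-standard-name>"

token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default").token

url = (
    "https://management.azure.com"
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"
    f"/providers/Microsoft.Web/sites/{LOGIC_APP_NAME}"
    "/hostruntime/runtime/webhooks/workflow/api/management/workflows"
    "?api-version=2018-11-01"
)

response = requests.get(url, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
payload = response.json()
# The list endpoint is expected to return an array of workflow descriptions.
for workflow in (payload if isinstance(payload, list) else payload.get("value", [])):
    print(workflow.get("name"))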

Where can I download it?

You can download the whitepaper here:

I hope you enjoy reading this paper. Any comments or suggestions are welcome.

Big thanks to my team member Luis Rigueira for contributing to this whitepaper as a Co-Author!

Author: Sandro Pereira

Sandro Pereira lives in Portugal and works as a consultant at DevScope. Over the past years, he has been implementing integration scenarios, both on-premises and in the cloud, for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server, and different technologies such as AS2, EDI, RosettaNet, SAP, TIBCO, etc.

He is a regular blogger, international speaker, and technical reviewer of several BizTalk books, all focused on integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices” and has been awarded MVP every year since 2011 for his contributions to the integration community.

BizTalk Health Monitor Dashboards Customization: Monitoring BizTalk Host Instances Status

Have you noticed that the default BizTalk Health Monitor Dashboard doesn’t monitor/report the status of the BizTalk Server Host Instances?

A few weeks ago, while delivering a BizTalk Server training course, I was presenting and playing around with the BizTalk Health Monitor with my “students” and explaining the importance of actively monitoring your BizTalk Server platform. Because my development environment is a machine that doesn’t have many errors and is properly configured, much like a production environment, it is difficult to show the BizTalk Health Monitor reporting problems, so I decided to stop some of the Host Instances! It is an easy way to “emulate” a problem:

However, when I ran the BizTalk Health Monitor, I realized the Host Instance tile was green!

Notice also that the information provided states that I have 10 Host Instances and only 8 are running. I was surprised by that behavior. I confirmed with the Microsoft Engineering team, and this is not a bug. Actually, by default, the Host Instances dashboard tile is NOT supposed to report a warning if some Host Instances are stopped. This tile reports, in fact, what Microsoft has in the category “BizTalk Host Instance” of the KPI view:

Each default dashboard tile normally reports the content of one or more categories of the different views (Warnings, Summary, Key Indicators, Queries Output…); however, we cannot change the content of these tiles.

Now the main question is: can we make the BizTalk Health Monitor “watch” the status of the Host Instances and raise alerts?

Luckily for us, the answer is yes, and it is not that difficult. The tool allows us to:

  • Add custom rules that will allow us to query the environment.
  • And add our own custom tiles in our profile Dashboard view.

That also means that each profile can have its own monitoring customizations.

To create some custom rules to monitor the status of the Host Instances, assuming that you have the BizTalk Health Monitor tool open, we need to:

  • Right-click over the Default profile, and select the Profile settings… option from the context menu.
  • From the Default profile – Profile settings window, select the Rules tab and then click on New rule.
  • From the New Rule – Select Query window, expand the Important target category, then select the BizTalk Host Instances sub-category, and click Ok.
  • On the New Rule (Query: BizTalk Host Instances) window, select the My Rule option from the left tree and:
    • On the Caption property, give a name to the rule: Stopped Host Instances.
    • On the Comment property, provide a small description: Monitoring BizTalk Server Host Instances status.
    • On the Trigger Actions panel, select the Each time a row validated all the Rule conditions option.
    • And click Commit changes.
  • On the New Rule (Query: BizTalk Host Instances) window, select the Condition 1 option from the left tree and:
    • On the Column to Check property, leave the default value: %GLOBALPROP_REPORTVALUE:Running%.
    • On the Operator property, from the dropdown list, select the option: IS DIFFERENT OF.
    • On the Comparison value property, type Yes.
    • And click Commit changes.
  • On the New Rule (Query: BizTalk Host Instances) window, on the Condition option from the left tree, right-click and select the option New Condition:
  • On the New Rule (Query: BizTalk Host Instances) window, select the Condition 2 option from the left tree and:
    • On the Column to Check property, leave the default value: %GLOBALPROP_REPORTVALUE:Running%.
    • On the Operator property, from the dropdown list, select the option: IS DIFFERENT OF.
    • On the Comparison value property, type Not Applicable.
    • And click Commit changes.
  • On the New Rule (Query: BizTalk Host Instances) window, select the Add Summary or Warning Entry option from the left tree under the Actions option and:
    • On the Category property, type: Host Instances.
    • On the Severity property dropdown, select the Red Warning option.
    • On the Caption property, type: Host Instances Status.
    • On the Value property, type: Is %GLOBALPROP_REPORTVALUE:Name% running: %GLOBALPROP_REPORTVALUE:Running%
    • And click Commit changes.
  • Finally, click Ok.

You can always try the custom rule by clicking Test.
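
Under the covers, the BizTalk Host Instances query reads the MSBTS_HostInstance WMI class. If you want to reproduce the same check outside BHM, here is a minimal Python sketch, assuming the wmi package running on the BizTalk Server machine; the ServiceState mapping below is the commonly documented one, so verify it against your environment:

# Minimal sketch: report BizTalk Host Instances that are not running,
# mirroring the custom BHM rule above. Assumes the 'wmi' package
# (pip install wmi) and sufficient permissions on the BizTalk machine.
import wmi

# Commonly documented MSBTS_HostInstance.ServiceState values (verify locally).
SERVICE_STATE = {1: "Stopped", 2: "Start pending", 3: "Stop pending",
                 4: "Running", 8: "Unknown"}

biztalk = wmi.WMI(namespace=r"root\MicrosoftBizTalkServer")
for instance in biztalk.MSBTS_HostInstance():
    state = SERVICE_STATE.get(instance.ServiceState, "Not Applicable")
    # Isolated hosts report a state that BHM shows as "Not Applicable"; skip those.
    if state not in ("Running", "Not Applicable"):
        print(f"WARNING: host instance '{instance.HostName}' is {state}")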

This rule will gather information about Host Instances that are not running. Now we are going to create another rule to gather information about Host Instances that are running. To do that, we need to:

  • From the Default profile – Profile settings window, select the Rules tab and then click on New rule.
  • From the New Rule – Select Query window, expand the Important target category, then select the BizTalk Host Instances sub-category, and click Ok.
  • On the New Rule (Query: BizTalk Host Instances) window, select the My Rule option from the left tree and:
    • On the Caption property, give a name to the rule: Running Host Instances.
    • On the Comment property, provide a small description: Monitor Running Host Instances status.
    • On the Trigger Actions panel, select the Each time a row validated all the Rule conditions option.
    • And click Commit changes.
  • On the New Rule (Query: BizTalk Host Instances) window, select the Condition 1 option from the left tree and:
    • On the Column to Check property, leave the default value: %GLOBALPROP_REPORTVALUE:Running%.
    • On the Operator property, from the dropdown list, select the option: IS EQUAL TO.
    • On the Comparison value property, type Yes.
    • And click Commit changes.
  • On the New Rule (Query: BizTalk Host Instances) window, select the Add Summary or Warning Entry option from the left tree under the Actions option and:
    • On the Category property, type: Host Instances.
    • On the Severity property dropdown, select the Information option.
    • On the Caption property, type: Host Instances Status.
    • On the Value property, type: Is %GLOBALPROP_REPORTVALUE:Name% running: %GLOBALPROP_REPORTVALUE:Running%
    • And click Commit changes.
  • Finally, click Ok.

Make sure that the two custom rules are selected, and then perform another analysis of your platform.

Now, what we need to do is create a custom tile to pin to our dashboard. A custom tile can indeed be created easily from any entry or category of the Warning view, Summary view, Key Indicators view, or query output view. And to do that, we need to:

  • After analyzing our BizTalk Server environment, expand the report and then select the Summary option.
  • On the Summary report page, scroll down until you find the Host Instances summary, right-click on Host Instances, and select the option Pin to the dashboard.
  • A new window will appear, saying that a new item was added to the dashboard. Click Ok.
  • If we now click on the Default profile, we will see that the Favorite tile was added to the dashboard.
  • We can customize the name of that tile by right-clicking and selecting the Edit option.
  • On the Favorite Tile – Favorite window:
    • On the Caption property, type: Host Instances Status.
    • On the Comment property, type: Host Instances Status.
    • And click Ok.

And finally, test it by doing another analysis of the environment.

How amazing is this!

Thanks to all that helped me document this feature. You know who you are!

Hope you find this useful! So, if you liked the content or found it useful and want to help me write more content, you can buy (or help buy) my son a Star Wars Lego! 

Author: Sandro Pereira


March 13, 2023 Weekly Update on Microsoft Integration Platform & Azure iPaaS

Using Logic Apps to interact with ChatGPT

ChatGPT, the AI chatbot that everyone is talking about (and to)! But what is ChatGPT, and what is the importance of AI in the current world context?

ChatGPT is a large language model developed by OpenAI that can generate human-like responses to text prompts. It is part of the GPT (Generative Pre-trained Transformer) family of language models, which have been trained on massive amounts of text data using deep learning techniques. In a few words, it is a conversational artificial intelligence platform. GPT stands for Generative Pre-trained Transformer, and the prefix Chat means that it allows you to get everything you are looking for in a simple chat.

The importance of AI, and specifically of language models like ChatGPT, in the current world is that they have the potential to transform the way we interact with technology and with each other. Here are some examples of how AI can be beneficial in various fields:

  • Customer service: AI-powered chatbots can help businesses automate customer support and provide quick and efficient responses to common queries.
  • Healthcare: AI can be used to analyze medical data and provide insights that can help doctors make more accurate diagnoses and treatment plans.
  • Education: Language models like ChatGPT can be used to develop personalized learning experiences for students, providing instant feedback and adapting to their individual needs.
  • Natural language processing: AI can help improve communication between people who speak different languages by automatically translating text and speech in real-time.
  • Personal assistants: AI-powered personal assistants can help people manage their daily tasks, schedule appointments, and provide helpful reminders.

and many more.

The future of AI is very promising as technology continues to evolve and improve rapidly, and we can expect many developments and advancements. One of them will be increased automation. AI is already being used to automate many tasks in various industries, and this trend is expected to continue. As AI algorithms become more sophisticated, we can expect to see even more automation, particularly in fields such as manufacturing, logistics, and transportation.

Overall, the future for AI is bright, and we can expect to see continued advancements and innovations that will have a significant impact on our lives. However, it is important to approach these developments cautiously and ensure that AI is developed and used responsibly and ethically.

So, let’s play a little just for fun! And combine Logic Apps with ChatGPT!

In this blog post, we will be creating a Logic App that will be responsible for interacting with ChatGPT to obtain an organic answer. This Logic App can then be called by your tools or programs like, for example, a PowerApp, serving as your integration layer and containing more business logic inside if you need it.

Create a ChatGPT Account

First and foremost, you need to create a ChatGPT account, so follow these steps:

  • On the Welcome to ChatGPT page, select Sign up.
  • On the Create your account page, make sure you create an account.
  • On the Tell us about you page, confirm your name and click Continue to accept the terms.
  • On the Verify your phone number page, type your phone number and click Send code.
  • On the Enter code page, enter the code you received on your phone.

You are ready to rumble! You can now use ChatGPT, but let’s go a little deeper and create the key we will need to interact with ChatGPT from our Logic App:

  • That will open an API key generated popup. Make sure you copy that key to a safe place or to your notes to use later on.

Create a Logic App

Next, we need to create a Logic App. For simplicity, we are going to use a Logic App Consumption and name it LA-ChatGPT-POC, and as for the trigger, we are going to use a Request > When a HTTP request is received, so:

  • From the Search connectors and triggers, type Request and select the Request connector and then the trigger: When a HTTP request is received.

We are going to receive a text payload – Content-Type: text/plain – so we will be using the default HTTP Method POST, and we will not need to provide any Request Body JSON Schema, since we will be receiving plain text. That means leaving the trigger configuration as is.

Note that once we save the Logic App, a URL will be generated that we can later use to invoke the workflow.

Next, in our business logic, we need to add an HTTP action to be able to interact with ChatGPT. Diving into the ChatGPT documentation, we quickly find the endpoint to which this request should be sent: https://api.openai.com/v1/chat/completions

If you want to know more about this topic, you can follow this link: https://platform.openai.com/docs/api-reference/chat/create.

Once again, for the sake of simplicity, we are not going to implement error handling inside our workflow to control and deal with failures – in real cases, you should empower your processes with these capabilities/functionalities.

Next, on our Logic App:

  • Click on + New step, and from the search text box, type HTTP and select the HTTP connector followed by the HTTP action.
  • And do the following configurations:
    • Set the Method property to POST.
    • On the URI property, enter the URL that we mentioned previously: https://api.openai.com/v1/chat/completions
    • On the Headers property, add the following header:
      • Authorization: Bearer <your API key> (the key you generated earlier)
      • Content-Type: application/json
    • On the Body property, we are going to add the following JSON message:
{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "content": "@{triggerBody()}"
    }
  ]
}

Once again, you can follow the ChatGPT documentation to see how to send the requests and their structure.
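
If you want to sanity-check the request outside Logic Apps before wiring it into the workflow, here is a minimal Python sketch of the same call, assuming the requests package and your API key stored in an environment variable:

# Minimal sketch of the same HTTP call the Logic App action performs.
# Assumes the requests package and the key in the OPENAI_API_KEY variable.
import os
import requests

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Hello?"}],
    },
)
response.raise_for_status()
# Same response shape the Logic App expression navigates later on.
print(response.json()["choices"][0]["message"]["content"].strip())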

If we save this Logic App as is and try it with a simple hello question: Hello?

On the run from that Logic App, in the HTTP – Call ChatGPT action output, you will see something like this, as the documentation already anticipates:

{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "nnHello there, how may I assist you today?",
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}

Now, to finalize our Logic App, let’s add a response:

  • Click on + New step, and from the search text box, type Request and select the Request connector followed by the Response action.
  • And do the following configurations:
    • Set the Status Code property to 200.
    • On the Headers property, add the following header:
      • Content-Type: text/plain
    • On the Body property, we are going to add the following expression:
      • trim(outputs('HTTP_-_Call_ChatGPT')?['body']?['choices']?[0]?['message']?['content'])

Of course, if you are trying, you need to adjust the name of the HTTP action according to your scenario.

You are probably wondering: why do we use the trim() function in our expression?

We use the trim function to remove any whitespace characters from the beginning and end of the message content before returning it. This should result in a clean message without the leading "\n\n".

In the end, visually, the overall workflow should end up like this:

And now, we just need to save our Logic App and test it!

Testing our process

So, after saving our Logic App, we will take the URL present on the When a HTTP request is received trigger and use it in Postman to test our process – you are free to use any other tool.

Let’s start with the basics and ask: Hello? to see the expected response from ChatGPT:

Now, let’s ask a more difficult question: Who is Sandro Pereira? And the answer surprised me!

As an AI language model, I cannot properly answer subjective questions such “Who is Sandro Pereira?” since I cannot browse the internet nor access a person’s thoughts or opinions. However, based on online searches Sandro Pereira appears to be a well-known Portuguese software integration professional, speaker, author, and a Microsoft Azure MVP (Most Value Professional) with more than 10 years of experience in the field.

Nicely done, ChatGPT! You only failed on the number of years of field experience, which is more than 16.

Finally, let’s ask: Can you suggest me a plate for dinner?

Where can I download it?

You can download the complete Logic App solution here:

Credits

Kudos to my team member Luis Rigueira for participating in this proof of concept!

Hope you find this useful! So, if you liked the content or found it useful and want to help me write more content, you can buy (or help buy) my son a Star Wars Lego! 

Author: Sandro Pereira


March 6, 2023 Weekly Update on Microsoft Integration Platform & Azure iPaaS

How to get the Error Message with Logic App Try-Catch (Part III) – Using a generic Logic App

A few months ago, I wrote two blog posts about How to get the Error Message with Logic App Try-Catch, which you can find here:

Of course, in this series of posts, we are addressing Logic App Consumption. We can actually implement the same strategy for Logic App Standard, but the APIs will be different – this is something that I will write about in the future.

Nevertheless, when I published the second part of this series, I mentioned that we could actually use a no-code/low-code approach, using a Logic App to perform the same operation we were doing with code inside an Azure Function. That made some of my readers curious, and they asked me if I was going to do a third part addressing that scenario. Well, it took some time, but here it is!

What we intend to do here is create a generic Logic App Consumption workflow that can dynamically catch the actual error message and failing action inside a run of another Logic App. This means that we will discard generic errors like “An action failed. No dependent actions succeeded.” – which don’t tell us anything about what really happened during the workflow, only that a subsequent child action failed – and dig deeper to find the real error behind them.

The Logic App will receive the same inputs as the Azure Function that we described in the previous post:

  • Subscription Id;
  • Resource name;
  • Logic App name;
  • Run id;

But in this case, the inputs come in JSON format, using the Request > When a HTTP Request is received trigger:

{
    "subscriptionId": "xxxxx",
    "resourceGroup": "RG-DEMO-LASTD-MAIN",
    "workflowName": "LA-catchError-POC",
    "runId": "08585259279877955762218280603CU192"
}

Of course, to do this, after we add the trigger, we need to click on Use sample payload to generate schema and paste the above JSON sample for the editor to generate the JSON schema.

The next step is to invoke the Azure Logic App REST API in order to get the run history:

GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Logic/workflows/{workflowName}/runs/{runName}/actions?api-version=2016-06-01

Here is the link to the Microsoft documentation about it: Workflow Run Actions – List.

We are going to do that by creating a new HTTP action.

Of course, we need to:

  • Specify the Method parameter to be GET.
  • On the URI parameter, copy the URL described above and replace:
    • {subscriptionId} with the subscriptionId token present in the trigger
    • {resourceGroupName} with the resourceGroup token present in the trigger
    • {workflowName} with the workflowName token present in the trigger
    • {runName} with the runId token present in the trigger
  • On the Headers parameter, add the following header:
    • Content-Type: application/json
  • On the Authentication type, we will use Managed identity to dynamically generate the OAuth token necessary to invoke the Azure Logic App REST API.

Managed identities in Azure Logic Apps are an authentication mechanism that allows the workflow to access other services without having the user define the credentials for those services inside the workflow actions.

Of course, to use it, we need to go to our Logic App resource and enable managed identity by:

  • On your Logic App, from the left menu, go to the Identity option present in the Settings group and, once there, on the Status property, click On; this will generate an Object (principal) ID.

Later on, we will be setting the correct permissions.
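
For local experiments, the same Workflow Run Actions – List call can be reproduced with a few lines of Python. This is a sketch assuming the azure-identity and requests packages; when run locally, DefaultAzureCredential falls back to your Azure CLI or Visual Studio sign-in instead of the managed identity:

# Minimal sketch of the call the HTTP action above performs, plus the
# Failed-status filtering that the Filter Array action will do next.
import requests
from azure.identity import DefaultAzureCredential

def list_run_actions(subscription_id, resource_group, workflow_name, run_id):
    token = DefaultAzureCredential().get_token(
        "https://management.azure.com/.default").token
    url = (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Logic/workflows/{workflow_name}"
        f"/runs/{run_id}/actions?api-version=2016-06-01"
    )
    response = requests.get(url, headers={"Authorization": f"Bearer {token}"})
    response.raise_for_status()
    return response.json()["value"]

# Keep only the failed actions, exactly what the Filter Array action does.
failed = [a for a in list_run_actions("<sub>", "<rg>", "<workflow>", "<run-id>")
          if a.get("properties", {}).get("status") == "Failed"]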

Getting back to our Logic App: now that we have an action to invoke the Azure Logic App REST API to get the run history, we are going to use a Filter Array action to filter the actions with a status equal to Failed, as we described in the first blog of this series:

  • Select Add an action.
  • In the search box, enter Filter array, and from the result panel, select the Data Operations – Filter array action.
  • And provide the following information:
    • On the From property, place the following expression:
      • body('Call_Logic_App_Rest_API_To_Get_Run_History')?['value']
      • Note: 'Call_Logic_App_Rest_API_To_Get_Run_History' is the name of the previous HTTP action, so you need to adjust this value to your scenario.
    • On the condition property, on the left textbox, place the following expression:
      • item()?['properties']?['status']
      • Note: this expression is always the same, regardless of your scenario.
    • Leave the operator as is equal to, and on the right textbox, place the following value: Failed

So, with this last action, we will create an array of all the actions whose status property is equal to Failed. However, that array can also contain the generic error that we need to discard. To achieve that, we are going to create a For each action to traverse the array and find the correct error:

As I mentioned in the last post, depending on the scenario and actions you are using, the error information may be in different places and structures of the JSON in the run history:

  • Usually, inside the properties we have a structure called error that has the error message inside the message property: action["properties"]["error"]["message"]
  • But sometimes this error structure doesn’t exist – the HTTP action is a good example of this – and we need to get the uri information from the outputsLink structure: action["properties"]["outputsLink"]["uri"], and invoke that URL to get the correct error message.

These are different behaviors that we need to handle inside our workflow. Therefore, we will add another Action, this time a Condition with the following configuration:

  • items('For_each_Filter_Array_Run_History')?['properties']?['error'] is equal to null

As you see in the picture below.

What does this mean? It means that if inside the properties of the JSON, the object error is null (does not exist):

  • Then the Logic App goes through the True side of the condition, and there we need to implement the logic to get and invoke the URI to obtain the error details.
  • Otherwise, if it is false and the properties structure indeed contains an error object, it goes through the False side of the condition, meaning we can grab the error message directly from there.

True branch

Let’s focus on the True side of the condition. As we mentioned above, the details of the error information will not be present directly in the run history. Instead, we will have something like this:

"status": "Failed",
        "code": "NotSpecified"
      },
      "id": "/subscriptions/XXXXXX/resourceGroups/RG-DEMO-LASTD-MAIN/providers/Microsoft.Logic/workflows/LA-catchError-POC/runs/08585259124251183879640913293CU37/actions/Condition_3",
      "name": "Condition_3",
      "type": "Microsoft.Logic/workflows/runs/actions"
    },
    {
      "properties": {
        "outputsLink": {
          "uri": "https://XXXX-XX.westeurope.logic.azure.com:XXX/workflows/XXXXXX/runs/08585259124251183879640913293CU37/actions/Condition_4/contents/ActionOutputs?api-version=2016-06-01&se=2023-02-06T20%3A00%3A00.0000000Z&sp=%2Fruns%2F08585259124251183879640913293CU37%2Factions%2FCondition_4%2Fcontents%2FActionOutputs%2Fread&sv=1.0&sig=XXXXX",
          "contentVersion": "XXXXXX==",
          "contentSize": 709,
          "contentHash": {
            "algorithm": "md5",
            "value": "XXXXX=="

The status is Failed, but there are no error message details present in this situation, so the error must be somewhere else – and indeed it is. The error, in this case, is exposed as a URL in the uri property, so if we follow this link and paste it into a browser, this is what we receive in our sample:

{"error":{"code":"AuthorizationFailed","message":"The authentication credentials are not valid."}}

That means that, for us to get the correct error message, we have to take that link and perform another HTTP call, and this is exactly what we are going to do with the next action:

  • So, on the true branch of the condition, add a new HTTP action with the following configuration:
    • Set the Method property as GET.
    • Set the URI property to be dynamic using the output of the filter array:
      • items('For_each_Filter_Array_Run_History')?['properties']?['outputsLink']?['uri']
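
In code terms, this second hop is just one more GET against the link embedded in the failed action. Continuing the earlier Python sketch, and assuming the outputsLink URI is pre-signed (it carries sp/sv/sig query parameters, so no bearer token should be needed – verify that assumption in your environment):

# Continuing the sketch: for a failed action without an inline error object,
# follow the signed outputsLink URI to fetch the error details.
import requests

def get_output_details(action):
    uri = action.get("properties", {}).get("outputsLink", {}).get("uri")
    if uri is None:
        return None
    response = requests.get(uri)   # pre-signed URI, no Authorization header
    response.raise_for_status()
    return response.json()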

But unfortunately, even the response that comes from this URI with the error details can appear in different shapes. We have already found two scenarios:

  • The error can appear in this type of structure:
{
  "statusCode": 404,
  "headers": {
    "Pragma": "no-cache",
    "x-ms-failure-cause": "gateway",
    "x-ms-request-id": "XXXXXXX",
    "x-ms-correlation-request-id": "XXXXXX",
    "x-ms-routing-request-id": "XXXXXX",
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "X-Content-Type-Options": "nosniff",
    "Cache-Control": "no-cache",
    "Date": "Fri, 03 Feb 2023 12:19:12 GMT",
    "Content-Length": "302",
    "Content-Type": "application/json; charset=utf-8",
    "Expires": "-1"
  },
  "body": {
    "error": {
      "code": "InvalidResourceType",
      "message": "The resource type 'workflows' could not be found in the namespace 'Microsoft.Logic' for api version '2016-06-01''. The supported api-versions are '2015-02-01-preview,2015-08-01-preview,2016-06-01,2016-10-01,2017-07-01,2018-07-01-preview,2019-05-01'."
    }
  }
}
  • And sometimes like this:
[
  {
    "name": "Condition",
    "startTime": "2023-02-06T10:21:38.4195084Z",
    "endTime": "2023-02-06T10:21:38.4195084Z",
    "trackingId": "fd8b62ec-4745-4e85-84b4-da57b8e8b8c2",
    "clientTrackingId": "08585259279877955762218280603CU192",
    "code": "BadRequest",
    "status": "Failed",
    "error": {
      "code": "InvalidTemplate",
      "message": "Unable to process template language expressions for action 'Condition' at line '0' and column '0': 'The template language function 'startsWith' expects its first parameter to be of type string. The provided value is of type 'Null'. Please see https://aka.ms/logicexpressions#startswith for usage details.'."
    }
  }
]

For that reason, and in order to provide the best detail possible, we decided to create a variable to hold the type of error we are dealing with:

  • First, we add a Parse JSON action to parse the response of the previous HTTP Call using the following schema:
{
    "items": {
        "properties": {
            "body": {
                "properties": {
                    "error": {
                        "properties": {
                            "code": {
                                "type": "string"
                            },
                            "message": {
                                "type": "string"
                            }
                        },
                        "type": "object"
                    }
                },
                "type": "object"
            },
            "clientTrackingId": {
                "type": "string"
            },
            "code": {
                "type": "string"
            },
            "endTime": {
                "type": "string"
            },
            "error": {
                "properties": {
                    "code": {
                        "type": "string"
                    },
                    "message": {
                        "type": "string"
                    }
                },
                "type": "object"
            },
            "name": {
                "type": "string"
            },
            "startTime": {
                "type": "string"
            },
            "status": {
                "type": "string"
            },
            "trackingId": {
                "type": "string"
            }
        },
        "required": [
            "name",
            "startTime",
            "endTime",
            "trackingId",
            "clientTrackingId",
            "status"
        ],
        "type": "object"
    },
    "type": "array"
}
  • And then a Variables – Set variable action with the following expression to define the type of the error:
    • if(equals(first(body('Parse_Outputs_URL'))?['error'],null),if(equals(first(body('Parse_Outputs_URL'))?['body']?['error'],null),'Default','BodyError'),'Error')

What does this expression do?

  1. It first checks if the error key’s value in the response body’s first element equals null.
    • If that evaluates to true, it then checks if the value of the error key in the body object of the first element of the response body is equal to null.
      • If this second check is true, it returns the string Default – this deals with unpredicted structures.
      • If the second check is false, it returns the string BodyError.
    • If the first check is false, it returns the string Error.
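
The same three-way classification is easier to read in plain code. Here is a hedged Python equivalent of the nested if() above (a mirror of the logic, not a literal translation of the workflow definition):

# Python equivalent of the nested if() expression that classifies the
# structure of the response fetched from the outputsLink URI.
def classify_error(parsed_output):
    """parsed_output: first element of the array parsed by Parse_Outputs_URL."""
    if parsed_output.get("error") is not None:
        return "Error"       # error object at the root of the element
    body = parsed_output.get("body") or {}
    if body.get("error") is not None:
        return "BodyError"   # error object nested under 'body'
    return "Default"         # unpredicted structure - fall back to raw output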

To define the response with the correct error message structure, we will be adding a Switch action with 3 branches:

  • Case – Error
    • Inside this branch, we will be defining the value of another variable – that will be our response structure – to be:
{
"name": "@{items('For_each_Filter_Array_Run_History')?['name']}",
"type": "@{items('For_each_Filter_Array_Run_History')?['type']}",
"status": "@{items('For_each_Filter_Array_Run_History')?['properties']?['status']}",
"code": "@{items('For_each_Filter_Array_Run_History')?['properties']?['code']}",
"startTime": "@{items('For_each_Filter_Array_Run_History')?['properties']?['startTime']}",
"endTime": "@{items('For_each_Filter_Array_Run_History')?['properties']?['endTime']}",
"errorMessage": "@{first(body('Parse_Outputs_URL'))?['error']?['message']}"
}
  • Case – Body Error
    • Inside this branch, we will be defining the value of another variable – that will be our response structure – to be:
{
"name": "@{items('For_each_Filter_Array_Run_History')?['name']}",
"type": "@{items('For_each_Filter_Array_Run_History')?['type']}",
"status": "@{items('For_each_Filter_Array_Run_History')?['properties']?['status']}",
"code": "@{items('For_each_Filter_Array_Run_History')?['properties']?['code']}",
"startTime": "@{items('For_each_Filter_Array_Run_History')?['properties']?['startTime']}",
"endTime": "@{items('For_each_Filter_Array_Run_History')?['properties']?['endTime']}",
"errorMessage": "@{first(body('Parse_Outputs_URL'))?['body']['error']?['message']}"
}
  • and Default
    • And finally, inside this branch, we will be defining the value of another variable – that will be our response structure – to be:
{
"name": "@{items('For_each_Filter_Array_Run_History')?['name']}",
"type": "@{items('For_each_Filter_Array_Run_History')?['type']}",
"status": "@{items('For_each_Filter_Array_Run_History')?['properties']?['status']}",
"code": "@{items('For_each_Filter_Array_Run_History')?['properties']?['code']}",
"startTime": "@{items('For_each_Filter_Array_Run_History')?['properties']?['startTime']}",
"endTime": "@{items('For_each_Filter_Array_Run_History')?['properties']?['endTime']}",
"errorMessage": @{body('Call_Outputs_URL')}
}

False branch

Now, let’s focus on the False side of the condition. As we mentioned previously, in this scenario, we must carefully discard the following generic message that can appear: An action failed. No dependent actions succeeded.

To do that, we will be adding a Condition with the following configuration:

  • items('For_each_Filter_Array_Run_History')?['properties']?['error']?['message'] is not equal to An action failed. No dependent actions succeeded.

If the error is the generic one, the workflow will ignore it and move on to the next error in the run history. Note: if we have this generic error message, we will always have at least two Failed actions – and we will always want the other one’s details, because that is where the real, detailed error message lives.

If the error is not the generic one, then we are going to define the output message in our support variable using the following expression:

{
"name": "@{items('For_each_Filter_Array_Run_History')?['name']}",
"type": "@{items('For_each_Filter_Array_Run_History')?['type']}",
"status": "@{items('For_each_Filter_Array_Run_History')?['properties']?['status']}",
"code": "@{items('For_each_Filter_Array_Run_History')?['properties']?['code']}",
"startTime": "@{items('For_each_Filter_Array_Run_History')?['properties']?['startTime']}",
"endTime": "@{items('For_each_Filter_Array_Run_History')?['properties']?['endTime']}",
"errorMessage": "@{items('For_each_Filter_Array_Run_History')?['properties']?['error']?['message']}"
}

as you can see in this picture:

To finalize, we just need to add the Response action with the following configuration:

  • Set the Status Code as 200
  • Add the following header in the Headers properties:
    • Content-Type: application/json
  • Set the Body property with the value of the output message variable that we used previously.

In the end, our Logic App will look like this:

To finalize and for this to work properly, we need to configure the Logic App managed identity and permissions.

Configure the Logic App managed identity and permissions

We already described at the beginning that we need to enable the managed identity. But in order for this Logic App to be able to extract the error information from other Logic Apps’ run histories, we need to give that managed identity the Logic App Operator role in each resource group that contains the Logic Apps whose run history we want to access and read.

For this last step:

  • On the Azure Portal, access the Resource Group that has our Logic App from where we want to grab the correct error message.
  • On the Resource Group page, select the option Access control (IAM).
  • On the Access control (IAM) panel, click Add > Add role assignment.
  • On the Add role assignment page, on the Role tab, search for Logic App and then select the option Logic App Operator, and then click Next.
  • On the Members tab, select the Assign access to be Managed identity, and from the Members:
    • Select your subscription on the subscription property.
    • On the Managed identity list of options, select Logic App.
    • And on the Select property, select the managed identity of our catch-error Logic App and then click Close.
  • Click on Review + Assign and then Review + Assign again.

We can now use this generic Logic App to read the error details from inside our other Logic Apps.
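
As a usage example, any client – or the catch scope of another workflow – can now call this generic Logic App with the four identifiers. A minimal Python sketch, assuming the callback URL generated by the When a HTTP request is received trigger (a placeholder below) and the response structure we defined above:

# Minimal sketch: invoke the generic error-catching Logic App.
import requests

CALLBACK_URL = "<logic-app-callback-url>"   # URL generated by the trigger

payload = {
    "subscriptionId": "<subscription-id>",
    "resourceGroup": "RG-DEMO-LASTD-MAIN",
    "workflowName": "LA-catchError-POC",
    "runId": "<failed-run-id>",
}

response = requests.post(CALLBACK_URL, json=payload)
response.raise_for_status()
# Assumes the output message structure defined earlier in this post.
print(response.json()["errorMessage"])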

Where can I download it?

You can download the complete Logic App solution here:

Credits

Kudos to my team member Luis Rigueira for participating in this proof of concept!

Hope you find this useful! So, if you liked the content or found it useful and want to help me write more content, you can buy (or help buy) my son a Star Wars Lego! 

Author: Sandro Pereira


Microsoft Integration Trends 2023 Webinar | March 2, 9 and 10 | Online

Just less than one day to go for the webinar series on accelerating BizTalk Server to Azure Integration Services migration!

The first session will occur on March 2 (tomorrow) from 10 – 11AM GMT, hosted by Kent Weare, about Microsoft offerings to accelerate BizTalk to Azure migration.

  • Do you wonder what Microsoft offers to advance BizTalk to Azure migration? Kent Weare, Principal Product Manager at Microsoft, will highlight the key resources in this session.

The second session will be with me, which will occur on March 9 from 10 – 11AM GMT, about migration from BizTalk Server to Azure Integration Services.

  • How do Enterprises deal with BizTalk to Azure migration? Sandro Pereira, Head of Integration at DevScope, will share the migration strategies and best practices acquired from his experience with multiple Global Enterprises!

And finally, the third and last session will occur on March 10 from 10 – 11AM GMT, hosted by Michael Stephenson and Lex Hegt, about Addressing the Operational challenges in BizTalk Server to Azure Integration Services migration.

  • Michael Stephenson, Coach and Consultant, Microsoft Azure Adoption, Connected Systems Consulting Ltd, and Lex Hegt, Lead Product Consultant at BizTalk360 & Serverless360, will educate us on the Operations perspective of BizTalk Server to Azure migration highlighting the importance of defining a support strategy using reliable solutions available.

Hurry up and save your spot now! The webinar is free, and if you are in the Enterprise Integration space, you don’t want to miss it.

Key takeaways from the webinar series

  • Microsoft offerings to advance the BizTalk to Azure migration.
  • Strategy options and planning considerations for migrations.
  • Best practices to accelerate BizTalk to Azure migration.
  • Operations aspects of the migration from BizTalk to Azure.

Join us in this webinar series for exclusive expert advice on BizTalk to Azure migration!

Link for the registration

You can register for this webinar series here:

Author: Sandro Pereira


February 27, 2023 Weekly Update on Microsoft Integration Platform & Azure iPaaS

Logic App Consumption Deployment: The trigger ‘…’ of current version of workflow ‘…’ has concurrency runtime configuration specified. Trigger concurrency runtime configuration cannot be removed once specified.

And yet another Logic App Consumption deployment issue, with more to come! As in the previous posts, while trying to deploy an existing Logic App Consumption thru Visual Studio 2019 in our development environment, I got the following error message:

Resource Microsoft.Logic/workflows 'LA-NAME' failed with message '{
  "error": {
    "code": "CannotDisableTriggerConcurrency",
    "message": "The trigger 'When_one_or_more_messages_arrive_in_a_topic_(peek-lock)' of current version of workflow 'LA-NAME' has concurrency runtime configuration specified. Trigger concurrency runtime configuration cannot be removed once specified."
  }
}'

This error happened because I tried to modify the Logic App Consumption Trigger from When one or more messages arrive in a topic (peek lock) to When one or more messages arrive in a topic (auto-complete).

Cause

The cause of the problem is simple to understand based on the error message: the trigger of the currently deployed version of the Logic App has Concurrency Control settings enabled. We can validate that by:

  • Clicking on the 3 dots (…) on the trigger and selecting the Settings option.
  • On the Settings window of the trigger, we can validate that the Concurrency Control option is enabled and defined with a Degree of Parallelism of 5.

That said, and despite the cause of the problem being easy to understand, the reason why this happens is not that clear. Still, it seems that, for some internal reason in the Logic App runtime, after you set up the trigger to have Concurrency Control enabled, you cannot revert that configuration. You cannot do it while deploying a new version of the Logic App thru Visual Studio, nor by going directly to the Azure Portal and performing the same actions.

From the Azure Portal or Visual Studio (it doesn’t matter which tool you use), if you try to:

  • Update the existing trigger to disable the Concurrency Control option and save it – it doesn’t work.
  • Delete the current trigger and add a new one – it doesn’t work either.

Again, this seems to be a limitation that currently exists in the Logic App Consumption runtime – I am not sure at this point if you will hit the same limitation/issue in Logic App Standard.
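
If you want to check which triggers already have concurrency pinned before attempting a deployment, here is a small Python sketch that reads the deployed workflow definition (assuming the azure-identity and requests packages; runtimeConfiguration/concurrency is the documented location for these settings):

# Sketch: list the triggers of a deployed Logic App Consumption workflow
# that have concurrency runtime configuration specified.
import requests
from azure.identity import DefaultAzureCredential

def triggers_with_concurrency(subscription_id, resource_group, workflow_name):
    token = DefaultAzureCredential().get_token(
        "https://management.azure.com/.default").token
    url = (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Logic/workflows/{workflow_name}"
        "?api-version=2016-06-01"
    )
    response = requests.get(url, headers={"Authorization": f"Bearer {token}"})
    response.raise_for_status()
    triggers = response.json()["properties"]["definition"].get("triggers", {})
    return {name: trigger["runtimeConfiguration"]["concurrency"]
            for name, trigger in triggers.items()
            if "concurrency" in trigger.get("runtimeConfiguration", {})}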

Solution

Fixing this issue is not that simple. In fact, we can’t really fix it, but we can apply one of two workarounds:

  • Workaround 1: if you don’t need to keep the run history of the existing Logic App.
  • Workaround 2: if you need to keep the run history of the existing Logic App.

Workaround 1

  • Delete the existing Logic App from the Azure Portal by selecting the Logic App and then, on the top menu, clicking the Delete option.
  • And then redeploy the Logic App.

This will solve the deployment problem, but again you will lose the run history.

Workaround 2

If you need to keep the run history, then unfortunately, the only option you have is to provide a different name to your Logic App, something like:

  • LA-NAME-V2

Make sure you change the name, at least in the LogicApp.parameters.json file, but I suggest you change it in the LogicApp.json file as well.

Just make sure you disable the “old” one – LA-NAME – and create a TAG specifying that it is deprecated.

After you validate that everything is working fine and you no longer need the run history of the old one, delete that Logic App to avoid confusion and keep the overall solution simple to manage.

Author: Sandro Pereira


Logic App Consumption Deployment: Deployment template validation failed: ‘The template parameters ‘…’ in the parameters file are not valid

As in the previous post, while trying to deploy an existing Logic App Consumption thru Visual Studio 2019 in our development environment, I got the following error message:

Template deployment returned the following errors:

Error: Code=InvalidTemplate;

Message=Deployment template validation failed: ‘The template parameters ‘name-of-the-parameter’ in the parameters file are not valid; they are not present in the original template and can therefore not be provided at deployment time. The only supported parameters for this template are ‘list-of-parameters-present-in-the-LogicApp’. Please see https://aka.ms/arm-pass-parameter-values for usage details.’.

The deployment validation failed.

Cause

The cause of the problem is once again quite simple, and the error description is really good, not only describing the problem but also providing the solution.

In my case, the error says that “arm_ServiceBus_Subscription_A” doesn’t exist – is not valid – in the template parameter file that I’m using to deploy the Logic App Consumption thru Visual Studio. And it also says that the only supported parameters for this template are:

  • arm_ServiceBus_Subscription_ABC
  • arm_ServiceBus_Connection_Name
  • arm_ServiceBus_Connection_DisplayName
  • arm_ServiceBus_Topic
  • arm_LA_InitialState

Solution

Fixing this issue is simple, and you have three options to choose from according to your scenario:

  • Remove/delete this template parameter from the parameters file.
  • Rename this parameter to a valid one.
  • Or add this ARM parameter to the LogicApp.json file.
    • Perhaps this last option is the most unlikely, since it would mean changing the code to include this parameter in some content or configuration of the Logic App’s actions or settings – what is the point of having an ARM parameter defined if you don’t really need it?
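
A quick way to catch this mismatch before deploying is to diff the parameter names in the two files. A minimal Python sketch, assuming the usual LogicApp.json / LogicApp.parameters.json project layout:

# Sketch: flag parameters present in the parameters file but missing from
# the ARM template - exactly what this deployment error complains about.
import json

with open("LogicApp.json") as f:
    template_params = set(json.load(f).get("parameters", {}))
with open("LogicApp.parameters.json") as f:
    file_params = set(json.load(f).get("parameters", {}))

for name in sorted(file_params - template_params):
    print(f"Invalid: '{name}' is in the parameters file but not in the template")
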
Author: Sandro Pereira
