BizTalk Server: Automation Deployment with Azure DevOps – Deploying the Project

Following Sandro’s last post, BizTalk Server: Automation Deployment with Azure DevOps – Create a build agent, we’re going to show how to create the deployment steps by building the Pipeline and the Release Pipeline with a few DevOps tasks.

The standard BizTalk Deployment task does a decent job in deploying the application, but it doesn’t handle changing tokens or registering DLLs in GAC.

To deploy to multiple machines or to change your bindings according to your environment, you have to make your bindings file dynamic. This means replacing your connections with variables.

Let’s start with the basics:

Creating the project and installing it in DEV

As always, it’s better to first create the DevOps repository and clone it in your machine.

Having this created, you need to get your project working and add a Deployment Project as well. This will contain pointers to the needed DLLs and binding files of your BizTalk project, as well as the name of the application to be deployed and some other configurations.

You will see that you can set the BizTalk assemblies path as well as other assemblies, pre- and post-processing scripts and the deployment sequence. This is one of the most important steps because, as you know, the order in which you deploy your BizTalk assemblies matters.

When referencing your BT projects, do make sure that the Application Project is using the same framework version as your other projects. If it’s not the same version, it will not be able to copy the DLLs to the referenced Path and will not build successfully.

Building this project will generate a ZIP file that contains all that is needed. You can try to publish it directly, after configuring the application.

The bindings file that is created with the project is just an empty template, so you’ll want to deploy your application in your Dev Environment and create those bindings. It will make a difference if you export your application bindings when it’s started and when it’s stopped, so keep that in mind.

For this example, I’m going to export the bindings with the Application fully stopped.

Your standard Bindings export will carry the ports and URIs/connections straight from the Admin console. Through a little magic, we will configure these values to be dynamic and it’s super easy.

Making your Bindings dynamic for deployment

Now you’ve exported the bindings and you want to make it ready for DevOps and to accept multiple configurations.

From my example, you can see that the ReceiveLocation and ReceivePort names are static. If we tokenize this, you can call it whatever you want, therefore reducing the risk of colliding with other existing ports in your end systems.

So, keeping in mind the desired token, I’m going to replace these values, ReceiveLocation address included, with a variable and token identifier. With a few magic touches, we end up with something like this:

And that’s it. Of course, this is a very small and simple example, but even with a goliath project, it will still be the same pattern. You find what you want to make dynamic, tokenize it, save and upload your changes to your Repo.

Building your Pipeline and Release Pipeline

Now you have your source code in your Repository, your bindings ready for dynamic changes and you want to deploy it.

You will need to set up your build Pipeline before you can get your Release ready, so get to work.

The Pipeline itself doesn’t need to be too complicated: you just need to build your solution, with or without the OutPath argument (I found that setting this made my life easier in some projects), and publish the drop.

With your drop created, your Release pipeline needs the following tasks:

  • Extract Files – to unzip your file
  • Replace Tokens (a great extension by Guillaume Rouchon, more info here)
  • Archive Files – to zip it back
  • BizTalk Server Application Deployment – I recommend this, but you can do it with PowerShell

Extracting your file contents is straightforward: you just need to select your zip in your drop contents and a destination folder. Keep in mind that you will need to know where it lands in order to zip it back.

Replacing the tokens works just as before: you select the *.xml mask or point directly to your bindings file and set the token pattern it should look for. Remember that the variables you define are case sensitive. You can also use a Variable Group; it is a great way to manage environment-specific variables or common variables that you might have.

Once this is done, you can proceed to recreate the zip file and its contents. The destination folder you’ve selected when unzipping will now be the root folder you are pointing to.

Remember to untick the “Prepend root folder name to archive paths” option. If you keep this selected, your file will end up with a structure like “Zip/bindings” instead of just “bindings”, and the deployment will fail because it’s not the expected folder structure. Also, tick the “Replace existing archive” option, otherwise you will create a copy and deploy the original version instead.

And for the final step, the Deployment Task. I chose to use the standard task instead of PowerShell, because I didn’t want to handle scripts at this point.

Select the Zip package and set the operation to Create. From what I’ve found out, this will Upsert your application, while Update will not create the app if it doesn’t exist.

And this is what you need. If you’ve set everything properly, your Release Pipeline will deploy your Application to your Server and get it up and running, with the parameters you’ve set in your bindings file.

It took a while to understand how this process worked but in the end, it turned out to be very simple and all it took was to apply the same concept we already used with the ARM deployment for Azure resources.

Happy coding!

Logic Apps: CI/CD Part 1- Building your Logic App

Continuous Integration/Continuous Delivery is a development practice that enables you to accelerate your deployments and delivery time to the customer, by reliably releasing software at any time and without manual intervention.

For this post series, I will explain how to enable this practice, oriented to Logic Apps and Azure Pipelines.

We will start by building the Logic App using Visual Studio. I will not cover Logic Apps Preview, because it’s still a preview feature, many changes can happen, and that could render all this useless.

As you may know, to create Logic Apps in Visual Studio, there are a few requirements, such as:

  • Visual Studio 2015, 2017, 2019 or greater, if available
  • Azure SDK
  • Azure Logic Apps Tools for Visual Studio Extension (if using VS)
  • An active Azure subscription
  • Time, will and patience.

After you have all this installed, you can begin to create and let your creativity flow!

We’ll start from scratch. Open your VS and start a new project by selecting the Azure Resource Group C# template and, after that, the Logic App template.

You will end up with a new project (and solution, if that’s the case) with 3 files. The PowerShell file is the deployment script that VS uses to automate the ARM deployment; only in special cases do you need to fiddle with this file.

The other two files are the Logic App code and the Parameters file. You will need to create a new one to be used as a template for the Azure Pipeline, so go ahead, copy the Parameters file and rename it to LogicApp.parameters.template.json.

You should end with something like this.

This Parameters Template file will contain our Tokens, which will be replaced in the Pipeline using the “Replace Tokens” Task. In the coming posts, I will explain how it works and why we’re using it.
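As a rough sketch of what that template can look like (the parameter names here are illustrative, and #{…}# is the default pattern the Replace Tokens task looks for), you’d have something like:

  {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
      "logicAppName": {
        "value": "#{LogicAppName}#"
      },
      "servicebus_connectionString": {
        "value": "#{ServiceBusConnectionString}#"
      }
    }
  }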

For the sake of simplicity, I’ll just use the Service Bus connector, where depending on the input, I’ll send a message to the Queue with the provided information.

After creating the connection, you will see that, in the back code, several parameters and a Resource node were created as well; these contain the link and inputs for this connection.

Even when working in a single Resource Group, it is good practice to prepare this for CI/CD, because even though it’s static, connections change, and instead of having to redo all of it, you just need to re-deploy the pipeline with the new configurations.

We will not be making any changes to the Resource node, only to the action path and parameters. Instead of having a fixed value, the action will point to the parameter itself, making it possible to have an ARM parameter configurable in the Pipeline.
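To make the idea concrete, here is a hedged sketch (the queueName and ServiceBusQueueName names are made up for illustration, not taken from the real project): the action references a Logic App parameter at runtime, and that parameter’s default value is fed by an ARM template parameter, which the pipeline can then override. In the action:

  "path": "/@{encodeURIComponent(encodeURIComponent(parameters('queueName')))}/messages"

And in the workflow definition’s parameters section:

  "queueName": {
    "type": "String",
    "defaultValue": "[parameters('ServiceBusQueueName')]"
  }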

Logic Apps: Recursive Logic Apps

While dwelling on my thoughts, a memory came to mind. Back in my college days, I was presented with a challenge: build a recursive Fibonacci algorithm in LISP, without using loops.

This was a challenge, because as you may know…

 (these are probably my favorite programming comics)

But this gave me the idea of testing this concept in Logic Apps.

I’ve built a fairly simple LA just to test and with minimal inputs.

Before I could add the recursive connection to the Logic App, I had to deploy it first, because you can only call  a LA or a Function if it’s already provisioned.

So, I’ve added the action after deployment, saved and tried to deploy again, and this came up:

This means that Logic Apps, by default, do not support recursive calls.

But I’m stubborn and I don’t give up easily. So, what would be the best way to call a LA knowing that I’d have to treat it like an external API?…

The answer is super simple. HTTP action!

We already have the URL, because we deployed it before, so there’s nothing stopping us from doing this.
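As a minimal sketch (the URI is the Logic App’s own callback URL, and the counter property is just the input I used for testing), the HTTP action looks something like this:

  "Call_myself": {
    "type": "Http",
    "inputs": {
      "method": "POST",
      "uri": "<this Logic App's own callback URL, copied after the first deployment>",
      "body": {
        "counter": "@add(int(triggerBody()?['counter']), 1)"
      }
    }
  }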

No objections this time, so let’s test!

TA-DAAA! How easy was that? In my case, I’ve used a simple counter to add and loop, but you can use any other condition to recursively loop through your logic, for example until a SQL record is updated.

You can add delays to ensure that you won’t be making calls every second, or delay until a specific time. The possibilities are endless.

Happy coding!

Logic Apps: Async processing pattern

Sometimes we have the need to perform a kind of “fire and forget” pattern in Logic Apps. Today’s post is a short one, but a very useful one.

Usually, a Logic App will have a synchronous pattern, meaning you call it and you will have to wait for it to finish processing.

But how do we configure our LA to receive a request and continue processing without us having to wait for it?

It’s quite simple actually, although not a very pretty thing to do.

The way to achieve this is to set a Response action right after the trigger action and in the settings, set the “Asynchronous Response” to true. It’s not pretty as I’ve said, but it will set the path for the async pattern we’re looking for.
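If you check the code view, this setting shows up as operationOptions on the Response action; a rough sketch:

  "Response": {
    "type": "Response",
    "kind": "Http",
    "inputs": {
      "statusCode": 202
    },
    "operationOptions": "Asynchronous",
    "runAfter": {}
  }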

There should be a flag that you could set in the Trigger to automate this and send back a response like this, but so far, this feature is not available yet.

The response will be sent to the calling system, whatever it is, with the status code 202 Accepted.

You can also set custom headers and a body, but it might not help much.

As you can see, the Response will automatically set a location header for you to “ping” to check the status. By default, the engine will refresh every 20 seconds.

So that’s all there is to it: a simple way to achieve an asynchronous pattern with Logic Apps. It’s not very pretty, but it works!

Happy coding!

Logic Apps: Catching errors

As part of development guides, it’s always a good idea to have a fallback plan and handle errors.

You can be 99.999% confident that your code won’t fail, but that 0.001% chance happens. “Anything that can go wrong will go wrong” – Murphy’s Law

And so, we resort to our very dear friend, Try-Catch.

In Logic Apps, it’s not exactly an out-of-the-box functionality, but it’s actually quite simple to achieve this and with a few steps. Also, there are multiple ways to catch your errors.

In this post, we will try two approaches:

  • Using a For-Each loop
  • and a Filter Array action.

Since I started developing LAs, I’ve used the For Each loop approach, but it has some flaws. It involves using a Parse JSON action to catch only the error message, but not all actions have the same schema.

So, the idea of the Filter array came to play. It’s actually quite easy as well and easy to maintain. You’ll find the same issue with the schemas, but it’s a faster approach.

Let’s dig in. I started by building a simple Logic App, just creating a couple of variables and an HTTP call that I know will fail. I mocked the results to ensure the outcome is what I needed.

I’ve also built a scope for the Try block and a second scope to handle the Catch block. You’ll have to set the Catch scope’s “Run after” properties to trigger only on failures, skips or timeouts; if not, it will also run on success.

It will always relate to the previous scope.

The For-Each loop approach

Now we start to build our Error handler. I’ve chosen the For Each loop first because it was faster to create since I’m used to it and even have some templates for it as well.

The For Each action requires an array to iterate over, which means we need to find one. The Scope isn’t an array, so what will we relate it to?

Well, the scope might not be, but there are N actions inside it, so if you search in the expressions box or the documentation, you’ll find the “result” expression, which records every result of the contained actions within a given scope.

Now, remember, this will need to point to the scope you want, but you will not have it in the Dynamic Content; you need to write it yourself, using “_” for spaces, because this expression handles the JSON node name as if you were working in the back code.

Once you have this set, you just need to create a condition to check if the status of the action was “Failed“. Pretty simple.

If you test the execution, you’ll see that the loop is working and iterating the actions batch that the “result” expression returns.

I’m just returning the action outputs in the error string, which will contain the StatusCode, Headers and Body. It should help to diagnose a possible error.
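Putting it together, a minimal sketch of that loop (Try_Scope and errorMessage are just my names; adjust them to yours):

  "For_each_result": {
    "type": "Foreach",
    "foreach": "@result('Try_Scope')",
    "actions": {
      "If_failed": {
        "type": "If",
        "expression": {
          "and": [
            { "equals": [ "@items('For_each_result')?['status']", "Failed" ] }
          ]
        },
        "actions": {
          "Append_error": {
            "type": "AppendToStringVariable",
            "inputs": {
              "name": "errorMessage",
              "value": "@{items('For_each_result')?['outputs']}"
            },
            "runAfter": {}
          }
        },
        "runAfter": {}
      }
    },
    "runAfter": {}
  }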

Let’s try the Filter array now.

The Filter Array action approach

Similar to the For Each, we need to iterate through the child actions of the scope. We use the same “result” expression pointing to the same scope as the “From” property and choose "item()?['status']" as the field to filter on. Also, we only want the failed actions, so that field should be equal to “Failed”.

As for the error message, it’s a bit different from the For Each type. We’re still picking up the Outputs but we need to get the first action from the Filter array action.

The end result should be the same, as we’re picking up the same info as with the For Each loop.
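A rough equivalent with the Filter Array action (again, Try_Scope and errorMessage are just my names):

  "Filter_failed_actions": {
    "type": "Query",
    "inputs": {
      "from": "@result('Try_Scope')",
      "where": "@equals(item()?['status'], 'Failed')"
    },
    "runAfter": {}
  },
  "Set_error_message": {
    "type": "SetVariable",
    "inputs": {
      "name": "errorMessage",
      "value": "@{first(body('Filter_failed_actions'))?['outputs']}"
    },
    "runAfter": {
      "Filter_failed_actions": [ "Succeeded" ]
    }
  }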

Usually, an action will return a JSON record as the result of its execution. There are some fields that will always be present, like “Status” and “Tracking ID”. There’s no easy way to find this info, so you have to deconstruct one or more actions to find it. With the information you have now, you can get it from anywhere, you just have to use the “Result” expression.

Here you can see some fields in the Set var action I created and how the status is recorded. For tracking purposes, the execution engine records the beginning and end timestamps as well as other useful data.
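For illustration only (the exact fields can vary with the action type, and these values are placeholders), a single entry returned by “result” looks more or less like this:

  {
    "name": "HTTP",
    "startTime": "2021-03-01T10:15:02Z",
    "endTime": "2021-03-01T10:15:03Z",
    "trackingId": "00000000-0000-0000-0000-000000000000",
    "status": "Failed",
    "outputs": {
      "statusCode": 500,
      "headers": {},
      "body": {}
    }
  }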

Now that you know how, it’s time to get working and make your Logic Apps sturdier, with proper error handling.

Happy coding!

Logic Apps: CSV Alphabetic sorting

Every day we learn new things, and when it comes to Logic Apps we tend to learn even more, because the platform is always shifting and new components are added. If we’re using ARM templates, the deployment brings out some challenges and, with it, new things to learn (and lots of cute little things that make you want to bang your head against a brick wall).

Usually when we work with a CSV file we tend to keep the sorting according to the specification. It isn’t always alphabetical nor descending/ascending.

Sometimes, it’s just a real mess but it makes sense to the client and to the application that is consuming it.

A few days ago, whilst working on a client project and after dozens of tests, we started to see errors in our CSV file, where the headers and columns were arranged in alphabetical order. This was not my intent; when I built the CSV array I wanted it to be in a certain order.

So why was this array now being sorted, who gave that command and how could I correct it?

Why and who:

As we dig into the Logic App code, we see that the Logic App is JSON at its core (my god, shocking development!). As such, it will follow JSON rules on sorting. If we set or append our variable with an object, even though that object won’t show up ordered in our code, it will when we deploy it to our Resource Group.

Let’s prove this.

First, we set up our LA in Visual Studio and initialize a string variable. Then we append two values to it (“Append to string variable”): one as a plain string and the other as an array.

[Screenshot: two “Append to string variable” actions on the csvString variable – “pure string” and “array” – both listing the values in the intended order: zulu, alpha, charlie, romeu, juliet.]

Let’s look at the back code.

"Append_to 
string_variable -_pure_string" : 
"type": 
"AppendToStringVariabIe" , 
"inputs": 
"name": "csvString" 
"value : 
"runAfter": 
"Initialize variable": 
"Succeeded" 
"Append_to 
string_variable - 
array" : 
"type": 
"AppendToStringVariabIe" , 
"inputs": 
'name": "csvString" 
"value": 
"Zulu": " 
"alpha"• " 
"chat-lie": , 
"juliet": " 
"runAfter": 
"Succeeded" 
-_pure_string" :

Looking good so far. Our strings are set and they’re in the order we want.

Let’s deploy it to our Resource Group and check again.

[Screenshot of the Portal designer after deployment: the “pure string” action still shows zulu, alpha, charlie, romeu, juliet, but the “array” action now shows the values sorted alphabetically – alpha, charlie, juliet, romeu, zulu.]

Well, there it is. In ARM deployment, if we write a JSON object, on deployment it gets sorted and will appear like this in the designer tool in Portal.

Funny thing is that if we change our object to the string we want, the designer will not recognize this as a change and doesn’t let you save.

[Screenshot: after reordering the values back, the designer does not detect any change and does not let you save.]

Even in Code View the changes are not recognized.

[Screenshot of the code view: the reordered values are not detected as a change here either.]

But if we add some other text to it, the changes are now recognized and the Portal allows us to save.

[Screenshot of the code view: after adding extra text, the change is detected and can be saved.]

But still, it won’t show you the changes and will still sort your CSV array, once again because it’s JSON.

A few weeks ago, this behavior wasn’t noticeable: I had a few Logic Apps in place with the string array in a specific order, and when deploying it didn’t get sorted.

I searched in Azure updates to see if anything was mentioned but nothing came up.

How to bypass this issue?

If you’re working with a CSV file like I was, after you build your array, you’ll need to build a CSV table.

The action “Create CSV table” will take care of this for you but, as we know, the output will not be in the format we need.

[Screenshot: an “Append to array variable” action (values Hzulu, Hromeu, Halpha, Hjuliet, Hcharlie), followed by a Parse JSON action and a “Create CSV table” action with From set to the parsed Body and Columns left on Automatic.]

(notice I’ve switched to array variable because I can’t parse the string in JSON)

So, leaving the Columns in automatic mode will mess up your integration as you can see. The output will be sorted and it won’t be what you want / need.

[Screenshot of the run details: with automatic columns, the CSV table output comes out alphabetically sorted – Halpha, Hcharlie, Hjuliet, Hromeu, Hzulu.]

What a mess!! This is nothing like we wanted.

We will need to manually define the column headers and the value each one is going to have.

[Screenshot: the “Create CSV table” action with the columns defined manually, mapping each header to its value.]
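In code view, the manually configured action ends up roughly like this (the header names follow my example; yours will differ):

  "Create_CSV_table": {
    "type": "Table",
    "inputs": {
      "from": "@body('Parse_JSON')",
      "format": "CSV",
      "columns": [
        { "header": "Hzulu", "value": "@item()?['Hzulu']" },
        { "header": "Hromeu", "value": "@item()?['Hromeu']" },
        { "header": "Halpha", "value": "@item()?['Halpha']" },
        { "header": "Hjuliet", "value": "@item()?['Hjuliet']" },
        { "header": "Hcharlie", "value": "@item()?['Hcharlie']" }
      ]
    },
    "runAfter": {
      "Parse_JSON": [ "Succeeded" ]
    }
  }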

If you don’t have many fields, it’s quick to do this, but when you have lots of fields… well, let’s just say I hope you have plenty of time and don’t lose focus.

[Screenshot of the run details: with manual columns, the output keeps the intended order – Hzulu, Hromeu, Halpha, Hjuliet, Hcharlie.]

And there we have it. Fields are now displayed correctly, the data is in the right place and we’ve managed to get around this annoying problem.

Happy coding!

Introducing a new brand and Azure Functions: Moving from Azure Portal to VS

Before we start with the actual post, I’d like to introduce a new brand we will be developing. This is the first official post with the “It’s not Rocket Science!” brand.

With this concept, we intend to explain how some procedures are not as complicated as you may think and that they’re not Rocket science. I’m open to suggestions on posts to demystify Azure Logic Apps and other Azure services.

So, welcome to a new concept and I hope you find it useful.

Azure Functions: Moving from Azure Portal to VS

Creating Functions in the Portal presents some challenges, like the lack of IntelliSense, which, as we know, helps a lot. Not having CI/CD is also a concern, and then there’s one of the worst-case scenarios:

Someone deleting your function by mistake!

Panic! All hell broke loose. I tried to apply the same method as I did with the Logic Apps accidental delete recovery, but the “Change history” tab isn’t available for Functions. You end up losing your code, your executions and maybe some sleep over this.

The obvious next step is to get your backup from your repository and re-create the function. IF you have it in a repo.

We can’t prevent someone from deleting our resources; everyone makes mistakes. But you can avoid letting your code sit in the Azure Portal without version control, CI/CD and repository control.

So, the best way to do this is to migrate it to a VS solution. In this post I will use VS2019, but VS Code and other IDEs are also available. You can also do this right from the start, by choosing another development environment besides the Portal.

With an existing Function, you’ll need a few things before you can migrate it. Besides your VS with an active subscription, you will also need the Azure SDK feature. This can be installed using the installer you’ve downloaded from Microsoft or in VS itself, in the “Tools” -> “Get Tools and Features” menu.

After a fresh install of Windows, I didn’t have the Azure SDK installed, so I had to download and install it. According to the setup, it should take about 6.5 GB.

If you had to install the SDK, Updates are also important, keep that in mind.

Let’s return to the code.

The good thing about VS is that you don’t need to reference the required DLLs, they will probably already be there.

So you need to create a new Solution, with an Azure Function project.

Small note: you can also create a project using VB instead of C#, but… VB…

You can either choose a template with a trigger already created or you can choose an empty project, but when you try to Add a function, it will ask you again what template you want to use.

You can also add an empty class, but you will be missing some references.

As you can see, there are quite a few differences between the Class and the Function template.

I recommend you use the same template as the one you used to build your function, you will get more references that will be necessary for your code to work.

And now comes the easiest part. You can just copy-paste your code into VS, but leave the usings out. Most likely, if you haven’t used anything outstanding, they will already be there.

Now you have your code in VS, ready to run. Is it working properly? Well, you can just debug it locally: when you press F5, the local Azure emulator will start and you will be able to test your function as if it were in the cloud.
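If you need to tweak the settings for those local runs, they live in local.settings.json; a bare-bones example (these are the usual local-development defaults, not values from any real function) looks like this:

  {
    "IsEncrypted": false,
    "Values": {
      "AzureWebJobsStorage": "UseDevelopmentStorage=true",
      "FUNCTIONS_WORKER_RUNTIME": "dotnet"
    }
  }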

You can just use your Postman or another web request tool to test your project.

Once you’ve tested everything and are ready to deploy to your subscription, you’ll need to configure the publishing profile.

You have two major ways of doing this. You can choose the blue pill and configure your connection by hand or you take the red pill and download the profile from the existing FA.

My advice, get the publish profile. Saves you time and it’s Plug’n’Play.

And that’s it. Your newly migrated function is now ready for your CI/CD and you can manage and version it with VS and DevOps.

Now, this does have a catch, or not, depending on how you look at it. Because we deploy using a Zip file, the code is no longer available in the Portal; you must now always use VS to view it.

I like this, because it means that, from a security perspective, no one will be accessing the source code through the Portal, and it forces new devs, and old ones too, to join the best-practices policy and have everything in a proper repo and version controlled.

Revisiting “The Get-AzLogicApp command was not found”?

Last week, when preparing for a deployment, I bumped into this error again. As in my previous post, it could easily be fixed with a PowerShell module install.

You can review it here: https://www.linkedin.com/pulse/get-azlogicapp-command-found-pedro-almeida/?trackingId=LYhay%2BdvR7ieQ0XnCy9q0Q%3D%3D

But this time, the script was already fixed. Nothing had changed, as far as I knew. The last build and publish was in September, no errors there.

So what happened that killed my build?

It shouldn’t have been the Az module updates, because we were forcing the version. Still, there had been several updates, so could this be it?

I tried to force a newer version, like 2.0.0, but the command still failed to execute. I even restricted the script to use only the command I needed, the one that gets the CallbackUrl.

After a few other failures, my thinking was, “this can’t be the problem, the script was executing without issues, so it has to be something else.”

So I took another look at my pipeline. It was the same as before… Azure Powershell task to remove AzureRM, install Az module and Azure CLI task to execute the scri… wait!

Could this be the problem?

I switched the tasks and re-queued the pipeline.

And success! No more errors.

Azure CLI has received some updates in the past weeks: the build I had before was 2.0.16 (Core 2.11.0), compared to the 2.1.0 (Core 2.18.0) that was running in these failed pipeline runs.

I looked into the Azure CLI release notes, but found nothing referring to the Logic App commands or Az.LogicApps.

This time, I can’t find a proper explanation for this error, but I’m suspecting some update broke the ability to run these Az module commands with Azure CLI or the Az.LogicApps commands specifically.

Controlling the initial state of a Logic App

Sometimes I have the need to have a Logic App (LA) disabled when I deploy it. For instance, when deploying to Production, I like to have my LAs disabled, because I want to double check everything before starting the process.

This is helpful because usually, when using the “Recurrence” trigger, the LA will start immediately. If for some reason, a connector has the wrong configuration or is broken or the end system is offline, the execution will fail. Other scenarios can happen as well, but that’s another story.

An interesting fact is that you don’t have a proper way to control this in the Portal. You can add the control line to the code, but you won’t be able to control it with CI/CD.

So, in comes Rocket science (or not).

The resource code contains a property that will allow you to control the state of a LA and it’s quite easy to set. If you do not specify this property, the LA will start enabled and will trigger if it can.

The property is called “state” and lives within the “properties” node. Setting this property through a template parameter allows you to prepare your CI/CD pipeline and to parameterize it in your release.
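A minimal sketch of the idea in the ARM template (logicAppState is just the parameter name I chose, and the workflow definition is omitted):

  {
    "type": "Microsoft.Logic/workflows",
    "apiVersion": "2017-07-01",
    "name": "[parameters('logicAppName')]",
    "location": "[resourceGroup().location]",
    "properties": {
      "state": "[parameters('logicAppState')]",
      "definition": { }
    }
  }

With logicAppState declared as an ARM parameter that only allows “Enabled” or “Disabled”, you can then set it per environment in your release.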

This is quite an easy and simple insert that should take no more than 5 minutes to configure.

If you choose the “Disabled” state, the LA will not start unless you specifically activate it.

Happy coding!

Logic Apps: Moving from Azure Portal to Visual Studio

Most developers start working with Logic Apps through the Azure Portal, because it’s fast and direct. You just open the Portal, create your resource and start working. This is fine, but it comes with a cost: there are several limitations to what you can do, especially when it comes to CI/CD (Continuous Integration/Continuous Delivery).

To handle this, there is the need to move to Visual Studio and start working from there. For this to happen, you need tools to help you, and there are a few available. In this post, I will cover a very good one and how to use it.

This tool is a collection of PowerShell cmdlets that download your Logic App code to a file and can also create a parameters file.

The creator of this collection is Jeff Hollan, PM Lead for Microsoft. You can check his work at his GitHub repo. https://github.com/jeffhollan

The project that we’re going to use is LogicAppTemplateCreator. It’s a C# project that creates a DLL and that we will import and use.

https://github.com/jeffhollan/LogicAppTemplateCreator

Let’s begin our process. After cloning the solution and rebuilding it, the DLL will be in the usual folder ($sourcefolder/bin/debug/).

After the solution is built, open Powershell and import this DLL, using the following command:

 Import-Module C:\{pathToSolution}\LogicAppTemplateCreator\LogicAppTemplate\bin\Debug\LogicAppTemplate.dll

I dropped it into a shared folder, because I’ve referenced it in a repo for other people to consume in our projects. This is not necessary, although I recommend it so that it becomes easier for future developers in your company.

After executing this, you’ll be ready to download your ARM template. So, get your Resource Group, your Subscription ID and prepare an output folder. You will also need to enter your credentials to log in through PowerShell.

The script should be changed according to your IDs; do note that they should not be case sensitive. If you don’t set the Out-File, the output will be written to the PowerShell console; you’ll still be able to copy it and paste it into a file, but it’s an unnecessary step.

armclient token {subscriptionID} | Get-LogicAppTemplate -LogicApp {LogicAppName} -ResourceGroup {ResourceGroup} -SubscriptionId {subscriptionID} | Out-File {ARMTemplateOutput}

After the ARM template is created, a parameter template file can be generated by running the following script:

get-ParameterTemplate -GenerateExpression True -TemplateFile {ARMTemplateOutput} | Out-File {ParameterTemplateOutput}

You will now have both files you need to manage your Logic App, so just copy them to your VS Azure Resource Group project, with Logic Apps Template and you’re almost ready to go.

You will need to address the path links in the JSON code to make them CI/CD-able and fix the parameters in some connectors, but there’s not a lot of work to be done.

As you can see, the ARM template already provides all connection parameters and connection variables. I do recommend you change them to a more appropriate naming convention, like “arm_O365”, “arm_SQL” or “arm_ServiceBus”. This way you will know what each one refers to, with a very understandable pattern.
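Something along these lines, sketched from memory (only the naming convention matters here; the default values are placeholders):

  "parameters": {
    "arm_O365": {
      "type": "string",
      "defaultValue": "office365"
    },
    "arm_ServiceBus": {
      "type": "string",
      "defaultValue": "servicebus"
    }
  }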

At the end of the day, your Logic App should be ready for deployment in your subscriptions and look something like this:

Happy coding!
