As part of the Intergalactic Automation Summit 2022, an online event organized by the Power Community taking place between the 4th and 6th of February 2022, three bootcamps will be held:
4th Feb – Power Automate Bootcamp
5th Feb – Azure Integration Bootcamp
6th Feb – Power Platform ALM DevOps
All of these events are free! You can register here.
I chose to submit a session to the Global Automation Bootcamp, and I'm honored to have been accepted as a guest speaker with a session on How to monitor your integration solutions with Automation Account. My session will take place at 05:00 PM GMT/UTC.
How to monitor your integration solutions with Automation Account
In this session, we will address how you can monitor your integration solutions using an Azure Automation Account running PowerShell Runbooks, together with Logic Apps, to notify you of inconsistencies in your solutions. For that reason, I would like to invite you to join me at the Global Automation Bootcamp virtual event on Friday, February 4, 2022.
Session name: How to monitor your integration solutions with Automation Account
Abstract: In this session, we will address how you can monitor your integration solutions using an Azure Automation Account running PowerShell Runbooks, together with Logic Apps, to notify you of inconsistencies in your solutions.
Join us and reserve your spot at the Global Automation Bootcamp virtual event on Friday, February 4, 2022. It is free!
Today, we are going over another real scenario, this time from one of our PowerBI Robots clients. For those unfamiliar with it, PowerBI Robots is part of DevScope's suite of products for Microsoft Power BI. It automatically takes high-resolution screenshots of your reports and dashboards and sends them anywhere to an unlimited number of recipients (any users and any devices), regardless of whether they are in your organization or even have a Power BI account.
Challenge
The COVID-19 pandemic made remote work widespread, and one of our PowerBI Robots clients asked us for a way to start receiving high-resolution screenshots of their reports and dashboards. On top of the devices at the client's facilities (mainly TVs), these screenshots should also be available on a Microsoft Teams channel where they could be seen by all users with access to it. PowerBI Robots allows users to “share” high-resolution screenshots of Power BI reports and dashboards in many ways, but it didn't have this capability out-of-the-box, so we proactively introduced it using Azure Integration Services.
This proof-of-concept will explain how you can extend the product's features by making use of PowerBI Robots' out-of-the-box ability to send a JSON message to an HTTP endpoint and then using Azure Integration Services such as Azure Blob Storage, Azure File Storage, Logic Apps, or even Power Platform features like Power Automate to share these report or dashboard images on platforms like Teams, SharePoint, or virtually anywhere.
Create Blob Storage
In theory, we could send an image in base64 directly to Teams, but the problem is that messages on Teams have a size limit of approximately 28KB. This encompasses all HTML elements such as text, images, links, tables, mentions, and so on. If the message exceeds 28KB, the action will fail with an error stating: “Request Entity too large”.
To bypass this limitation, we have to use an additional Azure component to store the Power BI report images provided by PowerBI Robots. To do that, we can choose from resources such as:
Azure Blob Storage: Azure Blob storage is a feature of Microsoft Azure. It allows users to store large amounts of unstructured data on Microsoft’s data storage platform. In this case, Blob stands for Binary Large Object, which includes objects such as images and multimedia files.
Azure File Storage: Azure Files is a fully managed cloud file-share service. It is based on the Server Message Block (SMB) protocol and enables you to access files remotely or on-premises via API through encrypted communications.
Or even a SharePoint library, where you can store images and many other types of files.
For this POC, we chose Blob storage for its simplicity and low cost.
To start, let’s explain the structure of Azure Blob storage. It has three types of resources:
The storage Account
A container in the storage account
A blob
If you don’t have a Storage Account yet, the first step is to create one, and for that, you need to:
From the Azure portal menu or the Home page, select Create a resource.
On the Create a resource page, search for Storage account, select Storage account from the list, and click Create.
On the Create a storage account Basics page, you should provide the essential information for your storage account. After you complete the Basics tab, you can choose to further customize your new storage account by setting options on the other tabs, or you can select Review + create to accept the default options and proceed to validate and create the account (if you prefer scripting, a PowerShell alternative is sketched after the settings list below):
Project details
Subscription: Select the subscription under which this new storage account will be created.
Resource Group: Select an existing Resource Group or create a new one in which your storage account will be created.
Instance details
Storage account name: Choose a unique name for your storage account.
Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only.
Region: Choose a region near you or near other services your functions access.
Note: Not all regions are supported for all types of storage accounts or redundancy configurations
Performance: Select Standard or Premium.
Standard performance is for general-purpose v2 storage accounts (the default). Microsoft recommends this account type for most scenarios.
Select Premium for scenarios that require low latency.
Redundancy: Select your desired redundancy configuration.
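If you prefer scripting over the portal, the same storage account can be created with Azure PowerShell. A minimal sketch, assuming the Az module is installed; the resource group name and location are illustrative, and the account name is the one used later in this POC:

# Create a general-purpose v2 storage account (Standard performance, locally-redundant storage).
# The account name must be 3-24 characters long and contain only lowercase letters and numbers.
Connect-AzAccount
New-AzResourceGroup -Name 'rg-powerbirobots-demo' -Location 'westeurope'
New-AzStorageAccount -ResourceGroupName 'rg-powerbirobots-demo' `
    -Name 'dvspocproductsstracc' `
    -Location 'westeurope' `
    -SkuName 'Standard_LRS' `
    -Kind 'StorageV2'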
Now that we have the storage account created, we need to create our blob container. You can follow the steps below or use the PowerShell sketch that comes after them:
In the left menu for the storage account, scroll to the Data storage section, then select Containers.
On the Containers page, click the + Container button.
From the New Container window:
Enter a name for your new container. You can use numbers, lowercase letters, and dash (-) characters.
Set the public access level to Blob (anonymous read access for blobs only).
Blobs within the container can be read by anonymous request, but container data is not available. Anonymous clients cannot enumerate the blobs within the container.
Click Create to create the container.
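The same container can also be created with Azure PowerShell. A minimal sketch, reusing the storage account created above and the container name used in this POC:

# Get the storage account context and create a container with blob-level public read access
$ctx = (Get-AzStorageAccount -ResourceGroupName 'rg-powerbirobots-demo' -Name 'dvspocproductsstracc').Context
New-AzStorageContainer -Name 'robots-reports' -Context $ctx -Permission Blob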
Create a Logic App
PowerBI Robots is capable of sending a JSON request with all the information regarding a configured playlist:
To receive and process requests from PowerBI Robots, we decided to create a Logic App. Azure Logic Apps is a cloud-based platform for creating and running automated workflows that integrate your apps, data, services, and systems. To simplify the solution, we will also use the Azure Portal to create the Logic App.
From the Azure portal menu or the Home page, select Create a resource.
In the Create a resource page, select Integration > Logic App.
On the Create Logic App Basics page, use the following Logic App settings:
Subscription: Select the subscription under which this new Logic App is created.
Resource Group: Select an existing Resource Group or create a new one in which your Logic app will be created.
Type: The logic app resource type and billing model for your resource. In this case, we will be using Consumption.
Consumption: This logic app resource type runs in global, multi-tenant Azure Logic Apps and uses the Consumption billing model.
Standard: This logic app resource type runs in single-tenant Azure Logic Apps and uses the Standard billing model.
Logic App name: Your Logic App resource name. The name must be unique across regions.
Region: The Azure datacenter region where your app's information is stored. Choose a region near you or near other services your Logic App accesses.
Enable log analytics: Change this option only when you want to enable diagnostic logging. The default value is No.
When you’re ready, select Review + Create. Then, on the validation page, confirm the details you provided, and select Create.
After Azure successfully deploys your app, select Go to resource. Or, find and choose your Logic App resource by typing the name in the Azure search box.
Under Templates, select Blank Logic App. After selecting the template, the designer now shows an empty workflow surface.
In the workflow designer, under the search box, select Built-In. Then, from the Triggers list, select the Request trigger, When a HTTP request is received.
To tokenize the values of the message we are receiving from PowerBI Robots, we can, on the Request trigger, click Use sample payload to generate schema,
copy the JSON message provided earlier into the Enter or paste a sample JSON payload window, and then click Done.
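Once you save the workflow, the Request trigger exposes an HTTP POST URL that PowerBI Robots will call. If you want to test the trigger before wiring up the product, something along these lines works; note that both the URL and the payload fields below are purely illustrative, since the real schema is the one you pasted from PowerBI Robots:

# Send a hypothetical test payload to the Logic App trigger URL (copy the real URL from the Request trigger)
$uri  = 'https://prod-00.westeurope.logic.azure.com/workflows/<id>/triggers/manual/paths/invoke?...'
$body = @{
    playlist = 'Daily Sales'
    reports  = @(
        @{ name = 'Sales Overview'; image = '<base64 screenshot>' }
    )
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post -Uri $uri -Body $body -ContentType 'application/json'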
Under the Request trigger, select New step.
In the search box, enter Variables, select Variables from the result panel, choose the Initialize variable action, and provide the following information:
Name: varDateTime
Type: String
Value: Select Expression and add the following expression: formatDateTime(utcNow(), 'yyyy-MM-dd HH:mm')
Note: this variable will be used later in the business process to present the date in a clear format in the message sent to the Teams channel.
Select New step again.
In the search box, enter Variables, select Variables from the result panel, choose the Initialize variable action, and provide the following information:
Name: varHTMLBody
Type: String
Value: (Empty)
Note: this variable will be used later in the business process to dynamically generate the message to be sent to the Teams channel in an HTML format.
Select New step. In the search box, enter Blob, select Azure Blob Storage from the result panel, and choose the Create blob (V2) action.
If you don't have a connection created yet, you first need to create one by setting the following configurations and then clicking Create:
Connection name: Display connection name
Authentication type: the connector supports a variety of authentication types. In this POC, we will be using Access Key.
Azure Storage Account name: Name of the storage account we created above. We will be using dvspocproductsstracc.
Azure Storage Account Access Key: Specify a valid primary/secondary storage account access key. You can get these values from the Access keys option under the Security + networking section of your storage account.
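As a side note, the same access keys can be retrieved with Azure PowerShell, which is handy if you are scripting the setup (the names match the ones used above):

# List the primary and secondary access keys for the storage account
Get-AzStorageAccountKey -ResourceGroupName 'rg-powerbirobots-demo' -Name 'dvspocproductsstracc'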
Then provide the following information:
Storage account name: Select the storage account from the dropdown list. The default should be Use connection settings (dvspocproductsstracc).
Folder path: navigate to the folder /robots-reports
Blob name: Dynamically set the name of the file to be created. To avoid overlaps, we decided to use the unique workflow run id of the message as part of the name of the report we receive in the source message:
Blob content: the Base64 content we receive in the source message.
Note: setting the name or the content on the Create blob action will automatically add a For each loop to our business flow, since these fields can occur multiple times inside the source message. This is correct and exactly what we want.
Select New step. In the search box, enter Variables, select Variables from the result panel, and choose the Set variable action to fill varHTMLBody with the HTML message to be posted to Teams:
And finally, select New step. In the search box, enter Teams, select Microsoft Teams from the result panel, choose the Post message in a chat or channel action, and provide the following information:
Post as: Select User
Post in: Select Channel
Team: Select the Team, in our case PowerBI Robots Webhooks
Channel: Select the Team channel, in our case General
Message: place the message we created above by using the varHTMLBody variable
Note: if you haven't created a Teams connection yet, you need to sign in using the account that will be posting these notifications.
As a result, once we receive a new request from PowerBI Robots, a nicely formatted message will be posted on Teams with a thumbnail of the report:
You can click on it and see it in full size:
More About PowerBI Robots?
PowerBI Robots automatically takes screenshots of your Microsoft Power BI dashboards and reports and sends them anywhere, to an unlimited number of recipients. Simply tell PowerBI when and where you want your BI data, and it will take care of delivering it on time.
Monitoring a BizTalk Server environment can sometimes be a complex task due to the infrastructure and complexity layers behind the BizTalk Server. Apart from that, the administrator teams need to monitor all the applications deployed to the environment.
Ideally, the administration team should use all the monitoring tools at their disposal, including the ones that ship with the product, such as the BizTalk Server Administration console, Event Viewer, HAT, or BAM. But the main problem with these tools is that:
They require manual intervention.
Almost all of them require remote access to the environment.
Having an administrator manually check each server or application for events that may have occurred is not an efficient or effective way to allocate the team's time, nor to monitor the environment.
Of course, they can also use other monitoring tools from Microsoft, such as Microsoft System Center Operations Manager (SCOM), or third-party monitoring solutions such as BizTalk360. These tools should be able to read events from all layers of the infrastructure and help the administration team take preventive measures, notifying them when a particular incident is about to happen, for example, when the free space of a hard drive drops below 10%. Furthermore, they should allow the automation of operations when a specific event occurs, for example, restarting a service when its memory usage exceeds 200MB, thereby preventing incidents or failures without requiring human intervention.
But the question is: what if you don't have these tools?
You can achieve these tasks in several ways. Many people create custom web portals to emulate some of the most basic tasks of the admin console. One of my favorite options is using a mix of PowerShell, scheduled tasks, and/or Azure services like Logic Apps and Functions. But today I will show you a different, alternative way:
Create a Windows Service to monitor suspended Instances and automatically terminate them
Note: of course, this solution can be expanded to cover other scenarios or to add new functionalities.
BizTalk Monitor Suspend Instance Terminator Service
This is a Windows Service that will continually monitor BizTalk Server for specific suspended messages (with an interval of x seconds/minutes/hours defined in code) and terminate them automatically.
This tool allows you to configure:
The type of suspended messages you want to terminate
Terminate without saving the messages, or save them to a specific folder before terminating them.
These configurations are made in the app.config file of the service:
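Regardless of how you package it (Windows Service, scheduled task, or plain script), the core check boils down to a WMI query against the BizTalk provider. Below is a minimal PowerShell sketch of that same logic, assuming the standard root\MicrosoftBizTalkServer namespace; the actual Windows Service is written in .NET, so take this purely as an illustration of the query it performs:

# Find suspended service instances (resumable = 4, non-resumable = 32) and terminate them
$suspended = Get-WmiObject -Namespace 'root\MicrosoftBizTalkServer' `
    -Class 'MSBTS_ServiceInstance' `
    -Filter 'ServiceStatus = 4 or ServiceStatus = 32'

foreach ($instance in $suspended) {
    Write-Host ("Terminating instance {0} of service {1}" -f $instance.InstanceID, $instance.ServiceName)
    # Optionally save the message bodies to a folder here before terminating
    $instance.Terminate() | Out-Null
}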
This is a topic I have been asked about a few times, which made me wonder how hard it actually is. Working with this nearly every day makes us assume some things are very easy, but not everyone has this insight.
So, how exactly do we set variables for different environments, and how does it work when we want to replace tokens?
Variables for different environments
Having multiple environments creates the need to have different values assigned to your variables, because, for example, that Test Webservice won’t work in PROD and you definitely don’t want to use that PROD file share and delete files in your DEV/Test environment.
Using Pipeline Variables helps you to set different values to different Stages.
This is extremely helpful because, even though you have to duplicate/triplicate variables, you won't need to worry about the incorrect value going to the wrong stage. Also, if you set the Scope to Release, the value will apply to all stages.
So, it’s a win-win situation.
But! It's only valid for this specific Release Pipeline. If you have another Release and some variables are common, you have to re-do everything… all, over, again.
Send in the Variable Groups!
Variable Groups
The Variable Groups are containers for variables that can be used in multiple Releases and Pipelines. Think of it as a common class in your project that you can reference anywhere.
You can define the Groups and their variables in the Library. Inside a group, you can set all the variables you need, add more at any time, and assign the values right away.
Keep in mind that a group is meant to be fairly static; it's not supposed to change often.
If you change a variable value or add a new one, it will not be considered in the already created releases. If anything changes in here, you will need to create new releases (not the pipelines) and redeploy them. When you create the release, it takes a snapshot of the values and uses them as they are. Thus the need to create a new one to get those new values.
After linking the group to the Release, you will see that you can also set a Scope. This works exactly like the pipeline variables, they will only be used in that specific Stage and nowhere else.
Also, when expanded, you can see the values that are set for that group.
Now, how does the Token Replacement task work with this?
Replace Tokens
This task, our savior (yes, I like it very very much), comes to our rescue once again.
I’ve explained before how to use it and how it works.
But for this post, I’ll explain again. The task searches in the folders/files you’ve defined and tries to match the token that you’re setting in the definition with the one in the file(s). As the token is found, it uses a string.Replace function to inject the values in the files.
It will scour the Variables for a match and take the value to insert in the file.
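Conceptually, the task does something like the following. This is a rough PowerShell sketch of the idea, not the extension's actual code, assuming tokens written as __VariableName__ and pipeline variables exposed to the script as environment variables:

# Replace __TokenName__ markers in every matching file with the value of the variable of the same name
Get-ChildItem -Path '.\drop\LogicApps' -Filter '*.json' -Recurse | ForEach-Object {
    $content = Get-Content -Path $_.FullName -Raw
    $content = [regex]::Replace($content, '__(\w+)__', {
        param($match)
        $value = [Environment]::GetEnvironmentVariable($match.Groups[1].Value)
        if ($null -ne $value) { $value } else { $match.Value }   # leave unknown tokens untouched
    })
    Set-Content -Path $_.FullName -Value $content
}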
But how does this link with the Variable Groups?
Well, at runtime, DevOps does a magical thing and sees the groups you’ve defined for a Stage as variables. So technically, it’s as if you’ve defined all the variables in one place and not in groups.
Pretty sweet, right?
So, the Replace Tokens will use all those variables and will try to replace them in your files. You don’t have to define the group or anything, it will just see the whole picture.
Hope this helps you with your automations and deployments.
We have finally reached the last part of this small blog series on monitoring the status of your Azure API Connections. We started by using a simple PowerShell script locally on our machine and then progressed to an automated approach using Azure Function Apps and Logic Apps. I mentioned in my last post that this previous option had a considerable handicap in terms of cost, since we couldn't use the Consumption plan and instead had to use an App Service plan.
Today we will address what is, in my personal opinion, the best solution:
Using a scheduled PowerShell Runbook in an Automation Account to check the Azure API Connection status
And once again, using a Logic App, this time with an HTTP When a HTTP request is received trigger, to notify the internal support team if any findings (broken API Connections) are detected.
Note: the Logic App will only be triggered if the Runbook detects/finds any non-coherent situations.
Solution 3: Using Automation Account and Logic App
Create Automation Account
The first step, if you don't have an Automation Account yet, is to create one, and for that, you need to:
From the Azure portal menu or the Home page, select Create a resource.
In the Create a resource page, select IT & Management Tools > Automation.
On the Create an Automation Account Basics page, use the following settings:
Subscription: Select the subscription under which this new Automation Account will be created.
Resource Group: Select an existing Resource Group or create a new one in which your Automation Account will be created.
Automation account name: Name that identifies your new Automation Account.
Region: Choose a region near you or near other services your Automation Account accesses.
You can customize the other options according to your needs or leave the default values. For this demo, we will now select Review + create to review the configuration selections.
On the Review + create page, review your settings, and then select Create to provision and deploy the Automation Account.
Create Automation PowerShell runbook
The next step is to create a PowerShell runbook. For that, you need to:
From the left menu of the Automation Account window, select Runbooks, then select Create a runbook from the top menu.
From the Create a runbook window, use the following settings:
Name: Name the runbook
Runbook type: From the Runbook type drop-down menu, select PowerShell.
Runtime version: From the Runtime version drop-down menu, select 7.1 (preview).
Description: Provide a description for this runbook (not a mandatory field)
Finally, if everything works properly, you can publish the runbook.
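The runbook itself can be as simple as the sketch below. This is a minimal illustration, assuming the Automation Account has a system-assigned managed identity with Reader access on the subscription and that broken connections are posted to the Logic App HTTP trigger created later in this post; the URL and the payload shape are illustrative:

# Authenticate with the Automation Account's managed identity
Connect-AzAccount -Identity | Out-Null

# Get all API Connections in the subscription, including their properties
$connections = Get-AzResource -ResourceType 'Microsoft.Web/connections' -ExpandProperties

# Keep only the ones whose reported status is not 'Connected'
$broken = $connections | Where-Object {
    $_.Properties.statuses[0].status -ne 'Connected'
}

if ($broken) {
    # Build a simple payload and notify the Logic App (paste the real trigger URL here)
    $logicAppUri = 'https://prod-00.westeurope.logic.azure.com/workflows/<id>/triggers/manual/paths/invoke?...'
    $payload = $broken | Select-Object Name, ResourceGroupName,
        @{ Name = 'Status'; Expression = { $_.Properties.statuses[0].status } }

    Invoke-RestMethod -Method Post -Uri $logicAppUri `
        -Body ($payload | ConvertTo-Json -Depth 5) `
        -ContentType 'application/json'
}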
Now we need to schedule the runbook. For that, we need:
From the left menu of the Automation Account window, select Schedules, then select Add a schedule from the top menu.
From the New Schedule window, use the following settings:
Name: Name of the Schedule
Description: Provide a description for this schedule (not a mandatory field)
Starts: Datetime to start the schedule
Time zone: Time zone configured for this schedule, in my case Portugal – Western European Time
Recurrence: Select whether the schedule runs once or on a recurring schedule by selecting Once or Recurring. We are going to use Recurring.
If you select Once, specify a start time and then select Create.
If you select Recurring, specify a start time.
Recur every: select how often you want the runbook to repeat (by hour, day, week, or month). In our case, once per day.
Set expiration: Leave the default property, No.
When you’re finished, select Create.
Now that we have our runbook and our schedule created, we need to bind these two, and for that, we need to:
Access the runbook we created above, and on the runbook page select Link to schedule.
On the Schedule Runbook page, select Link a schedule to your runbook.
On the Schedule page, select the schedule we created above from the schedule list
And then select OK.
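If you prefer to script the schedule and the link instead of using the portal, the same can be done with the Az.Automation cmdlets. A minimal sketch, with illustrative resource names:

# Create a daily schedule and link it to the runbook
New-AzAutomationSchedule -ResourceGroupName 'rg-monitoring' `
    -AutomationAccountName 'aa-apiconnections' `
    -Name 'CheckAPIConnectionsDaily' `
    -StartTime (Get-Date).AddHours(1) `
    -DayInterval 1 `
    -TimeZone 'Europe/Lisbon'

Register-AzAutomationScheduledRunbook -ResourceGroupName 'rg-monitoring' `
    -AutomationAccountName 'aa-apiconnections' `
    -RunbookName 'CheckAPIConnections' `
    -ScheduleName 'CheckAPIConnectionsDaily'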
Create a Logic App
Finally, we need to create a Logic App with an HTTP When a HTTP request is received trigger to notify the support team if any API Connection is broken. To simplify the solution, we will also use the Azure Portal to create the Logic App.
Note: once again, the Logic App will only be triggered if the Runbook detects/finds any non-coherent situations.
To accomplish that, we need to:
From the Azure portal menu or the Home page, select Create a resource.
In the Create a resource page, select Integration > Logic App.
On the Create Logic App Basics page, use the following Logic app settings:
Subscription: Select the subscription under which this new Logic app is created.
Resource Group: Select an existing Resource Group or create a new one in which your Logic app will be created.
Type: The logic app resource type and billing model to use for your resource. In this case, we will be using Consumption.
Consumption: This logic app resource type runs in global, multi-tenant Azure Logic Apps and uses the Consumption billing model.
Standard: This logic app resource type runs in single-tenant Azure Logic Apps and uses the Standard billing model.
Logic App name: Your logic app resource name, which must be unique across regions.
Region: The Azure datacenter region where your app's information is stored. Choose a region near you or near other services your Logic App accesses.
Enable log analytics: Change this option only when you want to enable diagnostic logging. The default value is No.
When you’re ready, select Review + Create. On the validation page, confirm the details that you provided, and select Create.
After Azure successfully deploys your app, select Go to resource. Or, find and select your logic app resource by typing the name in the Azure search box.
Under Templates, select Blank Logic App. After you select the template, the designer now shows an empty workflow surface.
In the workflow designer, under the search box, select Built-In. From the Triggers list, select the Request connector, and the When a HTTP request is received trigger.
Use the following sample payload to generate the schema
Then we will be using the following actions to notify the support team:
Choose an Azure function: I'm calling an Azure Function to transform the list of broken API Connections into an HTML table.
Set variable: I'm setting varEmailBody with my default HTML email body template and appending the HTML table that the Azure Function returned.
Send an email (v2) – Office 365 Outlook: To send the email to the support team
The result, once you try to execute the Logic App, will be a fancy HTML email:
Although this approach required learning a bit about Azure Automation, it was quite simple, and for me, this is the best approach in terms of cost and architecture design.
In the previous posts of this series, we've talked about how to build and prepare your Logic App for CI/CD. In this last post, I'll show you how to build your Azure Pipeline, making it ready for any environment you need.
If you’ve missed the other posts, here are the links for them:
Assuming you already have your repo configured, building the pipeline is fairly simple and quick. I'm not a big fan of using YAML; I find it easier to use the classic editor, as having the GUI is more appealing to me.
Having your repo in place and all the code ready, you need to create the Pipeline.
As such, you need to choose the classic editor (or venture yourself in YAML) and select your repo and branch.
The available templates are helpful, but if you're just trying to deploy Logic Apps, I'd suggest you start with an empty job; otherwise, you might end up with actions that are not necessary and you'll have to delete them.
The first thing we're going to do is configure the pipeline for continuous integration. It doesn't take much to achieve this; you just need to activate the needed triggers. By default, it will filter to your main branch, but you can change this and trigger for specific projects and branches. This comes in handy when you have multiple projects and you only want to include some in the build.
After enabling the triggers, you'll need to add the required tasks to get your pipeline going. You might be getting a few secrets from Key Vault; if that's the case, do remember to add the Azure Key Vault task. This will pull either all the secrets or the filtered ones you've selected, keeping them in cache for the pipeline execution. They will be used in the Replace Tokens task, which I'll discuss a bit down the road.
As you can see, it doesn’t take many tasks to have a functional pipeline, ready to deploy your Logic App to the CI environment.
The required tasks are:
Visual Studio build – to build your solution, obviously
Copy files – which will copy the desired files over to a folder in the Drop
Publish build artifacts – makes the drop available to use in the pipeline and the release
Replace Tokens – a very handy tool that allows you to replace your tokens with the variables or group variables values
ARM template deployment
The Copy files task is very simple and easy to use. You take the input folder and copy the files you want/need to the target folder. Easy-peasy-lemon-squeezy.
I'd advise you to set the Target Folder to a named one; when you're building the Release, it will be easier to find what you need if you divide your assets by name.
After copying the files, we will replace the tokens. How does this work?
Simply put, the task collects all the variables in memory and searches for the token pattern in all the target files. Given that we wrote our parameters with the __ … __ token, other tokens used in the files should not be affected. This is by far, in my opinion, the most helpful task in multi-environment deployment. It removes the need to have multiple files per environment and tons of variables.
With the files copied and the tokens replaced, our Logic App is ready for deployment in the CI environment. Now, this is not mandatory; you might not want to deploy your LA from the pipeline and use the Release instead. That is fine, you just need to move the ARM deployment tasks to the Release; it will not affect the outcome nor the pipeline.
As you can see, after selecting the Azure details (Subscription, RG, Location, etc.), it becomes easy to select your LA to deploy. Since we used the LogicApps folder, we just need to reference the JSON files and the task will pick them up from the drop folder and deploy them.
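For reference, what the ARM template deployment task ends up doing is roughly equivalent to the following PowerShell; the resource group and file paths are illustrative:

# Deploy the Logic App ARM template together with its (token-replaced) parameters file
New-AzResourceGroupDeployment -ResourceGroupName 'rg-logicapps-ci' `
    -TemplateFile '.\drop\LogicApps\LogicApp.json' `
    -TemplateParameterFile '.\drop\LogicApps\LogicApp.parameters.json'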
Final notes
You're now ready to go on your adventures, build your Logic Apps, get them ready for Continuous Integration, and deploy them. I didn't approach the Release Pipeline because it's also very simple. You will only need to create your variables, replace your tokens, and deploy the ARM templates.
You can fiddle around with gates, automated deployments, pre-deployment approvals and all, but that is a more advanced feature.
If you have multiple releases that you want to deploy jointly, you can even build Orchestrations (I can hear all the BizTalk bells ringing in our heads). This is not as simple as isolated deployments, because it does involve some orchestration of the parts (well, duhh).
I hope this small series of posts helped you to solve issues and to improve your deployments.
In the last post we talked about building a Logic App from scratch and gave a few hints on what we would change to prepare for CI/CD.
In this post, we will show you how to prepare your Logic App and template files and how to set and rename your parameters, and we will hint at how this correlates with the Azure Pipeline.
So let's recap. We saw that the requirements are having VS installed, the Azure SDK, the Logic Apps for Visual Studio tools extension, and an active Azure subscription. We built a new Azure Resource Group project with the Logic Apps template and added a few actions to our LA, nothing too fancy, just enough to show what's needed.
Now, let’s look at how we will change the code to get it ready.
Changing the JSON code to prepare it for CI/CD is simple but requires attention, because if it's not done properly, you won't be able to deploy your template and it might take you a while to find where the problem is. Even though VS gives you a few hints, because IntelliSense helps, it might still not explain why it's failing.
The first thing I like to do is rename the connection parameters; having “servicebus_1_connectionString” is just horrible and does not help you understand what kind of connection you have. For this case, because we only have one connection, I'll rename it to “arm_serviceBus_connectionString”, because we're using an ARM (Azure Resource Manager) template and because this is the type of parameter. I will also add a template variable, named “SingleQuote”, which will be, as you might have guessed, a single quote mark.
If you have other connectors, I suggest you continue changing names to match the same naming convention. It will help you and others to know what that is supposed to be.
After the Logic App file is taken care of, you will also need to apply these changes in the Parameters file.
By default, it will be almost empty, just having the logicAppName parameter with a Null value. This will make your deployment fail, because the template isn’t valid.
In fact, you won’t even be able to deploy it, because VS is smart enough to prompt you for the missing values, taking the default ones from the LogicApp.
At this point, we’re no longer dealing with the definition, we’re dealing with the values we want the Logic App parameters to have. So, “type” and “defaultValue” no longer apply, you should use “value” directly or, if you’re dealing with KeyVault secrets, you can just reference KV and the secret name.
In this example, I’m setting the SB connection string both ways, to show how it can be done.
If you've done everything right, your Logic App should deploy without any fuss.
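As a quick sanity check, you can validate the template and the parameters file before actually deploying. A minimal sketch, with an illustrative resource group name:

# Validate the ARM template and parameters file without deploying anything
Test-AzResourceGroupDeployment -ResourceGroupName 'rg-logicapps-dev' `
    -TemplateFile '.\LogicApp.json' `
    -TemplateParameterFile '.\LogicApp.parameters.json'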
Now comes the fun part, which is dealing with the Parameters Template file. It is incredibly difficult to do this and it's going to take several hours, so grab that coffee and get comfortable.
You will need to change your values to a token and an identifier, to later use in the Pipeline and releases.
Wow, that took us… 30 seconds, maybe. I’m exhausted and I need a break. You can even get that KV value with the token, you just need to change the identifier to the KV secret name.
We’re sweating over here with all this work.
In the next blog post, we will build the Pipeline and give the hints for the Release as well.
This is such a common task on BizTalk Server that I've already forgotten how many times I've done it. It comes up in several scenarios, like:
Testing
Certain parts of the application are not yet ready to production
Or even discarding unwanted messages
In these cases, we want/need to create a send port that subscribes to specific messages and discards them to a folder. Otherwise, they will get stuck in the administration console, and we don't want that.
After a while, the problem is that the folder will get a considerable amount of messages, and writing a large number of files to disk will get progressively slower as the number of files in the target directory gets large. This is because your computer’s operating system must keep track of all files in a directory. Even bulk deleting all of these files will take a longer time. Moving or deleting files from the target directory on a regular basis will ensure that the performance is not adversely affected.
A large number of small files make more impact than a small number of large files, and most of the time, BizTalk Server consumes/produces small messages. However, at some point, you may completely fill the hard drive, which is more critical.
With this script, you can easily configure the folders and the type of files you want to monitor and delete.
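I won't reproduce the full script here, but the core of such a cleanup job can be as simple as the sketch below; the folder paths, file masks, and retention windows are illustrative and should match your own send port configuration:

# Folders and file masks to monitor, and how long to keep files before deleting them
$foldersToClean = @(
    @{ Path = 'C:\BizTalk\DiscardedMessages'; Filter = '*.xml'; KeepDays = 7 }
    @{ Path = 'C:\BizTalk\TestOutput';        Filter = '*.*';   KeepDays = 1 }
)

foreach ($folder in $foldersToClean) {
    $limit = (Get-Date).AddDays(-$folder.KeepDays)
    Get-ChildItem -Path $folder.Path -Filter $folder.Filter -File |
        Where-Object { $_.LastWriteTime -lt $limit } |
        Remove-Item -Force
}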
Recently I wrote my version of a script that Mike Stephenson initially created: Find Orphaned Azure API Connectors with PowerShell. This PowerShell script will look at all of the API Connections in a specific resource group and then inspect every Logic App in your resource group to check if the API Connections are being used or not. The goal of this script, of course, is to identify orphaned API Connections in a single Resource Group quickly and effectively.
I modified the original script to have a better output, or at least a different output that works better for my needs: it automatically adds a Deprecated tag to all the API Connectors with the value True or False, and it adds additional capabilities for generating the output report in CSV format.
The only limitation of this script is that it only checks a specific Resource Group. So, if you have 3 or 4 Resource Groups, you need to configure this script and run it 3 or 4 times.
To streamline this process and not waste so much time, I decided to create a new version of this script. This new script will look at all the API Connections in all resource groups on a single Azure Subscription and then inspect every Logic App in that specific Resource Group (RG) to check if the API Connections of that RG are being used or not.
What's new in this PowerShell script (a simplified sketch of the core check follows the list below):
It will check whether API Connections are being used or not across all Resource Groups available in a single Subscription.
Subscription details: output is improved and color-coded for easier reading
List of available API Connectors grouped by Resource Group: output is improved and color-coded for easier reading
List of Logic App and API Connector associations grouped by Resource Group and Logic App: output is improved and color-coded for easier reading
List of orphaned API Connectors ordered by Resource Group: output is improved and color-coded for easier reading
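At its core, the check the script performs for each Resource Group looks something like the simplified sketch below; the real script adds the Deprecated tagging, coloring, and CSV report on top of this:

# For every resource group, flag API Connections that no Logic App in that group references
Connect-AzAccount | Out-Null

foreach ($rg in Get-AzResourceGroup) {
    $connections = Get-AzResource -ResourceGroupName $rg.ResourceGroupName `
        -ResourceType 'Microsoft.Web/connections'
    $logicApps = Get-AzResource -ResourceGroupName $rg.ResourceGroupName `
        -ResourceType 'Microsoft.Logic/workflows' -ExpandProperties

    foreach ($connection in $connections) {
        # A connection is in use if any Logic App references its resource id in the $connections parameter
        $inUse = $false
        foreach ($logicApp in $logicApps) {
            $refs = $logicApp.Properties.parameters.'$connections'.value
            if ($refs -and ($refs.PSObject.Properties.Value.connectionId -contains $connection.ResourceId)) {
                $inUse = $true
                break
            }
        }
        if (-not $inUse) {
            Write-Host "Orphaned API Connection: $($connection.Name) ($($rg.ResourceGroupName))" -ForegroundColor Yellow
        }
    }
}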
Download
THIS POWERSHELL SCRIPT IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND.
You can download Find Orphaned API Connectors in all Resource Groups from GitHub here:
The standard BizTalk Deployment task does a decent job of deploying the application, but it doesn't handle changing tokens or registering DLLs in the GAC.
To deploy to multiple machines or to change your bindings according to your environment, you have to make your file dynamic. This means replacing your connections with variables.
Let's start with the basics:
Creating the project and installing it in DEV
As always, it's better to first create the DevOps repository and clone it to your machine.
Having this created, you need to get your project working and have a Deployment Project as well. This will contain pointers to the needed DLLs and binding files of your BTS project. It will also contain the Application name to be deployed and some other configurations.
You will see that you can set the BizTalk Assemblies path, as well as other Assemblies, Pre- and Post-processing scripts, and the Deployment Sequence. This is one of the most important steps because, as you know, the order in which you deploy your BT Assemblies matters.
When referencing your BT projects, do make sure that the Application Project is using the same framework version as your other projects. If it’s not the same version, it will not be able to copy the DLLs to the referenced Path and will not build successfully.
Building this project will generate a ZIP file that contains all that is needed. You can try to publish it directly, after configuring the application.
The bindings file that is created with the project is just an empty template, so you’ll want to deploy your application in your Dev Environment and create those bindings. It will make a difference if you export your application bindings when it’s started and when it’s stopped, so keep that in mind.
For this example, I’m going to export the bindings with the Application fully stopped.
Your standard Bindings export will carry the ports and URIs/connections straight from the Admin console. Through a little magic, we will configure these values to be dynamic and it’s super easy.
Making your Bindings dynamic for deployment
Now that you've exported the bindings, you want to make them ready for DevOps and able to accept multiple configurations.
From my example, you can see that the ReceiveLocation and ReceivePort names are static. If we tokenize this, you can call it whatever you want, therefore reducing the risk of colliding with other existing ports in your end systems.
So, keeping in mind the desired token, I’m going to replace these values, ReceiveLocation address included, with a variable and token identifier. With a few magic touches, we end up with something like this:
And that’s it. Of course, this is a very small and simple example, but even with a goliath project, it will still be the same pattern. You find what you want to make dynamic, tokenize it, save and upload your changes to your Repo.
Building your Pipeline and Release Pipeline
Now you have your source code in your Repository, your bindings ready for dynamic changes and you want to deploy it.
You will need to set up your build Pipeline before you can get your Release ready, so get to work.
The Pipeline itself doesn't need to be too complicated; you just need to build your Solution, with or without the OutPath argument (I found that setting this made my life easier in some projects), and publish the drop.
With your drop created, your Release pipeline needs the following tasks:
Extract Files – to unzip your file
Replace Tokens (a great extension by Guillaume Rouchon, more info here)
Archive Files – to zip it back
BizTalk Server Application Deployment – I recommend this, but you can do it with PowerShell
Extracting your file contents is straightforward; you just need to select your zip in your drop contents and a destination folder. Keep in mind that you will need to know where it lands in order to zip it back.
Replacing the tokens works just as before: you select the *.XML mask or point directly to your bindings and select the token pattern it should be looking for. Remember that the variables you define are case sensitive. You can also use a Variable Group; it is a great way of organizing your environment-specific variables or the common variables that you might have.
Once this is done, you can proceed to recreate the Zip file and its contents. The destination folder you selected when unzipping will now be the root folder you are pointing to.
Remember to untick the “Prepend root folder name to archive paths” option. If you keep it selected, your file will end up with a structure like “Zip/bindings” instead of just “bindings”, and the deployment will fail because it's not the expected folder structure. Also, tick the “Replace existing archive” option; otherwise, you will create a copy and deploy the original version instead.
And for the final step, the Deployment Task. I chose to use the standard task instead of PowerShell, because I didn’t want to handle scripts at this point.
Select the Zip package and set the operation to Create. From what I’ve found out, this will Upsert your application, while Update will not create the app if it doesn’t exist.
And this is what you need. If you’ve set everything properly, your Release Pipeline will deploy your Application to your Server and get it up and running, with the parameters you’ve set in your bindings file.
It took a while to understand how this process worked but in the end, it turned out to be very simple and all it took was to apply the same concept we already used with the ARM deployment for Azure resources.