How to remove/unmark Mark of the Web (MOTW) files with PowerShell

When we work with multiple clients, in some cases with restricted access to their servers or environments, we sometimes need to copy source code from our machines into their development environment. This happened recently when I needed to transfer some modifications I had made to a BizTalk Server Visual Studio solution into the client's development environment. I couldn't connect a USB device for security reasons, so I decided to copy the solution to OneDrive and download it from the server.

However, by doing this, some of the files, especially the project and solution files, got marked with the Mark of the Web (MOTW). By default, the Mark of the Web is added only to files from the Internet or Restricted Sites zones (you can learn more about it here: Mark of the Web and zones). By the way, according to Microsoft documentation, the Mark of the Web only applies to files saved on an NTFS file system, not to files saved on FAT32-formatted devices.
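
Under the hood, the mark is stored in an NTFS alternate data stream named Zone.Identifier. If you are curious, you can inspect it with PowerShell; a quick sketch, where the file name is just an example:

# Show the Zone.Identifier stream of a downloaded file (file name is illustrative)
Get-Content -Path '.\MySolution.sln' -Stream Zone.Identifier

# Typical output for a file that came from the Internet zone:
# [ZoneTransfer]
# ZoneId=3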

The main problem with MOTW is that it breaks the compilation of your Visual Studio solutions:

Error Couldn't process file .resx due to its being in the Internet or Restricted zone or having the mark of the web on the file. Remove the mark of the web if you want to process these files.

There are many ways to remove the Mark of the Web (MOTW) flag. Ideally, this should be done with Visual Studio closed. Here are two options:

  • Option 1: Using the File Properties
    • Right-click the file in Windows Explorer and select Properties.
    • In the Properties window, at the bottom, select the Unblock check box and click OK.

Note: The problem with this solution is that you need to do it manually for all the marked files.

Option 2: Using PowerShell

  • Use a simple PowerShell script to go through all the files and unblock them. You can accomplish that, for example, with the following command:
Get-ChildItem -Path . -Recurse | Unblock-File
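
If you want to be a bit more careful, you can first preview what would be unblocked and restrict the pipeline to files; a sketch, assuming you run it from the solution root:

# Preview which files would be unblocked, without changing anything
Get-ChildItem -Path . -Recurse -File | Unblock-File -WhatIf

# Then remove the mark from every file under the current folder
Get-ChildItem -Path . -Recurse -File | Unblock-File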

Download

THIS POWERSHELL SCRIPT IS PROVIDED “AS IS” WITHOUT WARRANTY OF ANY KIND.

You can download PowerShell scripts to unblock files from GitHub here:

If you liked the content or found it helpful and want to help me write more content, you can buy (or help buy) my son a Star Wars Lego! 

Author: Sandro Pereira

Sandro Pereira lives in Portugal and works as a consultant at DevScope. Over the past years, he has been implementing integration scenarios, both on-premises and in the cloud, for various clients, each with a different scenario in terms of technology, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server, and technologies like AS2, EDI, RosettaNet, SAP, and TIBCO.

He is a regular blogger, international speaker, and technical reviewer of several BizTalk books, all focused on integration. He is also the author of the book "BizTalk Mapping Patterns & Best Practices" and has been awarded Microsoft MVP every year since 2011 for his contributions to the integration community.

Resurrecting BizTalk’s Wisdom: Rediscovering Forgotten BizTalk Whitepapers

Lately, I’ve been engaged in the process of cleaning and reorganizing my hard drives. The sheer volume of resources I’ve accumulated over the years is quite substantial. However, there are occasions when I find myself a bit frustrated in my quest to locate them. Additionally, I’ve been revisiting a few of my online profiles and initiating the process of consolidating my resources on GitHub.

As I embarked on this quest, I stumbled upon some of my vintage articles originally crafted for Microsoft TechNet Wiki. Astonishingly, I recognized that a handful of these articles had laid the foundation for my first book: BizTalk Mapping Patterns and Best Practices.

Considering the product’s inherent characteristics, they all remain exceptionally current and relevant, so I decided to add them to my GitHub repo.

BizTalk Server: Basic Principles of Maps

Maps or transformations are among the most common components in integration processes. They act as essential translators, decoupling the different systems being connected. In this article, as we explore the BizTalk Mapper Designer, we will explain its main concepts, covering topics such as product architecture, BizTalk schemas, and some of the most widely used standards in message translation.

This whitepaper aims to serve as an introductory guide for individuals embarking on their initial journey with this technology.

For Portuguese speakers, the same whitepaper can be found here as well: BizTalk Server: Princípios básicos dos Mapas.

BizTalk Server: How Maps Work

Maps or transformations are a cornerstone of integration processes, functioning as essential translators that facilitate the seamless exchange of information among different systems. This whitepaper aims to demystify how maps are processed internally by the product's engine as we explore the BizTalk Server map editor.

For Portuguese speakers, the same whitepaper can also be found here: BizTalk Server: Como funcionam os mapas.

Microsoft BizTalk Server seen by the programmer’s eyes

Originally featured in the Portuguese magazine ‘Programar’, this whitepaper has since been translated into English. Its primary objective is to explore the advantages that the BizTalk Server platform offers to developers.

The same whitepaper for Portuguese speakers can also be found here: BizTalk Server aos olhos dos programadores.

Hope you find this helpful! So, if you liked the content or found it helpful and want to help me write more content, you can buy (or help buy) my son a Star Wars Lego! 

Author: Sandro Pereira


Checking for the existence of a property by Thomas Canter

Today, with his permission, I will bring back to life another old BizTalk Server blog post by an old friend of mine, Thomas Canter, which I find pretty interesting and useful: Checking for the existence of a property. It was initially published on http://geekswithblogs.net/ThomasCanter, now retired.

NOTE:

  • A variable named PropExists, of type bool, has already been created
  • The property of interest is BTS.RetryCount
  • The message is Message_In

The list in Using Operators in Expressions (https://learn.microsoft.com/en-us/biztalk/core/using-operators-in-expressions) has the typical operators you would expect from C#: multiplication, bit operations (shift left and right), and Boolean operators. But a couple of extremely useful constructs are available that are unique to BizTalk.

The most important of these (in my humble opinion) is the exists operator.

As you are all aware, merely checking whether a property exists in an expression can throw an exception, as in the following case:

PropExists = (Message_In(BTS.RetryCount) != null && Message_In(BTS.RetryCount) != "");

If BTS.RetryCount does not exist in the message context, then the MissingPropertyException (Windows Event Log Event ID 10019) is thrown.

Without having to resort to a scope shape and exception handler, the exists operator allows you to check if a property exists in a message and is used in the following format:

PropExists = BTS.RetryCount exists Message_In;

OR

if (BTS.RetryCount exists Message_In)
{
     …;
}

Conclusion

Using the XLANG/s exists operator in your orchestration allows you to test for the existence of a property in a message without resorting to a scope shape and exception handler.

Below are a few more XLANG/s functions that can provide some value to your Orchestrations:

  • checked(): raises an error on arithmetic overflow. Example: checked(x = y * 1000)
  • unchecked(): ignores arithmetic overflow. Example: unchecked(x = y * 1000)
  • succeeded(): tests for successful completion of a transactional scope or orchestration. Example: succeeded()
  • exists: tests for the existence of a message context property. Example: BTS.RetryCount exists Message_In

Hope you find this helpful! So, if you liked the content or found it helpful and want to help me write more content, you can buy (or help buy) my son a Star Wars Lego! 

Author: Sandro Pereira


Note to myself: How to run Windows File Explorer as a different user

This is just another post for the sake of my mental sanity, because I'm tired of looking this up over and over again. While working on BizTalk Server projects, and in many other scenarios, we have to check whether a specific user has access to a file share or to specific resources in it, and we don't want to disconnect (log off) from the machine and log on to the same machine with a different account. Sometimes that account doesn't have Remote Desktop access either.

So, the main question is: How to run Windows File Explorer as a different user?

There are several ways to accomplish this, but if you need to run File Explorer as a different user, the simplest is the following:

  • Open File Explorer as you normally do, and navigate to the following folder:
    • C:\Windows
  • Scroll down until you find the explorer.exe executable, or search for it in the search field in the upper-right corner.
  • Hold down the Shift key, right-click the explorer.exe file, and in the context menu select Run as different user.
  • In the Windows Security window that appears, specify the name and password of the user account you want to run the application under, and click OK.
  • After this, a new File Explorer window opens, running under the specified user account.

Any Windows user can run a program in their current session on behalf of another user using RunAs. This feature allows you to run scripts (.bat, .cmd, .vbs, .ps1), executable files (.exe), or application installers (.msi, .cab) with the permissions of another user, without having to log off and log back on to the machine with different credentials.
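
For example, the same RunAs behavior is available from a console; a quick sketch, where CONTOSO\jane is a placeholder account name:

# Classic runas: starts a command prompt under another account
runas /user:CONTOSO\jane cmd.exe

# PowerShell equivalent for any executable or script: prompts for credentials
Start-Process -FilePath notepad.exe -Credential (Get-Credential)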

Hope you find this helpful! So, if you liked the content or found it helpful and want to help me write more content, you can buy (or help buy) my son a Star Wars Lego! 

Author: Sandro Pereira


Azure API Management Monitoring and Alerting made simpler

In today's world, APIs have definitely advanced the way applications communicate with each other. However, when several APIs are utilized in a business scenario, it can be challenging to retain insight into each API and make sure it works as intended.

This is where an Azure service like API Management turns out to be significant. It offers a centralized interface to publish, transform, and manage numerous APIs, guaranteeing that they are secure and consumable.

Significance of Monitoring your Azure APIM 

Since Azure APIM instances manage business-critical APIs, monitoring them and their operations is crucial to understanding their health and efficiency. Here are the top benefits that can be achieved with Azure APIM monitoring:

  • Eliminate bottlenecks: Monitoring how your APIM APIs and Products (groups of one or more APIs) perform is essential to quickly spot problems that might adversely affect the end-user experience.
  • Reduce latency gaps: API response time has a profound effect on the performance of an application, so APIM monitoring lets you identify response-time delays and close latency gaps.
  • Ensure availability: When there are API-related issues, an effective APIM monitoring setup will send instant alerts, allowing you to take the required remedial actions.
  • Detect failure anomalies: Rapidly figure out whether there are outages or abnormal deviations, such as a sudden rise in the rate of failed requests.

Understanding this importance, Azure offers its own suite of built-in tools for monitoring APIM instances.

Wide range of monitoring options available for Azure APIM  

Native-Azure monitoring tools (Azure Monitor)  

Azure Monitor is the primary built-in tool for monitoring Azure APIM instances. Basically, it enables the collection of metrics and logs from APIM, which can then be used for monitoring, visualization, and alerting.

Capabilities 

  • Monitor your APIM instances on metrics like capacity and request rate
  • Set up alert rules based on metrics to get notified of critical issues (see the sketch after this list)
  • Visualize monitoring metrics on dashboards
  • Get insights into the operations performed on your APIM instances with activity logs
  • Integrate with Application Insights to see the dependencies between APIM instances and other services.
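
As a quick illustration of the alert-rule capability mentioned above, a metric alert on an APIM instance could be created with the Azure CLI roughly like this; a sketch, where the names, resource ID, and threshold are placeholders:

# Alert when the APIM Capacity metric averages above 75%
az monitor metrics alert create `
  --name "apim-capacity-high" `
  --resource-group "my-rg" `
  --scopes "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.ApiManagement/service/my-apim" `
  --condition "avg Capacity > 75" `
  --description "APIM capacity is running high"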

However, Azure Monitor focuses more on reactive monitoring: you get to react only after an incident has occurred.

But when it comes to APIM monitoring, businesses tend to be more proactive, constantly attempting to spot and fix potential issues before they have an impact on end users.

The limitations in using Azure Monitor for APIM 

  • Azure Monitor allows you to configure only a limited number of metrics per alert rule
  • Monitoring an APIM instance on various metrics demands configuring a number of alerts, resulting in a cost spike
  • No consolidated error reporting for multiple APIM instances
  • No support for visualizing how an API call traverses the various Azure services
  • Without end-to-end tracing, it is hard to perform root cause analysis on a performance or latency issue
  • No automated features to execute remedial actions without manual intervention

This is where enterprise-grade monitoring tools come into play. One such tool that can help you overcome the drawbacks listed above and proactively monitor your Azure APIM is Serverless360.

How to be proactive and overcome limitations in Azure Monitor? 

Serverless360 is an advanced cloud management platform that enables managing and monitoring different Azure services involved in your application from a unified view. 

Considering APIs are critical in simplifying how an end user interacts with your application, Serverless360 offers out-of-the-box monitoring support for Azure APIM APIs, operations, and products. 

Here is how Serverless360 can be extensively used for Azure APIM Monitoring. 

Proactive monitoring for Azure APIM: Monitor all your APIM instances on multiple metrics and properties (failed requests, successful requests, etc.) at no additional cost by setting thresholds that raise an alert whenever there is a violation.

With this, you overcome one of the major limitations of Azure Monitor: the restriction on how many metrics can be monitored under a single alert rule.

Real-time consolidated error reports: In traditional monitoring, error reports are generated separately for each APIM API, operation, or product, making it very difficult to identify the root cause of an issue.

Serverless360 mitigates this challenge by sending you a consolidated report on all APIM instances at the desired time intervals, eliminating false alerts and alert storms.

Discover failure trends: Serverless360 offers customizable, plug-and-play dashboards that provide a unified view of the monitored metrics. For instance, visualize business-centric metrics like API response rates in a single place to avoid latency issues.

End-to-end tracking: An Azure application involves various Azure services along with APIM instances, so tracking how an API call flows through each of those services is required to perform root cause analysis and troubleshoot issues faster.

Application Insights in Azure Monitor just lets you visualize how services interact with each other, whereas Serverless360 supports end-to-end tracking along with dependency mapping.

Auto-correct the status of APIM Products: Automation is crucial to reducing the manual workload involved in resolving recurring incidents.

Serverless360 offers unique functionality to monitor the status of APIM products and auto-correct them during unintended interruptions. Also, it lets you configure various automated actions to be triggered whenever there is a threshold violation. 

Optimize costs associated with APIM Instances: Save time and cost by auto-generating documentation on your entire Azure environment/infrastructure. These documents help keep track of the costs associated with your APIM APIs, operations, and products. They enable you to compare the costs incurred across various time periods and to gain a full analysis of the expenditures made for each of those components. 

No room for security breaches: Understanding the importance of governance and auditing in enhancing security, Serverless360 comes with features to audit every action performed on your APIM instances and enable advanced role-based access control. 

Decode App Insights and Log Analytics: Enabling App Insights and Log Analytics can yield useful data on APIM performance; Serverless360 makes this information more usable for the support team. Refer to this blog to learn more: Serverless360 to enable your Security Manager to Azure WAF data

Offload support: Serverless360 offers an operations-friendly interface that is simple and straightforward for support users to infer the status of a resource and remediate identified issues. This facilitates offloading support from the Azure team, allowing it to focus on business innovation.

Conclusion 

Having a solid tool for Azure APIM monitoring is mandatory for any organization, as the failure of an API could result in critical performance issues for the whole application.  

But there are plenty of choices available for Azure APIM monitoring, and it is important that you choose the most apt one for your business. Thus, this blog discusses the features of the native Azure monitoring tools, their drawbacks, and a solution (Serverless360) that overcomes them with consolidated monitoring for Azure APIM APIs, Products, and Operations.

The post Azure API Management Monitoring and Alerting made simpler appeared first on Steef-Jan Wiggers Blog.

The value of having a Third-party Monitoring solution for Azure Integration Services

My day-to-day job focuses on enterprise integration between systems in the cloud and/or on-premises. Currently, it involves integration with D365 Finance and Operations (or Finance and Supply Chain Management). One aspect of these integrations is monitoring. When a business has one or more Azure Integration Services running in production, the operations aspect comes into play, especially for integrations that support crucial business processes. The operations team requires the correct procedures, tools, and notifications (alerts) to run these processes. Procedures and notifications are essential; however, team members also need help identifying issues and troubleshooting. Azure provides tools, and so do third-party solutions. This blog post will discuss the value of having third-party monitoring in place, such as Serverless360.

Serverless360

Many of you who read blogs on Serverless360 know what the tool is. It is a service hosted as Software as a Service (SaaS). Operations teams can get access once a subscription is acquired, or through a trial. Subsequently, they can leverage the business applications, business activity monitoring, and documenter features within the service. We will briefly discuss each feature and its benefits and value in the upcoming paragraphs.

BUSINESS APPLICATIONS

With the Business Applications feature, a team can configure and group integration components into a so-called "Business Application" to monitor. It does not matter where the resources reside, whether within one or more subscriptions or resource groups.

The overview shown above groups several resources belonging to an integration solution. In the blink of an eye, a member of the operations team can see the components' state and any potential issues that need to be addressed. Can the same be done in Azure with available features such as Azure Monitor, including components like Application Insights? Yes, it can. However, it takes time to build a dashboard. Furthermore, when operations are divided into multiple tiers, first-tier support professionals might not be familiar with the Azure Portal. In a nutshell, the overview provided by Business Applications is not available in Azure out of the box.

As Lex Hegt, Lead Product Consultant at BizTalk360, points out:

Integration solutions can span multiple technologies, resource groups, tags, and even Azure subscriptions. With the Azure portal having the involved components in all those different places, it is hard to keep track of the well-being of those components. Serverless360 helps you utilize the concept of Business Applications. A Business Application is a container to which you can add all the components that belong to the same integration. Once you have added your components to a Business Application, you can set up monitoring for those components, provide access permissions, and administer them.

Business Applications bring another feature that provides an overview of the integration components and their dependencies. You might be familiar with the service map feature in Application Insights, which works at a more fine-grained level. The service map in Serverless360 is intended to show the state of each component and dependency at a higher level.

Within a business application, the configuration of monitoring components is straightforward. By selecting the component and choosing the monitoring section, you can set thresholds of performance counters and set the state.

The value of Business Applications is a quick view of the state of your integrations and the ability to dive into any issue quickly, saving time by spending far less of it identifying the problem (see, for instance, Application Insights health check with Serverless360 and Integrating Log Analytics in Serverless360). With more time on their hands, operations teams can focus on other matters during a workday or shift. Furthermore, the ease of use of Business Applications means first-tier support people do not need a deep understanding of, or experience with, the Azure portal.
Having a clear overview is one thing; it also helps operations teams receive notifications, finetune metrics based on thresholds, and only receive information when it matters. In addition, it's essential to keep integrations operational when they support critical business processes, as any outage costs a significant amount of money.

BUSINESS ACTIVITY MONITORING

The second feature of Serverless360 is the end-to-end tracking capability called Business Activity Monitoring (BAM). With the BAM feature, organizations can instrument the Azure resources that support integrations between systems. Through a custom connector and an SDK, you can add tracking to the Logic Apps and Azure Functions that are part of your integration. A unique transaction instance id generated in the first component is carried forward to the subsequent stages in further Functions and Logic Apps.

The operations team must do some work to leverage the BAM functionality. They need to set up the hosting of the BAM infrastructure, define the business process, instrument the business process, and add monitoring (see, for instance, Azure Service Bus Logging with BAM – Walk-through). Once that is done, a clear view of the process and its stages is available.

The benefit of the BAM feature is a concise overview of the configured business processes. Moreover, you get an overview of the complete process and can potentially see where things go wrong.

AZURE DOCUMENTER

The final feature Serverless360 offers is the Azure Documenter, which is intended to generate documentation. With the documenter, operations teams can generate documentation for the subscription that contains the integrations. It is good to have a dedicated subscription for integration solutions, to better govern and manage Azure resources.

When operations teams want to generate documentation, they can choose between different templates, storage options for the document, and a billing range.

The benefit of having documentation of the integrations in a subscription is a clear overview of the components, their details, and costs (consumption). While the Azure portal offers similar capabilities, you have to go to Cost Management and Billing to see consumption and cost, to Azure Advisor, and to other places. Furthermore, there is no feature to generate documentation to help report on the state of your Azure resources.
The value of the Azure Documenter is the flexibility of generating documentation at different levels of granularity. Furthermore, by running the documenter frequently, you can spot differences like an unexpected increase in cost, produce executive reports, and feed your knowledge base for integrations.

Conclusion

The features and benefits of Serverless360 have been outlined in this blog post. Of course, there are many more features; we focused on the most significant ones, which provide operations teams the most value: a clear overview of the state of integrations in a single pane of glass, and the ability to quickly drill down into integration components and spot issues at a fine-grained level. Furthermore, Business Activity Monitoring and the Azure Documenter provide end-to-end tracking and generated documentation.

Serverless360 offers an off-the-shelf product for monitoring that is not directly available in the Azure Portal. As an organization, you can decide whether to buy a product, build a custom solution, or both, to fulfill the monitoring requirements for integration solutions. Serverless360 can be the answer for organizations looking for a product to meet their needs. It has unique features that are not directly available in Azure or that would require a substantial investment to achieve.
For more details and Serverless360 in action, see Michael Stephenson's webinar, Support Strategy for Event-Driven Integration Architectures, and the latest features blog.

The post The value of having a Third-party Monitoring solution for Azure Integration Services appeared first on Steef-Jan Wiggers Blog.

July 18, 2022 Weekly Update on Microsoft Integration Platform & Azure iPaaS

Do you find it difficult to keep up to date with all the frequent updates and announcements on the Microsoft Integration platform and Azure iPaaS?

Integration weekly updates can be your solution. It's a weekly update on topics related to integration: enterprise integration, robust and scalable messaging capabilities, and citizen integration capabilities empowered by the Microsoft platform to deliver value to the business.

Microsoft Announcements and Updates

Community Blog Posts

Videos

Podcasts

How to get started with iPaaS design & development in Azure?

  • Robust Cloud Integration with Azure
  • Microsoft Azure for Developers: What to Use When
  • Serverless Computing: The Big Picture
  • Azure Logic Apps: Getting Started
  • Azure Logic Apps: Fundamentals
  • Microsoft Azure Developer: Creating Enterprise Logic Apps
  • Microsoft Azure API Management Essentials
  • Azure Functions Fundamentals
  • Cloud Design Patterns for Azure: Availability and Resilience
  • Architecting for High Availability in Microsoft Azure

Feedback

Hope this is helpful. Please feel free to reach out to me with your feedback and questions.

The post July 18, 2022 Weekly Update on Microsoft Integration Platform & Azure iPaaS appeared first on Hooking Stuff Together.

Logic Apps: CSV Alphabetic sorting

Every day we learn new things, and when it comes to Logic Apps, we tend to learn even more, because the platform is always shifting and new components are added. If we're using ARM templates, deployment brings its own challenges and, with them, new things to learn (and lots of cute little things that make you want to bang your head against a brick wall).

Usually, when we work with a CSV file, we tend to keep the sorting according to the specification. It isn't always alphabetical, nor descending/ascending.

Sometimes, it’s just a real mess but it makes sense to the client and to the application that is consuming it.

A few days ago, whilst working on a client project and after dozens of tests, we started to see errors in our CSV file, where the headers and columns were arranged in alphabetical order. This was not my intent; when I built the CSV array, I wanted it in a specific order.

So why was this array now being sorted, who gave that command, and how could I correct it?

Why and who:

As we dig into the Logic App code, we see that a Logic App is JSON at its core (my god, shocking development!). As such, it follows JSON rules, under which object properties have no guaranteed order, and the ARM deployment pipeline is free to re-serialize them alphabetically. If we set or append an array to our variable, even though that array won't show up ordered in our code, it will when we deploy it to our Resource Group.
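
To make that concrete, here is a minimal sketch (the property names are invented): the same object as authored, and as it comes back after deployment with its keys re-serialized alphabetically:

As authored: { "zulu": "1", "romeu": "2", "alpha": "3" }
After deployment: { "alpha": "3", "romeu": "2", "zulu": "1" }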

Let's prove this.

First, we set up our LA in Visual Studio and initialize a string variable. Then we append two values to it ("Append to string variable"): one as a plain string and the other as an array.

[Screenshot: two "Append to string variable" actions on the csvString variable, one appending a plain string and the other an array, both listing the values "zulu", "alpha", "charlie", "romeu", "juliet" in that order.]

Let's look at the code behind.

"Append_to 
string_variable -_pure_string" : 
"type": 
"AppendToStringVariabIe" , 
"inputs": 
"name": "csvString" 
"value : 
"runAfter": 
"Initialize variable": 
"Succeeded" 
"Append_to 
string_variable - 
array" : 
"type": 
"AppendToStringVariabIe" , 
"inputs": 
'name": "csvString" 
"value": 
"Zulu": " 
"alpha"• " 
"chat-lie": , 
"juliet": " 
"runAfter": 
"Succeeded" 
-_pure_string" :

Looking good so far. Our strings are set, and they're in the order we want.

Let's deploy it to our RG and check again.

[Screenshot: after deployment, the designer shows the pure-string append unchanged ("zulu", "alpha", "charlie", "romeu", "juliet"), while the array append now appears alphabetically sorted ("alpha", "charlie", "juliet", "romeu", "zulu").]

Well, there it is. With ARM deployment, if we write a JSON object, it gets sorted on deployment and will appear like this in the designer in the Portal.

The funny thing is that if we change our object back to the order we want, the designer will not recognize this as a change and doesn't let you save.

[Screenshot: the designer does not detect the reordered value as a change, so the Save button stays disabled.]

Even in Code View the changes are not recognized.

[Screenshot: Code View likewise does not recognize the reordered object as a change.]

But if we add other text to it, the changes are recognized and the Portal allows us to save.

[Screenshot: after adding extra text to the value, Code View recognizes the change and the Portal allows saving.]

But still, it won't keep your ordering and will sort your CSV array anyway, once again because it's JSON.

A few weeks ago, this behavior wasn't noticeable: I had a few Logic Apps in place with the string array in a specific order, and on deployment it didn't get sorted.

I searched in Azure updates to see if anything was mentioned but nothing came up.

How can we bypass this issue?

If you’re working with a CSV file like I was, after you build your array, you’ll need to build a CSV table.

The action "Create CSV table" will take care of this for you, but as we know, it will not be in the format we need.

[Screenshot: an "Append to array variable" action with the fields in the intended order ("zulu", "romeu", "alpha", "juliet", "charlie"), followed by a Parse JSON action feeding a "Create CSV table" action with Columns set to Automatic.]

(Notice I've switched to an array variable, because I can't parse the string as JSON.)

So, leaving the Columns in automatic mode will mess up your integration, as you can see. The output will be sorted, and it won't be what you want or need.

[Screenshot: the "Create CSV table" output with automatic columns; the headers come out alphabetically sorted ("alpha", "charlie", "juliet", "romeu", "zulu") instead of in the authored order.]

What a mess!! This is nothing like we wanted.

We will need to manually define the column headers and the values they are going to hold.

[Screenshot: the "Create CSV table" action with the column headers and values defined manually.]
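
In code view, the manually defined table looks roughly like this; a sketch, where the action, header, and value names are illustrative:

"Create_CSV_table": {
  "type": "Table",
  "inputs": {
    "from": "@body('Parse_JSON')",
    "format": "CSV",
    "columns": [
      { "header": "zulu", "value": "@item()?['zulu']" },
      { "header": "romeu", "value": "@item()?['romeu']" },
      { "header": "alpha", "value": "@item()?['alpha']" },
      { "header": "juliet", "value": "@item()?['juliet']" },
      { "header": "charlie", "value": "@item()?['charlie']" }
    ]
  },
  "runAfter": { "Parse_JSON": [ "Succeeded" ] }
}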

If you don't have many fields, this is quick to do, but when you have lots of fields... well, let's just say I hope you have plenty of time and don't lose focus.

[Screenshot: with manual columns, the output headers now follow the intended order ("zulu", "romeu", "alpha", "juliet", "charlie").]

And there we have it. The fields are now displayed correctly, the data is in the right place, and we've managed to get around this annoying problem.

Happy coding!

The post Logic Apps: CSV Alphabetic sorting appeared first on SANDRO PEREIRA BIZTALK BLOG.

Controlling the initial state of a Logic App

Sometimes I need a Logic App (LA) to be disabled when I deploy it. For instance, when deploying to Production, I like to have my LAs disabled, because I want to double-check everything before starting the process.

This is helpful because, when using the "Recurrence" trigger, the LA will usually start immediately. If, for some reason, a connector has the wrong configuration or is broken, or the end system is offline, the execution will fail. Other scenarios can happen as well, but that's another story.

An interesting fact is that you don't have a proper way to control this in the Portal. You can add the control line to the code, but you won't be able to control it with CI/CD.

So, in comes Rocket science (or not).

The resource code contains a property that allows you to control the state of an LA, and it's quite easy to set. If you do not specify this property, the LA will start enabled and will trigger if it can.

The property is called "state" and lies within the "properties" node. Setting this property from a global parameter allows you to prepare your CI/CD pipeline and parameterize it in your release.
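
A minimal sketch of what this could look like in the ARM template; the parameter name and workflow name are examples, and the workflow definition is left empty:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "logicAppState": {
      "type": "string",
      "defaultValue": "Disabled",
      "allowedValues": [ "Enabled", "Disabled" ]
    }
  },
  "resources": [
    {
      "type": "Microsoft.Logic/workflows",
      "apiVersion": "2017-07-01",
      "name": "my-logic-app",
      "location": "[resourceGroup().location]",
      "properties": {
        "state": "[parameters('logicAppState')]",
        "definition": {
          "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
          "contentVersion": "1.0.0.0",
          "triggers": {},
          "actions": {},
          "outputs": {}
        }
      }
    }
  ]
}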

This is quite an easy and simple insert that should take no more than 5 minutes to configure.

If you choose the “Disabled” state, the LA will not start unless you specifically activate it.

Happy coding!

The post Controlling the initial state of a Logic App appeared first on SANDRO PEREIRA BIZTALK BLOG.

Logic Apps: Moving from Azure Portal to Visual Studio

Most developers start working with Logic Apps through the Azure Portal, because it's fast and direct. You just open the Portal, create your resource, and start working. This is fine, but it comes with a cost: there are several limitations to what you can do, especially when it comes to CI/CD (Continuous Integration/Continuous Delivery).

To handle this, you need to move to Visual Studio and start working from there. For that, you need tools to help you, and there are a few available. In this post, I will cover a very good one and how to use it.

This tool is a collection of PowerShell scripts that download your Logic App code to a file and can also create a parameters file.

The creator of this collection is Jeff Hollan, PM Lead at Microsoft. You can check out his work at his GitHub repo: https://github.com/jeffhollan

The project we're going to use is LogicAppTemplateCreator. It's a C# project that builds a DLL, which we will import and use.

https://github.com/jeffhollan/LogicAppTemplateCreator

Let's begin our process. After cloning the solution and rebuilding it, the DLL will be in the usual folder ($sourceFolder\bin\Debug\).

After the solution is built, open Powershell and import this DLL, using the following command:

Import-Module C:\{pathToSolution}\LogicAppTemplateCreator\LogicAppTemplate\bin\Debug\LogicAppTemplate.dll

I dropped it into a shared folder, because I've referenced it in a repo for other people to consume in our projects. This is not necessary, although I recommend it so that things become easier for future developers in your company.

After executing this, you'll be ready to download your ARM template. So, get your Resource Group ID and your Subscription ID, and prepare an output folder. You will also need to enter your credentials to log in through PowerShell.

The script should be changed according to your IDs. Note that it should not be case-sensitive. If you don't set the Out-File, the output will be written to the PowerShell console; you'll still be able to copy it and paste it into a file, but that's an unnecessary step.

armclient token {subscriptionID} | Get-LogicAppTemplate -LogicApp {LogicAppName} -ResourceGroup {ResourceGroup} -SubscriptionId {subscriptionID} | Out-File {ARMTemplateOutput}

After the ARM template is created, a parameter template file can be generated by running the following script:

get-ParameterTemplate -GenerateExpression True -TemplateFile {ARMTemplateOutput} | Out-File {ParameterTemplateOutput}
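
Put together, a full run might look like this; all names and paths below are made-up examples:

# Hypothetical values for illustration only
$subscriptionId = "00000000-0000-0000-0000-000000000000"
$resourceGroup  = "rg-integration-dev"
$logicAppName   = "la-orders-to-sql"

# Generate the ARM template for the Logic App
armclient token $subscriptionId | Get-LogicAppTemplate -LogicApp $logicAppName -ResourceGroup $resourceGroup -SubscriptionId $subscriptionId | Out-File "C:\Templates\$logicAppName.json"

# Generate the matching parameters file
Get-ParameterTemplate -GenerateExpression True -TemplateFile "C:\Templates\$logicAppName.json" | Out-File "C:\Templates\$logicAppName.parameters.json"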

You will now have both files you need to manage your Logic App, so just copy them to your VS Azure Resource Group project with the Logic App template, and you're almost ready to go.

You will need to adjust the path links in the JSON code to make them CI/CD-able and fix the parameters in some connectors, but there's not a lot of work to be done.

As you can see, the ARM template already provides all connection parameters and connection variables. I recommend changing them to a more appropriate naming convention, like "arm_O365", "arm_SQL", or "arm_ServiceBus". This way you will know what each one refers to, with a very understandable pattern.

At the end of the day, your Logic App should be ready for deployment in your subscriptions and look something like this:

Happy coding!

The post Logic Apps: Moving from Azure Portal to Visual Studio appeared first on SANDRO PEREIRA BIZTALK BLOG.