Adding circuit breakers to your .NET applications


Apps fail. Hardware fails. Networks fail. None of this should surprise you. As we build more distributed systems, these failures create unpredictability. Remote calls between components might experience latency, faults, unresponsiveness, or worse. How do you keep a failure in one component from creating a cascading failure across your whole environment?

In his seminal book Release It!, Michael Nygard introduced the “circuit breaker” software pattern. Basically, you wrap calls to downstream services and watch for failure. If there are too many failures, the circuit “trips” and the downstream service isn’t called any longer, at least for a period of time, giving it a chance to heal itself.
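The pattern Nygard describes boils down to a small state machine: closed (calls flow through), open (calls fail fast), and half-open (one trial call after a cooldown). Here’s a minimal language-neutral sketch in JavaScript; the names and thresholds are my own for illustration, not anything from Hystrix:

```javascript
// Minimal circuit breaker sketch (illustrative, not the Hystrix implementation):
// closed -> open after too many consecutive failures; after a cooldown,
// allow one trial call ("half-open") to see if the dependency has healed.
function createBreaker(call, { maxFailures = 3, resetMs = 10000 } = {}) {
  let failures = 0;
  let openedAt = null;

  return function guarded(...args) {
    if (openedAt !== null) {
      if (Date.now() - openedAt < resetMs) {
        // circuit is open: skip the downstream call entirely
        throw new Error('circuit open: failing fast');
      }
      openedAt = null; // half-open: let one trial call through
    }
    try {
      const result = call(...args);
      failures = 0;    // success closes the circuit again
      return result;
    } catch (err) {
      failures += 1;
      if (failures >= maxFailures) openedAt = Date.now(); // trip the circuit
      throw err;
    }
  };
}
```

After `maxFailures` consecutive errors, callers get an immediate failure instead of waiting on a dead dependency, which is exactly the cascading-failure protection the pattern is after.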

How do we use this pattern in our apps? Enter Hystrix from Netflix OSS. Released in 2012, this library executes each call on a separate thread, watches for failures in Java calls, invokes a fallback operation upon failure, trips a circuit if needed, and periodically checks to see if the downstream service is healthy. And it has a handy dashboard to visualize your circuits. It’s wicked. The Spring team worked with Netflix and created an easy-to-use version for Spring Boot developers. Spring Cloud Hystrix is the result. You can learn all about it in my most recent Pluralsight course.

But why do Java developers get to have all the fun? Pivotal released an open-source library called Steeltoe last year. This library brings microservices patterns to .NET developers. It started out with things like a Git-backed configuration store, and service discovery. The brand new update offers management endpoints and … an implementation of Hystrix for .NET apps. Note that this is for .NET Framework OR .NET Core apps. Everybody gets in on the action.

Let’s see how Steeltoe Hystrix works. I built an ASP.NET Core service, and then called it from a front-end app. I wrapped the calls to the service using Steeltoe Hystrix, which protects my app when failures occur.

Dependency: the recommendation service

This service returns recommended products to buy, based on your past purchasing history. In reality, it returns four products that I’ve hard-coded into a controller. LOWER YOUR EXPECTATIONS OF ME.

This is an ASP.NET Core MVC Web API. The code is in GitHub, but here’s the controller for review:

namespace core_hystrix_recommendation_service.Controllers
{
    public class RecommendationsController : Controller
    {
        // GET api/recommendations
        public IEnumerable<Recommendations> Get()
        {
            Recommendations r1 = new Recommendations();
            r1.ProductId = "10023";
            r1.ProductDescription = "Women's Triblend T-Shirt";
            r1.ProductImage = "";

            Recommendations r2 = new Recommendations();
            r2.ProductId = "10040";
            r2.ProductDescription = "Men's Bring Back Your Weekend T-Shirt";
            r2.ProductImage = "";

            Recommendations r3 = new Recommendations();
            r3.ProductId = "10057";
            r3.ProductDescription = "H2Go Force Water Bottle";
            r3.ProductImage = "";

            Recommendations r4 = new Recommendations();
            r4.ProductId = "10059";
            r4.ProductDescription = "Migrating to Cloud Native Application Architectures by Matt Stine";
            r4.ProductImage = "";

            return new Recommendations[] { r1, r2, r3, r4 };
        }
    }
}
Note that the dependency service has no knowledge of Hystrix or how the caller invokes it.

Caller: the recommendations UI

The front-end app calls the recommendation service, but it shouldn’t tip over just because the service is unavailable. Rather, bad calls should fail quickly, and gracefully. We could return cached or static results, as an example. Be aware that a circuit breaker is much more than fancy exception handling. One big piece is that each call executes in its own thread. This implementation of the bulkhead pattern prevents runaway resource consumption, among other things. Besides that, circuit breakers are also machinery to watch failures over time, and allow the failing service to recover before allowing more requests.
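To make the bulkhead idea concrete, here is a hand-rolled sketch (illustrative names, not the Steeltoe implementation) that caps how many calls to a dependency may be in flight at once, rejecting the overflow instead of letting a slow dependency absorb every thread in the process:

```javascript
// Minimal bulkhead sketch (illustrative): cap concurrent calls so one slow
// dependency can't consume every thread/connection in the caller.
function createBulkhead(limit) {
  let inFlight = 0;
  return {
    tryRun(fn) {
      if (inFlight >= limit) {
        // compartment is full: reject instead of queueing forever
        throw new Error('bulkhead full: rejecting call');
      }
      inFlight++;
      try {
        return fn();
      } finally {
        inFlight--; // always free the slot, success or failure
      }
    }
  };
}
```

Hystrix gets the same effect by running each command on a bounded thread pool; the rejection, like the open circuit, is a fast failure the caller can answer with a fallback.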

This ASP.NET Core app uses the mvc template. I’ve added the Steeltoe packages to the project. There are a few NuGet packages to choose from. If you’re running this in Pivotal Cloud Foundry, there’s a set of packages that makes it easy to integrate with the Hystrix dashboard embedded there. Here, let’s assume we’re running this app somewhere else. That means I need the base package “Steeltoe.CircuitBreaker.Hystrix” and “Steeltoe.CircuitBreaker.Hystrix.MetricsEvents”, which gives me a stream of real-time data to analyze.

<Project Sdk="Microsoft.NET.Sdk.Web">
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNet.WebApi.Client" Version="5.2.3" />
    <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration" Version="2.0.0" />
    <PackageReference Include="Steeltoe.CircuitBreaker.Hystrix" Version="1.1.0" />
    <PackageReference Include="Steeltoe.CircuitBreaker.Hystrix.MetricsEvents" Version="1.1.0" />
  </ItemGroup>
  <ItemGroup>
    <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="2.0.0" />
  </ItemGroup>
</Project>

I built a class (“RecommendationService”) that calls the dependent service. This class inherits from HystrixCommand. There are a few ways to use these commands in calling code. I’m adding it to the ASP.NET Core service container, so my constructor takes in an IHystrixCommandOptions.

//HystrixCommand means no result, HystrixCommand<string> means a string comes back
public class RecommendationService : HystrixCommand<List<Recommendations>>
{
  public RecommendationService(IHystrixCommandOptions options) : base(options)
  {
  }

I’ve got inherited methods to use thanks to the base class. I call my dependent service by overriding Run (or RunAsync). If failure happens, RunFallback (or RunFallbackAsync) is invoked, and I just return some static data. Here’s the code:

protected override List<Recommendations> Run()
{
  var client = new HttpClient();
  var response = client.GetAsync("http://localhost:5000/api/recommendations").Result;

  var recommendations = response.Content.ReadAsAsync<List<Recommendations>>().Result;

  return recommendations;
}

protected override List<Recommendations> RunFallback()
{
  Recommendations r1 = new Recommendations();
  r1.ProductId = "10007";
  r1.ProductDescription = "Black Hat";
  r1.ProductImage = "";

  List<Recommendations> recommendations = new List<Recommendations>();
  recommendations.Add(r1);

  return recommendations;
}

My ASP.NET Core controller uses the RecommendationService class to call its dependency. Notice that I’ve got an object of that type coming into my constructor. Then I call the Execute method (that’s part of the base class) to trigger the Hystrix-protected call.

public class HomeController : Controller
{
  RecommendationService rs;

  public HomeController(RecommendationService rs)
  {
    this.rs = rs;
  }

  public IActionResult Index()
  {
    //call Hystrix-protected service
    List<Recommendations> recommendations = rs.Execute();

    //add results to property bag for view
    ViewData["Recommendations"] = recommendations;

    return View();
  }
}

Last thing? Tying it all together. In the Startup.cs class, I added two things to the ConfigureServices method. First, I added the HystrixCommand to the service container. Second, I added the Hystrix metrics stream.

// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
  services.AddMvc();

  //add the command to the service container, and inject into controller so it gets config values
  services.AddHystrixCommand<RecommendationService>("RecommendationGroup", Configuration);

  //added to get Metrics stream
  services.AddHystrixMetricsStream(Configuration);
}

In the Configure method, I added a couple of pieces to the application pipeline.

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
   if (env.IsDevelopment())
   {
       app.UseDeveloperExceptionPage();
   }

   //make the Hystrix request context available in the pipeline
   app.UseHystrixRequestContext();

   app.UseMvc(routes =>
   {
       routes.MapRoute(
           name: "default",
           template: "{controller=Home}/{action=Index}/{id?}");
   });

   //expose the Hystrix metrics stream endpoint
   app.UseHystrixMetricsStream();
}
That’s it. Notice that I took advantage of ASP.NET Core’s dependency injection, and known extensibility points. Nothing unnatural here.

You can grab the source code for this from my GitHub repo.

Testing the circuit

Let’s test this out. First, I started up the recommendation service. Pinging the endpoint proved that I got back four recommended products.


Great. Next I started up the MVC app that acts as the front-end. Loading the page in the browser showed the four recommendations returned by the service.


That works. No big deal. Now let’s turn off the downstream service. Maybe it’s down for maintenance, or just misbehaving. What happens?


The Hystrix wrapper detected a failure, and invoked the fallback operation. That’s cool. Let’s see what Hystrix is tracking in the metrics stream. Just append /hystrix/ to the URL and you get a data stream that’s fully compatible with Spring Cloud Hystrix.


Here, we see a whole bunch of data that Hystrix is tracking. It’s watching request count, error rate, and lots more. What if you want to change the behavior of Hystrix? Amazingly, the .NET version of Hystrix in Steeltoe has the same broad configuration surface that classic Hystrix does. By adding overrides to the appsettings.json file, you can tweak the behavior of commands, the thread pool, and more. In order to see the circuit actually open, I stretched the evaluation window (from 10 to 20 seconds), and reduced the request volume threshold (from 20 to 3). Here’s what that looked like:

"hystrix": {
  "command": {
    "default": {
      "circuitBreaker": {
        "requestVolumeThreshold": 3
      },
      "metrics": {
        "rollingStats": {
          "timeInMilliseconds": 20000
        }
      }
    }
  }
}
Restarting my service shows the new thresholds in the Hystrix stream. Super easy, and very powerful.


BONUS: Using the Hystrix Dashboard

Look, I like reading gobs of JSON in the browser as much as the next person with too much free time. However, normal people like dense visualizations that help them make decisions quickly. Fortunately, Hystrix comes with an extremely data-rich dashboard that makes it simple to see what’s going on.

This is still a Java component, so I spun up a new Spring Boot project and added a Hystrix Dashboard dependency to it. After adding a single annotation to my class, I spun up the project. The Hystrix dashboard asks for a metrics endpoint. Hey, I have one of those! After plugging in my stream URL, I can immediately see tons of info.


As a service owner or operator, this is a goldmine. I see request volumes, circuit status, failure counts, number of hosts, latency, and much more. If you’ve got a couple services, or a couple hundred, visualizations like this are a life saver.


As someone who started out their career as a .NET developer, I’m tickled to see things like this surface. Steeltoe adds serious juice to your .NET apps and the addition of things like circuit breakers makes it a must-have. Circuit breakers are a proven way to deliver more resilient service environments, so download my sample apps and give this a spin right now!


Categories: .NET, Cloud, Microservices, Pivotal, Spring

Microsoft Integration (Azure and much more) Stencils Pack v2.6 for Visio 2016/2013: Azure Event Grid, BizMan, IoT and much more


I decided to update my Microsoft Integration (Azure and much more) Stencils Pack with a set of 24 new shapes (maybe the smallest update I ever did to this package) mainly to add the Azure Event Grid shapes.

One of the main reasons for me to initially create the package was to have a nice set of Integration (Messaging) shapes that I could use in my diagrams, and during the time it scaled to a lot of other things.

With these new additions, this package now contains an astounding total of ~1311 shapes (symbols/icons) that will help you visually represent Integration architectures (On-premise, Cloud or Hybrid scenarios) and Cloud solutions diagrams in Visio 2016/2013. It will provide symbols/icons to visually represent features, systems, processes, and architectures that use BizTalk Server, API Management, Logic Apps, Microsoft Azure and related technologies.

  • BizTalk Server
  • Microsoft Azure
    • Azure App Service (API Apps, Web Apps, Mobile Apps and Logic Apps)
    • API Management
    • Event Hubs & Event Grid
    • Service Bus
    • Azure IoT and Docker
    • SQL Server, DocumentDB, CosmosDB, MySQL, …
    • Machine Learning, Stream Analytics, Data Factory, Data Pipelines
    • and so on
  • Microsoft Flow
  • PowerApps
  • Power BI
  • Office365, SharePoint
  • DevOps: PowerShell, Containers
  • And much more…

The Microsoft Integration (Azure and much more) Stencils Pack v2.6 is composed of 13 files:

  • Microsoft Integration Stencils v2.6
  • MIS Apps and Systems Logo Stencils v2.6
  • MIS Azure Portal, Services and VSTS Stencils v2.6
  • MIS Azure SDK and Tools Stencils v2.6
  • MIS Azure Services Stencils v2.6
  • MIS Deprecated Stencils v2.6
  • MIS Developer v2.6
  • MIS Devices Stencils v2.6
  • MIS IoT Devices Stencils v2.6
  • MIS Power BI v2.6
  • MIS Servers and Hardware Stencils v2.6
  • MIS Support Stencils v2.6
  • MIS Users and Roles Stencils v2.6

These are some of the new shapes you can find in this new version:

Microsoft Integration (Azure and much more) Stencils Pack v2.6 for Visio 2016/2013

  • Azure Event Grid
  • Azure Event Subscriptions
  • Azure Event Topics
  • BizMan
  • Integration Developer
  • OpenAPI
  • Load Testing
  • API Testing
  • Performance Testing
  • Bot Services
  • Azure Advisor
  • Azure Monitoring
  • Azure IoT Hub Device Provisioning Service
  • Azure Time Series Insights
  • And much more

You can download Microsoft Integration (Azure and much more) Stencils Pack from:
Microsoft Integration Stencils Pack for Visio 2016/2013 (11,4 MB)
Microsoft | TechNet Gallery

Author: Sandro Pereira

Sandro Pereira lives in Portugal and works as a consultant at DevScope. In the past years, he has been working on implementing Integration scenarios both on-premises and cloud for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO etc. He is a regular blogger, international speaker, and technical reviewer of several BizTalk books all focused on Integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices”. He has been awarded MVP since 2011 for his contributions to the integration community.

Can we create Custom widgets with cross domain URL?


In my previous blog, I spoke about one of the issues we encountered during our support. In this blog, I will specifically be talking about custom widgets in BizTalk360, which a lot of our customers use to display data that is important to them. Users can create custom widgets and associate them with a dashboard. Custom widgets allow integration with BizTalk360’s own APIs, as well as third-party APIs.

The below code in a custom widget shows analytical data in graphical form. In recent times, we received a support case where a customer was trying to create a custom widget by referencing that blog. The following should be the result of the custom widget in the dashboard.


But the customer ended up with the below error after integrating the custom widget.


A script for the custom widget:

//URL used to get JSON Data for the Charts
this.URL = '';

//Refresh interval in milliseconds. 60000 milliseconds = 60 seconds
this.REFRESH_INTERVAL = 60000;

//flag to check if widget should auto refresh
this.AUTO_REFRESH_ENABLED = true;

//Highcharts Options
this.HIGHCHART_OPTIONS = {
    chart: {
        type: 'line',
        zoomType: 'x'
    },
    exporting: {
        enabled: false
    },
    credits: {
        enabled: false,
        title: '',
        style: {
            display: 'none'
        }
    },
    title: {
        text: 'Sales of 2016'
    },
    xAxis: {
        type: 'datetime'
    },
    yAxis: {
        title: {
            text: 'Units'
        }
    },
    tooltip: {
        crosshairs: true,
        shared: true,
        valueSuffix: 'Units'
    },
    legend: {
        enabled: false
    },
    series: [{
        name: 'Sales',
        data: []
    }]
};

this.widgetDetails = ko.observable();
this.error = ko.observable(null);
var _this = this;
var getdata = function () {
    $.getJSON(_this.URL, function (data) {
        _this.HIGHCHART_OPTIONS.series[0].data = data;
        _this.widgetDetails(_this.HIGHCHART_OPTIONS);
    }).fail(function (jqXHR) {
        _this.error(jqXHR);
    });
};

//loading data for the first time
getdata();

//handles auto refresh
if (this.AUTO_REFRESH_ENABLED) {
    setInterval(getdata, this.REFRESH_INTERVAL);
}

<!-- ko if: error() == null -->
<div data-bind="highCharts: widgetDetails()" style="height:380px; width:700px"></div>
<!-- /ko -->
<!-- ko if: error() != null -->
<div class="row">
  <div class="col-md-offset-4 col-md-4 bg-danger">
    <p>Error occurred while trying to fetch data</p>
    <span>Status : </span><span data-bind="text:error().status"></span><br>
    <span>Status Text : </span><span data-bind="text:error().statusText"></span>
  </div>
</div>
<!-- /ko -->

Script Explained

This code consists of 4 configuration variables.


The URL variable lets you configure the API URL from which you fetch the JSON data. Depending on the Highcharts options, the expected format of the data may also change. In the example above, the API returns the data as a JSON array, with the date-time stamp as a Unix timestamp.
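To illustrate that format, here is a hypothetical sample of the payload such an API might return: a JSON array of [Unix timestamp in milliseconds, value] pairs, which a Highcharts datetime x-axis can consume directly as series data (the numbers are made up for this example):

```javascript
// Hypothetical payload the widget's URL might return for the 'Sales of 2016'
// chart: each point is [unixTimestampInMilliseconds, unitsSold].
const sampleSalesData = [
  [1451606400000, 29],  // 2016-01-01T00:00:00Z
  [1451692800000, 41],  // 2016-01-02T00:00:00Z
  [1451779200000, 35]   // 2016-01-03T00:00:00Z
];
```

An array like this dropped into `series[0].data` is all Highcharts needs to render the line.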


The AUTO_REFRESH_ENABLED flag determines whether your widget should be auto-refreshed or not. For instance, if the service call that feeds your chart is very expensive and you don’t want to make it too often, you can disable this flag or set the refresh interval to a higher value.


Refresh interval lets you configure the interval after which the widget data should be refreshed. Note that the interval is in milliseconds. So, if you want it to refresh every one minute then you should set the refresh interval to 60000. Note that for an auto-refresh to work, AUTO_REFRESH_ENABLED flag must be set to true.


BizTalk360 uses Highcharts for all data analytics. We already have the underlying binding handler framework to apply the options, and this makes analytic widget creation a lot easier. To modify the charts, you simply need to update the HIGHCHART_OPTIONS. In this example, the “data” property inside the series array (where the data is supposed to be) is intentionally left as an empty array. It will be filled with the data retrieved from the URL you have specified. Highcharts supports a variety of charts, and you can follow this link to get the type of chart that you want to bring into your custom widget.

The line chart that we have created here can be converted into an Area chart or a Column chart by simply changing “type” under the “chart” options in HIGHCHART_OPTIONS.

Investigation of the Issue

Initially, we suspected that the customer might not be able to fetch the JSON data using the URL. When we asked them to browse the URL, they could browse it and view the results. So, the next step was to isolate the case at the customer’s end. Whenever such a situation arises and we need more information about the customer’s environment, we go for a web meeting with a screen sharing session. During that session we started with the basic troubleshooting steps, like checking the configuration, the environment, etc.

At last, we found that the customer was using https://localhost/biztalk360, and the URL he was trying to monitor in the widget was served over plain HTTP.

With HTTPS, the information exchanged between computers (say, in a client and server architecture) is encrypted, which keeps it safe from eavesdroppers.

The encryption is negotiated over Secure Sockets Layer (SSL), nowadays called Transport Layer Security (TLS), which both sides use to send the information back and forth securely.

Resolution Provided

When a plain HTTP URL is used inside a page served over HTTPS, the browser expects the secure handshake from that resource as well. Since the response from the widget URL came back without it, the request was blocked and the widget showed the error message “Access is denied”.

Hence, it is not possible to create custom widgets with a cross-domain URL that mixes schemes. If HTTPS is used, all the related URLs must use HTTPS.
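The rule the browser applies here can be sketched as a tiny helper (illustrative only; real browsers enforce this mixed-content policy internally):

```javascript
// Hypothetical helper mirroring the browser's mixed-content rule: a page
// served over HTTPS may not fetch data from a plain HTTP URL.
function isBlockedAsMixedContent(pageUrl, widgetUrl) {
  return pageUrl.startsWith('https:') &&
         widgetUrl.startsWith('http:') &&
         !widgetUrl.startsWith('https:');
}
```

So a widget URL of `http://...` inside a portal at `https://localhost/biztalk360` is rejected, while an `https://...` widget URL is allowed.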

If you have any questions, contact us. Also, feel free to leave your feedback in our forum.

Author: Sivaramakrishnan Arumugam

Sivaramakrishnan is our Support Engineer with quite a few certifications under his belt. He has been instrumental in handling the customer support area. He believes travelling makes anyone happy.

Validating Json Schema in Azure Logic Apps


Hi All,

In this post, we will show how to validate a JSON schema against the message in Logic Apps. Krishna Pochanapeddi and I are working on building an interface service in Logic Apps. We need to send a request message to the Logic Apps Request connector, and validate that message against a JSON schema. There is no built-in capability in Logic Apps to validate the names of the fields in the JSON message. We could do this easily with an Azure Function, by passing the schema and request message through an Azure Function connector. However, we did not want to use an Azure Function just to validate a schema; we wanted Logic Apps to be the complete solution for these validation issues. It is easy to validate an XML schema using an Integration Account, but a JSON message cannot be validated that way.

After pondering for a few hours and reading JSON schema best practices, we found a basic but powerful capability: the object keyword “required”. Now we can list the required fields (field names) in the schema itself.
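What “required” buys us can be shown with a hand-rolled sketch (the Parse JSON action performs this check for real; the function and names here are purely illustrative):

```javascript
// Illustrative sketch of the "required" keyword check: report any property
// named in schema.required that is absent from the instance.
function checkRequired(schema, instance) {
  const missing = (schema.required || []).filter(name => !(name in instance));
  return { valid: missing.length === 0, missing: missing };
}
```

With `required: ["Identifier"]`, a message carrying a misspelled `"Identifie"` field is reported as invalid because `Identifier` is missing, which is exactly the failure mode we needed to catch.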

We created the JSON schema below:

{
  "$schema": "",
  "definitions": {},
  "id": "",
  "properties": {
    "ChangePasswordRequest": {
      "id": "/properties/ChangePasswordRequest",
      "properties": {
        "CurrentPassword": {
          "default": "currenthashedpassword",
          "description": "An explanation about the purpose of this instance.",
          "id": "/properties/ChangePasswordRequest/properties/CurrentPassword",
          "title": "The currentpassword schema",
          "type": "string"
        },
        "Identifier": {
          "default": "126",
          "description": "An explanation about the purpose of this instance.",
          "id": "/properties/ChangePasswordRequest/properties/Identifier",
          "title": "The identifier schema",
          "type": "string"
        },
        "IdentifierScheme": {
          "default": "test",
          "description": "An explanation about the purpose of this instance.",
          "id": "/properties/ChangePasswordRequest/properties/IdentifierScheme",
          "title": "The identifierscheme schema",
          "type": "string"
        },
        "MessageIdentifier": {
          "default": "f7b351fb-ade4-4361-bfc3-9bb7df783880",
          "description": "An explanation about the purpose of this instance.",
          "id": "/properties/ChangePasswordRequest/properties/MessageIdentifiers",
          "title": "The messageidentifiers schema",
          "type": "string"
        },
        "MessageTimeStamp": {
          "default": "2016-04-04T14:15:02.6476354+10:00",
          "description": "An explanation about the purpose of this instance.",
          "id": "/properties/ChangePasswordRequest/properties/MessageTimeStamp",
          "title": "The messagetimestamp schema",
          "type": "string"
        },
        "NewPassword": {
          "default": "Pass126",
          "description": "An explanation about the purpose of this instance.",
          "id": "/properties/ChangePasswordRequest/properties/NewPassword",
          "title": "The newpassword schema",
          "type": "string"
        },
        "NotificationAddress": {
          "default": "",
          "description": "An explanation about the purpose of this instance.",
          "id": "/properties/ChangePasswordRequest/properties/NotificationAddress",
          "title": "The notificationaddress schema",
          "type": "string"
        },
        "NotificationPreference": {
          "default": "Email",
          "description": "An explanation about the purpose of this instance.",
          "id": "/properties/ChangePasswordRequest/properties/NotificationPreference",
          "title": "The notificationpreference schema",
          "type": "string"
        },
        "OriginatingSystem": {
          "default": "test",
          "description": "An explanation about the purpose of this instance.",
          "id": "/properties/ChangePasswordRequest/properties/OriginatingSystem",
          "title": "The originatingsystem schema",
          "type": "string"
        }
      },
      "required": [
        "CurrentPassword",
        "Identifier",
        "IdentifierScheme",
        "MessageIdentifier",
        "MessageTimeStamp",
        "NewPassword",
        "NotificationAddress",
        "NotificationPreference",
        "OriginatingSystem"
      ],
      "type": "object"
    }
  },
  "required": [
    "ChangePasswordRequest"
  ],
  "type": "object"
}

We added this schema to the Logic App Request connector and the Parse JSON connector.


If we send an invalid message, as per below:

{
  "ChangePasswordRequest": {
    "MessageIdentifier": "f7b351fb-ade4-4361-bfc3-9bb7df783880",
    "OriginatingSystem": "test",
    "MessageTimeStamp": "2016-04-04T14:15:02.6476354+10:00",
    "Identifie": "125", <Invalid field name; valid is Identifier>
    "IdentifierScheme": "test",
    "CurrentPassword": "currenthashedpassword",
    "NewPassword": "YGU0PCPEM",
    "NotificationPreference": "Email",
    "NotificationAddress": ""
  }
}

The above message fails in Parse JSON with the message “Required properties are missing from object…”, as per the below screen:


Now this can be handled in Logic Apps using “Add a Parallel branch->Add an Action” as per below screen.


Click on the ellipses on the actions “Execute stored procedure” and “Response 4”, and click on “Configure run after” to configure the appropriate action.


Yay!!! Well done, Krishna!




Partner Post: Monitoring of BizTalk Server using BizTalk360


Microsoft has a lot of great partners, and one of our missions is to highlight them. If you want to do a partner post on our team blog, reach out to us either over mail or through comments on this post.

This post is written by BizTalk360 to highlight how they take advantage of BizTalk Server to help customers achieve more with our product.

Today, BizTalk360 has become an integral part of operations for anyone using Microsoft BizTalk Server. Since its inception in 2011, the product has matured significantly with continuous improvement, mainly driven by industry and customer feedback. BizTalk360 takes one critical pain point of every Microsoft BizTalk Server customer, BizTalk Server Operations and Monitoring, and solves the problem extremely well.

Today the product comes with some 70+ features to address the BizTalk Server Operations and Monitoring challenges; in this blog let’s take a look at the top 5 reasons why every Microsoft BizTalk Server customer should consider using BizTalk360.

1. Modern Web based Management Experience

The out-of-the-box BizTalk Server Administration tool that comes with BizTalk Server is an MMC-based Windows application. It must be installed on every administrator's or support person's computer.

BizTalk360 is a modern on-premise web-based management tool that provides the user with a modern web experience and, as a result, increases productivity with regard to BizTalk Server administration and management.

With the single installation of BizTalk360, many BizTalk Server environments (Production, UAT, Staging etc.) in the organization can be configured and managed. Administrators and operators can access the web portal from any modern web browser.

BizTalk360 comes with a lot of out of the box dashboards and widgets to make management of BizTalk Server easier.

2. Enterprise grade Security and Auditing

The majority of the tasks performed by using the BizTalk Admin console require elevated Administrator rights and it also lacks the ability to restrict users to specific BizTalk Applications running in the environment.

The other important limitation of BizTalk Admin console is the auditing and governance capability. The tool does not audit any user operational activities. For example: If someone accidentally or purposely stops an important Orchestration or Send Port it will heavily interrupt the message processing. It will not be possible to pinpoint who performed that action.

With BizTalk360, every single user operational activity on the BizTalk environments is stored for governance and audit purposes. Customers can keep the data for as long as they want via a simple setting.

3. Monitoring designed for BizTalk Server

There are many general purpose monitoring tools in the market which claim they cover BizTalk Server monitoring, but when you start using them you’ll start realizing they just scratch the surface with generic things like Memory, CPU, Event Log etc. and won’t go any deeper. Whereas BizTalk360 is designed from the ground up to address the monitoring challenges of Microsoft BizTalk Server. It covers the breadth and width of BizTalk Server Monitoring needs.

There are unique requirements like transaction or data monitoring which are specific to BizTalk Server, for example: did we process 1000 messages (invoices) from SAP today (or every hour)? This is addressed in BizTalk360, but not in a general purpose monitoring tool.

Auto healing is another important aspect, where BizTalk360 will try to rectify the problem itself whenever possible. Example: An FTP Receive Port may go down for various transient reasons like a network outage, temporary unavailability of FTP location and so on. In these circumstances, BizTalk360 will bring the FTP Receive Port back online with automatic healing capability.

4. Productivity with Single Unified Tooling

On a day to day basis, a BizTalk Server Administrator can use anywhere from 5-8 different tools like BizTalk Admin console, SQL Management Studio, BAM Portal, ESB Portal, Windows Event Viewers, Perfmon, BizTalk Health Monitor, SCOM console to name a few. This creates several challenges, lost productivity due to context switching between tools, security concerns (every single tool across environments needs to be secured), and training people for day-to-day support is time-consuming and expensive (you need very skilled resources).

BizTalk360 addresses all these challenges by providing a unified web based management administration tool for BizTalk Server. All the features are built from the ground up within BizTalk360 for example BizTalk360 comes with its own enhanced BAM portal and ESB portal. BizTalk360 also comes with some key productivity features like a centralized Event Viewer, team knowledge base, Secure SQL Query management, Throttling analyzer, Web based Rules Composer etc.

5. Analytics for Environment Transparency

Most organizations treat their BizTalk Environments like a black box. The standard tools like BizTalk Admin console provide very little transparency on the health of the environment. It doesn’t come with any analytical information like charts and graphs to showcase the failure rates, transaction volume, message processing latency, messaging patterns, throttling analysis etc.

BizTalk360 has a dedicated Analytics section to address all the above challenges. It makes it super easy for administrators to view the health of their BizTalk environments, and you can create your own custom dashboards based on your scenarios, like SAP to Dynamics CRM integration, Oracle to IBM MQ integration, etc.


The above 5 points explain clearly why BizTalk Server customers should consider BizTalk360. Microsoft, as a platform company, always focuses on the scalability and reliability of the platform, and depends on partners like BizTalk360 to address the tooling gap. At one end of the spectrum is the platform (Microsoft BizTalk Server) and at the other end are the custom solutions built using BizTalk Server (by customers and consulting companies). BizTalk360 positions itself in the middle and bridges the gap with regard to BizTalk Server Administration and Management.

Today there are over 2500 installations of BizTalk360 in the world; some of the mission critical businesses including Microsoft IT (responsible for the entire retail and supply chain operations) rely on BizTalk360 for their day to day operations and monitoring.

Trial Download: You can try the BizTalk360 trial version for 14 days in your own BizTalk environments and validate the benefits.


10 Differences between Azure Functions and Logic Apps


Developer experience

A popular comparison states that Azure Functions is code being triggered by an event, whereas Logic Apps is a workflow triggered by an event. This is reflected in the developer experience. Azure Functions are written entirely in code, which currently supports JavaScript, C#, F#, Node.js, Python, PHP, Batch, Bash and PowerShell. In Logic Apps, workflows are created with an easy-to-use visual designer, combined with a simple workflow definition language in the code view. Each developer has, of course, his or her personal preference. Logic Apps is much simpler to use, but this can sometimes cause limitations in complex scenarios. Azure Functions gives a lot more flexibility and responsibility to the developer.

Connectivity

Logic Apps connects to an enormous variety of cloud and on-premises applications, ranging from Azure and Microsoft services over SaaS applications and social media to LOB systems. You can find the impressive list of connectors here. Each connector comes with an API connection that stores the required credentials in a secure way. These API connections can be reused from within multiple Logic Apps, which is great! Azure Functions has the concept of triggers and input and output bindings. Most of these bindings connect your Azure Functions to other Azure services, such as Event Hubs, Storage, DocumentDb, etc. Consult the complete list here. The HTTP binding is probably the most popular one, as it allows the creation of serverless APIs. At the moment, there are no signs that Azure Functions aims to support as many bindings as Logic Apps offers connectors.
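As an illustration of the binding model, here is a sketch of a `function.json` that wires a queue trigger to a blob output; the queue name and blob path are made up for the example.

```json
{
  "bindings": [
    {
      "type": "queueTrigger",
      "direction": "in",
      "name": "order",
      "queueName": "incoming-orders",
      "connection": "AzureWebJobsStorage"
    },
    {
      "type": "blob",
      "direction": "out",
      "name": "processedOrder",
      "path": "processed/{rand-guid}.json",
      "connection": "AzureWebJobsStorage"
    }
  ],
  "disabled": false
}
```

The function body then receives the queue message as `order` and writes its result to `processedOrder`, without any hand-written storage plumbing.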

Exception handling

Cloud solutions need to deal with transient faults. Logic Apps provides out-of-the-box functionality that allows you to configure automatic retries on every action. In case this doesn't solve the problem, the workflow gets a failed status and can be resubmitted after human intervention. This guarantees an at-least-once execution model, which is pretty reliable! In Azure Functions, you have the typical try/catch options available. If you want to enable retries, you need to do the plumbing yourself, by introducing, for example, Polly. The way you can handle exceptions in the output binding depends on the language used and the type of output binding. This doesn't always give you the desired outcome. There are no resume/resubmit capabilities, except if you develop them yourself!
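To show the kind of plumbing you take on yourself in Azure Functions, here is a hand-rolled retry helper with exponential backoff, a simplified JavaScript stand-in for what Polly gives you in .NET. The function name and defaults are illustrative.

```javascript
// Retry an async operation up to `retries` extra times, doubling the delay
// after each failed attempt (exponential backoff).
async function withRetry(operation, { retries = 3, delayMs = 100 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await operation();
    } catch (err) {
      lastError = err;
      if (attempt < retries) {
        await new Promise(resolve => setTimeout(resolve, delayMs * 2 ** attempt));
      }
    }
  }
  throw lastError; // all attempts failed: surface the last error
}
```

In a Function body you would wrap the flaky downstream call, for example `await withRetry(() => callBackendService(payload))`; Logic Apps gives you the same behavior per action through configuration alone.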

State

Until recently, Azure Functions always needed to be stateless and preferably idempotent. With the announcement of Azure Durable Functions, Microsoft brings state and long-running capabilities to Azure Functions by leveraging the Durable Task Framework. This new framework allows sequential and parallel execution of several Functions, supports long-running tasks with pre-defined timeouts, and provides stateful actors without the need for external storage. The state is automatically stored in Azure Storage queues, tables and blobs, which is disaster proof. I am looking forward to seeing how this will evolve. These long-running, stateful processes are inherently available in Logic Apps, except for the stateful actor model.
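The orchestration idea behind Durable Functions can be sketched without the framework itself: an orchestrator is a generator that yields activity calls, and a driver (playing the Durable Task Framework's role) feeds each result back in. This is a toy model, not the real durable-functions API; the activity names are invented.

```javascript
// Orchestrator: a generator that yields [activityName, input] pairs and
// receives each activity's result back when it is resumed.
function* processOrder() {
  const stock = yield ['CheckInventory', 'order-42'];
  const receipt = yield ['ChargeCard', stock];
  return 'shipped:' + receipt;
}

// Driver: runs each yielded activity and resumes the orchestrator with its
// result. Real Durable Functions would checkpoint every result to Azure
// Storage before resuming, which is what makes the workflow durable.
function runOrchestration(orchestrator, activities) {
  const it = orchestrator();
  let step = it.next();
  while (!step.done) {
    const [name, input] = step.value;
    step = it.next(activities[name](input));
  }
  return step.value;
}
```

Because the orchestrator only describes the sequence and never blocks, the driver can replay it after a crash, resuming from the last checkpointed result.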

Networking

Hybrid integration is a reality nowadays. Cloud services must be able to connect to on-premises resources in a secure and high-performing way. Azure Logic Apps performs this task via the On-premises Data Gateway, which needs to be installed on premises. Behind the scenes, it uses Azure Service Bus Relay to connect to the cloud in a firewall-friendly way, through encrypted channels. When using Azure Functions within an App Service Plan, you have more convenient hybrid connectivity options that reside at the network level. App Service Plans offer support for many networking options, like Hybrid Connections, VNET Integration and App Service Environment. Via these options, you can integrate Azure Functions with your local network through a Site-to-Site VPN or ExpressRoute.

Deployment

Azure Resource Manager templates are the way to deploy resources across the Microsoft Azure platform. Fortunately, both Azure Functions and Logic Apps have built-in support for ARM deployments, through, for example, Visual Studio Release Management. Next to this, Azure Functions allows easy setup of continuous deployment triggered from sources like BitBucket, Dropbox, Git, GitHub, OneDrive and VSTS. This is ideal in case multiple and frequent contributions need to be consolidated and tested. Additionally, Azure Functions now has deployment slots in preview. This allows deploying and testing a vNext first, before you swap that tested deployment slot with the current version in production.

Runtime

Logic Apps runs only in the cloud, as it has a dependency on Microsoft-managed connectors. As a consequence, you cannot debug, test or run Logic Apps locally. Azure Functions can easily be developed and debugged on your local workstation, which is a big plus for developer productivity. Via the Azure Functions Runtime (still in preview) you are able to deploy them on premises in Windows Containers, with SQL Server as a storage layer. Azure Functions is also supported on Azure Stack, and it has been announced as part of Azure IoT Edge to execute on small devices. This hosting flexibility is a big asset in phased migration scenarios towards the cloud.

Monitoring

Per Logic App, you have a nice overview of the previous runs and their corresponding outcomes. You can filter this history based on a time period and the resulting run status. The monitoring view of a workflow run is the same as the designer view, which makes it very intuitive. For each action, you can see the status and all inputs/outputs. With one button click, you can enable integration with OMS, where you can search on tracked properties. It's on the roadmap to have a user-friendly, cross-Logic Apps dashboard on top of this OMS integration. Each Azure Function comes with a Monitor tab, where you can see the execution history. There is also a live event stream that shows near real-time processing statistics in nice graphs. On top of that, there's full integration with Application Insights, where you can take advantage of the powerful Analytics queries.

Pricing Model

Logic Apps has a pure pay-per-usage billing model. You pay for each action that gets executed. It's important to be aware that you also need to pay for polling triggers, which can be a hidden cost. If you want to benefit from the capabilities of the Integration Account, you should be aware that this comes with a fixed monthly bill. With Azure Functions, you have two pricing options. You can opt for the fixed cost of an App Service Plan, where you reserve compute power on which you can run Azure Functions, but also Web, Mobile and API Apps. The second option is completely serverless, with a consumption plan billed on resource consumption (memory per second) and the number of executions. Don't forget that the Azure Storage layer also comes with a rather small cost.

Security

Each particular binding or connector comes with its own security. In this section, I focus on the security of Logic Apps and Azure Functions exposed as an API. In order to access a Logic App with the HTTP trigger, the client must include a Shared Access Signature in the URL. The signature is generated via a secret key that can be regenerated at any time. There is also the ability to restrict access based on incoming IP addresses. To add more authorization logic, you can put Azure API Management in front of it. Azure Functions has a similar concept of API keys. The API key can be shared for the whole Function App (host key) or you can create a specific one for your Function. If you run your Azure Function in an App Service Plan, you can leverage its codeless authentication functionality with Active Directory, Google, Facebook, etc. Real authorization requires a small code change. Azure Functions Proxies can be a lightweight alternative to full-blown API Management, to add security on top of your HTTP-triggered Functions.
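A sketch of the check the Functions host performs: the caller supplies the key either in the `code` query parameter or in the `x-functions-key` header. The key values below are obviously fake.

```javascript
// Returns true when the request carries a known function or host key,
// taken from the 'code' query parameter or the 'x-functions-key' header.
function isAuthorized(req, validKeys) {
  const supplied =
    (req.query && req.query.code) ||
    (req.headers && req.headers['x-functions-key']);
  return validKeys.includes(supplied);
}
```

The host does this for you; the sketch only illustrates why leaking a key is equivalent to leaking access, and why being able to regenerate keys matters.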

Conclusion

Based on the comparison above, you'll notice that a lot of factors are involved when deciding between the two technologies. First, it's important to see which technology supports the connectivity that you require. Do you want to write it yourself, or do you want to leverage out-of-the-box bindings and connectors? Next to that, my general guidance is as follows:

When dealing with synchronous request/response calls that execute more complex logic, Azure Functions is the preferred option. Logic Apps is better suited for asynchronous integration and fire-and-forget messaging that requires reliable processing. Logic Apps can be perfectly extended with Azure Functions to execute stateless tasks that cannot be fulfilled by the out-of-the-box Logic Apps capabilities.

Web APIs are often composed of both sync and async operations. If you follow the guidance stated above, you might end up with an API that uses both Azure Functions and Logic Apps. This is where Azure Functions Proxies has its value, as it can expose these separate microservices as a unified API. This will be discussed in another blog post.

Stay tuned for more!

Test Infected – Functional Ignorance


In this series of Test Infected, I will show you how we can increase the test ignorance of our tests by applying functional approaches to our imperative code.
If you don't quite understand what I mean by "ignorance", I recommend my previous post about the topic. In this post, we will go on a journey of increasing the code's intent by increasing its ignorance in a functional way.

Functional Ignorance


The fixture phase of your test can become very large, as several previous posts have already shown.
How can functional programming help?
Well, let's assume you want to set up an object with some properties. You would:

  • Declare a new variable
  • Initialize the variable with a newly created instance of the type of the variable
  • Assign the needed properties to setup the fixture

Note that in our test we are most interested in the last item; so how can we make sure that this last part is the most visible?

The following example shows what I mean:
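The original post's C# listing is not reproduced here; the following JavaScript sketch shows the same shape of fixture. The message properties and the fake context are invented for illustration.

```javascript
// A fake datastore context standing in for the real one.
const context = { messages: [], insert(m) { this.messages.push(m); } };

// Verbose fixture: only `subject` matters to the test, but it is buried
// in the object initializer among properties that merely have to be there.
function arrangeMessage() {
  const message = {
    id: 'msg-1',                  // irrelevant to this test
    from: 'sender@example.com',   // irrelevant
    to: 'receiver@example.com',   // irrelevant
    subject: 'Hello world'        // the only value we actually care about
  };
  context.insert(message);
  return message;
}
```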

We would like to test something with the subject property of the message, but note that this is not the first thing that catches your eye (especially if we use the object-initializer syntax). We must also initialize something in a context.

We could, of course, extract the creation functionality with a Parameterized Creation Method and extract the insertion functionality that accepts a message instance.

But note that we do not use the message elsewhere in the test. We could extract the whole functionality and just accept the subject name, but then we would need an explicit method name to make clear that we insert a message in the context AND assign the given subject name to that inserted message. What if we want to test something else? Another explicit method?

What I sometimes do is extract only the assigning functionality like this:
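Again, the original C# listing is not reproduced; here is a JavaScript sketch of that extraction. The default property values are illustrative.

```javascript
// All the plumbing to create and insert an otherwise-ignored message lives
// here; the test passes in only the assignment it actually cares about.
function insertMessage(context, customize) {
  const message = {
    id: 'msg-1',
    from: 'ignored@example.com',
    to: 'ignored@example.com',
    subject: 'ignored'
  };
  customize(message); // the test states its intent in code, not in a method name
  context.insert(message);
  return message;
}
```

A test then reads `insertMessage(context, m => { m.subject = 'Hello world'; })`: the interesting assignment is the only visible detail.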

We don't use the name of the method to state our intentions; we use our code.

In the extracted method, we can do whatever is necessary to create an ignored message. If we do need another way to create a message initially, we can always create a new method that only inserts the incoming message and call it from our functional method.

It would be nice if we had immutable values and could use something like F#'s "Copy-and-Update Record Expressions".


Several times, when you want to test code branches from an external SUT endpoint, the creation of the SUT doesn't change, but rather the info you send to the endpoint. Since the value does not change across several tests, that value is not what is important to the test case; the changing values are.

When you come across such a scenario, you can use the approach I describe here.

The idea is to split the exercise logic from the SUT creation. If you have different endpoints you want to test for the same SUT fixture, you can even extend this approach by letting the client code decide what endpoint to call.

The following example shows two test cases where the SUT creation is the same:

Note that we have the same pattern: (1) create the SUT, (2) exercise the SUT. Compare this with the following code, where the SUT is exercised differently.

We ignore the unnecessary info by thinking functionally:

We can extend this idea by letting the client choose the return value. This is rather useful if we want to test the SUT with the same Fixture but with different member calls:
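A JavaScript sketch of both ideas (the calculator SUT is invented; the original post's C# samples are not reproduced):

```javascript
// The SUT fixture is fixed and uninteresting; each test passes in only the
// exercise step, and may choose what value to return for verification.
function withSut(exercise) {
  const sut = createCalculator(); // same creation for every test
  return exercise(sut);           // the client decides which member to call
}

function createCalculator() {
  return { add: (a, b) => a + b, mul: (a, b) => a * b };
}
```

Two tests with the same fixture then differ only in the function they pass: `withSut(c => c.add(2, 3))` versus `withSut(c => c.mul(2, 3))`.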

I use this approach in almost every class test I write. The idea is simple: encapsulate what varies. Only here we think in functions rather than objects. Functions can be treated as objects!


The last topic I will discuss in a functional approach is the Result Verification phase of the Four-Phase Test.

When I apply techniques in this phase, I always come back to the same principle; I ask myself the same question: "What is really important? What interests me the most?"

In the Result Verification phase, that is the assertion itself. WHAT do you assert in the test to make it a Self-Evaluating Test? What makes the test succeed or fail?
That's what's important; all the other clutter should be removed.

A good example (I think) is when I needed to write some assertion code to spy on a datastore. When the SUT was exercised, I needed to check whether there was any change in the database and whether it corresponded with my expectations.
Of course, I needed some logic to call the datastore, retrieve the entities, assert on the entities, and tear down some datastore-related items. But the test only cares whether the update happened or not.

As you can see, the assertion itself is baked into the called method, and we must rename the method to a more declarative name in order for the test reader to know what we're asserting.

Now, as you can see in the next example, I extracted the assertion, so the test itself can state what the assertion should be.
Also note that once I extract this part, I can reuse this Higher-Order Function in any test that needs to verify the datastore, which is exactly what I did:
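A sketch of that Higher-Order Function in JavaScript (the datastore interface is invented; the original C# listing is not reproduced):

```javascript
// The helper owns the clutter (fetching entities, tearing down); the test
// supplies only the assertion, so the test body states WHAT must hold.
function assertDatastore(datastore, assertion) {
  const entities = datastore.fetchAll(); // state after exercising the SUT
  try {
    assertion(entities);
  } finally {
    datastore.teardown(); // clean-up happens whether the assertion passes or not
  }
}
```

Every datastore-verifying test can now reuse this helper and pass in only the check it cares about.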


Test Ignorance can be interpreted in many ways; this post explored some basic concepts of how functional programming can help us write more declarative tests. By extracting not only hard-coded values but also hard-coded functions, we can build complex behavior by composing smaller functions.

Functional programming hasn't gone fully mainstream (yet), but by introducing functional concepts into imperative languages (lambda functions, pattern matching, inline functions, pipelines, higher-order functions, and so on), we can maybe convince the imperative programmer to at least try the functional way of thinking.

Microsoft Integration Weekly Update: Sep 18, 2017

Microsoft Integration Weekly Update: Sep 18, 2017

Do you find it difficult to keep up with all the frequent updates and announcements in the Microsoft Integration platform?

The Integration weekly update can be your solution. It's a weekly update on topics related to Integration: enterprise integration, robust and scalable messaging capabilities, and citizen integration capabilities empowered by the Microsoft platform to deliver value to the business.

If you want to receive these updates weekly, then don’t forget to Subscribe!

On-Premise Integration:

Cloud and Hybrid Integration:


Hope this is helpful. Please feel free to let me know your feedback on the Integration weekly series.


BizTalk Server Administration Console: Concurrency Violation encountered while Updating the ReceivePort. The local, cached version of the BizTalk Server group configuration is out of date.

BizTalk Server Administration Console: Concurrency Violation encountered while Updating the ReceivePort. The local, cached version of the BizTalk Server group configuration is out of date.

The funniest part of having an intern working with and learning from you, especially for a person like me who really loves to write about "Errors and Warnings, Causes and Solutions", is that they ask the funniest and most unexpected questions. They find things that we, being so accustomed to doing the job and avoiding them, no longer even realize exist.

So yesterday, after a solution deployment, my intern was complaining that he was always receiving the following error while creating a receive port:

TITLE: BizTalk Server Administration
Concurrency Violation encountered while Updating the ReceivePort ‘rprtReceive’. The local, cached version of the BizTalk Server group configuration is out of date. You must refresh the BizTalk Server group configuration before making further changes. (Microsoft.BizTalk.ExplorerOM)

For help, click:

Concurrency Violation encountered while Updating the ReceivePort


While opening and navigating the BizTalk Administration Console, you will notice that the first request to expand the navigation tree or open a BizTalk Server application or host instance takes longer than subsequent interactions. This is because, on the first request, the Administration Console queries your BizTalk databases for the information and then caches the result.

So, unless you force a refresh, it will use that cached information. Nevertheless, it has some kind of mechanism (like a hash) to verify whether the cache is in sync with the current BizTalk Server configuration in the database when you try to perform an operation such as creating or changing a receive location.

If you deploy a new version or a new artifact to a BizTalk application, you need to refresh that application before performing any operations; otherwise, you will receive this or similar errors.

The reason this error happened was that my intern already had the BizTalk Administration Console running, with the application in question already selected, before he deployed a new version of the solution from Visual Studio. After he deployed, the console's cached view of the group configuration was out of date.


This is, in fact, a simple problem to solve:

  • Concurrency Violation encountered while Updating the ReceivePort – Option 1:
    • Close the BizTalk Administration Console and open again (stupid and simple)
  • Concurrency Violation encountered while Updating the ReceivePort – Option 2:
    • Right-click on the BizTalk Application you want to refresh, and then select the “Refresh” option

Concurrency Violation encountered while Updating the ReceivePort: BizTalk Application Refreshed

  • Concurrency Violation encountered while Updating the ReceivePort – Option 3:
    • Select the resource inside your BizTalk Application you want to refresh, for example, “Receive Locations” and press F5

There are several alternatives. The important thing is to refresh the BizTalk Administration Console if something has changed in your BizTalk Server environment.

Author: Sandro Pereira

Sandro Pereira lives in Portugal and works as a consultant at DevScope. In the past years, he has been working on implementing Integration scenarios both on-premises and cloud for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO etc. He is a regular blogger, international speaker, and technical reviewer of several BizTalk books all focused on Integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices”. He has been awarded MVP since 2011 for his contributions to the integration community.

BizTalk Server is finally engaging the open source community – What we can do now.

BizTalk Server is finally engaging the open source community – What we can do now.

Three days ago, Microsoft started sharing the BizTalk Server stack source code on GitHub.

This is a historic decision for me; more than two years ago I started fighting to make this happen, and I am very happy with Microsoft's decision.

I strongly believe in the open-source approach from big companies, for many reasons, but let's have a look at what we can really do now.

How to start?

The source code is under a Contributor License Agreement (CLA); to start contributing, you need to sign the license agreement here.
You sign the document using DocuSign, and then you will be able to access the repo.

The Git repo is on GitHub at this address.

What we can do?

Microsoft started by sharing some main areas; let's have a look at what we can really do for each of them.


This is a very critical area; we know about many issues around the adapters. One example is the famous zero-byte file issue in the BizTalk File adapter.
We can extend the adapters with new capabilities, and these will be immediately available to the product.

We can fix all known issues; the difference now is that we have a development collaboration portal to work together.


We can extend pipeline capabilities and add new components; it would be very interesting to create a repo with an extensive catalogue of components.


We can add documentation to many important schemas, and we can extend the EDI stack with many new EDI formats.


Many people know how much I care about tools and productivity; this area is critical.

Years ago I developed a tool named BizTalk NOS Addin, which was able to improve BizTalk development productivity by about 1000%.
We can add a lot of new tools here.

We are full of tools that we normally use every day; we just need to put them in the repo and work together on the refactoring.

I'd like to see the development tooling area (Visual Studio); I hope it will be included in the Tools area.


How many times do we reuse the same code and templates for a new project? This is the area to put our amazing stuff.

What I’d like to see more

I have been working with BizTalk since the 2000 version, and I see three main areas in BizTalk where we can work and give an amazing contribution to the product.

  1. Development

    1. Think about the possibility of extending the Visual Studio development capabilities: searching, test automation, fixes for many well-known issues, and improvements to the
      Visual Studio UI, integrating new capabilities into the designer.
      We can also improve migration: extend the migration capabilities into Azure, extend orchestration capabilities, extend the pipeline development and testing capabilities, integrate PowerShell in the UI, and more.
      This area is the most critical in order to provide the maximum value to the product.
  2. Admin Console

    1. We can do so much here: improving the console for monitoring, management and searching, and integrating PowerShell in the UI; we are full of amazing PowerShell scripts able to provide many interesting features to the admin UI.
      We can also add alerting, advisory, auditing, integration with other stacks like Visual Studio, and more.
  3. Migration

I am sure we will be able to do a lot for the product now.