Microsoft Integration (Azure and much more) Stencils Pack v2.6.1 for Visio 2016/2013: the new Azure logo

This is probably the quickest and smallest update I have made to my Microsoft Integration (Azure and much more) Stencils Pack: only one new stencil. I added it because of its importance, since Azure is definitely one of Microsoft’s fastest-growing businesses these days, and because of the Ignite context: the new Azure logo.

Microsoft Integration (Azure and much more) Stencils Pack: new Azure Logo

This is probably the first Visio pack containing this shape.

The Microsoft Integration (Azure and much more) Stencils Pack v2.6.1 consists of 13 files:

  • Microsoft Integration Stencils v2.6.1
  • MIS Apps and Systems Logo Stencils v2.6.1
  • MIS Azure Portal, Services and VSTS Stencils v2.6.1
  • MIS Azure SDK and Tools Stencils v2.6.1
  • MIS Azure Services Stencils v2.6.1
  • MIS Deprecated Stencils v2.6.1
  • MIS Developer v2.6.1
  • MIS Devices Stencils v2.6.1
  • MIS IoT Devices Stencils v2.6.1
  • MIS Power BI v2.6.1
  • MIS Servers and Hardware Stencils v2.6.1
  • MIS Support Stencils v2.6.1
  • MIS Users and Roles Stencils v2.6.1

These stencils will help you visually represent Integration architectures (on-premises, cloud or hybrid scenarios) and cloud solution diagrams in Visio 2016/2013. They provide symbols/icons to visually represent features, systems, processes and architectures that use BizTalk Server, API Management, Logic Apps, Microsoft Azure and related technologies:

  • BizTalk Server
  • Microsoft Azure
    • Azure App Service (API Apps, Web Apps, Mobile Apps and Logic Apps)
    • API Management
    • Event Hubs
    • Service Bus
    • Azure IoT and Docker
    • SQL Server, DocumentDB, CosmosDB, MySQL, …
    • Machine Learning, Stream Analytics, Data Factory, Data Pipelines
    • and so on
  • Microsoft Flow
  • PowerApps
  • Power BI
  • Office365, SharePoint
  • DevOps: PowerShell, Containers
  • And many more…

You can download Microsoft Integration (Azure and much more) Stencils Pack from:
Microsoft Integration Stencils Pack for Visio 2016/2013 (11.4 MB)
Microsoft | TechNet Gallery

Author: Sandro Pereira

Sandro Pereira lives in Portugal and works as a consultant at DevScope. In the past years, he has been working on implementing Integration scenarios both on-premises and cloud for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO etc. He is a regular blogger, international speaker, and technical reviewer of several BizTalk books all focused on Integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices”. He has been awarded MVP since 2011 for his contributions to the integration community.

Permissions required to set up Monitoring SQL Jobs

BizTalk360 comes with a lot of exciting features in every release. One of the important capabilities in BizTalk360 is monitoring with autocorrect options. BizTalk360 is a one-stop monitoring solution for BizTalk Server: you can monitor not only the BizTalk artefacts but also the SQL jobs present in the SQL Server, and you can set the autocorrect (enable/disable) functionality for those jobs.

The BizTalk databases and the BizTalk360 database may sit on separate servers, or a single server may host all of them; the jobs on all these servers can be monitored via BizTalk360. But can every user monitor and autocorrect the SQL jobs? In this blog, I explain the permissions users need for monitoring SQL jobs and setting the autocorrect functionality, which we learnt from one of our support tickets.

Customer’s case:

Our support team often gets interesting tickets which do not directly deal with the functionality and features of BizTalk360; some relate to performance, access permissions, AD users and so on. Each ticket is a new learning experience for our support engineers. Let’s look at one such case, related to access permissions for the databases on the SQL Server.

The customer got the below exception when they tried to set up monitoring for SQL jobs.

Permissions required to setup monitoring SQL jobs

sp_help_job is the stored procedure used to list the SQL jobs running on the server. It returns information about the jobs that SQL Server Agent uses to perform automated activities in SQL Server. Several SQL jobs are installed and scheduled automatically to maintain the health of the BizTalk environment.

BizTalk360 lets you set thresholds for SQL jobs (Monitoring -> Manage Mapping -> SQL Server Instances -> SQL Jobs); to list those SQL jobs and perform the automatic operations, this “sp_help_job” stored procedure is used.

The exception in the above screenshot occurs because the BizTalk360 service account is missing a permission when accessing the SQL Server. We have a support article which describes the permissions required for the SQL jobs, and the customer had granted the permissions according to that article. But they faced the error again when trying to enable the autocorrect feature for SQL jobs.

Permissions required to setup monitoring SQL jobs

This error message says “Only members of sysadmin role are allowed to update or delete jobs owned by a different login”.

This means the service account can enable/disable the SQL jobs from BizTalk360 only if it has the SYSADMIN role. But some customers prefer not to grant the SYSADMIN permission to the service account because of their security policies. So, what happens in such a case? Let’s go ahead and check the resolution given. Before that, let’s have a quick glance at SQL jobs and permissions.

The SQL jobs:

A job is a series of operations performed sequentially by SQL Server Agent. A job can run on one local server or on multiple remote servers. Jobs are used to define administrative tasks that can be run one or more times and monitored for success or failure. SQL Server Agent runs these scheduled jobs. A job can be edited only by its owner or by members of the sysadmin role.

The SQL job permissions:

SQL Server has the following msdb fixed database roles through which SQL Server Agent can be accessed and controlled. The roles, from least to most privileged, are:

  • SQLAgentUserRole
  • SQLAgentReaderRole
  • SQLAgentOperatorRole

Let’s have a brief look at each one of them.

SQLAgentUserRole:

This is the least privileged role. It has permissions on only operators, local jobs, and job schedules. Members of SQLAgentUserRole have permissions on only local jobs and job schedules that they own. They cannot use multi-server jobs (master and target server jobs), and they cannot change job ownership to gain access to jobs that they do not already own.

SQLAgentReaderRole:

This role includes all the SQLAgentUserRole permissions as well as permissions to view the list of available multi-server jobs, their properties, and their history. Members of this role can also view the list of all available jobs and job schedules and their properties, not just those jobs and job schedules that they own. SQLAgentReaderRole members cannot change job ownership to gain access to jobs that they do not already own.

SQLAgentOperatorRole:

This is the most privileged role which includes all the permissions of the above-mentioned roles. They have additional permissions on local jobs and schedules. They can execute, stop, or start all local jobs, and they can delete the job history for any local job on the server. They can also enable or disable all local jobs and schedules on the server. SQLAgentOperatorRole members cannot change job ownership to gain access to jobs that they do not already own.

The following summarizes these properties for all three roles, for creating/modifying/deleting and enabling/disabling local jobs, multiserver jobs and job schedules:

  • SQLAgentUserRole – Local jobs: create/modify/delete Yes, enable/disable Yes (owned jobs); Multiserver jobs: No; Job schedules: create/modify/delete Yes, enable/disable Yes (owned schedules)
  • SQLAgentReaderRole – Local jobs: create/modify/delete Yes, enable/disable Yes (owned jobs); Multiserver jobs: No; Job schedules: create/modify/delete Yes, enable/disable Yes (owned schedules)
  • SQLAgentOperatorRole – Local jobs: create/modify/delete Yes, enable/disable Yes; Multiserver jobs: No; Job schedules: create/modify/delete Yes (owned schedules), enable/disable Yes
Beyond these msdb database roles, SYSADMIN is the server-level role with the highest privileges; it has administrator rights on the whole SQL Server instance.

The resolution provided:

As mentioned earlier, the BizTalk360 service account would require the SYSADMIN permission to monitor and autocorrect the SQL jobs. But in some customer scenarios, they prefer not to provide the SYSADMIN permission. In that case, we need to see what minimum level of permission we can provide to the service account for monitoring the SQL jobs.

Our support team did extensive testing to check various scenarios and permissions for the service account. The outcome of the testing is given below:

As the table summarizes, when the BizTalk360 service account is given the SQLAgentUserRole or SQLAgentReaderRole permissions, it can only view the SQL jobs and cannot perform any operations on them. But when the service account is given the SQLAgentOperatorRole, the autocorrect functionality works for the SQL jobs; the SYSADMIN permission is not required for this. SQLAgentOperatorRole is the highest privileged role next to SYSADMIN.

Permissions required to setup monitoring SQL jobs

Conclusion:

Hence, to set the autocorrect functionality (enable/disable) for the SQL jobs, the BizTalk360 service account needs to be given the SQLAgentOperatorRole on the msdb system database if you prefer not to grant the SYSADMIN permission.

PS: BizTalk360 will not perform any operation by itself until monitoring has been configured for one of the available SQL jobs and the auto-correction ability has been enabled. If you don’t wish to monitor the SQL jobs, you can skip the permissions shown in the above image.

If you have any questions, contact us at support@biztalk360.com. Also, feel free to leave your feedback in our forum.

Author: Praveena Jayanarayanan

I am working as Senior Support Engineer at BizTalk360. I always believe in team work leading to success because “We all cannot do everything or solve every issue. ‘It’s impossible’. However, if we each simply do our part, make our own contribution, regardless of how small we may think it is…. together it adds up and great things get accomplished.”

Azure Logic Apps Monthly Update – September 2017

This episode of Azure Logic Apps Monthly Update comes to us directly from #MSIgnite. It is one of those episodes with a special guest: this time Sarah Fender from the Azure Security Center team. The Pro Integration team is at #MSIgnite, happening September 25-29, 2017 in Orlando, FL. I’ll try to give you a very crisp recap of the proceedings and the important announcements from the event.

Azure Security Center

Sarah started off talking about the Azure Security Center feature. Security Center provides unified security management and threat protection for Azure workloads, as well as workloads running on-premises and on other cloud platforms. It basically assesses the security of cloud and on-premises workloads and offers out-of-the-box insights. In addition, Security Center offers some built-in security controls such as Just in Time VM access, which helps lock down access to virtual machines, and Adaptive Access Controls, which help lock down machines to prevent any malware execution. Security Center also monitors the hybrid cloud using advanced concepts like Machine Learning and provides rich graphical data to administrators.

Security Center keeps a look into all the different incidents in the environment such as SQL Injection, security incidents, suspicious processes and so on and provides insights which will be very helpful for IT teams to keep a track of the issues in the environment.

At #MSIgnite, the Azure Security Center team introduced the new experience of Investigation Dashboard. With this feature, organizations can easily respond to the incident and understand the intricate details about the security incident. The investigation path defines the attack path and the graphical view displays the detailed information such as severity of the attack, attack detected by information and so on. The investigation dashboard also lists the entities and now supports the Playbooks that are nothing but Logic Apps being triggered from Security Center when a certain alert is fired.

You can run a Playbook from the Security Center through the integration with Azure Logic Apps. Users can pre-define a Logic App that takes a corrective action when there is an attack; you can then allow the investigation dashboard to automatically execute that particular Logic App (through a Playbook) to carry out the corrective action. For example, when a vulnerability attack with a very high severity is detected, post a message on the Slack channel so that users get notified.

After all these updates from Sarah, it was time for the Logic Apps trio, comprising Jeff Hollan, Kevin Lam and Jon Fancey, to provide the latest updates on Logic Apps. Kevin Lam started off by giving the latest updates:

What’s New in Azure Logic Apps?

  1. Custom Connectors – Enables the option to extend your endpoints and register them as connectors in Logic Apps.
  2. Large Message Support – This functionality is now available in the designer. Using this functionality, you can move large files up to 1 GB between specific connectors (Blob, FTP).
  3. Variables append to array – append capability to aggregate data within loops in the designer. Kevin Lam gave a pro tip here for all users –

    Remember to turn on sequential for for-each to achieve this scenario.

  4. Nested foreach and do-until – is now available in the designer.
  5. Enable high throughput scenarios – You can configure the number of scale units within the code view to enable the high throughput scenarios. Say, you can take one Logic App definition that runs in a scale unit and span it across 16/32/64 scale units to get increased throughput. This is called ludicrous mode (as Kevin had it on the PPT).
  6.  Maximum retries count (Custom Retry Policy) has been increased from 4 to 10.
  7. Now you can export (Publish) Logic Apps to PowerApps and Flow
  8. Emit correlation tracking id from the trigger to OMS – This gives full traceability across the process that’s happening across the Logic App.
  9. Expression intellisense – This is now available in the designer. When you are typing an expression, you will see the same intelligent view that you see when you are typing in Visual studio.
  10. Schedule based batching – In addition to batching based on message count, you can batch messages based on the schedule.

New Connectors

  • Azure Security Center Trigger
  • Log Analytics Data Collector – add information to Log Analytics from your Logic Apps
  • ServiceNow – create tickets, read & write into ServiceNow
  • DateTime Actions
  • Azure Event Grid Publish
  • Adobe Sign – This was a big announcement from Microsoft at #MSIgnite – collaboration with Adobe
  • O365 Groups
  • Skype for Business
  • LinkedIn
  • Apache Impala
  • FlowForma
  • Bizzy

What’s in Progress?

  1. Concurrency Control (live in code view) – Say your Logic App is executing faster than you want it to. In this case, you can make the Logic App slow down by restricting the number of runs executing in parallel. This is possible today in the code view, where you can define that, say, only 10 runs can execute in parallel at a particular time. When 10 runs are executing in parallel, the Logic App will stop polling until one of those 10 runs finishes execution, and only then start polling for data again.
  2. SOAP – Native SOAP support to consume cloud and on-premise SOAP services. This is one of the most requested features on UserVoice.
  3. Expression Tracing –  You can actually get to see the intermediate values for complex expressions
  4. Foreach failure navigation – If there are lots of iterations in the foreach loop and a few of them failed, instead of having to look for which ones actually failed, you can easily navigate to the next failed action inside the for each loop to see what happened.
  5. Functions + Swagger – You can automatically render the Azure functions annotated with Swagger. This functionality will be going live by end of August.
  6. HTTP OAuth with Certificates
  7. Complex Conditions within the designer
  8. Bulk resubmit in OMS
  9. Batch configuration in Integration Account
  10. Connectors
    1. Workday
    2. Marketo
    3. Compute
    4. Containers

Watch the recording of this session here

[embedded content]

Community Events Logic Apps team are a part of

  1. INTEGRATE 2017 USA – October 25 – 27, 2017 at Redmond. Register for the event today. Scott Guthrie, Executive Vice President at Microsoft will be delivering the keynote speech. You can also avail Day Passes for the event (available for Wednesday and Thursday).
  2. ServerlessConf – 2 days of sessions on Serverless with Hackathon during October 2017
  3. Workday Rising – October 9 – 12 at Chicago
  4. CONNECT 2017 on October 9, 2017 at DeFabrique, Utrecht

Feedback

If you are working on Logic Apps and have something interesting, feel free to share them with the Azure Logic Apps team via email or you can tweet to them at @logicappsio. You can also vote for features that you feel are important and that you’d like to see in logic apps here.

The Logic Apps team are currently running a survey to know how the product/features are useful for you as a user. The team would like to understand your experiences with the product. You can take the survey here.

If you ever wanted to get in touch with the Azure Logic Apps team, here’s how you do it!
Reach Out Azure Logic Apps Team

Previous Updates

In case you missed the earlier updates from the Logic Apps team, take a look at our recap blogs here –

Author: Sriram Hariharan

Sriram Hariharan is the Senior Technical and Content Writer at BizTalk360. He has over 9 years of experience working as documentation specialist for different products and domains. Writing is his passion and he believes in the following quote – “As wings are for an aircraft, a technical document is for a product — be it a product document, user guide, or release notes”.

Stef’s Monthly Update – September 2017

September 2017 was my last month at Macaw, and I am about to embark on a new journey at Codit. I am looking forward to it. It will mean more travelling, speaking engagements and other cool things. #Cyanblue is the new blue.

Below is a picture of Tomasso, Eldert, me, Dominic (NoBuG), and Kristian in Oslo (top floor of the Communicate office).

I did a talk about Event Grid at NoBug wearing my Codit shirt for the first time.

Month September

September was a month filled with new challenges. I joined the Middleware Friday team and released two episodes (31 and 33):

I really enjoyed making these types of videos and look forward to creating a few more, as I will be presenting an episode every other week. Kent will continue with episodes focused on Microsoft Cloud offerings such as Microsoft Flow, while my focus will be integration in general.

In September I did a few blog posts on my own blog and BizTalk360 blog:

This month I only read one book, but it was a good one: The Subtle Art of Not Giving a F*ck by Mark Manson.

Music

My favorite albums in September were:

  • Chelsea Wolfe – Hiss Spun
  • Satyricon – Deep Calleth Upon Deep
  • Cradle Of Filth – Cryptoriana: The Seductiveness Of Decay
  • Enter Shikari – The Spark
  • Myrkur – Mareridt
  • Arch Enemy – Will To Power
  • Wolves In The Throne Room – Thrice Woven

Running

In September I continued training and preparing for next month’s half marathons in London and Amsterdam.

October will be filled with speaking engagements ranging from Integration Monday to Integrate US 2017 in Redmond.

Cheers,

Steef-Jan

Author: Steef-Jan Wiggers

Steef-Jan Wiggers is all in on Microsoft Azure, Integration, and Data Science. He has over 15 years’ experience in a wide variety of scenarios such as custom .NET solution development, overseeing large enterprise integrations, building web services, managing projects, designing web services, experimenting with data, SQL Server database administration, and consulting. Steef-Jan loves challenges in the Microsoft playing field, combining them with his domain knowledge in energy, utility, banking, insurance, health care, agriculture, (local) government, bio-sciences, retail, travel and logistics. He is very active in the community as a blogger, TechNet Wiki author, book author, and global public speaker. For these efforts, Microsoft has recognized him as a Microsoft MVP for the past 7 years.

F# Coin Change Kata with Property-Based Testing

Introduction

One way to become an expert in something is to practice, and programming katas are a very good way to keep practicing your programming skills.

In this post, I will solve the Coin Change Kata with F# and use Property-Based Testing (with FsCheck) to drive the design.
For me, this was a lesson in writing properties and not so much in solving the kata. It was a fun exercise, and I thought it would be useful to share my code with you.
If you don’t have any experience with Property-Based Testing or F#, I recommend you look at those topics first.

Coin Change Kata

Description

Ok, there are several different descriptions of this Kata; so, I’ll show you what I want to accomplish first.

“Given an amount and a series of Coin Values, give me the best possible solution that requires the least amount of Coins and the remaining value if there is any”

So, looking at the definition of the Coin Kata, I need two inputs and two outputs:

Signature

The first thing I did before describing my properties was to define the signature of my function. My first mistake was thinking I could use integers all over the place. Something like this:

int list -> int -> int * int list

But we can’t have negative coin values, so I started by making a Coin type. The Kata uses several coin values, so I chose to use the same ones:
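
The type itself isn’t reproduced in this extract; a minimal sketch could look like the following (the exact denominations are my assumption, chosen to match the One, 25 and 50 values mentioned later in the post):

type Coin =
    | One
    | Five
    | Ten
    | TwentyFive
    | Fifty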

Note that, by describing the coin values in this way, I have restricted the input values of the coins. This makes Illegal States Unrepresentable (a phrase from Yaron Minsky).

And so my signature, after type inference, is the following:

Coin list -> int -> int * Coin list

Strictly speaking this is not the right signature, because we can still pass in negative amounts; but for the sake of this post, I will leave fixing that as an exercise for you.

Properties

First Property: Ice Breaker

So, let’s start coding. The first property should be some kind of Ice Breaker property. I came up with the following:

“Change amount with nothing to change gives back the initial amount”

This is the property for when we do not have any coin values, so we just get back the same amount as the remaining value. Note that I use ‘byte’ as the input value so I am sure I have a positive value; the maximum byte value is enough for this demonstration.
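
The property code isn’t included in this extract; a minimal sketch of how it could look, assuming the properties run via FsCheck.Xunit’s [<Property>] attribute and a change function with the signature shown above:

open FsCheck.Xunit

[<Property>]
let ``Change amount with nothing to change gives back the initial amount`` (amount : byte) =
    // byte keeps the generated amount positive; no coin values are passed in
    change [] (int amount) = (int amount, [])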

We can easily implement this:

We can play the Devil’s Advocate and intentionally use a fake implementation, for example:

Which will still pass.

Second Property: Boundaries

The next property I wrote was the other way around: what if I haven’t got any amount to change?

“Change zero amount result in original coin values”

Note that we let FsCheck generate the random list of Coins for us. We don’t care which coins we’re about to use for the change, and that’s why we can let FsCheck generate some for us.

I think this is a good implementation example of how Test Ignorance can be accomplished with Property-Based Testing.

And our implementation:

Now we can’t just fill in the list, so we’re back to the first implementation, which makes the two properties pass.

Third Property: Find the Constants

I’m not quite sure this is a proper example of finding a constant, because you could state this property for other values as well with some effort, and it’s possibly also covered by a later property. Still, this was the next property I wanted to write, because it drives me further towards the actual implementation of the function and it is a constant in this implementation.

“Change is always One if there’s no other coin values”

We can implement this property (and respect the others) with this implementation:

When I have only ‘One’ coins, the change for a random amount is always a list of ‘One’ coins with the same length as the initial amount to be changed. I can of course play the Devil’s advocate and change the remaining amount to 42, for example (because 42 is the answer to life):

And so, we can make our property stricter and also assert on the remaining amount:
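
As a sketch of such a stricter property (assuming the coin values are passed in as just [ One ], and using FsCheck’s label (|@) and combine (.&.) operators):

open FsCheck
open FsCheck.Xunit

[<Property>]
let ``Change is always One if there's no other coin values`` (amount : byte) =
    let remaining, changed = change [ One ] (int amount)
    (changed = List.replicate (int amount) One |@ "change consists of only 'One' coins")
    .&. (remaining = 0 |@ "no remaining amount")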

Because of FsCheck, this is hardly an issue. I added some Labels (from FsCheck) to clearly state in the output WHAT failed in the property. This is a good thing for Defect Localization.

Also note that playing the Devil’s Advocate makes sure that I end up with the right implementation and that my properties state this in the strictest way possible.

Fourth Property: Some Things Never Change

For the fourth property, I thought even further about the result and came up with this property. The constant that I found was that whatever I change into coins, the initial to-be-changed amount should always be the sum of the remaining change and the changed coins.

“Sum of changed coins and remaining amount is always the initial to-be-changed amount”

What this property needs is a non-empty list of coins, because otherwise we would be re-testing the already written property for empty coins. This is also no issue for FsCheck; with Conditional Properties we can easily express this with the List.length coins <> 0 ==> lazy expression.

This makes sure that the rest of the property only gets evaluated, and so verified, if the condition is met.

The rest of the property maps all the coins to their values, sums them, and adds the remaining amount. All of this together should be the same as the initial amount.

This is the first time I need to get the actual value of the coins, so I made a function for this:
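
A sketch of what that function might look like, using the same assumed denominations as before:

let valueOfCoin coin =
    match coin with
    | One -> 1
    | Five -> 5
    | Ten -> 10
    | TwentyFive -> 25
    | Fifty -> 50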

How do we know how many coins of a given value fit in an amount? That’s the division of the amount by that coin value. We have several coin values, so we must also do the division for the other coin values; for that, we need the value remaining after the division, which is the amount modulo that coin value.

We need to do this for all the different coins we have.

Does this pattern sound familiar?

We have an initial value, we need to loop over a list, and do something with a given value that can be passed to the next loop iteration.

In an imperative language, that’s the for-loop we’re stating:

Something like this (sort of).

But, we’re in a functional language now; so, what’s the alternative? Fold!

Here is some implementation using fold:
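
The original snippet isn’t reproduced here; a sketch of an implementation along those lines, with the folder function and the initial state deliberately pulled out as separate bindings (names are illustrative):

let change coins amount =
    // Fold one coin value into the running (remaining, change) state:
    // integer division tells how many of this coin fit, modulo keeps the
    // value that still has to be changed by the next coin values.
    let changeCoin (remaining, changed) coin =
        let count = remaining / valueOfCoin coin
        remaining % valueOfCoin coin, changed @ List.replicate count coin

    let initialState = amount, []

    List.fold changeCoin initialState coins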

One of the things I like to do is to specify the different arguments on their own instead of inlining them (in this case into the List.fold call). I think this increases Readability and shows the Code’s Intent better: that the core and return value is the result of a List.fold operation.

This reminds me of the Formatting Guidelines I described in a previous post: the return value of a method should be placed on its own line to increase readability and highlight “The Plot” of the method.

This is very similar; we want to show what we’re doing as “The Plot” of the function by specifying the argument functions separately.

Also note that we can use the valueOfCoin function that we needed in our Property. People not familiar with TDD and the Test-First mindset sometimes say they don’t like it when a test is the only place where some functionality is used; but if you use TDD, the test is the first client of that functionality!

Fifth Property: Final Observation

We’re almost there; there’s just one last thing we didn’t get right in our implementation. The Kata stated that we must find “the best possible solution” for the amount in change. We now have an implementation that finds “some” solution, but not the “best” solution. Why? Because we simply loop over the coin values in whatever order they are passed in. We need the best solution: the one with the least amount of coins.

How do we get from “some” solution to the “best” solution? Well, we need to check first against the highest coin value and then gradually move down to the lowest coin value.

How do we specify this in a Property? I must admit that it did not come to me very fast, so I think this was a good exercise in Property-Based Testing for me. This was the Property I came up with:

“Non-One Coin value is always part of the change when the amount is that Coin value”

Why do we need a non-One Coin value? Why do we need a non-empty Coin list? Because otherwise we would be testing an already specified property.

That’s why we use the Conditional expression: (nonOneCoin <> One && List.length coins <> 0) ==> lazy.

Now, the other part of the Property. We need to check that, given a random list of coins (with a non-One Coin in it), the non-One Coin is part of the change we get when the amount to be changed equals the value of that non-One Coin.

That seems reasonable. If I want to change the value 50 into coins and I have the Coin value 50, I want that as the return value; that would be the solution with the least amount of coins. I don’t care if I have Coins of 50 and 25, for example; the order of the different Coin values doesn’t matter, just give me the change with the least amount of coins.

Note that we first use the Gen.shuffle function to shuffle the random list of coins together with the non-One Coin. After that, we’re sure that we have a list containing a non-One coin. If I specified this inside the Conditional expression of FsCheck, a lot of test cases would be skipped because the condition wouldn’t be met. By setting the condition on a single Coin value, I get a lot more test cases.

The chance that FsCheck generates a single Coin that isn’t One is much higher than the chance that it generates a list containing a non-One coin, I guess. And not only that; I think the Property expresses its intent better if I state the non-One Coin value like this.

How do we implement this?

We sort the Coins by their Coin value. Note how we again can use the already defined valueOfCoin function.
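
A sketch of that final step, reusing valueOfCoin and the folder from the previous sketch; sorting the coin values from high to low means the biggest coins are used first, which yields the least amount of coins:

let change coins amount =
    let changeCoin (remaining, changed) coin =
        let count = remaining / valueOfCoin coin
        remaining % valueOfCoin coin, changed @ List.replicate count coin

    let sortedCoins = coins |> List.sortByDescending valueOfCoin

    List.fold changeCoin (amount, []) sortedCoins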

Conclusion

As I said before: this wasn’t exactly an exercise in solving the Coin Change Kata but rather in specifying Properties to drive this implementation. I noticed that I must think on a higher level about the implementation instead of hard-coding the test values.

I don’t know which values FsCheck will provide me and that’s OK; I don’t need to know that. I just need to constrain the inputs so that I can predict the output without specifying exactly what that output should look like. Just specifying some Properties about the output.

Hopefully you found this a nice read and have enjoyed the way we write Properties in this example. Maybe now you’re inspired to write Properties for your own implementations. The full code can be found at my GitHub.

FsCheck can also be used from a C# environment instead of F#, so you don’t have to be an F# expert to write Properties. It’s a way of looking at tests: constraining the inputs so that we can predict the outputs.

Thank you.

Route Azure Storage Events to multiple subscribers with Event Grid

A couple of weeks ago the Azure Event Grid service became available in public preview. This service enables centralized management of events in a uniform way. Moreover, it scales with you when the number of events increases. This is made possible by the foundation Event Grid relies on: Service Fabric. Not only does it auto-scale, you also do not have to provision anything besides an Event Topic to support custom events (see the blog post Routing an Event with a custom Event Topic).

Event Grid is serverless, therefore you only pay for each action (Ingress events, Advanced matches, Delivery attempts, Management calls). The price is 30 cents per million actions during the preview and will be 60 cents once the service is GA.

Azure Event Grid can be described as an event broker that has one or more event publishers and subscribers. Event publishers are currently Azure Blob Storage, resource groups, subscriptions, Event Hubs and custom events; more will be available in the coming months, like IoT Hub, Service Bus, and Azure Active Directory. On the other side there are consumers of events (subscribers) like Azure Functions, Logic Apps, and WebHooks, and on the subscriber side too more will become available, for instance Azure Data Factory, Service Bus and Storage Queues.

To view Microsoft’s Roadmap for Event Grid please watch the Webinar of the 24th of August on YouTube.

Event Grid Preview for Azure Storage

Currently, to capture Azure Blob Storage events you will need to register your subscription through a preview program. Once you have registered your subscription, which could take a day or two, you can leverage Event Grid for Azure Blob Storage, but only in West Central US!

Registered Azure Storage in an Azure Subscription for Event Grid.

The Microsoft documentation on Event Grid has a section “Reacting to Blob storage events”, which contains a walk-through to try out the Azure Blob Storage as an event publisher.

Scenario

Having registered the subscription to the preview program, we can start exploring its capabilities. Since the landing page of Event Grid provides us some sample scenarios, let’s try out the serverless architecture sample, where one can use Event Grid to instantly trigger a Serverless function to run image analysis each time a new photo is added to a blob storage container. Hence, we will build a demo according to the diagram below that resembles that sample.

Image Analysis Scenario with Event Grid.

An image will be uploaded to a Storage blob container, which will be the event source (publisher). The Storage blob container belongs to a Storage Account with the Event Grid capability. Finally, the Event Grid has three subscribers: a WebHook (Request Bin) to capture the output of the event, a Logic App to notify me that a blob has been created, and an Azure Function that will analyze the image created in the blob storage by extracting the URL from the event message and using it to analyze the actual image.

Intelligent routing

The screenshot below depicts the subscriptions on the events of the Blob Storage account. The WebHook subscribes to every event, while the Logic App and Azure Function are only interested in the BlobCreated event, in a particular container (prefix filter) and of a particular type (suffix filter).

Route Azure Storage Events to multiple subscribers with Event Grid

Besides being centrally managed, Event Grid offers intelligent routing, which is its core feature. You can use filters for the event type or the subject pattern (prefix and suffix). The filters are intended for the subscribers to indicate what type of event and/or subject they are interested in. When we look at our scenario, the event subscription for the Azure Function is as follows.

  • Event Type : Blob Created
  • Prefix : /blobServices/default/containers/testcontainer/
  • Suffix : .jpg                       

The prefix, a filter object, matches on subjectBeginsWith in the subject field of the event, and the suffix matches on subjectEndsWith, again in the subject. Consequently, in the event shown further below, you will see that the subject has the specified Prefix and Suffix. See also the Event Grid subscription schema in the documentation, which explains the properties of the subscription schema. The subscription schema of the function is as follows:

{
  "properties": {
    "destination": {
      "endpointType": "webhook",
      "properties": {
        "endpointUrl": "https://imageanalysisfunctions.azurewebsites.net/api/AnalyseImage?code=Nf301gnvyHy4J44JAKssv23578D5D492f7KbRCaAhcEKkWw/vEM/9Q=="
      }
    },
    "filter": {
      "includedEventTypes": [ "blobCreated" ],
      "subjectBeginsWith": "/blobServices/default/containers/testcontainer/",
      "subjectEndsWith": ".jpg",
      "subjectIsCaseSensitive": "true"
    }
  }
}

Azure Function Event Handler

The Azure Function is only interested in a Blob Created event with a particular subject and content type (image .jpg). This will be apparent once you inspect the incoming event to the function.

[{
  "topic": "/subscriptions/0bf166ac-9aa8-4597-bb2a-a845afe01415/resourceGroups/rgtest/providers/Microsoft.Storage/storageAccounts/teststorage666",
  "subject": "/blobServices/default/containers/testcontainer/blobs/NinoCrudele.jpg",
  "eventType": "Microsoft.Storage.BlobCreated",
  "eventTime": "2017-09-01T13:40:33.1306645Z",
  "id": "ff28299b-001e-0045-7227-23b99106c4ae",
  "data": {
    "api": "PutBlob",
    "clientRequestId": "206999d0-8f1b-11e7-a160-45670ee5a425",
    "requestId": "ff28299b-001e-0045-7227-23b991000000",
    "eTag": "0x8D4F13F04C48E95",
    "contentType": "image/jpeg",
    "contentLength": 32905,
    "blobType": "BlockBlob",
    "url": "https://teststorage666.blob.core.windows.net/testcontainer/NinoCrudele.jpg",
    "sequencer": "0000000000000AB100000000000437A7",
    "storageDiagnostics": {
      "batchId": "f11739ce-c83d-425c-8a00-6bd76c403d03"
    }
  }
}]

The same intelligence applies for the Logic App that is interested in the same event. The WebHook subscribes to all the events and lacks any filters.

The scenario solution

The solution contains a storage account (blob), a registered subscription for Event Grid Azure Storage, a Request Bin (WebHook), a Logic App and a Function App containing an Azure function. The Logic App and Azure Function subscribe to the BlobCreated event with the filter settings.

The Logic App subscribes to the event once the trigger action is defined. The definition is shown in the picture below.

Event Grid properties in a Logic App Trigger Action.

Note that the resource name has to be specified explicitly (custom value), as the resource type Microsoft.Storage has been set explicitly too. The resource types currently available are Resource Groups, Subscriptions, Event Grid Topics and Event Hub Namespaces, while Storage is still in a preview program; therefore, registration as described earlier is required. As a result, with the above configuration, the desired events can be evaluated and processed. In the case of the Logic App, it parses the event and sends an email notification.

Image Analysis Function

The Azure Function is interested in the same event. And as soon as the event is pushed to Event Grid once a blob has been created, it will process the event. The URL in the event https://teststorage666.blob.core.windows.net/testcontainer/NinoCrudele.jpg will be used to analyse the image. The image is a picture of my good friend Nino Crudele.

Route Azure Storage Events to multiple subscribers with Event Grid

This image will be streamed from the function to the Cognitive Services Computer Vision API. The result of the analysis can be seen in the monitor tab of the Azure Function.

Route Azure Storage Events to multiple subscribers with Event Grid

The result of the analysis with high confidence is that Nino is smiling for the camera. We, as humans, would say that this is obvious, however do take into consideration that a computer is making the analysis. Hence, the Computer Vision API is a form of Artificial Intelligence (AI).

The Logic App in our scenario will parse the event and send out an email. The Request Bin will show the raw event as-is. And in case I, for instance, delete a blob, that event will only be caught by the WebHook (Request Bin), as it is interested in any event on the Storage account.

Route Azure Storage Events to multiple subscribers with Event Grid

Summary

Azure Event Grid is unique in its kind, as no other Cloud vendor has this type of service that can handle events in a uniform and serverless way. It is still early days, as the service has only been in preview for a few weeks; however, with the expansion of event publishers and subscribers, management capabilities and other features, it will mature in the next couple of months.

The service is currently only available in West Central US and West US. However, over the course of time it will become available in every region. And once it becomes GA, the price will increase.

Working with a Storage Account as a source (publisher) of events unlocked new insights into the Event Grid mechanisms. Moreover, it shows the benefits of having one central service in Azure for events. The pub-sub and push of events are the key differentiators compared to the other two services, Service Bus and Event Hubs: you no longer have to poll for events or develop a solution for that. To conclude, the Service Bus Team has completed the picture for messaging and event handling.

Author: Steef-Jan Wiggers

Steef-Jan Wiggers has over 15 years’ experience as a technical lead developer, application architect and consultant, specializing in custom applications, enterprise application integration (BizTalk), Web services and Windows Azure. Steef-Jan is very active in the BizTalk community as a blogger, Wiki author/editor, forum moderator, writer and public speaker in the Netherlands and Europe. For these efforts, Microsoft has recognized him as a Microsoft MVP for the past 5 years.

F# Agent Pipeline Processing in Concurrent Applications

Introduction

In the context of Concurrent Applications, there are several architectural models that describe how the parts of the system communicate. The Actor Model, or an Agent-Based Architecture, is one of those models we can use to write robust Concurrent Applications.

One of the challenges in Concurrent Programming is that several computations want to communicate with each other and share a state safely.

F# has implemented this model (or at least a part of it) with the MailboxProcessor, or its shortened alias: Agent. See Tomas Petricek’s blog post for more information.

Agent Pipeline

Introduction

The way agents communicate with each other is by itself a lot harder to comprehend than a sequential or even a parallel system. Agents communicate by sending messages. An agent can alter its own Private State, but no other “external” state; state is communicated by sending messages to other agents.

This Isolation of agents into Active Objects can increase the system’s complexity. In this post, I will talk about a brain-dump exercise of mine to create an Agent Pipeline. I think a pipeline is an easier way to reason about agents, because we still have that feeling of “sequential”.

 

In F# this is rather easy to express, so let’s try it!

F# Agent Template

First, let’s define the agent we are going to use. Our agent must receive a message, but must also send some kind of response, so the next agent can process it. The agent itself is an asynchronous computation, so the result of the message will be wrapped inside an Async type.

The signature for calling our pipeline agent should therefore be something like this:

‘a -> Async<‘a>

Ok, let’s define some basic prerequisites and our basic agent template. We’ll use a plain string as our message, just for example purposes.
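
Those definitions aren’t shown in this extract; as a sketch of the prerequisites, the message could simply pair the string payload with the reply channel (the type name is my own):

// The payload is a plain string for this example; the AsyncReplyChannel
// is how the agent sends the processed message back to the caller.
type AgentMessage = string * AsyncReplyChannel<string>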

Now that we have our basic setup, we can start looking at the body of our agent. We must receive a message and send back a reply. This can be done with the basic Receive() method which, with the message type above, gives us a tuple with the message itself and the channel we have to send our reply to. This call to Receive will block the loop until the next message arrives. Agents run on a single logical thread and all messages sent to agents are queued.

The body can be defined like this (print function to simulate the processing):
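
A minimal sketch of such an agent (the printfn simulates the processing; the agent and post names are illustrative):

let agent name =
    MailboxProcessor<AgentMessage>.Start (fun inbox ->
        let rec loop () = async {
            // Receive blocks the loop until the next (message, reply channel) tuple arrives
            let! message, channel = inbox.Receive ()
            printfn "[%s] processing: %s" name message
            channel.Reply message
            return! loop () }
        loop ())

// Calling the agent now has the 'a -> Async<'a> shape we were after
let post (pipelineAgent : MailboxProcessor<AgentMessage>) message =
    pipelineAgent.PostAndAsyncReply (fun channel -> message, channel)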

F# Async Binding

Ok, now we have our basic agent; we can look at how we can bind agents together.

Just like in the previous diagram, I would like to express in code how messages are “piped” to other agents. When we think of piping, we can think of two approaches: Applicative (<*>) and Monadic (>>=). Since we need the result of the previous call in our next call, I’m going to use the Monadic style (>>=).

Looking at the signature, we see that we must bind two separate worlds: the world of strings and the world of Async. Just looking at the signature makes me want to write some bind functions; so, before we go any further, let’s define some helper functions for our Async world:
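
The helpers themselves aren’t reproduced here; a sketch of two likely candidates, a return and a bind for the Async world, plus an infix operator for the monadic style:

module Async =
    // 'a -> Async<'a>
    let retn x = async { return x }

    // ('a -> Async<'b>) -> Async<'a> -> Async<'b>
    let bind f operation = async {
        let! result = operation
        return! f result }

// Infix monadic bind, so the pipeline can be read from left to right
let (>>=) operation f = Async.bind f operation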

These two functions should be enough to define our pipeline. Look at the signature of our bind:

(‘a -> Async<’b>) -> Async<’a> -> Async<’b>

This is just what we want in our agent signature. Now, I’m going to create some agents to simulate a pipeline:
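
A sketch of that pipeline, using the agent and post helpers from the earlier sketch (the three stage names are made up for the example):

let receive   = agent "receive"
let transform = agent "transform"
let send      = agent "send"

// Messages are piped from one agent to the next, just like in the diagram
let pipeline message =
    post receive message
    >>= post transform
    >>= post send

// pipeline "hello" |> Async.RunSynchronously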

Note that the pipeline of agents is almost exactly like we designed it in our diagram. This is one of the many reasons I like F# so much: much more than in C#, you can declaratively express exactly how you see the problem. The C# async/await variant was inspired by F# Asynchronous Workflows.

Or, if you like, a Kleisli style (which I like to use sometimes). This makes sure that we don’t have to wrap the message in an Async first:
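
A sketch of that variant, composing the 'a -> Async<'b> functions directly with a fish operator (an alternative to the pipeline definition above):

// Kleisli composition for the Async world
let (>=>) f g = f >> Async.bind g

let pipeline' = post receive >=> post transform >=> post send

// pipeline' "hello" |> Async.RunSynchronously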

Conclusion

“Functional programming is the most practical way to write concurrent programs. Trying to write concurrent programs in imperative languages isn’t only difficult, it leads to bugs that are difficult to discover, reproduce, and fix”

– Riccardo Terrell (Functional Concurrency)

This is just a brain-dump of an exercise for myself in training Monadic Binds and Agents, and how to combine them. What I really learned is to look at the signature itself. Much more than in Object-Oriented languages, the signature isn’t a lie and tells you exactly what is going on. Just by looking at the signature, you can make a good guess at how the function will look.

Functional Programming is still a bit strange at first if you come from an Object-Oriented world, but trust me, it’s worth learning. In a future where asynchronous, parallel and concurrent topics are considered “mainstream”, Functional Programming will become less and less of a niche choice.

INTEGRATE 2017 USA Coming to Microsoft Redmond Campus – October 25, 26, 27

If you missed the chance to attend INTEGRATE 2017 in London this year, now is your chance to participate in INTEGRATE 2017 USA at the Microsoft Redmond Campus. Come see Scott Guthrie, Executive Vice President for the Cloud and Enterprise division, deliver the keynote address. Have a chance to network with Microsoft employees along with Microsoft Integration MVPs.

Further details and registration information can be found at https://www.biztalk360.com/integrate-2017-usa/

Microsoft Integration Weekly Update: Sep 25, 2017

Do you find it difficult to keep up to date with all the frequent updates and announcements on the Microsoft Integration platform?

The Integration weekly update can be your solution. It’s a weekly update on the topics related to Integration – enterprise integration, robust & scalable messaging capabilities and Citizen Integration capabilities empowered by the Microsoft platform to deliver value to the business.

If you want to receive these updates weekly, then don’t forget to Subscribe!

On-Premise Integration:

Cloud and Hybrid Integration:

 

Feedback

Hope this would be helpful. Please feel free to let me know your feedback on the Integration weekly series.


Adding circuit breakers to your .NET applications

Apps fail. Hardware fails. Networks fail. None of this should surprise you. As we build more distributed systems, these failures create unpredictability. Remote calls between components might experience latency, faults, unresponsiveness, or worse. How do you keep a failure in one component from creating a cascading failure across your whole environment?

In his seminal book Release It!, Michael Nygard introduced the “circuit breaker” software pattern. Basically, you wrap calls to downstream services and watch for failure. If there are too many failures, the circuit “trips” and the downstream service isn’t called any longer, at least for a period of time, until the service heals itself.

How do we use this pattern in our apps? Enter Hystrix from Netflix OSS. Released in 2012, this library executes each call on a separate thread, watches for failures in Java calls, invokes a fallback operation upon failure, trips a circuit if needed, and periodically checks to see if the downstream service is healthy. And it has a handy dashboard to visualize your circuits. It’s wicked. The Spring team worked with Netflix and created an easy-to-use version for Spring Boot developers. Spring Cloud Hystrix is the result. You can learn all about it in my most recent Pluralsight course.

But why do Java developers get to have all the fun? Pivotal released an open-source library called Steeltoe last year. This library brings microservices patterns to .NET developers. It started out with things like a Git-backed configuration store, and service discovery. The brand new update offers management endpoints and … an implementation of Hystrix for .NET apps. Note that this is for .NET Framework OR .NET Core apps. Everybody gets in on the action.

Let’s see how Steeltoe Hystrix works. I built an ASP.NET Core service, and then called it from a front-end app. I wrapped the calls to the service using Steeltoe Hystrix, which protects my app when failures occur.

Dependency: the recommendation service

This service returns recommended products to buy, based on your past purchasing history. In reality, it returns four products that I’ve hard-coded into a controller. LOWER YOUR EXPECTATIONS OF ME.

This is an ASP.NET Core MVC Web API. The code is in GitHub, but here’s the controller for review:

namespace core_hystrix_recommendation_service.Controllers
{
    [Route("api/[controller]")]
    public class RecommendationsController : Controller
    {
        // GET api/recommendations
        [HttpGet]
        public IEnumerable<Recommendations> Get()
        {
            Recommendations r1 = new Recommendations();
            r1.ProductId = "10023";
            r1.ProductDescription = "Women's Triblend T-Shirt";
            r1.ProductImage = "https://cdn.shopify.com/s/files/1/0692/5669/products/charcoal_pivotal_grande_43987370-6045-4abf-b81c-b444e4c481bc_1024x1024.png?v=1503505687";

            Recommendations r2 = new Recommendations();
            r2.ProductId = "10040";
            r2.ProductDescription = "Men's Bring Back Your Weekend T-Shirt";
            r2.ProductImage = "https://cdn.shopify.com/s/files/1/0692/5669/products/m2_1024x1024.png?v=1503525900";

            Recommendations r3 = new Recommendations();
            r3.ProductId = "10057";
            r3.ProductDescription = "H2Go Force Water Bottle";
            r3.ProductImage = "https://cdn.shopify.com/s/files/1/0692/5669/products/Pivotal-Black-Water-Bottle_1024x1024.png?v=1442486197";

            Recommendations r4 = new Recommendations();
            r4.ProductId = "10059";
            r4.ProductDescription = "Migrating to Cloud Native Application Architectures by Matt Stine";
            r4.ProductImage = "https://cdn.shopify.com/s/files/1/0692/5669/products/migrating_1024x1024.png?v=1458083725";

            return new Recommendations[] { r1, r2, r3, r4 };
        }
    }
}

Note that the dependency service has no knowledge of Hystrix or how the caller invokes it.

Caller: the recommendations UI

The front-end app calls the recommendation service, but it shouldn’t tip over just because the service is unavailable. Rather, bad calls should fail quickly, and gracefully. We could return cached or static results, as an example. Be aware that a circuit breaker is much more than fancy exception handling. One big piece is that each call executes in its own thread. This implementation of the bulkhead pattern prevents runaway resource consumption, among other things. Besides that, circuit breakers are also machinery to watch failures over time, and to allow the failing service to recover before allowing more requests.

This ASP.NET Core app uses the mvc template. I’ve added the Steeltoe packages to the project. There are a few Nuget packages to choose from. If you’re running this in Pivotal Cloud Foundry, there’s a set of packages that make it easy to integrate with Hystrix dashboard embedded there. Here, let’s assume we’re running this app somewhere else. That means I need the base package “Steeltoe.CircuitBreaker.Hystrix” and “Steeltoe.CircuitBreaker.Hystrix.MetricsEvents” which gives me a stream of real-time data to analyze.

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNet.WebApi.Client" Version="5.2.3" />
    <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration" Version="2.0.0" />
    <PackageReference Include="Steeltoe.CircuitBreaker.Hystrix" Version="1.1.0" />
    <PackageReference Include="Steeltoe.CircuitBreaker.Hystrix.MetricsEvents" Version="1.1.0" />
  </ItemGroup>
  <ItemGroup>
    <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="2.0.0" />
  </ItemGroup>
</Project>

I built a class (“RecommendationService”) that calls the dependent service. This class inherits from HystrixCommand. There are a few ways to use these commands in calling code. I’m adding it to the ASP.NET Core service container, so my constructor takes in an IHystrixCommandOptions.

//HystrixCommand means no result, HystrixCommand<string> means a string comes back
public class RecommendationService: HystrixCommand<List<Recommendations>>
{
  public RecommendationService(IHystrixCommandOptions options):base(options) {
     //nada
  }

I’ve got inherited methods to use thanks to the base class. I call my dependent service by overriding Run (or RunAsync). If failure happens, the RunFallback (or RunFallbackAsync) is invoked and I just return some static data. Here’s the code:

protected override List<Recommendations> Run()
{
  var client = new HttpClient();
  var response = client.GetAsync("http://localhost:5000/api/recommendations").Result;

  var recommendations = response.Content.ReadAsAsync<List<Recommendations>>().Result;

  return recommendations;
}

protected override List<Recommendations> RunFallback()
{
  Recommendations r1 = new Recommendations();
  r1.ProductId = "10007";
  r1.ProductDescription = "Black Hat";
  r1.ProductImage = "https://cdn.shopify.com/s/files/1/0692/5669/products/hatnew_1024x1024.png?v=1458082282";

  List<Recommendations> recommendations = new List<Recommendations>();
  recommendations.Add(r1);

  return recommendations;
}

My ASP.NET Core controller uses the RecommendationService class to call its dependency. Notice that I’ve got an object of that type coming into my constructor. Then I call the Execute method (that’s part of the base class) to trigger the Hystrix-protected call.

public class HomeController : Controller
{
  public HomeController(RecommendationService rs) {
  this.rs = rs;
  }

  RecommendationService rs;

  public IActionResult Index()
  {
    //call Hystrix-protected service
    List<Recommendations> recommendations = rs.Execute();

    //add results to property bag for view
    ViewData["Recommendations"] = recommendations;

    return View();
  }
}

Last thing? Tying it all together. In the Startup.cs class, I added two things to the ConfigureServices operation. First, I added a HystrixCommand to the service container. Second, I added the Hystrix metrics stream.

// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
  services.AddMvc();

  //add QueryCommand to service container, and inject into controller so it gets config values
  services.AddHystrixCommand<RecommendationService>("RecommendationGroup", Configuration);

  //added to get Metrics stream
  services.AddHystrixMetricsStream(Configuration);
}

In the Configure method, I added a couple of pieces to the application pipeline.

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
   if (env.IsDevelopment())
   {
     app.UseDeveloperExceptionPage();
   }
   else
   {
     app.UseExceptionHandler("/Home/Error");
   }

   app.UseStaticFiles();

   //added
   app.UseHystrixRequestContext();

   app.UseMvc(routes =>
   {
     routes.MapRoute(
       name: "default",
       template: "{controller=Home}/{action=Index}/{id?}");
   });

   //added
   app.UseHystrixMetricsStream();
}

That’s it. Notice that I took advantage of ASP.NET Core’s dependency injection, and known extensibility points. Nothing unnatural here.

You can grab the source code for this from my GitHub repo.

Testing the circuit

Let’s test this out. First, I started up the recommendation service. Pinging the endpoint proved that I got back four recommended products.

2017.09.21-steeltoe-01

Great. Next I started up the MVC app that acts as the front-end. Loading the page in the browser showed the four recommendations returned by the service.

2017.09.21-steeltoe-02

That works. No big deal. Now let’s turn off the downstream service. Maybe it’s down for maintenance, or just misbehaving. What happens?

2017.09.21-steeltoe-03

The Hystrix wrapper detected a failure, and invoked the fallback operation. That’s cool. Let’s see what Hystrix is tracking in the metrics stream. Just append /hystrix/hystrix.stream to the URL and you get a data stream that’s fully compatible with Spring Cloud Hystrix.

2017.09.21-steeltoe-04

Here, we see a whole bunch of data that Hystrix is tracking. It’s watching request count, error rate, and lots more. What if you want to change the behavior of Hystrix? Amazingly, the .NET version of Hystrix in Steeltoe has the same broad configuration surface that classic Hystrix does. By adding overrides to the appsettings.json file, you can tweak the behavior of commands, the thread pool, and more. In order to see the circuit actually open, I stretched the evaluation window (from 10 to 20 seconds), and reduced the error limit (from 20 to 3). Here’s what that looked like:

{
  "hystrix": {
    "command": {
      "default": {
        "circuitBreaker": {
          "requestVolumeThreshold": 3
        },
        "metrics": {
          "rollingStats": {
            "timeInMilliseconds": 20000
          }
        }
      }
    }
  }
}

Restarting my service shows new threshold in the Hystrix stream. Super easy, and very powerful.

2017.09.21-steeltoe-05

BONUS: Using the Hystrix Dashboard

Look, I like reading gobs of JSON in the browser as much as the next person with too much free time. However, normal people like dense visualizations that help them make decisions quickly. Fortunately, Hystrix comes with an extremely data-rich dashboard that makes it simple to see what’s going on.

This is still a Java component, so I spun up a new project from start.spring.io and added a Hystrix Dashboard dependency to my Boot app. After adding a single annotation to my class, I spun up the project. The Hystrix dashboard asks for a metrics endpoint. Hey, I have one of those! After plugging in my stream URL, I can immediately see tons of info.

2017.09.21-steeltoe-06.png

As a service owner or operator, this is a goldmine. I see request volumes, circuit status, failure counts, number of hosts, latency, and much more. If you’ve got a couple services, or a couple hundred, visualizations like this are a life saver.

Summary

As someone who started out their career as a .NET developer, I’m tickled to see things like this surface. Steeltoe adds serious juice to your .NET apps and the addition of things like circuit breakers makes it a must-have. Circuit breakers are a proven way to deliver more resilient service environments, so download my sample apps and give this a spin right now!




Categories: .NET, Cloud, Microservices, Pivotal, Spring