Azure AD application registration monitoring: All you need to know 

App registration is a mechanism in Azure AD that lets you work with an application and its permissions. It’s an object in Azure AD that represents the application, its redirect URI (where to redirect users after they have signed in), its logout URL (where to redirect users after they’ve signed out), API access, and custom application roles for managing permissions for users and apps. 

In fact, through an app registration, you can restrict access to an application to a specific group of users if needed. An example of this is a solution I built a few years ago where we had two separate apps: a customer-facing app and a management app. Each had its own app registration. For the management app, I restricted access to a select group of people responsible for managing the system. 

Associated with an app registration is a service principal, which is the identity of that application. As you undoubtedly know, a service principal has credentials. However, you may not know that these credentials have an expiry date (end date). If you don’t monitor and manage those dates, you may end up with applications and services that stop working. 

The Microsoft identity platform handles identity and access management (IAM) only for registered applications. Registering an application creates trust between the application and the Microsoft identity platform. 

The trust is unidirectional, which means that the registered application trusts the Microsoft identity platform, but not the other way around. 

In Azure AD, applications can be represented in two ways: 

Application objects – Application objects define the application for Azure AD and can be viewed as the definition of the application. This enables the service to understand how to issue tokens to the application based on its settings. 

Service principals – The instance of the application in the user’s directory that controls connections to Azure AD is known as a service principal. 

Monitoring 

Serverless360 is an off-the-shelf platform that keeps track of the expiration of client secrets for specific app registrations and delivers notifications prior to the expiration date, prompting you to renew them. 

Navigate to the Monitoring section of the resource and specify how many days before expiry the alert must be received. That’s pretty much all the user has to configure; the platform takes care of the rest for you. 

Can you achieve the same from the Azure portal? 

In this section, we’ll see how we can define an Azure Automation runbook that we can run periodically to detect and get a list of those credentials that are either expired or about to expire. 

Setting up the automation runbook 

Creating an Azure Automation runbook can be done through the Azure portal or a CLI. We’ll show the portal way here. 

We first start by creating an Automation account. In the Azure portal, look for “Automation accounts”, then create a new instance: 

Once the account is created, we need to create a runbook (an Automation account can host many runbooks, each handling a given task). 

Go to the “Runbooks” section, then click “Create a runbook” and enter the requested information. 
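
For reference, the same two steps can also be scripted instead of using the portal. Below is a minimal sketch using the Az PowerShell module; the resource group, account, and runbook names are placeholders, not the ones from this walkthrough.

# Sketch: create the Automation account and an empty PowerShell runbook (names are placeholders)
New-AzAutomationAccount -ResourceGroupName "rg-monitoring" `
    -Name "aa-appreg-monitor" -Location "westeurope"

New-AzAutomationRunbook -ResourceGroupName "rg-monitoring" `
    -AutomationAccountName "aa-appreg-monitor" `
    -Name "Get-ExpiringAppCredentials" -Type PowerShell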

You’re then presented with a screen to enter the code for that runbook. Our code will be in PowerShell. We’ll get to the complete source code in the next section. 

For now, I’ve displayed some sample code:

  • Notice, on line 3 of the sample, that we import the “AzureAD” PowerShell module to interact with Azure AD. We use it on line 13 to get the list of all app registrations. 
  • Notice, too, that between lines 6 and 9 we authenticate to Azure AD before getting the list of app registrations (again, on line 13). 
From the toolbar (above the text editor), you can save the runbook, test it, publish it (you need to do that before you can use it in production), and revert to the previous version (in case the new version doesn’t work as expected).
 
Since we’re importing a module (here, “AzureAD” on line 3), we first need to install it in the Automation account. 

To do that, at the Automation account level, we click on “Modules” and look for “AzureAD”: 

Since that module isn’t installed, we need to install it from the gallery by clicking on “Add a module”. We’ll pick 5.1 as the runtime version: 
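
Alternatively, the module can be imported into the Automation account from the PowerShell Gallery with a script. A rough sketch with the Az cmdlets (the account and resource group names are placeholders):

# Sketch: import the AzureAD module from the PowerShell Gallery into the Automation account
New-AzAutomationModule -ResourceGroupName "rg-monitoring" `
    -AutomationAccountName "aa-appreg-monitor" `
    -Name "AzureAD" `
    -ContentLinkUri "https://www.powershellgallery.com/api/v2/package/AzureAD"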

The code 

The PowerShell code to be added to the runbook is listed here. Replace the previous code with this one. 

The code is pretty easy to understand. One thing worth mentioning is the $daysToExpire variable, which you’ll have to set to an appropriate value for your scenario. It’s intended to detect the service principals whose credentials are about to expire in the coming x days. 
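
The full script is only linked from the original post and isn’t reproduced in this text. The following is a minimal sketch of what such a runbook could look like, assuming the AzureAD module and the Run As connection from the previous steps; the names and the threshold value are illustrative, not the author’s exact code.

# Sketch of an expiry-detection runbook (illustrative, not the original script)
Import-Module AzureAD

# Report credentials that expire within this many days
$daysToExpire = 30
$threshold = (Get-Date).AddDays($daysToExpire)

# Authenticate with the Automation account's Run As service principal
$connection = Get-AutomationConnection -Name "AzureRunAsConnection"
Connect-AzureAD -TenantId $connection.TenantId `
    -ApplicationId $connection.ApplicationId `
    -CertificateThumbprint $connection.CertificateThumbprint | Out-Null

# Inspect every app registration's password credentials (client secrets)
Get-AzureADApplication -All $true | ForEach-Object {
    $app = $_
    foreach ($cred in $app.PasswordCredentials) {
        if ($cred.EndDate -le $threshold) {
            $status = if ($cred.EndDate -lt (Get-Date)) { "Expired" } else { "Expires soon" }
            [PSCustomObject]@{
                Application = $app.DisplayName
                KeyId       = $cred.KeyId
                EndDate     = $cred.EndDate
                Status      = $status
            }
        }
    }
}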

Configuring the permissions for the runbook 

At this point, if you execute the runbook, you’ll notice that it might not work. That’s because the identity under which the runbook runs doesn’t have permissions to interact with Azure AD. 

An Azure Automation account has an associated identity. You can find it in the “Connections” section under “Shared resources” in the Azure portal. 

I’ll choose the “AzureRunAsConnection”, which is of type “Service principal”, and give it the appropriate permissions.

To find that service principal in Azure AD, I need to search for the name of the Automation account in the list of “All applications” under “App registrations”: 
Since we want to list app registrations from Azure AD, we need to assign the directory role “Directory readers” to the service principal associated with our Automation account (the one that will execute the runbook), following the principle of least privilege. 
 
So, we go to “Roles and administrators” in our Azure AD tenant and select “Directory readers”: 
Then, we add an assignment to our service principal: 
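
The same assignment can also be scripted with the AzureAD module if you prefer. A rough sketch, assuming the Run As service principal carries the Automation account’s name (the search string is a placeholder you would replace with your own account name):

# Sketch: grant "Directory Readers" to the Run As service principal (names are placeholders)
$sp   = Get-AzureADServicePrincipal -SearchString "aa-appreg-monitor" | Select-Object -First 1
$role = Get-AzureADDirectoryRole | Where-Object { $_.DisplayName -eq "Directory Readers" }

# The role must be activated from its template if it has never been used in the tenant
if (-not $role) {
    $template = Get-AzureADDirectoryRoleTemplate | Where-Object { $_.DisplayName -eq "Directory Readers" }
    $role = Enable-AzureADDirectoryRole -RoleTemplateId $template.ObjectId
}

Add-AzureADDirectoryRoleMember -ObjectId $role.ObjectId -RefObjectId $sp.ObjectId
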
And we’re done. 


Top 5 Azure Observability Tools in 2023 

Building an application with different deployment models, resources, and tools in Azure is not the end of the road. The ultimate goals of end-user experience, sustainability, and increased visibility can only be achieved with observability. 

There are a lot of different tools available in the market. Azure has built-in tools, but many third-party solutions are available that build on top of the native tooling to advance the observability functionality. 

Today, I’ll take you through the common traits among the available tools and what you should consider while choosing an Azure observability tool. 

What is the difference between monitoring and observability? 

Before I go deep into observability, let me provide some clarity about its close companion, monitoring. In fact, observability and monitoring are tightly connected, and you cannot achieve observability without monitoring. 

Let us understand what they are, why they are essential, and when they are crucial to consider in your Azure ecosystem. 

Monitoring 

A monitoring system or tool actively tracks your application and continuously assesses it for any anomalies, flaws, or problems. 

  1. Monitoring gathers metrics and properties from the available sources like APIs and logs. 
  2. It passively tracks performance, and the amount of data it generates usually drowns the admin personnel. 
  3. Monitoring usually focuses on a point observation like integrations, infrastructure, and networks. 
  4. The data available through monitoring is often considered the final expected outcome. 

Observability 

The data collected from monitoring, like metrics and properties, sets the base for observability. While monitoring focuses on incident reporting, observability provides insights into why the issue happened. 

  1. It collects various data like metrics, logs, and traces, which sets up the system to extract crucial insights into why things are happening. 
  2. It provides refined information after processing various data sources that pinpoint the exact root cause of the issue or incident. 
  3. Observability holistically focuses on both application and infrastructure to identify the root cause. 
  4. It collects data from sources contributing to the analytical process, representing the incident state. 

The bottom line: while many observability tools are available in the market, all of them share a common data source, Azure Monitor. 

Can you achieve better observability with Azure Monitor? 

Since Azure Monitor can only generate metrics and logs, users cannot achieve the advanced form of monitoring, which is observability, with it alone. 

The platform should be able to refine various data sources like metrics, logs, and traces to focus on the relevant data, such as the factors that drive operations decisions and actions to fix incidents faster. 

What should you consider while choosing an observability tool? 

While many third-party and open-source solutions in the market utilize Azure Monitor export API to provide an upgraded experience beyond the threshold determined by the Azure cloud, I will explain the critical features that are expected to be present in any observability tool. 

Analyze and predict anomalies in Azure 

Leveraging custom algorithms to predict anomalies in Azure resources allows users to be proactive with critical performance issues. In addition, it correlates issues across hybrid and microservice architecture. 

Real-time dependency mapping 

This provides a consolidated view of the resources as a line-of-business application. Users can derive relationships between the resources that comprise the business application, using this as a physical representation of the architecture with real-time health status. 

Business KPI dashboard 

The ability to auto-populate dashboards that aggregate and present the data to show business goal achievements and bottlenecks. 

Deep Analytical tool 

Without switching between tabs, drill down into Azure services, components, or parameters using robust in-built tools to identify root causes. 

Automatic remediation 

The advanced automation capabilities help fix trivial incidents that may not require manual intervention.

List of Azure Observability Tools 

Given the volume of tools available in the market, it might be daunting to compare every product and choose the one that suits your needs. Hence, we have hand-picked the top 5 observability tools that have advanced capabilities. 

#1 Serverless360 (Best Overall) 

Serverless360 is best for achieving advanced observability and end-to-end correlation tracking. 

Serverless360 is a provider of advanced monitoring and observability. It advances observability with contextual information, end-to-end correlation, and automation. It helps remove blind spots, resolve issues rapidly with minimal MTTR, and deliver a superior customer experience.  

It extends the three core pillars of observability with a topology map that correlates the dependencies between applications to provide contextual information.  

It provides actionable answers rather than just producing severity alerts. With advanced automation, you can ensure high scalability by auto-remediating trivial issues without manual intervention. 

Features 

  • Contextual information from the observed data about business goals impact 
  • Precise answers to reduce Mean time to recovery 
  • End-to-end correlation between the Azure service dependencies 
  • Service map to get the real-time health status of the application architecture 
  • Granular user access permission and team collaboration 
  • Disparate notification channels like Slack, ServiceNow, Teams, and more 

Price 

Its base price starts at $150/month for 25 Azure resources. You can try their 15-day free trial.

#2) Dynatrace 

Dynatrace is a comprehensive enterprise SaaS tool for a wide range of enterprise monitoring needs. For distributed tracing, it provides a technology called PurePath that combines distributed tracing with code-level insight. 

Features: 

  • Automatic injection and collection of data 
  • Code-level visibility across all application tiers for web and mobile apps together 
  • Always-on code profiling and diagnostics tools for application analysis 

#3) SigNoz 

SigNoz is a full-stack open-source APM and observability tool. It collects both metrics and traces, with log management currently included in the product roadmap. Logs, metrics, and traces are considered the three pillars of observability in modern distributed systems. 

Features: 

  • User requests per second 
  • 50th, 90th, and 99th percentile latencies of microservices in your application 
  • Error rate of requests to your services 

#4) Honeycomb 

Honeycomb is a full-stack cloud-based observability tool with support for events, logs, and traces. It provides an easy-to-use UI for unified observability. 

Features: 

  • Quickly diagnose issues and tweak performance with a top down approach to understand how your system is processing service requests 
  • Full-text search over trace spans and toggle to collapse and expand sections of trace waterfalls 
  • Provides Honeycomb beelines to automatically define key pieces of trace data like serviceName, name, timestamp, duration, traceID, etc. 

#5) Datadog 

Datadog is an enterprise APM tool that offers a variety of monitoring products, from infrastructure monitoring and log management to network and security monitoring. 

Features: 

  • Out of box performance dashboards for web services, queues, and databases to monitor requests, errors, and latency 
  • Correlation of distributed tracing to browser sessions, logs, profiles, network, processes, and infrastructure metrics 
  • Can ingest 50 traces per second per APM host 
  • Service maps to understand service dependencies 

Observability in Azure: Wrap up 

Many tools are available at your disposal, and Azure Monitor is a solid out-of-the-box place to start if your organization decides to work with Azure. At scale, however, you may need advanced and custom functionality that goes beyond the limitations of the native tooling to help you understand the health of your application at a glance and keep your business up and running. 

 


Azure API Management Monitoring and Alerting made simpler

In today’s world, APIs have definitely advanced the way applications communicate with each other. However, when several APIs are utilized in a business scenario, it can be challenging to retain insight into each API to make sure they work as intended. 

This is where an Azure service like API Management turns out to be a significant aspect. It offers a centralized interface to publish, transform and manage numerous APIs, guaranteeing that they are secure and consumable. 

Significance of Monitoring your Azure APIM 

Since Azure APIM instances manage such business-critical APIs, monitoring them and their operations is crucial to better understand their health and efficiency. Here are the top benefits that can be achieved with Azure APIM monitoring: 

  • Eliminate bottlenecks: Monitoring how your APIM APIs and Products (groups of one or more APIs) perform is essential to quickly spot problems that might adversely affect the end-user experience. 
  • Reduce latency gaps: API response time has a profound effect on the performance of an application. With APIM monitoring, you get to identify response time delays and thereby eliminate latency gaps. 
  • Ensure availability: When there are any API-related issues, an effective APIM monitoring setup will send instant alerts, allowing you to take the required remedial actions. 
  • Detect failure anomalies: Rapidly figure out if there are outages or abnormal deviations like a sudden rise in the rate of failed requests.

Understanding this importance, Azure offers its own suite of built-in tools for monitoring Azure APIM instances.  

Wide range of monitoring options available for Azure APIM  

Native-Azure monitoring tools (Azure Monitor)  

Azure Monitor is one of the primary built-in tools for monitoring Azure APIM instances. Basically, it enables the collection of metrics and logs from APIM, which can be further used for monitoring, visualizing, and alerting. 

Capabilities 

  • Monitor your APIM Instances on metrics like capacity and request rate 
  • Set up alert rules based on metrics to get notified of critical issues 
  • Provides dashboards for visualizing monitoring metrics 
  • Get insights into the operations performed on your APIM Instances with Activity logs 
  • Integration with App Insights lets you know the dependencies between APIM instances and other services.  
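
As a point of reference for the alert-rule capability above, here is a rough sketch of creating a metric alert on an APIM instance with the Az PowerShell module; the resource names and the action group ID are placeholders, and the exact metrics available depend on your APIM tier.

# Sketch: alert when the APIM "Capacity" metric averages above 75% (names are placeholders)
$apim = Get-AzResource -ResourceGroupName "rg-apim" -Name "my-apim" `
    -ResourceType "Microsoft.ApiManagement/service"

$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "Capacity" `
    -TimeAggregation Average -Operator GreaterThan -Threshold 75

Add-AzMetricAlertRuleV2 -Name "apim-capacity-alert" -ResourceGroupName "rg-apim" `
    -TargetResourceId $apim.ResourceId -WindowSize 00:05:00 -Frequency 00:05:00 `
    -Condition $criteria -Severity 2 `
    -ActionGroupId "/subscriptions/<subscription-id>/resourceGroups/rg-apim/providers/microsoft.insights/actionGroups/ops-team"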

However, Azure Monitor focuses more on reactive monitoring – you get to react only after an incident has occurred.  

But when it comes to APIM monitoring, businesses tend to be more proactive, constantly attempting to spot and fix potential issues before they have an impact on end users.  

The limitations in using Azure Monitor for APIM 

  • Azure Monitor allows you to configure only a limited number of metrics per alert rule 
  • Monitoring an APIM instance on various metrics demands configuring a number of alerts, resulting in a cost spike. 
  • No consolidated error reporting for multiple APIM instances 
  • Doesn’t support visualizing how an API call traverses through various Azure services 
  • It would be hard to perform root cause analysis in case of a performance or latency issue without end-to-end tracing. 
  • Lack of automated features to execute remedial actions without manual intervention 

This is where the necessity of having enterprise-grade monitoring tools comes in place. One such tool that can assist you in overcoming the above-listed drawbacks and help proactively monitor your Azure APIM is Serverless360. 

How to be proactive and overcome limitations in Azure Monitor? 

Serverless360 is an advanced cloud management platform that enables managing and monitoring different Azure services involved in your application from a unified view. 

Considering APIs are critical in simplifying how an end user interacts with your application, Serverless360 offers out-of-the-box monitoring support for Azure APIM APIs, operations, and products. 

Here is how Serverless360 can be extensively used for Azure APIM Monitoring. 

Proactive monitoring for Azure APIM: Monitor all your APIM instances on multiple metrics and properties (Failed Requests, Successful Requests, etc) at no additional cost by setting up maximum thresholds to get an alert whenever there is a violation. 

With this, you get to overcome one of the major limitations in Azure Monitor – the restriction on how many metrics can be monitored under a single alert. 

Real-time consolidated error reports: In any sort of traditional monitoring, error reports will be generated for each APIM API, operation, or product, making it very difficult to identify the root cause of an issue.  

But Serverless360 can mitigate the challenge by sending you a consolidated report on all the APIM instances at desired time intervals, eliminating false alerts and alert storms. 

Discover failure trends: Serverless360 offers customizable, plug-and-play dashboards to provide a unified view of the metrics monitored. For instance, visualize business-centric metrics like the response rates of APIs in a single place to avoid latency issues. 

End-to-end tracking: An Azure application will have various Azure services along with other APIM Instances, so tracking how an API call flows through each of those services is required to perform root cause analysis and troubleshoot issues faster than ever. 

App Insights in Azure Monitor just lets you visualize how services interact with each other whereas Serverless360 supports end-to-end tracking along with dependency mapping. 

Auto-correct the status of APIM Products: Automation is very crucial to reduce the manual workload involved in resolving recurring incidents.  

Serverless360 offers unique functionality to monitor the status of APIM products and auto-correct them during unintended interruptions. Also, it lets you configure various automated actions to be triggered whenever there is a threshold violation. 

Optimize costs associated with APIM Instances: Save time and cost by auto-generating documentation on your entire Azure environment/infrastructure. These documents help keep track of the costs associated with your APIM APIs, operations, and products. They enable you to compare the costs incurred across various time periods and to gain a full analysis of the expenditures made for each of those components. 

No room for security breaches: Understanding the importance of governance and auditing in enhancing security, Serverless360 comes with features to audit every action performed on your APIM instances and enable advanced role-based access control. 

Decode App Insights and Log Analytics: Enabling App Insights and Log Analytics can derive useful data on the APIM performance, however, Serverless360 can make this information more usable to the support team. Refer to this blog to learn more: Serverless360 to enable your Security Manager to Azure WAF data 

Offload support: Serverless360 offers an operations-friendly interface that is simple and straightforward for support users to infer the status of a resource and remediate identified issues. This facilitates offloading support from the Azure team, allowing them to focus on business innovation. 

Conclusion 

Having a solid tool for Azure APIM monitoring is mandatory for any organization, as the failure of an API could result in critical performance issues for the whole application.  

But there are plenty of choices available for Azure APIM monitoring and it is important that you choose the most apt one for your business. Thus, this blog discusses the features of native-Azure monitoring tools, their drawbacks, and a solution (Serverless360) to overcome them with consolidated monitoring for Azure APIM APIs, Products, and Operations. 


The value of having a Third-party Monitoring solution for Azure Integration Services

My day-to-day job focuses on enterprise integration between systems in the cloud and/or on-premises. Currently, it involves integration with D365 Finance and Operations (or Finance and Supply Chain Management). One aspect of the integrations is monitoring. When a business has one or more Azure Integration Services running in production, the operations aspect comes into play, especially for integrations that support crucial business processes. The operations team requires the correct procedures, tools, and notifications (alerts) to run these processes. Procedures and receiving notifications are essential; however, team members also need help identifying issues and troubleshooting. Azure provides tools, and so do third-party solutions. This blog post will discuss the value of having third-party monitoring in place, such as Serverless360.

Serverless360

Many of you who read blogs on Serverless360 know what the tool is. It is a service hosted as Software as a Service (SaaS). Operations teams can get access once a subscription is acquired or through a trial. Subsequently, they can leverage the primary Business Applications, Business Activity Monitoring, and Documenter features within the service. We will briefly discuss each feature and its benefits and value in the upcoming paragraphs.

BUSINESS APPLICATIONS

With the Business Applications feature, a team can configure and group integration components into a so-called “Business Application” to monitor. It does not matter where the resources reside – within one or more subscriptions/resource groups.

The overview shown above is the grouping of several resources belonging to an integration solution. In one blink of an eye, a team member of the operations team can see the components’ state and potential issues that need to be addressed. Can the same be done in Azure with available features such as Azure Monitor, including components like Application Insights? Yes, it can be done. However, it takes time to build a dashboard. Furthermore, when operations are divided into multiple tiers, first-tier support professionals might not be familiar with the Azure Portal. In a nutshell, an overview provided by Business Application is not present in Azure out-of-the-box.

As Lex Hegt, Lead Product Consultant at BizTalk360 points out:

Integration solutions can span multiple technologies, resource groups, tags, and even Azure subscriptions. With the Azure portal having the involved components in all those different places, it is hard to keep track of the well-being of those components. Serverless360 helps you utilize the concept of Business Applications. A Business Application is a container to which you can add all the components that belong to the same integration. Once you have added your components to a Business Application, you can set up monitoring for those components, provide access permissions, and administer them.

The Business Application brings another feature that provides an overview of the integration components and dependencies. You might be familiar with the service map feature in Application Insights on a more fine-grained level. The service map in Serverless360 is intended to show the state of each component and dependency on a higher level.

Within a business application, the configuration of monitoring components is straightforward. By selecting the component and choosing the monitoring section, you can set thresholds of performance counters and set the state.

The value of Business Applications is a quick view of the integrations’ state and the ability to dive into any issue quickly, leading to time savings by spending far less time identifying the problem (see, for instance, Application Insights health check with Serverless360, and Integrating Log Analytics in Serverless360). With more time on their hands, operations teams can focus on various other matters during a workday or shift. Furthermore, the ease of use of Business Applications doesn’t require support people in a first-tier support team to have a clear understanding and experience of the Azure portal.
Having a clear overview is one thing. However, it also helps operations teams get notifications or finetune metrics based on thresholds and only receive information when it matters. In addition, it’s essential to keep integrations operational when they support critical business processes, as any outage costs a significant amount of money.

BUSINESS ACTIVITY MONITORING

The second feature of Serverless360 is the end-to-end tracking capability called Business Activity Monitoring (BAM). With the BAM feature, organizations can instrument the Azure resources that support integrations between systems. Through a custom connector and SDK, you can add tracking to the Logic Apps and Azure Functions that are part of your integration. A unique generated transaction instance-id in the first component is carried forward to the subsequent stages in further functions and Logic Apps.

The operations team must do some work to leverage the BAM functionality. They need to set up the hosting of the BAM infrastructure, define the business process, instrument the business process, and add monitoring (see, for instance, Azure Service Bus Logging with BAM – Walk-through). Once that is done, a clear view of the process and its stages is available.

The benefit of the BAM feature is a concise overview of the configured business processes. Moreover, you get an overview of the complete process and potentially see where things go wrong.

AZURE DOCUMENTER

The final feature Serverless360 offers is the Azure Documenter, which is intended to generate documentation. With the Documenter, operations teams can generate documentation for the subscription that contains the integrations. It is good to have a dedicated subscription for integration solutions to better govern and manage Azure resources.

When operations teams want to generate documentation, they can choose between different templates, where to store the document, and the billing range.

The benefit of having documentation of the integrations in a subscription is having a clear overview of the components, details, and costs (consumption). While the Azure portal offers similar capabilities, you will have to go to Cost Management + Billing to see consumption and cost, to Azure Advisor, and to other places. Furthermore, there is no feature to generate documentation to help report the state of the Azure resources.
The value of the Azure Documenter is the flexibility of generating documentation at different levels of granularity. Furthermore, by frequently running the Documenter, you can spot differences like an unexpected increase in cost, provide executive reports, and feed information into your knowledge base for integrations.

Conclusion

Features and benefits of Serverless360 have been outlined in this blog post. Of course, there are many more features; we focused on the ones that provide operations teams the most value. That is a clear overview of the state of integrations in a single pane of glass and the ability to quickly drill down into integration components and spot issues at a fine-grained level. Furthermore, Business Activity Monitoring and the Azure Documenter provide end-to-end tracking and generated documentation.

Serverless360 offers an off-the-shelf product for monitoring not directly available in the Azure Portal. As an organization, you can decide whether to buy a product or build a custom solution, or both to fulfill monitoring requirements for integration solutions. Serverless360 can be the solution for organizations looking for a product to meet their needs. It has unique features which are not directly available in Azure or require a substantial investment to achieve.
For more details and Serverless360 in action, see the webinar of Michael Stephenson: Support Strategy for Event-Driven Integration Architectures and the latest features blog.


The Current State of Microsoft Integration Related Content

With technology changing fast and cloud services evolving more rapidly than their on-premises counterparts, creating and updating content around those services becomes challenging. Microsoft Integration has expanded over the years from the on-premises offering BizTalk Server to multiple cloud services in Azure like Service Bus, Logic Apps, API Management, Azure Functions, Event Hubs, and Event Grid.

Introduction

The server product BizTalk has numerous available content types like Microsoft Docs, blog posts, online recordings, and presentations. Does this also apply to the mentioned Azure services? Yes and no; because of the rapid change, content goes out of date fast, and the people creating the material have a hard time keeping up. At least for me, it’s a challenge to keep up and produce content.

The Questions

Do integration-minded people in the Microsoft ecosystem feel the same way as I do? What’s their view about content? To find out, I created a questionnaire in Google Docs. Furthermore, I sent out a few tweets and a LinkedIn post to encourage people to answer some integration-content-related questions. These questions are:

  • What type of content do you value the most?
  • What Integration Event has your preference?
  • What online content in the integration space do you consume the most?
  • What type integration focused content do you think is valuable for your work as integration professional?
  • Have you attended Integrate London, a local user group meeting or the Global Integration Bootcamp?
  • Does the Global Integration Bootcamp, Integrate London or the local integration focused user group provides value for you?
  • Do have any comments or feedback on Microsoft Integration content?

With the questions above, I hoped to get a little glimpse into the expectations and thoughts people have with regard to integration content: what do they think about the existing content, what do they appreciate, what content types, and through what preferred channel.

The Outcome

The number of responses exceeded 50, which can represent anywhere from one to ten percent of the general population of people working in the integration space. At least, that’s my assumption. However, assessing the actual representation, in the end, is hard. Anyway, let’s review the results of the questionnaire.

The first question was about which specific content type people value the most. It appears that the majority of respondents still favor blogs, one of the older content types from before vlogs, webcasts, and video became more mainstream. Almost 60% favor blogs over any other content type.

In line with the previous question is what content is consumed the most. The response correlates with what is valued. Moreover, static content is preferred over let’s say dynamic content like vlogs or on-line recordings like Integration Mondays or Middleware Fridays. I left out live Events and Channel 9 intentionally, to see how community content would be consumed. Note that Microsoft Docs is open for changes via GitHub, where the community contributes too. Thus this content type is partially maintained by the community.

With another question, I tried to see which event was preferred the most of the three we have available from an integration perspective. A global, centralized one like Integrate, a local user group, or a Global Integration Bootcamp on one day in various venues. Close to 50% favor Integrate London, while local user groups and the boot camp are around 25%.

As a follow-up, I asked whether people had attended any of these events. Most (>75%) respondents attended either a local user group, a Global Integration Bootcamp, or Integrate.

The other questions were open ones. Here, people could more specifically provide feedback on what content they value apart from the channel it is delivered through, how much value an event provides (if attended), and, in one more question, give more general feedback about integration content.

Conclusions

Respondents have strong preferences for content around examples, use-cases (real-world), up-to-date content, architecture, design, and patterns. This feedback was expressed by many in the question “What type integration focused content do you think is valuable for your work as integration professional?”. Furthermore, the answers are reflected in the general feedback they could give about integration content. An example is in the following comments (feedback):

“I would like to see more of how companies are adopting the Azure platform. For instance, a medium to large enterprise integration employing Logic apps and service bus and they came up with the solution architecture, challenges faced, lessons learned.”

Or

“Docs are getting better and better, but finding the right content and keeping up with the release speed of Microsoft appears to be a challenge sometimes.”

For people attending events, the value lies in the opportunity for networking, seeing (new) content, and having interactions with peers in the field, MVPs, and Microsoft. Generally, a local event, a boot camp, or a bigger event tends to be the right place to socialize, learn about new tech, and get a perspective on the integration ecosystem. This perceived view is reflected in the answers about the value of attending an event.

To conclude, people are overall satisfied with the content and how it is delivered. However, people clearly ask for more up-to-date online content and practical guidance for their day-to-day jobs as integrators.

Finally, I’d like to thank everyone for taking the time to answer the questions.
Cheers,

Steef-Jan

Author: Steef-Jan Wiggers

Steef-Jan Wiggers is all in on Microsoft Azure, Integration, and Data Science. He has over 15 years’ experience in a wide variety of scenarios such as custom .NET solution development, overseeing large enterprise integrations, building web services, managing projects, designing web services, experimenting with data, SQL Server database administration, and consulting. Steef-Jan loves challenges in the Microsoft playing field, combining it with his domain knowledge in energy, utility, banking, insurance, healthcare, agriculture, (local) government, bio-sciences, retail, travel, and logistics. He is very active in the community as a blogger, TechNet Wiki author, book author, and global public speaker. For these efforts, Microsoft has recognized him as a Microsoft MVP for the past 8 years.

Year’s review of 2017

The year 2017 has almost come to an end. A year I traveled a lot and spent many hours sitting in planes. In total, I have made close to 50 flights. A bonus at the end of this year is that I have reached gold status with KLM. Thus I can enjoy the benefit of sitting in the lounge like some of my friends. The places I visited in 2017 are Sydney, Auckland, Brisbane, Gold Coast, Melbourne, London, Lisbon, Porto, Oslo, Stockholm, Gothenburg, Zurich, Seattle, Rotterdam, Dublin, Prague, Bellevue, Redmond, Adliswil, Ghent, Mechelen, and Montréal (France).

Public speaking in 2017

In 2017 I have spoken at various conferences in the Netherlands and abroad. The number of attendees varied from 20 to 400. My sessions were on the following topics:

– Logic Apps
– Functions
– Cosmos DB
– Azure Search
– Cognitive Services
– Power BI
– Event Grid
– Service Bus
– API Management
– Web API

Besides speaking at local user groups and conferences, I created videos, webinars, blog posts, and news articles. The blog posts are available on my blog and the BizTalk360 blog. The news articles are for InfoQ, for which I became an editor in November. The latter is something I consider a great accomplishment.

Middleware Friday 2017

Together with Kent, we put out almost 50 episodes for Middleware Friday. In the beginning, Kent published various videos and later asked me to join the effort. The topics for Middleware Friday in 2017 were:

– Logic Apps
– Functions
– Microsoft Flow
– Cognitive Services: Text, Face, and BOTS
– Operation Management Suite (OMS)
– Event Grid
– Service Bus
– API Management
– Cosmos DB
– Azure Data Lake
– Azure Active Directory
– BizTalk Server
– Event Hubs
– SAP Integration

Creating episodes for Middleware Friday or vlogs was a great experience. It is different than public speaking. However, in the past and also this year I did a few Integration Monday sessions. Therefore, recording for a non-visible audience was not new for me.

Global Integration Bootcamp

Another highlight in 2017 was the first integration boot camp, which I organized with Eldert, Glenn, Sven, Rob, Martin, Gijs, and Tomasso. Over 16 locations worldwide joined in a full Saturday of integration joy, spending time on labs and sessions. The event was a success, and we hope to repeat it in 2018.

Integrate London and US

BizTalk360 organized two successful three-day integration conferences in London and Redmond. At both events, I spoke about Logic Apps, discussing their value for enterprises, the developer experience, and their cloud-native nature. Being on stage for a big audience was quite the experience, and I delivered my message.

Personal accomplishments, top five books, and music

Personally, I found 2017 an exciting year with visits to Australia and New Zealand, completing the Rotterdam Marathon and the Royal Parks Half, and the many speaking opportunities. Looking forward to my next two marathons in Tokyo and Chicago in 2018 and new speaking engagements.

The top five books in 2017 are:

– The subtle art of not giving a fuck!
– Sapiens – A Brief History of Humankind
– The Gene – An Intimate History
– Blockchain Basics, a non-technical introduction in 25 steps
– The Phoenix Project

The top five metal albums are:

– Mastodon – Emperor of Sand
– Pallbearer – Heartless
– Caligula’s Horse – In contact
– Enslaved – E
– Leprous – Malina

Thanks, everyone, for your support in reading my blogs and articles and/or attending my sessions and online videos. Enjoy the winter holidays, merry Christmas, and a happy New Year!

P.S. I might have forgotten a thing or two, but that’s why I created Stef’s monthly update.

Cheers,

Steef-Jan


Stef’s Monthly Update – October 2017

The first month at Codit went faster than I expected. I traveled a lot this past month. A few times to Switzerland where I work for a client, London to run the Royal Parks half marathon, Amsterdam the week after to run another, and finally to Seattle/Redmond for Integrate US.

Month October

October was an exciting month with numerous events. First of all, on the 9th of October, I spoke at Codit’s Connect event in Utrecht on the various integration models. Moreover, on that day I was joined by other great speakers like Tom, Richard, Glenn, Sam, Jon, and Clemens. This was the first full day event by Codit on the latest developments in hybrid and cloud integration and around integration concepts shared with the Internet of Things and Azure technology.

A new challenge I accepted this month was writing for InfoQ. Richard asked me if I wanted to write about cloud technology-related topics. So far, two articles are available:

It was not easy writing articles in a more journalistic style, which meant being objective, researching the news, and creating a solid story in 400 to 500 words.

Middleware Friday

Kent and I continued our Middleware Friday episodes in October. Cosmos DB, Microsoft’s globally distributed, multi-model database, offers integration capabilities with a new binding in Azure Functions.

The evolution of Logic Apps continues with the ability to build your own connectors.

Integrate US

On the 20th of October, I flew over the Atlantic Ocean to Seattle to meet up with Tom and JoAnn. We did a nice micro-brewery tour the next day.

That Sunday, we enjoyed seeing the Seahawks play against the New York Giants. After the weekend, it was time to prepare for Integrate US 2017. You can read the following recaps from the BizTalk360 blog:

The recaps were written by Martin, Eldert and myself.

To conclude Integrate US was a great success and well organized again by Team BizTalk360.

Before I went home I spent another weekend in Seattle to enjoy some more American football. On Saturday Kent and I went to see the Washington Huskies play UCLA.

On Sunday we watched Seattle play the Texans, a very close game. After the game, we recorded a Middleware Friday episode in our Seahawks outfits.

Music

My favorite albums in October were:

  • Trivium – The Sin And The Sentence
  • August Burns Red – Phantom Anthem
  • Enslaved – E

It was a busy month, and next month will be no different, with more traveling and the next speaking engagements at DynamicsHub and CloudBrew.

Cheers,

Steef-Jan


Integrate 2017 USA Day 3 Recap

Day 3, the final day of Integrate 2017 USA, at Microsoft Campus building 92. The event has been well received so far and made people happy to see the innovations, investments, and passion Microsoft is bringing to its customers and pro-integration professionals. 

Check out the recap of the events on Day 1 and Day 2 at Integrate 2017 USA.

Moving to Cloud-Native Integration

Richard started the final day of Integrate 2017 USA by stating that the conference actually starts now. He is a great speaker to get the audience pumped about cloud-native integrations. Richard talked about what analysts at Gartner see happening in integration. The trend is that cloud service integration is rising. The first two days of this conference made that apparent with the various talks about Logic Apps, Flow, and Functions. 

What is “cloud-native”? Richard explained that during his talk.

The session’s interesting part was the comparison between the traditional enterprise and cloud-native. The way going forward is “cloud-native”. 

The best way to show what cloud-native really means is by showing demos. Richard showed how to build a Logic App as a data pipeline, the BizTalk REST API available through the Feature Pack, and automating Azure via Service Broker. 

The takeaway from this session was the new way of thinking about integration. Finally, a book is coming out soon that discusses the topic further. 

What’s there & what’s coming in ServiceBus360

Saravana talked in his session about the monitoring challenges of a distributed cloud integration solution. He showed the capabilities of ServiceBus360, a monitoring and management service primarily for Service Bus, now expanded with new features. These new features are intended to mitigate the challenges that arise with a composite application. 

Saravana demoed ServiceBus360 to the audience to showcase the features and how it can help people with their cloud composite integration solution. 

After the demo, Saravana elaborated on the evolution of ServiceBus360. It’s still early days for some of the new capabilities, and he is looking for feedback. Furthermore, he discussed where the service is heading by sharing the roadmap. 

At the end of the presentation, Saravana announced Atomic Scope, a new upcoming product. It will be launched in January 2018, and it is a functional end-to-end business activity tracking and monitoring product for hybrid integration scenarios involving Microsoft BizTalk Server and Azure Logic Apps. 


Signals, Intelligence, and Intelligent Actions

Nick Hauenstein talked about Azure Machine Learning, mind reading and experiments. He promised a fun session!

Nick did a great demo on mind reading, having people ask questions and showing whether his mind was thinking yes or no. For instance: “Will the Astros win the next game against the LA Dodgers in the World Series?“ 

After the demo, Nick explained Machine Learning, which is very relevant in our day and age. Furthermore, he followed that up with another demo teaching the audience how to build and operationalize an Azure ML model and how to invoke it from within either BizTalk Server or Azure Logic Apps. The audience could follow along with Azure ML Studio and build a demo themselves. 

To conclude, this was a great session and introduction to Machine Learning. In the past, I followed the edX course on Data Science, which includes hands-on work with ML Studio. 

Overcoming Challenges When Taking Your Logic App into Production

Stephen W. Thomas, a long-time Integration MVP, took the stage to talk about how to get a Logic App running as a BizTalk guy. During his talk, he shared his experience with building Logic Apps. 

Moreover, Stephen shared some good tips around Logic Apps:

  • Read the available documentation.
  • Don’t be afraid of JSON – the code view is still needed, especially with new features, but most of the time they soon become available in the designer and Visual Studio. Always save or check in before switching to JSON.
  • Make sure to fully configure your actions; otherwise, you cannot save the Logic App.
  • Choose action names carefully; they are hard to change afterward.
  • Try to use only one MS account.
  • If you get odd deployment results, close / re-open your browser.
  • Connections – live at the resource group level. The last deployment wins.
  • Best practices: define all connection parameters in one Logic App. One connection per destination, per resource group.
  • Default retries – all actions retry 4 additional times over 20s intervals. Control this using retry policies.
  • Resource group artefacts – contain the subscription id; use parameters instead.
  • For each loops – limited to 100,000 iterations; they default to multiple concurrent loops, which can be changed to sequential loops.
  • Recurrence – singleton.
  • User permissions (IAM) – multiple roles exist, like the Logic App Contributor and the Logic App Operator.

BizTalk Server Fast & Loud

The final session of the day was by Sandro Pereira, who talked about BizTalk performance. After introducing himself, his nicknames, and his stickers, he dived into his story: keeping your BizTalk jobs running, pricing based on the setup of a BizTalk environment, default installation, and performance. 

How to increase performance, how to decrease response times, BizTalk database optimizations, hard drives, networks, memory, CPU, scaling, Sandro went the distance.

Finally, Sandro did a demo to showcase better performance with BizTalk by doing a lot of tuning. 

It was a fast demo and he finished the talk with some final advice: “Do not have more than 20 host instances!”.

Q&A Session

After Sandro’s session came lunch and a Q&A session with the Pro-Integration and Flow product group. 

It’s a wrap

That was Integrate 2017 USA: two and a half days of integration-focused content, a great set of speakers, and empowered attendees, who will go home with a ton of knowledge. Hopefully, BizTalk360 will be able to organize this event again next year and keep the momentum going. 

Thanks, Saravana and Team BizTalk360. Job well done!!!

Check out the recap of the events on Day 1 and Day 2 at Integrate 2017 USA.


Stef’s Monthly Update – September 2017

September 2017, the last month at Macaw and about to start a new journey at Codit. And I am looking forward to it. It will mean more travelling, speaking engagements, and other cool things. #Cyanblue is the new blue.

Below is a picture of Tomasso, Eldert, me, Dominic (NoBuG), and Kristian in Oslo (top floor of the Communicate office).

I did a talk about Event Grid at NoBug wearing my Codit shirt for the first time.

Month September

September was a month filled with new challenges. I joined the Middleware Friday team and released two episodes (31 and 33):

I really enjoyed doing these types of videos and look forward to creating a few more, as I will be presenting an episode every other week. Kent will continue with episodes focused on Microsoft Cloud offerings such as Microsoft Flow, and my focus will be integration in general.

In September I did a few blog posts on my own blog and BizTalk360 blog:

This month I only read one book. Yet it was a good one: The Subtle Art of Not Giving a F*ck by Mark Manson.

Music

My favorite albums in September were:

  • Chelsea Wolfe – Hiss Spun
  • Satyricon – Deep Calleth Upon Deep
  • Cradle Of Filth – Cryptoriana: The Seductiveness Of Decay
  • Enter Shikari – The Spark
  • Myrkur – Mareridt
  • Arch Enemy – Will To Power
  • Wolves In The Throne Room – Thrice Woven

Running

In September I continued with training and preparing for next month’s half marathons in London and Amsterdam.

October will be filled with speaking engagements ranging from Integration Monday to Integrate US 2017 in Redmond.

Cheers,

Steef-Jan


Route Azure Storage Events to multiple subscribers with Event Grid

A couple of weeks ago, the Azure Event Grid service became available in public preview. This service enables centralized management of events in a uniform way. Moreover, it scales with you when the number of events increases. This is made possible by the foundation Event Grid relies on: Service Fabric. Not only does it auto-scale, you also do not have to provision anything besides an Event Topic to support custom events (see the blog post Routing an Event with a custom Event Topic).

Event Grid is serverless; therefore, you only pay for each action (ingress events, advanced matches, delivery attempts, management calls). The price is 30 cents per million actions during the preview and will be 60 cents once the service is GA.

Azure Event Grid can be described as an event broker that has one or more event publishers and subscribers. Event publishers are currently Azure Blob Storage, resource groups, subscriptions, Event Hubs, and custom events; more will be available in the coming months, like IoT Hub, Service Bus, and Azure Active Directory. On the other side, there are consumers of events (subscribers) like Azure Functions, Logic Apps, and WebHooks, and more will become available there too, with Azure Data Factory, Service Bus, and Storage Queues for instance.

To view Microsoft’s Roadmap for Event Grid please watch the Webinar of the 24th of August on YouTube.

Event Grid Preview for Azure Storage

Currently, to capture Azure Blob Storage events you will need to register your subscription through a preview program. Once you have registered your subscription, which could take a day or two, you can leverage Event Grid for Azure Blob Storage, but only in the West Central US region!

Registered Azure Storage in an Azure subscription for Event Grid.

The Microsoft documentation on Event Grid has a section “Reacting to Blob storage events”, which contains a walk-through to try out the Azure Blob Storage as an event publisher.

Scenario

Having registered the subscription to the preview program, we can start exploring its capabilities. Since the landing page of Event Grid provides us some sample scenarios, let’s try out the serverless architecture sample, where one can use Event Grid to instantly trigger a Serverless function to run image analysis each time a new photo is added to a blob storage container. Hence, we will build a demo according to the diagram below that resembles that sample.

Image Analysis Scenario with Event Grid.

An image will be uploaded to a Storage blob container, which is the event source (publisher). The blob container belongs to a Storage Account with the Event Grid capability. The Event Grid has three subscribers: a WebHook (Request Bin) to capture the raw output of the event, a Logic App to notify me that a blob has been created, and an Azure Function that analyses the image added to blob storage by extracting the URL from the event message and using it to retrieve and analyse the actual image.

Intelligent routing

The screenshot below depicts the subscriptions on the events of the Blob Storage account. The WebHook subscribes to every event, while the Logic App and Azure Function are only interested in the BlobCreated event, in a particular container (prefix filter) and of a particular type (suffix filter).

Event subscriptions on the Blob Storage account.

Besides being centrally managed, Event Grid offers intelligent routing, which is its core feature. You can filter on event type or on a subject pattern (prefix and suffix). The filters let subscribers indicate which type of event and/or subject they are interested in. For our scenario, the event subscription for the Azure Function is as follows.

  • Event Type: Blob Created
  • Prefix: /blobServices/default/containers/testcontainer/
  • Suffix: .jpg

The prefix filter maps to subjectBeginsWith and is matched against the start of the subject field of the event; the suffix filter maps to subjectEndsWith and is matched against the end of that same subject. Consequently, in the event shown in the next section, you will see that the subject carries the specified prefix and suffix. See also the Event Grid subscription schema in the documentation, which explains the properties of the subscription schema. The subscription schema of the function is as follows:

<pre>{
  "properties": {
    "destination": {
      "endpointType": "webhook",
      "properties": {
        "endpointUrl": "https://imageanalysisfunctions.azurewebsites.net/api/AnalyseImage?code=Nf301gnvyHy4J44JAKssv23578D5D492f7KbRCaAhcEKkWw/vEM/9Q=="
      }
    },
    "filter": {
      "includedEventTypes": [ "blobCreated" ],
      "subjectBeginsWith": "/blobServices/default/containers/testcontainer/",
      "subjectEndsWith": ".jpg",
      "subjectIsCaseSensitive": "true"
    }
  }
}</pre>
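For comparison, roughly the same subscription can be scripted instead of configured in the portal. This is a sketch assuming the Az.EventGrid PowerShell module; the subscription ID and function key are placeholders:

<pre># Sketch: create the Azure Function event subscription with the same filters as above
$storageId = "/subscriptions/<subscription-id>/resourceGroups/rgtest/providers/Microsoft.Storage/storageAccounts/teststorage666"

New-AzEventGridSubscription `
    -ResourceId $storageId `
    -EventSubscriptionName "analyse-image" `
    -Endpoint "https://imageanalysisfunctions.azurewebsites.net/api/AnalyseImage?code=<function-key>" `
    -IncludedEventType "Microsoft.Storage.BlobCreated" `
    -SubjectBeginsWith "/blobServices/default/containers/testcontainer/" `
    -SubjectEndsWith ".jpg"</pre>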

Azure Function Event Handler

The Azure Function is only interested in a BlobCreated event with a particular subject and content type (a .jpg image). This becomes apparent once you inspect the incoming event in the function.

<pre>[{
  "topic": "/subscriptions/0bf166ac-9aa8-4597-bb2a-a845afe01415/resourceGroups/rgtest/providers/Microsoft.Storage/storageAccounts/teststorage666",
  "subject": "/blobServices/default/containers/testcontainer/blobs/NinoCrudele.jpg",
  "eventType": "Microsoft.Storage.BlobCreated",
  "eventTime": "2017-09-01T13:40:33.1306645Z",
  "id": "ff28299b-001e-0045-7227-23b99106c4ae",
  "data": {
    "api": "PutBlob",
    "clientRequestId": "206999d0-8f1b-11e7-a160-45670ee5a425",
    "requestId": "ff28299b-001e-0045-7227-23b991000000",
    "eTag": "0x8D4F13F04C48E95",
    "contentType": "image/jpeg",
    "contentLength": 32905,
    "blobType": "BlockBlob",
    "url": "https://teststorage666.blob.core.windows.net/testcontainer/NinoCrudele.jpg",
    "sequencer": "0000000000000AB100000000000437A7",
    "storageDiagnostics": {
      "batchId": "f11739ce-c83d-425c-8a00-6bd76c403d03"
    }
  }
}]</pre>
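A webhook-style handler receives this payload as a JSON array, so extracting the blob URL is a matter of parsing the body and filtering on the event type. A minimal PowerShell sketch, where $requestBody is assumed to hold the raw request body:

<pre># Sketch: pull the blob URL out of an incoming Event Grid payload ($requestBody is the raw JSON body)
$events = $requestBody | ConvertFrom-Json

foreach ($evt in $events) {
    if ($evt.eventType -eq "Microsoft.Storage.BlobCreated" -and $evt.subject -like "*.jpg") {
        $blobUrl = $evt.data.url
        Write-Output "Blob created: $blobUrl"
    }
}</pre>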

The same intelligence applies to the Logic App, which is interested in the same event. The WebHook subscribes to all events and has no filters.

The scenario solution

The solution contains a storage account (blob), a subscription registered for the Event Grid Storage preview, a Request Bin (WebHook), a Logic App and a Function App containing an Azure Function. The Logic App and Azure Function subscribe to the BlobCreated event with the filter settings described above.

The Logic App subscribes to the event once the trigger action is defined. The definition is shown in the picture below.

Event Grid properties in a Logic App Trigger Action.

Note that the resource name has to be specified explicitly (custom value), just as the resource type Microsoft.Storage has been set explicitly. The resource types currently available out of the box are Resource Groups, Subscriptions, Event Grid Topics and Event Hub Namespaces, while Storage is still in a preview program; therefore, registration as described earlier is required. With the above configuration, the desired events can be evaluated and processed. In the case of the Logic App, it parses the event and sends an email notification.

Image Analysis Function

The Azure Function is interested in the same event. As soon as the event is pushed to Event Grid once a blob has been created, the function processes it. The URL in the event, https://teststorage666.blob.core.windows.net/testcontainer/NinoCrudele.jpg, is used to analyse the image. The image is a picture of my good friend Nino Crudele.

The uploaded image of Nino Crudele.

This image will be streamed from the function to the Cognitive Services Computer Vision API. The result of the analysis can be seen in the monitor tab of the Azure Function.
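The post streams the image bytes to the API; the analyze endpoint also accepts a JSON body containing just the image URL, which keeps a sketch simple. The region, key variable and visual features below are assumptions for illustration:

<pre># Sketch: analyse the blob with the Computer Vision "analyze" endpoint, passing the image URL
$visionUri = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze?visualFeatures=Description,Faces"
$headers   = @{ "Ocp-Apim-Subscription-Key" = $env:VISION_KEY }
$body      = @{ url = $blobUrl } | ConvertTo-Json   # $blobUrl extracted from the event, as shown earlier

$analysis = Invoke-RestMethod -Method Post -Uri $visionUri -Headers $headers `
    -ContentType "application/json" -Body $body

# Print the generated caption(s) with their confidence score
$analysis.description.captions | ForEach-Object { "{0} ({1:P0})" -f $_.text, $_.confidence }</pre>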

Image analysis result in the monitor tab of the Azure Function.

The analysis concludes with high confidence that Nino is smiling for the camera. We, as humans, would say this is obvious; however, do take into consideration that a computer is making the analysis. Hence, the Computer Vision API is a form of Artificial Intelligence (AI).

The Logic App in our scenario parses the event and sends out an email. The Request Bin shows the raw event as-is. And if I, for instance, delete a blob, that event will only be caught by the WebHook (Request Bin), as it is the only subscriber interested in every event on the Storage account.

The raw event captured by the WebHook (Request Bin).

Summary

Azure Event Grid is unique in its kind, as no other cloud vendor has this type of service that can handle events in a uniform and serverless way. It is still early days, as the service has only been in preview for a few weeks. However, with the expansion of event publishers and subscribers, management capabilities and other features, it will mature over the next couple of months.

The service is currently only available in West Central US and West US. Over the course of time it will become available in every region, and once it reaches GA the price will increase.

Working with a Storage Account as a source (publisher) of events unlocked new insights into the Event Grid mechanisms. Moreover, it shows the benefits of having one central service in Azure for events. The pub-sub model and push delivery of events are the key differentiators from the other two services, Service Bus and Event Hubs: you no longer have to poll for events and/or develop a solution for that yourself. To conclude, the Service Bus team has completed the picture for messaging and event handling.

Author: Steef-Jan Wiggers

Steef-Jan Wiggers has over 15 years’ experience as a technical lead developer, application architect and consultant, specializing in custom applications, enterprise application integration (BizTalk), Web services and Windows Azure. Steef-Jan is very active in the BizTalk community as a blogger, Wiki author/editor, forum moderator, writer and public speaker in the Netherlands and Europe. For these efforts, Microsoft has recognized him as a Microsoft MVP for the past 5 years.