Microsoft Integration Weekly Update: May 21, 2018

Do you find it difficult to keep up to date with all the frequent updates and announcements in the Microsoft Integration platform?

The Integration weekly update can be your solution. It’s a weekly update on the topics related to Integration – enterprise integration, robust & scalable messaging capabilities and Citizen Integration capabilities empowered by the Microsoft platform to deliver value to the business.

If you want to receive these updates weekly, then don’t forget to Subscribe!

How to get started with iPaaS design & development in Azure?

Feedback

I hope this is helpful. Please feel free to reach out to me with your feedback and questions.

API Management CI/CD using ARM Templates – Products, users and groups

This is the second post in my series on setting up CI/CD for Azure API Management using Azure Resource Manager templates. In the previous post we created our API Management instance and set up our build and release pipelines. In this post we will add custom products, users and groups to our API Management instance, which will be used to set up our policies and access to our APIs.

API Management products, users and groups

The posts in this series are the following; this list will be updated as the posts are published.

For this post, we will be adding a new user to the API Management instance we created in the previous blog post in this series. This user represents a client developer from the Contoso company, who will be using the APIs we will define later on. In this scenario, Contoso consumes our APIs in their own processes. The user will be placed into a group which represents the Contoso company. In a real-life scenario, this group would contain users for all the developers and services of this particular client. And finally we will create a product for the Contoso company as well, and link the group to the product. The product is where we will set up policies and quotas, so we can limit the usage of our services.

As explained in the first post in this series, we will be using different repositories for the various parts of our API Management setup. In that post, we already showed how to set up a repository and clone it to our local machine. For this post, we will create a new repository, in which we will create the ARM template for our products, users and groups. Create the API Management products, users and groups repository and clone it to your machine.

Create new repository

API Management products, users and groups repository

Now we will start by creating the ARM template for adding a user for Contoso, who will be consuming our APIs. In your cloned repository, create a new file and name it products-users-groups.json, and add the following ARM template contents to the file.

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "APIManagementInstanceName": {
          "type": "string",
          "defaultValue": "MyAPIManagementInstance"
        }
    },
    "variables": {},
    "resources": [
        {
            "type": "Microsoft.ApiManagement/service/users",
            "name": "[concat(parameters('APIManagementInstanceName'), '/john-smith-contoso-com')]",
            "apiVersion": "2017-03-01",
            "scale": null,
            "properties": {
                "firstName": "John",
                "lastName": "Smith",
                "email": "john.smith@contoso.com",
                "state": "active",
                "note": "Developer working for Contoso, one of the consumers of our APIs",
                "confirmation": "invite"
            },
            "dependsOn": []
        }
    ]
}

What we do here is create a new user (John Smith) and add them to our API Management instance. We pass the name of the instance as a parameter, so we can override it from our deployment pipeline. As you will notice, we don’t set anything in dependsOn, as the API Management instance has been created from another template. Also note the “confirmation”: “invite” line, which makes sure the user receives an email at the specified address to finish their registration by setting their own password.
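
If you want to try the template outside the pipeline, you could also deploy it manually with the Azure CLI; a minimal sketch, assuming the API Management instance already exists and using placeholder resource group and instance names:

az group deployment create --resource-group MyResourceGroup --template-file products-users-groups.json --parameters APIManagementInstanceName=MyAPIManagementInstance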

Next we will expand our ARM template to also create the group, so let’s update the ARM template to the following.

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "APIManagementInstanceName": {
          "type": "string",
          "defaultValue": "MyAPIManagementInstance"
        }
    },
    "variables": {},
    "resources": [
        {
            "type": "Microsoft.ApiManagement/service/users",
            "name": "[concat(parameters('APIManagementInstanceName'), '/john-smith-contoso-com')]",
            "apiVersion": "2017-03-01",
            "scale": null,
            "properties": {
                "firstName": "John",
                "lastName": "Smith",
                "email": "john.smith@contoso.com",
                "state": "active",
                "note": "Developer working for Contoso, one of the consumers of our APIs",
                "confirmation": "invite"
            },
            "dependsOn": []
        },
        {
            "type": "Microsoft.ApiManagement/service/groups",
            "name": "[concat(parameters('APIManagementInstanceName'), '/contosogroup')]",
            "apiVersion": "2017-03-01",
            "scale": null,
            "properties": {
                "displayName": "ContosoGroup",
                "description": "Group containing all developers and services from Contoso who will be consuming our APIs",
                "type": "custom",
                "externalId": null
            },
            "dependsOn": []
        },
        {
            "type": "Microsoft.ApiManagement/service/groups/users",
            "name": "[concat(parameters('APIManagementInstanceName'), '/contosogroup/john-smith-contoso-com')]",
            "apiVersion": "2017-03-01",
            "scale": null,
            "properties": {},
            "dependsOn": [
                "[resourceId('Microsoft.ApiManagement/service/groups', parameters('APIManagementInstanceName'), 'contosogroup')]"
            ]
        }
    ]
}

What we did here was add two additional resources: one for the ContosoGroup group, and one to link the user to the group.

And finally, we will add a product for the Contoso consumers. On this product we will set a throttling policy, so these consumers are limited in the number of calls they can make to our APIs. Update the ARM template as follows; this will also be the final version of this ARM template.

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "APIManagementInstanceName": {
          "type": "string",
          "defaultValue": "MyAPIManagementInstance"
        }
    },
    "variables": {},
    "resources": [
        {
            "type": "Microsoft.ApiManagement/service/users",
            "name": "[concat(parameters('APIManagementInstanceName'), '/john-smith-contoso-com')]",
            "apiVersion": "2017-03-01",
            "scale": null,
            "properties": {
                "firstName": "John",
                "lastName": "Smith",
                "email": "john.smith@contoso.com",
                "state": "active",
                "note": "Developer working for Contoso, one of the consumers of our APIs",
                "confirmation": "invite"
            },
            "dependsOn": []
        },
        {
            "type": "Microsoft.ApiManagement/service/groups",
            "name": "[concat(parameters('APIManagementInstanceName'), '/contosogroup')]",
            "apiVersion": "2017-03-01",
            "scale": null,
            "properties": {
                "displayName": "ContosoGroup",
                "description": "Group containing all developers and services from Contoso who will be consuming our APIs",
                "type": "custom",
                "externalId": null
            },
            "dependsOn": []
        },
        {
            "type": "Microsoft.ApiManagement/service/groups/users",
            "name": "[concat(parameters('APIManagementInstanceName'), '/contosogroup/john-smith-contoso-com')]",
            "apiVersion": "2017-03-01",
            "scale": null,
            "properties": {},
            "dependsOn": [
                "[resourceId('Microsoft.ApiManagement/service/groups', parameters('APIManagementInstanceName'), 'contosogroup')]"
            ]
        },
        {
            "type": "Microsoft.ApiManagement/service/products",
            "name": "[concat(parameters('APIManagementInstanceName'), '/contosoproduct')]",
            "apiVersion": "2017-03-01",
            "scale": null,
            "properties": {
                "displayName": "ContosoProduct",
                "description": "Product which will apply the high-over policies for developers and services of Contoso.",
                "subscriptionRequired": true,
                "approvalRequired": true,
                "state": "published"
            },
            "dependsOn": []
        },
        {
            "type": "Microsoft.ApiManagement/service/products/groups",
            "name": "[concat(parameters('APIManagementInstanceName'), '/contosoproduct/contosogroup')]",
            "apiVersion": "2017-03-01",
            "scale": null,
            "properties": {},
            "dependsOn": [
                "[resourceId('Microsoft.ApiManagement/service/products', parameters('APIManagementInstanceName'), 'contosoproduct')]",
                "[resourceId('Microsoft.ApiManagement/service/groups', parameters('APIManagementInstanceName'), 'contosogroup')]"
            ]
        },
        {
            "type": "Microsoft.ApiManagement/service/subscriptions",
            "name": "[concat(parameters('APIManagementInstanceName'), '/5ae6ed2358c2795ab5aaba68')]",
            "apiVersion": "2017-03-01",
            "scale": null,
            "properties": {
                "userId": "[resourceId('Microsoft.ApiManagement/service/users', parameters('APIManagementInstanceName'), 'john-smith-contoso-com')]",
                "productId": "[resourceId('Microsoft.ApiManagement/service/products', parameters('APIManagementInstanceName'), 'contosoproduct')]",
                "displayName": "ContosoProduct subscription",
                "state": "active"
            },
            "dependsOn": [
                "[resourceId('Microsoft.ApiManagement/service/users', parameters('APIManagementInstanceName'), 'john-smith-contoso-com')]",
                "[resourceId('Microsoft.ApiManagement/service/products', parameters('APIManagementInstanceName'), 'contosoproduct')]"
            ]
        },
        {
            "type": "Microsoft.ApiManagement/service/products/policies",
            "name": "[concat(parameters('APIManagementInstanceName'), '/contosoproduct/policy')]",
            "apiVersion": "2017-03-01",
            "scale": null,
            "properties": {
                "policyContent": "<policies>rn  <inbound>rn    <base />rn    <rate-limit calls="20" renewal-period="60" />rn  </inbound>rn  <backend>rn    <base />rn  </backend>rn  <outbound>rn    <base />rn  </outbound>rn  <on-error>rn    <base />rn  </on-error>rn</policies>"
            },
            "dependsOn": [
                "[resourceId('Microsoft.ApiManagement/service/products', parameters('APIManagementInstanceName'), 'contosoproduct')]"
            ]
        }
    ]
}

The steps we added in this template create the ContosoProduct product, add ContosoGroup to the product, create a subscription for the Contoso user John Smith and link it to the product, and finally create a policy which implements throttling at the product level. Commit and push this final ARM template to your repository.
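
For readability, the escaped policyContent above corresponds to the following policy XML, which limits each subscription to 20 calls per 60-second window:

<policies>
  <inbound>
    <base />
    <rate-limit calls="20" renewal-period="60" />
  </inbound>
  <backend>
    <base />
  </backend>
  <outbound>
    <base />
  </outbound>
  <on-error>
    <base />
  </on-error>
</policies>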

Now that we have finished our template, we will create a new build definition called API Management CI-CD ARM-CI – Products, users and groups. The exact steps for creating a build pipeline have already been described in the previous blog post. Make sure to select the correct Git repository.

Select correct GIT repository

Set up the build pipeline to validate the ARM template using a continuous integration trigger and publish it if the template is correct, just like in the previous post. Once done, make sure to save and queue the build definition.

Set up build pipeline for validation

The next step will be to set up the deployment pipeline, which has also been thoroughly described in the previous post. Create a new continuous deployment triggered release definition called API Management products, users and groups and use the artifact we just published from our build pipeline.

Create release pipeline with artifact published from build pipeline

Set up the test environment to be triggered as soon as the artifact is available, and deploy to your test environment.

Deploy to test environment

Clone the Test environment and update it to deploy to your production environment. Make sure to include an approval before the deployment is done.

Deploy to production environment after approval

We have now completed our CI/CD process for the products, users and groups. To test it, we just need to make a change to the ARM template on our local machine and check it in, after which our build and deployment pipelines will kick off and update our API Management instance.
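
For example, a typical check-in from the command line looks like this (the commit message is just illustrative):

git add products-users-groups.json
git commit -m "Add throttling policy to ContosoProduct"
git push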

API Management instance has been updated

Microsoft Integration Weekly Update: May 14, 2018

Do you find it difficult to keep up to date with all the frequent updates and announcements in the Microsoft Integration platform?

The Integration weekly update can be your solution. It’s a weekly update on the topics related to Integration – enterprise integration, robust & scalable messaging capabilities and Citizen Integration capabilities empowered by the Microsoft platform to deliver value to the business.

If you want to receive these updates weekly, then don’t forget to Subscribe!

Feedback

I hope this is helpful. Please feel free to reach out and let me know your feedback on this Integration weekly series.

Partner Post: Alerts and Analytics to Help with BizTalk Implementations using Enkay PRO

Microsoft has a lot of great partners, and one of our missions is to highlight them. If you want to do a partner post on our team blog, reach out to us either over email or through comments on this post.

This post is written by Microsoft Gold Partner Enkay Tech (www.enkaytech.com) to highlight a new product to help with monitoring BizTalk solutions.

When your BizTalk environment is running well and within capacity, your total cost of ownership (TCO) is low. However, without proper monitoring, failures could occur that could significantly increase TCO. For example, when you receive unusually large message payloads from a customer or when new applications are deployed that cause a significant increase in load, or when SQL jobs have stopped/failed, BizTalk could exceed optimal utilization of available resources. If these failures are not resolved in a timely fashion, BizTalk messaging throughput could decrease, integration durations could increase, and timeouts could occur. To recover, one may need to do some of the following tasks, all of which result in an increase in TCO:

  1. Suspended messages may need to be resumed or messages routed to the exceptions database may need to be recovered and resubmitted.
  2. If BizTalk services are down, external applications cannot communicate with BizTalk, and these applications will need to recover and replay their requests once BizTalk services are back online.
  3. Perform cleanup of data (e.g. roll back transactions).

Watch our Enkay Tech webcast (https://www.youtube.com/watch?v=EUQa7gCeatg) on May 22nd at 1:30 pm Central Standard Time to see how Enkay PRO can help reduce TCO. For example, you will see how your operations team can view graphs that continuously display application activity including message counts, message sizes, throughput and durations. By using these graphs, the team can get visibility into performance issues that could impact business service level agreements (SLA). They can perform deep analysis by viewing historical data to quickly identify issues that caused the failure. They can search for details on EDI transactions that are being sent to and being received from trading partners. With proactive monitoring and alerting, Enkay PRO can help customers see the value BizTalk is delivering and verify that business SLAs are being met.

No license fees are required to install and use Enkay PRO for qualified customers. You can download and use Enkay PRO for any number of users, any number of servers, and any number of environments. Free support for ninety (90) days is provided, which includes installation and training. Additional paid support for Enkay PRO software is available and includes customization and consulting services. For more information visit: http://www.enkaytech.com/enkaypro

Backup BizTalk Server job failed. Executed as user: NT SERVICE\SQLSERVERAGENT. Could not connect to server '' because '' is not defined as a remote login at the server.

In my last post, I described how you can fix the following issue:

Executed as user: BIZDEMO\saspereira. Could not find server ‘BIZDEMO’ in sys.servers. Verify that the correct server name was specified. If necessary, execute the stored procedure sp_addlinkedserver to add the server to sys.servers. [SQLSTATE 42000] (Error 7202). The step failed.

See more about this error here: https://blog.sandro-pereira.com/2018/05/08/backup-biztalk-server-job-failed-could-not-find-server-in-sys-servers/

And I told you that this was not the only issue you would find. The truth is that if you try to execute the Backup BizTalk Server job after fixing that last problem, the job will fail again, this time with the following error:

Executed as user: NT SERVICE\SQLSERVERAGENT. Could not connect to server ‘BIZDEMO’ because '' is not defined as a remote login at the server. Verify that you have specified the correct login name. [SQLSTATE 42000] (Error 18483). The step failed.

Backup BizTalk Server job failed - Could not connect to server is not defined as a remote login at the server

Cause

This error can be related to several possible problems, and a common solution you will find in SQL Server forums and posts is that dropping and re-creating the linked server will resolve it.

However, and forgive me in advance for my SQL ignorance, I don’t have any linked server configured; my BizTalk Server virtual machine is a simple standalone machine with BizTalk and SQL Server installed… so it couldn’t be that problem!

Backup BizTalk Server job failed - Could not connect to server is not defined as a remote login at the server: Linked Server

So, after several tests, I started to think like a true old-school technical guy:

  • It doesn’t work? Did you try restarting it?

Or as a true BizTalk developer guy:

  • Did you restart the host after your solution deployment?

And I thought that it might be necessary to restart the SQL services so that all the settings from the previous command execution – the sp_dropserver and sp_addserver commands needed to fix the issue reported in the previous post – take full effect.

And it was!

Solution

To fix this issue, we need to restart the SQL services.
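
If you prefer the command line over the Services console, the restart can be done like this; a sketch assuming the default instance, where a named instance would use MSSQL$<name> instead of MSSQLSERVER:

net stop SQLSERVERAGENT
net stop MSSQLSERVER
net start MSSQLSERVER
net start SQLSERVERAGENT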

After you restart the SQL Server Services you will be able to run the Backup BizTalk Server job successfully.

Author: Sandro Pereira

Sandro Pereira lives in Portugal and works as a consultant at DevScope. In the past years, he has been working on implementing Integration scenarios both on-premises and cloud for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO etc. He is a regular blogger, international speaker, and technical reviewer of several BizTalk books all focused on Integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices”. He has been awarded MVP since 2011 for his contributions to the integration community.

Backup BizTalk Server job failed. Executed as user ''. Could not find server in sys.servers. Verify that the correct server name was specified

After you install and configure a BizTalk Server Azure virtual machine, and properly configure the main BizTalk Server job, Backup BizTalk Server (BizTalkMgmtDb), the job will fail every time you run it with the following error:

Executed as user: BIZDEMO\saspereira. Could not find server ‘BIZDEMO’ in sys.servers. Verify that the correct server name was specified. If necessary, execute the stored procedure sp_addlinkedserver to add the server to sys.servers. [SQLSTATE 42000] (Error 7202). The step failed.

BizTalk Backup Job Could not find server sys.servers

Note: this will not be the only error that you will find, but let’s go step by step; we will address the other errors in separate blog posts.

Cause

Well, I guess (in fact, I’m sure) that this error happens because we are using a default Microsoft BizTalk Server image with all the components already installed: Visual Studio, BizTalk Server and, especially, SQL Server.

This problem happens because the SQL Server name present in the sys.servers table – which was probably specified during setup of this image – is defined as “localhost” rather than the actual VM name you created in the Azure Portal. For the job to work properly:

  • The SQL Server name specified during setup of the Azure virtual machine must match the name in sys.servers of the SQL Server instance

To check the name present in the sys.servers table, run the following SQL script:

SELECT name FROM sys.servers
GO
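
As an extra check, you can compare this with the name SQL Server reports for itself; on an affected machine the two will not match:

SELECT @@SERVERNAME
GO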

Backup BizTalk Server job Could not find server in sys.servers: check the name

Solution

To fix this issue, we need to remove the old server name and add the new server name in SQL. You can do that by:

  • Log in to the SQL Server machine and open SSMS.
  • Connect to the instance and run the query below, which will remove the old server name.
sp_dropserver 'localhost'
GO
  • Then run the query below to add the new server name, which in my case was “BIZDEMO”:
sp_addserver 'BIZDEMO', local
GO

After you execute these scripts, this particular problem will be solved.

Author: Sandro Pereira

Sandro Pereira lives in Portugal and works as a consultant at DevScope. In the past years, he has been working on implementing Integration scenarios both on-premises and cloud for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO etc. He is a regular blogger, international speaker, and technical reviewer of several BizTalk books all focused on Integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices”. He has been awarded MVP since 2011 for his contributions to the integration community.

Working with CloudEvents in Azure Event Grid

Recently Microsoft announced Azure Event Grid, a highly scalable serverless event-driven offering allowing us to implement publish and subscribe patterns. Event-driven scenarios are becoming more common by the day, which means that we see these types of integrations increasing a lot as well. Often applications define their own message formats for their events; however, with the recent announcement of native support in Azure Event Grid for CloudEvents, our lives should be made a lot easier. CloudEvents is a standard for working with events across platforms, and gives us a specification for describing event data in a common way. This allows any platform or application which works with events to implement a common format, enabling easy integration and interoperability, for example between Azure, AWS and Oracle. The specification is still under active development, and Microsoft is one of the big contributors, especially Clemens Vasters, Lead Architect on Azure Messaging Services.

CloudEvents logo

In this blog post we will be looking into Event Grid’s support for CloudEvents, and how to set this up. The specification for the CloudEvents message format can be found on GitHub, and how this maps to Event Grid’s own schema can be found on Microsoft Docs. For this post we will use the application created in this blog post, which generates events when an order has been placed, as well as when a repair has been requested. These events will be handled by a Logic App, which will send out an email. In a real-life scenario we could, for example, use this Logic App to place the order with the ship’s supplier. And because we are using the CloudEvents format, the application can easily integrate with any system which supports this new specification, so we are not bound to just Azure.

Send event from custom application to Logic Apps

Currently, support for CloudEvents in Event Grid is still in preview and only available in a select group of regions (West Central US, Central US and North Europe). To use it, we need to enable an extension in the Azure CLI by giving the following command.

az extension add --name eventgrid
Enable Event Grid extension

We can now create our Event Grid topic, where we will receive the events. Currently this is not yet supported in the portal, so we will stay in our Azure CLI, and give the following commands.

az group create -l northeurope -n cloudEventsResourceGroup
az eventgrid topic create --name cloudevents -l northeurope -g cloudEventsResourceGroup --input-schema cloudeventv01schema

The first command creates the resource group, while the second command creates the Event Grid topic. Note the input-schema switch, which allows us to set the CloudEvents format.

Create Event Grid topic

When the topic has been created, go to the Event Grid Topics blade in the portal, open the topic we just created, and grab the Topic Endpoint; we will need this later on.
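
The endpoint will look something like https://cloudevents.northeurope-1.eventgrid.azure.net/api/events, with the exact host depending on the topic name and region you chose.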

Save the topic endpoint for later use

Switch to the Access keys for the topic, and grab one of the keys; we will need this later as well.

Also save one of the keys for later use

Next we will create the application which will send the events to our custom topic which we just created. For ease of this demo, this will just be a simple console application, but in a real life solution this could be any type of system. Start by creating a new solution in Visual Studio for our application.

Create console app solution

Data Classes

Add the following data classes, which describe the orders and repairs, as explained in this blog post.

/// <summary>
/// Event sent for a specific ship.
/// </summary>
public class ShipEvent
{
    /// <summary>
    /// Name of the ship.
    /// </summary>
    public string Ship { get; set; }
 
    /// <summary>
    /// Type of event.
    /// </summary>
    public string Type { get; set; }
}
/// <summary>
/// Used to place an order.
/// </summary>
public class Order : ShipEvent
{
    /// <summary>
    /// Name of the product.
    /// </summary>
    public string Product { get; set; }
 
    /// <summary>
    /// Number of items to be ordered.
    /// </summary>
    public int Amount { get; set; }
 
    /// <summary>
    /// Constructor.
    /// </summary>
    public Order()
    {
        Type = "Order";
    }
}
/// <summary>
/// Used to request a repair.
/// </summary>
public class Repair : ShipEvent
{
    /// <summary>
    /// Device which needs to be repaired.
    /// </summary>
    public string Device { get; set; }
 
    /// <summary>
    /// Description of the defect.
    /// </summary>
    public string Description { get; set; }
 
    /// <summary>
    /// Constructor.
    /// </summary>
    public Repair()
    {
        Type = "Repair";
    }
}

CloudEvents class

Add the CloudEvents class, which will be used to create a CloudEvents message which we will send to our Azure Event Grid. The schema for a CloudEvents message can be found here.

/// <summary>
/// Representation of the CloudEvents specification, to be sent to Event Grid Topic.
/// </summary>
class CloudEvents
{
        /// <summary>
        /// This will be used to update the Source and Data properties.
        /// </summary>
        public ShipEvent UpdateProperties
        {
                set
                {
                        Source = $"{Program.TOPIC}#{value.Ship}/{value.Type}";
                        Data = value;
                }
        }
 
        /// <summary>
        /// Gets the version number of the CloudEvents specification which has been used.
        /// </summary>
        public string CloudEventsVersion { get; }
 
        /// <summary>
        /// Gets the registered event type for this event source.
        /// </summary>
        public string EventType { get; }
 
        /// <summary>
        /// Gets the version of the eventType.
        /// </summary>
        public string EventTypeVersion { get; }
 
        /// <summary>
        /// Gets the event producer properties.
        /// </summary>
        public string Source { get; set; }
 
        /// <summary>
        /// Gets the unique identifier for the event.
        /// </summary>
        public string EventID { get; }
 
        /// <summary>
        /// Gets the time the event is generated based on the provider's UTC time.
        /// </summary>
        public string EventTime { get; }
 
        /// <summary>
        /// Gets or sets the event data specific to the resource provider.
        /// </summary>
        public ShipEvent Data { get; set; }
 
        /// <summary>
        /// Constructor.
        /// </summary>
        public CloudEvents()
        {
                CloudEventsVersion = "0.1";
                EventID = Guid.NewGuid().ToString();
                EventType = "shipevent";
                EventTime = DateTime.UtcNow.ToString("o");
        }
}

Program class

And finally we will update the Program class. Here we will get the input from the user and create a CloudEvents message which will be sent to Event Grid. Make sure to update the topic endpoint and access key with the entries we retrieved from the portal in the previous step. Also update the TOPIC property with your subscription id, and the resource group and topic name you used when creating the topic. One more thing to notice is that we only send a single message, instead of a list of messages as we did in this blog post. Currently CloudEvents does not support batching of events, which is why we can only send a single event.

/// <summary>
/// Send CloudEvents messages to an Event Grid Topic.
/// </summary>
class Program
{
        /// <summary>
        /// Endpoint of the Event Grid Topic.
        /// Update this with your own endpoint from the Azure Portal.
        /// </summary>
        private const string TOPIC_ENDPOINT = "<your-topic-endpoint>";
 
        /// <summary>
        /// Key of the Event Grid Topic.
        /// Update this with your own key from the Azure Portal.
        /// </summary>
        private const string KEY = "<your-access-key>";
 
        /// <summary>
        /// Topic to which we will be publishing.
        /// Update the subscription id, resource group and topic name here.
        /// </summary>
        public const string TOPIC = "/subscriptions/<your-subscription-id>/resourceGroups/<your-resource-group>/providers/Microsoft.EventGrid/topics/<your-topic-name>";
 
        /// <summary>
        /// Main method.
        /// </summary>
        public static void Main(string[] args)
        {
                // Set default values
                var entry = string.Empty;
 
                // Loop until user exits
                while (entry != "e" && entry != "exit")
                {
                        // Get entry from user
                        Console.WriteLine("Do you want to send an (o)rder, request a (r)epair or (e)xit the application?");
                        entry = Console.ReadLine()?.ToLowerInvariant();
 
                        // Get name of the ship
                        Console.WriteLine("What is the name of the ship?");
                        var shipName = Console.ReadLine();
 
                        CloudEvents cloudEvents;
                        switch (entry)
                        {
                                case "e":
                                case "exit":
                                        continue;
                                case "o":
                                case "order":
                                        // Get user input
                                        Console.WriteLine("What would you like to order?");
                                        var product = Console.ReadLine();
                                        Console.WriteLine("How many would you like to order?");
                                        var amount = Convert.ToInt32(Console.ReadLine());
 
                                        // Create order event
                                        // CloudEvents does not support batching, so we create a single event
                                        cloudEvents = new CloudEvents { UpdateProperties = new Order { Ship = shipName, Product = product, Amount = amount } };
                                        break;
                                case "r":
                                case "repair":
                                        // Get user input
                                        Console.WriteLine("Which device would you like to get repaired?");
                                        var device = Console.ReadLine();
                                        Console.WriteLine("Please provide a description of the issue.");
                                        var description = Console.ReadLine();
 
                                        // Create repair event
                                        // CloudEvents does not support batching, so we create a single event
                                        cloudEvents = new CloudEvents { UpdateProperties = new Repair { Ship = shipName, Device = device, Description = description } };
                                        break;
                                default:
                                        Console.Error.WriteLine("Invalid entry received.");
                                        continue;
                        }
 
                        // Send to Event Grid Topic
                        SendEventsToTopic(cloudEvents).Wait();
                }
        }
 
        /// <summary>
        /// Send events to Event Grid Topic.
        /// </summary>
        private static async Task SendEventsToTopic(CloudEvents cloudEvents)
        {
                // Create a HTTP client which we will use to post to the Event Grid Topic
                var httpClient = new HttpClient();
 
                // Add key in the request headers
                httpClient.DefaultRequestHeaders.Add("aeg-sas-key", KEY);
 
                // Event grid expects event data as JSON
                var json = JsonConvert.SerializeObject(cloudEvents);
 
                // Create request which will be sent to the topic
                var content = new StringContent(json, Encoding.UTF8, "application/json");
 
                // Send request
                Console.WriteLine("Sending event to Event Grid...");
                var result = await httpClient.PostAsync(TOPIC_ENDPOINT, content);
 
                // Show result
                Console.WriteLine($"Event sent with result: {result.ReasonPhrase}");
                Console.WriteLine();
        }
}

The complete code for the Event Publisher application can also be found here on GitHub.
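
For reference, a serialized order event produced by this application looks roughly like this; all values are illustrative:

{
    "CloudEventsVersion": "0.1",
    "EventType": "shipevent",
    "EventTypeVersion": null,
    "Source": "/subscriptions/<your-subscription-id>/resourceGroups/<your-resource-group>/providers/Microsoft.EventGrid/topics/<your-topic-name>#Hydra/Order",
    "EventID": "7d9fa768-14f1-4b04-a725-4b8f3e6b5f2a",
    "EventTime": "2018-05-21T09:30:00.0000000Z",
    "Data": {
        "Ship": "Hydra",
        "Type": "Order",
        "Product": "Fish",
        "Amount": 5
    }
}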

Our next step is to create the Logic App which will handle the events sent by our events publisher application.

Create Logic App for processing events

Once the Logic App has been created, open the designer and select the HTTP Request trigger template.

Use HTTP Request trigger

Set the Request JSON Schema to the following, which is a representation of the CloudEvents schema including the ship events.

{
    "type": "object",
    "properties": {
        "CloudEventsVersion": {
            "type": "string"
        },
        "EventType": {
            "type": "string"
        },
        "EventTypeVersion": {},
        "Source": {
            "type": "string"
        },
        "EventID": {
            "type": "string"
        },
        "EventTime": {
            "type": "string"
        },
        "Data": {
            "type": "object",
            "properties": {
                "Ship": {
                    "type": "string"
                },
                "Type": {
                    "type": "string"
                }
            }
        }
    }
}
Set JSON schema for parsing the request

Add a step to send out an email, and authenticate using your Office 365 account. If you don’t have an Office 365 account, you can also use one of the other connectors to send out an email.

Add the Office365 Outlook connector

Set the options for the email and save the Logic App. When you save the Logic App make sure to grab the HTTP POST URL of the HTTP Request trigger, as we will need this in the next step to set up the subscription.

Set email properties with body to the data element

We are now going to create the Event Grid subscription, which will catch the events from our event publisher and route them to our Logic App. We will have to do this once again from the Azure CLI, as the portal UI does not yet support the use of the CloudEvents schema. Give the following command in the Azure CLI to create the subscription, which will route messages to our Logic App’s HTTP endpoint. Remember that the Event Grid extension should be enabled for this.

az eventgrid event-subscription create --name shipEventsToProcessingLogicApp --topic-name cloudevents -g cloudEventsResourceGroup --endpoint '"<endpoint-for-logic-app-http-trigger>"' --event-delivery-schema cloudeventv01schema
Run Azure CLI command to create subscription

Now go to the Event Grid Subscriptions blade and make sure the filters are set correctly; you will find your newly created subscription there.

Subscription has been created

Testing

Open the event publisher application, and send in some events.

Send events from event publisher application

These events will now be received in the Event Grid topic and routed to the subscription, which will then deliver them to the Logic App.

Logic App run shows we receive CloudEvents message

An email will then be sent, indicating the type of event.

Receive email with event information

API Management CI/CD using ARM Templates – API Management Instance

This is the first in a series of blog posts around setting up CI/CD for Azure API Management using Azure Resource Manager templates. We will be using Visual Studio Team Services to host our repositories and set up our build and release pipelines. By using CI/CD, our API Management instance will be updated any time we check in changes made to our ARM templates.

The posts in this series are the following; this list will be updated as the posts are published.

We will have several developers working in API Management, creating and updating API definitions. The developers can either do this directly in the ARM template files, or by creating the API definition in the portal first and exporting it with a tool like the API Management ARM Template Creator. They then check in their changes to a Git repository hosted in VSTS.

In this first post we will create an instance of API Management without any custom APIs defined, just the Echo API which comes out of the box. We will start by creating an account in VSTS.

Create VSTS account

Once your account has been created, add a new project under which we will host our repositories. Select Git as the version control provider.

Browse projects

Create new project

Set up project properties

Once our project has been created, let’s create a new Git repository which will be used to hold the ARM template used to deploy our API Management instance. By using a separate repository, we can easily detect changes to trigger a specific deployment, like the instance or a specific API definition. This also allows us to limit developers to specific repositories; for example, we might only want our lead developers to work on the instance, but have all developers work on APIs within the instance.

Manage repositories

Create new repository

Set repository properties

Once created, switch to your new repository.

Switch to API Management instance repository

Clone the repository to your local machine. When working with ARM templates I like using Visual Studio Code in combination with the Azure Resource Manager Tools extension, but you can use any tool you like.

Clone repository

Once you have cloned your repository, create a new file called instance.json, and use the following code to make it an ARM template.
Notice that we use a couple of parameters; these can be overridden at deployment time according to the environment we are deploying to. For example, we will want to use the Developer sku in our test environment, but the Standard sku in the production environment. The other thing to notice is that we use [resourceGroup().location] for our location; this makes sure our API Management instance lands in the same region as the resource group to which we deploy from our deployment pipeline.

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
      "APIManagementSku": {
        "type": "string",
        "defaultValue": "Developer"
      },
      "APIManagementSkuCapacity": {
        "type": "string",
        "defaultValue": "1"
      },
      "APIManagementInstanceName": {
        "type": "string",
        "defaultValue": "MyAPIManagementInstance"
      },
      "PublisherName": {
        "type": "string",
        "defaultValue": "Eldert Grootenboer"
      },
      "PublisherEmail": {
        "type": "string",
        "defaultValue": "me@mydomain.com"
      }
    },
    "variables": {},
    "resources": [
      {
        "type": "Microsoft.ApiManagement/service",
        "name": "[parameters('APIManagementInstanceName')]",
        "apiVersion": "2017-03-01",
        "properties": {
          "publisherEmail": "[parameters('PublisherEmail')]",
          "publisherName": "[parameters('PublisherName')]",
          "notificationSenderEmail": "apimgmt-noreply@mail.windowsazure.com",
          "hostnameConfigurations": [],
          "additionalLocations": null,
          "virtualNetworkConfiguration": null,
          "customProperties": {
            "Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Protocols.Tls10": "False",
            "Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Protocols.Tls11": "False",
            "Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Protocols.Ssl30": "False",
            "Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Ciphers.TripleDes168": "False",
            "Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Backend.Protocols.Tls10": "False",
            "Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Backend.Protocols.Tls11": "False",
            "Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Backend.Protocols.Ssl30": "False"
          },
          "virtualNetworkType": "None"
        },
        "resources": [],
        "sku": {
          "name": "[parameters('APIManagementSku')]",
          "capacity": "[parameters('APIManagementSkuCapacity')]"
        },
        "location": "[resourceGroup().location]",
        "tags": {},
        "scale": null
      }
    ],
    "outputs": {}
  }

Now commit and push these changes to the repository, so we can use them in our build and deployment pipeline.
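
Before pushing, you could also validate the template locally with the Azure CLI; a sketch assuming an existing resource group to validate against:

az group deployment validate --resource-group MyResourceGroup --template-file instance.json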

Commit and push changes to repository

Go back to VSTS, and create a new build definition for our project.

Open builds

Create new definition

Make sure you have selected the repository for the API Management instance, and create an empty process definition.

Select API Management instance repository

Create empty process definition

We will start by setting the trigger to enable continuous integration. This will kick off the build each time we check in a change to our repository.

Enable continuous integration trigger

Next go to the tasks, and add an Azure Resource Group Deployment task to your build phase. The name of the task is somewhat misleading, as it does not just do resource group deployments, but actually deploys complete ARM templates.

Add Azure Resource Group Deployment task

Click on the task we just added, set the name of the task, and select the subscription to be used. If needed, authorize your connection. In this build pipeline we will only validate the template, so nothing will be deployed yet.

Select subscription and authorize if needed

Fill in the rest of the Azure details of the task. Keep in mind that the resource group will only be used for validation; you can use either an existing or a new resource group for this.

Set Azure details

Now fill in the template details of the task. For the Template reference, select the ARM template we created earlier on by clicking on the three dots next to the textbox. Set the deployment mode to Validation only; this will allow us to validate the ARM template without deploying it. Leave all other sections of the build task at their default values.

Set template and deployment mode

Now add a Delete an Azure Resource Group if it is empty task to the build phase. This custom task first has to be added to your VSTS account (you will need to refresh your VSTS browser screen after adding it), and is used to clean up the resource group created during validation if it is empty. Otherwise, if you created a new resource group in the previous task, an empty resource group would be left behind.

Add Delete an Azure Resource Group if it is empty task

Open the new task and set the Azure details. Make sure to use the same subscription and resource group as were used during the validation. You could use VSTS variables here instead as well, but for this blog post I will just set the names manually.

Set Azure details

And now add a Publish Build Artifacts task to our build phase. This task will publish the ARM template so we can use it in our deployment pipeline.

Publish Build Artifacts task

Open the task, and select the ARM template file for Path to publish. Give a name for the artifact which will be published, and set the publish location to VSTS.

Set publish settings

We now have completed our build pipeline, so save and queue the definition. This will publish the artifact which we can then use to set up our deployment pipeline.

Save and queue the build definition

Select location to save the definition

Queue the build definition

We have finished our build pipeline, so the next step is to set up a deployment definition. Go to Releases and create a new definition.

Create new release definition

Start with an empty process, as we will set up our definition ourselves.

Choose empty process

In this definition, two environments will be used: Test and Production. But first we will link our artifacts from our build pipeline by clicking on the Add artifact field.

Add artifact

Select the build definition we created before; this will read the ARM template which we validated in our build pipeline.

Select build definition

And now click on the button to set a continuous deployment trigger, and enable this trigger. This will make sure our deployment process runs each time our build pipeline completes successfully.

Open continuous deployment trigger

Enable continuous deployment trigger

Now that we have set up our artifacts to be published, click on the environment and set the environment name to Test.

Set test environment

Next click on the Tasks dropdown and select the Test environment.

Open Test environment tasks

Add an Azure Resource Group Deployment task to your phase; this will deploy the ARM template to our Azure environment.

Add Azure Resource Group Deployment task

Open the task, and edit the Azure details. Remember, this is for your test environment, so set the subscription and / or resource group accordingly.

Set Azure details for test environment

For the template, select the output from our build pipeline. If you want, you can override your template parameters here as well, but as our defaults are already prepared for the test environment, we will not do this at this time.

Use template from build pipeline

Go back to your pipeline, and click on the Clone button under the Test environment.

Clone the test environment

Rename this new environment to Production, and open the pre-deployment conditions.

Open pre-deployment conditions

As this is our production environment, we don’t want to release here until a release manager has approved the deployment. To do this, enable the Pre-deployment approvals option and select someone who is allowed to approve the deployments, typically a release manager.

Enable pre-deployment approvals

Open your Production tasks, click on the Azure Resource Group Deployment task, and update the subscription and / or resource group for your production environment.

Update subscription and resource group

As this is our production instance, we will want to run on the Standard tier of API Management instead of the Developer tier, so override the APIManagementSku parameter we set in our ARM template.
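
In the task's Override template parameters field this comes down to something like the following (the parameter name comes from the template we created earlier):

-APIManagementSku "Standard"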

Override sku to use Standard tier

And finally name and save your deployment definition.

Name and save definition

To test our CI/CD process, we can now make a change to the ARM template and check it in to our repository. This will then start the build pipeline validating our template, and once it is done, kick off the deployment pipeline automatically, deploying it to Azure.

Build pipeline gets triggered automatically after check in

The first time this is done, it will take a while to create the API Management instance, and if you are on the VSTS free build tier, the deployment might show as failed because it exceeds the maximum of 30 minutes.

Deployment succeeded

From now on, after making changes to your ARM template, your API Management instance will automatically be updated through the CI/CD pipeline. And as we chose incremental updates in our deployment process, it will only update the parts which have actually been changed, without redeploying the entire instance.

API Management instance has been created

Microsoft Integration Weekly Update: May 7, 2018

Do you find it difficult to keep up to date with all the frequent updates and announcements in the Microsoft Integration platform?

The Integration weekly update can be your solution. It’s a weekly update on the topics related to Integration – enterprise integration, robust & scalable messaging capabilities and Citizen Integration capabilities empowered by the Microsoft platform to deliver value to the business.

If you want to receive these updates weekly, then don’t forget to Subscribe!

Feedback

I hope this is helpful. Please feel free to reach out and let me know your feedback on this Integration weekly series.