by Rene Brauwers | Dec 20, 2016 | BizTalk Community Blogs via Syndication
In our previous post, I guided you through setting up a WCF service and protecting it using URL authentication. Although it was a lengthy post, you will have noticed that setting up URL authentication is actually quite simple and involves only a few steps.
Anyway, in this post we will be focusing on adding the integration magic, without adding a single line of custom code, using Azure Logic Apps.
The integration magic which we will be adding will take care of the following functionality within our end-to-end scenario.
A request will come in which starts our process: retrieving a list of customers.
The customer information to be retrieved combines the results from two sources: the first is the WCF service we built in our previous post, and the second is a public REST API. The data returned to the caller will consist of the base data originating from the WCF service, enriched with data obtained from the public REST API.
Visualizing the flow
Before we start implementing the solution using Logic Apps, it is always good practice to work out the actual process flow using a tool such as Microsoft Visio.
Having said that, let's eat my own dogfood. Lo and behold, see below the diagram depicting the process, followed by an explanation of the process.
The process kicks off whenever an HTTP POST requesting a list of customer data is made to the Logic App (1). Once the request is received, a new message (SOAP request) has to be created (2). Once created, this message is sent to the custom WCF service (3) we created in the previous post. If the call is successful, the web service will return a list of customers (4). The information contained within the response consists of the following data: CustomerId, FirstName, SurName and PostCode.
The postcode value(s) contained within this response are subsequently used to retrieve detailed location information.
In order to retrieve this location information, Logic Apps will loop over the response message (5), extract the postal code and invoke a public REST API to do the location lookup (6). The response received contains the following data: suburb name, postcode, state name, state abbreviation, locality, and the latitude and longitude of the locality.
This data and the basic customer data are then combined and temporarily persisted in DocumentDB (7).
The reason for leveraging this external persistence store is to make life easier for us, as we want to enrich all the customer data with additional information retrieved from the second API call and return it in one go to the caller. Currently there is no easy way of doing this directly from within Logic Apps; however, have no fear: in one of the next releases a feature to store session state within a logic app will be implemented, and thus we would no longer need to resort to an intermediate 'session state' store.
This process is then repeated for all customers. Once we have iterated over all customer records we exit the loop, retrieve all 'enriched' documents stored in DocumentDB (8) and return them to the caller. The information returned to the caller will contain the following data: FirstName, LastName and location information consisting of Locality, State Name, Suburb, Postcode, longitude and latitude (9).
Provision the Logic App
At this point we have worked out the high-level flow and logic, so we can now go ahead and create the logic app.
1. Login to the Azure Portal
2. Select the resource group which you created in part 1, in which you deployed your custom WCF service. In my case this resource group is called Demos
3. Once the resource-group blade is visible, click on the Add button
4. A new blade will popup, within this blade search for Logic App and click on the Logic App artefact published by Microsoft and of the Category Web + Mobile
5. Click on create
6. Now fill out the details and once done click Create, after which your logic app will be created
7. Once the logic app has been created, open it and you should be presented with a screen which allows you to create a new logic app using one of the pre-built templates. In our case we will choose the "Blank LogicApp"
Implement the ‘Blank LogicApp’
Once you've clicked on the blank logic app template, the designer will pop up. We will be using this designer to develop the flow depicted below, which will be explained in the following sections. Let's get started.
Step 1: Request Trigger
Within this designer, you will be presented with a ‘card selector’. This card selector, being the first of many, contains so-called triggers. These triggers can best be explained as ‘event listeners’ which indicate when a logic app is to be instantiated.
In our scenario, we want to trigger our logic app by means of sending a request. So, in our case we would select the Request trigger. Now select this Request Trigger.
To dig up more information regarding the different triggers and actions, you can click on the Help button, which will open up a Quick Start Guide blade containing links to more information.
Configure
Once you’ve selected the trigger, the Request Trigger ‘Card’ will be expanded and will allow you to configure this trigger.
1. This section is not customizable, but once the logic app is saved it will contain the generated endpoint. This endpoint is to be used by clients who wish to invoke the logic app.
2. The request body JSON schema section is an optional section which allows us to add a schema describing what the inbound request message should look like.
You might be wondering: why bother? Well, by adding a schema we get the benefit of an 'intellisense like' experience from within the designer, which can help us down the road in case we want to easily access one of the properties of the request message in a follow-up action.
So let's go ahead and add a schema. In our case, we will only require one property to be sent to our logic app, and this property is RequestId. We will be using this property further downstream to uniquely identify the request and use it to store our 'session state'.
As such, our JSON request can be represented as follows:
{
    "RequestId": "2245775543466"
}
Now that we know what the payload message looks like, we need to derive the JSON schema. Luckily for us, we can go to JSONSchema.net and generate a schema. The generated schema would subsequently be represented as:
{
    "type": "object",
    "properties": {
        "RequestId": {
            "type": "string"
        }
    },
    "required": [
        "RequestId"
    ]
}
At this point we have all the information required to fill out the ‘Request Body JSON Schema’ section, so all we have to do is copy and paste it into that section.
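For reference: once you save the logic app, the trigger in code view would look roughly like this (a sketch; 'manual' is the default name the designer gives the Request trigger):
"triggers": {
    "manual": {
        "type": "Request",
        "kind": "Http",
        "inputs": {
            "schema": {
                "type": "object",
                "properties": {
                    "RequestId": {
                        "type": "string"
                    }
                },
                "required": [ "RequestId" ]
            }
        }
    }
}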
3. At this point we are ready to proceed with our next step, which, according to our high-level design, consists of an activity that composes a new message: the (SOAP) request message to be sent to the customer WCF service.
So, let’s proceed and click on the + New Step button
4. Now several options appear, but we are currently only interested in the option ‘Add an action’, so select this.
Step 2: Compose SOAP request message
As part of our last step we clicked on the "New step" button and selected "Add an action", which subsequently displays the 'card selector' again, only this time showing the available actions to choose from.
Please note: typical actions to choose from would include
· connectors to SaaS services such as Dynamics CRM Online, on-premises hosted line-of-business applications such as SAP, and connectors to existing logic apps, Azure Functions and APIs hosted in API Management
· typical workflow actions which allow us to delay processing or even allow us to terminate further processing.
Looking back at the overall scenario which we are about to implement, one of the initial actions is retrieving a list of customers.
In order to retrieve this list of customers we need to invoke the Customer WCF service we built earlier. As our WCF service is SOAP based, it requires us to implement one additional step before we can actually invoke the service from within Logic Apps, and this step involves creating the SOAP request message using a Compose action.
So from within the ‘Card Selector’ select the compose Action.
Please note: In the near future this additional step will no longer be required, as API Management will be able to RESTify your SOAP endpoints, which can then easily be consumed from within Logic Apps (see roadmap). Besides this functionality in API Management, chances are pretty good that a first-class SOAP connector will be added to Logic Apps in the future as well, as it ranks high on the Logic Apps functionality wishlist.
Configure
Once you've selected the Compose action, the following 'Card' will show up in the designer, which allows you to compose a message, which in our case will be the SOAP request message.
1. The input section allows us to construct the soap (xml) message, which will act as the request which we will be sending to our customer WCF service.
So how would you determine what this message should look like? The easiest way is to use a tool such as SOAPUI, which can generate a sample request message. In the previous post I added a section which explains how to do this; in our scenario the SOAP request message looks as follows:
<?xml version="1.0" encoding="UTF-8"?>
<Envelope xmlns="http://schemas.xmlsoap.org/soap/envelope/">
    <Body>
        <GetCustomers xmlns="http://tempuri.org/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" />
    </Body>
</Envelope>
2. Once we have our sample SOAP request message, we simply copy and paste it into the input field.
Please note: once you click on the Inputs section, a window will appear which allows you to select 'dynamic content, used within this flow'. This is the 'intellisense like' experience I referred to earlier in this post. We will be ignoring it for now, but we will be using it in future steps.
3. At this point we are ready to proceed with our next step. Which will actually call our customer WCF service.
So, let’s proceed and click on the + New Step button
4. Once again several options appear and once again select the option ‘Add an action’.
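Before moving on to step 3: in code view, the Compose action we just configured would look roughly like this (a sketch; 'Compose' is the default action name, and the SOAP envelope is collapsed into a single escaped string):
"Compose": {
    "type": "Compose",
    "inputs": "<?xml version=\"1.0\" encoding=\"UTF-8\"?><Envelope xmlns=\"http://schemas.xmlsoap.org/soap/envelope/\"><Body><GetCustomers xmlns=\"http://tempuri.org/\" /></Body></Envelope>",
    "runAfter": {}
}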
Step 3: Invoke our Customer WCF Service
After completing step 2 we are now able to actually implement the call to our customer WCF service. In order to do so, all we need to do is select the 'HTTP' action from within the 'Card Selector'.
Configure
Once you've selected the HTTP action, the following 'Card' will show up in the designer, which allows you to configure the HTTP request in order to retrieve the customer information.
As you might remember, the custom WCF service which we are about to invoke uses URL authorization using Azure Active Directory (see previous post) and as such requires any (POST) request to be authenticated. Long story short: one of the nice things about the HTTP action is that it makes it a breeze to invoke web services even if they require authentication. All we need to do is configure the action correctly, and this is done by expanding the advanced options of the HTTP card.
1. The Method which we need to select is ‘POST’ as we will be posting the soap request to the customer WCF service.
2. The Uri sections allows us to enter the Request URL of the web-service. In our case that would be https://demo-apis.azurewebsites.net/Customers.svc
3. The Headers section will be used to add both the SOAP action which needs to be invoked as well as the Content-Type of the actual request message.
The easiest way to retrieve the SOAP Action would be by means of SOAPUI as well. So from within SOAPUI open the request and then select WS-A (bottom menu-bar), and then copy and paste the Action
The header information needs to be passed in as a JSON string and looks as follows:
{
    "Content-Type": "text/xml",
    "SOAPAction": "http://tempuri.org/ICustomers/GetCustomers"
}
4. The body section will contain the message which we composed in the previous step. Once you click in this section, additional information will be displayed on the designer which allows you to select 'dynamic content' (this is the 'intellisense like' experience I referred to earlier). From this menu, select the output of the Compose action; it contains the message which we composed earlier.
5. Now click on the Show Advanced Options, which will allow us to fill out the required authentication information.
6. From the dropdown select Active Directory OAuth
7. For Active Directory OAuth we need to fill out the Tenant, Audience, Client ID and Secret. This information is retrieved as follows:
a. In the Azure Portal, go to Azure Active Directory Blade and click on APP Registrations
b. Select the application in question (see previous blog-post) which you registered for the WCF Customer service. In my case demo-apis
c. Now on the settings blade click on Properties and make a note of the following:
Application ID – This is the equivalent of the Client ID
App ID Uri – This is the equivalent of the Audience
d. Go back to the settings blade, click on Keys
e. Now it is time to generate the secret. In order to do this, add a description and select how long the secret should be valid. Once done save the entry and make a note of the value (this is the secret)
f. Now on the portal page, click on the Help Icon and select ‘Show diagnostics’
g. In the window which pops up, search for tenants. Find your tenant (most likely the one which states 'isSignedInTenant = true') and note down the Tenant ID
h. At this point we have all the information in order to fill out the required information
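With those four values in hand, the fully configured HTTP action in code view would look roughly like this (a sketch; the angle-bracket placeholders stand for the values noted above):
"HTTP_-_GetCustomers": {
    "type": "Http",
    "inputs": {
        "method": "POST",
        "uri": "https://demo-apis.azurewebsites.net/Customers.svc",
        "headers": {
            "Content-Type": "text/xml",
            "SOAPAction": "http://tempuri.org/ICustomers/GetCustomers"
        },
        "body": "@outputs('Compose')",
        "authentication": {
            "type": "ActiveDirectoryOAuth",
            "tenant": "<tenant id>",
            "audience": "<app id uri>",
            "clientId": "<application id>",
            "secret": "<key value>"
        }
    },
    "runAfter": {
        "Compose": [ "Succeeded" ]
    }
}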
Test
Now that we’ve implemented the call, it would be a good time to go ahead and test the logic app. Luckily for us, this is quite simple.
1. Click on the Save button to save your logic app
2. Now Click on the run button.
3. Wait a few seconds and you should see a debug output. If everything went OK, it should look similar to the image below.
4. Now click on the HTTP – GetCustomers shape, which allows you to look at the debug/tracking information. It will show you the input as well as the output information.
5. Now go to the OUTPUTS section and copy the Body section. We will be needing this in step 4.
Step 4: Loop over the customer result
In our last step we configured the HTTP action responsible for invoking our customer WCF service and returning a list of customers.
Now in this step we will loop over the returned customer list, such that we can enrich each individual record with localization information obtained from a different API.
In order to do so we have to select a for-each action. This action can be selected by clicking on the "+ New Step" button. Several options will appear, of which we need to select 'More', followed by the 'Add a for each' action.
Configure
1. Once the for-each step has been selected, it is dropped on the designer. The designer then offers us a section in which we can add the input over which we want to loop.
2. Had our WCF service returned a JSON array, we would have been able to simply select this output using the 'Dynamic Content' selection process (aka intellisense). However, in our case the output over which we want to loop is a customer result set formatted in XML. So we will need to help the Logic Apps engine a bit, and the way to do this is by adding a custom expression, which in our case is an XPath expression pointing to the node over which we want to loop.
The xpath expression in our case would be:
/*[local-name()="Envelope"]/*[local-name()="Body"]/*[local-name()="GetCustomersResponse"]/*[local-name()="GetCustomersResult"]/*
The easiest way to test this XPath expression is to take the response message we extracted when we tested our logic app earlier and use an online tool to test the expression against it.
Now that we have our XPath expression, we can use it in the following Logic App expression:
@xpath(xml(body('<name of the action whose response we want to use>')), '<XPath expression>')
In my scenario the expression would be as follows
@xpath(xml(body('HTTP_-_GetCustomers')), '/*[local-name()="Envelope"]/*[local-name()="Body"]/*[local-name()="GetCustomersResponse"]/*[local-name()="GetCustomersResult"]/*')
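In code view, the resulting for-each action would look roughly like this (a sketch: 'For_each' is the default action name, and the actions element will be filled by the steps that follow):
"For_each": {
    "type": "Foreach",
    "foreach": "@xpath(xml(body('HTTP_-_GetCustomers')), '/*[local-name()=\"Envelope\"]/*[local-name()=\"Body\"]/*[local-name()=\"GetCustomersResponse\"]/*[local-name()=\"GetCustomersResult\"]/*')",
    "actions": {},
    "runAfter": {
        "HTTP_-_GetCustomers": [ "Succeeded" ]
    }
}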
Step 5: Extract individual customer info
In our previous step we instantiated our for-each loop, which will iterate over our XML result set. Our next step is to extract the individual customer info and store it in an intermediate JSON format which we will be using in subsequent actions.
So from within our for-each action, select the Add an action.
From within the ‘Card Selector’ select the compose Action.
Configure
Once you've selected the Compose action, the following 'Card' will show up in the designer, which allows you to compose a message, which in our case will be a custom JSON message holding the individual customer information consisting of CustomerId, FirstName, LastName and PostCode.
Note: As in step 4, when configuring the for-each iteration path, we will be leveraging XPath expressions in order to extract the individual customer data. Alternatively, I could have leveraged an Azure Function to convert the received XML customer response into JSON, or I could have leveraged API Management, which by means of policies can perform conversion from XML to JSON out of the box. In my next post (part 3 of this series) I will be using the latter.
1. The input section allows us to construct our custom JSON message holding the individual customer information consisting of CustomerId, FirstName, LastName and PostCode.
2. In order to extract the required fields from the XML we will be leveraging the following XPath queries:
a. CustomerId extraction:
string(/*[local-name()="CustomerData"]/*[local-name()="CustomerId"])
b. FirstName extraction:
string(/*[local-name()="CustomerData"]/*[local-name()="FirstName"])
c. SurName extraction:
string(/*[local-name()="CustomerData"]/*[local-name()="SurName"])
d. PostCode extraction:
string(/*[local-name()="CustomerData"]/*[local-name()="PostCode"])
The Logic App expression which we will be leveraging to extract a value using XPath is:
@{xpath(xml(decodeBase64(item().$content)), '<XPath expression>')}
where item() refers to the current item (customer record) in the loop and $content represents its content (the customer record XML part).
Combined in a JSON construct, the complete message construction looks as follows (note that the expressions are embedded using the @{ } syntax):
{
    "CustomerId": "@{xpath(xml(decodeBase64(item().$content)), 'string(/*[local-name()="CustomerData"]/*[local-name()="CustomerId"])')}",
    "FirstName": "@{xpath(xml(decodeBase64(item().$content)), 'string(/*[local-name()="CustomerData"]/*[local-name()="FirstName"])')}",
    "LastName": "@{xpath(xml(decodeBase64(item().$content)), 'string(/*[local-name()="CustomerData"]/*[local-name()="SurName"])')}",
    "PostCode": "@{xpath(xml(decodeBase64(item().$content)), 'string(/*[local-name()="CustomerData"]/*[local-name()="PostCode"])')}"
}
Test
Now that we've implemented the XML extraction within the for-each, it would be a good time to go ahead and test the logic app and see if everything works as expected.
1. Click on the Save button to save your logic app
2. Now Click on the run button.
3. Wait a few seconds and you should see a debug output. If everything went Ok, it should look similar to the image below.
4. As you can see, the last item in the flow contains a JSON output depicting the extracted customer values.
Step 6: Invoke the postcodeapi
Now that we have extracted our customer data and stored it in a JSON format, we can proceed with the next step, which involves invoking a public postcode API. In order to do so we will once again select the HTTP action within the 'Card Selector'.
Configure
Once you've selected the HTTP action, the following 'Card' will show up in the designer, which allows you to configure the HTTP request in order to retrieve localization information based on a postal code.
1. The Method which we need to select is 'GET', as we will be retrieving data from a REST endpoint.
2. The Uri section allows us to enter the request URL of the web service. In our case that would be http://v0.postcodeapi.com.au/suburbs.json?postcode=XXXX, where XXXX is a dynamic parameter; to be more specific, we will be using the PostCode field which we extracted in step 5. In order to use this PostCode value we will:
a. Enter the value http://v0.postcodeapi.com.au/suburbs.json?postcode= in the Uri field.
b. Select the dynamic content ‘Outputs’ from the Extracted xml
We are currently not able to directly access the PostCode field from within the designer, as the designer is not aware of this property. It is only aware of the fact that the 'compose step – Extracted xml' has an output which is a 'message', and as such we can only select the complete message.
Note: In a future release of logic-apps this experience will be improved and additional magic will be added such that the designer can ‘auto-discover’ these message properties. How this will be implemented is not 100% clear, but one of the possibilities would be; that we would manually add a ‘description’ of the output (Json schema, for example) to the compose action or any other action which returns / creates an object.
3. In order to select the PostCode field from the Outputs, we will need to switch to Code View.
4. Once in code view, find the code block which contains the http://v0.postcodeapi.com.au/suburbs.json?postcode= url. Once found, we simply modify the code from
http://v0.postcodeapi.com.au/suburbs.json?postcode=@{outputs('Extracted_xml')}
to
http://v0.postcodeapi.com.au/suburbs.json?postcode=@{outputs('Extracted_xml').PostCode}
5. Now go back to the designer
6. And behold the designer now states “http://v0.postcodeapi.com.au/suburbs.json?postcode={} PostCode”
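To give you an idea of what comes back from the postcode API: the response is a JSON array of suburb records, roughly like the following (the values are illustrative; the fields match the ones we will pick out in step 7):
[
    {
        "name": "Sydney",
        "postcode": 2000,
        "state": {
            "name": "New South Wales",
            "abbreviation": "NSW"
        },
        "locality": "SYDNEY",
        "latitude": -33.867,
        "longitude": 151.207
    }
]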
Test
Now that we’ve implemented the postcode api call, it would be a good time to go ahead and test the logic app.
1. Click on the Save button to save your logic app
2. Now Click on the run button.
3. Wait a few seconds and you should see a debug output. If everything went OK, it should look similar to the image below. If you expand the HTTP action, you will notice that the URI is now composed using the extracted PostCode value.
Step 7: Compose an enriched customer message
Now that we have invoked the postcode API it is time to combine both the original customer data and the postcode data. In order to do this, we will be composing a new Json message using the Compose Action.
From within the ‘Card Selector’ select the compose Action.
Configure
Once you've selected the Compose action, the following 'Card' will show up in the designer, which allows you to compose a message, which in our case will be a new JSON message holding both the customer data as well as the location data retrieved from the PostCode lookup.
1. The input section allows us to construct our custom Json message which will hold all the combined data
2. Now copy and paste the below ‘sample’ json message into the input section
This message will be of the following structure:
{
    "FirstName": "## FirstName from the WCF Customer web service ##",
    "LastName": "## LastName from the WCF Customer web service ##",
    "Location": {
        "Latitude": "## Latitude obtained from the postal api ##",
        "Locality": "## Locality obtained from the postal api ##",
        "Longitude": "## Longitude obtained from the postal api ##",
        "PostCode": "## PostCode from the 'extract_xml' message ##",
        "State": "## State obtained from the postal api ##",
        "Suburb": "## Suburb obtained from the postal api ##"
    },
    "RequestId": "## Obtained from the request trigger ##",
    "id": "## CustomerId from the 'extract_xml' message ##"
}
3. Now go to code view
4. Once in code view, find the code block which represents the Json message which we just copied and pasted in the input section.
Note: In a future release of logic-apps this experience will be improved and additional magic will be added such that the designer can ‘auto-discover’ these message properties, which we will now add manually. How this will be implemented is not 100% clear, but one of the possibilities would be; that we would manually add a ‘description’ of the output (Json schema, for example) to the compose action or any other action which returns / creates an object.
5. Now replace the JSON such that it looks as depicted below
"Enrich_with_postal_code": {
    "inputs": {
        "FirstName": "@{outputs('Extracted_xml').FirstName}",
        "LastName": "@{outputs('Extracted_xml').LastName}",
        "Location": {
            "Latitude": "@{body('HTTP')[0].latitude}",
            "Locality": "@{body('HTTP')[0].locality}",
            "Longitude": "@{body('HTTP')[0].longitude}",
            "PostCode": "@{outputs('Extracted_xml').PostCode}",
            "State": "@{body('HTTP')[0].state.name}",
            "Suburb": "@{body('HTTP')[0].name}"
        },
        "RequestId": "@{triggerBody()['RequestId']}",
        "id": "@{outputs('Extracted_xml').CustomerId}"
    },
    "runAfter": {
        "HTTP": [
            "Succeeded"
        ]
    },
    "type": "Compose"
},
Test
Now that we've composed a message containing both the WCF and postcode API data, it would be another good time to test if everything works. This time we will be testing our logic app using Fiddler.
1. Download Fiddler, if you don't already have it.
2. Go to your logic app, expand the Request trigger and press the "Copy" icon; this will copy the logic app endpoint to your clipboard.
3. Open fiddler, and select the composer tab
4. In the composer
a. Set the HTTP Action to POST
b. Copy and Paste the uri in the Uri field
c. In the header section add
i. Content-Type:application/json
d. In the body section add the following json
{
    "RequestId": "20161220"
}
e. Click on the Execute button
5. Now go back to your Logic App
6. In the run history, select the last entry
7. If everything went Ok, it should look similar to the image below.
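For reference, the raw request which Fiddler sends in step 4 looks roughly like this (the placeholder stands for the endpoint URL copied from the Request trigger):
POST <logic-app-endpoint-url> HTTP/1.1
Content-Type: application/json

{
    "RequestId": "20161220"
}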
Step 8: Store Session State in DocumentDB
At this point we have implemented functionality which
· allows us to iterate over all the customer records
· retrieve localization data from the postal code api using the postal code extracted from the customer record.
· Compose a new message which contains all the data.
The functionality left to implement at this point consists of combining all the newly composed messages, containing the customer and localization data, into one document and returning it to the caller.
Note: Currently there is no easy way of doing this directly from within Logic Apps, as Logic Apps does not contain functionality which would allow us to 'combine the data' in memory. But have no fear: one of the next releases of Logic Apps will have support for storing session state, and once this is available we will no longer require the additional step explained below.
Configure
As Logic Apps currently has no means of storing session state, we will be resorting to an external session state store. In our case, the most obvious choice would be DocumentDB.
So before we proceed, let’s go and create a DocumentDB service.
1. Go to the Azure Portal and click on the New Icon
2. Search for DocumentDB
3. Select DocumentDB from Publisher Microsoft
4. Fill out the required information and once done create the DocumentDB instance
5. After creation has completed, open the DocumentDB Instance.
6. Now Add a Collection
7. Fill out the required information for the Collection Creation, and press OK once done
8. Go back to the main DocumentDB Blade, and click on Keys
9. From within the Keys, Copy and Paste the Primary or Secondary Key
10. Now go back to our logic app, and open it in the designer
11. In the Logic App, Click on the Add New Item
12. Now search for DocumentDB Actions and select “Azure DocumentDB – Create or update document”
13. The connector will now be displayed and will require some configuration
14. Fill out the required information. Note that the Database Account Name is the actual name of the DocumentDB instance; in my case docdb-playground.
15. Once filled out the information should look similar to the one depicted below in the image
16. At this point the connection has been created, and we can now proceed with the actual configuration in which we will
a. select the correct Database ID from the dropdown
b. select the collection to use
c. add the dynamic content (message) which we want to store
d. set the value of IsUpsert to True (see the sketch below)
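In code view, the configured connector action would look roughly like this. This is a simplified sketch: the action name, the connection reference and the exact path and parameter names are assumptions based on the DocumentDB connector, using the database ID 'ProcessingState' and collection 'LogicApp' which we will also use in step 9:
"Create_or_update_document": {
    "type": "ApiConnection",
    "inputs": {
        "host": {
            "connection": {
                "name": "@parameters('$connections')['documentdb']['connectionId']"
            }
        },
        "method": "post",
        "path": "/dbs/ProcessingState/colls/LogicApp/docs",
        "headers": {
            "x-ms-documentdb-is-upsert": true
        },
        "body": "@outputs('Enrich_with_postal_code')"
    },
    "runAfter": {
        "Enrich_with_postal_code": [ "Succeeded" ]
    }
}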
Step 9: Exit Loop, Retrieve and return Stored Data
In our last step we persisted all documents into DocumentDB. Now, before we proceed, let's have a look at step 7, in which we composed the following message which was eventually stored in DocumentDB.
Have a good look at the RequestId field. This field is passed in whenever we invoke our Logic App (see step 7, the test section).
There was a reason why we added this field and had it stored in DocumentDB: this way we are able to select all documents stored in DocumentDB belonging to the ID of the current request and return them to the caller.
Configure
1. Select the Add an action button located just below the for-each scope.
2. Now search for DocumentDB Actions and select “Azure DocumentDB – Query documents”
3. The Document DB Query Documents connector, can now be configured as follows
a. Select the correct database ID from the dropdown in our case ProcessingState
b. Select the applicable collection from the dropdown in our case LogicApp
c. Now add a query which will return all documents stored in the collection that have the same request ID:
SELECT c.id AS CustomerId, c.FirstName, c.LastName, c.Location FROM c WHERE c.RequestId = …
d. where the value for c.RequestId is the RequestId selected from the Dynamic Content window (see the sketch after this list)
4. At this point we have completed the action which retrieves all the applicable stored documents. The only thing left to do is return this list of documents back to the caller. In order to do this, we add one more action, called Response.
5. The Response action can now be configured as follows:
a. Enter 200 for the return status code; this indicates the HTTP status code 'OK'
b. In the response header we will need to set the content type. We do this by adding the following piece of JSON:
{ "Content-Type": "application/json" }
c. In the body we will add the dynamic content which relates to the documents returned from DocumentDB
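Putting these two actions together: with the RequestId taken from the trigger, the query resolves to the following, and the Response action in code view would look roughly like the sketch below it (the action name 'Query_documents' and the 'Documents' output are assumptions based on the connector):
SELECT c.id AS CustomerId, c.FirstName, c.LastName, c.Location FROM c WHERE c.RequestId = '@{triggerBody()['RequestId']}'
"Response": {
    "type": "Response",
    "inputs": {
        "statusCode": 200,
        "headers": {
            "Content-Type": "application/json"
        },
        "body": "@body('Query_documents')?['Documents']"
    },
    "runAfter": {
        "Query_documents": [ "Succeeded" ]
    }
}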
Test
Well now that we have implemented the complete flow, it is time to do our final test and once again we will be using Fiddler to perform this test.
1. Open fiddler, and select the composer tab
2. In the composer
a. Set the HTTP Action to POST
b. Copy and Paste the uri in the Uri field
c. In the header section add
i. Content-Type:application/json
d. In the body section add the following json
{
    "RequestId": "20161221"
}
e. Click on the Execute button
3. Now open the result and you should see a response similar to the one below
4. Now go back to your logic app and in the run history, select the last entry
5. If everything went Ok, it should look similar to the image below.
Conclusion
This post has guided you through setting up a logic app which calls two APIs, combines the data and returns the aggregated result back to the caller.
In my next post I will introduce API Management into the mix, which we will use to expose the two APIs mentioned and apply some API Management magic to further simplify our logic app implementation.
So until next time, stay tuned.
Cheers
René
by Rene Brauwers | May 29, 2014 | BizTalk Community Blogs via Syndication
Introduction
Microsoft released a new service to Azure, called API Management, on May 12th, 2014.
Currently Azure API Management is still in preview, but it already enables us to easily create an API façade over a diverse set of currently available Azure services like Cloud Services, Mobile Services and Service Bus, as well as on-premises web services.
Instead of listing all the features and writing an extensive introduction about Azure API Management, I'd rather supply you with a few links which contain more information about the Azure API Management service:
Microsoft:
http://azure.microsoft.com/en-us/services/api-management/
http://azure.microsoft.com/en-us/documentation/services/api-management/
Blogs:
http://trinityordestiny.blogspot.in/2014/05/azure-api-management.html
http://blog.codit.eu/post/2014/05/14/Microsoft-Azure-API-Management-Getting-started.aspx
Some more background
As most of my day-to-day work revolves around the Microsoft integration space, in which I am mainly focusing on BizTalk Server, BizTalk Services and Azure in general, I was looking to find a scenario in which I, as an integration person, could and would use Azure API Management.
The first thing which popped into my mind: wouldn't it be great to virtualize my existing BizTalk Services bridges using Azure API Management? Well, currently this is not possible, as the only authentication and authorization method on a BizTalk Services bridge is ACS (Access Control Service), and this is not supported in the Azure API Management service.
Luckily, the last feature release of BizTalk Services included support for Azure Service Bus topics/queues as a source, and luckily for me Azure Service Bus supports SAS (Shared Access Signatures). Using such a signature I am able to generate a token and use this token in the Authorization section of the HTTP header of my request.
Knowing the above, I should be able to define APIs which virtualize my Service Bus endpoints, create a product combining the defined APIs and assign policies to my API operations.
Sounds easy, doesn't it? Well, it actually is. So without further ado, let's dive into a scenario which involves exposing an Azure Service Bus topic using Azure API Management.
Getting started
Before we actually start please note that the sample data (Topic names, user-accounts, topic subscriptions, topic subscription rules etc.) I use for this ‘Step by step’ guide is meant for a future blog post 😉 extending an article I posted a while back on the TechNet Wiki
Other points you should keep in mind are
- Messages sent to a topic may not exceed 256 KB in size; if a message is larger, you will receive an error message from Service Bus telling you that the message is too large.
- Communication to Service Bus is asynchronous; we send a message and all we get back is an HTTP code telling us the status of the submission (200 OK, 202 Accepted, etc.) or an error message indicating that something went wrong (401 Access Denied, etc.). So our scenario is actually using the fire-and-forget principle.
- You will have to create a SAS token, which needs to be sent in the header of the message in order to authenticate to Service Bus (see the format example below this list).
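To give you an idea of what such a token looks like: a Service Bus SAS token is a single string of the following format, which later goes into the Authorization header of the request:
SharedAccessSignature sr=<URL-encoded resource URI>&sig=<URL-encoded, Base64 HMAC-SHA256 signature over the resource URI and expiry>&se=<expiry in seconds since epoch>&skn=<policy name>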
Enough said, let's get started.
For the benefit of the reader I've added hyperlinks below so that you can skip the sections you might already know.
Sections Involved
If you don’t already have an Azure account, you can sign up for a free trial here
Once you’ve signed up for an Azure account, login to the Azure Portal and create a new Azure Service Bus Topic by following the steps listed below.
1. If you have logged in to the preview portal, click on the 'Azure Portal' tile. This will redirect you to an alternative portal which allows for more complete management of your Azure services.
2. In the 'traditional' portal, click on Service Bus.
3. Create a new namespace for your Service Bus entity.
4. Enter a name for your Service Bus namespace, select the region to which it should be deployed and select the checkmark, which starts the provisioning.
5. Once the provisioning has finished, select the Service Bus entity and click on Connection Information.
6. A window will appear with access connection information. In this screen, copy the ACS connection string to your clipboard (we will need this connection string later on) and then click on the checkmark to close the window.
Now that a new Service Bus entity has been provisioned, we can proceed with creating a topic within it. For this we will use the Service Bus Explorer from Paolo Salvatori, which you can download here. Once you've downloaded this tool, extract it, execute the ServiceBusExplorer.exe file and follow the steps below.
1. Press CTRL+N, which will open the "Connect to Service Bus Namespace" window.
2. In the Service Bus Namespaces box, select the following option from the dropdown: "Enter Connection String".
3. Copy and paste the ACS connection string (which you copied earlier; see previous section, step 6) and once done press "OK".
4. The Service Bus Explorer should now have made a connection to your Service Bus entity.
5. In the menu on your left, select the TOPIC node, right-click and select "Create Topic".
6. In the window which now appears, enter the topic name "managementapi_requests" in the "Path" box and leave all other fields blank (we will use the defaults). Once done, press the "Create" button.
7. Your new topic should now have been created.
Now that we have created a TOPIC it is time to add some subscriptions. The individual subscriptions we will create will contain a filter such that messages which are eventually posted to this TOPIC end up in a subscription based on values set in the HTTP header of the submitted messages. In order to set up some subscriptions follow the below mentioned steps:
1. Go back to your newly created topic in the Service Bus Explorer.
2. Right-click on the topic and select "Create Subscription".
3. The Create Subscription window will now show, in which you should execute the following steps:
A) Subscription Name: BusinessActivityMonitoring
B) Filter: MessageAction='BusinessActivityMonitoring'
C) Click on the Create button
4. Now repeat steps 2 and 3 in order to create the following subscription:
- Subscription Name: Archive
- Filter: 1 = 1
At this point we have set up our topic and added some subscriptions. The next step consists of adding a Shared Access Policy to our topic. This policy then allows us to generate a SAS token which later on will be used to authenticate against our newly created topic. So first things first, let's assign a Shared Access Policy. The next steps will guide you through this.
1. Go to the Service Bus menu item and select the Service Bus service you created earlier by clicking on it.
2. Now select TOPICS from the tab menu.
3. Select the Connection Information icon at the bottom.
4. A new window will pop up; in this window click on the link "Click here to configure".
5. Now, under the shared access policies:
A) Create a new policy named 'API_Send'
B) Assign the Send permission to this policy
C) Create a new policy named 'API_Receive'
D) Assign the Listen permission to this policy
E) Create a new policy named 'API_Manage'
F) Assign the Manage permission to this policy
G) Click on the SAVE icon at the bottom
6. At this point, for each policy a primary and secondary key should have been generated.
Once we've added the policies to our topic we can generate a token. In order to generate a token, I've built a small forms application which uses part of the code originally published by Santosh Chandwani. Click the following link to start downloading the application: "Sas Token Generator". Using the SAS Token Generator application we will now generate the token signatures.
1. Fill out the following form data:
A) Resource Uri = the HTTPS endpoint of your topic
B) Policy Name = API_Send
C) Key = the primary key as previously generated
D) Expiry Date = select the date you want the SAS token to expire
E) Click on Generate
2. After you have clicked Generate, by default a file named SAS_tokens.txt will be created on your desktop containing all generated SAS tokens. Once saved, you will be asked if you want to copy the generated token to your clipboard. Below are two images depicting the message prompt as well as the contents stored in the generated file. Perform step 1 for the other two policies as well (API_Receive and API_Manage).
At this point we have set up our Service Bus topic and subscriptions and have generated our SAS tokens, so we are all set to start exposing the newly created topic using Azure API Management. But before we can start with this we need to create a new API Management instance. The steps below detail how to do this.
1. Click on API Management in the right menu bar, and click on the link "Create an API Management Service".
2. A menu will pop up, in which you need to select CREATE once more.
3. At this point a new window will appear:
A) Url: fill out a unique name
B) Pricing Tier: select Developer
C) Subscription: check your subscription ID (currently it does not show the subscription name, which I expect to be fixed pretty soon)
D) Region: select a region close to you
E) Click on the right arrow
4. You will now end up at step 2:
A) Organization Name: fill out your organization name
B) Administration E-Mail: enter your email address
C) Click on the 'Checkmark' icon
5. In about 15 minutes your API Management service will have been created and you will be able to log in.
Now that we have provisioned our Azure API Management service it is time to create and configure an API which exposes the previously defined Azure Service Bus topic, such that we can send messages to it. The API which we are about to create will expose one operation pointing to the Azure Service Bus topic and will accept both XML and JSON messages. Later on we will define a policy which will ensure that if a JSON message is received it is converted to XML, and that the actual calls to the Service Bus REST API are properly authenticated using the SAS token created earlier.
So let's get started with the creation of our API by clicking the Manage icon, which should be visible in the menu bar at the bottom of your window.
Once you've clicked the Manage icon, you should automatically be redirected to the API Management portal.
Now that you are in the Azure API Management Administration Portal you can start with creating and configuring a new API, which will virtualize your Service Bus topic REST endpoints. In order to do so, follow these steps:
1. Click on the APIs menu item on your left.
2. Click on ADD API.
3. A new window will appear; fill out the following details:
A) Web API title: the public name of the API as it would appear on the developer and admin portals.
B) Web Service Uri: this should point to your Azure Service Bus REST endpoint. Click on this link to get more information. The format would be: http{s}://{serviceNamespace}.servicebus.windows.net/{topic path}/messages
C) Web API Uri suffix: the last part of the API's public URL. This URL will be used by API consumers for sending requests to the web service.
D) Once done, press Save.
4. Once the API has been created you will end up at the API configuration page.
5. Now click on the Settings tab:
A) Enter a description
B) Ensure to set authentication to None (we will use the SAS token later on to authenticate)
C) Press Save
6. Now click on the Operations tab, and click on ADD OPERATION.
Note: detailed information on how to configure an operation can be found here.
7. A form will appear which allows you to configure and add an operation to the service. By default the Signature menu item is selected, so we start with configuring the signature of our operation:
A) HTTP verb: choose POST, as we will POST messages to our Service Bus topic
B) URL template: we will not use a URL template, so simply enter a forward slash "/"
C) Display Name: enter a name which will be used to identify the operation
D) Description: describe what your operation does
8. Now click on the Body item in the menu bar on the left underneath REQUESTS (we will skip caching as we don't want to cache anything), and fill out the following field:
A) Description: add a description detailing how the request body should be represented
9. Now click on the ADD REPRESENTATION item just underneath the description part and enter Application/XML.
10. Once you've added the representation type, you can add a representation example.
11. Now once again click on the ADD REPRESENTATION item just underneath the description part and enter Application/JSON.
12. Once you've added the representation type, you can add a representation example.
13. Now click on the ADD item in the menu bar on the left underneath RESPONSES (we will skip Caching, Parameters and Body as we don't need them).
14. Start typing and select the response code you wish to return once the message has been sent to the service operation.
15. Now you could add a description and add a REPRESENTATION; however, in our case we will skip this, as a response code 202 Accepted is all we will return.
16. Press Save.
Now that we have defined our API we need to make it part of a product. Within Azure API Management this concept has been introduced as a container holding one or more API definitions to which consumers (developers) can subscribe. In short: if your API is not part of a product definition, consumers cannot subscribe to it and use it. More information regarding the 'Product' definition can be found here.
In order to create a new product, we need to perform the following steps:
1. Select the Products menu item from within the API Management Administration Portal and click on it.
2. A new window will appear listing all products currently available; in this window click on the ADD PRODUCT item.
3. Fill out the following form items in order to create a product:
A) Title: enter the name of the product
B) Description: add a description of the product
C) Require subscription approval: ensure to check this, as it will require any subscription requests to this product to be approved first
D) Press Save
4. Now that we have created our product it is time to see if there is anything else we need to configure before we add policies to it and publish it. Check the settings by clicking on the newly created product.
5. On the Summary tab, click on the link ADD API TO PRODUCT.
6. A new window will pop up; select the API you want to add to the product and once done click Save.
At this point we have created a product but have not yet published it. We will publish it in a bit, but first we need to set up some policies for the API and the operation we created earlier. In order to do this, follow these steps:
1. From the menu bar on your left select the Policies item, and in the main window, in the policy scope section, make the following selections:
A) For the API, select the API you created earlier
B) For the Operation, select the operation you created earlier
2. Now in the Policy definition section, click on ADD POLICY.
3. At this point the empty policy definition is visible.
For our API operation to correctly function we are going to have to add a few policies. These policies should take care of the following functionality
- Authenticate to our Service Bus Topic using our previously created SAS Token
- Automatically convert potential JSON messages to their equivalent XML counterpart
- Add some additional context information to the inbound message, which is converted to brokered message properties when passed on to Azure Service Bus.
General information on which policies are available to you within the Azure API Management Administration Portal and how to use them can be found here
The next few steps will show you how we can add policy statements which will ensure the above mentioned functionality is added.
1. In the Policy definition section, ensure to place your cursor after the <inbound> tag.
2. From the policy statements, select and click on the "Set HTTP header" statement.
3. A "set-header" tag will be added to the policy definition area, which will set the Authorization header containing the SAS token we created earlier. The steps required are listed below:
A) Put in the value "Authorization" for the attribute "name"
B) Put in the value "skip" for the attribute "exists-action"
C) Now take the SAS token you created earlier, wrap the token string in a CDATA element and put all of this between the "value" tags
Textual example:
<set-header name="Authorization" exists-action="skip">
<value><![CDATA[YOUR SAS TOKEN STRING]]></value>
</set-header>
4. Place your cursor just after the closing tag </set-header>.
5. From the policy statements, select and click on the "Convert JSON to XML" statement.
6. A "json-to-xml" tag will be added to the policy definition area, which contains the instructions resulting in JSON messages being converted to XML. Ensure that the tag is configured as mentioned below:
A) Put in the value "content-type-json" for the attribute "apply"
B) Put in the value "false" for the attribute "consider-accept-header"
Textual example:
<json-to-xml apply="content-type-json" consider-accept-header="false" />
7. Now add "set-header" policy statements adding the following headers:
A) Header name: MessageId
- exists-action: "skip"
- value: "00000000-0000-0000-0000-000000000000"
B) Header name: MessageAction
- exists-action: "skip"
- value: "Undefined"
Textual example:
<set-header name="MessageId" exists-action="skip">
<value>00000000-0000-0000-0000-000000000000</value>
</set-header>
<set-header name="MessageAction" exists-action="skip">
<value>Undefined</value>
</set-header>
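Putting all the pieces together, the complete policy definition for the operation would look roughly like this (the outbound section remains untouched):
<policies>
    <inbound>
        <set-header name="Authorization" exists-action="skip">
            <value><![CDATA[YOUR SAS TOKEN STRING]]></value>
        </set-header>
        <json-to-xml apply="content-type-json" consider-accept-header="false" />
        <set-header name="MessageId" exists-action="skip">
            <value>00000000-0000-0000-0000-000000000000</value>
        </set-header>
        <set-header name="MessageAction" exists-action="skip">
            <value>Undefined</value>
        </set-header>
    </inbound>
    <outbound />
</policies>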
8. Once you have added all the policy statements, press the Save button.
Now that we have created a new product and assigned policies, we need to perform some group/user-related actions. This way we can set up a dedicated group of users which is allowed to use our product and its API.
The steps below will guide you through this process
1. Select the Visibility tab, and click on the MANAGE GROUPS link.
2. You will be redirected to the GROUPS page; on this page click on the ADD GROUP link.
3. A new window will pop up; fill out the following fields:
A) Name: a unique name for the group
B) Description: a general description of the group and its purpose
C) Click on Save
4. After you've created the new group, select the Developers menu item and in the main window click on ADD USER.
5. Once again a new window will pop up. In this window fill out the following fields:
A) Email
B) Password
C) First and last name
D) Press Save
6. Now that we have created a new user, we need to make it a member of our group. In order to do so:
A) Ensure to select the new user
B) Click on the ADD TO GROUP item and add the user to the group created earlier
7. Now go back to the PRODUCTS menu item and select the product you created earlier.
8. In the main window follow these steps:
A) Click on the Visibility tab
B) Allow the new group to subscribe to your product
C) Click Save
9. Now click on the Summary tab and click on the PUBLISH link.
10. Now select the Developers menu item and click on the user you created earlier.
11. The main window will now change; in this window click on ADD SUBSCRIPTION.
12. A window will pop up; in this window ensure to put a checkmark in front of the product you want the user to subscribe to. Once done, press the Subscribe button.
At this point you have set up your API and can proceed with testing it. In order to test, we will use the Azure API Management Developer Portal and log on to it using the user account we set up previously.
The steps involved are listed below:
1. First log out of the Azure API Management Administration Portal.
2. Now log in using the email and password of the user you defined earlier.
3. In the top menu, select APIS.
4. Click on the API you created earlier.
5. Click on the button "Open Console".
6. A form will appear which allows you to send a message to the API. Follow the steps below to send a JSON-formatted message to the API:
A) From the dropdown select a subscription key (used to authenticate to the API)
B) Add the following HTTP headers:
- Content-Type: application/json [indicates that the message you are about to send is formatted as JSON]
- MessageAction: NonExisting [this will ensure that the message ends up in our Azure Service Bus subscription named Archive, as this subscription is our catch-all]
- MessageId: 11111111-1111-1111-1111-111111111111
- MessageBatchId: SingleMessageID
C) Add some sample JSON
D) Press HTTP POST
7. Now open up the Service Bus Explorer and connect to your Service Bus instance.
8. Right-click on the Archive subscription and select the option "Receive all messages".
9. One message should be received, which should meet the following test criteria:
MY TEST RESULT: PASSED
10. Now we will perform another test, but this time we will send an XML-formatted message to the API:
A) From the dropdown select a subscription key (used to authenticate to the API)
B) Add two HTTP headers:
- Content-Type: application/xml [indicates that the message you are about to send is XML]
- MessageAction: BusinessActivityMonitoring [this will ensure that the message ends up in our Azure Service Bus subscription named BusinessActivityMonitoring, and it will also end up in our Archive subscription (as it is our catch-all)]
C) Add some sample XML
D) Press HTTP POST
11. Right-click on the BusinessActivityMonitoring subscription and select the option "Receive all messages". One message should be received, which should meet the following test criteria:
- The message should be formatted in XML
- The following custom message properties should be available:
- MessageAction: BusinessActivityMonitoring
- MessageId: 00000000-0000-0000-0000-000000000000
- MessageBatchId: SingleMessageID
MY TEST RESULT: PASSED
12. Now let's receive all messages from our Archive subscription (it should contain a copy of the previous message). The reason for this is that the Archive subscription is our catch-all subscription, and thus all messages sent to the topic end up in this subscription as well.
MY TEST RESULT: PASSED
- Ensure to document your API well; this makes life easier for the consumers
- Using SAS tokens you can fairly easily integrate with Azure Service Bus queues/topics
- If using a policy to set custom headers (which you use for setting the Authorization header), ensure to enclose the SAS token within a <![CDATA[ …… ]]> tag
- The Azure API Management key can be found on the developer page of the Azure API Developer Portal (e.g. https://{namespace}.portal.azure-api.net/developer)
- Code examples on how to consume an API are available from the Azure API Developer Portal, by clicking on the menu item APIS and then clicking on the API you want to test
- Logging into the Azure API Management Administration Portal must be done via the Azure Management Portal
- Azure API Management, in my opinion, could easily be used to virtualize your on-premises internet-facing web services (BizTalk-generated web services, for example). This way you have one central place to govern and manage them.
I hope this walkthrough has contributed to a better understanding of how we as integrators can leverage the Azure API Management service to expose Service Bus entities. Once you've grasped the concepts you could easily take it a step further and, for example, involve Azure BizTalk Services to process messages from certain subscriptions, do some transformations and deliver them to, say, another Azure Service Bus topic; that topic endpoint could then be incorporated into a new API which would allow your API consumers to retrieve their processed messages.
Ah well you get the idea, the possibilities are almost endless as Azure delivers all these building-blocks (services) which enable you to create some amazing stuff for your customers.
I hope to publish a new post in the coming weeks; I’ve already worked out a scenario on paper which involves Azure API Management, Azure Service Bus, Azure BizTalk Services, Azure File Services, and an Azure Website; however implementing it and writing it down might take some time and currently my spare-time is a bit on the shallow side. Ah well, just stay tuned, check my Twitter and this blog.
Until next time!
Cheers
René