API Management Best Practices, Tips, and Tricks: #4 Include a Cache Response Header

Here we are, ready for another edition of API Management Best Practices, Tips, and Tricks. Until now, I have been sharing tips to apply to your Azure API Management policies. However, today I will address a good best practice that you should consider while implementing caching on your operations: including a Cache Response Header in your API responses.

#4 Include a Cache Response Header

In my previous article, I briefly mentioned this topic, but I think it deserves its own individual highlight. Headers are an essential part of REST API design, providing a way to include additional information about the request and response. They are a key piece in allowing us to control the behavior of the API. Some typical headers used in REST APIs include Content-Type, Accept, Authorization, and User-Agent.

One good best practice while applying cached responses on our APIs – which has the advantage of significantly reducing latency for API callers – is to inform API users whether they are receiving a cached response. This way, users or systems know whether they are working with live, fresh data and can act accordingly. Sometimes we cannot rely on a cached version of the resource; sometimes it doesn’t matter. However, by having this strategy, you will be enriching and improving your APIs.

And once again, this is quite simple to accomplish:


	...
	<!-- inbound: when the token is found in the cache, return it to the caller with a
	     custom header flagging it as a cached response. The cache key, variable name,
	     and header name (X-Cached-Response) are illustrative. -->
	<cache-lookup-value key="token" variable-name="varTokenValue" />
	<choose>
		<when condition="@(context.Variables.ContainsKey("varTokenValue"))">
			<return-response>
				<set-status code="200" reason="OK" />
				<set-header name="Content-Type" exists-action="override">
					<value>application/json</value>
				</set-header>
				<set-header name="X-Cached-Response" exists-action="override">
					<value>true</value>
				</set-header>
				<set-body>@((string)context.Variables["varTokenValue"])</set-body>
			</return-response>
		</when>
	</choose>
	...

...

	...
	<!-- outbound: a fresh response coming from the backend, so the custom header
	     (illustrative name) is set to false -->
	<return-response>
		<set-status code="200" reason="OK" />
		<set-header name="Content-Type" exists-action="override">
			<value>text/plain</value>
		</set-header>
		<set-header name="X-Cached-Response" exists-action="override">
			<value>false</value>
		</set-header>
		<set-body>@((string)context.Variables["varToken"])</set-body>
	</return-response>
	...

I hope you enjoy this tip and stay tuned for the following Azure API Management Best practices, Tips, and Tricks.

If you liked the content or found it helpful and want to help me write more content, you can buy (or help buy) my son a Star Wars Lego! 

Author: Sandro Pereira

Sandro Pereira lives in Portugal and works as a consultant at DevScope. In the past years, he has been working on implementing Integration scenarios both on-premises and cloud for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO etc.

He is a regular blogger, international speaker, and technical reviewer of several BizTalk books all focused on Integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices”. He has been awarded MVP since 2011 for his contributions to the integration community.

API Management Best Practices, Tips, and Tricks: #3 How to implement a cache refresh policy

Here we are, ready for another edition of API Management Best Practices, Tips, and Tricks. Today, we will address another helpful best practice, tip, and trick that you should consider while implementing your API policies: How to implement a cache refresh policy.

#3 How to implement a cache refresh policy?

As the Microsoft documentation states, APIs and operations in API Management (APIM) can be configured with response caching. Response caching can significantly reduce latency for API callers and backend load for API providers. APIM provides an out-of-the-box internal cache, but this built-in cache is volatile and is shared by all units in the same region in the same API Management service. However, if you need more robustness and additional capabilities, you can use an external Azure Cache for Redis, for example.

An excellent example of using cache capabilities is to store access tokens. Usually, API tokens have a “time-to-live” (TTL), which is the maximum time that the access token will be valid for use within the application. That means we don’t need to regenerate a token each time we call our API. Instead, we can cache that value in APIM, specifying the cache duration.

When working with cache inside APIM, there are at least two policies you need to know:

  • cache-store-value: The cache-store-value performs cache storage by key. The key can have an arbitrary string value and is typically provided using a policy expression.
  • cache-lookup-value: Use the cache-lookup-value policy to perform cache lookup by key and return a cached value. The key can have an arbitrary string value and is typically provided using a policy expression.

cache-store-value

Policy statement:
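Per the Microsoft documentation, the statement has the following shape (the attribute values below are placeholders):

	<cache-store-value key="cache key value" value="value to cache" duration="seconds" caching-type="prefer-external | external | internal" />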


This is a practical sample of this policy:
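A minimal sketch for the token scenario described above (the key name, variable name, and duration are illustrative):

	<cache-store-value key="token" value="@((string)context.Variables["varToken"])" duration="3600" />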


cache-lookup-value

Policy statement:
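Per the Microsoft documentation, the statement has the following shape (the attribute values below are placeholders):

	<cache-lookup-value key="cache key value" default-value="value to use if lookup fails" variable-name="name of a variable looked up value is assigned to" caching-type="prefer-external | external | internal" />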


This is a practical sample of this policy:
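A minimal sketch for the token scenario (the key and variable names are illustrative). Note that if the key is not found and no default-value is given, the variable is simply not set:

	<cache-lookup-value key="token" variable-name="varTokenValue" />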


Caching is quite simple to implement and use inside APIM. However – and this is the reason for this blog post – in many situations, we need to refresh or force a refresh of our cache or of a cached value. Let’s say that while developing our operation policy, we made a mistake, like caching an incorrect value or setting the duration incorrectly. Now we need to refresh the cached value, but we don’t want to wait for the cache duration to expire – which can be 30 min – or modify the operation policy to add a cache-remove-value statement and then modify that policy again to remove that “workaround”.
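For reference, that kind of one-off removal would use the cache-remove-value policy (the key name is illustrative):

	<cache-remove-value key="token" />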

So, how can we easily handle these requirements or capabilities?

Well, the best way is to always address this requirement by design and implement a cache refresh mechanism.

Taking the token example, this can easily be implemented by:

  • Adding an optional header on your API methods, let’s say:
    • skip-cache header that “allows” a true or false value
      • If it is true, you need to force a refresh of the cached value.
      • Otherwise – if the value is false – you use the cached value.
  • In the inbound policy of your operation, add the following statements:
    • Read the value of the skip-cache header. If it doesn’t exist, the default value is false.
    • Check if the skip-cache header value is false:
      • If the condition is true:
        • Try to read the value from the cache:
          • If it is present in the cache, return the cached value.
          • If it is not present, call the token renewal API.
    • Otherwise, perform a call to the token renewal API.

	<!-- read the skip-cache header; when absent, default to false -->
	<set-variable name="skipCache" value="@(context.Request.Headers.GetValueOrDefault("skip-cache", "false"))" />
	<choose>
		<when condition="@(((string)context.Variables["skipCache"]).Equals("false", StringComparison.OrdinalIgnoreCase))">
			<!-- try to read the token from the cache (key and variable names are illustrative) -->
			<cache-lookup-value key="token" variable-name="varTokenValue" />
			<choose>
				<when condition="@(context.Variables.ContainsKey("varTokenValue"))">
					<return-response>
						<set-status code="200" reason="OK" />
						<set-header name="Content-Type" exists-action="override">
							<value>application/json</value>
						</set-header>
						<set-header name="X-Cached-Response" exists-action="override">
							<value>true</value>
						</set-header>
						<set-body>@((string)context.Variables["varTokenValue"])</set-body>
					</return-response>
				</when>
			</choose>
		</when>
	</choose>
    ... implement the call to the token renewal API

Note: It is a good practice to add a response header stating the response is cached.

  • In the outbound policy of your operation, add the following statements:
    • Read the token renewal API response.
    • Store the value in the cache for next time.
    • And return the response back to the caller.

	<!-- read the token renewal API response (As<string>() is assumed here; adjust to your payload) -->
	<set-variable name="bodyResponse" value="@(context.Response.Body.As<string>())" />
	<set-variable name="varToken" value="@((string)context.Variables["bodyResponse"])" />
	<!-- store the token in the cache for next time (key and duration are illustrative) -->
	<cache-store-value key="token" value="@((string)context.Variables["varToken"])" duration="3600" />
	<!-- return the fresh token to the caller, flagged as not cached -->
	<return-response>
		<set-status code="200" reason="OK" />
		<set-header name="Content-Type" exists-action="override">
			<value>text/plain</value>
		</set-header>
		<set-header name="X-Cached-Response" exists-action="override">
			<value>false</value>
		</set-header>
		<set-body>@((string)context.Variables["varToken"])</set-body>
	</return-response>


API Management Best Practices, Tips, and Tricks: #2 How to access a context variable on body transformation using liquid

Here we are, ready for another edition of API Management Best Practices, Tips, and Tricks. In my prior blog post, I inaugurated this series by addressing the first best practice, tip, and trick: How to validate if a Header is an empty string. Today, I will speak about another helpful best practice, tip, and trick that you should consider while implementing your policies: How to access a context variable on body transformation using Liquid.

#2 How to access a context variable on body transformation using liquid

Liquid is an open-source template language created by Shopify and written in Ruby. It is the backbone of Shopify themes and is used to load dynamic content on storefronts.

Azure API Management uses the Liquid templating language (DotLiquid) to transform the body of a request or response. This can be effective if you need to completely reshape the format of your message, and it can be accomplished using the set-body policy inside the inbound, backend, outbound, or on-error sections. For example:


	...
	<set-body template="liquid">
		{
			"Header": {
				"OrigSystem": "API Management",
				"DateRequest": "{{body.MsgDate}}"
			},
			"InputParameters": {
				"MyObject": {
					"Reference": "{{body.ExtId}}",
					"Type": "{{body.ObjType}}",
					"Id": "{{body.Id}}"
				}
			}
		}
	</set-body>

On the other side, inside API Management policies, users always have the ability to create context variables or, in this particular case, User-Defined Variables or Policy Variables (whatever you want to call them) to store and manipulate data specific to your API’s needs. These variables are often used in policies to make decisions or modify requests and responses.

Creating or reading the value of a context variable inside an APIM policy is a straightforward operation. Once again, the Microsoft documentation explains that simple operation very well. To declare a context variable and assign it a value, we use the set-variable policy, specifying the value through an expression or a string literal. For example:

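Using a policy expression (the header read here is just an illustrative example):

	<set-variable name="myVar" value="@(context.Request.Headers.GetValueOrDefault("User-Agent", "unknown"))" />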

Or

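Using a string literal:

	<set-variable name="myVar" value="My literal value" />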

To read the value of a context variable, we use the following expression:

(string)context.Variables["myVar"]

Or using the GetValueOrDefault function:

context.Variables.GetValueOrDefault("myVar", "This is the default value")

What is more difficult is to find good documentation that explains how to read the value of a context variable inside a body transformation (set-body policy) using a Liquid template. I won’t be wrong in saying that 98% of the information I found online was incorrect, because most of it tells you to use the same approach, (string)context.Variables["myVar"], which is wrong. Instead, we should use dot (.) notation inside the Liquid template to access the variables, similar to how many programming languages access properties deep within a structure. So, in this case, we should use the following:

	<!-- the JSON property names below are illustrative -->
	<set-body template="liquid">
		{
			"Example": {
				"Name": "Sandro Pereira",
				"MyVariableValue": "{{context.Variables.myVar}}"
			}
		}
	</set-body>


Unleash your API Management skills: how to read Query and URI parameters

Before we focus on the goal of this blog post, let’s first understand the distinction between Query Parameters and URI Parameters in API design; this is crucial for you to know. URI Parameters, also known as Path Parameters, are primarily used to identify specific resources, while Query Parameters are used for sorting and filtering those resources.

For instance, imagine a scenario where you need to identify a book by its ISBN (also known as the book ID); in this case, you’d make use of the URI parameter, something like:

GET /books/{book_id}

An example of this call would be GET /books/978-1-3999-0861-0

However, if your aim is to retrieve all the books of a given genre, such as novels, then we should use a Query parameter, something like:

GET /books?genre={type}

An example of this call would be GET /books?genre=novels

Now that we know the key difference between Query Parameters and URI Parameters, let’s see how we can read these parameters inside Azure API Management policies.

How to read URI parameters inside Azure API Management policies

Taking the sample above, for us to read the URI parameter, we have to use the following expression:

context.Request.MatchedParameters["parameter-name"]

Using the previous sample GET /books/{book_id} the expression should be:

context.Request.MatchedParameters["book_id"]

We can also make use of the GetValueOrDefault function to retrieve the URI parameter:

context.Request.MatchedParameters.GetValueOrDefault("parameter-name","optional-default-value")

And, of course, we can apply this expression to several operations like the following (see the sketch after the list):

  • Check URI parameter existence:
    • context.Request.MatchedParameters.ContainsKey("book_id") == true
  • Check if the URI parameter has the expected value:
    • context.Request.MatchedParameters.GetValueOrDefault("book_id", "").Equals("978-1-3999-0861-0", StringComparison.OrdinalIgnoreCase) == true
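For instance, a minimal inbound sketch combining these expressions (the variable name, the custom header name, and the ISBN value are illustrative):

	<!-- read the book_id URI parameter into a variable (empty string when absent) -->
	<set-variable name="bookId" value="@(context.Request.MatchedParameters.GetValueOrDefault("book_id", ""))" />
	<choose>
		<when condition="@(((string)context.Variables["bookId"]).Equals("978-1-3999-0861-0", StringComparison.OrdinalIgnoreCase))">
			<!-- flag the known book on the request before it reaches the backend -->
			<set-header name="X-Known-Book" exists-action="override">
				<value>true</value>
			</set-header>
		</when>
	</choose>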

How to read Query parameters inside Azure API Management policies

Once again, taking the sample above, for us to read the Query parameter, we have to use the following expression:

context.Request.Url.Query["parameter-name"]

We can also make use of the GetValueOrDefault function to retrieve the Query parameter:

context.Request.Url.Query.GetValueOrDefault("parameter-name", "optional-default-value")

Using the previous sample GET /books?genre={type}, the expression should be:

context.Request.Url.Query.GetValueOrDefault("genre")

And, of course, we can apply this expression to several operations like the following (see the sketch after the list):

  • Check Query parameter existence:
    • context.Request.Url.Query.ContainsKey("genre") == true
  • Check if the Query parameter has the expected value:
    • context.Request.Url.Query.GetValueOrDefault("genre", "").Equals("Novels", StringComparison.OrdinalIgnoreCase) == true
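Likewise, a minimal sketch for the Query parameter (the variable name, default value, and custom header are illustrative):

	<!-- read the genre query parameter, defaulting to "all" when the caller omits it -->
	<set-variable name="genre" value="@(context.Request.Url.Query.GetValueOrDefault("genre", "all"))" />
	<choose>
		<when condition="@(((string)context.Variables["genre"]).Equals("novels", StringComparison.OrdinalIgnoreCase))">
			<!-- e.g. tag requests that filter by novels -->
			<set-header name="X-Genre-Filter" exists-action="override">
				<value>novels</value>
			</set-header>
		</when>
	</choose>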

Hope you find this helpful!


BizTalk360: How to monitor BizTalk Server BRE Policies Pending to be Deployed

A few days ago, I helped a fellow “BizTalker” with a PowerShell script implementation that monitors the highest versions of all your BizTalk Server BRE Policies that aren’t in the Deployed state. You can read more about this in my blog post: BizTalk DevOps: Monitor your BizTalk environment using PowerShell – Monitoring BRE Policies Pending to be Deployed.

However, my monitor script is, in fact, a combination of:

  • A piece of PowerShell that invokes a SQL Query and notifies by email if any non-compliance occurs
  • A SQL Server query that will have the ability to check what rules are in a non-compliance state; this is where the magic happens

I did that because, out-of-the-box, BizTalk360 doesn’t have the capability to monitor BRE Policies that aren’t in the Deployed state. Despite that, BizTalk360 allows you to create “Secure SQL Queries” that enable us to:

  • Write optimized SQL queries and store them with friendly names in BizTalk360
  • Assign who will have permissions to run the queries
  • Run the queries, of course

This will allow us to quickly check if any policy is in a non-compliance state.

01-BizTalk360-Secure-SQL-Queries

You can see and download the SQL script here: BizTalk360 Secure SQL Server Query to check BRE that are not in a Deployed state.

Nevertheless, you cannot implement monitoring capabilities on top of these Secure SQL Queries. This feature was designed to allow support staff to safely execute custom queries without impacting the environment.

So, I wanted to go a little further, and to do that, I asked my great friend Lex Hegt (Technical Lead at BizTalk360) to help me implement a monitoring mechanism for this strategy, using the existing features in BizTalk360 and… we did it!

02-BizTalk360-Database-queries-to-monitor-BRE

How to monitor BRE using BizTalk360

The easy way for you is to download and import the BRE monitor alarm that I, along with Lex, developed here: BizTalk360 Monitor BRE Undeployed policies alarm.

To do that, after you download the zip file containing alarm information, you should:

  • Access BizTalk360 and click the ‘Settings‘ button in the upper right corner

01.3-BizTalk360-Import-Alarm-Settings

  • Select the ‘Import and Export‘ option from the left tree menu and select ‘Import‘

01.4-BizTalk360-Import-Alarm-import

  • Import operation is a 5-step process, as shown below:
    • Step 1 – Select configuration file (zip file containing alarm information): This is very straightforward; you just click the ‘Select Files‘ button to choose the configuration file, or you can drag and drop the configuration file into the user interface as shown below.

01.5-BizTalk360-Import-Alarm-import-select-file

    • Step 2 – Choose the alarms you want to import: Once the configuration file is selected, the wizard will automatically move forward to step 2 and display all the alarms that are present in the configuration file. You can then either choose to import all of them or select only the specific alarms you want to import, as shown below.
      • As you will see in the picture below, we also included the Secure SQL Queries; you should select them as well;

01.6-BizTalk360-Import-Alarm-import-import-configurations

      • Note that by default all the alarms are kept in disabled status to avoid sending unnecessary notifications after import; you may want to review all the configuration before enabling them
    • Step 3 – Map environment discrepancies: Once you have chosen the relevant alarms to import, the system will detect if there is any mapping required between the source system and destination system. The three common things that will require mapping are BizTalk Server, SQL Server, and SQL Server Instance names since they will, for sure, differ between environments. The third screen ‘Map Environment‘ will give the user the option to do this mapping as shown below.

01.7-BizTalk360-Import-Alarm-import-map-environment

    • Step 4 – Review the summary: The fourth screen, based on the selections and mappings so far, will give a final summary report. It will also give a bit more detail on each alarm, for example, which artifacts are configured in each one of them (you need to expand the down arrow to view it). Once you are happy, you can just click the ‘Import‘ button

01.8-BizTalk360-Import-Alarm-import-summary

    • Step 5 – Result summary and exception handling: Once the import button is clicked, all the necessary backend calls will be made, and the result of the import process will be shown. You can view all the details, like which artifacts were imported, what mappings were applied, etc. If there are any errors during the import process for specific alarms, those will be displayed as well
  • Once you close the import wizard, you can navigate to the ‘Manage Alarms‘ screen in the Monitoring section, and you can view all the imported alarms.

01.9-BizTalk360-Import-Alarm-manage-alarms

Of course, you can always create your alarm manually by:

  • Click ‘Monitoring‘ in the Navigation panel
  • Click ‘Manage Alarms‘ tab
  • Click ‘New Alarm
    • Enter a descriptive name for the Alarm, Email id (you can enter multiple email ids as comma separated values), and Alarm Description
  • Click ‘Next‘ to enter the Threshold Alarm page
    • Select the ‘Alert on threshold violation’ check box if you want to be alerted upon a threshold violation.
  • Click ‘Next‘ to enter the Health Monitoring Alert page
    • This step is optional; if you want to use the alarm for a regular health/status check, you can select the day(s)/time when you want to receive the status alert
  • Click ‘Next‘ to move to the screen to set up data monitoring alert.
    • Select the checkbox ‘Use this alarm for data Monitor Alerts’ if you wish to associate the current alarm with the data monitors.
    • Select the checkbox ‘Notify on Success as well’ to receive success email alerts for the configured data monitors. If you do not choose the second checkbox, you will not receive the success email alerts.
  • Click ‘Next‘ to move to the last section of adding the Advanced Settings information
    • All the settings in the Advanced settings page are optional. Set the phone number to receive the Notification SMS when the alert is triggered. As with the email ids, you can enter multiple phone numbers as comma separated values.
  • Click ‘OK‘ to create the alarm
    • The alarm will be created, and you will be redirected to the Manage Alarms page

But if you choose this approach you will then need to:

  • Select the ‘Manage Mapping‘ in the Navigation pane under ‘Monitoring‘ and then:
    • Select the alarm you just created above or that you already have in your environment
    • Select the tab option ‘Database Query
    • Finally, create two new queries by selecting the ‘New Query‘ button
      • One for monitoring BRE on Published state
      • Another for monitoring BRE on Saved state

01.10-BizTalk360-Import-Alarm-manage-mapping

    • After you press the ‘New Query‘ button you need to:
      • Enter the Name for the Database Query
      • Add the SQL Instance name
      • And the SQL Database name
        • This needs to be “BizTalkRuleEngineDb”

01.11-BizTalk360-New-Alarm-New-Database-Query-Basic-Details

    • Click ‘Next‘ to add the Query and Threshold details
    • This will define when BizTalk360 must return an alert and notify us if any non-compliance occurs

01.12-BizTalk360-New-Alarm-New-Database-Query-Query-and-Threshold-details

01.13-BizTalk360-New-Alarm-BRE-Alarm

If you noticed, my queries have a red flag, which means that I have non-compliant BRE policies and I will get an alert.

Now your favorite monitoring tool is able to monitor the state of your BRE policies!


BizTalk DevOps: Monitor your BizTalk environment using PowerShell – Monitoring BRE Policies Pending to be Deployed

After a tweet exchange with Mark Brimble, a fellow “BizTalker” that I respect and admire, I realized that I had developed a PowerShell script some time ago to monitor BRE Policies that could help him solve his problem. The initial question or need he was facing was:

  • How would you detect when there is no rules policy in a deployed state? I don’t know how many times I import a rule and forget to set it to deployed…

Monitoring BRE Policies Pending to be Deployed: problem

This is a common problem and, to be honest, I sometimes forget what the correct state of a policy is: Deployed or Published (the correct answer is Deployed). And unfortunately, there isn’t a simple solution to this need; the solutions that I found were:

  • Using the BizTalkFactory PowerShell Provider that nowadays comes with BizTalk Server (the “SDK\Utilities\PowerShell” folder)
  • Create my own monitor script – more work involved

Using the BizTalkFactory PowerShell Provider is quite simple, but it has some limitations for what I would like to achieve; for example, it only shows the policies that are bound to a particular BizTalk Application.

Monitoring BRE Policies Pending to be Deployed: BizTalkFactory PowerShell Provider

And I would like to know and have visibility to all of them because you don’t need to bind a policy to a BizTalk Application on the BizTalk Administration Console to use that policy.

And for that reason, I decided to create my own monitor script that I can easily change and optimize for my scenarios.

The problem I faced in the past was in fact quite similar to what Mark Brimble was describing, maybe with some small differences, but the end goal is the same, so I decided to help him (or at least try) and publish this PowerShell script.

The purpose of this PowerShell script is to:

  • Monitor BRE Policies and check if the highest version of a given policy is in the Deployed state
    • If not, notify someone (your BizTalk administration team);

So how can PowerShell help us?

With this script, you can monitor the highest versions of your BizTalk Server BRE Policies that aren’t in the Deployed state. An email notification will be sent only if the script finds any non-compliance.

Taking this sample:

Monitoring BRE Policies Pending to be Deployed: Policies Sample

The result will be a notification that includes two warnings:

  • Policy1 is in a non-compliance state because version 1.2 is not deployed
  • Policy2 is in a compliance state
  • Policy3 is in a non-compliance state because version 1.2 is neither published nor deployed (this check is optional, but I chose to include it in my monitoring script)

This script is a combination of a PowerShell script and a SQL Server script that allows you to:

  • Set your email notification settings:
#Set mail variables
[STRING]$PSEmailServer = "mySMTPServer" #SMTP Server.
[STRING]$SubjectPrefix = "MBV Notification Report -  "
[STRING]$From = "biztalksupport@mail.pt"
[array]$EmailTo = ("sandro.pereira@devscope.net")
  • And configure a SQL Server script, that in fact were the magic happens. The SQL Script will have the ability to check what rules are in a non-compliance state:
/****** Sandro Pereira & José Barbosa - DevScope ******/
;WITH
cteHist AS (
    -- latest deployment-history row per policy
    SELECT h.*
    FROM [BizTalkRuleEngineDb].[dbo].[re_deployment_history] h
    JOIN (
        SELECT strName, MAX(dtTimeStamp) AS dtTimeStamp
        FROM [BizTalkRuleEngineDb].[dbo].[re_deployment_history]
        GROUP BY strName
    ) q ON h.strName = q.strName AND h.dtTimeStamp = q.dtTimeStamp
),
cteDeployed AS (
    -- highest version per policy (versions ordered descending so rn = 1 is the highest)
    SELECT strName, nMajor, nMinor, nStatus
    FROM (
        SELECT strName, nMajor, nMinor, nStatus,
               ROW_NUMBER() OVER (PARTITION BY strName ORDER BY nMajor DESC, nMinor DESC) AS rn
        FROM [BizTalkRuleEngineDb].[dbo].[re_ruleset]
    ) sub
    WHERE rn = 1
)
SELECT *
FROM cteDeployed d
WHERE d.nStatus = 0
   OR EXISTS (SELECT 1 FROM cteHist h WHERE h.strName = d.strName AND h.bDeployedInd = 0)

The following Windows PowerShell script is a fragment that will help us demonstrate the monitoring capabilities:

# Variables assumed to be defined earlier in the full script (values are illustrative)
$server = "mySqlServer"                  # SQL Server instance hosting BizTalkRuleEngineDb
$sqlQuery = ".\MonitorBREPolicies.sql"   # path to the SQL script shown above
$mailBody = ""

# Run the SQL script; each returned row is a policy in a non-compliance state
$mydata = Invoke-Sqlcmd -InputFile $sqlQuery -ServerInstance $server

foreach ($log in $mydata)
{
    # Create mail body content, one entry per non-compliant policy
    $mailBody += "..."
}

Here is an example of the expected report output from running the Windows PowerShell script sample, if any of the BRE Policies are in an unwanted state.

Monitoring BRE Policies Pending to be Deployed: Report

Note: This type of script must be viewed as a complement to the tools mentioned above or used in the absence of them. The script should also be adjusted to your needs.

THIS POWERSHELL & SQL SCRIPT ARE PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND.

Special thanks to my coworker José Barbosa for helping me optimize the SQL Server script!

The script can be found and downloaded from the Microsoft TechNet Gallery:
Monitoring BRE Policies in your BizTalk environment with PowerShell (18.0 KB)
Microsoft TechNet Gallery


Prototyping your Frontend: How to mock responses in API Management?

Sometimes, especially if the requirements are not completely defined, the best option to create a REST API and give your partners something to try out (documentation, or starting to develop their side) is not to start coding your backend API – which takes more time to develop and is often subject to constant changes, becoming in many cases an inglorious work and a complete waste of time (… sometimes) – but instead to prototype your API in your frontend system. So in this post, I will address the question: How to mock responses in API Management?

API Management provides different, powerful, and easy ways to mock your APIs and return static or dynamic sample responses, even when there is no functional backend service capable of providing them. Mocking can be very useful in several scenarios, like:

  • Create proof of concepts or demos
  • Test Driven Development approach: when the API façade is designed first and the backend implementation comes later or is worked upon in parallel.
  • When the backend is temporarily not operational.
  • When a specific operation is not yet implemented.
  • And so on…

Despite a mock being pretty simple to set up, at the moment there are several ways to achieve this in API Management, some of them simpler and static and others more complex and dynamic. So, let’s see all the options.

Option 1) Using the return-response policy in the “old” Publisher portal

The return-response policy halts the execution of the API pipeline (… if it exists) and returns a response code as specified. You can also send an optional set of headers and a body to the caller.

mock responses in API Management: return-response policy overview

(Picture from https://www.youtube.com/watch?v=SDyUw93hx1w)

Note: One of the beautiful things about using this policy, compared with the mock-response policy that we will describe in Option 3, is that the mock can be implemented in a very dynamic way if you combine this policy with expressions.

To accomplish that we need to:

  • Access the “old” Publisher portal by opening your API Management resource in the Azure Portal and then clicking the “Publisher portal” button

mock responses in API Management: Azure Portal Publisher Portal option

  • On the Publisher portal, select the option “Policies” from the left menu

mock responses in API Management: Publisher Portal Policies

    • Note: Policies can be configured globally or at the scope of a Product, API or Operation.
  • The next step is to define the scope of the policy; in our sample, we will be selecting a particular operation. To do that, you need to select your API from the “API” drop box and then the specific operation from the “Operation” drop box.
  • And then, on the “Policy definition”, click “ADD POLICY”

mock responses in API Management: Publisher Portal Policy scope

  • Add a return-response policy in the inbound section by:
    • Focus the cursor in the inbound section and then, from the Policy statements toolbox, click the “return-response” option

mock responses in API Management: Publisher Portal add return response policy

    • TIP: When mocking, the policy should always be used in the inbound section to prevent an unnecessary call to the backend.
  • This will add the default template of the policy to the policy definition, which you will need to set up according to your needs:

mock responses in API Management: return-response policy template

  • To simplify our case, we just need to return a 200 status code with a static JSON response, and for that, we need to apply the following policy:
<return-response response-variable-name="existing response variable">
     <set-status code="200" reason="OK" />
     <set-header name="Content-Type" exists-action="override">
         <value>application/json</value>
     </set-header>
     <set-body>{
                 "success": true,
                 "data": {
                    "cards": [
                       {
                          "id": 28,
                          "Name": "Sandro Pereira"
                       },
                       {
                          "id": 56,
                          "Name": "Carolina Pereira"
                       },
                       {
                          "id": 89,
                          "Name": "José Pereira"
                       }
                    ]
                 }
     }</set-body>
</return-response>
  • Save the policy; it will take effect the next time a call is made to the operation.

Of course, this policy can be used in many different ways; for example, if you only want to return a 200 OK without a response body, you can use an abbreviated version of the policy that looks like this:

<return-response/>

But as I told you earlier, this can also be very dynamic, as you can see in the “Mocking response in Azure API Management” tutorial provided by Microsoft, where they mock an “Add two integers” operation with a policy that looks like this:

<return-response response-variable-name="existing response variable">
     <set-status code="200" reason="OK" />
     <set-header name="Content-Type" exists-action="override">
         <value>application/json</value>
     </set-header>
     <set-body>{
          "success": true,
          "data": "@((Convert.ToInt32(context.Request.Url.Query.GetValueOrDefault("a", "0")) + Convert.ToInt32(context.Request.Url.Query.GetValueOrDefault("b", "0"))).ToString())"
     }</set-body>
</return-response>

Here, I am taking the actual query parameters provided in the request and implementing all the operation logic of my backend API dynamically inside my policy… pretty cool!

Option 2) Using the return-response policy in the Azure Portal

This second option is exactly the same as the previous one, but instead of doing it in the “old” Publisher portal, we will accomplish the same goal using the “new” API Management capabilities/functionalities embedded in the Azure Portal.

To accomplish that you need to:

  • Access your API Management resource in the Azure Portal and then click the “APIs” option under the “API Management” section in the left menu

mock responses in API Management: Azure Portal APIs

  • Select the API from the API list, then from the operation list select the correct operation and then click the edit button on the “Inbound processing” policies

mock responses in API Management: Azure Portal create or edit operation policy

  • Click “</> Code View” to view or edit your policies, as explained earlier in the first option

mock responses in API Management: Azure Portal create or edit operation policy code view

  • You will find the same editing experience as in the “old” Publisher portal

mock responses in API Management: Azure Portal create or edit operation policy code view result

Options 1 and 2 are the same; the only difference between them is that in the first option we are using the Publisher portal (this portal still exists because not all functionalities have yet been migrated to the Azure Portal), and in the second we are using the Azure Portal UI.

Option 3) Using the Mock-Response policy

The first two options, which in fact use the same policy, are very useful in several distinct scenarios, especially if you want to implement some intelligence (dynamic responses) in your mock.

But what if you want to align your mocking cases with the API specifications that we used while creating our operations? Fortunately for us, Microsoft released a new policy a few months ago to perform this task in an easier way: you can now use the Mock-Response policy to achieve this effect, and it is fully supported through the UI in the Azure Portal.

Note: this policy can also be used in the “old” Publisher portal, but I will not address that here.

TIP: Once again, you can apply this policy to every section, but its typical usage scenario is on the inbound path to provide a response instead of the backend service and also to prevent unnecessary calls to the backend.

To configure this policy from the Azure Portal you need to:

  • Access your API Management instance; under the “API Management” section, click “APIs”, select the API from the API list, then select the correct operation from the operation list, and then click the edit button on the “Inbound processing” policies to open the Inbound processing editor

mock responses in API Management: Azure Portal Mocking

  • You will now notice that a tab titled “Mocking” is available, in which you can configure the desired static response back to your caller by:
    • Selecting the “Static responses” option
    • And what response status should be returned by configuring the “API Management will return the following response” drop box

mock responses in API Management: Azure Portal enable Mocking

Are you wondering where you can define the response?

If you look at the description of this policy in the documentation it says: “… as the name implies, is used to mock APIs and operations. It aborts normal pipeline execution and returns a mocked response to the caller. The policy always tries to return responses of highest fidelity. It prefers response content examples, whenever available. It generates sample responses from schemas, when schemas are provided and examples are not. If neither examples or schemas are found, responses with no content are returned.”

So, in other words, when designing the operation specification:

  • if you provide expected response types and samples:

mock responses in API Management: Azure Portal Mocking Frontend Response types

    • The mock-response policy will take this sample response that you defined as the response to be delivered to the caller

Note: If there are already response status codes – with or without content types, examples, and schemas – configured on that particular operation (as shown in the figure above), these HTTP status codes will be listed at the top of the “API Management will return the following response” drop box in the mocking section.

mock responses in API Management: Azure Portal Mocking user defined responses

  • if you provide a response schema instead of samples, the mock-response policy will generate a sample response from the schema provided
  • If you don’t define samples or schemas, the policy will return a 200 OK response with no content.

Normally, the policy template (or signature) is:

<mock-response status-code="code" content-type="media type"/>

However, similar to the return-response policy, you can use an abbreviated version of the policy that looks like this:

<mock-response/>

And again, it will return a 200 OK status code and the response body will be based on an example or schema, if provided for this status code. The first content type found will be used and if no example or schema is found, a response with no content will be returned.

Conclusion

Both the mock-response and return-response policies can be used in API Management for prototyping your API frontend. Although at first glance they may have similar behaviors, both policies have advantages and disadvantages and can be used in different scenarios/contexts.

But for mocking purposes, I would probably use, or advise using, the mock-response policy, just because it is simpler and takes advantage of the API specification to generate the mock response, which also “forces” the developers (or the admins in charge of your frontend) to properly document the APIs.
