Azure Logic Apps Monthly Update – January 2016

After the holiday season, the Azure Logic Apps team were back with their monthly Google Hangout session – the seventh in the series – on January 28, 2016. This was quite an interesting session, with lots of updates on the new features that were built over the last couple of months. We recommend you read […]

The post Azure Logic Apps Monthly Update – January 2016 appeared first on BizTalk360 Blog.

Blog Post by: Sriram Hariharan

BizTalk Mapper tips and tricks: How to properly implement conditions using Functoids chains


It’s always good to analyze code written by others – please do not consider this a criticism; that is not my intention. By doing so, you can compare techniques, learn new things, and become aware of how some transformation rules are made inside the maps – and sometimes, most often when things are done in the […]
Blog Post by: Sandro Pereira

BizTalk360 Version 8.0 Will Change the Way You Think About BizTalk – Again!

BizTalk360 version 8.0 is almost there! Make sure you get on the list for the launch webinar (Europe or US). We’ve been working on BizTalk360 8.0 for nearly 14 months now, with 9 developers, 2 UI/UX designers, 2 document writers, and 5 QA people. We have put over 30,000 man hours into BizTalk360 version 8.0. Based on […]

The post BizTalk360 Version 8.0 Will Change the Way You Think About BizTalk – Again! appeared first on BizTalk360 Blog.

Blog Post by: Saravana Kumar

We Are Going Agile!

Our team is maturing in terms of experience and composition, and so is our flagship product – BizTalk360. We have added new features, improved existing features, and given the product a complete facelift in terms of UI/UX with version 8.0. We are also identifying existing issues in the product through our QA practice and, most importantly, by […]

The post We Are Going Agile! appeared first on BizTalk360 Blog.

Blog Post by: Arunkumar Kumaresan

XLANG Uncaught exception: A failure occurred while evaluating the distinguished field [MY_FIELD_NAME] against the message part data.


Well, let’s go back to the topic: you are doing crazy things with your maps! I was thinking that I had already seen it all, but the reality is that I continue to be blown away. While testing a project migration I ended up catching […]
Blog Post by: Sandro Pereira

Installing BizTalk Server 2013 R2 in a Basic Multi-Computer Environment (User Guide)


Finally, something that many community members have been asking me to publish is here! I actually created this manual several months ago; however, for several reasons (speaking engagements, publishing other content, and so on) I have been delaying its publication. But I have offered this same guide to all my customers. There are many things […]
Blog Post by: Sandro Pereira

Why do we care so much about the BizTalk Community?

A lot of the time, people are puzzled about why we care so much about the BizTalk community. We write good blog articles covering in-depth topics, and sometimes fight with the competitors of BizTalk Server; we publish a lot of white papers and eBooks on BizTalk, we actively run a weekly webinar on Integration/BizTalk, we organize the most influential annual […]

The post Why do we care so much about the BizTalk Community? appeared first on BizTalk360 Blog.

Blog Post by: Saravana Kumar

Strength of your integration solution is defined by your weakest link

As a start to 2016, we decided to clean up our contact list. Over the last 4+ years, users submitted their contact details to us in various places, like trial downloads, event registrations, blog subscriptions, video subscriptions, etc. In some places, we restricted users from using the same email address twice (for example, trial downloads), […]

The post Strength of your integration solution is defined by your weakest link appeared first on BizTalk360 Blog.

Blog Post by: Saravana Kumar

How to create a marketplace API App from code: Hacking the Azure Portal


So a few months ago, I was trying to work out how to create an instance of a marketplace API App via code. There were no APIs or samples for this at the time. API Apps (like Logic/Mobile/Web Apps) can be deployed using ARM templates, so it was just a matter of working out what to put in the template.

 

Since then, the AAS team has released a guide on what needs to go in the template and what the contract looks like, and supplied a sample on GitHub:

https://azure.microsoft.com/en-gb/documentation/articles/app-service-logic-arm-with-api-app-provision/

https://github.com/Azure/azure-quickstart-templates/tree/master/201-logic-app-api-app-create

 

But their example requires you to supply *all* the information – and for an API App, that’s quite a lot (e.g. gateway name, secret, etc.), especially as your gateway host name needs to be unique.

 

Note: Just as a primer, when you deploy an API App, you’re actually deploying two web sites: a gateway app (which is how you interact with the API App, and which handles security, load balancing, etc.) and then the actual API App itself.

 

I found an easier way to do all this: you can ask the Azure API for an instance of an API App deployment template, with everything all filled out for you.

 

The TL;DR version is that there is an API you can call to get a completed deployment template for any marketplace API App, and then you just need to wrap this in a Resource Group template and deploy it – see further down this article for the details.

 

The technique for working this out can be used to figure out other undocumented APIs in Azure. Note that when I say “undocumented” here, I don’t mean it in the sense of the Win32 API, where undocumented APIs are subject to change and their use is frowned upon: here I just mean APIs that have yet to be documented, or which are documented poorly.

 

First things first: the Azure Portal is just a rich REST client running in your browser. Whenever you do anything in the portal, it makes REST calls against the Azure APIs. By tracing those calls, we can recreate what the portal does.

 

Most modern browsers have some sort of developer option you can turn on which captures those network calls and lets you examine them (e.g. Dev Tools for Internet Explorer).

Alternatively, you can download a tool such as Fiddler (https://www.telerik.com/download/fiddler), which acts as a local proxy to capture all the requests your browser makes.

 

In my case, I used Internet Explorer, and pressed F12 to open the dev tools dialog.

What I wanted to know was: how do I create an instance of the FlatFileEncoder API App?

 

Here are the steps I took to work it out.

 

Step 1: Open the Portal to the point just before you create your resource

When you start the dev tools, all requests (called Sessions) are logged. The portal makes a *lot* of requests, as it’s always updating the data displayed. In order to cut down the number of requests we have to sort through, it’s best to navigate to the correct screen before starting dev tools – in my case, I navigated to the New option, then to Marketplace, then chose the Everything option, and searched for “BizTalk” – this showed me a list of integration API Apps, including the BizTalk Flat File Encoder:

 

At this point, I would normally select the API App and click Create, but I stopped and moved to Step 2.

 

Step 2: Open Dev Tools (or Fiddler) and clear the sessions

In IE, you open Dev Tools by pressing F12. You get a new window at the bottom. Select the Network tab.

 

In Chrome, it’s also F12 (or Ctrl-Shift-I) to open Dev Tools, and again you select the Network tab.

 

In Fiddler, just start Fiddler and ensure File/Capture Traffic is enabled.

 

Clear the list of current sessions: In IE, click the Clear Session button:

 

In Chrome, it’s the clear button:

 

And in Fiddler, click the Edit menu, then Remove, then All Sessions (or press Ctrl-X).

 

Step 3: Take the actions in the portal to create your resource

Leaving Dev Tools open (or Fiddler running), in the portal create your resource – in my case, I selected the BizTalk Flat File Encoder item, selected Create, typed in values (I chose a new App Service Plan and Resource Group), then clicked OK:

 

Step 4: Stop the Network trace and save it

Once the portal has finished (which takes about 2 minutes for a new API App) we can stop the network trace so we don’t end up with a lot more information than necessary.

 

We stop the trace by pressing the Stop Button in Dev Tools (IE and Chrome) or deselecting the Capture Traffic option in Fiddler.

 

Now save the results file – in IE or Chrome, click the Save button, or in Fiddler click File/Export Sessions/All Sessions.

 

Step 5: Filter the results file

Both IE and Chrome save the results file as an HAR file – an HTTP Archive file. Although you can just look at the file in your browser (in the Dev Tools window), I prefer to open it in Fiddler, as there is more choice for decoding requests and responses.

 

Open the file in Fiddler by selecting File, then Import Sessions, then choosing the HTTPArchive (HAR) file type, and clicking Next to select the file.

 

You should end up with a window that looks a bit like this:

 

You’ll probably have over 500 items in your list, depending on when you started your trace and when you stopped it, and what you created.
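A HAR file is just JSON, so if you’d rather filter it programmatically instead of (or as well as) in Fiddler, a few lines of script will do. This is a minimal sketch – the json_sessions helper and the sample entries below are mine, not part of any tool:

```python
def json_sessions(har):
    """Return (url, body) pairs for entries whose response is JSON,
    mimicking Fiddler's "Show only JSON" filter."""
    hits = []
    for entry in har["log"]["entries"]:
        mime = entry["response"]["content"].get("mimeType", "")
        if "json" in mime:
            hits.append((entry["request"]["url"],
                         entry["response"]["content"].get("text", "")))
    return hits

# Tiny inline HAR for illustration; a real capture has hundreds of entries.
har = {"log": {"entries": [
    {"request": {"url": "https://management.azure.com/subscriptions/xxx/providers/Microsoft.Web/sites?api-version=2015-04-01"},
     "response": {"content": {"mimeType": "application/json", "text": "{\"value\": []}"}}},
    {"request": {"url": "https://portal.azure.com/favicon.ico"},
     "response": {"content": {"mimeType": "image/x-icon"}}},
]}}

for url, body in json_sessions(har):
    print(url)
```

The same idea extends to filtering by verb, or by a URL substring such as FlatFileEncoder.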

 

And now the fun part: analysing the results.

 

Luckily Fiddler makes this easier for us by displaying a little icon on the left (which tells us the HTTP verb and content type) – and even better, we can filter the list of sessions so that we only see JSON responses (which is what the Azure Management APIs use): on the right side of the window, select the Filters tab, then under Response Type and Size, select “Show only JSON”. Then click the Actions button (top right of the Filters tab) and select “Run filterset now”:

 

The list of sessions should now be drastically reduced – in my case, I only had 86:

 

And now we can start analysing those API calls!

 

Step 6: Analysing the API calls

This is probably the hardest step – looking at all the calls and seeing what they do. An easy shortcut is to just search for the word FlatFileEncoder in the URL or request data, but I like to go through the calls to see what each one does.

 

The majority of the early calls are to do with populating the dialog displayed when you create a new API App, i.e. a list of App Service Plans (ASPs), Resource Groups, Regions, etc.

 

For example, there is a bunch of calls one after another that look like this:

 

They all make a request against the Microsoft.Web provider API, supplying an api-version and, optionally, the type of data we want:

 

  1. The first call asks for serverfarms – this is the old name for App Service Plans (and helps you understand what an App Service Plan is, as it’s implemented as a server farm: a collection of web servers – the number of servers and their capabilities are determined by the pricing tier).

    A GET request is issued on this URL:

https://management.azure.com/subscriptions/(subId)/providers/Microsoft.Web/serverfarms?api-version=2015-04-01&_=1452592242795

Note: the suffix at the end of that URL (the _=1452592242795 part) is a sequence number, used to differentiate multiple identical requests from each other.

If you have any ASPs you’ll see a response similar to this:

 

 

  2. The second call doesn’t ask for a particular type of data, which means all generic data is returned, e.g. details on mobile sites, site pools, available hosting locations, etc.

 

A GET request is issued on this URL:

https://management.azure.com/subscriptions/(subid)/providers/Microsoft.Web?api-version=2015-01-01&_=1452592242797

 

The response will look something like this:

 

  3. The third call asks for deploymentLocations – and this is what is returned, along with a sort order (interestingly, the Indian regions are always given a sort order of Int32.Max (sortOrder=2147483647), so they always appear at the end of the list, if you’re able to see them).

 

A GET request is issued on this URL:

https://management.azure.com/subscriptions/(subid)/providers/Microsoft.Web/deploymentLocations?api-version=2015-04-01&_=1452592242798

 

The response looks like this:

 

  4. The fourth call is for gateways – this returns a list of any API App gateways that exist already.

 

A GET request is issued on this URL:

https://management.azure.com/subscriptions/(subid)/providers/Microsoft.AppService/gateways?api-version=2015-03-01-preview&_=1452592242801

 

The response will look something like this:

 

  5. The fifth call is for sites – this returns any mobile, web, API or Logic Apps you have deployed, as they’re all deployed as web sites.

 

A GET request is issued on this URL:

https://management.azure.com/subscriptions/(subid)/providers/Microsoft.Web/sites?api-version=2015-04-01&_=1452592242804

 

The response will look something like this:

 

The above is all well and good – it’s useful to know how to get this data. But it doesn’t help us in our aim of creating a new API App.
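For reference, the five lookups above all share the same URL shape against the management endpoint, so they are easy to generate from code. A sketch – the provider_url helper is my own naming, and the cache-busting _= suffix is left off:

```python
def provider_url(sub_id, provider, resource_type=None,
                 api_version="2015-04-01"):
    """Build a management.azure.com provider query URL like the ones
    the portal issues."""
    path = f"https://management.azure.com/subscriptions/{sub_id}/providers/{provider}"
    if resource_type:
        path += f"/{resource_type}"
    return f"{path}?api-version={api_version}"

# The five portal lookups, recreated:
sub = "(subid)"
urls = [
    provider_url(sub, "Microsoft.Web", "serverfarms"),
    provider_url(sub, "Microsoft.Web", api_version="2015-01-01"),
    provider_url(sub, "Microsoft.Web", "deploymentLocations"),
    provider_url(sub, "Microsoft.AppService", "gateways", "2015-03-01-preview"),
    provider_url(sub, "Microsoft.Web", "sites"),
]
for u in urls:
    print(u)
```

Each of these would be issued as a GET with an ARM bearer token in the Authorization header.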

But by looking through the rest of the calls, we can spot 3 calls that are key:

 

  1. Getting the version number or internal name for the Flat File Encoder.

This is the first call we can see that has the words FlatFileEncoder in the request:

 

(Ignore the seemingly duplicate call above it (session 156): that’s an HTTP OPTIONS call used to check what actions this endpoint supports – the portal uses this pattern a lot, issuing an OPTIONS call before doing a POST.)

 

As per the request, we can see that we’ve requested metadata for the FlatFileEncoder. Let’s look at the response:

 

We can see that the response gives us the current version of the API App, the type of app, the display name, and the microServiceId i.e. the name of the API App we have to use to create it (although we actually needed to know that part in order to request the metadata!).

 

We can also see that there are no specific app settings, and no dependencies needed for the API App.

 

  2. Getting a deployment template for the FlatFileEncoder

This is the request I found most interesting.

 

We create a request that looks like this:

 

Here’s that in text format:

{
  "microserviceId": "FlatFileEncoder",
  "settings": {},
  "hostingPlan": {
    "subscriptionId": "subid",
    "resourceGroup": "Api-App-0",
    "hostingPlanName": "FFASPNew",
    "isNewHostingPlan": "true",
    "computeMode": "Dedicated",
    "siteMode": "Limited",
    "sku": "Standard",
    "workerSize": "0",
    "location": "West Europe"
  },
  "dependsOn": []
}

 

The request is saying that we’d like to create a new FlatFileEncoder microservice (i.e. API App), that we’d like to use a new ASP called FFASPNew, and that we’re using a Resource Group called Api-App-0 (which is obviously a temporary name).

 

We POST this request to this URL:

https://management.azure.com/subscriptions/(subid)/resourcegroups/(resourcegroupname)/providers/Microsoft.AppService/deploymenttemplates/FlatFileEncoder/generate?api-version=2015-03-01-preview
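If you want to reproduce this call outside the portal, the URL and body can be assembled like this. A sketch only – the function name is mine, the field names are copied from the captured request, and acquiring and attaching the ARM bearer token is left out:

```python
import json

def build_generate_request(sub_id, resource_group, microservice_id,
                           hosting_plan_name, location="West Europe"):
    """Build the URL and body for the deploymenttemplates/generate call
    observed in the trace."""
    url = (f"https://management.azure.com/subscriptions/{sub_id}"
           f"/resourcegroups/{resource_group}"
           f"/providers/Microsoft.AppService/deploymenttemplates"
           f"/{microservice_id}/generate"
           f"?api-version=2015-03-01-preview")
    body = {
        "microserviceId": microservice_id,
        "settings": {},
        "hostingPlan": {
            "subscriptionId": sub_id,
            "resourceGroup": resource_group,
            "hostingPlanName": hosting_plan_name,
            "isNewHostingPlan": "true",
            "computeMode": "Dedicated",
            "siteMode": "Limited",
            "sku": "Standard",
            "workerSize": "0",
            "location": location,
        },
        "dependsOn": [],
    }
    return url, json.dumps(body)

url, body = build_generate_request("(subid)", "Api-App-0",
                                   "FlatFileEncoder", "FFASPNew")
# POST `body` to `url` with an ARM bearer token to receive the
# completed deployment template.
```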

 

I wasn’t sure what to expect from the response, but look at this:

 

It’s a really long response… but what’s in there is a complete ARM template to create an API App – all you need to do is wrap it in the template to create a Resource Group.

 

What’s interesting is that not only does it use variables (which are passed to the template) to define things such as the gateway name, or ASP name, it also supplies us with a unique gateway name, plus the secrets to use for the API App:

 

  3. Deploying the FlatFileEncoder

Deploying the API App now becomes a simple matter of wrapping the response from the last call in a new ARM template for the resource group.

 

The request looks like this:

 

In text, that looks like:

{
  "resourceGroupLocation": "West Europe",
  "resourceGroupName": "(resource group name)",
  "resourceProviders": [
    "Microsoft.Web",
    "Microsoft.AppService"
  ],
  "subscriptionId": "(subid)",
  "deploymentName": "FlatFileEncoder",
  "templateLinkUri": null,
  "templateJson": "(api app template)",
  "parameters": {
    "FlatFileEncoder": {
      "value": {
        "$apiAppName": "FlatFileEncoder"
      }
    },
    "location": {
      "value": "West Europe"
    }
  }
}

 

The response you got from the previous step (i.e. the content of the value field) is supplied in the templateJson field, but it is escaped – that is, any double quotes (") become \" and any single backslashes (\) become \\.
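This escaping is exactly what you get when you embed one JSON document inside another as a string value, so you don’t need to do it by hand. A small illustration (the toy template below is made up):

```python
import json

template = {"parameters": {"gatewayName": {"type": "string"}}}

# Embedding the template as a string value escapes the inner quotes
# and backslashes automatically.
outer = {"templateJson": json.dumps(template)}
wire = json.dumps(outer)
print(wire)
```

Round-tripping with json.loads recovers the original template, which is a handy sanity check.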

 

This request is POSTed against this URL:

https://portal.azure.com/AzureHubs/api/deployments

 

The response you get back will give you details about the provisioning of the API App, including a CorrelationId you can use to check progress.

 

This is a snippet of the response:

"mode": "Incremental",
"provisioningState": "Accepted",
"timestamp": "2016-01-12T22:55:41.6680867Z",
"duration": "PT0.4716789S",
"correlationId": "30f8ddc6-a44d-4afc-9b03-9c2645c569b4",

 

  4. Checking the status of the deployment

The portal then issues multiple calls to check the status of the deployment.

 

A GET request is made to this URL:

https://portal.azure.com/AzureHubs/api/deployments?subscriptionId=(subid)&resourceGroupName=(resource group name)&deploymentName=(api app name)&_=1452592189872

 

This will return a response like this:

 

Once the deployment has finished, it will change to this:

 

And if it fails, you’ll get a failure response.

 

This is how the portal knows to show you a tile with a “deploying” animation on it, and how it knows to stop showing that when the deployment has finished.
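The polling loop itself is straightforward to reproduce. Here’s a sketch with the status check stubbed out – in real use, get_state would issue the AzureHubs GET above and return the provisioningState field:

```python
import time

def wait_for_deployment(get_state, poll_interval=0.0, max_polls=60):
    """Poll until provisioningState leaves the in-progress states,
    the way the portal's tile animation does."""
    for _ in range(max_polls):
        state = get_state()
        if state not in ("Accepted", "Running"):
            return state
        time.sleep(poll_interval)
    raise TimeoutError("deployment did not finish in time")

# Stub that simulates a deployment finishing on the third poll.
states = iter(["Accepted", "Running", "Succeeded"])
result = wait_for_deployment(lambda: next(states))
print(result)  # Succeeded
```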

 

  5. Bonus: Finding recommended items in Azure

I noticed this little request whilst looking through the calls that the portal makes.

 

A GET request is made to this URL:

https://recommendationsvc.azure.com/Recommendations/ListFrequentlyBoughtTogetherProducts?api-version=2015-11-01

 

This will return a response like this:

 

This appears to be a way for the portal to recommend other items that you might be interested in. I haven’t noticed this functionality before!

 

And that’s it – there are really only two steps involved: get the deployment template, and then deploy it.

 

All the rest is useful if you’re creating a UI where a user can select from a list of existing Resource Groups and ASPs and Locations etc., but not necessary if you already have all that information or are creating new ASPs and Resource Groups.

 

The techniques I showed here can be used for lots of different activities in the portal – you don’t have to wait until Microsoft creates .NET management libraries or publishes details of the management APIs; you can go and find out how to do it yourself. Go get to it!

 

I’m hoping this has all been useful.

 

BizTalk Tracking: best practices & troubleshooting

How it works

The BizTalk tracking service works with 2 separate artifacts:

  • The TDDS service (the BizTalk tracking host instance)
  • The SQL Agent job “TrackedMessages_Copy_BizTalkMsgBoxDb”

Each service has its own particular task in moving tracking data to the BizTalkDtaDb database.

The TDDS service is responsible for moving all the tracking events from the BizTalkMsgBoxDb to the BizTalkDtaDb and/or the BAMPrimaryImport database. The TDDS service runs in the host instances that have “Allow Host Tracking” enabled.

The SQL job is responsible for moving the actual message body contents from the BizTalkMsgBoxDb to the BizTalkDtaDb.

Best Practices

Configure a dedicated tracking host.

  • Do not bind this host instance to any other BizTalk artifact (receive location / send port / orchestration).
  • Create only 1 dedicated host per message box database (so in most cases, only one host should be configured for tracking).

Only activate tracking where it is necessary.

This is a very important point that is often overlooked during the development of a flow. To avoid performance problems, the BizTalkDtaDb database needs to be as small as possible. Only activate tracking where it is absolutely necessary.

  • Global tracking: If you don’t need any tracking at all, you can disable global tracking. Tracking can be very useful to detect bottlenecks / performance issues for a certain application, so I wouldn’t recommend disabling tracking on a global level.
  • Pipeline tracking: If it’s not necessary to track any events for a particular pipeline, disable the pipeline tracking. Note: you won’t see any events for that pipeline, even if the tracking on the ports is enabled!
  • Port tracking: You can define tracking for a certain port. It’s important to think about which ports you want to track. Too much tracking can lead to a large tracking database.

Disable orchestration start and end shape tracking.

  • This tracking is only useful for the orchestration debugger, which is rarely used on a production environment, and it can have a large impact on performance if there are a lot of long-running orchestrations. When this tracking is enabled, the events are saved in the “dta_debug_trace” table in the BizTalkDtaDb; it is recommended that this table doesn’t contain more than 1 million rows.

Track the promoted context properties that need to be searchable.

  • If you need to be able to search for messages based on some context properties, make sure to enable the tracking on the property schema.

Configure the DTA purge & archive job.

  • To prevent the tracking database from growing indefinitely, you need to configure this job properly.

Troubleshooting the tracking service

There can be several reasons why the tracking isn’t working as expected.

  • The SQL agent jobs or tracking host instance aren’t working properly
  • The tracking isn’t configured properly
  • Certain issues with the BizTalk runtime

The first things you need to check when you experience issues are the BizTalk host instances and the BizTalk SQL jobs.
If the host instances are not started or the SQL job “TrackedMessages_Copy_BizTalkMsgBoxDb” doesn’t complete successfully, the tracking simply won’t work.

If those services are running OK, you should check whether the tracking is configured properly. If tracking is not enabled for certain ports or pipelines, it’s normal that you can’t access the tracking information for those ports. Enabling the tracking should fix the issue (but only for new messages).

If the above isn’t the reason why the tracking isn’t working, it’s an indication that there is something wrong in the BizTalk environment. These are the tools you can use to further troubleshoot the issues:

  • Event Viewer: In most of the cases, checking the application event log of the servers is very helpful for troubleshooting the issues. Check if you see errors related to “BAM Eventbus service” or “BizTalk Server” that can help you detect the problem.
  • Performance Counters: Using perfmon, you can check the performance counters for the “BizTalk:TDDS” service. It can be useful to verify whether there are any failed TDDS batches.
  • Table “TDDS_FailedTrackingData”, located in the BizTalkDtaDb database: if there are recent entries in this table, it means that there were issues moving the tracked events/data. Check the “errmsg” column for the reason why it failed.
  • TDDS tracing: You can enable TDDS tracing by adding the entry below to the btsntsvc.exe.config file (or btsntsvc64.exe.config if you are running a 64-bit tracking host), located in the BizTalk installation folder. After adding the entry, restart the tracking host instance. If there are failures in the TDDS service, you will then see clear error messages in the log file.

<system.diagnostics>
 <switches>
  <add name="Microsoft.BizTalk.Bam.EventBus" value="1" />
 </switches>
 <trace autoflush="true" indentsize="4">
  <listeners>
   <add name="Text" type="System.Diagnostics.TextWriterTraceListener" initializeData="c:\tdds.log" />
  </listeners>
 </trace>
</system.diagnostics>

Out of sequence tracking streams

I would like to elaborate on one particular issue we encountered at a customer.

The tracking of their BizTalk 2009 environment suddenly stopped working. There were no errors in the event log and no clear indications of issues with BizTalk. The tracking data remained in the BizTalkMsgBoxDb and wasn’t moved to the BizTalkDtaDb.

This can have a large impact, since the MessageBox database keeps growing. This can cause a lot of performance issues and eventually database size throttling!

On this particular environment, these problems occurred after some disk space issues on the SQL server. When the disk space issues were resolved, they noticed that newly tracked messages were no longer visible in the BizTalk Administration console, even though the messages were being processed correctly.

We enabled TDDS tracing and noticed the following entries:

Microsoft.BizTalk.Bam.EventBus.TDDSDecoder Run Info 1754188 records are missing from the sequence number range for this batch.
Microsoft.BizTalk.Bam.EventBus.TDDSDecoder Run Info Record with sequence number 9780138 is not in the batch. Trying to read it again..
Microsoft.BizTalk.Bam.EventBus.TDDSDecoder Run Info 1757776 records are missing from the sequence number range for this batch.
Microsoft.BizTalk.Bam.EventBus.TDDSDecoder Run Info Record with sequence number 9760524 is not in the batch. Trying to read it again..
Microsoft.BizTalk.Bam.EventBus.TDDSDecoder Run Warning Record with sequence number 9780138 was not found.
Microsoft.BizTalk.Bam.EventBus.TDDSDecoder Run Info Record with sequence number 9780139 is not in the batch. Trying to read it again..
Microsoft.BizTalk.Bam.EventBus.TDDSDecoder Run Warning Record with sequence number 7956152 was not found.
 
This error occurs when the tracking streams are out of sync.
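With TDDS tracing enabled, a large tdds.log can be scanned for these warnings with a short script. A sketch – the parsing is naive and based only on the trace lines shown above:

```python
def missing_records(log_lines):
    """Extract sequence numbers from 'Record with sequence number N
    was not found.' warnings in a TDDS trace."""
    missing = []
    for line in log_lines:
        if "Warning" in line and "was not found" in line:
            for token in line.split():
                if token.isdigit():
                    missing.append(int(token))
    return missing

# Sample lines copied from the trace above.
sample = [
    "Microsoft.BizTalk.Bam.EventBus.TDDSDecoder Run Info Record with sequence number 9780138 is not in the batch. Trying to read it again..",
    "Microsoft.BizTalk.Bam.EventBus.TDDSDecoder Run Warning Record with sequence number 9780138 was not found.",
    "Microsoft.BizTalk.Bam.EventBus.TDDSDecoder Run Warning Record with sequence number 7956152 was not found.",
]
print(missing_records(sample))
```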

Thanks to the blog post by Michael Stephenson, we were able to resolve this problem.

The tracking streams were out of sync (possibly due to the disk space problems on the database server). Running the update query fixed the issue. Please note that it isn’t recommended to run this query on a production environment!

I hope you found this blog post useful and that you can apply some of this information in your BizTalk implementation. If you have any remarks / questions, feel free to leave a comment.