BizTalk Server 2006 Host/Instances Configuration

I've seen one too many BizTalk groups with poorly configured hosts and host instances, so I've decided to write a BASIC post on the subject. This is a fairly sophisticated topic, touching on scaling, high availability, resource management, etc., so please don't assume that what I'm about to say is universally appropriate. I'll give a few suggestions and, more importantly, the reasons behind those suggestions, which should give you the background you need to create your own configuration wisely.

Suggested Starting Point

  1. Enable "Allow host tracking" on the default in-process BizTalk Application Host
    1. Do NOT enable "Allow host tracking" on any other host
      Confirm with the BTS docs if you like, but only one host needs to have this enabled to move the messages from the MessageBox to the tracking database
  2. Create 3 in-process hosts for each logical application that you have. (Use 2 hosts if you are not using any orchestrations.)
    1. One for Send operations      [Bind your send ports to this host]
    2. One for Receive operations  [Bind your receive ports to this host, except HTTP or SOAP, which run in an isolated host]
    3. One for Processing             [Bind your orchestrations to this host, if you have any]

Application Isolation

Having separate hosts for your applications allows you to restart the host instances for one application (perhaps because of a re-deployment) without interrupting the other apps.

Scaling Out

Having a host for Send/Receive/Process allows you to distribute your application across several servers in your BizTalk group in a flexible manner.  You can increase the number of host instances running to "beef up" any combination of these three types of operations as required.

High Availability

Have at least two host instances running for every host that you have.  This way, any single node can fail and the other node will continue to operate. This can become more complicated for receive hosts, so check the HA configuration for whatever adapter types you are running.
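The rule of thumb above (at least two instances per host, on different servers) can be sanity-checked with a small script. This is an illustrative sketch, not a BizTalk API; the host and server names below are made up.

```python
# Sketch: verify every host has instances on at least two servers,
# so that a single node failure leaves each host running elsewhere.
# Host and server names are hypothetical.

def single_points_of_failure(host_instances):
    """Return hosts whose instances all live on one server."""
    return [host for host, servers in host_instances.items()
            if len(set(servers)) < 2]

hosts = {
    "App1_Receive": ["BTSNODE1", "BTSNODE2"],
    "App1_Process": ["BTSNODE1", "BTSNODE2"],
    "App1_Send":    ["BTSNODE1"],          # only one node -> not HA
}

print(single_points_of_failure(hosts))     # -> ['App1_Send']
```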

Some Caveats

  1. Too many hosts. 
    1. My BizTalk groups seldom have more than 22 hosts or so (7 apps × 3 hosts + the default host for tracking). If you experience problems in groups with more hosts than that, I recommend you consider "too many hosts" as a possible cause.
    2. Settings in BTSNTsvc.exe.config may give you weird results, because you may assume they apply to the whole group when they really apply to the host instances on that machine
    3. Again, regarding BTSNTsvc.exe.config: changes to this file are picked up only when a host instance restarts. You may change the configuration and restart only a handful of your host instances; later down the road, you restart other host instances, which then begin to have problems because of the contents of BTSNTsvc.exe.config. This is VERY hard to troubleshoot, and I have two recommendations.
      1. Don't change BTSNTsvc.exe.config unless you absolutely MUST.
      2. If you do change BTSNTsvc.exe.config, restart ALL of your host instances immediately so you can quantify the effect of the changes right away.
  2. Single Box groups
    1. If you're running BizTalk all on one box, then partitioning out the operations hardly helps you. In this case, I would just create one host per application so you still get application isolation, without worrying about the scale-out features or HA (which you can't really use with just one box anyway).

BizTalk: Anti-Patterns: Business Rule Engine

Let me describe one of the projects I built with the Business Rule Engine (BRE) from the Microsoft BizTalk tool set.
The input is EDI data in a "very flexible format". The source partners fill in the fields of these documents using different applications, and different people. As a result, the data is often spread across several fields when the filling application has a limit on the field size.
 
For example:
Good data:
N1*CN*YOUNG BAY INTERNATIONAL INC
N3*2655 ST-JACQUES#101*MONTREAL QC H3J 1H8 CANADA
N3*TEL:514-8121887 514-9313157
 
 
Bad data:
N1*CN*AFS -VISA INTERNATIONAL (HK) LTD    SUITE NO.1 29/F SKYLINE T
N3*OWER      39 WANG KWONG ROAD, KOWLO
N3*ON BAY,   KOWLOON, HONG KONG
 
The data ends up in the wrong fields, gets spread across several fields, and so on.
 
I needed to map this data into the local database. For example:
we've got the source value "YOUNG BAY INTERNATIONAL INC", which can be mapped to several "YOUNG BAY"-like companies, depending on the other source data. In our database we have:
“YOUNG BAY INC”
“YOUNG BAY INT’L CO”
“YOUNG-BAY INTERNATIONAL “

plus variations in the other source data such as City, State, CountryCode, PostalCode, Address, etc.
 

 

This project looks like a good candidate for the BRE. The data in the input set has many relations (rules), and those rules are constantly improved and modified. An ideal candidate for the BRE, isn't it?
We can process the data with the BRE and easily develop and modify the rule sets.
 
The project was implemented in two stages.
The first was "unification of the data": it normalized the punctuation, "Incorp" -> "Inc", "Logictic" -> "Logistics", "HONGKONG" -> "HK", "U  S  A" -> "US", "H3J 1H8" -> "H3J1H8", and so on.
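The unification stage can be sketched as a small normalization pass. The replacement table below is just the handful of examples from this post, not the real project's list; everything here is illustrative.

```python
import re

# Sketch of the "unification" stage: uppercase, strip noise punctuation,
# collapse whitespace, then apply a replacement table. The table is a
# hypothetical subset taken from the examples in the post.
REPLACEMENTS = {
    "INCORP": "INC",
    "LOGICTIC": "LOGISTICS",
    "HONGKONG": "HK",
    "U S A": "US",
}

def unify(text):
    text = re.sub(r"[.,]", " ", text.upper())    # drop punctuation
    text = re.sub(r"\s+", " ", text).strip()     # collapse spaces
    for bad, good in REPLACEMENTS.items():
        text = re.sub(r"\b%s\b" % re.escape(bad), good, text)
    return text

print(unify("Young  Bay International, Incorp."))
# -> YOUNG BAY INTERNATIONAL INC
```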
The second stage was "company resolving": it took the unified data from the input documents and searched the corporate database to map the input data to corporate records. The example above shows this mapping.
 
This part was implemented with the help of the BRE.
The rule sets were implemented; there were about forty rules.

 

And from the start there were big performance problems.
Why? Most of the rules did not have conditions with simple boolean comparisons like <Name_Company> == "value1"; they had resource-consuming database queries.
In the BRE, MOST of the rule conditions are tested (invoked) up front. (For an explanation, see the great article by Charles Young: http://geekswithblogs.net/cyoung/articles/79500.aspx.) In my case, nearly ALL forty database queries hit the system's performance at once.
I did not need all the queries up front, so I separated the rules into stages: if the first set of rules produced no result, the second set went to work, and so on.
Sounds good? Let's see.
One advantage of the BRE is its Rete network (the Rete algorithm, see http://en.wikipedia.org/wiki/Rete_algorithm), which gives us the ability to process large data sets efficiently. The BRE takes care of the order in which rule conditions are tested. Moreover, the BRE caches the fields of an XML document and the columns of a database table: when that data is accessed a second time within the policy, the values are usually retrieved from the cache. However, when the members of a .NET object are accessed a second time, the values are retrieved from the .NET object, not from a cache. That means that if I invoke the same "test" method from several rules, the BRE does not cache the result of this method; it re-invokes the method each time. Unfortunately, that was my case.
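Since the engine does not cache results of method calls on .NET facts, one workaround is to memoize inside the helper itself, so repeated rule evaluations hit your own cache instead of the database. A minimal sketch, with a hypothetical `resolve_company` standing in for the expensive query:

```python
from functools import lru_cache

CALLS = {"count": 0}

@lru_cache(maxsize=None)
def resolve_company(name):
    """Stand-in for the expensive database query. Memoizing here means
    several rules testing the same fact trigger only one real lookup."""
    CALLS["count"] += 1
    # ...a real implementation would query the corporate database...
    return name.startswith("YOUNG BAY")

# Five rule evaluations of the same condition -> one "query".
for _ in range(5):
    resolve_company("YOUNG BAY INTERNATIONAL INC")
print(CALLS["count"])   # -> 1
```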
The user has a few techniques to manage the order, and I used the simplest: I added a flag for each test (or stage) that is raised after that test completes. The conditions checked these flags, and a lowered flag prevented the heavy part of the condition from running. Now the conditions ran in a managed order.
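The flag trick boils down to short-circuiting guards: the cheap flag test comes first in the condition, so the expensive predicate never fires until the earlier stage is done. An illustrative sketch (the class and method names are made up):

```python
# Sketch of the flag trick: each rule's condition checks a "stage done"
# flag first; short-circuit evaluation keeps the expensive predicate
# from running while the flag is down.
class Facts:
    def __init__(self, name):
        self.name = name
        self.stage1_done = False
        self.queries_run = 0

    def expensive_lookup(self):      # stand-in for a database query
        self.queries_run += 1
        return False

facts = Facts("YOUNG BAY INC")

# Stage-2 rule condition: guarded, so the lookup does not fire yet.
if facts.stage1_done and facts.expensive_lookup():
    pass

print(facts.queries_run)   # -> 0, the guard kept the query from running
```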
 
Looks weird.
 
First of all, the BRE is meant to separate the business logic into an independent layer that is easy to create, modify, and maintain.
Did I gather the business logic into the rules? No. The main part of the logic remained in the database queries and the classes that wrapped them; the rules only managed the invocation of those queries. Most of the logic was outside the BRE rules layer and could easily have been implemented in .NET code. Maintaining the BRE infrastructure for work as simple as invoking methods was, in my case, not a good choice.
 
Second: the BRE takes care of the order in which rule conditions are tested. It SHOULD take care of it; otherwise we throw away this advantage of the Business Rule Engine. If we manage the rule condition order ourselves, we end up back where we started. I was using the BRE, which was cool, but all the rule ordering was my responsibility, and most of my effort went into avoiding the BRE's most impressive feature.
 
If the BRE managed the rule condition order, the performance was bad.
If I managed the rule condition order, why did I need the BRE?
 
After several modifications to the rules, I decided to redesign the project. I created a simple class to manage the rules and got rid of the BRE artifacts.
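The replacement class can be very small when the processing order is fixed up front. A minimal sketch of such an ordered rule runner (names and rules are hypothetical):

```python
# Minimal sketch of the replacement: rules run in a fixed order and the
# first matching rule wins. No Rete network, no hidden evaluation order.
class RuleRunner:
    def __init__(self):
        self.rules = []          # list of (condition, action), in order

    def add(self, condition, action):
        self.rules.append((condition, action))

    def run(self, facts):
        for condition, action in self.rules:
            if condition(facts):
                return action(facts)
        return None

runner = RuleRunner()
runner.add(lambda f: f["name"] == "YOUNG BAY INC",
           lambda f: "exact match")
runner.add(lambda f: f["name"].startswith("YOUNG BAY"),
           lambda f: "fuzzy match")

print(runner.run({"name": "YOUNG BAY INT'L CO"}))   # -> fuzzy match
```

Because the list is ordered, the expensive conditions can simply be placed last, which is exactly the control the flag trick was simulating inside the BRE.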
Ufff… The BRE was cool, but my case needed something different.
Now everything is simple and manageable.
You might tell me, "Now you cannot use the Business Rule Composer as a user interface for modifying and improving the business-logic layer." That's true: now I cannot delegate the rule-modification job to a third party. If that is a requirement, the BRE and the Composer would be a good choice.
 
 
Conclusions ("Don'ts"):
 
1. If your rules have resource-consuming conditions (predicates), expect poor performance.
2. If your rules have a well-defined processing order, the BRE is probably not your choice. If you don't care about the rule processing order, the BRE can be useful.
3. If your rule conditions use values retrieved from .NET objects, expect poor performance, since those values are not cached.
4. Rules should cover the biggest part of the business logic. If the rules mostly make calls out to .NET classes, think about redesigning the project.

 

====================================================================================
Be aware of what the BRE is:
it is a tool for processing specific data sets with specific conditions. Not all data and not all business logic can be implemented efficiently with the BRE.
Maybe that is not quite what we expect from a product with such a name. Right now it is hardly a tool for the business analyst; it is a tool for software developers. And it is a great tool!
I'm sure the next version will be much better from a developer's point of view.
 
=================================================
Notes:
* Another simple way to stage the rules is to separate them into different policies and chain those policies. That approach did not work well in my case because I mostly had to manage the order of individual rules, not stages, which would have meant packing one rule per policy. Each policy call creates a persistence point in the orchestration. Maybe it would not have hurt performance too badly in my case, but I didn't try it.
 
* We can force the caching of .NET objects, but only programmatically; the Business Rule Composer does not expose this option.

 

Upcoming Presentations

I've got several talks coming up in the next month; if you're interested in learning more about any of these subjects, I encourage you to check them out.

  • Scott Colestock's BizTalk Deployment Framework – January 10th, 2006 – Dallas BizTalk User Group
    This talk will be an overview of Scott Colestock's excellent BizTalk Deployment Framework, a tool for automating BizTalk deployments built on NAnt. If you're working with BizTalk and don't have this tool, you're simply working too hard.

  • Black Belt XML – January 11th, 2006 – Little Rock .NET User Group
    This will be the same talk I recently presented to the Dallas .NET User Group, but this time for the great folks of Little Rock, AR.

Macros for FILE send handler

Here is a list of macros for the FILE send handler that I found on Jan Tielens' Bloggings.
This is something I find everyone googling for quite often, so I thought I'd put it up here for quick reference.

For the details of each of the macros, please see Jan Tielens' Bloggings

%datetime%
%datetime_bts2000%
%datetime.tz%
%DestinationParty%
%DestinationPartyID%
%DestinationPartyQualifier%
%MessageID%
%SourceFileName%
%SourceParty%
%SourcePartyID%
%SourcePartyQualifier%
%time%
%time.tz%
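To see how these macros behave, here is a small illustrative sketch of macro expansion in a file-name pattern. This only mimics what the FILE adapter does; it is not BizTalk code, and the context values are made up.

```python
import re
import uuid

# Illustrative sketch: expand %Macro% tokens in a FILE send handler
# file-name pattern. Unknown macros are left untouched, as a reminder
# that only the documented macro names are expanded.
def expand_macros(pattern, context):
    return re.sub(r"%(\w+(?:\.\w+)?)%",
                  lambda m: context.get(m.group(1), m.group(0)),
                  pattern)

context = {
    "MessageID": str(uuid.uuid4()),
    "SourceFileName": "invoice42.xml",
}

print(expand_macros("%SourceFileName%", context))    # -> invoice42.xml
print(expand_macros("%MessageID%.xml", context))     # -> <guid>.xml
```

For example, a send port URI of `C:\Out\%MessageID%.xml` yields one uniquely named file per message.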

BizTalk 2006 R2 WCF Adapter First Look Video

Ready to take a look at some of the new features of BizTalk 2006 R2? 

I have put together a short 13 minute video walking through how simple it is to expose BizTalk Orchestrations as WCF Services and then consume them inside a Windows Application.

This video shows how to use the BasicHttp binding and the WCF Wizard. 

This is intended to give you a first look into some of the new features in BizTalk 2006 R2.

Take a look at the video today!  Available via live play or WMV download.  Source code is also available, although without R2 installed you will not be able to run the sample.

Please note that anything you see is subject to change in later releases of BizTalk.

BizTalk 2006 R2 Windows Communication Foundation Adapter Video

 

I have put together a short video that walks through how simple it is to expose a simple Orchestration as a Windows Communication Foundation (WCF) service.

This video uses the BasicHttp binding and the WCF Wizard. 

This is intended to give you a quick look into some of the new features in BizTalk 2006 R2 relating to WCF.

Take a look at the video today! 

I have made the video available through Flash live play or WMV download.  The source code is available for download as well, but without BizTalk 2006 R2 installed you will not be able to run the sample.

Please note that anything you see is subject to change in later releases of BizTalk.

Enterprise Service Bus (ESB) for Partners

The term Enterprise Service Bus (ESB) is widely used in the context of implementing the messaging capabilities of a service oriented infrastructure. An ESB is one of many building blocks that make up a comprehensive service oriented infrastructure.  Microsoft provides a comprehensive ESB offering through its large network of partners. 


You can find a list of the partners in the early access program on the Microsoft SOA/ESB Web site:  http://www.microsoft.com/biztalk/solutions/soa/esb.mspx


Regards,


Marjan