BizTalk Server Contest Entry

Finally submitted my entry to the BizTalk Server contest.  Curious?

I created a transactional .NET adapter.  This adapter allows you to submit messages, in a transactional manner, to any .NET component that implements a certain interface.  This means that, for example, if you access SQL Server from within your component, every operation on SQL will be in the same transaction as all message box operations.  This guarantees high reliability between the message box and any transactional backend.  If the transaction fails, everything will be rolled back (including all operations on the message box) and the message transmission will be retried.
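As a rough sketch of the delivery semantics (plain Python, not the adapter's actual API; all names here are made up for illustration): backend work and message box operations form one unit of work, and a failure discards the whole unit and retries it.

```python
# Illustrative sketch only: the delivery loop commits backend work and
# message box operations atomically, and retries the whole unit when
# the transaction fails.

class TransactionFailed(Exception):
    pass

def deliver(message, backend_op, messagebox_ops, max_retries=3):
    """Run the backend call and message box operations as one unit of work."""
    for attempt in range(1, max_retries + 1):
        work_log = []          # stands in for the shared (DTC) transaction
        try:
            backend_op(message, work_log)      # e.g. SQL Server work
            messagebox_ops(message, work_log)  # dequeue/ack in the message box
            return work_log                    # commit: all or nothing
        except TransactionFailed:
            # rollback: discard everything done in this attempt, then retry
            continue
    raise RuntimeError("delivery failed after retries")
```

Either everything in `work_log` survives or none of it does, which is the guarantee that keeps the message box and the transactional backend consistent.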

So, instead of accessing your components using an Expression shape inside an orchestration, you might finally consider doing things asynchronously!  By calling .NET components asynchronously using the BizTalk Server 2004 Transactional DOTNET Adapter, you can leverage the retries, backup transport, tracking and BAM features as well.

What’s in the package?

  • MSI installation package
  • .chm documentation files
  • A couple of samples
  • Fully documented source code

To show the adapter framework’s inner workings, I programmed directly against the adapter framework rather than using the SDK adapter base classes.

Now it’s up to you: please send me comments and feedback.  Tell me what’s good and what isn’t.  Tell me what to improve, or how…  But first of all: enjoy it.  It’s there, it’s free, it’s for you!

Have fun with the pet project I’ve spent a couple of months on:

The BizTalk Server 2004 Transactional DOTNET Adapter.

Special thanks to my girlfriend for all the patience she had! 


Property Schema and Promoted Properties In Custom Pipelines

Topics Covered:

– Custom promoted properties inside a pipeline

– Schema Property: “Property schema base”

– Direct binding to the message box


I have spent a lot of time in the past few weeks working with promoted properties through property schemas.  I was using the standard approach of simply setting the promoted properties inside the schema.  This, in turn, would allow the properties to be promoted inside the pipeline with no additional effort on my part.


This led me to working with custom pipelines that promote additional properties that are not inside my message.  Once the custom properties are promoted, I wanted to use them for routing to an Orchestration.


Example: I want everything processed by pipeline X and pipeline Y to promote a property named myCustomProp that says “I belong to group A”.  Then, I want all group A items to be processed by Orchestration 1.  I accomplished this by writing a custom pipeline component that promotes predefined text into myCustomProp.  Then, I used direct message box binding on the Receive Shape of the Orchestration to pick up messages that have this property.


CRITICAL: You will need to change this property inside the Orchestration to something other than “I belong to group A” before you send the message.  Otherwise, when you send it, the Orchestration will start again because the outbound message will match the subscription again.
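As a rough sketch of this routing idea (plain Python, not BizTalk APIs; property and subscription names are made up): a pipeline step writes a value into the message context, a direct-bound subscription matches on that context property, and changing the property before sending prevents the outbound message from re-matching.

```python
# Illustrative sketch only: context-property promotion and subscription matching.

def pipeline_promote(message, prop, value):
    """Promote a predefined value into the message context."""
    message.setdefault("context", {})[prop] = value
    return message

def matches(subscription, message):
    """A subscription here is just a set of required context property values."""
    ctx = message.get("context", {})
    return all(ctx.get(k) == v for k, v in subscription.items())

subscription = {"myCustomProp": "I belong to group A"}
msg = pipeline_promote({"body": "<order/>"}, "myCustomProp", "I belong to group A")
assert matches(subscription, msg)          # routed to the orchestration

# Change the property before sending; otherwise the outbound message would
# match the same subscription and restart the orchestration.
msg["context"]["myCustomProp"] = "processed"
assert not matches(subscription, msg)
```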


The key point here is that I want to set up a filter based on the new property that I promoted inside my custom pipeline.  This property does not exist inside the schema of my received message.  I started getting the following error when I tried to build my Orchestration:


“message data property ‘MyPropertySchema.myCustomProp’ does not exist in messagetype ‘msgStart’”


Ok, so what does this mean? 

BizTalk is helping you out by looking inside your message and telling you ahead of time that your property does not exist.  It is a great feature, but in this case we do not want it enforced.


How do I fix it?

Simple solution… Although it took me hours to figure out. 

There is a property that can be set on each element node inside your property schema.  It is in the Reference section and is named Property Schema Base.


What is this used for?

It is used to specify the origin of the data that will be promoted into that Element in the Property Schema.  I did not realize how important this property was. 


The two values for this property are:

MessageDataPropertyBase (Default) – This means the data used to populate this field will come from the message.


MessageContextPropertyBase – This means the data used to populate this field may or may not exist in the message; i.e. it could be promoted inside a custom pipeline and set to values not found in the message.


Well, it turns out that setting this property to the correct value is the key to building your Orchestration.  Setting it to MessageContextPropertyBase allows the Orchestration to build, since the property will no longer be expected inside the message.
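The build-time behavior can be sketched like this (plain Python, not the actual compiler logic; the schema fields are made up): a property declared with MessageDataPropertyBase must exist in the message schema, while a MessageContextPropertyBase property is exempt from that check.

```python
# Illustrative sketch only: the check the orchestration compiler effectively makes.

def check_subscription(prop_name, prop_base, message_schema_fields):
    if prop_base == "MessageDataPropertyBase" and prop_name not in message_schema_fields:
        return f"message data property '{prop_name}' does not exist in messagetype"
    return "ok"

schema = {"OrderId", "CustomerName"}   # fields of msgStart, made up here

# Fails: the property is declared as message data but is not in the message.
assert "does not exist" in check_subscription(
    "myCustomProp", "MessageDataPropertyBase", schema)

# Builds: context properties are not expected inside the message.
assert check_subscription(
    "myCustomProp", "MessageContextPropertyBase", schema) == "ok"
```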


Take Away: When you are working with Promoted Properties that may or may not be included inside your message make sure you set the Property Schema Base property to MessageContextPropertyBase.

MSMQT Observations…

A lot has been said regarding the MSMQT adapter for BizTalk 2004 already, but below
are a few recent observations that may be of help to you. 

When people ask me what MSMQT is, my short answer goes something like this: “MSMQT is the name of the BizTalk 2004 adapter that implements the MSMQ network protocol directly within BizTalk.  It allows BizTalk 2004 to send/receive MSMQ messages directly, and move messages to/from the MessageBox very quickly – without external (DTC) transaction coordination.  Only private queues are supported for receives.”

Using the MSMQT adapter you can:

  • Use Send Ports to send to public or private queues on remote machines – but the
    queues must be transactional.

  • Use Send Ports to send to local queues, which will be private since that
    is all MSMQT supports.

  • Use Receive Locations to receive from locally defined queues, which will be private since
    that is all MSMQT supports.  These queues can be either transactional or non-transactional. 
    You cannot monitor remote queues (i.e. receive locations can’t reference remote queues.)

You cannot use the System.Messaging MSMQ APIs, the COM MSMQ APIs, the MSMQ C libraries, or the MSMQ MMC console on a BizTalk 2004 machine that is running the MSMQT adapter!  This is because they all rely on the native MSMQ service running.

You can use the aforementioned APIs (though not the MMC console) from remote machines
(that are not running the MSMQT adapter) to put messages in MSMQT-defined queues.

To test MSMQT queues on the BizTalk 2004 server where they are defined, you can:

  1. Create a Receive Port and a File Receive Location that will monitor a particular directory…

  2. Create a static one-way Send Port that specifies MSMQT & the queue you want to
    test as the destination.  In the “Filters” portion of the Send Port configuration,
    define BTS.ReceivePortName = ReceivePortNameFromStep1

Now, when you drop a file in your directory, it will be routed to your queue. 
The receive location could also use HTTP posts, in which case the WFETCH tool from
the IIS Resource Kit (or a similar tool) will let you easily get test messages to
the queue.

Note: If you have existing applications that used to send to public MSMQ queues, but
will now be sending to MSMQT queues, the most common problem is that they are referencing
those queues via the PathName property, rather than the FormatName property (with
‘DIRECT=OS:machine\private$\qname’ syntax.)  This is only a problem for users
of the COM/C-Library APIs – the System.Messaging.MessageQueue class actually allows
format names to be used with the Path property.
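For the COM/C-library case, the fix amounts to building the format name from the machine and queue names.  A tiny helper sketching the string shape (the helper itself is hypothetical, not part of any MSMQ API):

```python
# Illustrative helper only: produce the DIRECT=OS format-name syntax that
# MSMQT requires instead of a plain path name.

def to_direct_format_name(machine, queue):
    # e.g. machine "bts01", queue "orders" -> DIRECT=OS:bts01\private$\orders
    return "DIRECT=OS:%s\\private$\\%s" % (machine, queue)

assert to_direct_format_name("bts01", "orders") == r"DIRECT=OS:bts01\private$\orders"
```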

See this Microsoft Support discussion for lots of additional insight into MSMQT.

Sequential Convoys in BizTalk

I need to start off with a note about the Parallel Sequential Convoy sample that I am working on.  I am still working on it and it will probably be a few weeks before it is ready to be posted.  I have a sample working that does Parallel Sequential processing of a single message type.  I am now trying to get that to work with multiple message types in a single Orchestration, although I think this breaks one of the rules of convoys.  In any case, please check back later for that sample.

In the meantime, here is a sample working with a Sequential Receive Convoy inside an Orchestration.  An initial message starts the Orchestration and tells it how many messages to process before it ends.  Instead of this approach, I could have used a timeout or a fixed, predefined number of messages to tell the process when to end.
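The pattern can be sketched like this (plain Python, not XLANG; the message shapes are made up): the activating message carries a count, and the same instance then consumes that many correlated messages, in order, before completing.

```python
# Illustrative sketch only: a sequential receive convoy driven by a count
# carried in the activating message.

def run_convoy(inbox):
    """inbox is an ordered queue; the first message starts the convoy."""
    start = inbox.pop(0)
    processed = []
    for _ in range(start["count"]):       # the start message says when to stop
        processed.append(inbox.pop(0))    # correlated, ordered receives
    return processed

msgs = [{"count": 2}, {"id": 1}, {"id": 2}, {"id": 3}]
assert run_convoy(msgs) == [{"id": 1}, {"id": 2}]
assert msgs == [{"id": 3}]               # messages beyond the count are untouched
```

Ordered delivery on the receive port is what makes the `pop(0)` analogy hold: the worker messages must arrive at the instance in sequence.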

This sample includes a Test Harness Win Form to run the process.  For set-up instructions, please see the ReadMe.txt.  No executable code is included with the samples; you will need to compile the code yourself. 

Download: Get the sample code here.

Key Take Home Points:

– This approach could be prone to zombies

– The Receive Port must be the same for the Start and Worker messages

– The Receive Port must be marked for ordered delivery

What’s coming next?

– A parallel sequential convoy sample

– A zombie maker “feature”

Take Away: Running the sample is kind of fun and working with convoys is cool.

Envelope Processing on Send and Receive Ports

Have you looked at the SDK example working with Envelopes in BizTalk 2004 for Flat Files and tried to get it to work with XML Documents?  Well, I did and I found it a little trickier than I expected.

Key Take Home Points:

– De-Batching / Envelope Processing for XML Documents must take place on the Receive Port.  I have not been able to get it to work on the Send Port, but I am still trying.

– Properties can be promoted from the Header and demoted to the single messages

– No custom pipelines are required for Receive Port processing as long as the Root Node Name and Namespace are unique throughout your entire deployed solution.

DOWNLOAD: Get the sample here!

Set-up is easy: just unzip the SampleEnvelopes folder and put it on your C: drive.  Then, build and deploy the SampleEnvelopes project.  I use early binding, so the send and receive ports will be created for you. 


To run the sample, drop the start messages named StartFileInbound.xml into c:\SampleEnvelopes\In_Inbound.  You will get 3 messages in the Out_Inbound folder for each start message.  Also, note that the Orchestration will run 3 times.  Plus, I promote and demote Header information to show how it can be passed into the single messages.  If all else fails, read the ReadMe.txt file.
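The splitting-plus-header-demotion behavior can be sketched with plain ElementTree (BizTalk's XML disassembler derives all of this from the envelope schema; the envelope layout below is made up for the example):

```python
# Illustrative sketch only: split an envelope into its body messages,
# copying a header value down into each single message.

import xml.etree.ElementTree as ET

ENVELOPE = """
<Envelope>
  <Header batch="B-1"/>
  <Body><Order id="1"/><Order id="2"/><Order id="3"/></Body>
</Envelope>
"""

def debatch(envelope_xml):
    root = ET.fromstring(envelope_xml)
    batch = root.find("Header").get("batch")   # promoted from the header
    orders = []
    for order in root.find("Body"):
        order.set("batch", batch)              # demoted into each single message
        orders.append(ET.tostring(order, encoding="unicode"))
    return orders

messages = debatch(ENVELOPE)
assert len(messages) == 3                      # three messages per envelope
assert all('batch="B-1"' in m for m in messages)
```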


Take Away: Once you get the hang of XML Envelopes in BizTalk 2004 they can be a powerful tool for easy document splitting.

Envelope Debatching Inside a Pipeline

Working with envelopes in BizTalk Server is always a little tricky.  This sample shows how to use an envelope to de-batch a larger XML message.

Some Key Take Home Points:
– De-Batching / Envelope Processing for XML Documents must take place on the Receive Port.  I have not been able to get it to work on the Send Port, but I am still trying.
– Properties can be promoted from the Header and demoted to the single messages
– No custom pipelines are required for Receive Port processing as long as the Root Node Name and Namespace are unique throughout your entire deployed solution.

Get more information from the original blog post on this topic.

An orchestration pattern that can hog the master message box CPU


We had a customer scenario where executing a complex business process at a rate of 1 request per second quickly drove the CPU utilization of the SQL Server hosting the master message box to a sustained 100%.  Having the master message box’s CPU utilization so high is not desirable because it can:

  • cause SQL connection timeouts, which has the side effect of restarting BizTalk host instances
  • cause SQL jobs to fail, which has the side effect of aged data not being cleaned out of the database
  • degrade performance, since it will take longer to submit messages into the message box (the master message box is responsible for subscription matching)


In this scenario, there were two SQL Servers allocated for BizTalk; one held only the master message box with publication turned off and the other had all of the other BizTalk databases including a secondary message box.  Both SQL Server machines had 8 hyper-threaded 3.0 GHz processors and 8GB of RAM. 


The business process consisted of about four orchestrations chained via messaging, including about two called orchestrations.


In this scenario a new business process is created per order, and only one business process can be running at any one time for a particular {customer, order} pair.  Each order can receive updates, which can interrupt the business process currently handling that request.  An interruption can only happen at certain points in the business process (i.e. the business process is atomic between those points).  So if an update comes in while a business process is handling a request, the update is queued until the business process reaches a point where it can check whether the current instance should terminate and allow the update to start a new business process. 


To accomplish this in the orchestrations we used correlations.  At certain points in an orchestration there is a Listen shape with a Delay of 0 on one branch and a Receive following a correlation on the other branch.  When the orchestration reaches this point, if there is no update, the business process continues until the next interrupt point.
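The interrupt point can be sketched as a non-blocking check (plain Python, not XLANG shapes; the queue of updates stands in for the correlated receive):

```python
# Illustrative sketch only: a Listen with Delay=0 on one branch and a
# correlated Receive on the other reduces to "take a pending update if
# one is already waiting, otherwise continue immediately".

import queue

def at_interrupt_point(updates):
    """Return the pending update, or None to let the business process continue."""
    try:
        return updates.get_nowait()   # Delay=0: don't block waiting for one
    except queue.Empty:
        return None

updates = queue.Queue()
assert at_interrupt_point(updates) is None      # no update: keep processing

updates.put({"order": 42, "action": "cancel"})
assert at_interrupt_point(updates) == {"order": 42, "action": "cancel"}
```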


In the design of the orchestration there are several .NET remoting calls made.  If the remoting call fails then an exception orchestration is “Called”.  Since there are several remoting calls, then there are several of these exception orchestrations called throughout the main orchestration.  The exception orchestration includes logic such that it can post a request to an operator to determine whether or not to terminate the instance or to try again.  Since there is a blocking call waiting for user input, there can be a significant window of time where the orchestration instance is running.  In the meantime an update message could come, which would invalidate this original request.  To accomplish this interruption in the exception handling orchestration, the correlation set was passed in as a parameter to the called orchestration so that it can either wait (listen) for the response from the operator or an update message that interrupts the currently running instance.

BizTalk Behavior

The master message box is responsible for doing subscription matching.  If publication is turned off on the master message box, then it will only do subscription matching and the other message boxes will handle message publication and storage.  The master message box can only be scaled up but not out.  So eventually it can become the limiting factor in how far the message boxes can be scaled. 


The orchestration engine creates the necessary subscriptions when the orchestration is instantiated.  One set of subscriptions it will create include the ones for followed correlations.  The orchestration engine will find all of the points where the correlation is followed and create subscriptions for them.  If a correlation set is passed to a called orchestration then the engine crawls the called orchestration to create those subscriptions as well.  A called orchestration is essentially in-lined code which means that the called orchestration will look like it is part of the orchestration that calls it.


A subscription, in this case, will consist of a message type, an orchestration instance, a port operation, and the properties used in the correlation set.  So the subscriptions for the called orchestrations will actually have the name of the caller orchestration as its orchestration instance name.


By default, when you create a port in the orchestration designer, the first operation under the port defaults to Operation_1.  Unless a developer has an explicit reason for changing this (for example, when exposing an orchestration as a web service the operation name becomes part of the method name), the developer will typically leave the default name.


Since the same exception orchestration is called many times within the main orchestration, with the same correlation set and the same port operation name, identical subscriptions will be created in the master message box.  As a general rule of thumb, the master message box can get overwhelmed when trying to match a message against more than 20 identical subscriptions (I won’t go into the complexities of what happens when a number of subscriptions match a particular message). 

So in this case each request put a lot of strain on the master message box, which had to match the request against the subscription for the activating receive as well as all of the receives in each call to the exception orchestration.  Messages destined for the receive points in the exception orchestration were not common, but those subscriptions were still evaluated since they were all identical.  To alleviate the strain on the matching process we changed the port operation names to be unique.  Since the orchestration engine uses the port operation name as the distinguishing property for a subscription, the subscription for the activating receive is now unique and the master message box doesn’t have to waste CPU cycles figuring out which of the other subscriptions it needs to match against.
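A toy model of why this helps (plain Python, not the actual message box logic; message types and operation names are made up): matching must consider every subscription whose predicate fits the message, so N identical {message type, operation} predicates mean N candidates per message, while unique operation names shrink the candidate set to one.

```python
# Illustrative sketch only: subscription matching cost with identical
# versus unique port operation names.

def candidates(message, subscriptions):
    return [s for s in subscriptions
            if s["type"] == message["type"]
            and s["operation"] == message["operation"]]

msg = {"type": "OrderUpdate", "operation": "Operation_1"}

# Every call to the exception orchestration left the default operation name.
identical = [{"type": "OrderUpdate", "operation": "Operation_1"} for _ in range(20)]
assert len(candidates(msg, identical)) == 20     # all must be examined

# After renaming each port operation uniquely, only one subscription fits.
unique = [{"type": "OrderUpdate", "operation": f"Operation_{i}"} for i in range(20)]
assert len(candidates(msg, unique)) == 1
```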


In the above described scenario, after making this change, the CPU utilization dropped from 100% to about 20%.



Change the port operation name to something unique.  Even if the design doesn’t have a called orchestration with a correlation set passed to it, it is a best practice to change these names in case the orchestrations are repurposed in the future.  It also allows you to more easily find the subscription for a particular port operation in the orchestration.


Lee talks about this scenario in his blog with some more technical detail, under his post “Is there a pub/sub system underneath BizTalk?”

Welcome to my blog

I suppose I should give a little background on myself before I start blogging, to give my posts some context.  Before Microsoft I was designing and developing parts of foreign exchange transaction systems for Reuters.  I started at Microsoft about 7 years ago as a software design engineer on Windows, working on the index server.  I spent much of my time porting the code to 64-bit.  As the year 2000 approached, Microsoft was pushing harder into the Enterprise market with the 2000 series of products (Windows, SQL, BizTalk, Host Integration Server, and Commerce Server) on the horizon.  I was fortunate enough to get a position as an Enterprise Solutions Architect, working closely with customers to design systems built on pre-release bits of Microsoft platforms and applications, helping make them successful on Microsoft products, and feeding our findings back to the product teams.  I have been able to work closely with financial, telecommunications, retail and many other types of companies, architecting solutions to work with our latest applications.  

Now I am part of the Business Process and Integration division, still working closely with customers on design wins and architectures, but now mostly on projects using BizTalk Server 2004.  While working in this position I have been able to experience both sides of our products: getting the customers’ perspectives and understanding the product group’s inner workings.  

In this blog I am planning on sharing some of these experiences and findings.  We sometimes discover that users don’t understand parts of the system, or didn’t know that they had to architect their system in a particular way (for example, many users don’t realize that the master secret server must be clustered for a high availability build out).  I will use this blog to share some of that information as well.


There are some folks from the BizTalk team who are already blogging:

Scott Woodgate

Kevin Smith

Lee (the dude) Graber (BizTalk Core Engine)

Eldar Musayev

They already have some excellent posts and I would recommend looking at them as well.




Working with IF Statements In an Expression Shape

I saw a recent post on Scott’s Blog about what’s valid inside an expression shape. 

It gave a helpful list of expressions that are not allowed inside the Expression Shape.  I was surprised by one of the items in the list: “if”.  I had been using “if” inside an Expression Shape in the Beta, so I wondered if this was no longer possible in the RTM release.  So, I set up a simple sample project to test this out.

First off, you might ask: “Why would you not use the Decision Shape?”

Well, I used “if” inside the Expression Shape for a few reasons:

1.  To make the visual size of the Orchestration smaller

2.  To do quick Boolean checks like if a file exists

3.  It was easier to drop in a quick line of “if” code rather than mess with the huge Decision Shape.  Ok, this point could be debatable…

I have put together a sample testing the use of “if” inside an Expression Shape and a sample showing how “if” cannot be used inside a Message Assignment Shape.

DOWNLOAD: Get the sample here!

Set-up is easy: just unzip the SampleIFStatements folder and put it on your C: drive.  Then, build and deploy the ExpressionIF project.  I use early binding, so the send and receive ports will be created for you.  Then, try to build the MessageAssignmentIF project.  Note the difference in how the two Orchestrations use “if”.  This project should not build and should report an illegal use of “if”.


To run the sample, drop the start messages named Start_False.xml and Start_True.xml into c:\SampleIFStatements\In.  You will get your results in the Out directory.  The Results node will be changed based on the Boolean value.  If all else fails, read the ReadMe.txt file.

Key Take Home Points:

– “if” is allowed inside the Expression Shape

– “if” is not allowed inside the Message Assignment Shape

Take Away: It is possible to use “if” outside of the Decision Shape inside an Expression Shape.  But, it should be used cautiously since it could confuse developers who are looking for the Decision Shape.