Untyped Documents Using Promoted Properties

Untyped messages, declared as System.Xml.XmlDocument, allow many different types of messages to be received by the same Orchestration. This is an update to the untyped messages sample referenced below in the Related Items section; it uses promoted properties to access data inside the untyped XML messages.
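The idea can be sketched outside BizTalk as well. Below is a minimal Python analogy (not XLANG or the BizTalk API): one generic routine reads a value out of an arbitrary XML document by path, much as a promoted property exposes data from an untyped message. The element names are hypothetical.

```python
# Python analogy only, not BizTalk code: pull a single value out of an
# arbitrary ("untyped") XML document by path, the way a promoted property
# surfaces data without a typed schema class.
import xml.etree.ElementTree as ET

def get_promoted_value(raw_xml, path):
    """Return the text at `path` in the document, or None if absent."""
    doc = ET.fromstring(raw_xml)
    node = doc.find(path)
    return node.text if node is not None else None

# two entirely different message shapes handled by the same generic code
order = "<Order><Header><PONumber>1001</PONumber></Header></Order>"
invoice = "<Invoice><Number>INV-7</Number></Invoice>"

po = get_promoted_value(order, "Header/PONumber")   # "1001"
inv = get_promoted_value(invoice, "Number")         # "INV-7"
```

The point mirrors the sample: the receiving code never needs a typed message class, only an agreed path (or promoted property) into the document.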

This sample should work for BizTalk 2004 and BizTalk 2006.

Get more information from the original blog post on this topic: https://www.biztalkgurus.com/biztalk_server/biztalk_blogs/b/biztalk/archive/2004/09/17/working-with-untyped-messages-in-an-orchestration-_1320_-part-2.aspx

Performance Counters in BizTalk Server 2004

Have you ever wanted to know exactly what BizTalk Server 2004 is doing at a given instant?  How many Orchestrations are about to dehydrate?  How many persistence points are inside your Orchestration?  Answers to these questions are easily available through BizTalk Performance Counters.

Aside on Persistence Points: Persistence Points are key when designing optimized Orchestrations.  A Persistence Point occurs any time the Orchestration saves its current state back to the MessageBox.  The fewer Persistence Points you have, the better your Orchestration will perform.  Persistence Points are caused by specific shapes inside your Orchestration, such as Send, Parallel Actions, and Transactional Scopes.  More information is available in the help guide under “Persistence”.

How to view the Performance Counters:

1. Go to the Start Menu and Select Run

2. Type perfmon and press Enter

3. The Performance window should open

4. Click on the + inside the window (or Ctrl-I)

5. Under Performance Objects, Select XLANG/s Orchestrations (note your host must be running)

6. Select the counters you want to watch from the list, then press Add.  You can get information on each by clicking Explain.

7. Run your Orchestrations

My Favorite Performance Counters:

Orchestrations resident in memory

Pending messages

Persistence points

Running orchestrations

If you want to know exactly how many Persistence Points you have inside your Orchestration, just run it and watch the counter!

Take Away: Performance Counters in BizTalk 2004 can give you a clear picture of the current status of your server and an idea of how well your Orchestrations will perform.

More information on Performance Counters can be found in the help guide.  Just search for it.

Funky BizTalk Server toys!


Microsoft just made the results of the BizTalk Server contest public.  I immediately started googling, but it appears that Paul Somer’s entry is not downloadable yet… (Please drop me a line if I’m wrong.)  Nevertheless, a few cool toys appeared on GotDotNet in the meantime.  My selection of these:

Please note: I haven’t tested or downloaded all of these.  I do not make any judgement on the quality or availability of the entries above.  My only intention here is to give a short overview of what’s currently available.

Have fun!


Document Normalization


Okay, I have seen this copied on two different blogs, so it is apparently useful enough that others might be interested. I posted this originally on a public discussion alias in response to a question about where mappings should be done.

“While there are actually some performance-related reasons to put your maps in the receive and send ports, there are much better business reasons for doing it outside of your schedule. We tend to refer to mapping in receive and send ports as document normalization. In the case of receive ports, you are normalizing the documents from the format of your customers into an internal standard format. On the outbound side, you are converting out of your normalized format and into the specific format of your trading partner or internal application.

If you embed the map in the schedule and the partner changes the format, not only do you have to rebuild the map, you have to rebuild the schedule to use the new version of the map. Also, what happens when you add a new partner with a new format? That is a new map, and if you have embedded the map in a schedule, it means a new schedule. This is exactly why we added support for multiple maps (one per source message type) on the receive port: so that you could create a single location for all of your partners and easily normalize into your internal standard formats. Putting these types of maps in schedules would be a bad idea.

There are times when it makes sense to use a map in a schedule: when you need to generate a new message in the schedule and use the modified (mapped) contents of an existing message as the base, or when you want to map multiple parts of a message into one outbound message (this type of mapping cannot be done in a receive / send port).

There are sometimes performance gains which come from doing mappings in receive ports, but they are mostly around how many persisted messages your scenario generates, and it is a bit complicated to explain. The actual mapping technology is the same. To keep your internal business logic from getting tightly coupled with the document formats of your trading partners, you should do your document normalization (mapping) in the send and receive ports.”

The key takeaway from this post is that it is important not to tie your business logic to the format of one trading partner. Performance aside (for those of you who attended my perf talk at TechEd, yes, there are some perf benefits to doing this in the ports), the goal of this design is to make your system more robust and able to change as your business grows and adds new partners, and also to allow you to react more easily to changes in your partners’ data formats. Brandon Gross comments (in Jeff Lynch’s blog) that there are times when the “normalization” is quite complex and it is easier to model this in an orchestration than with our support in the mapper. It is true that there are cases when you simply have no choice but to do the mapping / data conversions in an orchestration, and in those cases, that is what you do. But in general, the best practice I am pushing forward here is a decoupling of your business logic from your partners’ data formats, and so a more robust system.
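The receive-port pattern described in the quote (one map per source message type, all normalizing into one internal canonical format) can be sketched in plain Python. The formats and field names here are invented for illustration; real BizTalk maps are XSLT under the covers:

```python
# Toy sketch of "document normalization" in a receive port: each partner
# format gets its own map into one canonical shape, so business logic never
# sees a partner-specific document. All names here are hypothetical.

def map_partner_a(msg):   # partner A's field names -> canonical
    return {"po_number": msg["PONum"], "amount": msg["Amt"]}

def map_partner_b(msg):   # partner B's field names -> canonical
    return {"po_number": msg["OrderId"], "amount": msg["Total"]}

# the "receive port": one map per source message type
MAPS = {
    "PartnerA.Order": map_partner_a,
    "PartnerB.Order": map_partner_b,
}

def normalize(message_type, msg):
    """Pick the map by message type and produce the canonical document."""
    return MAPS[message_type](msg)

canonical = normalize("PartnerA.Order", {"PONum": "1001", "Amt": 250})
```

Adding a new partner means adding one map to the table; the downstream "schedule" logic consuming the canonical shape never changes, which is exactly the decoupling argued for above.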

Hope this helps




Okay, I have been negligent and am trying to remedy this. I have a couple of papers I am working on which will get posted here very shortly. Hopefully one will be here by the end of the week. I am also busy at work on BTS stuff trying to make your life easier (I hope). Finally, I was away visiting some customers and getting some good real-world experience. I have actually visited lots of customers, but it is always good to stay knowledgeable, because it is too easy to get caught up in the glass ball world over here and forget about everyone who really uses this stuff and the problems they face. Never want to have that happen.

So first off, I just read a write-up from Charles Young (who I have now added to my links) on how the subscription routing mechanisms work. Very impressive stuff. He even nailed down some of the sequence of stored procedures being called. I think it is maybe a bit more technical than a lot of you need, but hey, if you are reading my blog, you must be into this kind of torture, so I would check it out. Good stuff.

Now let’s pick a topic to chat about. I have seen some of the other blogs out there and, well, I am sorry mine is not formatted so nicely. Basically you get me at 11:30 pm feeling like I need to shed some insight and just trying to brain dump. So I guess you will bear with me. How about debugging routing failures? Not exactly rocket science to the BTS experts, but probably a useful topic.

What is a routing failure: Routing failures occur when messages are published into the MessageBox, but no service is found to have expressed interest in the message (i.e. the properties of the message do not match any subscriptions).
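As a rough mental model (a toy, not BizTalk’s actual internals), the check behaves something like this: the message’s promoted properties are compared against each subscription’s predicates, and if nothing matches, a routing failure report preserving the context is produced. All names below are made up:

```python
# Toy publish/subscribe model of routing failures. Subscriptions are
# property predicates; a message routes to every subscriber whose
# predicates all match its promoted properties.
subscriptions = {
    "SendPortA": {"MessageType": "PO", "Priority": "High"},
    "OrchB":     {"MessageType": "Invoice"},
}

def publish(promoted_props):
    matches = [name for name, wanted in subscriptions.items()
               if all(promoted_props.get(k) == v for k, v in wanted.items())]
    if not matches:
        # analogous to the routing failure report: a dummy message with no
        # parts, but carrying the context at the moment routing failed
        return ("routing failure report", dict(promoted_props))
    return ("delivered", matches)

publish({"MessageType": "PO", "Priority": "High"})  # routes to SendPortA
publish({"MessageType": "PO"})  # Priority never promoted -> routing failure
```

The second call is the classic debugging case described below: one missing promoted property, or one predicate value slightly off, and nothing subscribes.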

So how do I debug this? — When a routing failure occurs, the message agent (see my last post) generates something called a routing failure report. A routing failure report is nothing more than a dummy instance which holds a reference to a dummy message, which has no parts, but which has the context of the message which failed routing at the time it failed routing. We capture the data like this because some adapters do not suspend the message, and even when the message is suspended, the context of the suspended message is often different from the context at the time of the routing failure. So what can you do with this? Well, really there are only a couple of times routing failures should occur in your system. The first is as you are developing and testing your solution (please test your solution). In these cases you should have a reasonable idea where the message should be going. You can look at the context of the message which failed to route and check which properties were promoted and what their values were. Then, you can use the subscription viewer in the <installdir>\sdk\utilities directory to see what the subscription actually looked like for the send port or orchestration you thought the message should have gone to. Often it is simply that you forgot to promote a property or just got the value in your subscription a bit off. Or you forgot to start the orchestration or send port.

   The second case where this can occur is when you try stressing a system with orchestrations which use correlation and you pass in non-unique correlation sets. Don’t do this. Try to imagine what is happening with these messages. Now a response which was supposed to go to one orchestration gets broadcast to 20 orchestrations. And then the responses for those find the orchestrations completed and so fail routing. Actually, what would really happen is that half of those 19 would get there before the orchestration completed, and you might get 9 zombied orchestrations (see earlier blog) and 10 routing failures. Lesson to be learned: test with real data, and in the real world, correlation sets must be unique.

   The third case where you might get this in some type of stress is also tightly related to zombies (see earlier blog). If you think about it, a zombie is a race condition where the message got routed to the orchestration right before it completed, and so you are “completed with discarded messages”. Well, what would happen if the race were a bit different? In that case, instead of routing just before the orchestration completed, the orchestration would complete just before the routing happened. Then you would get routing failures. In these cases, this is what you designed the system to do, so I guess you decide what to do with the failed message at this point. Read my blog on zombies to get a better idea of when zombies can happen.
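A toy timing model (the numbers and split are illustrative, not measured) of the two failure modes just described: with a non-unique correlation set one response is broadcast to all 20 instances, and each delivery then races that instance’s completion:

```python
# Toy model of the completion-vs-routing race, not real BizTalk behavior.
def deliver(routed_before_completion):
    if routed_before_completion:
        # routing won the race: instance completes with a discarded message
        return "zombie"
    # the instance completed first: nothing subscribes anymore
    return "routing failure"

# one response broadcast to 20 instances; suppose half win the race each way
outcomes = [deliver(i % 2 == 0) for i in range(20)]
zombies = outcomes.count("zombie")
failures = outcomes.count("routing failure")
```

Either way the broadcast was wrong to begin with, which is the real lesson: keep correlation sets unique.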


Sorry this was brief, but it is midnight and I’m tired. Like I said, if things work out correctly, I should have a really good blog coming shortly. Just have to get the signoff from Woodgate and a couple of other people. 🙂


Reply to Todd’s comments on the Transactional .NET Adapter


Todd Sussman posted a few comments/requests on my Transactional .NET Adapter.  One thing I have been asked several times already is whether it would be hard to make it work in a request/response way.  Now, to be honest, I did consider this when implementing it.  A few random thoughts:

  • Since the adapter is transactional, and since .NET Remoting does not support transactions, the code called by the adapter will always run in-process in the same AppDomain.  (I would need to do really funky things to make it work otherwise.)
  • I would advise against doing too much work in the component called by the adapter; remember, there’s a transaction in progress!  Ideally the component would access some queue, database or other transactional backend system.  Don’t start any actions that take a long time and could risk the transaction timing out.
  • Don’t start any new threads in your component unless you don’t really need the transaction.  Any new threads would not retain the transaction context…  (Whidbey would do a better job here.)
  • I decided not to promote request/response too much since I felt that would raise the risk of blocking the worker thread too long.  I was convinced that people needing this functionality would be better off using another transport to correlate response messages with their requests.  For example: dropping messages from within your .NET component back on a queue which is asynchronously read by the MQSeries or MSMQ adapter…
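The thread warning above can be illustrated by analogy in Python: state kept in thread-local storage (which is effectively how an ambient transaction context behaves in .NET 1.x) does not flow to manually created threads:

```python
# Python analogy of the "new threads lose the transaction" point above.
# A transaction context is conceptually thread-affine state; a freshly
# spawned thread starts with empty thread-local storage.
import threading

ctx = threading.local()

def do_work(results):
    # the new thread sees no transaction at all
    results.append(getattr(ctx, "transaction", None))

ctx.transaction = "TXN-42"   # "enlisted" on the adapter's worker thread
results = []
t = threading.Thread(target=do_work, args=(results,))
t.start()
t.join()
# results == [None]: the spawned thread did not inherit the context,
# while the original thread still holds "TXN-42"
```

Any database or queue work done on such a thread would therefore run outside the adapter’s transaction, which is exactly why spawning threads in the called component is discouraged.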

Was I wrong?  Probably… I’d like to hear any comments you have on this.  If you feel you need request/response on the transactional .NET adapter in a scenario, please let me know!

Note to myself: a few other enhancements that I could make:

  • load each custom client assembly in a separate AppDomain; this would allow for:
    • separate security settings for each assembly
    • a configurable .NET config file for each assembly
    • unloading of the AppDomain, which would avoid needing to restart the BizTalk service to release an assembly handle
  • the request/response I just discussed
  • providing the user with multiple interfaces so they can choose to receive an XmlDocument, an XmlReader or just the plain byte stream

BizTalk 2004 "Deploy with Nant" Template Revamp


(Update: See the latest on the Deployment Framework here.)

It has been several months since I initially posted on the topic of using NAnt to coordinate the deployment of BizTalk 2004 solutions, with updates here and here.  Since that time, I’ve been heads-down in a BizTalk project and have had a chance to refine the practice quite a bit.

So, I took the opportunity to introduce quite a few improvements into the sample I had previously released – hopefully turning it into a “template” that can be used on your projects more easily.  To that end, I’ve created a GotDotNet workspace where you can download the current version of the build template, and I invite any feedback/suggestions/participation.

The major changes I’ve introduced can be categorized as follows:

  • Split binding files: On a BizTalk project with multiple developers, I believe it is
    beneficial to split your binding files into content related to orchestrations vs.
    content relating to “everything else” (send ports, receive ports, etc.)  The
    new build template sample does just that.
  • Improved scripts: Many of the WMI-related scripts I relied on in the past were from BizTalk SDK samples.  These were modified to get better/different error handling.
  • More generic handling of send ports, receive ports, receive locations, etc. in the script – though it still requires some maintenance.
  • Better NAnt citizen: I’m still not a NAnt expert by any stretch, but thanks to Duncan I realized that I had been remiss in applying quotes to a good deal of the text names I was using (i.e. &quot;).  In addition, I started using built-in NAnt tasks for at least a few items where I had overlapped functionality, though not in cases where I needed slightly different behavior.  Finally, the build file naming convention proposed by the build template is “BizTalkSample.sln.deploy.build” (rather than “BizTalkSample.sln.build”) to avoid conflicting with a NAnt file that would actually be used to build the project (e.g. with CruiseControl), as opposed to deploying it.
  • Use of “Deploy with NAnt” in an MSI scenario: This is the biggest (and I believe
    most useful) change…Let me explain further.

As you study the build script, you will notice that it now also tackles such tasks
as deploying virtual directories and applying permissions to them, patching binding
files to match “local conditions”, etc. The initial goal here is to ensure
that in a team development scenario, the amount of “out of band” setup
required for any given developer to establish their BizTalk 2004 project environment
is minimal – and that the current topology for the project is communicated
efficiently (i.e. through the build file, as opposed to email or word-of-mouth.)

After a development team has spent weeks or months with this build script, refining
it to represent their exact situation and deployment needs, a question might arise:
Why not take this well-tested script and use it for production (or just non-development-machine)
deployments as well?

To do this, it is helpful to agree that a two-phase deployment of BizTalk-related
projects is both acceptable and useful. The first phase is an MSI-based installation
that simply installs all of the BizTalk-related assets in a specific directory, but
doesn’t deploy them to the BizTalk server. The second phase occurs when the
user goes to the Start-Programs-YourProjectName menu, and chooses “Deploy YourProjectName”.
The user also has the option to un-deploy (leaving the assets still on the file system
until the MSI is uninstalled) or redeploy (to support the case where a single file
is “patched”.)  See a picture of the
Start menu created by the build template’s MSI file.

The reason that two phases are useful is that if the deployment to BizTalk fails,
we would like a complete log of the results (which in this sample occurs in the DeployResults
directory) for analysis/diagnostics, and we would rather not have the MSI simply roll
back. The same holds true of un-deployment.

Dual-purposing the NAnt file you have been maintaining for “real” deployments
can be summarized as follows:

  • Use the build template’s approach (see BizTalkSample.sln.deploy.build in the
    download) for the “server.deploy” and “server.undeploy” targets,
    which make the build file able to work with “co-located” binaries in addition
    to the standard developer tree.
  • Create a Visual Studio Setup project that deploys your BizTalk assemblies according
    to the pattern shown in the BizTalkSample.Setup project. Notice that this setup project
    includes the NAnt build file, the “DeployTools” directory (with our scripts),
    as well as a subset of NAnt in a subdirectory of our installation (so that we don’t
    rely on NAnt being present on the target server.)  It also includes a few convenience
    batch files referenced by our Start menu entries for invoking NAnt with specific targets.
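As a purely hypothetical sketch (the names, paths, and scripts here are illustrative, not the actual contents of the download), a co-located “server.deploy” target might take a shape along these lines:

```xml
<!-- Hypothetical sketch only: not the template's real build file.
     Illustrates the shape of a target that works against co-located
     binaries and logs to a DeployResults directory for diagnostics. -->
<target name="server.deploy" description="Deploy co-located binaries to BizTalk">
  <!-- resolve binaries relative to the MSI install directory
       instead of the developer source tree -->
  <property name="binaries.dir" value="${project::get-base-directory()}" />
  <!-- capture full output for analysis rather than letting an MSI roll back -->
  <exec program="cscript.exe" output="DeployResults\deploy.log">
    <arg value="DeployTools\DeployOrchestration.vbs" />
    <arg value="${binaries.dir}\BizTalkSample.Orchestrations.dll" />
  </exec>
</target>
```

The convenience batch files on the Start menu would then simply invoke NAnt with `-buildfile:BizTalkSample.sln.deploy.build server.deploy` (or the undeploy/redeploy targets).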

The BizTalkSample.Setup project in the build file can be built, installed, and deployed
to try this out.  (You will want to make sure you haven’t already deployed in
“developer” mode, because the “Deploy BizTalk Deployment Sample” passes a flag to
the NAnt script that will skip the undeploy phase in order to speed up “clean” installations.)  Note
that as your deployment grows more sophisticated – multiple hosts, servers, etc. –
you will find the NAnt script requires additional properties to govern these variations
between server deployments and developer-desktop deployments.

The build template sample also discusses the use of NUnit as a post-deployment verification
process.  See the documentation in
the sample for a full discussion — basically all we are doing is making sure
our MSI includes a subset of our unit tests, NUnit, and the appropriate test files. 
This is then wired up to the “Verify Deployment” Start menu option.  This can
be an extremely effective way of ensuring your operations team has a means of testing
the system interactively (not to replace standard automated heartbeat tests you might have in place).

Peter Provost proposed
a great generic approach to building BizTalk NAnt files a couple of weeks ago – and
I shamelessly stole his approach to reverse the “orchestration ordering” property
(very slick!)  If you do not wish to pursue the MSI option just discussed, and
you do not wish to integrate the “out of band” setup actions discussed above (i.e.
virtual directory creation, etc.), you might prefer his approach, or you might want
to push the build template sample provided here in that direction (i.e. more property-driven.)
However, I believe tackling the kinds of concerns that we do here will
generally mean that a real-world project will no longer have a generic-looking NAnt file.

Again, you can download the build template sample here.
Enjoy, and I hope you find it useful.