The BizTalk SQL Server Adapter isolation level

The isolation level is fixed at SERIALIZABLE. So beware: even your most basic receive locations, executing nothing more than a simple SQL statement such as 'select * from tablename', can generate locks. This touches on a common misconception: people often assume that select statements never lock resources, which is of course not true.

 

Under SERIALIZABLE, a shared lock is held not only on the keys that were read but also on the range between and around them, so no new records can be inserted into that range for the duration of the transaction.
Here’s a sample to demonstrate the effects of the SERIALIZABLE isolation setting:

 

Open SQL Query Analyzer
Open two separate query windows against the local Pubs database
Copy and paste these sample statements:

 

Window A:

 

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
Select * from authors where contract = 1
GO

 

Window B:

 

BEGIN TRAN
INSERT INTO [pubs].[dbo].[authors]([au_id], [au_lname], [au_fname], [contract])
VALUES('666-66-6666', 'Grego', 'El', 1)
GO

 

Now, if you execute both batches you will see that the second one is always blocked by the first, irrespective of the order. You can immediately unblock the waiting process by typing, selecting and executing 'ROLLBACK TRAN' in the blocking transaction's window. Now repeat this test with 'SERIALIZABLE' replaced by 'READ COMMITTED' and you will see that after starting batch A, you can still execute batch B simultaneously (which was not the case at the SERIALIZABLE level).

 

Here are the IsolationLevel enumeration values from System.Data:

 

[Flags]
public enum IsolationLevel
{
      // Fields
      Chaos = 0x10,
      ReadCommitted = 0x1000,
      ReadUncommitted = 0x100,
      RepeatableRead = 0x10000,
      Serializable = 0x100000,
      Unspecified = -1
}
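
For comparison, here is a minimal sketch of how any ADO.NET client would request this same level; this is illustrative only (the connection string is a placeholder), not the adapter's actual code:

using System.Data;
using System.Data.SqlClient;

class SerializableDemo
{
      static void Main()
      {
            // Placeholder connection string; point it at your own Pubs database.
            SqlConnection conn = new SqlConnection(
                  "Server=.;Database=pubs;Integrated Security=SSPI");
            conn.Open();

            // 0x100000 == IsolationLevel.Serializable
            SqlTransaction tran = conn.BeginTransaction(IsolationLevel.Serializable);

            SqlCommand cmd = new SqlCommand(
                  "select * from authors where contract = 1", conn, tran);
            SqlDataReader reader = cmd.ExecuteReader();
            while (reader.Read()) { /* key-range locks are now held */ }
            reader.Close();

            // The locks are released when the transaction ends.
            tran.Commit();
            conn.Close();
      }
}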

 

The BizTalk SQL adapter always uses 0x100000 (Serializable). You can check this by viewing the requested locks in Enterprise Manager.
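
If you prefer Query Analyzer over Enterprise Manager, sp_lock shows the same information. Run it while the Window A transaction from the sample above is still open and look for the key-range lock modes:

-- Key-range shared locks taken by SERIALIZABLE show up with Mode = RangeS-S
EXEC sp_lock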

 


 

Now, I believe the above will rarely be a problem in real life. You should only expect performance problems when you have many transactions and many simultaneous lock requests for the same heavily contended resource, combined with a bad database design (no indexes, or the wrong ones). You should also know that lock waits are perfectly normal: simply waiting for a lock is different from a deadlock. The waiting process will still get the lock once the process holding it completes.
If you are stuck with SERIALIZABLE, my best advice is to tune your SQL statements for performance (correct database design, normalization, the right indexes, …) so that your select statement executes, and therefore holds its locks, as briefly as possible.
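
To illustrate the indexing point with the Pubs sample above: an index on the column in the where clause lets SQL Server take key-range locks on a narrow index range instead of range-locking a whole scan. This is only a sketch (the index name is made up, and whether the optimizer actually uses an index on such a low-selectivity column is another matter):

-- Hypothetical index for the demo query
CREATE INDEX IX_authors_contract ON authors (contract)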

 

**UPDATED**

 

Is there a solution?



  • If you add 'SET TRANSACTION ISOLATION LEVEL READ COMMITTED' to a select-statement based receive location, you may have no issues at design time. But when you try to add a receive location based on this SQL statement in BizTalk Explorer, you will get a 'The SQL statement must be either a select or an exec' error.



  • You can add 'with (READCOMMITTED)' to the tables in the select or data modification statement: 'select * from authors with (READCOMMITTED) where contract = 1'. This table hint overrides the default SERIALIZABLE isolation level and keeps the number of locked records to a minimum. Credits go to Dirk Gubbels from Microsoft…
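
To see the difference, rerun the two-window test from above with the hint in place; Window B (unchanged) should no longer block:

Window A:

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
select * from authors with (READCOMMITTED) where contract = 1
GO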

SQL Server locking experts’ comments are very welcome…

Atomic Scope "Batch" property…

Charles Young has a great post on transactions within orchestrations.  In it he mentions the Batch property of the ATOMIC scope.  I concur with him that the documentation on this is very light, as is any information out in the community.  After doing a little testing on my current project with Stephen Thomas (be sure to check out his blog), it has become apparent that the property acts just as Charles suspects.


Our particular scenario –


We had an orchestration processing a large batch of messages.  At one point in the orchestration, we were calling a component to insert the records into the database.  While that orchestration was processing the large batch, we sent several very small batches of messages that quickly reached the same point in their instances of the orchestration (inserting the records through the component).  It turned out that the smaller batches would not complete processing their messages until the large batch completed.


After we changed the property to False (the property for some reason defaults to True, which seems a bit dangerous), redeployed and reran the test, the much smaller batches completed ahead of the slower, larger batch of messages (our originally expected outcome).


So, keep in mind when using the Atomic transaction that this property defaults to True, and that it may have ill effects on your message processing.


Cheers!!

About publisher policy assembly chaining

I’ve just found out the hard way that GAC’ed publisher policy assemblies do not chain. Suppose you have a policy.1.0 assembly redirecting the binding of assembly v1.0.0.0 to v1.1.0.0, and also a policy.1.1 assembly redirecting the binding of v1.1.0.0 to v1.2.0.0. When your app requests 1.0.0.0, this does not result in a binding to 1.2.0.0 but to 1.1.0.0: only the policy matching the originally requested version is applied. If you want that kind of chained binding behavior, I guess you will have to use publisher policy versioning (republish policy.1.0 so that it redirects straight to 1.2.0.0).
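
For reference, this is roughly what the policy configuration looks like; the assembly name and publicKeyToken are placeholders, and each policy.<major>.<minor> file is compiled into a policy assembly with al.exe and installed in the GAC:

<!-- policy.1.0.MyLib.config: consulted only when version 1.0.0.0 is requested -->
<configuration>
      <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
                  <dependentAssembly>
                        <assemblyIdentity name="MyLib" publicKeyToken="0123456789abcdef" />
                        <!-- Fusion stops after this redirect; the policy.1.1
                             redirect (1.1.0.0 to 1.2.0.0) is NOT applied on top -->
                        <bindingRedirect oldVersion="1.0.0.0" newVersion="1.1.0.0" />
                  </dependentAssembly>
            </assemblyBinding>
      </runtime>
</configuration>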
Why am I telling you all this? Well, I’ve never read anything about this before, so I thought it would be nice to mention it here.
I also want to thank Alan Shi, who confirmed and explained this binding behavior to me. For more excellent information regarding the GAC and Fusion you should definitely visit his blog here.

XLANGs.BTEngine.BTXTimerMessages Delivered, Not Consumed

What?  How?  When?  Why?  Useless?  You have no clue what I am talking about?



What are BTXTimerMessages?


They are messages BizTalk uses internally to control timers.  This includes the delay shape and scope shapes with timeouts. 



How will you see BTXTimerMessages?


You will see these messages in HAT.  They will be associated with running Orchestrations.  If they show up, they will be in the Delivered, Not Consumed (zombie) message status.



When will you see BTXTimerMessages?


I see these messages when I am working with parallel actions, atomic scopes, and convoys.  My sample, Limit Running Orchestrations, produces these types of messages.  I have not been able to pinpoint the exact shapes or actions that cause these messages to show up.



Why do you see BTXTimerMessages?


Good question.  I have a theory, but it is probably wrong.  It is that the timer message is returned to the Orchestration after it has passed the point of the delay or scope shape.  Thus, it is never consumed by the Orchestration.



Are these messages useless?


I think so.  I always ignore them.  They will go away when the Orchestration completes or is terminated.  These do not seem to act like normal zombies in that they do not cause the Orchestration to Suspend.



Ok, so you have no idea what I am talking about?


Let me fill you in.  BTXTimerMessages in the Delivered, Not Consumed status are sometimes seen in HAT when working with timers.  I have not really determined why they happen, but I suspect they should not show up in HAT at all.  I do not think they hurt anything, and I pay little attention to them.  When the Orchestration finally ends or is terminated, these messages simply go away.  They are annoying, and in long-running transactions they can start to stack up in HAT.  Be warned, though: if you try to terminate these messages, they will take down the running Orchestration with them.

Rules Engine and A4Swift gotcha…

I’ve been using the A4Swift (v2.1) accelerator a lot recently, which I have to say is extremely good. I thought it worth sharing a tip that has caused me some pain on two occasions recently! Depending on your scenario, the issue can cause a significant performance degradation on the receive side: CPU utilization is pegged at close to 100%, but the inbound processing rate is very slow.

 

The issue is caused by some of the QFEs for the Rule Engine, and possibly even the A4Swift accelerator itself; to be honest I don’t want to waste time digging through the dependencies to find which QFE causes it. The bottom line is that the following registry key needs to be checked after applying any of the QFEs for the Rule Engine and, to be safe, the A4Swift accelerator:

 

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\BusinessRules\3.0\CacheEntries = 512

 

A couple of the QFEs that I have applied reset CacheEntries back to 32, which is the default for the Rule Engine. The problem is that the A4Swift accelerator uses a large number of rules, so the cache size needs to be bumped up to 512; otherwise the Rule Engine will spend all of its time loading rules. So the bottom line is to check this registry key if you are using the Rule Engine.
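
If you prefer to fix it from a command prompt, something like the following should do; this assumes CacheEntries is a REG_DWORD value, so verify the value type on your machine first:

reg add "HKLM\SOFTWARE\Microsoft\BusinessRules\3.0" /v CacheEntries /t REG_DWORD /d 512 /f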

 

Thanks!

The BizTalk Configuration Dilemma

I’ve watched Jurgen Willis’ excellent online presentation about the Business Rule Engine (BRE) several times. One of the BRE usage scenarios he demos contains a sample orchestration that uses the BRE to dynamically configure a delay time.
This confused me, since it adds yet another option for accomplishing dynamic configuration of business processes. We are definitely facing a configuration dilemma now; here are some of the alternatives:
  • Config Files
    Use the default .NET config files (BTSNTSvc.exe.config) as the store for your key-value pairs or custom types. You can easily read the settings using the standard .NET classes from inside your orchestrations (see the sketch after this list).
    This is definitely the easiest option. But it makes your business processes host-instance dependent (every host instance can be configured differently). It’s also not easily deployable: with different environments you will have to copy your configuration sections manually, and you could easily make mistakes. And as far as I know there are no tools available for business users to manage the values.
  • Business Rule Engine
    Although I have the feeling the BRE and its terminology are not really geared towards this simple functionality (storing key-value pairs). Most of the samples included with BizTalk use schema facts, some use class facts, but none addresses the configuration management purpose that was demoed in the presentation.
    I’ve tested a couple of things myself, including calling the BRE from code inside an orchestration using several StringBuilder instances or a Hashtable as the argument(s). This seemed a very strange solution to me (it’s not easy to define the rules/vocabularies when several instances of the same class are involved). Another option is to create a custom configuration class that gets and sets the values, which simplifies the vocabulary. Or you could always take the classic approach and create a custom schema to hold your configuration values.
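
For the config-file option, reading a value from the host instance’s BTSNTSvc.exe.config boils down to a plain AppSettings lookup. A minimal sketch (the helper class and key name are hypothetical); the static method can be called from an Expression shape:

using System.Configuration;

public class OrchestrationConfig
{
      // Reads a key-value pair from BTSNTSvc.exe.config, e.g.
      // <appSettings><add key="DelaySeconds" value="30" /></appSettings>
      public static string Read(string key)
      {
            return ConfigurationSettings.AppSettings[key];
      }
}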
Finally I emailed Jurgen, who appeared to be a very friendly and helpful man. He pointed out to me that the BRE is in fact not specifically targeted at this scenario and that, in general, it focuses more on complex types than on value types (especially when multiple instances are evaluated in the same policy).

I’m still alive :-)

Due to several reasons (priority shifts…), it’s been a while since I last blogged here.  To eliminate all doubts: I’m still alive and still open to all BizTalk/XML/SOA related questions and discussions 🙂

I’m also glad to announce that tomorrow I’ll be joining Microsoft in a product technical presales role.  It remains to be seen how much spare time I’ll have to dedicate to this blog 😉

Just one more thing I’d like to share here: if you’re into the enterprise space and love the integration solutions Microsoft is providing, make sure to check out MIIS (Microsoft Identity Integration Server) as well! 


How to Name Output Files Inside An Orchestration

In many cases it is useful to know the exact name of the output file that will be sent from your Orchestration using the File Adapter.  This is difficult if you are using the %MessageId%.xml macro to write the file, since it is set after the message is sent from the Orchestration.



Delivery Notification can help you determine whether your message was sent successfully, but it cannot give you the file name.



BizTalk 2004 has two ways to dynamically name your files from inside the Orchestration: use a Dynamic Send Port, or use the %SourceFileName% macro on the Send Port.



Dynamic Send Port


Dynamic Send Ports are powerful and useful if you need to send your files to many different locations on the file system, like sometimes to C:\data\ and other times to C:\root\.  The downside is that you need to have all this information inside your message or hard-code it in the Orchestration.  So, it can be difficult to change.
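
For completeness, configuring a dynamic send port from a Message Assignment shape looks roughly like this; the port name, address and file name are illustrative only:

// Message Assignment shape; DynamicSendPort is a port configured
// with binding type Dynamic in the Orchestration
OutMessage = InMessage;
DynamicSendPort(Microsoft.XLANGs.BaseTypes.Address) = "file://C:\\data\\Output.xml";
DynamicSendPort(Microsoft.XLANGs.BaseTypes.TransportType) = "FILE";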



Source File Name Macro


This is my preferred approach to output file naming.  It does not require your data or the Orchestration to know anything about the directory you want to write your file to.  It requires using the %SourceFileName% macro to set the output file name inside the Send Port.



Don’t want the same name as your input file, you say?  Now, here is the trick: just change the File.ReceivedFileName property inside the Orchestration to anything you want!  This can be done by creating a new message and changing the context property.  The code inside a Message Assignment shape would look like this:



// Create a new message
OutMessage = InMessage;

// Set the ReceivedFileName context property
OutMessage(FILE.ReceivedFileName) = "SetInOrch.xml";



It is not required to demote this value into your message.  So, this method works even with the Pass Through Send Pipeline, because this context value is used by the File Adapter and not by the pipeline.



CRITICAL: The %SourceFileName% macro does not need an additional extension (like .xml or .txt) after it, unlike the %MessageId% macro.



I have put together a simple sample showing both of these types of file naming.  For information on how to run the samples, please see the Read Me file.



DOWNLOAD: Sample Naming Output Files