The isolation level is fixed: it is always SERIALIZABLE. So be aware that even your most basic receive locations, those that only execute a very simple SQL statement such as 'select * from tablename', can generate locks. By the way, this is a common misconception: people usually think that select statements never lock resources. Nothing could be further from the truth, of course.

 

While a shared lock is held on a range of keys, not only are the keys themselves locked, but no new records can be inserted into that range either.
Here’s a sample to demonstrate the effects of the SERIALIZABLE isolation setting:

 

Open your SQL Query Analyzer
Open two separate query windows against the local Pubs database
Copy & paste these sample statements:

 

Window A:

 

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
Select * from authors where contract = 1
GO

 

Window B:

 

BEGIN TRAN
INSERT INTO [pubs].[dbo].[authors] ([au_id], [au_lname], [au_fname], [contract])
VALUES ('666-66-6666', 'Grego', 'El', 1)
GO

 

Now, if you execute both queries, you will see that the second query is always blocked by the first, irrespective of the order in which you start them. You can unblock it immediately by typing, selecting and executing 'ROLLBACK TRAN' in the blocking transaction's window. Repeat the test with 'SERIALIZABLE' replaced by 'READ COMMITTED' and you will see that after starting batch A you can still execute batch B simultaneously (which was not the case at the SERIALIZABLE level).
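
To see the key-range locks that cause this, open a third query window while Window A's transaction is still open (before you roll it back) and run sp_lock. This is just a quick sketch; Enterprise Manager's Current Activity view shows the same information:

-- Window C: list all locks currently held or requested.
-- Window A's spid will show KEY locks in mode RangeS-S (shared key-range locks)
-- on the index keys it read, in addition to the usual shared and intent locks;
-- these range locks are what keep Window B's insert waiting.
EXEC sp_lock
GO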

 

Here are the IsolationLevel enumeration values from System.Data:

 

[Flags]
public enum IsolationLevel
{
      // Fields
      Chaos = 0x10,
      ReadCommitted = 0x1000,
      ReadUncommitted = 0x100,
      RepeatableRead = 0x10000,
      Serializable = 0x100000,
      Unspecified = -1
}

 

The BizTalk SQL adapter always uses Serializable (0x100000). You can check this by viewing the requested locks in Enterprise Manager:
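
If you prefer Query Analyzer over Enterprise Manager, a rough sketch like the one below shows the same thing. The program_name filter is only an assumption about how the BizTalk host identifies itself to SQL Server; adjust it for your environment, or simply look the spid up with sp_who2:

-- Find the session(s) of the BizTalk host (the filter is an assumption, adjust as needed).
SELECT spid, program_name, loginame, hostname
FROM master.dbo.sysprocesses
WHERE program_name LIKE '%BizTalk%'
GO

-- List the locks held by one of those sessions (replace 53 with the spid found above).
-- While a receive location polls, the Mode column shows the RangeS-S key-range locks
-- taken by the Serializable transaction.
EXEC sp_lock 53
GO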

 


 

Now, I believe the above will rarely be a problem in real life. You should only expect performance problems when you have many transactions and many simultaneous lock requests for the same heavily used resource, combined with a bad database design (no indexes, or the wrong ones). You should also know that lock waits are perfectly normal: simply waiting for a lock is not the same as a deadlock. The waiting process will get the lock anyway once the process that is holding it completes.
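
To see who is actually waiting on whom, instead of guessing, here is a minimal sketch against master..sysprocesses. The blocked column holds the spid of the session that owns the lock being waited for; a plain wait like this resolves itself, whereas a real deadlock is detected by SQL Server, which kills one of the sessions with error 1205:

-- Show sessions that are currently waiting for a lock held by another session.
SELECT spid, blocked, waittime, waitresource, lastwaittype
FROM master.dbo.sysprocesses
WHERE blocked <> 0
GO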
If you use SERIALIZABLE, my best advice is to tune your SQL statements for performance (correct database design, normalization, the right indexes, …) in order to make your select statement execute, and release its locks, as fast as possible.
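
As an illustration only (the index name is made up, and whether the optimizer actually uses it depends on the selectivity of the column): an index that matches the predicate of the polling statement lets SQL Server range-lock just the keys it needs instead of everything it has to scan, and makes the statement finish, and release its locks, sooner.

-- Hypothetical index for the demo query 'select * from authors where contract = 1'.
-- Name and usefulness depend on your own schema and data distribution.
CREATE INDEX IX_authors_contract ON dbo.authors (contract)
GO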

 

**UPDATED**

 

Is there a solution?



  • If you try to add 'SET TRANSACTION ISOLATION LEVEL READ COMMITTED' to the SQL statement of a select-based receive location, you may not see any issues at design time. But when you try to add a receive location based on this SQL statement in BizTalk Explorer, you will get a 'The SQL statement must be either a select or an exec' error.



  • You can add 'WITH (READCOMMITTED)' to the tables in the select or data modification statement: 'Select * from authors with (readcommitted) where contract = 1'. This overrides the default Serializable isolation level for that table and keeps the number of locked records to a minimum (see the sketch below). Credits go to Dirk Gubbels from Microsoft…
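
Here is a sketch of the earlier demo with that hint applied; with the hint in place, the insert in Window B is no longer blocked even though the session-level isolation level is still SERIALIZABLE:

-- Window A, revisited: the table hint overrides the session isolation level
-- for this one table, so no key-range locks are taken on authors.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
SELECT * FROM authors WITH (READCOMMITTED) WHERE contract = 1
GO

-- Verify in Window B that the insert goes through, then clean up:
ROLLBACK TRAN
GO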

SQL Server locking experts’ comments are very welcome…