Connected Systems User Group

Brandon and I, with generous sponsorship from Microsoft, are founding a user group in the Seattle area that focuses on BizTalk and integration topics across the Microsoft Connected Systems product stack. An abstract is included below, but basically the user group will focus on implementing and supporting BizTalk (and other Microsoft technologies as they apply to integration) across the enterprise. Stay tuned for specific details regarding the inaugural meeting.

Abstract

The Seattle Connected Systems User Group comprises technically minded participants interested in the design, implementation, deployment, or support of integration solutions based on Microsoft enterprise development products and servers within the enterprise. The user group meetings will cover the Microsoft Connected Systems product stack with a focus on integration topics. The Seattle chapter meets 12 times a year, usually on the second Tuesday of the month. Guest speakers will provide insightful presentations on creating integration solutions with Microsoft's BizTalk toolset and other related Microsoft products. Meetings take place at charter sponsor Microsoft's Redmond campus, in Building 35, Room Kalalach.

Do you have expertise in an area of BizTalk Server? If you have an interest in sharing your knowledge with the Seattle community, please contact us today.

Sponsors

The Seattle-based Connected Systems User Group is sponsored by Microsoft's Connected Systems Division Community Lead, James Fort, and will be linked to and supported by the Microsoft PacWest office through Owen Allen.

-Brennan…

Routing Messages to the MQ Dead Letter Queue

I recently had a fun problem to solve for a client of mine: essentially, they wanted to route messages to the MQSeries dead letter queue and have the MQ dead letter queue handler move those messages to the queue defined in the dead letter header.


 


The first problem was getting the message onto the MQ dead letter queue. John and Anil gave some great pointers here: it turns out that to send a message to the dead letter queue, the message needs to be pre-pended with the dead letter header (DLH), and the MQMD_Format property needs to be set to "MQDEAD  " (MQFMT_DEAD_LETTER_HEADER) so that MQ knows to expect the DLH.
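For reference, both of these MQ values are fixed-length, blank-padded strings. Declared as C# constants (using the standard MQ constant names), they look something like this:

private const string MQDLH_STRUC_ID = "DLH ";                 // 4 characters: "DLH" plus one trailing blank
private const string MQFMT_DEAD_LETTER_HEADER = "MQDEAD  ";   // 8 characters: "MQDEAD" plus two trailing blanks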


 


Serializing the Dead Letter Header


To achieve this I wrote a DLH utilities component that allows the various fields of the DLH to be set; the component also serializes the DLH into a byte[] so that it may be pre-pended to the message. The component is called from a custom send pipeline component, which allows the key fields to be set at design time; the pipeline component also writes the MQMD_Format property to the message context.


 


The format of the MQ DLH struct is as follows:


 


char[] strucId = new char[4];        // Structure identifier – MQCHAR4 StrucId ("DLH ")
int version = 1;                     // Structure version number
int reason;                          // Reason the message arrived on the dead-letter queue
char[] destQName = new char[48];     // Name of the original destination queue
char[] destQMgrName = new char[48];  // Name of the original destination queue manager
int encoding;                        // Numeric encoding of the data that follows the MQDLH
int codedCharSetId;                  // Character set identifier of the data that follows the MQDLH
char[] format = new char[8];         // Format name of the data that follows the MQDLH
int putApplType;                     // Type of application that put the message on the dead-letter queue
char[] putApplName = new char[28];   // Name of the application that put the message on the dead-letter queue
char[] putDate = new char[8];        // Date when the message was put on the dead-letter queue (yyyymmdd)
char[] putTime = new char[8];        // Time when the message was put on the dead-letter queue (hhmmss00)
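The fixed-length fields above add up to the size of the serialized header: 4 + 4 + 4 + 48 + 48 + 4 + 4 + 8 + 4 + 28 + 8 + 8 = 172 bytes, which is why the SerializeHeader method below allocates a 172 byte buffer.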


 


This struct needs to be serialized into a byte[] and pre-pended to the message:


 


public byte[] SerializeHeader()
{
      byte[] header = new byte[172];
      int index = 0;
      byte[] tmp = null;

      // strucId
      int written = System.Text.Encoding.UTF8.GetBytes(strucId, 0, strucId.Length, header, index);
      index += 4;

      // version
      tmp = BitConverter.GetBytes(version);
      tmp.CopyTo(header, index);
      index += 4;

      // reason
      tmp = BitConverter.GetBytes(reason);
      tmp.CopyTo(header, index);
      index += 4;

      // destQName
      written = System.Text.Encoding.UTF8.GetBytes(destQName, 0, destQName.Length, header, index);
      index += 48;

      // destQMgrName
      written = System.Text.Encoding.UTF8.GetBytes(destQMgrName, 0, destQMgrName.Length, header, index);
      index += 48;

      // encoding
      tmp = BitConverter.GetBytes(encoding);
      tmp.CopyTo(header, index);
      index += 4;

      // codedCharSetId
      tmp = BitConverter.GetBytes(codedCharSetId);
      tmp.CopyTo(header, index);
      index += 4;

      // format
      written = System.Text.Encoding.UTF8.GetBytes(format, 0, format.Length, header, index);
      index += 8;

      // putApplType
      tmp = BitConverter.GetBytes(putApplType);
      tmp.CopyTo(header, index);
      index += 4;

      // putApplName
      written = System.Text.Encoding.UTF8.GetBytes(putApplName, 0, putApplName.Length, header, index);
      index += 28;

      // putDate – Format: yyyymmdd
      written = System.Text.Encoding.UTF8.GetBytes(putDate, 0, putDate.Length, header, index);
      index += 8;

      // putTime – Format: hhmmss00
      written = System.Text.Encoding.UTF8.GetBytes(putTime, 0, putTime.Length, header, index);
      index += 8;

      return header;
}
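As a usage illustration, the fixed-length character fields need to be blank-padded to their declared sizes before serializing. Something along these lines, where the DeadLetterHeader class name, the Pad helper and the sample values are just placeholders standing in for the DLH utilities component (and assuming the fields above are exposed):

// Hypothetical helper: MQ character fields are fixed length and blank-padded.
private static char[] Pad(string value, int length)
{
      return value.PadRight(length, ' ').ToCharArray(0, length);
}

// Populating a DLH instance before serializing it:
DeadLetterHeader dlh = new DeadLetterHeader();
dlh.strucId        = Pad("DLH", 4);                  // MQDLH_STRUC_ID
dlh.reason         = 2053;                           // e.g. MQRC_Q_FULL
dlh.destQName      = Pad("TARGET.QUEUE", 48);
dlh.destQMgrName   = Pad("QM_demoappserver", 48);
dlh.encoding       = 546;                            // assumption: MQENC_NATIVE on Windows
dlh.codedCharSetId = 1208;                           // assumption: UTF-8 CCSID
dlh.format         = Pad("MQSTR", 8);                // format of the data that follows the DLH
dlh.putApplType    = 0;                              // MQAT_NO_CONTEXT, as a placeholder
dlh.putApplName    = Pad("BizTalkSendPipeline", 28);
dlh.putDate        = Pad(DateTime.UtcNow.ToString("yyyyMMdd"), 8);
dlh.putTime        = Pad(DateTime.UtcNow.ToString("HHmmss") + "00", 8);
byte[] dlhBytes    = dlh.SerializeHeader();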


 


Pipeline Component


The MQ adapter does not directly support this, so the custom send pipeline component is responsible for pre-pending the DLH to the message data stream and for setting the message context property, which the MQ adapter will then write onto the MQ message:


 


private static PropertyBase mqmd_Format = new MQSeries.MQMD_Format();

private const string MQFMT_DEAD_LETTER_HEADER = "MQDEAD  ";

// Set the MQMD_Format property to MQFMT_DEAD_LETTER_HEADER – to indicate the message
// is pre-pended with the dead letter header…
inmsg.Context.Write( mqmd_Format.Name.Name,
                     mqmd_Format.Name.Namespace,
                     MQFMT_DEAD_LETTER_HEADER );

// Pre-pend the message body with the MQSeries dead letter header…
inmsg.BodyPart.Data = DeadLetterHelper.BuildDeadLetterMessage(
                          inmsg.BodyPart.GetOriginalDataStream(),
                          _DestinationQueue,
                          _QueueManager,
                          _ApplicationName );
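DeadLetterHelper.BuildDeadLetterMessage isn't shown here; a minimal sketch of what such a helper might look like, reusing the hypothetical Pad helper from the previous snippet (everything beyond the call shape above is an assumption; requires using System.IO):

public static Stream BuildDeadLetterMessage( Stream originalBody,
                                             string destinationQueue,
                                             string queueManager,
                                             string applicationName )
{
      // Build and serialize the DLH using the utilities component described earlier;
      // the remaining DLH fields are left at their defaults for brevity.
      DeadLetterHeader dlh = new DeadLetterHeader();
      dlh.destQName    = Pad(destinationQueue, 48);
      dlh.destQMgrName = Pad(queueManager, 48);
      dlh.putApplName  = Pad(applicationName, 28);
      byte[] dlhBytes  = dlh.SerializeHeader();

      // Pre-pend the DLH to the original message body in a new stream.
      MemoryStream combined = new MemoryStream();
      combined.Write(dlhBytes, 0, dlhBytes.Length);

      byte[] buffer = new byte[4096];
      int read;
      while ((read = originalBody.Read(buffer, 0, buffer.Length)) > 0)
      {
            combined.Write(buffer, 0, read);
      }

      // Rewind so the adapter reads the combined stream from the start.
      combined.Seek(0, SeekOrigin.Begin);
      return combined;
}

Buffering the whole body in a MemoryStream keeps the sketch simple; a production pipeline component would more likely use a forward-only, streaming wrapper around the original data stream.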


 


All that is required then is for the send port to be configured to send the message to the SYSTEM.DEAD.LETTER.QUEUE.


 


Dead Letter Handler


Once the message is on the dead letter queue, the dead letter handler (runmqdlq.exe) can be configured to take the message off the queue and put it on the destination queue defined in the DLH. This proved to be a little tricky to get working; thanks to Jason for his help in getting the dead letter handler running.


 


runmqdlq.exe SYSTEM.DEAD.LETTER.QUEUE QM_demoappserver < qrule.rul


 


The handler may be fed a rule file (qrule.rul); the example here removes the DLH and puts the message on the queue specified in the DLH. One thing to watch out for: the rule below needs to have a CRLF at the end!


 


INPUTQM(QM_demoappserver) INPUTQ('SYSTEM.DEAD.LETTER.QUEUE') WAIT(NO)
ACTION(FWD) FWDQ(&DESTQ) HEADER(NO)
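Since the trailing CRLF is easy to lose in a text editor, one option is to generate the rule file from code, something like this (the file path is just an example; requires using System.IO):

// Write the runmqdlq rule file with explicit CRLF line endings,
// including the trailing CRLF on the last line that the handler requires.
using (StreamWriter writer = new StreamWriter(@"C:\temp\qrule.rul"))
{
      writer.Write("INPUTQM(QM_demoappserver) INPUTQ('SYSTEM.DEAD.LETTER.QUEUE') WAIT(NO)\r\n");
      writer.Write("ACTION(FWD) FWDQ(&DESTQ) HEADER(NO)\r\n");
}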


 

BizTalk 2004 side-by-side deployment example

Anyone tried to get side-by-side deployment to work in BizTalk 2004? Anyone found any examples? I know I couldn’t, so here’s my attempt at it.


The business scenario is as follows:



  • A long-running orchestration instance version 1.0.0.0 is awaiting a correlating response from a business partner – in the case of my client this will typically take 4 to 6 weeks
  • The orchestration is upgraded to version 1.1.0.0, side-by-side with version 1.0.0.0
  • New orchestration instances should instantiate under the new version 1.1.0.0
  • Existing orchestration instances should rehydrate by correlation under the old version 1.0.0.0.

Assumptions about the sample application:



  • Zipfile can be extracted to C:\temp\VersioningTest (if not, you’ll need to change the binding files manually before importing them)
  • BizTalk management database is called BizTalkMgmtDb and is on the local server (if not, change each project’s properties)
  • The default in-process host BizTalkServerApplication exists (if not, you’ll need to change the binding files manually before importing them)

Download this zipfile and extract it to C:\temp so that it expands to C:\Temp\VersioningTest\*.*


Open C:\Temp\VersioningTest\Source\V1.0\Version 1.0.0.0.sln and examine the orchestration. It looks like this:



Pretty simple stuff. The business process you will be performing is as follows:



  • a client application (this means you) drops a message into the Msg_from_client folder to initiate the business process
  • Project1.TestOrchestration instantiates, inserts a comment into the message noting which version processed it, then drops it into the Msg_to_partner folder
  • the partner application (this means you again) moves this message from the Msg_to_partner folder to the Msg_from_partner folder
  • Project1.TestOrchestration rehydrates by correlation, inserts a comment into the message noting which version processed it on the way back, and drops this message into the Msg_to_client folder

Deploy the solution, refresh BizTalk explorer, and you should see the following two assemblies:



  • VersioningTest.Project1(1.0.0.0)
  • VersioningTest.Schemas(1.0.0.0)

In BizTalk Deployment Wizard, import the assembly binding from file C:\temp\VersioningTest\Source\V1.0\Bindings_v1.0.xml and then start orchestration VersioningTest.TestOrchestration, accepting all the defaults to start the associated send and receive ports.


Testing version 1.0.0.0


Copy Test_message_1.xml from C:\temp\VersioningTest\Messages\TestMessages to C:\temp\VersioningTest\Messages\Msg_from_client.


This will get picked up, and a message will appear in C:\temp\VersioningTest\Messages\Msg_to_partner. Open this message, and view the text inside the InboundComment element that confirms it was processed by version 1.0.0.0 of the orchestration.


The business process is now waiting for its partner application to respond. Remember that in the long-running business process we are modelling, this response could take weeks or even months to arrive. To provide this response, move the message from C:\temp\VersioningTest\Messages\Msg_to_partner to C:\temp\VersioningTest\Messages\Msg_from_partner.


This will get picked up, and a message will appear in C:\temp\VersioningTest\Messages\Msg_to_client. Open this message, and view the text inside the OutboundComment element that confirms it was processed by version 1.0.0.0 of the orchestration:



Side-by-side deployment


In the above test, the partner application took as long to respond as it took you to copy a message from Msg_to_partner to Msg_from_partner. Imagine instead that this process takes weeks or even months, and in the meantime, an orchestration upgrade to version 1.1.0.0 occurs. New orchestration instances should instantiate the new version 1.1.0.0, but existing instances should drain out under version 1.0.0.0.


To prepare for this test, copy the first three test messages (Test_message_1.xml, Test_message_2.xml and Test_message_3.xml) from C:\temp\VersioningTest\Messages\TestMessages to C:\temp\VersioningTest\Messages\Msg_from_client, and ensure that you get three messages in C:\temp\VersioningTest\Messages\Msg_to_partner, with values for the ID element of 1, 2 and 3, and also with InboundComment element values indicating that they were processed by version 1.0.0.0 of the orchestration. Also check HAT’s Operations – Service Instances – Orchestrations view, and you will see these three active (and perhaps dehydrated) instances awaiting a response.


Now open C:\Temp\VersioningTest\Source\V1.1\Version 1.1.0.0.sln, and examine the differences: the Project1 assembly version number has changed, as has the version number inserted into the comments in the messages.


Right-click the Project1 project and select Deploy. Refresh BizTalk explorer, expand assemblies and orchestrations, and you will see the two versions:



In BizTalk Deployment Wizard, import the assembly binding from file C:\temp\VersioningTest\Source\V1.1\Bindings_v1.1.xml.


Now here’s the bit you have to get exactly right:



  1. Unenlist the old version – without terminating the active instances
  2. Enlist and start the new version
  3. Resume the suspended instances of the old version (since unenlisting the old version automatically suspended them all)

To do so, first refresh BizTalk Explorer, then right-click VersioningTest.TestOrchestration and select Unenlist – confirm, ensuring you do not terminate active instances. Then, start VersioningTest.TestOrchestration(1), accepting all the defaults. Then, go into HAT’s Operations – Service Instances – Orchestrations view, and resume the three suspended instances.
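If there are more than a handful of suspended instances, resuming them one by one in HAT gets tedious. As a rough alternative sketch, the BizTalk WMI provider can resume them from code; treat the ServiceStatus value and the Resume method as assumptions to verify against the MSBTS_ServiceInstance documentation:

// Requires a reference to System.Management (using System.Management;).
ManagementObjectSearcher searcher = new ManagementObjectSearcher(
      new ManagementScope(@"root\MicrosoftBizTalkServer"),
      new WqlObjectQuery("SELECT * FROM MSBTS_ServiceInstance WHERE ServiceStatus = 4"));

// ServiceStatus = 4 is assumed to mean "suspended (resumable)".
foreach (ManagementObject instance in searcher.Get())
{
      instance.InvokeMethod("Resume", null);   // assumed method on MSBTS_ServiceInstance
}

Note that this resumes every suspended, resumable instance in the group, not just the version 1.0.0.0 orchestration instances; for the three instances in this sample that is exactly what we want.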


Test: old instance correlates and is processed by version 1.0.0.0, before any new instances of version 1.1.0.0 exist


Move the message with ID = 1 from C:\temp\VersioningTest\Messages\Msg_to_partner to C:\temp\VersioningTest\Messages\Msg_from_partner, and when it appears in C:\temp\VersioningTest\Messages\Msg_to_client open it up:



 You can see it was processed on the way back by version 1.0.0.0.


Test: new instances are processed by version 1.1.0.0


Copy Test_message_4.xml and Test_message_5.xml from C:\temp\VersioningTest\Messages\TestMessages to C:\temp\VersioningTest\Messages\Msg_from_client, and examine them when dropped into C:\temp\VersioningTest\Messages\Msg_to_partner:




You can see they were each processed on the way in by version 1.1.0.0.


Test: old instance correlates and is processed by version 1.0.0.0, whilst a new active instance of version 1.1.0.0 exists


Move the message with ID = 2 from C:\temp\VersioningTest\Messages\Msg_to_partner to C:\temp\VersioningTest\Messages\Msg_from_partner, and when it appears in C:\temp\VersioningTest\Messages\Msg_to_client open it up:


You can see it was processed on the way back by version 1.0.0.0, even though an active instance of version 1.1.0.0 exists.

MSMQT supports hardware load balancers… kinda

When migrating the solution from a single server to a multi-server staging environment, my client experienced problems with MSMQT service instance times running into minutes. We were using an Alteon hardware load balancer, but all the documentation only mentioned using NLB – I could find nothing that confirmed or denied that it was possible to use anything other than NLB. An example of this woolliness is KB 898702 which states you can try using NAT devices but they’re not officially supported.


Anyway, when we installed NLB instead, it worked a treat. And we got this definitive response from PSS:


We support hardware load balancing with the exception:

  • We don’t support NAT
  • Sticky IP must be implemented

Turns out our hardware load balancer uses NAT internally, so that was presumably our problem. Has anyone successfully used a hardware load balancer with MSMQT? I guess I don’t understand how the BTS app servers could bind to its IP address given it’d be external to them…

MSMQT adapter uses the default host

When you add the MSMQT adapter, the host it uses is whichever host is the default at the time – usually BizTalkServerApplication – and once you’ve selected it, it seems you can never change it. So if you want a separate host for your MSMQT adapter, then before you add the MSMQT adapter to your server group, create a host called MSMQT_host (or whatever naming convention you’re using for your hosts) and set it as the default host. Now add the MSMQT adapter, and it will run under this host. Finally, set the previous default host back as the default.