XLANGs.BTEngine.BTXTimerMessages Delivered, Not Consumed

What?  How?  When?  Why?  Useless?  You have no clue what I am talking about?



What are BTXTimerMessages?


They are messages BizTalk uses internally to control timers.  This includes the delay shape and scope shapes with timeouts. 



How will you see BTXTimerMessages?


You will see these messages in HAT.  They will be associated with running Orchestrations.  If they show up, they will be in the Delivered, Not Consumed (zombie) message status.



When will you see BTXTimerMessages?


I see these messages when I am working with parallel actions, atomic scopes, and convoys.  My sample, Limit Running Orchestrations, produces these types of messages.  I have not been able to pinpoint the exact shapes or actions that cause these messages to show up.



Why do you see BTXTimerMessages?


Good question.  I have a theory, but it is probably wrong.  It is that the timer message is returned to the Orchestration after it has passed the point of the delay or scope shape.  Thus, it is never consumed by the Orchestration.



Are these messages useless?


I think so.  I always ignore them.  They will go away when the Orchestration completes or is terminated.  These do not seem to act like normal zombies in that they do not cause the Orchestration to suspend.



Ok, so you have no idea what I am talking about?


Let me fill you in.  BTXTimerMessages in the Delivered, Not Consumed status are sometimes seen in HAT when working with timers.  I have not really determined why they happen, but I suspect they should not show up in HAT at all.  I do not think they hurt anything, and I pay little attention to them.  When the Orchestration finally ends or is terminated, these messages simply go away.  They are annoying, and in long-running transactions they can start to stack up in HAT.  Be warned, though: if you try to terminate these messages, they will take down the running Orchestration with them.

Rules Engine and A4Swift gotcha…

I’ve been using the A4Swift (v2.1) accelerator a lot recently, which I have to say is extremely good.  I thought it worth sharing a tip that has caused me some pain on two occasions recently.  Depending on your scenario, the issue can cause a significant performance degradation on the receive side: CPU utilization is pegged at close to 100%, but the inbound processing rate is very slow.

 

The issue is caused by some of the QFEs for the Rule Engine, and possibly even the A4Swift accelerator; to be honest, I don’t want to waste time digging through the dependencies to work out which QFE causes it.  The bottom line is that the following registry key needs to be checked after applying any of the QFEs for the Rule Engine and, to be safe, the A4Swift accelerator:

 

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\BusinessRules\3.0\CacheEntries = 512

 

A couple of the QFEs that I have applied reset the cache entries back to 32, which is the default for the Rule Engine.  The problem is that the A4Swift accelerator uses a large number of rules, so the cache size needs to be bumped up to 512; otherwise the Rule Engine will spend all of its time loading rules.  So the bottom line is to check this registry key if you are using the Rule Engine.
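One way to reapply the setting after a QFE is a small .reg file; this is a sketch that assumes the value is stored as a DWORD (0x200 hex = 512 decimal), so double-check the existing value type in regedit before importing it:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\BusinessRules\3.0]
"CacheEntries"=dword:00000200
```

After changing the value, restart the BizTalk host instances so the Rule Engine picks up the new cache size.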

 

Thanks!


The BizTalk Configuration Dilemma

I’ve watched several times Jurgen Willis’ excellent online presentation about the Business Rule Engine (BRE). One of the BRE usage scenarios demoed by him contains a sample orchestration that uses the BRE to dynamically configure a delay time.
This confused me since it added another option to accomplish dynamic configuration of business processes. We are definitely facing a configuration dilemma now; here are some of the alternatives:
  • Config Files
    Use the default .NET config files (BTSNTSvc.exe.config) as the store for your key-value pairs or custom types.  You can easily read the settings using the default .NET classes from inside your orchestrations.
    This is definitely the easiest option.  But it makes your business processes host-instance dependent (every host instance can be configured differently).  It is also not easily deployable: with different environments, you will have to manually copy and paste your configuration sections, and you could easily make mistakes.  I guess there are also no tools available for business users to manage the values.
  • Business Rule Engine
    I have the feeling, though, that the BRE and its terminology are not really geared towards this simple functionality (storing key-value pairs).  Most of the BizTalk included samples use schema facts, some of them use class facts, but none addresses the config-management purpose that was demoed in the presentation.
    I’ve tested a couple of things myself, including calling the BRE from code inside an orchestration using several StringBuilder instances or a Hashtable as the argument(s).  This seemed a very strange solution to me (it is not easy to define the rules/vocabularies when there are several instances of the same class).  Another option is to create a custom configuration class that gets and sets the values, which will simplify the vocabulary.  Or you could always use the classic approach and create a custom schema to hold your configuration values.
Finally I emailed Jurgen, who turned out to be a very friendly and helpful man.  He pointed out to me that the BRE is in fact not specifically targeted at this scenario and that, in general, it focuses more on complex types than on value types (especially when multiple instances are evaluated in the same policy).
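To make the custom-configuration-class option above concrete, here is a rough sketch of reading one value through a policy.  The policy name, fact class, and property are hypothetical, and the Microsoft.RuleEngine calls are written from memory, so verify them against your BizTalk version:

```csharp
using Microsoft.RuleEngine;

// Hypothetical fact class; the policy's vocabulary would be bound to
// its property so that rules can set the configured delay.
public class ProcessConfig
{
    private int delaySeconds;
    public int DelaySeconds
    {
        get { return delaySeconds; }
        set { delaySeconds = value; }
    }
}

public class ConfigReader
{
    // Executes the (hypothetical) "GetProcessConfig" policy, letting its
    // rules populate the fact instance, then returns the resulting value.
    public static int ReadDelaySeconds()
    {
        ProcessConfig config = new ProcessConfig();
        Policy policy = new Policy("GetProcessConfig");
        policy.Execute(config);
        policy.Dispose();
        return config.DelaySeconds;
    }
}
```

The returned value could then be assigned to an orchestration variable and fed to a Delay shape, which is essentially what the demo did.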

I’m still alive :-)

Due to several reasons (priority shifts…), it has been a while since I blogged here.  To eliminate all doubts: I’m still alive and still open to all BizTalk/XML/SOA related questions and discussions 🙂

I’m also glad to announce that tomorrow I’ll join Microsoft in a product technical presales role.  It remains to be seen how much spare time I’ll have to dedicate to this blog 😉

Just one more thing I’d like to share here: if you’re into the enterprise space and love the integration solutions Microsoft is providing, make sure to check out MIIS (Microsoft Identity Integration Server) as well! 


How to Name Output Files Inside An Orchestration

In many cases it can be useful to know the exact name of the output file that will be sent from your Orchestration using the File Adapter.  This can be difficult if you are using the %MessageID%.xml macro to write the file, since this is set after the message is sent from the Orchestration.



Delivery Notification can help you determine whether your message was sent successfully, but it cannot give you the file name.



BizTalk 2004 has two ways to dynamically name your files from inside the Orchestration: use a Dynamic Send Port, or use the %SourceFileName% macro on the Send Port.



Dynamic Send Port


Dynamic Send Ports are powerful and useful if you need to send your files to many different locations on the file system, like sometimes to C:\data\ and other times to C:\root\.  The downside is that you need to have all of this information inside your message or hard-code it in the Orchestration, so it can be difficult to change.
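For reference, pointing a dynamic port at a destination boils down to setting its address from inside the Orchestration, something like the following XLANG/s sketch.  DynamicSendPort and the path are hypothetical, and the exact property syntax should be checked in your own project:

```
// Inside an Expression or Message Assignment shape.
// DynamicSendPort is an orchestration port whose Port binding is set to Dynamic.
DynamicSendPort(Microsoft.XLANGs.BaseTypes.Address) = "FILE://C:\\data\\Output.xml";
```

Everything needed to build that address has to come from the message or be hard-coded, which is exactly the downside mentioned above.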



Source File Name Macro


This is my preferred approach to Output File Naming.  This does not require your data or the Orchestration to have any idea as to the directory you want to write your file to.  This requires using the %SourceFileName% macro to set the output file name inside the Send Port. 



Don’t want the same name as your input file you say?  Now, here is the trick.  Just change the File.ReceivedFileName property inside the Orchestration to be anything you want!  This can be done by creating a new message and changing the context property.  The code inside a Message Assignment shape would look like this:



// Create a new message
OutMessage = InMessage;

// Set the ReceivedFileName context property
OutMessage(FILE.ReceivedFileName) = "SetInOrch.xml";



It is not required to demote this value into your message.  So, this method works with the Pass Through Send Pipeline because this context value is used by the File Adapter and not the pipeline.



CRITICAL: Unlike the %MessageID% macro, the %SourceFileName% macro does not need an additional extension (like .xml or .txt) after it.



I have put together a simple sample showing both of these types of file naming.  For information on how to run the samples, please see the Read Me file.



DOWNLOAD: Sample Naming Output Files


 

Executing pipelines from inside an orchestration: Introducing the LOOPBACK adapter

Let’s go over some of the out-of-the-box options for executing a pipeline from inside your orchestration.

 

It first struck me that the MSMQT adapter IS, in fact, the MessageBox.

 


  1. MSMQT

 


  • Set up a schedule with 1 MSMQT send port and 1 MSMQT receive port sharing the same MSMQT queue name (for example, ‘loopback’ would make a lot of sense).
  • Create a correlation type and correlation set based on the MSMQT label property.
  • Assign the MSMQT message label to a newly created GUID inside your orchestration.

 

Try to do a send and a receive and you will see that your pipelines are executed.  As a test, I was able to successfully parse a message from inside a schedule.

 

Another very simple option is using the HTTP adapter; credits go 100% to Scott Colestock.  Scott used this as a temporary approach to get past this problem, and then switched to the loopback adapter below…

 


  2. HTTP

 


  • Use a solicit-response send port.
  • Use a bogus HTTP page that simply echoes the content back.

 


using System;
using System.IO;
using System.Web;

// Minimal HTTP handler that echoes the request body straight back,
// so a solicit-response send port receives a copy of its own message.
public class LoopbackHandler : IHttpHandler
{
   // The handler holds no per-request state, so one instance is reusable.
   public bool IsReusable
   { get { return true; } }

   public void ProcessRequest(HttpContext context)
   {
      // Detect the request encoding (from a byte-order mark, if present)
      // and reuse it for the response.
      using (StreamReader sr = new StreamReader(context.Request.InputStream, true))
      {
         StreamWriter sw = new StreamWriter(context.Response.OutputStream, sr.CurrentEncoding);

         sw.Write(sr.ReadToEnd());

         sw.Flush();
         sw.Close();
      }
   }
}


  • Call it loopback.ashx, for example, and use IIS to host the page (for example http://localhost/loopback/loopback.ashx).

 

This option is easy to explain because it requires no knowledge of correlation types and correlation sets, since you are using request/response ports the whole way.  If simplicity is the main goal, then this solution is probably superior to the MSMQT one.

 


  3. Custom LOOPBACK Adapter

 

I also had another idea: why not make your own loopback adapter by writing a bogus solicit-response adapter that returns a response BizTalk IBaseMessage that is a copy of the original request message?  Recreate the original individual message parts and copy the request stream to the response stream.  It’s very simple – download my sample loopback adapter here.

 


  • It’s a custom-coded VB.NET non-batched but non-blocking (async) adapter that really doesn’t do anything.
  • It uses a limited memory footprint and streams to disk.
  • To install, use the MSI package.  Don’t forget to add the adapter using BizTalk Administration…
  • It auto-generates the transmit location URI (in fact, it’s a GUID).  I also added a Boolean property to specify whether you want the original message and part properties to be copied onto the response message and parts.

 

Here are some of the benefits:

 


  • Trigger pipeline execution from inside your orchestration.  Just define your outbound and inbound pipelines and they will be executed.
  • Send pipeline errors can be caught by adding a delivery-failure exception handler.  Adapter exceptions can also be caught there, but since this adapter really doesn’t do anything, this will be very rare (at least I hope so).
  • Receive pipeline errors can be caught by adding a general SOAP-exception handler.
  • Execute mappings from inside your orchestration by defining a map on the inbound and outbound port.  When you use the latter, mapping failures will be caught by the SOAP-exception handler.
  • It’s a black-hole adapter if you want.  It can be used to ignore the processing of certain incoming messages.  Make a send port, name it ‘VANISH’, and subscribe to a given message type or receive-port name, for example.
  • It’s very useful for doing in-order processing.  You can, for example, receive messages from MSMQT through a pass-through pipeline (which is recommended, BTW) and process messages in order from start to end while doing exception-handled parsing/validation/mapping by implementing a serial convoy.

 

Here’s a sample project that demonstrates some of the benefits.

 



Remarks:

 


  • The code can definitely be improved.  It was written very quickly in the scope of a POC.  Alternatively, you could make a better one using the adapter base classes and the wizard.  In fact, I’m hoping the community will see the benefit of this loopback adapter and someone else will make a better one.  I’m not (yet, ha!) a streaming and multi-threading expert.
  • Apparently it is advised to return a read-only forwarding stream to the receive pipeline, at least until SP1 arrives.  I just used the VirtualStream class from the BizTalk Streaming assembly.
  • Do not forget, as usual: there’s no warranty at all.  I haven’t tested this very thoroughly yet, so… use the adapter at your own risk.

WSDL First and BizTalk

I’ve held off posting on the WSDL-first approach so far because I’ve been waiting for the tools to arrive to make my gripes go away. Well, I’m still waiting… and waiting…


In the BizTalk world, we already have a service-oriented mindset.  We communicate in terms of schema-defined documents; we offer up services, not APIs.  We want to be interoperable with as many people as possible.  I don’t care whether you’re using Java, .NET or ARM assembler – if you can send me a SOAP message, I’ll talk to you.  The trouble is that the .NET world approaches web services from a very object-y viewpoint.  Just pop a [WebMethod] in your asmx page and let ASP.NET create you some WSDL describing your objects.  Easy, right, for people who want to knock up a quick web service in code?


The trouble is that this promotes a very code-first approach, which really sucks from a BizTalk developer’s point of view.  I already have my schemas.  I know the shape of the messages I want to receive.  Where’s the support for the schema- and WSDL-first approach?  The “publish orchestration” wizard seems ropey – a colleague today ended up with unusable WSDL just because he’d used an xs:import in his schema.  Besides, I don’t want a “publish orchestration” wizard; I want a WSDL designer with “create BizTalk orchestration template” and “import BizTalk definitions” options.


How about BizTalk’s support for consuming web services? If I add a reference to a Web Service, I want the types defined in that WSDL added as first-class schemas in my BizTalk solution, not hidden under MyWebReference\Reference1.xsd. I want support for promoting properties out of the web reference – just remembering them if I refresh the web reference would be nice. Even worse, if I already have those schemas included in my solution please don’t create a duplicate copy of the schema – share the reference (.NET 2.0 has an enhancement to WSDL.exe that may support this, I believe).


For the record, I am very much in favour of WSDL first. I just think there’s an awful lot of pain to get it working in anything but the most simple cases. Let’s hope Microsoft’s new Domain Specific Languages designers lead to some WSDL designers pronto. XML Spy looks nice, but it’s damned expensive!


This hasn’t been a very structured post, for which I apologise, but I’d love to hear from anyone else who has found any silver bullets for BizTalk and WSDL-first…

More on deployment…

Ok, sorry for the short hiatus… very busy times on my current project.  Anyway, I previously mentioned some points around deployment.  It turns out that using BTSInstaller doesn’t work well when utilizing roles and parties.  This doesn’t mean we have to scrap BTSInstaller altogether; we just have to do a little extra work in creating scripts that do the deployment as well as the undeployment, and then call those scripts from our MSI package.


 


In the zip file (get it here), I’ve included some sample scripts that may give you a head start in the deployment process.  Keep in mind that if you are NOT using roles and parties, BTSInstaller will get you about 80% of the way there, and you will not need all of the scripts.  You will still need some scripts to go ahead and enlist and start everything, and some work is needed to automagically undeploy everything using Add or Remove Programs.  With everything included in the zip file, you should be able to glean something that will help.


 


A few key points –


 



  • Binding files – everything is driven by the binding file.  Make sure it is correct, or you will have issues.  One thing to watch out for: your binding file contains the PublicKeyToken of the assembly it is derived from.  If during multi-developer development these somehow get out of sync, when you deploy you won’t get any errors, but nothing gets deployed either.

  • Deleting parties – if you are implementing parties and roles, you will have to un-enlist the parties and delete them prior to un-deploying the assemblies that contain the orchestrations.  Included in the SDK are two sample projects that, when compiled, create one exe to un-enlist the parties and one exe to delete the parties.  These are coded assuming that the SQL Server hosting your management database is local to BizTalk.  This is most likely not the case, so you will have to edit this code to determine which server the database is on and supply that in the connection string.  It’s quite simple, and if you need some code snippets just send me a comment.

  • Removing orchestrations – running the MSI to delete orchestrations (if using the default BTSInstaller) won’t remove your Orchestrations, since they are in the running state.  Chances are, if you’re testing, they are even in the active state.  In the supplied StopOrchestrations script, there is a flag you can pass into the Unenlist method that will terminate any active instances.  Pay attention to this if that is not your intended action.

  • Write to the event log – you will notice in my scripts I have added some code to shell out and use the LogEvent method to write errors to the Application log.  Given the nature of VBScript, if you don’t do this and run into errors, you will never figure out where to start looking.
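The event-log point above can be sketched like this (a minimal VBScript fragment; the message text and exit code are illustrative, and the actual deployment call is elided):

```vbscript
' After a deployment step that may fail, write any error to the
' Application event log so there is a trace to start from.
' LogEvent type 1 = ERROR.
Dim shell
Set shell = CreateObject("WScript.Shell")

On Error Resume Next
' ... deployment call that might fail goes here ...
If Err.Number <> 0 Then
    shell.LogEvent 1, "Deployment script failed: " & Err.Description
    WScript.Quit 1
End If
```

WshShell.LogEvent writes to the Application log by default, which is exactly where the scripts in the zip log their errors.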

 


Use the scripts at your own risk.  They are commented, so you should be able to read along and see what’s going on.  Remember, if you are not using roles, you won’t need half the stuff in the Install.bat file and some of the stuff in Cleanup.bat, but read through those to see the order in which things are called; you will need some of the other scripts regardless.


 


Cheers!!

Halo 2! :-)

Things I like to see in my MSN window #1:


“Can you get up to TVP? Halo 2 team are here and we’re playing.”


WOOHOO! Just had a 6 minute deathmatch round against the UK XBox team. We lost about 50-20, which is pretty shabby, but I did manage to score a double kill. Dual wielding SMGs is teh roxx0rs! I just about resisted temptation to hit campaign mode 😉


Roll on Thursday.