Demystifying Direct Bound Ports – Part 2

Message Box Direct Bound Ports


Message box direct bound ports, as the name implies, allow you to drop messages directly into the message box without an explicit recipient, and allow you to subscribe to messages that meet certain criteria, regardless of sender.  To configure a message box direct bound port, set the 'Partner Orchestration Port' property to 'Message Box'.


Sending a message on a message box direct bound port is equivalent to publishing the message to a message bus, in this case the message box.  For any published message there can be any number of subscribers.  If there are no subscribers interested in the message at the time you publish it, a persistence exception is thrown with an inner exception of "subscription not found".


Sometimes when sending a message through a message box direct bound port you may have an implicit recipient in mind, and will set properties to particular values that you know one or more subscribers are looking for; in this way the port serves as a mechanism for loose coupling.


Recipients of the message can be any type of service that can subscribe to messages, including orchestrations and send ports.


Receiving a message through a message box direct bound port is equivalent to subscribing to a message bus with filter criteria.  For an activating receive shape the subscription will be the message type and the filter; for non-activating receive shapes the subscription will be the message type and the correlation set.


Every receive shape always includes the message type as part of its subscription. 


If you don’t add any filter criteria to the activating receive shape connected to a message box direct bound port then the subscription will be:


http://schemas.microsoft.com/BizTalk/2003/system-properties.MessageType == MyMessageType


Which can be read as, "Give me every message whose message-type is MyMessageType".  This is what differentiates a send port subscription from an orchestration subscription.  A send port subscription, if you don't provide it with a filter, will only handle messages sent directly to it via a bound (specify-now or specify-later) logical orchestration port (i.e. the subscription for a send port includes its send port id as a clause OR'ed with its filter).  A message box direct bound receive port that does not have a filter explicitly added will receive every message that matches the message type that the port's operation is configured for.  [Note: If the message-type is XmlDocument then this is a type-agnostic port.  Type-agnostic ports accept any message type.  To accomplish this the subscription won't include a message-type, implying that it is not filtering based on message-type.  So if there is no filter on the port and the port is type-agnostic, the subscription will be empty, and in this case it will not match any incoming message, as there are no predicates for the routing engine to match against.]
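For illustration, adding a filter predicate to the activating receive makes the subscription more specific.  With a hypothetical custom promoted property (the MyCompany.PropertySchema.OrderType name below is made up), the subscription would read something like:

   http://schemas.microsoft.com/BizTalk/2003/system-properties.MessageType == MyMessageType
   AND
   MyCompany.PropertySchema.OrderType == "PurchaseOrder"

Which can be read as, "Give me every message of type MyMessageType whose OrderType is PurchaseOrder".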


When using message box direct bound receive ports, be careful to make the filter as specific as possible.  I typically caution developers to use message box direct bound receive ports only when absolutely necessary, or when the benefits (e.g. loose-coupling) outweigh the risks, as many developers make the mistake of not making their filter distinguishing enough.  The side effect of an insufficiently distinguishing filter is that the orchestration can receive messages it didn't intend to.


A common example of this mistake is the following: an activating receive shape with a filter using some custom property, followed sometime later by a send shape sending the same message to a send port physically bound to an endpoint.  In this case the developer assumes that since the logical send port is bound to a physical send port, only that physical send port will get the message.  This is not the case.  What will happen is that after the first message is published to the message box, an infinite number of orchestrations will be activated.  This happens because the message being sent to the send port endpoint is published to the message box and, because every time a message is sent to the message box all subscriptions are checked for a match, the activation filter of the orchestration will match the message and a new orchestration instance will be started, recursing to infinity.  If the orchestration is more complex, and/or if the receive is using a correlation set as its subscription, the difficulty of debugging a similar scenario increases dramatically.


 



Figure 5 Orchestration that will consume its own messages


 


In this case the only way to not have the orchestration pick up the same messages it is sending (i.e. incorrectly activate a new instance of the same orchestration) is to either construct a new message and change the MyProp property to some other value, or send a message of a different message-type.
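As a sketch, the first fix could be done in a Message Assignment shape just before the send; the message and property names here are hypothetical:

   // Message Assignment shape (XLANG/s) - names are hypothetical
   msgOut = msgIn;
   msgOut(MyCompany.PropertySchema.MyProp) = "Processed";
   // msgOut no longer matches the activation filter, so no new
   // instance is started when it is published to the message box

Note that a new message must be constructed; context properties cannot be changed on the original message in place.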

Cannot Link Field in BizTalk Server 2004 Mapper Tool


Has the BizTalk Server 2004 Mapper tool ever prevented you from linking a field in a map?  Have you ever spent two hours figuring out why?


Check to see if the field or record has a constant value.  If so, that is your problem.  The following is from the BizTalk Server 2004 documentation:


You cannot link to a Field or Record node in the destination schema that has a constant value associated with it. On the other hand, you can link to a required Field or Record node in the destination schema that has a default value associated with it. Note, however, that when you test the map, the default value will be used.
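In schema terms, this is the difference between a node's Fixed and Default properties, which correspond to the XSD fixed and default attributes.  A sketch, with made-up element names:

   <!-- Cannot be a link target: the value is constant -->
   <xs:element name="Status" type="xs:string" fixed="NEW" />

   <!-- Can be a link target, but the default is what you'll see when testing the map -->
   <xs:element name="Status" type="xs:string" default="NEW" />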


Have fun…


 


Testing Pipeline Components


As many of you know, there are basically two ways to test custom pipelines and custom pipeline components in BizTalk Server:

  1. Create and test them by using them "for real" in BizTalk messaging scenarios:
     i.e. configure ports to use your custom pipelines and feed messages through them.

  2. Use the Pipeline.exe tool included in the BizTalk SDK to run them standalone.

Option 1 is definitely needed, but it is inconvenient, to say the least, for agile development: it is slow, cumbersome, and hard to automate. Option 2 is easier and faster, but it's also inconvenient to automate.

So I had been looking at ways to automate the testing of pipelines and pipeline components, and ran across this article explaining how to use mocks to unit test pipeline components. Neat, but inconvenient as well. Furthermore, it does not allow you to test pipeline components inside a real pipeline, which is sometimes necessary to ensure your component works well with the built-in components in BizTalk (disassemblers in particular can be pretty picky).

I realized then that MS already provided most of what you need to test pipeline components in a better way, and one which is possible to embed inside NUnit tests (or whatever unit testing framework you use): Pipeline.exe relies on a set of helper components in the PipelineObjects.dll assembly included with the SDK, which does the heavy work of mocking the most important BizTalk objects that execute pipelines, such as IPipelineContext, IBaseMessage, and IBaseMessageFactory. I figured that, with a nicer API built on top of this, I could come up with something that really made it easier to test your pipeline components in a more agile manner, and that complemented the core unit tests you are creating for internal functionality.

So I've been working on this for the past couple of days and I already have a working implementation. Here's an example of what it might look like:

/// <summary>
/// Tests that we can execute successfully a loaded pipeline
/// with a flat file as input
/// </summary>
[Test]
public void Test_ExecuteOK_FF()
{
   ReceivePipelineWrapper pipeline =
      PipelineFactory.CreateReceivePipeline(typeof(CSV_FF_RecvPipeline));

   // Create the input message to pass through the pipeline
   Stream stream = DocLoader.LoadStream("CSV_FF_RecvInput.txt");
   IBaseMessage inputMessage = MessageHelper.CreateFromStream(stream);
   inputMessage.BodyPart.Charset = "UTF-8";

   // Add the necessary schemas to the pipeline, so that
   // disassembling works
   pipeline.AddDocSpec(typeof(Schema3_FF));

   // Execute the pipeline, and check the output
   MessageCollection outputMessages = pipeline.Execute(inputMessage);

   Assert.IsNotNull(outputMessages);
   Assert.IsTrue(outputMessages.Count > 0);
}

The code above does the following:

  1. Loads a receive pipeline, which in this case contains a Flat File Disassembler
  2. Creates a new input message with a Flat File loaded from a resource stream
  3. Loads the Flat File schema the FFDasm will use to parse the message and makes it available
     to the pipeline
  4. Executes the pipeline and checks the output.

It's certainly not a lot of code, and personally I think the API is looking pretty
nice right now. So far, I've got it working with the following scenarios:

  • Support for both receive and send pipelines

  • You can create pipelines programmatically, by adding components to any stage, or load
    an existing BizTalk pipeline (the preferred method) using its type.

  • You can make schemas known to the pipeline before executing it by loading the necessary
    BizTalk schemas out of a BizTalk assembly. I've tested it already with both XML and
    Flat File schemas, schemas with one or multiple roots, schemas with promoted properties,
    and envelope schemas.

  • Both debatching receive pipelines and batching send pipelines are supported through
    the API.

Here’s another example, this time of doing multiple document batching into an envelope
using the XML Assembler:

/// <summary>
/// Tests we can execute a send pipeline with
/// multiple input messages and an envelope
/// </summary>
[Test]
public void Test_ExecuteOK_MultiInput()
{
   SendPipelineWrapper pipeline =
      PipelineFactory.CreateSendPipeline(typeof(Env_SendPipeline));

   // Create the input messages to pass through the pipeline
   string body =
      @"<o:Body xmlns:o='http://SampleSchemas.SimpleBody'>
         this is a body</o:Body>";
   MessageCollection inputMessages = new MessageCollection();
   inputMessages.Add(MessageHelper.CreateFromString(body));
   inputMessages.Add(MessageHelper.CreateFromString(body));
   inputMessages.Add(MessageHelper.CreateFromString(body));

   // Add the necessary schemas to the pipeline, so that
   // assembling works
   pipeline.AddDocSpec(typeof(SimpleBody));
   pipeline.AddDocSpec(typeof(SimpleEnv));

   // Execute the pipeline, and check the output:
   // we get a single message with all the input
   // messages grouped into the envelope's body
   IBaseMessage outputMessage = pipeline.Execute(inputMessages);

   Assert.IsNotNull(outputMessage);

   using (StreamReader reader = new StreamReader(outputMessage.BodyPart.Data))
   {
      // d contains the entire output message
      string d = reader.ReadToEnd();
   }
}

One important difference between working with the API I'm creating and Pipeline.exe is
that the API is meant to be used with existing BizTalk artifacts, which you can create
with regular BizTalk projects (though they don't need to be deployed to use them).
Pipeline.exe, on the other hand, works with the raw *.btp and *.xsd files, which is
simpler for some things, but far more inconvenient for testing, at least in my opinion.

I should be ready to post this in a few days, as I still need to test it more thoroughly
and fix a few things, but if anyone is interested, let me know and I'll pass on what
code I have right now. Do note I'm implementing this for BTS06 only, though I see no
reason why you couldn't port it to BTS04!

Promoting properties in generic documents


Often when developing a generic business process, using generic documents makes the most sense.


Consider the core functionality in the ESB project I'm on: we need to take a document of ANY type (the type will not be known at development time) and apply a map that will result in a document whose type is also not known at development time. Without using generic documents, the only way to do this would be to have a specific mapping process for each combination of input and output documents. However, this is not a workable approach as the number of schemas and maps starts to grow. Generic documents solve this problem. In fact, for the ESB project I am working on, generic documents are the cornerstone that the functionality is being built upon.


With BizTalk 2004, you would specify a generic document by creating a new message, and using the type browser to navigate to the System.Xml.XmlDocument class. In BizTalk 2006, System.Xml.XmlDocument has been elevated to a first class citizen status and is now available without needing to browse to it, alongside System.String and System.Boolean.


So, in theory, for my generic mapping requirement, I would receive an inbound System.Xml.XmlDocument, and apply a map to create an outbound System.Xml.XmlDocument.


This all works great, and in most cases is enough. However, there is one critical difference between using generic documents like this and using strongly typed documents. In the generic ESB case I was dealing with, the difference is enough to be a show-stopper. The problem is that promoted properties are not promoted in the outbound document if the document is not strongly typed (schema specified at design time).


To understand why, consider when property promotion happens. It can happen in a receive pipeline, once the document type is identified. Promoted properties for a known schema will have been defined in a property schema associated with that known schema. When you create a document of a known type in an orchestration, BizTalk is aware of the property schema associated with the new message and will promote the properties.


However, with a generic document, this does not happen. If you use direct-bound ports (which in my opinion is what all BizTalk developers should be doing in order to create extensible, loosely-coupled and scalable processes; see my previous post), you will find that an exception occurs when the outbound message is persisted to the MessageBox. The exception message is that BizTalk was unable to route the message because no subscribers were found. This will happen even if you have a valid subscription set up that is filtering on a promoted property in the message, despite the fact that the associated property schema was deployed and you can assign and read the properties in question in your orchestration. If you examine the suspended message and look at the context properties, you'll see that your values were assigned (so the subscription should have been found), but the property is in fact not promoted, so BizTalk has no way to match the subscription.


Pity, SOOOOO close to generic nirvana. So the question now becomes: is there a way to force property promotion for generic documents? The answer, very fortunately for those of us working with generic documents, is yes, it can be done.


In order to do this, you need to create a correlation set and initialize it on the send port that is bound to the MessageBox. It may seem that you are creating a correlation set that you have no intention of following, but it does make sense: the subscriber(s) you have set up that are looking for those promoted properties are in effect following the correlation. You can create a single correlation type that references one or more message context properties, and they will all be promoted and become available to the messaging engine for routing.
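As a sketch: assigning the context property in an orchestration writes the value but does not promote it; it is initializing the correlation set on the send that performs the promotion. All names below are hypothetical:

   // Message Assignment shape (XLANG/s) - value is written, not promoted
   xmlDocOut = xmlDocIn;
   xmlDocOut(MyApp.PropertySchema.TransactionType) = "Order";

   // On the Send shape bound to the MessageBox direct bound port, set
   // "Initializing Correlation Sets" to a correlation set whose correlation
   // type references MyApp.PropertySchema.TransactionType. Only then is
   // the property promoted and visible to the routing engine.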


The punch line: in BizTalk 2006, this is actually now documented in the section dealing with correlation types. However, unless you know and understand the technique, you'd never go looking there. That is the reason I put this post together. Enjoy, and happy travels as you go off on your loosely-coupled generic way!



 

Comet Schwassmann-Wachmann 3 returns


Yes, it's still alive and rolling. This is a famous comet, discovered in 1930, that broke apart on its return in 1995. It keeps disintegrating, and there are over 20 parts identified. The brightest ones are already visible in amateur instruments, as they are about 9th magnitude now. In the middle of May, when it passes near Earth, the comet may reach 4th magnitude, which is within naked-eye range given a clear sky away from city lights. Comet updates and finding charts can be found on NASA's site and at the Sky and Telescope site. Got to plan a night out in the mountains. 🙂

BizTalk Server: Understanding Application Upgrade


If you didn’t read Understanding Application Deployment, be sure to view the archive.  This explains the new features in BizTalk Server 2006 which assist with Application Deployment.


 


Now that you understand how to get applications deployed and running in production, at some point they'll probably need to be upgraded.  Perhaps a partner's schema will change, so your schema and map will have to be changed accordingly.  Or maybe the business will require a new auditing API to be called during one part of the business process's execution.  And as good as our BizTalk developers are, there are always code defects in application code that will need to be fixed.


 


So how does BizTalk Server accommodate these upgrades?  Well, there are a couple of different ways to upgrade your running applications, and as always, different points to consider.


 


Simple Upgrade


 


One scenario we'll deem the "Simple Upgrade Scenario".  This scenario requires no downtime and no code changes, and can be performed single-handedly by an IT Pro or Business Analyst (BA).


 


Some examples include:



  • An additional partner requests access to all orders of a certain type
  • A business rule changes
  • A queried web server will be phased out and replaced with a new one

Changes in this scenario are typically as simple as adding an additional send port to a send port group, adding an additional send port with appropriate subscription filters, changing the URI of a send port, updating a rule in the BRE, et cetera.


 


However, there may be more involved cases of application upgrade.


 


Patching Scenario


 


One such scenario is called the “Patching Scenario” in which existing application binaries in production must be edited and swapped with updated ones.


 


Typical examples include:



  • Patching an orchestration with a code change
  • Changing a schema
  • Updating a map

In this case, the customer scenario must meet the following conditions:



  • System downtime can be scheduled for application upgrade
  • Customer does not have long-running business processes
  • Dehydrated or suspended instances can be quickly resumed and completed, or alternately terminated

If the customer is updating an orchestration, say, and meets these criteria, the following steps might be their typical marching orders.  First, the application binaries will be updated and recompiled with name and version unchanged.  Then, downtime must be scheduled and new instances should be prevented from starting up.  During this period, running instances will have to be stopped and unenlisted, which requires all dehydrated or suspended instances to be either manually resumed and completed, or terminated.  After service instances are in the stopped and unenlisted state, the new application binaries can be deployed.  First, the group’s MgmtDb should be updated by performing a Deploy operation using the overwrite flag.  Second, each and every server in the group must be updated by GAC’ing the changed assembly.  Finally, all BizTalk host instances should be restarted.  At this point, the new orchestrations can be re-enlisted and started, and message flow can be resumed.
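As a rough sketch, the group-level deploy and per-server steps might look like the following from the command line.  The application, assembly, and host names are placeholders, and the exact BTSTask switches should be verified against the documentation for your environment:

   rem Update the management database, overwriting the existing resource
   BTSTask AddResource /ApplicationName:MyApp /Type:System.BizTalk:BizTalkAssembly /Overwrite /Source:C:\Builds\MyApp.Orchestrations.dll /Options:GacOnAdd

   rem On every server in the group, update the GAC
   gacutil /i C:\Builds\MyApp.Orchestrations.dll

   rem Restart the host instances so they load the new assembly
   net stop BTSSvc$BizTalkServerApplication
   net start BTSSvc$BizTalkServerApplication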


 


However, some customers may have more stringent requirements for upgrade.  For example, they may not be able to schedule downtime or may have very long-running instances which cannot be terminated.  In these cases, side-by-side versioning may be required.


 


Side-by-side (SxS) Versioning


 


This scenario, “Side-by-side Versioning”, allows two versions of the same application to be running side-by-side.  The .NET runtime inherently allows for same-named but different versioned assemblies to be deployed and running.  BizTalk also allows for this, although some discretion is advised.


 


BizTalk artifacts like maps are typically chosen by FQSN (fully-qualified strong name) which means the bindings are mindful of the version used.  Making the code change in the map, upping the version number (major and minor builds only), compiling, and deploying the additional assembly to the group (and all machines) would then allow users to simply select the new map for inbound or outbound mapping.  However, calling maps from orchestration may require code changes to the orchestration itself if the map reference is hard-coded.
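For example, a Transform shape's generated XLANG/s pins the map by its fully-qualified type, so the orchestration keeps using the old map until the reference is updated and the orchestration is rebuilt; the names here are hypothetical:

   // Transform shape (XLANG/s) - the map type is hard-coded
   transform (msgInvoice) = MyCompany.Maps.OrderToInvoice (msgOrder);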


 


Making changes to orchestrations can be a bit more involved.  If you have short-lived orchestrations, then the “Patching Scenario”, previously described, may be sufficient.  But if you have long-running orchestrations or cannot terminate existing instances, then side-by-side versioning will be your only alternative.  Typically, the SxS story for orchestrations would go something like this.


 


Mortgage Company X needs to update their orchestration, but has many existing orchestration instances in flight that aren't expected to complete for weeks or months.  They'd like their existing instances to culminate on the live DLLs, but have new instances start up on the rev'd assemblies.  To do this, they begin by having the developer increase the orchestration's version number (major and minor builds only) and make the code changes to the orchestration.  Next, they'll deploy this new assembly to the group and GAC it on all runtime machines.  Then the customer has the option to create new receive ports and locations for the new version or use existing ports.  If the former is chosen, simply binding to the new ports and enlisting/starting the new artifacts will probably be sufficient.  If the latter is chosen, some additional steps apply.  The customer will have to (a) bind the new version of the orchestration to the existing ports, (b) unenlist (but not stop) the old orchestration version, and (c) enlist and start the new orchestration version.  The documentation includes a script which helps you do steps (b) and (c) in one transaction so that messages are not missing subscriptions in between manual clicks.


 


The end result is that future publications will be directed to the new orchestration version and already running orchestration instances will complete on the older version.  The product documentation should be consulted and followed closely for this kind of procedure, or for scenarios that depart from the “norm”, such as orchestration use of role links, direct binding, Specify Now binding, BAM activities, et cetera.


 


Upgrading pipelines has several alternatives as well.  The simplest solution is to choose the newly deployed pipeline version in the send port or receive location.  This will replace the old pipeline with the new one.  However, if true side-by-side functionality is required for backwards-compatibility purposes, then new send ports and receive locations will have to be created and bound with the new pipeline version specified.


 


Finally, updating schemas also has some peculiarities.  Side-by-side versioning is absolutely allowed, but the schema resolution behavior for the XML disassembler should be examined closely (described well in the docs).  In certain cases, customers may want to hard-code references in the DASM properties to specific versions of the schemas to avoid dynamic resolution behavior.  This will allow for more “pure” side-by-side scenarios.
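For instance, the XML disassembler's DocumentSpecNames property can be set (per receive location, via per-instance pipeline configuration) to pin a specific strong-named schema version; the names below are placeholders:

   MyCompany.Schemas.Order, MyCompany.Schemas, Version=2.0.0.0, Culture=neutral, PublicKeyToken=abc123de456f7890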


 


How can MSIs be leveraged for upgrade scenarios?


 


Upgrading applications is typically a deliberate and precise operation in production.  Because of this, following a manual checklist is advised.  However, certain steps may be streamlined by using MSIs.


 


MSIs are a great way to wrap up your application artifacts into a distributable package.  This may help when rolling out updated DLLs to multiple runtime boxes or assist with the group-level deploy.  Precaution should be used when creating the MSIs to exclude all other unchanged resources and bindings from the package.


 


If conducting a “Patching Scenario”, stopping/unenlisting/re-enlisting/starting steps will need to be done manually before and after Importing/Installing the MSI.  Similarly, numerous steps outlined above will have to be performed in the “Side-by-side Versioning Scenario” before and after using the MSI.


 


Still, MSIs can be useful for auditing purposes alone, since they leave a trace of the application’s upgrade installation on the machine in the Add/Remove Programs list.


 


More Information


 


In general, application upgrade is a cerebral activity and any change that you’ll ever make to your production environment should always be well tested and practiced in a pre-production environment.  Having a non-production environment which mirrors the live environment will allow you to dry-run this and other production changes.


 


Needless to say, you should also carefully consult the product documentation before making any changes to your production environment, but this is especially true in the case of application upgrading.  For more information on this topic, be sure to check out the documentation article entitled “Managing Artifacts”.


 


HTH,
Doug Girard


Note: This posting is provided “AS IS” with no warranties, and confers no rights.