Why GitHub?

Nick Heppleston, a fellow BizTalk blogger and
user of my PipelineTesting library, left
a comment
on a recent post asking why I chose to put the library code on GitHub instead
of CodePlex. I think it’s a fair question,
so let me provide some context.

As many of you are probably aware by now, there has been much talk lately about
Distributed Version Control Systems (DVCS) as an alternative to the more traditional,
centralized version control systems that have been common in the past (and still are). DVCS
tools have gained a lot of traction lately, particularly with Open Source projects, because
they really suit the already distributed nature of Open Source development.

For a long time I remained fairly skeptical of DVCS tools. To be honest, I just didn’t
get what the fuss was about, and centralized systems had worked just fine for me. I
use CVS, Subversion and Team Foundation Server on a regular basis, and you can use
all of them successfully with your projects. Obviously, each one has its strengths
and issues, but they are all very usable tools.

However, during the last year I’ve been working on a bunch of different projects where
the workflow that best suited my work style and my requirements started to make using
a centralized source control system harder than it used to be.

This made me realize that for some of the things I do, a centralized version control system
just doesn’t cut it anymore. In other words, I crossed some invisible threshold where
the SCCS stopped being an asset and started becoming a liability. Instead of
source control being a painless, frictionless process, it was becoming something I
dreaded dealing with. And that’s when I finally understood what DVCS was all about.

Why Git?

So at that point I started to look into DVCS tools and play a bit with them. There’s
a good discussion of some of the most important DVCS tools around, but in the end
I settled on Git, using the msysgit installation on my Windows machines.

So far, I haven’t run into any really significant issues when running msysgit;
the core stuff seems pretty solid, at least in my experience. I know there are some
issues with git-svn in current builds, but I haven’t used it yet, so I can’t comment
on that.

I’m still very much a newbie at this, but I’m slowly getting the hang of it, and so
far, I’m really liking it. Some aspects of git I really like are:

  • Speed: It’s pretty fast in most operations, even with large source code files (like
    tool-generated ones).
  • Local branches: I love being able to create and switch between branches quickly and
    easily. Once you realize how easy they are to use, you start taking advantage
    of branching a lot more than on regular, centralized version control systems.
  • Single work-tree: Not having to maintain N copies of your working directory at the
    same time when you’re dealing with N branches is a real plus in many cases. Of course,
    you can choose to do so if you like, but it’s not necessary, as it is with other tools.
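The branching points above can be sketched with a few commands; this is a minimal demo against a throwaway repository (paths and branch names are illustrative):

```shell
set -e
rm -rf /tmp/git-branch-demo && mkdir /tmp/git-branch-demo
cd /tmp/git-branch-demo
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "initial" > file.txt
git add file.txt
git commit -q -m "initial commit"

# Creating and switching to a topic branch is a single, near-instant command...
git checkout -q -b experiment
echo "experimental change" >> file.txt
git commit -q -am "work on the experiment"

# ...and switching back reuses the same working tree: no second copy on disk.
git checkout -q -
cat file.txt
```

Note that after switching back, `file.txt` shows the original contents again, even though the `experiment` branch still carries the change in the very same directory.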

Why PipelineTesting?

I’ve always shared the code of my PipelineTesting library through this website. However,
I was only publishing snapshots of the code, and while that was fine given how few
people use it, it was sometimes a drag. I really did want to share the code more broadly
and make it easier to get to some of the changes I was working on even when I had
not explicitly released a new official version of the library.

Last year I even commented a bit on this topic and asked
for feedback
about what the best place to host the code for some of my projects
might be, but in the end I didn’t make any decision about it.

Why not CodePlex?

CodePlex
is a fine site for publishing and hosting your open source projects. I was skeptical
about it at first, but it really took off and has a number of things going for it.

The greatest strength that CodePlex has is precisely that it’s a Microsoft-technology
oriented site. This means that it is a natural choice both when publishing projects
that explicitly target the MS stack, and when you’re looking for open source projects
based on said technology.

I think that, overall, the CodePlex team has done a great job of keeping the site
running and making sure it became a viable and valuable service to the community (and
Microsoft itself).

The downside of CodePlex is, unfortunately, the technology it is based on: Team Foundation
Server. TFS is a fine, robust, centralized source control tool. But it also has a
few things that manage to take the fun out of using it:

  • The support for working disconnected from the centralized server is just not good
    enough. Sure, it has improved a bit since the initial release, but it is far from
    a friction-less experience.
  • The TFS Client, integrated into Visual Studio. This is supposed to be an asset, but,
    honestly, I don’t always want my source control integrated into my IDE. It can be
    good sometimes, but it can also be very painful.

Just to set the record straight: yes, I am aware of the command-line tools for driving
TFS, and that’s certainly an option. Yes, I’m also aware of SvnBridge, which I haven’t
used myself yet; it is a really good option and addition to CodePlex, but it means
running yet another tool.

Why GitHub?

The
surest way to get proficient at something is to do it. I want to learn more about
DVCS so that I can improve my workflow, and that means using my tool of choice.

For the time being, I’m choosing to stick with git for my personal projects (and some
of my work). Given this choice, GitHub was a natural place to host my public stuff.

There are several aspects of GitHub that I like but, most of all, it’s that it is
very simple overall, easy to get started with, and mostly stays out of my way. I also
find the social aspects of it very intriguing, though naturally I’m not really using
those yet.

Of course, not everything is perfect in GitHub-land. Some will argue that it doesn’t
offer as many features as CodePlex in some aspects (there are no forums, for example),
but that doesn’t bother me at this point, as I don’t really need those for now.

A bigger issue, however, could be that GitHub is not yet a very visible site among
the .NET/BizTalk communities. Heck, I’m pretty sure PipelineTesting is the only BizTalk-related
project on it :-).
I think that anyone looking for my library is probably going to find it through this
weblog first, so I’m not that worried about it, and the BizTalk community itself isn’t
all that large (it has grown enormously, but it’s still small by comparison).

What’s next?

I plan to continue working on PipelineTesting and I have a few features in mind for
the next release. If anyone wants to contribute or has suggestions/bugs, please let
me know about it!

I will continue to offer a copy of the
library for download that includes a snapshot of the source code and a pre-compiled
copy of the library, like I’ve been doing so far. People shouldn’t have to install
git just to get a copy of the library and use it, unless they need something in the
code that’s not yet in an "official" build. Of course, I’m a nice guy, so
if you really really need it, just ask :-).
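As an aside, a source-only snapshot like the one I publish can be produced straight from a git repository with git archive. A minimal sketch, using a throwaway stand-in repository (file names are illustrative):

```shell
set -e
rm -rf /tmp/snapshot-demo && mkdir /tmp/snapshot-demo
cd /tmp/snapshot-demo
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "// library source" > Library.cs
git add Library.cs
git commit -q -m "snapshot contents"

# git archive packages the committed tree without any .git metadata,
# which is exactly what a source-only snapshot download needs.
git archive --format=tar -o snapshot.tar HEAD
tar -tf snapshot.tar
```

The resulting archive contains only the tracked source files, so people who just want the code never have to touch git themselves.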

I also plan to start taking advantage of some GitHub features. In particular, I want
to migrate some of the "documentation" that I’ve written over time as blog
posts to a more appropriate format that’s easier to maintain and to use. For this,
I want to put the GitHub Wiki to use and also add a proper readme file to make it
easier to get started with the library.

Tags: PipelineTesting, git, GitHub, CodePlex, BizTalk

Split into transactions, suspend interchange on error: KB956051

For those of you who deal with trading partners that send EDI in batch mode and can’t fix/resend EDI transactions individually, there was really no way to handle this in the current R2 process, as there was no schema that could be created to consume the XML message generated when you chose the Preserve Interchange – suspend Interchange on Error option.

The issue is that if the interchange was valid, nothing but the EDI send pipeline could consume it.

So Microsoft has created a new processing option, Preserve Interchange – suspend Transaction Sets on Error, which can be downloaded from KB956051.

BizTalk Test Workshop, Cloud Platforms Whitepaper & Jon Flanders in Stockholm

BizTalk Test Workshop
I’ll be running the first public delivery of the BizTalk Test Workshop I have been creating on the 18-19th of September in Stockholm. It’s a two-day course that covers an introduction to test strategies, and then looks at applying those strategies to a BizTalk project. Unit testing, stress testing, integration testing, functional testing and user acceptance testing are covered. NUnit, VSTS Test, BizUnit and Load Gen will be used as the main testing tools, with InfoPath and Excel used to drive the user acceptance tests. The course will also look at using other diagnostics tools to examine the performance and reliability of a BizTalk Server solution. I’ve already taught the course twice before in private deliveries, so hopefully all the issues with the labs and demos should be ironed out by now.
If you are interested in attending, contact Informator in Stockholm who are hosting the course.
The course will be taught in English.
If you are interested in custom deliveries of the course at your location, feel free to contact me via my blog.
A Short Introduction to Cloud Platforms
David Chappell (the west-coast David Chappell) has published quite a good whitepaper on cloud-based platforms, covering the foundations, infrastructure and application services that will make up a platform in the cloud. As with his other whitepapers, it’s worth a read if you are interested in all things cloud-based.
Jon Flanders Presenting in Stockholm
Jon Flanders will be presenting at the BizTalk User Group in Stockholm on September 4th, if you are in the area it will be worth attending, the link for registration is here:

Preserve Order while mapping

There is an issue where the mapping process creates invalid XML.

The input instance looks like this:

<xml> <loopA /> <loopB /> <loopA /> <loopB /> </xml>

However, when using the mapper, you create your output and it ends up looking like this:

<xml> <loopA /> <loopA /> <loopB /> <loopB /> </xml>

To get this to work (R2 only), open up the .btm file and change the following attribute from its default value of No to Yes in the mapsource element:


<mapsource Name="BizTalk Map" BizTalkServerMapperTool_Version="2.0" Version="2" XRange="100" YRange="420" OmitXmlDeclaration="Yes" TreatElementsAsRecords="No" OptimizeValueMapping="No" GenerateDefaultFixedNodes="Yes" PreserveSequenceOrder="No" CopyPIs="No" method="xml" xmlVersion="1.0" IgnoreNamespacesForLinks="Yes">

Here is the new code:

<mapsource Name="BizTalk Map" BizTalkServerMapperTool_Version="2.0" Version="2" XRange="100" YRange="420" OmitXmlDeclaration="Yes" TreatElementsAsRecords="No" OptimizeValueMapping="No" GenerateDefaultFixedNodes="Yes" PreserveSequenceOrder="Yes" CopyPIs="No" method="xml" xmlVersion="1.0" IgnoreNamespacesForLinks="Yes">
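Since the mapper UI doesn’t expose this flag, the edit can even be scripted. A minimal sketch, using a stand-in .btm file (the real file carries many more attributes, and the file name here is illustrative):

```shell
set -e
rm -rf /tmp/btm-demo && mkdir /tmp/btm-demo
cd /tmp/btm-demo

# Stand-in for the real map file; only the attributes relevant here.
cat > OrderMap.btm <<'EOF'
<mapsource Name="BizTalk Map" PreserveSequenceOrder="No" CopyPIs="No">
</mapsource>
EOF

# Flip only the PreserveSequenceOrder attribute, leaving the rest untouched.
sed -i 's/PreserveSequenceOrder="No"/PreserveSequenceOrder="Yes"/' OrderMap.btm
grep PreserveSequenceOrder OrderMap.btm
```

Targeting the exact `attribute="value"` pair keeps the substitution from touching any other No/Yes attributes on the same line.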

PipelineTesting: XML Assembler and E_FAIL

Fellow BizTalk developer Bram Veldhoen was kind enough to send me some suggestions
for a future version of my PipelineTesting library,
as well as a question that could point to a potential bug in the library.

The problem basically revolves around consuming the stream returned by the XML Assembler
component in a send pipeline when testing with the library. What Bram noticed was that
executing the pipeline would seem to work, but trying to read the body part stream
of the output message would fail with a ComException with error code 0x80004005 (E_FAIL).

I was fairly confident this should’ve been working, based on my own use of the library,
but I sat down to test it just to make sure. What I discovered was that indeed this
can happen if the pipeline context for the test is not aware of the schema for the
message being processed by the pipeline.

I added a new test to the library to make sure this was working correctly:


[Test]
public void CanReadXmlAssemblerStream() {
   // The schema must be registered in the pipeline context, here via
   // WithSpec(); SimpleDoc and DocLoader are illustrative stand-ins for
   // a schema type and a test-document loader.
   SendPipelineWrapper pipeline = Pipelines.Xml.Send()
      .WithSpec(typeof(SimpleDoc));

   IBaseMessage input = MessageHelper.CreateFromStream(
      DocLoader.LoadStream("SimpleDoc.xml"));

   IBaseMessage output = pipeline.Execute(input);

   // doc should load fine; reading the body part stream is what actually
   // drives the assembler’s processing.
   XmlDocument doc = new XmlDocument();
   doc.Load(output.BodyPart.Data);

   XmlNodeList fields = doc.SelectNodes("//*[local-name()='Field3']");
   Assert.Greater(fields.Count, 0);
}


There are a few things to keep in mind about this issue:

  1. If you’re using the XML Assembler, make sure your pipeline context has all the necessary
    schemas. There are three ways you can do this, depending on how you are creating the
    pipeline:

    1. If you’re using the original raw API, you can use the AddDocSpec() method
      of the SendPipelineWrapper class.
    2. If you’re using the new, simple API, you can add the schema through the WithSpec() method,
      which is what the test above does.
    3. If you’re using the simple API, but you’re dynamically creating the pipeline, you
      can just add the schemas directly in the XmlAssembler configuration using the WithDocumentSpec() and WithEnvelopeSpec() methods
      (see the XmlAssembler.cs file for details).
  2. Make sure you’re testing the right thing. Sometimes, it’s enough to make sure that
    the pipeline can be executed successfully. Remember, however, that pipelines are streaming
    beasts, so a lot of the work will oftentimes happen just when you read the resulting
    stream, thus causing the processing to happen.

    This is exactly the scenario we’re seeing here today.

The second point is really important but, for some reason, I never put much emphasis
on it when creating the library or when talking about it. I think this is important
enough to warrant doing something about it.

For starters, I’ve committed a few changes to the PipelineTesting repository. Besides
adding the test above, I’ve also added a few ConsumeStream() and ReadString() helper
methods to the MessageHelper class to make it easier to validate that your
components work by simply reading the entire stream from a message. I’ll add a few
other helper methods for this later on, but the idea is to make it so that you can
write less code for your tests.

Tags: BizTalk, PipelineTesting

Working with Fault messages & BizTalk 2006 R2

A while ago I needed to create a fault message for several request/response orchestrations.

When an exception occurs in an orchestration, it will time out without sending a SOAP fault message. At the very least you want to send a message saying that something went wrong. In my case I wanted to send the detailed exception as well.

I found several sources on the Internet but they weren’t all that complete.

My approach works at least for the following scenario:
I have a request/response orchestration, exposed through a WCF receive port. All underlying web services are WCF-hosted services.

The Fault messaging howto

Btw, this howto assumes some basic knowledge of BizTalk 2006 R2 and WCF.

  1. First, your orchestration needs to be long running. This is due to the fact that the send shape is encapsulated by an atomic scope.
  2. Create a scope where all ‘the magic’ happens (see screenshot).
  3. Now create the exception handler.
    You can catch a general Exception or, as in my case, a SoapException.
  4. Now create your request/response port. When that’s done, give the operation a sane name and right-click to create a Fault message.
    You can choose a string as the fault message, or something else. But a different approach can also be very handy.
  5. Create your own custom fault contract within WCF; example code:

    [DataContract(Namespace = "http://mynamesapce/2008/08/Faults", Name = "OperationFault")]
    public sealed class OperationFault
    {
        private string _message = String.Empty;

        /// <summary>
        /// Creates a new, uninitialized instance.
        /// </summary>
        public OperationFault()
        {
        }

        /// <summary>
        /// Creates a new instance.
        /// </summary>
        public OperationFault(string message)
        {
            _message = message;
        }

        /// <summary>
        /// Creates a new instance from an exception.
        /// </summary>
        public OperationFault(Exception error)
        {
            if (error != null)
                _message = error.Message;
        }

        /// <summary>
        /// Gets or sets the message.
        /// </summary>
        [DataMember]
        public string Message
        {
            get { return _message; }
            set { _message = value; }
        }
    }
  6. Create a message within the orchestration and choose the OperationFault class as your message.
  7. Assign the message to the Fault port.
  8. Now comes the part where we take a look at the scope exception handler, shown below:

    This is the send shape with the atomic scope around it:

    If the send did not succeed (e.g. an exception occurred earlier), we need to construct the Fault message at the moment the exception is raised.

    The code for the message assignment looks like this:

    FaultMessage = new MyNamespace.FaultContracts.OperationFault(
        soapException.Message + soapException.StackTrace);

    Btw you can put any kind of message you like.

    The extra decide shape is necessary to let the orchestration/compiler believe that the send only executes if the other one didn’t complete or didn’t start.

    The code for the decide:


  9. Now you’re done with the orchestration part! So, to recap, here is the complete orchestration (sorry, without the connected receive and send shapes).
  10. If you call external web services via WCF from the orchestration, don’t forget to NOT enable this setting:

    Propagate fault message.

    This option is used when you want separate handling of (typed) exceptions from a (WCF) web service. Its only disadvantage is that you can’t handle a generic SoapException quite as easily.

  11. On the consumer side of the BizTalk orchestration you can use the OperationFault class to catch the exception.


I have to thank my former colleague Fedor Haaker; together we implemented this kind of error handling (although he came up with the OperationFault solution).

So big up for Fedor :-)

FrOSCon 2008

On very short notice I became aware of the Free and Open Source Software Conference (FrOSCon) in St. Augustin near Bonn, Germany. Unfortunately, the program neglects open source projects in the .NET/Mono field, but it still looked interesting, so I attended some talks there.
The first day’s keynote speaker was Andrew S. Tanenbaum (his book on Distributed Systems […]