BizTalk 2010 Installation and Configuration – Install and Configure SharePoint Foundation 2010 (Part 6.1)

Previously I explained in detail how to install and configure Windows SharePoint Services using Windows SharePoint Services 3.0 with SP2. As a reminder, BizTalk Server 2010 supports these two versions of WSS: SharePoint Foundation 2010, and Windows SharePoint Services 3.0 with SP2. However, if you wish to use SharePoint Foundation 2010, I suggest reading Steef-Jan Wiggers’ post: […]

UK Connected Systems User Group – Update and next meeting

For those in the UK Connected Systems User Group, the content from our last meeting is in the linked SkyDrive folder below:

We are currently preparing the next event for Tuesday 15th February, to again be held at EMC in London Bridge. We are still formalising the details of the event, but it is now open for registration.

My 15 favorite blog posts of 2010

I’ve taken the time to reflect on the blogs I follow, and thought about the impact they have made on my work over the last year. I’ve also credited posts for being just interesting or cool, and perhaps useful in the future.

I’ve written this post as a tribute to those who took their valuable time to share their thoughts and experience with the rest of us.


How to exploit the Text In Row table option to boost BizTalk Server Performance

by Paolo Salvatori

As I’ve spent much of the last year focusing on BizTalk performance, I found these settings to make a huge difference. Applying the “text in row” table option to all tables storing the actual messages not only improved message throughput but also greatly reduced CPU utilization.
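For reference, the option is applied with sp_tableoption. Here’s a minimal sketch, assuming (as I recall from Paolo’s post) that the Spool and Parts tables of BizTalkMsgBoxDb are the ones holding message bodies – check his article for the exact tables and limit value before applying:

```sql
-- Hypothetical sketch: enable "text in row" on the MessageBox tables
-- that store message bodies. Table names and the 6000-byte limit are
-- assumptions; verify against Paolo's post before applying.
USE BizTalkMsgBoxDb;
EXEC sp_tableoption N'Spool', 'text in row', '6000';
EXEC sp_tableoption N'Parts', 'text in row', '6000';
```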


Creating a bootable VHD the easy way

by Hugo Häggmark

Creating a bootable virtual hard drive is not as easy as it’s made out to be! I struggled quite a bit, and as I was complaining about it in the office one day, Hugo said: “I’ve written a blog post about it”. These posts are great, and it really is the EASY way!


How To Boost Message Transformations Using the XslCompiledTransform – Series

by Paolo Salvatori

Paolo Salvatori’s blog should be in every BizTalker’s feed subscription. I’m amazed by the detail and effort of his research. In this post he shows how to use the XslCompiledTransform class instead of the traditional XslTransform, which is used by the Transformation shape in orchestrations. I found there to be an issue with memory consumption that needs to be sorted out. Nevertheless, it’s a very interesting concept, and a great series of posts.
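As a rough illustration of the idea (not Paolo’s actual code), compiling a map’s stylesheet once and reusing the compiled transform looks something like this; the file paths are placeholders:

```csharp
// Minimal sketch: load and compile a stylesheet once, then reuse the
// compiled transform for every message. Paths are hypothetical.
using System.Xml;
using System.Xml.Xsl;

class TransformExample
{
    // XslCompiledTransform is thread-safe for Transform() once loaded,
    // so a single instance can be cached and shared across calls.
    private static readonly XslCompiledTransform Transform = LoadTransform();

    private static XslCompiledTransform LoadTransform()
    {
        var xslt = new XslCompiledTransform();
        xslt.Load(@"C:\Maps\MyMap.xslt");   // placeholder path
        return xslt;
    }

    public static void Run(string inputPath, string outputPath)
    {
        using (var reader = XmlReader.Create(inputPath))
        using (var writer = XmlWriter.Create(outputPath))
        {
            Transform.Transform(reader, writer);
        }
    }
}
```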

WCF-SQL Adapter Table Operations

by Steef-Jan Wiggers

As I was working with the WCF-SQL adapter a couple of months ago, I was really happy to find Steef-Jan’s post on the subject. The post shows how to use all the table operations using the “new” SQL adapter. Thanks Steef-Jan, and again, congrats on the MVP award.

Four Questions – Series

by Richard Seroter 

Every time my feed reader notifies me of a new “Four Questions” post, I instantly have to read it. It’s always interesting to hear what other people within the community and Microsoft are up to, and Richard’s questions are always relevant to what is currently happening in the field. And of course it always ends with the last question, where Richard’s humor and sarcasm come to good use.

I was especially intrigued by the last post, where he interviews one of the true heroes of the BizTalk community – Ben Cline.

Mapping in BizTalk 2010: My favorite new features – Series

by Randal van Splunteren

Randal, also a newly minted MVP, was kind enough to share the features of the new mapper that shipped with BizTalk 2010. If you’re a BizTalk dev, I really recommend you read these posts, as you’re likely to find some features you didn’t already know about.

BizTalk 2010: Musing of the ’new’ SharePoint 2010 WS Adapter

by Mick Badran

As I was putting together a SharePoint / BizTalk lab using the new SharePoint adapter, I was happy to have found this great post. It saved me lots of time – thanks Mick!

ShareTalk Integration (SharePoint/BizTalk) series

by Kent Ware

Yet another series of really good posts if you plan to integrate with SharePoint. Despite being a compulsive gambler from Canada, Kent is a great guy who shares a lot of BizTalk experience through his blog. Rumor has it he is also writing a book. And besides his interest in BizTalk, he also shares his thoughts on Windows Phone 7 through his new WP7 blog.

BizTalk Adapter Pack 2.0/SAP Adapter series

by Kent Ware

I’m not currently working with SAP, even though I know I’ll probably be in the future. By then, I’m sure I’ll thank Kent again for taking the time to write these posts.

Benchmark your BizTalk Server (Part 3)

by Ewan Fairweather

If you’re a true hard-core BizTalker and think performance is important, then this is a “must read” article. Take your time and read it a couple of times, as it’s very detailed. Ewan has also been kind enough to share his script files to help you identify bottlenecks.

Large Message Transfer with WCF-Adapters – Series

by Paolo Salvatori

I find it funny that Paolo sometimes makes an effort to split a topic into two parts. Every one of those posts could easily be split into five parts, and together they make up a small book. Paolo does not write posts – he writes essays!

Transferring large messages is a common challenge when using BizTalk. This post covers the subject in detail, and shows how to effectively minimize the use of resources while still transferring large files through BizTalk. The content of these posts was also demonstrated by Ewan Fairweather during his talk at the Swedish BizTalk User Group.

XmlDisassemble in a passthrough pipeline?

by Johan Hedberg

The discovery that BizTalk adds an XML disassemble stage to a passthrough pipeline was an interesting fact, to say the least. Frightening might be a better choice of words. Johan explains under which circumstances this happens. “Funny” enough, I’ve come across this issue twice this year.

Modernizing BizTalk Server BAM with PowerPivot

by Jesus Rodriguez

I couldn’t agree more – BAM is one of my favorite features of BizTalk. With the release of SQL Server 2008 R2 came PowerPivot for SharePoint and Excel, and even though I haven’t gotten around to testing PowerPivot yet, I really find this interesting, and I’m sure I’ll get back to the post later on.

Less Virtual, More Machine – Windows 7 and the magic of Boot to VHD

by Scott Hanselman

With Windows 7 came the “boot to VHD” feature. I generally don’t want to install anything but the Office suite on my laptop. This is because I tend to try out a lot of CTP releases, along with the fact that I work with different customers, where it’s good practice to separate the environments. I solve that by always working in virtual environments. Windows Virtual PC only supports 32-bit guests, and even if I used VMware or VirtualBox, I would not fully utilize the capacity of my laptop.

I’ve come across many high-level demos showing the “boot to VHD” feature in action. But it’s not as easy as it seems. Every time I need to add a VM to my boot menu, I return to this post.
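For the curious, the gist of the procedure is a handful of bcdedit commands run from an elevated prompt. This is a sketch from memory, so treat the exact values as placeholders and follow Scott’s post for the real steps:

```bat
:: Sketch of the boot-to-VHD steps. The VHD path is a placeholder, and
:: {guid} must be replaced with the GUID printed by the /copy command.
bcdedit /copy {current} /d "Boot from VHD"
bcdedit /set {guid} device vhd=[C:]\VHDs\Win7Dev.vhd
bcdedit /set {guid} osdevice vhd=[C:]\VHDs\Win7Dev.vhd
bcdedit /set {guid} detecthal on
```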

Nesting Scope shapes more than 19 levels

By Jan Eliasen

Knowledge of this limitation is not likely to come to any good use (for anyone, I hope). But the effort of finding this flaw cannot go unnoticed! – Thank you Jan.

Configuring ASP.NET Session State to use AppFabric Caching

If you are using the information on this MSDN page to configure the ASP.NET Session State Provider to use Windows Server AppFabric Caching, you might run into some problems, as the steps are less than complete. First, make sure you have Windows Server AppFabric Caching set up and working, and add the configSections and dataCacheClient elements as specified. If you then add the sessionState element as specified, you will receive the following error:

Parser Error Message: Could not load type 'Microsoft.ApplicationServer.Caching.DataCacheSessionStoreProvider'

The reason is that the assembly information is not specified, so the .NET runtime doesn’t know where to find the DataCacheSessionStoreProvider class. The solution is simple: either add an assemblies section to the web.config, or specify the assembly on the sessionState provider like this:

<sessionState mode="Custom"
      type="Microsoft.ApplicationServer.Caching.DataCacheSessionStoreProvider, Microsoft.ApplicationServer.Caching.Client, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />

We are not quite done though because if we run the application now we get a new error:

Parser Error Message: ErrorCode<ERRCA0009>:SubStatus<ES0001>:Cache referred to does not exist. Contact administrator or use the Cache administration tool to create a Cache.

This is caused by the cache name of “NamedCache1”. Changing this to the default cache name of “default” is one way to fix the problem. The other is to create the cache named “NamedCache1” using the following PowerShell command:

New-Cache NamedCache1

Remember to run the “Caching Administration Windows PowerShell” console as administrator with elevated privileges; otherwise you won’t be able to manage the Windows Server AppFabric cache, and you will receive the following error:

New-Cache : ErrorCode<ERRCAdmin002>:SubStatus<ES0001>:The operation has timed out and the result is unknown. Stop the cluster and run Export-CacheClusterConfig followed by Import-CacheClusterConfig to make the cluster configuration consistent.

At line:1 char:10
+ New-Cache <<<<  NamedCache1
    + CategoryInfo          : NotSpecified: (:) [New-Cache], DataCacheExceptio
    + FullyQualifiedErrorId : ERRCAdmin002,Microsoft.ApplicationServer.Caching

See here for a complete list of PowerShell commands used to manage the Windows Server AppFabric Caching environment.
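A few of the cmdlets I find myself using, from memory – check the linked list for the full set and exact parameters:

```powershell
# Connect the session to the cluster configuration, then inspect and manage it.
Use-CacheCluster                 # point the session at the cluster config
Get-CacheHost                    # list cache hosts and their status
Get-Cache                        # list the named caches in the cluster
New-Cache NamedCache1            # create a named cache
Stop-CacheCluster                # stop all cache hosts
Start-CacheCluster               # start them again
```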



Running ActiveMQ on Windows Azure: One Man’s Search for a Full Featured Messaging Broker Option in the Cloud


I’ve always enjoyed building systems that utilized flexible messaging platforms. Before joining Microsoft, I was a member of a small ISV team that built a commercial .NET based ESB product (a rare creature in and of itself) that was primarily a WCF based messaging system. We had a number of capabilities that were useful and somewhat unique in their execution but one of my favorites was the ability to provide transactional messaging across multiple platforms by leveraging ActiveMQ in our adapter model.

I’ve always admired the Apache folks. More often than not they seem to just get it (unless you’re talking about docs … but hey, their stuff is free, and even us ginormous mega power companies screw the pooch on docs sometimes). And ActiveMQ is no exception. It has pretty much all of the capabilities you’d expect from a full featured broker, including:

The list goes on. But you get it. It’s an actual message broker with tons of features, the most important of which is its .NET support via the .NET Messaging API.

There’s only one problem. ActiveMQ, like pretty much every other messaging broker, was developed pre-cloud. Granted, some of the AMQP brokers such as RabbitMQ have caught on with folks who make all of their money “off prem”, but none of the brokers I’ve worked with are what I would call cloud friendly. Maybe there’s one out there, and if there is, please add a comment because I’d love to check it out.

So where does that leave us? Well, if you’re just trying to get some things done in Windows Azure between roles and instances and you don’t need a full featured broker, stop right now and run to go see my colleague Valery Mizonov’s incredible blog about rolling your own. It’s the best thing I’ve seen on the subject, and it was derived from real folks working on real challenges while deploying on Windows Azure. Valery is a genius though, and he was built based on the design of the Cyberdyne Systems Model 101, so it’s not a surprise that he was able to achieve such a masterpiece of engineering and bloggery.

If on the other hand you’re a simpler life form like me, and feel a certain, shall we say, irritation that 58 billion in revenue and 14 billion in profit occurred without something more “out of the box” in the cloud for us messaging lovers, then read on. But first, a warning:


Now that the warning is out of the way, let’s begin the journey…


Step 1: Figure Out How To Develop

I’d seen the posts on running Tomcat in Windows Azure and downloaded the Tomcat Accelerator. It gave me hope that what I wanted to do was possible. For what I wanted to do, however, I wanted a more “integrated” experience than all of the cmd files in that project, not to mention wanting to avoid having to build my own.

So, I decided to use the technique I had used long ago as a Java Developer dealing with classpaths and a bazillion jar files: sheer brute force.




Yes, that really is a JRE and ActiveMQ embedded, with the files set to be copied into the project. It took a few monotonous minutes to set up, but it was well worth it in the long run.

Step 2: Figure Out How to Deal With Local Storage and Dynamic IPs

Anyone who has spent time with Azure knows they need to deal with the rules of Local Storage and the way endpoints are assigned. When you’re starting a JVM and running something like ActiveMQ, which has a lot of startup options, things get even more harrowing.

My first goal was then figuring out how to get all the things I needed over to Local Storage, and how to edit the config files so that when ActiveMQ started up, the dynamically assigned ports were used.

This problem is ugly no matter how you slice it (the tomcat example I downloaded had robocopy in a .bat) but after I was done I didn’t feel too bad.

private void ConfigureDirectories()
{
    workingDir = Environment.GetEnvironmentVariable("RoleRoot") + @"\approot";
    javaHome = workingDir + @"\jre6";
    var activeMqFiles = workingDir + @"\activemq";
    runtimeDir = RoleEnvironment.GetLocalResource("Storage").RootPath + "activemq";
    Microsoft.VisualBasic.FileIO.FileSystem.CopyDirectory(activeMqFiles, runtimeDir, true);
}


The awesomeness of the CLR shows up here as I leverage VB.NET for a little recursive copy assist.

Once everything was copied over, it was just a matter of setting up the environment and writing the IPs to the respective config files:

private void ConfigureEnvironment(Process proc)
{
    proc.StartInfo.EnvironmentVariables.Add("JAVA_HOME", javaHome);
    proc.StartInfo.EnvironmentVariables.Add("ACTIVEMQ_HOME", runtimeDir);
}


The rest of the setup is just XPath boilerplate and Process setup, so I won’t go into it here. One note: as of SDK 1.3 you have more options re: pinning the IP.
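Since the XPath part is skipped above, here’s a rough sketch of what patching one port in conf/activemq.xml might look like. The endpoint name “OpenWire” and the XPath are assumptions about the csdef and the stock broker config, not code from the original solution:

```csharp
// Hypothetical sketch: rewrite the transportConnector URI in activemq.xml
// with the dynamically assigned endpoint. "OpenWire" is an assumed
// internal endpoint name declared in ServiceDefinition.csdef.
using System.Xml;
using Microsoft.WindowsAzure.ServiceRuntime;

static void PatchBrokerPort(string runtimeDir)
{
    var ep = RoleEnvironment.CurrentRoleInstance
        .InstanceEndpoints["OpenWire"].IPEndpoint;

    var configPath = System.IO.Path.Combine(runtimeDir, @"conf\activemq.xml");
    var doc = new XmlDocument();
    doc.Load(configPath);

    var nsmgr = new XmlNamespaceManager(doc.NameTable);
    nsmgr.AddNamespace("amq", "http://activemq.apache.org/schema/core");

    // The stock config ships with a transportConnector named "openwire".
    var node = doc.SelectSingleNode(
        "//amq:transportConnector[@name='openwire']", nsmgr);
    node.Attributes["uri"].Value =
        string.Format("tcp://{0}:{1}", ep.Address, ep.Port);

    doc.Save(configPath);
}
```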

The important thing is after this part was done I had a broker firing up in dev fabric!

Step 3: Writing the Demo Code

This part was the easiest, because the Apache guys have pretty much written everything hard – which, at the end of the day, was the reason for trying this in the first place. I also wanted to use the durable and guaranteed features of ActiveMQ, because there are already plenty of non-durable, non-guaranteed, multiple-delivery options available without going to the pain of installing a 3rd party message broker.

My producer used the following simple driver logic and ran in one instance:

using (var session = con.CreateSession(AcknowledgementMode.Transactional))
{
    ITopic topic = session.GetTopic("AzureHostedTest");
    var producer = session.CreateProducer(topic);
    var start = DateTime.Now;
    for (int i = 0; i < messageCount; i++)
    {
        ITextMessage msg = producer.CreateTextMessage("Hello world " + msgNumber++);
        producer.Send(msg);
    }
    session.Commit();
}
I just used the NMS libraries directly and didn’t use any of the Spring overlays.

My consumer logic was similarly simple:

var topic = new ActiveMQTopic("AzureHostedTest");
var consumer = session.CreateDurableConsumer(topic, con.ClientId, null, false);
consumer.Listener += (message) =>
{
    ITextMessage msg = message as ITextMessage;
    Trace.WriteLine(message.NMSMessageId + " " + msg.Text);
    session.Commit();
};



Again, what I was aiming for was the durable options, not perf. Like everything else with the product, you have a ton of options in terms of perf tuning.

Much of the logic in the solution was concerned with attaining a connection. In a cloud world you need to be very concerned with things spinning up out of order or disappearing. 

OK. What are the odds of this running at this point? Pretty good actually, provided you wanted to use Local Storage as your durability option.

But we all know that’s just crazy. Nope you need something enterprise. Something serious. You need SQL Azure!

Step 4: Fighting Through the Topology Deployment Options

Once again ActiveMQ shines here in the features and flexibility department. However, this was also the part of the project where I knew the path that led me from being a pure open source guy to working for the evil empire was not out of naked greed or any malicious desire, but was instead the need to get something @#$#$%#$$%^$ done! We have a lot of great products that let you actually do stuff. Sometimes the other nerd ecosystems out there seem to place productivity a distant second to coolness or “elegance” or their consultant-friendly version of “enterprise”, which results in a miserable experience for anyone outside of the tiny circle of wonder nerds who grok/exploit their systems and style.

Oh, it looks like an innocuous enough exercise if you read this or this, except for the pesky fact that if you use those pages as a guide with the latest ActiveMQ bits and the latest SQL Server JDBC drivers, your chances of it working are nothing. Thus, I put myself into the mindset I used to live in 24/7 back in the old days, and started to dig my way through bug lists and blogs and forums and experimentation until finally – eureka! Or, as I tend to think of it now, for #$%#$%#$ sake!

Before sharing the tricks and tidbits, however, let’s take a short break to describe the deployment I planned on. The topology I chose to deploy is what ActiveMQ calls JDBC Master Slave. This topology uses a sophisticated algorithm that basically says: if I get the lock first, then I’m the Master – otherwise, you are. I know, I know. I should back off the rocket science a little bit here and make things a little easier to understand by means of a salient and powerful diagram!

My favorite part of this choice was this quote:

By default if you use the &lt;jdbcPersistenceAdapter/&gt; to avoid the high performance journal you will be using JDBC Master Slave by default. You just need to run more than one broker and point the client side URIs to them to get master/slave. This works because they both try and acquire an exclusive lock on a shared table in the database and only one will succeed.
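The client side of that arrangement is just ActiveMQ’s failover transport; the connection URI ends up looking something along these lines (broker addresses are placeholders):

```
failover:(tcp://broker0.example.com:61616,tcp://broker1.example.com:61616)?randomize=false
```

The client connects to whichever broker holds the lock, and transparently reconnects to the other one when the master goes away.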

Heck, it practically springs off the page as the right option. Now, before I go on, I would be remiss if I didn’t mention the “high performance journal” referred to in the quote. To take advantage of journals in Windows Azure, however, you’re likely going to have to use a complex topology that works around the limitations of Windows Azure Drives and multiple instances. That’s the kind of thing done in our team’s Design Wins program, though, and is beyond the scope of this effort.

Besides, doing it this way you can use SQL Azure (of course, in true open source fashion, you could combine journals and JDBC… all you have to do is get through the docs etc.).

SQL Azure is an oasis of comfort and predictability in an otherwise turbulent world. It’s your solid and reliable friend in a cloud universe where we’re told you might as well throw state out the window, abandon the thought of any guarantees, and make everything idempotent, because… well, things are just going to happen in a way you don’t want them to, so you need to be prepared. Granted, there are some folks who have said people developing distributed systems should have been doing that all along, even when they used on-premises technologies, but just because they were right doesn’t mean we should have listened to them!

Now onto the secrets. After downloading the latest JDBC drivers, I promptly waded through the XML docs and arrived at the following for my JDBC-related settings:

<jdbcPersistenceAdapter createTablesOnStartup="false" dataDirectory="${activemq.base}/data" dataSource="#mssql-ds"/>

<bean id="mssql-ds" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="driverClassName" value=""/>
    <property name="url" value="jdbc:sqlserver://[replace];databaseName=ActiveMQ;"/>
    <property name="username" value="[replace]@[replace]"/>
    <property name="password" value="[replace]"/>
    <property name="maxActive" value="200"/>
    <property name="poolPreparedStatements" value="true"/>
</bean>
I set createTablesOnStartup to true initially, then false after the tables were created, which seemed logical. Once you get past the unintuitive choice of class for the datasource, that’s not so bad. Unfortunately, when you start ActiveMQ, that won’t work, because you’ll run into a bug similar to this one – with the added fun that the recommendations on the page don’t work for the 5.4.2 broker.

The answer to the issue is hinted at in this bug report. I’ll leave finding the hint to you, gentle reader, and skip right to its actual implementation, because the hint will not help in isolation. You would need to carefully read the resulting stack trace after applying the hint, and pair it with the knowledge gleaned, to fully get the answer.

Here it is in a nutshell, courtesy of 7-Zip:






See those files that are named slightly differently from the highlighted ones, but have the exact same content inside them? That’s the key. All you have to do is copy one of the files you need to base your new file on out of the jar, rename it using a slightly different convention, then stuff the renamed file back into the jar, and away you go! What could be more natural?


Step 5: Deploy and Savor the Payoff

Once all of the above was figured out, all that was left was to grow a tree from an acorn while I waited for the project to upload to Windows Azure. Once it landed, I monitored things with SSMS and a simple query.
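I don’t have the exact query from the demo, but with the JDBC store the default message table is ACTIVEMQ_MSGS, so something along these lines is enough to watch messages flow in and out:

```sql
-- Watch the JDBC message store; ACTIVEMQ_MSGS is the default table name
-- created by createTablesOnStartup.
SELECT COUNT(*) AS PendingMessages FROM ACTIVEMQ_MSGS;
```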

Then I got real brave and decided to cycle instance 0. Sure enough, the pause in activity told me it was the master. When things picked up again after a few seconds, I knew that the scruffy demo code had actually worked with failover. After that, I forget, because of the tears and the bourbon.

Here’s the code (be warned, it’s fairly large because of the embedding).


Special thanks to Steve Marx, Paolo Salvatori and James Podgorski for their feedback and review.