MIX 09

Two weeks ago we held our MIX conference in Las Vegas. MIX is my favorite conference of the year – it nicely integrates development and design topics in a single event, and is usually accompanied by some pretty cool product announcements.

I gave a first day MIX keynote again this year, and in it I talked about and announced a bunch of new Microsoft web development products.  These included:

My keynote also included a ton of demos and highlighted a bunch of great customers, including StackOverflow, Netflix, NBC, Bondi Publishing, and KEXP.

Click here to watch the day one MIX keynote online.  Bill Buxton led off the keynote with a great talk about user experience for 20 minutes – I then talked for an hour and 50 minutes after him.

You can also watch all the breakout sessions from MIX online for free here (Greg Duncan has an easy to navigate list of them here as well).

I’ll be doing more in-depth blog posts in the days ahead on many of the technologies we introduced/announced and all the cool things you can do with them. 

Hope this helps,


Updated “How to maintain and troubleshoot BizTalk Server databases” article

The article on Microsoft support about maintaining and troubleshooting BizTalk Server databases has been reviewed and updated. It now even applies to BizTalk 2009 databases (although I’m not sure they’ve added much about 2009 there). It’s definitely worth checking out if you are involved in maintaining and supporting BizTalk Server databases.
The support page link […]

Preserving white space in BizTalk map

Hi all

Alister Whitford had a question today on the online forums about preserving white space in a map. He thought that the functionality had changed between BizTalk 2006 and BizTalk 2006 R2. He has done a great job looking into it, and it appears he is right. You can check out the thread here: http://social.msdn.microsoft.com/Forums/en-US/biztalkgeneral/thread/7dd28a9b-16b5-4c0e-90db-843caf4689ee where he also shows how not to preserve white space in R2 and thus get the same functionality as in 2006 (non-R2).

Hope this helps someone


“Weird” subscription when dealing with no subscribers found

Hi all

Disclaimer: I do NOT encourage the usage of the information in this blog post. The post is merely about a silly experiment, the results thereof, and a conclusion on it.

So, this friend of mine (and former colleague) mentions every now and then that he isn’t all that sure that “No subscribers found” should be an error; maybe it should rather be a warning. His main argument is that if I have two subscribers for something, say all incoming orders are put into an archive file drop and also sent to the ERP system, and the send port that sends to the ERP system is unenlisted, then no errors will occur in BizTalk, but from a business point of view the system is definitely not working. So the fact that you don’t get the error that indicates something is wrong with routing is not actually very useful, because parts of the system may be down after all.

Anyway, we were discussing what to do about this in case you just don’t want that error to occur if no subscribers were found. We came up with two options:

  1. Add a send port that uses Tomas Restrepo's /dev/null adapter. You can find it at http://winterdom.com/dev/bts/index.html (look for “BizTalk 2006 R2 Null Send”). Using this adapter in the send port will cause everything going through the port to magically disappear.
  2. Mostly for fun, we came up with the idea of having an orchestration that only has one receive shape. This receive shape should receive a message of type System.Xml.XmlDocument, since this will let the orchestration receive any message type. Also, it would have to be a direct bound port, so the orchestration would get ALL messages that are published to the MessageBox, and we would never get the “No subscribers found” error. Now, naturally, this solution is extremely silly, since we would fire up an orchestration for every published message. But we started thinking about what the subscription would look like.

The rest of this post explores item 2 above to find out what the subscription would look like.

To do this, I created four scenarios – just to explain it to you.

The four scenarios are:

  1. An orchestration that receives a message of type ReceiveSchema.xsd and is linked to
    a “Specify Later” port. This is the normal and widely used scenario.
  2. An orchestration that receives a message of type System.Xml.XmlDocument from a “Specify
    Later” port. The common way of receiving binary files or any file without caring about
    what files they are.
  3. An orchestration that receives a message of type ReceiveSchema.xsd and is linked to
    a direct bound port. This is the common way to receive ALL published orders, no matter
    what receive port or orchestration they were published from.
  4. An orchestration that receives a message of type System.Xml.XmlDocument and is linked
    to a direct bound port. This is not something I have ever seen used, but this is what
    I want to find out about 🙂

So, to sum up the subscriptions:

Scenario 1
Subscription: http://schemas.microsoft.com/BizTalk/2003/system-properties.ReceivePortID == {C464C9C6-F4BB-4ADF-9322-B2E89E6C8885} And http://schemas.microsoft.com/BizTalk/2003/system-properties.MessageType ==
Description: This is the most common subscription. It consists of both a ReceivePortID (because the logical port is bound to a physical port) and the message type (because I am using a strongly typed message).

Scenario 2
Subscription: http://schemas.microsoft.com/BizTalk/2003/system-properties.ReceivePortID ==
Description: This subscription is partly like the first one. The ReceivePortID part is the same, but no message type is specified. This is because I am using System.Xml.XmlDocument as the message type, and this is just a “catch all” message type.

Scenario 3
Subscription: http://schemas.microsoft.com/BizTalk/2003/system-properties.MessageType ==
Description: This subscription has the part of the first subscription that was missing from the second one, and it doesn’t have the part that was actually in the second subscription. This is because I am now using a direct bound port, and therefore the port ID becomes irrelevant in the subscription. I am using a strongly typed message, though, so the message type is relevant.

Scenario 4
Subscription: (empty)
Description: Surprised? A completely empty subscription. Kind of makes sense when you think about it, since we are using a direct bound port, so the port ID is irrelevant, and we are using an untyped message, making the message type irrelevant.

Now then, as I wrote, it makes sense, but it wasn’t what I was expecting, actually. With this empty subscription, I STILL get the “No subscribers found” error when I pick up a message and publish it into the MessageBox.

So instead of doing this, I started thinking about what else to do. I created a receive port with a receive location and let a message go through it and get suspended. Looking at the details of the context of the message that was suspended, I got this:


So I need a subscription that is not empty, but that will make sure my orchestration gets EVERYTHING that is published into the MessageBox. This is done by setting the filter on the receive shape of my orchestration in scenario 4. The filter will have to include one of the above properties that is promoted. But looking at them, I really don’t expect a message created inside an orchestration and then sent to the MessageBox to have any of those properties set. So I decided to create a small orchestration that will simply send a message to the MessageBox. The context of the message published to the MessageBox looks like this:


As you can see, no overlap at all.

So, as I see it, a filter like “BTS.ReceivePortID exists OR BTS.Operation exists”
should do the trick. Now, this subscription works in my small example, but I cannot
guarantee it will work for all scenarios. I can’t think of an example right now where
either the ReceivePortID or the Operation doesn’t exist, but there might be examples.
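To make the proposed filter concrete, here is a tiny JavaScript sketch (illustrative only: the property names mirror BizTalk context properties, but the evaluation logic is a simplification, not the actual MessageBox engine, and the message contexts are hypothetical):

```javascript
// Toy evaluator for the filter "BTS.ReceivePortID exists OR BTS.Operation exists".
function matchesFilter(context) {
  return "BTS.ReceivePortID" in context || "BTS.Operation" in context;
}

// A message picked up through a receive port has a ReceivePortID promoted:
var fromReceivePort = { "BTS.ReceivePortID": "{C464C9C6-F4BB-4ADF-9322-B2E89E6C8885}" };
// A message published by an orchestration has an Operation promoted:
var fromOrchestration = { "BTS.Operation": "Operation_1" };
// A message carrying neither property would slip past the filter:
var bare = {};

console.log(matchesFilter(fromReceivePort));   // true
console.log(matchesFilter(fromOrchestration)); // true
console.log(matchesFilter(bare));              // false
```

The third case is exactly the uncertainty: if any message can be published with neither property promoted, the filter misses it.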

So basically, the whole idea of having an orchestration take in ALL published messages to avoid errors about no subscribers is REALLY silly and should not be implemented. And if you choose to do it anyway, please remember that the above filter isn’t guaranteed to work in all scenarios; I was just playing around 🙂

Not sure this will ever help anyone, but there it goes 🙂


My wish list for next version of BizTalk

The 2009 version of BizTalk Server is soon to be released, but we have been asked to give feedback to the product team on what features we’d like to see in future releases. My plan was to focus on smaller things that I think would make a big difference (although I let myself go a bit further at the end).

If you think I’ve missed something, please let me know and I’ll make sure to pass it forward.


  • In the 2009 version, as the BizTalk projects became C# projects, we were given better support for MSBuild and TFS Build. The possibility to build to and on a server that does not have Visual Studio installed is really great from a deployment perspective. However, since we have not been given any tooling for the actual deployment/undeployment, it requires lots of work to make it useful. I'd really like MS to either create a supported Deployment API or make the Microsoft.BizTalk.Deployment library more "public" by documenting it.
  • The binding files are one of the key components of automated deployment. First, I'd like you to make use of the Application property in the BindingInfo class. The problem today is that when you create your custom tasks for deployment/undeployment, you need to store the information about the Application and its references somewhere else, since it's not to be found in the binding files.
  • Binding files are altogether a very difficult thing to manage across different environments. A tool where we could edit and manage our binding files would increase our productivity immensely.

Low Latency

  • There has been a lot of talk about this. I don’t really think of it as a big issue anymore, as Dublin is on its way. However, one small thing that would REALLY make a difference would be to move the polling interval setting from the group to the host.


  • In 2006 we were given the possibility of configuring pipelines at runtime. This was a really good thing, as we don’t need to have hundreds of almost identical pipelines. Although I'd really have liked this feature to be implemented with a “real” property box that supported UI editors.
  • To be honest, I don't really like the notion of pipelines at all, and would rather see them as default settings of components, which I could select and use as-is or alter at runtime by adding or removing components.


  • Use table partitioning rather than the views and tables we have today. This way we could make better use of indexes and boost performance. This feature was implemented in SQL Server 2005 – catch up!
  • BAM administration in the admin console
  • Simple BAM reports in the admin console


  • Better schema support
  • Message Archiving

Adapter improvements

FILE Adapter

  • Support for regular expressions
  • Support for FTPS

SMTP Adapter

  • Remove the mail address from the URI, so that we can make better use of the adapter in a messaging scenario.


  • Enforcing a Tracking host
  • Failed message routing for orchestrations
  • Code windows for functoids and the expression should be made sizeable
  • Project templates for pipeline components and Functoids.
  • Improved visualization for mappings

Future features

  • Simulation and Orchestration debugging from VS
  • Self service portal. Almost 90% of all issues that come to our support are about -"Where is my f***** message?". Providing a BAM driven portal where these rude "customers" can find this information themselves would greatly boost the productivity of our dev team.

Email is BACK

I must have changed my password to my email server a LONG time ago, because it has been ages since I have been getting emails from my blog.

I finally took time to look into this and have reconfigured it to send me emails again.

If there are any unanswered emails out there, please let me know and I will answer them.

Sorry, as I do really want to hear from you!

Walkthrough: Composite Operations with the new WCF-based SQL Adapter

The new SQL Adapter in the WCF Adapter Pack 2.0 supports composite operations, that is, performing multiple SQL operations in response to a single input message. The purpose of this blog post is to provide a walkthrough of how this works. I am using the public beta of BizTalk Server 2009, the public beta of the WCF Adapter Pack 2.0, and the released version of SQL Server 2008. See below for an important caveat, as well as a link at the end of the post to my test solution.

To start, as with most things in BizTalk’s contract-first world, we need a schema. In order to do this, choose “add generated items” from a BizTalk project in Visual Studio. Then, choose “Consume Adapter Service”. If you don’t see that Visual Studio template, then you haven’t installed the Adapter Pack, as that’s where it comes from.


Next, select the sqlBinding, specify a server, and press “Configure”. Set the client credential type to Windows (assuming appropriate SQL login rights), and then on the URI tab, specify the server and database to use:


After doing this, press “Connect”, and the metadata will be populated.

For the purpose of this walkthrough, I have created 2 stored procedures: the first one inserts a record into a table, the second returns all rows in that table. Those are shown in the UI below.

Note that in the category we have “Procedures” and “Strongly-Typed Procedures”. The distinction is that “Procedures” will create un-typed schemas, whereas “Strongly-Typed Procedures” will generate schemas that you can work with inside BizTalk for mapping, promoting properties, etc.


The “Filename Prefix” will be used as a prefix for all the generated schemas.

After that was configured, I clicked OK and all the schemas were generated for me.

The next step is to create a composite schema that will define the message you send to the adapter. I’m not quite sure why this one wasn’t generated for me; it’d be nice (hint hint), but it’s trivial to do.

How I did this for the walkthrough:

  • create a new schema
  • rename the root to SQLMsg (or whatever you like, this is unimportant)
  • add a sibling record called SQLMsgResponse (this name does matter, it is the name of the request, with “Response” appended)
  • add two child records under SQLMsg, and another two under SQLMsgResponse (names don’t matter, they’ll get renamed below)
  • right-click the topmost “<schema>” node, and in the “Imports” property, add the “CompositeTypedProcedure.dbo.xsd” schema
  • in the first child under SQLMsg, set the “Data Structure Type” property to InsertIntoDestination (this is a reference that you just imported above)
  • in the second child under SQLMsg, set the “Data Structure Type” property to SelectAllDestination
  • in the first child under SQLMsgResponse, set the “Data Structure Type” property to InsertIntoDestination (this is a reference that you just imported above)
  • in the second child under SQLMsgResponse, set the “Data Structure Type” property to SelectAllDestinationResponse

Your schema should now look like this:


Then, create an instance of the new composite schema to use as a test message, and populate the request. Here’s mine:


I then created a simple orchestration that would receive a request, call the adapter, and persist the response from the adapter. The request and response messages are of the type we just created in the composite schema:


Build and deploy the solution. After deploying it, note that there was a binding file generated along with the schemas, which is awesome, as this means you don’t need to manually create the send port. So, import the binding file, which exists in your Visual Studio project.

HOWEVER, pretty big caveat here: after importing the binding, you need to change the action mapping. If you use the default value, it will fail. You need to replace what is generated with the magic keyword “CompositeOperation”. This tells the adapter that it needs to call multiple operations, which it will resolve based on the schemas and namespaces. I believe the reason this works the way it does is that it allows you to import multiple operations in a single pass, and then use some subset of those operations in a composite operation, thereby enabling re-use of the generated schemas to potentially cover multiple different combinations of composite operations. Either way, watch out for this one. The error message tells you exactly what the problem is; however, it won’t tell you about the keyword.
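For reference, the action mapping the wizard generates for the send port’s SOAP Action header typically looks something like the sketch below. The operation names and action paths here reflect the two procedures in my walkthrough and are illustrative; yours will match whatever operations you imported:

```xml
<!-- Generated mapping (sketch): one Action per imported operation. -->
<!-- This per-operation mapping is what fails for a composite message. -->
<BtsActionMapping xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <Operation Name="InsertIntoDestination" Action="TypedProcedure/dbo/InsertIntoDestination" />
  <Operation Name="SelectAllDestination" Action="TypedProcedure/dbo/SelectAllDestination" />
</BtsActionMapping>
```

Replacing that entire value with the plain string CompositeOperation is what tells the adapter to resolve each operation from the message content instead.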


As an aside, and for the benefit of those who have not worked with this adapter yet, here are the binding configuration properties you have access to:



To test the solution:

  • create an inbound file drop location
  • create an outbound file drop folder
  • bind everything
  • start the application
  • drop your instance doc into the file drop location, triggering the orchestration

Lastly, here’s the output file:


In closing, I think this is an awesome new capability, and I am really liking the new SQL adapter. In case you haven’t heard, the old SQL adapter is being deprecated, so you really should be working with this one going forward.

You can download my test solution here.

Breaking the WSS Top Link Bar in two with jQuery

Since I became a fan of the jQuery Javascript library, I usually can’t resist showing off the power of this library in my SharePoint development courses at U2U. For example, two weeks ago I was in sunny Cyprus talking about SharePoint, ASP.NET AJAX and jQuery, and I told my students something along the lines of “with jQuery you can change pretty much anything in the SharePoint UI just by making use of Javascript”. Promptly one of my students asked me if I could show how to break the Top Link Bar of a SharePoint site into two parts. I really like challenges in my SharePoint courses, but I couldn’t conquer this one on the spot; it took me a couple of hours in my hotel room to get it to work. Now that I’ve had some time to polish the code a little bit, I want to share it with you! 🙂

First, let’s talk about the issue that this little jQuery script is going to solve: you probably know that if you create subsites in SharePoint sites, those subsites are shown (by default) in the Top Link Bar of the parent site. So the more subsites you’ve got, the more items this menu shows. Life is all good until there are too many items to show in the menu, and it gets too big to fit on the screen. You won’t get an error or anything like that of course, but the browser will give the menu the space it needs by adding a horizontal scrollbar to the page. The screenshot below illustrates this behavior: the top link bar has too many menu items, making it pretty hard to access, for example, the Site Actions menu (you need to scroll to the right).

So how can this be solved with the help of jQuery? Well, besides a very powerful DOM Selectors API, the jQuery library also has a DOM Manipulation API. This Manipulation API can change the HTML that’s rendered in the browser by adding elements, removing elements, etc. The idea is to write a Javascript function that adds a second Top Link Bar to the page’s DOM. The easiest way to accomplish this is to just copy the existing Top Link Bar entirely:

$("#zz1_TopNavigationMenu").clone(true).insertAfter($("#zz1_TopNavigationMenu")).attr("id", "zz1_TopNavigationMenuCopy");

This jQuery script will:

  1. $("#zz1_TopNavigationMenu")
    select the element with ID zz1_TopNavigationMenu
  2. .clone(true)
    clone that element
  3. .insertAfter($("#zz1_TopNavigationMenu"))
    insert the cloned element after the original menu
  4. .attr("id", "zz1_TopNavigationMenuCopy");
    set the ID attribute to a new value to be able to identify it uniquely

The result of this function is a page that shows two identical menu bars:

Now the only thing that’s left to do is to remove the unnecessary menu items from both menus:
var nrOfItems = ($("#zz1_TopNavigationMenu > tbody > tr > td").length + 1) / 3;
var splitIndex = (Math.round(nrOfItems / 2) - 1) * 3;
$("#zz1_TopNavigationMenu > tbody > tr > td:gt(" + splitIndex + ")").remove();
$("#zz1_TopNavigationMenuCopy > tbody > tr > td:lt(" + (splitIndex + 1) + ")").remove();

The first two lines calculate the number of items in the menu (each menu item consists of three td elements) and the index of the td element where the table should be “split”. The third line removes all td elements of the original menu with an index higher than the calculated one. The last line removes all td elements of the copied menu with an index lower than the calculated one.
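A quick worked example of that arithmetic, using hypothetical numbers: a menu whose table contains 20 td elements, which the +1 correction turns into exactly 7 items:

```javascript
// Reproduce the split calculation from the snippet above for a
// hypothetical menu table containing 20 td elements (7 items).
var tdCount = 20;
var nrOfItems = (tdCount + 1) / 3;                    // (20 + 1) / 3 = 7
var splitIndex = (Math.round(nrOfItems / 2) - 1) * 3; // (4 - 1) * 3 = 9

console.log(nrOfItems);  // 7
console.log(splitIndex); // 9
// td elements with index > 9 are removed from the original menu, and
// td elements with index < 10 from the copy, so the two bars together
// still show all 7 items, roughly half in each.
```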

Now, let’s put everything together and get this to work in a real SharePoint site! The first thing to do is to make sure your SharePoint site loads the jQuery library. This can be done in a couple of ways, for example using the jQuery component of the SmartTools for SharePoint project on CodePlex (read my previous blog posts on jQuery in SharePoint for more info). Secondly, the Javascript discussed above should be loaded, and once again there are a couple of ways to do this. For production scenarios I’d recommend building a Feature using a delegate control that loads the Javascript function (just like the SmartTools jQuery component does), but for testing you can also do this in a plain Content Editor Web Part (or, God forbid, using the SharePoint Designer tool). So just add a Content Editor Web Part to a page of your SharePoint site, and copy/paste the following piece of code into it:

$(document).ready(function() {
  // Calculate where to split the tables of the menus
  var nrOfItems = ($("#zz1_TopNavigationMenu > tbody > tr > td").length + 1) / 3;
  var splitIndex = (Math.round(nrOfItems / 2) - 1) * 3;
  // Make a copy of the TopNavigationMenu
  $("#zz1_TopNavigationMenu").clone(true).insertAfter($("#zz1_TopNavigationMenu")).attr("id", "zz1_TopNavigationMenuCopy");
  // Remove items from original menu
  $("#zz1_TopNavigationMenu > tbody > tr > td:gt(" + splitIndex + ")").remove();
  // Remove items from copied menu
  $("#zz1_TopNavigationMenuCopy > tbody > tr > td:lt(" + (splitIndex + 1) + ")").remove();
});

When done, the SharePoint page will display a Top Link Bar split into two. Make sure you test the code in a WSS site (a Team Site, for example), because Publishing sites (e.g. a Collaboration Portal) have another menu; see the remark at the end of this post.

Important remark: the code discussed in this article is just an example for demonstration purposes. If you plan to use this code, I strongly recommend testing it in various web browsers and checking whether it meets your performance goals. The identifiers of the Top Link Bar used in the code apply to WSS sites (e.g. a Team Site); SharePoint Publishing sites generate menus with other IDs and/or structures, so you have to tweak the Javascript. For example, the following code works for a Collaboration Portal using the default.master.

$(document).ready(function() {
  // Calculate where to split the tables of the menus
  var nrOfItems = ($("#zz1_TopNavigationMenu *.zz1_TopNavigationMenu_5 > tbody > tr > td").length) / 3;
  var splitIndex = (Math.round(nrOfItems / 2) - 1) * 3;
  // Make a copy of the TopNavigationMenu
  $("#zz1_TopNavigationMenu").clone(true).insertAfter($("#zz1_TopNavigationMenu")).attr("id", "zz1_TopNavigationMenuCopy");
  // Remove items from original menu
  $("#zz1_TopNavigationMenu *.zz1_TopNavigationMenu_5 > tbody > tr > td:gt(" + splitIndex + ")").remove();
  // Remove items from copied menu
  $("#zz1_TopNavigationMenuCopy > tbody > tr > td:lt(2)").remove();

  // Remove first item (current site) from copied menu
  $("#zz1_TopNavigationMenuCopy *.zz1_TopNavigationMenu_5 > tbody > tr > td:lt(" + (splitIndex + 1) + ")").remove();
});