Woke up yesterday morning Sydney time (GMT+10hrs) feeling great; a sunny day, and the beach was just glassy and really blue. What a day, I thought – all perfect. 🙂
Upon checking in at Sydney Airport for the flight, the check-in lady mentioned extra forms and electronic visa waivers that needed to be done (nothing that 5 mins online didn’t fix – but I still had to fill out all the usual visa paperwork as per normal – so I’m questioning the value of my electronic visa waiver).
Before long I had met up with the usual suspects of Guy Riddel, Adam Cogan and new
kid on the block Alessandro (from Brazil – Hyper-V MVP) at the boarding gate. Adam
– just made it as he was late (I can’t print the *reason* why he
was late – but let’s just say it was a unique situation); Guy pulled out the ’I’m going Business, boys – see you at the other end after a sleep, massage and personal…’
We were getting ready to go when the plane needed a battery replaced, along with the recharger – a few of us had AA batteries and could have found a charger from somewhere, but 1.5hrs later the real guys had done the real thing.
14hrs later we touched down in San Francisco (beautiful place), and Adam didn’t clear customs as ’gracefully’ as the rest of us, but nonetheless he made it.
We picked up our baggage, dropped it off for the domestic flight and proceeded to the Alaska Air domestic gate – Adam didn’t pass security as he was carrying 3 bottles of water and a Qantas wine bottle. After he drank the 3 bottles of water in front of the security official, they let him carry on the wine (Adam liked the bottle).
So far all good
We were about to board the 12.30pm Alaska Air flight straight to Seattle – 90 mins
away from finish, a rest, shower and a bit of time to walk about Seattle.
The flight had a “Maintenance problem” which they were “fixing”. 2 hrs later, still nothing.
I made an executive decision to try and get to Seattle another way: a) road trip – hire a van and we all hop in; b) cycle; c) catch another flight. We took c) – a flight to Portland (I later found out there are two Portlands in the States!!! One is about an 18hr round-trip flight – we got the other).
Portland we were headed – it was a rush and we just squeezed in the door with something
that resembled a boarding pass (some of us had one, Adam had none and I had a boarding
pass for the 2nd half of the flight Portland -> Seattle at a much later time) –
we sorted all that out at Portland.
Finally landed in Seattle after taking another plane with propellers (like the planes
that go Syd<->Canberra) and Adam was moaning at me as he’s scared of flying
and I was forcing him into another take-off and landing.
Meanwhile our original plane in San Fran was still on the tarmac, doing not much at all.
We finally arrived in Seattle at around 5pm – mission accomplished
or so we thought
Our luggage was sitting there getting dizzy – so now the big question is – where is it? It’s now 11pm and no sign; I don’t think she’s coming tonight. If you see me in shirts that say “I love Seattle” or “Save the Super Sonics”, you’ll know why 🙂
I just caught up with some of the BizTalk MVP Crew and it was great to see familiar
faces after a while.
My brain is fried and I’ve decided to hit the hay.
It’s good night from Me.
If you have been a BizTalk developer long enough, you will know that application lifecycle management has always been a pain point when it comes to managing BizTalk projects.
The out-of-the-box support in BizTalk Server is not great for things like automated builds, continuous integration, code analysis, code coverage, unit testing… I could go on and on, but I am sure you get the picture.
Thankfully, most of these are set to improve with the upcoming release, BizTalk Server 2009, but there are still areas for improvement that I hope the product group will address soon.
One of the other hard things (IMHO) that one needs to do early on in a BizTalk project is to decide on the naming convention and project structure for modular deployment. Sometimes this is dictated by your customer, and sometimes you just follow what has been decided by your company. Simple, isn’t it? Wish life was that easy. 🙂
In reality you will find that customers want you to use their own guidelines, which are often incomplete or not fully thought out, and you end up using them anyway, trying to fill the gaps or modifying them by adding your own stuff.
Is this so bad?… Not if you have gone through this exercise and got it frozen and signed off before you start your development / coding; otherwise, it can turn out to be the cause of endless frustration for you and your team.
This is especially true for BizTalk projects, where you know that small changes to things like namespaces or typenames can cause a rippling effect across the project, and you end up rebuilding / reconfiguring all of the artefacts.
I have been on multiple projects in the past where naming conventions changed almost every week while the project was halfway through the development phase, so this is not an uncommon thing; however you need to be aware of this and take measures to avoid getting into such situations.
Another challenge is in sticking to the guidelines or conventions that you have so painstakingly chosen and frozen. If you have a large team, you will often see that not all developers are fully aware of the guidelines and conventions that are being followed in the team, especially when developers are constantly moving in and out of the team.
So what can be done in such situations? One of the things you can do on your BizTalk project is to run a static code analyser as part of your daily builds. I have always been a huge fan of build automation and the niceties that come along with it like automated testing, static code analysis report, code coverage report etc. There is nothing that makes you happier than getting up in the morning gazing at your PDA / mobile device to see that there were no errors or build breaks on the nightly build, especially when you are the Dev Lead / Architect on the project.
You must be thinking… “all this is great… but isn’t this a BizTalk solution?… how do you automate checking naming conventions?… especially on things like orchestration shapes, map names, physical ports etc…”. Well, you are in luck: you can now do static code analysis on BizTalk solutions too, thanks to Elton Stoneman and his cool new tool/plug-in called BizTalkCop, available on CodePlex.
Check out the links:
BizTalkCop is essentially a set of FxCop rules that will allow you to inspect BizTalk assemblies for structure and naming convention of your BizTalk artefacts such as Orchestrations, Pipelines, Schemas, Maps etc. It can even look into the deployed solutions inside BizTalk Server (using BizTalk Object Model and BizTalkMgmtDB) to validate the names of your physical ports and receive locations on your deployed BizTalk Server application.
If you are already familiar with FxCop, BizTalkCop provides you the same experience since it is just an extension of the FxCop ruleset.
BizTalkCop is currently at release 1.0 and contains a set of rules based on Scott Colestock’s naming conventions, but they’re (mostly) configurable so you can modify them to suit your own standards. You can create new custom rules as well, since the full source code is provided along with base classes and a framework.
Another thing to note: since BizTalk (2006/R2) projects are not integrated with the Code Analysis tool in Visual Studio 2005, you need to download and install FxCop separately before installing BizTalkCop (even if you are using the Team Suite edition of Visual Studio 2005). I have not tried it with the BizTalk 2009 Beta yet, since I have it on the Professional edition of Visual Studio 2008 installed on my VPC (Code Analysis is available only in Visual Studio Team Suite).
The link above provides instructions on how to install and configure BizTalkCop so I am not going to detail it out here.
Demo – The Bad Project
Let me illustrate the usefulness of BizTalkCop by running it on a badly done solution. I created a very simple BizTalk application, a la “Hello World” in BizTalk (I call it the ’BadProject’), without giving much attention to the project structure or naming conventions (I just used the default names, like we do most of the time). The application receives an inventory replenishment request; the request is transformed into a purchase order request and sent out. For simplicity’s sake, both the receive port and the send port use FILE adapters.
So essentially, my BizTalk project consists of a single BizTalk project within which there are two Schemas, a Map and an Orchestration as shown in the solution explorer screenshot below:
The orchestration, transformation and schemas are all very elementary as shown below (notice the naming of the artefacts highlighted).
After building and deploying the solution, I fired up FxCop, loaded the target BizTalk application assembly and hit the Analyze button, selecting only the BizTalkCop rules and unchecking the others.
Note: You also need to make sure that BizTalkCop is configured to point to the right BizTalk application so that it can validate the application-level artefacts like the physical port names etc.
You can see from the screenshots that many errors (28 of them) are generated for such a simple application.
Here are the errors:
The errors are pretty self-explanatory, so I won’t go through each of them.
1. Port names should be prefixed with their direction – start ‘ReceivePort1’ with ‘Receive.’
2. Port names should be prefixed with their direction – start ‘SendPort1’ with ‘Send.’
3. Receive Location names should be prefixed with their Receive Port name – start ‘Receive Location1’ with ‘ReceivePort1’
4. Receive Location names should be suffixed with their transport type – end ‘Receive Location1’ with ‘.FILE’
5. Artifacts should be declared in modules with the correct suffix – consider module name ‘BadProject.Schemas’
6. Schema names should end with the data format. Format: ‘inventoryRequest’ is unknown
7. Schema names should begin with the root node. Start: ‘inventoryRequest’ with: ‘Root’
8. Artifacts should be declared in modules with the correct suffix – consider module name ‘BadProject.Transforms’
9. Map names should have the format “SourceSchema_DestinationSchema”
10. Artifacts should be declared in modules with the correct suffix – consider module name ‘BadProject.Orchestrations’
11. Orchestration members should be Camel cased. Replace Message name: ‘InvReqMsg’ with: ‘invReqMsg’
12. Orchestration members should be Camel cased. Replace Message name: ‘PurchaseOrderMsg’ with: ‘purchaseOrderMsg’
13. Orchestration members should be Camel cased. Replace Port name: ‘Port_1’ with: ‘port1’
14. Orchestration members should be Camel cased. Replace Port name: ‘Port_2’ with: ‘port2’
15. Orchestration shapes should be correctly named – replace ‘ConstructMessage_PurchaseOrder’ with ‘Construct_PurchaseOrderMsg’
16. Orchestration shapes should be correctly named – replace ‘Transform_InvToPurchaseOrder’ with ‘Transform_inventoryRequest_purchaseOrderSchema’
17. Orchestration Shapes should have the correct prefix – start ‘MessageAssignment_1’ with ‘Assign_’
18. Orchestration Shapes should have the correct prefix – start ‘Receive_InvRequest’ with ‘Rcv_’
19. Orchestration Shapes should have the correct prefix – start ‘Send_POMsg’ with ‘Snd_’
20. Orchestration types should be Pascal cased. Replace Port Type name: ‘PortType_1’ with: ‘PortType1’
21. Orchestration types should be Pascal cased. Replace Port Type name: ‘PortType_2’ with: ‘PortType2’
22. Orchestration Types should have the correct suffix – end ‘PortType_1’ with ‘PortType’
23. Orchestration Types should have the correct suffix – end ‘PortType_2’ with ‘PortType’
24. Orchestration Types should have the correct suffix – end ‘Port_1’ with ‘Port’
25. Orchestration Types should have the correct suffix – end ‘Port_2’ with ‘Port’
26. Artifacts should be declared in modules with the correct suffix – consider module name ‘BadProject.Schemas’
27. Schema names should end with the data format. Format: ‘purchaseOrderSchema’ is unknown
28. Schema names should begin with the root node. Start: ‘purchaseOrderSchema’ with: ‘Root’
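To give a feel for what these rules actually check, here is an illustrative sketch (in Python, and emphatically not BizTalkCop’s real implementation – its rules are FxCop rule classes in .NET) of rules 1 and 2 above, expressed as a simple string check:

```python
# Illustrative sketch only: a BizTalkCop-style naming rule reduced to a
# plain string check. The rule text mirrors the errors listed above;
# the function name and shape are made up for this example.
def check_port_name(name, direction):
    """Return a list of naming-rule violations for a physical port name.

    direction is 'Receive' or 'Send'; the rule says the port name must
    be prefixed with its direction followed by a dot.
    """
    errors = []
    prefix = direction + "."          # e.g. 'Receive.' or 'Send.'
    if not name.startswith(prefix):
        errors.append(
            "Port names should be prefixed with their direction - "
            "start '%s' with '%s'" % (name, prefix))
    return errors

print(check_port_name("ReceivePort1", "Receive"))        # one violation
print(check_port_name("Receive.InventoryRequest", "Receive"))  # []
```

The real rules are richer (they walk the assembly metadata and the BizTalkMgmtDB), but at heart each one is this kind of deterministic check, which is exactly why they are so cheap to run on every build.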
You can also save the FxCop project and use it with the command-line option to generate a pretty cool report, as shown below.
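As a rough sketch of that command-line run (the switch names are from FxCop’s documented command line, but double-check them against the FxCop help for your version; the install path, project name and output file name here are placeholders):

```
"%ProgramFiles%\Microsoft FxCop 1.36\FxCopCmd.exe" /project:BadProject.fxcop /out:BizTalkCopReport.xml /summary
```

FxCop also ships with an XSL stylesheet for its XML output, which, if memory serves, is how the HTML-style report gets produced; wiring this line into a nightly build script gives you the report on every build.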
Demo – The Good Solution
Now I took the same solution and made changes, following the naming conventions and also restructuring the solution to suit modular deployment. As you can see from the illustration below, I have split the application into three separate projects. Not only is this easier to maintain and a time-saver during development, it is absolutely critical that the application is broken up in this manner from an administration point of view once the solution is in production. Any seasoned BizTalk developer / administrator will understand what I mean here; it’s a whole different topic and I don’t want to get swayed into that.
You can also see in the solution explorer above that the names of some of the artefacts have been changed in accordance with the naming convention. For example, the schema names now indicate the data format and the root node name of the schema. I also made sure that the physical ports conform to the naming standards.
The orchestration, map and the schemas below show that changes have been made in the names of the artefacts.
You can see from the screenshots below that the errors have been reduced to just one.
To sum it all up, I feel FxCop / BizTalkCop is a great tool to have in your arsenal for that next BizTalk project you are waiting for, or even for your current project. It saves you (or the SQA team) a lot of time checking naming conventions manually on a routine basis, and provides reports on demand.
It can be used both on individual machines and on the build server. You can integrate it with your build scripts so that your application health report now also includes static code analysis among other things. This way you can sleep peacefully knowing that the code drop you just sent to the customer conforms to all the naming conventions, without having to review it manually at the eleventh hour – not to mention the amount of time saved. I hope to see many more new rules and improvements from Elton in future releases of BizTalkCop.
Sr.Consultant, MGSI (Microsoft Global Services, India).
While I acknowledge that I am an employee of Microsoft, any and all views expressed in this article are mine and do not necessarily reflect the views of Microsoft.
Those who have known me for any amount of time, know that I have always been a vocal proponent of my employer, Sogeti. For over 2 years they have been the vehicle by which I’ve advanced my career, and reached out to the developer community. Unfortunately, the time has come for that relationship to end, and a new one to begin. The choice was mine alone, and the leadership at Sogeti did their very best to get me to reconsider turning the page, but this is simply the right time for me to begin a new chapter in my career.
In this new chapter, it is time for me to continue improving myself, and it so happens that improving is exactly what my new employer is all about. In a few short weeks I will be joining the ranks of Improving Enterprises, a consulting firm based in Dallas that provides consulting, training, mentoring, and rural sourcing. I am thrilled that with this move I will have the chance to help bring the Improving style to BizTalk and WCF projects around the region. I am also thrilled that I will be joining some of the brightest people I know on their staff. The list of MVPs alone is enough to boggle the mind, with Caleb Jenkins, David O’Hara, Jef Newsom, and Todd Girvin all on Improving’s rock star staff, and that’s just the MVPs.
I wish everyone at Sogeti nothing but the best, and as always anyone who would like to reach me can do so at Tim@TimRayburn.net.
I was writing to a Windows file share and saw behavior where the send port would not complete writing out the flat file, or the flat file would contain more records than the inbound document. After opening a Microsoft ticket, the following KB article solved our problem.
This has come up twice for me in the past week: once while reading the tech review comments on my own book (due out in April), and again while I was tech reviewing another BizTalk book (due out in July). That is, we presumptively say that the BizTalk “message type” always equals http://namespace#root when that’s […]
In a previous post we showed how to implement a basic WCF content based routing solution using the Windows Application Server (Dublin) forwarding service together with XPath message filters and filter tables. Even though XPath filters are a very appealing…(read more)
It is no secret that Microsoft has been working on bringing its Enterprise offerings up to date, readying them for the next generation of applications and services, and fixing small pain points that have vexed developers for years. In just a short while, BizTalk Server 2006 R2 will make way for BizTalk Server 2009, and another interesting product from Microsoft will realize version 2.0 status. That product is Microsoft Enterprise Service Bus (ESB) Guidance 2.0.
What is an Enterprise Service Bus?
Dmitri Ossipov, a Senior Program Manager for Microsoft working on the ESB Guidance, in his interview on .NET Rocks defined ESB as an "architectural paradigm for policy driven mediation." Nicholas Allen, a Program Manager at Microsoft working on BizTalk Server, argues that "the clearest definition of what companies think ESB means comes from looking at the products that they build." In the case of Microsoft's ESB offering, we see a solid implementation of the Routing Slip pattern built on top of BizTalk Server, sprinkled with ample extensibility points. In the world of the ESB Guidance, Routing Slips are called Itineraries, and act like an order placed at a menu of services that is the bus. The ESB Guidance provides flexibility through a loosely coupled design that allows routing and transformation decisions to be made at runtime instead of having to be statically configured at design-time. This enables service composition, dynamic transformation, and adds support for scenarios previously unimaginable in a BizTalk Server environment.
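The Routing Slip pattern itself is simple enough to sketch in a few lines. This is an illustrative, non-BizTalk Python sketch (the service names and message shape are made up for the example): the message carries its own ordered list of steps – its itinerary – and the bus just executes the next step until the slip is exhausted:

```python
# Illustrative Routing Slip ("itinerary") sketch. Each service is a plain
# function; the registry stands in for the menu of services on the bus.
def to_upper(body):
    return body.upper()

def add_footer(body):
    return body + " [processed]"

SERVICES = {"to_upper": to_upper, "add_footer": add_footer}

def run_itinerary(message):
    """Process a message until its routing slip (itinerary) is empty."""
    while message["itinerary"]:
        step = message["itinerary"].pop(0)      # next service on the slip
        message["body"] = SERVICES[step](message["body"])
    return message["body"]

msg = {"body": "hello esb", "itinerary": ["to_upper", "add_footer"]}
print(run_itinerary(msg))  # HELLO ESB [processed]
```

The point of the pattern is visible even at this toy scale: the processing sequence lives in the message (or, with ESB Guidance 2.0, is resolved onto it at runtime), not in statically configured ports.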
Version 1.0 of the Guidance was a paradigm shift for many BizTalk and .NET developers, but version 2 has the potential to take it to the next level. It introduces some killer new features such as the Itinerary Designer that can reduce XML induced eye-strain, Generic On-Ramps that allow you to send a message into the bus on the consumer's terms, and support for Server-side Itineraries that can place ESB developers back in control of the content of their Itineraries.
Someone once said, "XML is like violence, if it doesn't solve your problem, you're not using enough of it." I'm not going to debate the truth of this one way or another, but I do find it interesting that XML is compared to something that causes such a universal adverse reaction. When color-coded, perfectly indented, and collapsible, I can handle XML. However, at that point I have already resorted to looking at a more human friendly representation of the data instead of the raw data itself.
Those who have downloaded the January CTP of the Guidance have found themselves in the midst of peace – no XML to be seen (don't worry, it's still there if you dig). The January CTP now includes a Visual Studio designer for Itinerary models. Creating Itineraries is now as simple as dragging On-Ramps, Itinerary Services and Off-Ramps into a visual model that can be exported to a repository, or as XML – even mere mortals can do it.
Yes, I just said the word repository. Version 1.0 of the Guidance was awesome, but it did leave developers with a puzzle: "How do I get the itinerary I need to route this message, and how do I get it to the server?" The answer, of course, was that you sent the XML of the itinerary within the header of the message when submitting it to the Itinerary Processing web service.
But where do you get the XML from? How do you know it's valid? Well, with CTP2, you know an itinerary is valid because it was modeled in a designer that validated it before each save and export. With CTP2 the XML can be retrieved from a SQL database (which bears a striking resemblance to the rules set database in BizTalk Server), and applied to the message in a pipeline component.
BizTalk and WCF enthusiast Bram Veldhoen remarked on his blog that it would "be a good idea to have the ESB be responsible for assigning the Itinerary headers." Microsoft apparently agreed, and this is exactly what should be expected from this new version of the ESB Guidance.
The latest ESB 2.0 CTP adds three new resolvers for resolving itineraries: BRI, ITINERARY, and ITINERARY-STATIC. This means that not only can consumers rely on the ESB to apply itineraries for them – the ESB can do it dynamically. For the ESB Guidance uninitiated, Resolvers are these wonderful classes within the ESB Guidance that can take a configuration string, parse it and execute a query of some sort to look up information necessary for transformation, routing, or some other custom process.
The first is the BRI resolver; a typical resolver connection string would look like this:
This string, when interpreted by the BRI resolver (the BRE moniker was already taken for a resolver that cannot retrieve itineraries from the repository), will tell the resolver to use the Business Rules policy named SamplePolicy to determine the Itinerary to use for routing. It will also include the message as a fact when calling the Rules.
The second is the ITINERARY resolver; a typical resolver connection string would look like this:
This is a static choice of an itinerary named Zebra. Its sister resolver, ITINERARY-STATIC, does exactly the same thing but is implemented using the Unity Application Block – a discussion to save for another posting.
You would use such resolver connection strings in the configuration of the ESB Itinerary Selector pipeline component, which is part of the ItinerarySelect* family of receive pipelines included with the ESB Guidance 2.0 CTP. Since this is all part of a pipeline, that means that in 2.0, you can create Itinerary On-Ramps that are first class citizens in the ESB which use transports other than an ASMX web service or WCF web service. The possibilities are limited only to the adapters installed.
Getting it Right
The new version of the ESB Guidance brings new features, enhancements, and fixes that really make it feel like a polished product. With version 1 they got it shipped, but with version 2 they're getting it right.
I haven't blogged about it, though Mikael has mentioned it, but in conjunction with the BizTalk User Group Sweden meeting I released a webcast about how to use BizTalk Server 2009 Beta 1 together with Team Foundation Server 2008 to automate build…(read more)
I developed a custom functoid and found the weird behavior that wouldn’t allow me to link the output to another functoid. I found the reason here:
FunctoidCategory also determines the functionality restrictions and behaviors possible with the custom functoid. For example, a custom functoid that uses the Logical FunctoidCategory cannot output a string value to the map’s destination schema, but will instead determine whether the destination record is created based on the Boolean value as mentioned at: Logical Functoids.
So using FunctoidCategory.Unknown to display the functoid in the Advanced Functoid category in the VS toolbox modified its behavior to not allow its output to link to other functoids.