Microsoft BizTalk Server Performance Optimization Guide has been released.


 


The new Microsoft BizTalk Server Performance Optimization Guide has been released to the web.


This document provides guidance and best practices on optimizing BizTalk Server performance for demanding production environments. It is based on numerous real-world customer engagements by the BizTalk Customer Advisory Team (CAT, the "Rangers"), Premier Field Engineering, and MCS.


The guidance consists of the following four sections:



  • Getting Started: An overview of the BizTalk Server functional components that can affect performance. It also describes the phases of a BizTalk Server performance assessment.

  • Finding and Eliminating Bottlenecks: Describes various types of performance bottlenecks as they relate to BizTalk Server solutions, and explains how to resolve them.

  • Automating Testing: Describes how to implement an automated build process and how to automate functional and load testing using Visual Studio Team System, BizUnit and Loadgen.

  • Optimizing Performance: Provides guidance for optimizing the performance of specific components in a BizTalk Server environment.

The guide is available in several formats and from several locations.



Make sure your BizTalk implementation is optimized by adhering to this guidance!


Regards,


Ofer 

BizTalk Performance Optimisation Guide – First Edition!

Guys – this document has recently hit the shelves, and what a great guide it is. Written and reviewed by a huge team, mostly within MMS – the two main authors, Ewan Fairweather and Rob Steel (both MS and very much project/client-oriented guys – out in the field!), did a superb job.

Firstly – grab the Performance Optimisation Guide.

(Check out Ewan and Rob’s blogs, as there are some great bits on there, as well as a section dedicated to the BizTalk Performance Explorer.)

What’s the meaty stuff I can expect to read? (I hear you ask…)

1. It serves two purposes: one, prescriptive guidance, and two, best practices around optimisation.
(It’s also great to see BizUnit in there for testing, and as part of LoadGen.)

I’ve summarised below:

The key sections of the guide are:

• Getting Started: Provides an overview of the BizTalk Server functional components that can affect performance. It also describes the phases of a BizTalk Server performance assessment.

• Finding and Eliminating Bottlenecks: Describes various types of performance bottlenecks as they relate to BizTalk Server solutions, and explains how to resolve them.

• Automating Testing: Describes how to implement an automated build process and how to automate functional and load testing using Visual Studio Team System, BizUnit and LoadGen.

• Optimizing Performance: Provides guidance for optimizing the performance of specific components in a BizTalk Server environment.

 

Other ‘related stuff’ to download while you’re in the mood

  1. Microsoft BizTalk Server Operations Guide

  2. BizTalk Server 2006 R2 Installation and Upgrade Guides

  3. BizTalk Server 2006 Tutorials

  4. BizTalk Server 2006 R2 Runtime Architecture Poster

  5. BizTalk Server 2006 R2 Capabilities Poster

R2 EDI Reporting, what is lacking, and what I did about it

I was finally able to start working with BizTalk R2, and we do HIPAA/EDI. Within hours of ‘playing’ with it, I was SHOCKED! How did this get out the door?! Who signed off saying this solves anyone’s EDI integration problems?

Okay — so you are right — it allows the EDI shop to handle eleventeen million different transactions out of the box. Yes, that is cool, and actually quite useful.

However, the part that keeps me up at night is not: “how am I going to translate this transaction?” but “what happened to this file yesterday, or the other file two weeks ago?”

I had gone to plenty of MS conferences where, during the beta, bits were shown and the reporting was going to be done using BAM. “Cool,” I thought; if MS is going down the BAM trail, I should too. So my whole paradigm of reporting changed, and now I am an even bigger proponent of BAM.

Once I started playing with the R2 bits, maybe hour two, I was scratching my head: where was all of this famed reporting?

Let me not bore you too much with what R2 has for reporting; I will bullet-list the things I required that are not present in the current reporting architecture:

Error information in a repository that associates the file with the error.

I am not a big fan of reading through the event log to try to find the message that failed, and then the equally taxing process of finding the event log entry that actually tells me what the error is. Even MS was not correct when they stated that the BizTalk Server 2006 message ID error and the corresponding BizTalk Server 2006 EDI error are paired together.

Being able to associate the TA1/997 with the original message.

Yes, I know that there is a column in the BAM tables to associate it, but it is not used! The only suggestion was to write a warm and fuzzy query that pulls the control number/sender id/receiver id to associate the original message with the corresponding acknowledgment. This did not work for me, as I test like crazy, throwing the same file at a process eleventeen million times before I am satisfied, and there isn’t a great way to pair up the values programmatically.
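For what it’s worth, the envelope-based pairing the suggested query relies on can be sketched in a few lines. This is a minimal illustration in Python, not the actual BAM schema or query; all field names are made up for the example:

```python
# Pair X12 997/TA1 acknowledgments with their original interchanges by
# matching on (sender id, receiver id, interchange control number) from the
# ISA envelope. Field names are illustrative, not the real BAM/EDI schema.

def make_key(record):
    # An ack travels in the opposite direction to the interchange it
    # acknowledges, so treat the sender/receiver pair as unordered.
    parties = tuple(sorted((record["sender_id"], record["receiver_id"])))
    return parties + (record["control_number"],)

def pair_acks(interchanges, acks):
    """Return (paired, unmatched_acks): acks matched to originals by envelope key."""
    by_key = {make_key(i): i for i in interchanges}
    paired, unmatched = [], []
    for ack in acks:
        original = by_key.get(make_key(ack))
        if original is not None:
            paired.append((original, ack))
        else:
            unmatched.append(ack)
    return paired, unmatched

interchanges = [
    {"sender_id": "SENDERA", "receiver_id": "RECVB",
     "control_number": "000000101", "file": "ABCDEFG12345.edi"},
    {"sender_id": "SENDERA", "receiver_id": "RECVB",
     "control_number": "000000102", "file": "claims_tuesday.edi"},
]
acks = [
    # Note the swapped direction: the partner acknowledges our interchange.
    {"sender_id": "RECVB", "receiver_id": "SENDERA",
     "control_number": "000000101", "type": "997"},
]

paired, unmatched = pair_acks(interchanges, acks)
print(paired[0][0]["file"])  # → ABCDEFG12345.edi
```

Of course, this falls apart exactly where the author says it does: resubmit the same file (same control number) eleventeen million times and the key is no longer unique.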

Finding the original filename.

I know that in a perfect world we don’t care about file names; we simply care about sender ids/control numbers, etc. But our trading partners are in love with filenames, bordering on creepy! When they call up, the first thing they say is “I dropped off ‘ABCDEFG12345.edi’ last Wednesday – I never got an ACK, what happened?” I have never gotten, without pulling out my crowbar, what their id is and the control number.

Ability to see the entire EDI on both the receive and send side.

The transaction (which is stored deep in the DTADb database) doesn’t help me if I have a hard time associating it with its entire interchange.

What I did:

I wrote a little email to Steve Ballmer that got some people calling me to assist on an issue where I wanted to reject an entire EDI interchange if any transaction was in error. It was/still is pretty hard to get an undocumented feature limitation corrected. I decided that I had fought my battles and was not going to do it again, so I wrote some of my own components:

MY NEW EDI REPORTING

I have a separate datastore that is located on the same database as the BAMPrimaryImport (yes, I know I shouldn’t). It is SQL 2005, so I can store quite a large amount of data in the following table:

The other thing I wanted to log was all of the context properties that are part of the message (and some that aren’t). Here is a snapshot of the view that holds all you will need to know when a trading partner calls up:

Some notes: I have been testing with 50 MB – 100 MB files and the view was coming up pretty slowly, so I limited what is shown in the last column of the view to 2500 bytes, and added the Archive Interchange Id so that if you need to look at the whole interchange you can run a query against the EDI_Repository.
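The preview-plus-archive split described above is a generally useful pattern: keep a short preview in the reporting row, and stash the full payload once under an archive id for drill-down. A hypothetical sketch in Python (the table, column, and size choices here are mine, not the component’s actual schema):

```python
import uuid

# Sketch of the "preview + archive" split: the reporting row keeps only the
# first 2500 bytes of the interchange, while the full payload is stored once
# under an Archive Interchange Id for later drill-down queries.

PREVIEW_BYTES = 2500

archive = {}  # stands in for the EDI_Repository table

def store_interchange(payload: bytes, filename: str) -> dict:
    """Return the reporting row; stash the full payload in the archive."""
    archive_id = str(uuid.uuid4())
    archive[archive_id] = payload
    return {
        "archive_interchange_id": archive_id,
        "original_filename": filename,
        "payload_preview": payload[:PREVIEW_BYTES],
        "payload_length": len(payload),
    }

row = store_interchange(b"ISA*00*..." + b"X" * 100_000, "ABCDEFG12345.edi")
print(len(row["payload_preview"]), row["payload_length"])  # → 2500 100010
```

The reporting view stays fast because it never drags the full interchange across the wire; the archive id is the pointer back to the whole thing when a partner calls.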

To get all of this data to show up, I created a couple of pipeline components:

Receive Pipeline

It has the following pipeline properties to set when you define your receive location:

And for our outbound EDI data:

Send Pipeline

It has the following pipeline properties to set when you define your send port:

You should specify the database and server; if the component can’t find a database associated with the message (if one wasn’t already attached from the receive side – an automatically generated 997/TA1, for example), then it will put it in the database you specify.

This is only geared to ANSI X12 transactions, including HIPAA transactions; I have not tested it with UN/EDIFACT.

If you are interested in having this, refer to this page.

ASP .NET 2.0 – IIS worker process terminating unexpectedly. Event ID: 1009 – The process exit code was ‘0xc0000005’

I’m in the process of migrating biztalk247.com to a new hosting provider. After deploying the application, I started performance testing using the Visual Studio web/load test features. I configured the IIS application pool as a web garden (more than one w3wp.exe worker process) to serve incoming requests. The application pool also has settings to recycle the worker process after a certain amount of virtual/physical memory usage.

During the test, once the virtual/physical memory limit was reached, the worker process would recycle. As soon as this occurred, I started noticing entries in the event log with the following warning message:

A process serving application pool ‘biztalk247’ terminated unexpectedly. The process id was ‘12308’. The process exit code was ‘0xc0000005’. (Event ID: 1009)

and an error message

Faulting application w3wp.exe, version 6.0.3790.3959, stamp 45d6968e, faulting module unknown, version 0.0.0.0, stamp 00000000, debug? 0, fault address 0x001f0001. (Event ID 1000)

Eventually, after a few of these errors, the IIS application pool shut down and I started getting 503 Service Unavailable errors.

After doing a bit of research I found this KB article really useful: http://support.microsoft.com/kb/918041. According to the article, whenever a worker process shuts down it writes some information to the registry under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\<ASP.NETVersion>\Names, and the problem occurs when it does not have enough permission to do so. I checked the location on the server and, to my surprise, I couldn’t see the “Names” key. I created it manually and tried to assign permissions to the IIS_WPG group as described in the KB article, but as soon as I right-clicked and selected “Permissions” I got some kind of warning message saying the permission list is not in order (I didn’t capture the exact error message).

To me it looked like ASP.NET 2.0 was not configured correctly, so I decided to uninstall and reinstall ASP.NET 2.0 using aspnet_regiis (the -u and -i switches) to be on the safe side, since it’s the early stages of deployment.

After doing so, I right-clicked on the “Names” key and didn’t see any warnings.

I reran the test; this time the worker process shut down properly without any warnings/errors. Looking at the registry revealed that ASP.NET had written some values under the “Names” key, as shown in the picture below.

Nandri!

Saravana

If you are in Perth and want to learn about WCF

I will be in Perth this week (July 7th to 11th, 2008) on a BizTalk consulting engagement and will be presenting to the Perth .NET Community of Practice on one of my favorite topics, WCF. The presentation will be a hands-on demonstration of how to use WCF. For the details on the meeting, go to:
http://perthdotnet.org/blogs/events/archive/2008/07/03/hands-on-wcf-with-bill-chesnut.aspx

Hope to see everyone there.

Working with Global Parties – AS2, EDI ‘stuff’

I was recently working on an AS2/EDI project using BizTalk 2006 R2 and came across an interesting question:

How do I create 500+ parties, with the AS2 properties included (or even HL7, for that matter)?

After a little digging – there is the BizTalk ExplorerOM API that we could drill into and create the parties through code.

However, there’s a more hands-off approach: using bindings!

(1) Export bindings from an existing setup – including parties! – to an XML file.

(2) Modify the XML file – particularly the party information.

(3) Import the bindings back into your new environment.

There was a great blog post by the BizTalk Team on this subject a while back: http://blogs.msdn.com/biztalkb2b/archive/2006/10/25/automated-deployment-of-edi-properties-also-useful-for-bulk-import-of-party-properties.aspx
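Step (2) above is the part worth scripting when you have 500+ parties. The sketch below shows the general idea in Python: clone a template party element once per trading partner and rewrite its name. The PartyCollection/PartyInfo element names here are illustrative; check them against your own exported bindings file before relying on them:

```python
import copy
import xml.etree.ElementTree as ET

# Sketch of bulk party creation by editing an exported bindings file:
# clone a template party per trading partner, then import the result.
# Element and attribute names are illustrative, not a guaranteed schema.

BINDINGS = """<BindingInfo>
  <PartyCollection>
    <PartyInfo Name="TemplateParty">
      <SendPorts />
      <CustomData>AS2 settings here</CustomData>
    </PartyInfo>
  </PartyCollection>
</BindingInfo>"""

def add_parties(bindings_xml: str, party_names: list) -> str:
    root = ET.fromstring(bindings_xml)
    collection = root.find("PartyCollection")
    template = collection.find("PartyInfo")
    for name in party_names:
        clone = copy.deepcopy(template)
        clone.set("Name", name)   # per-party AS2 properties would be edited here too
        collection.append(clone)
    collection.remove(template)   # drop the template itself
    return ET.tostring(root, encoding="unicode")

result = add_parties(BINDINGS, ["PartnerA", "PartnerB"])
print(result.count("<PartyInfo"))  # → 2
```

Feed the rewritten file to the normal bindings import, and the parties come along for free.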

Download Posted – WF XAML Workshop

We posted up a new Windows Workflow Foundation (WF) workshop to the Downloads server. The WF XAML Workflow Workshop was created by Steve Danielson to help developers better understand the development of XAML workflows.

The topic of XAML Workflows is a popular one with customers, but there are some common questions about the best way to create and use XAML-only Workflows. This workshop illustrates several XAML Workflow topics to help WF developers better understand this topic.

Topics covered in this workshop include:

  • Different modes of Workflow authoring
  • Host considerations
  • XAML activation
  • Workflow compiler
  • Versioning and deployment
  • Dynamic update
  • Security considerations

The lab is available in the downloads section of Microsoft.com. Two MSI files are posted for download, one containing the code and the PowerPoint presentation, and a second containing the video file. Additionally, you can view the workshop presentation on the MSDN media server (I promise to figure out how to embed WMV files in future posts).

Adventures with Live Mesh (CTP)

I have been playing around with the Live Mesh Community Technology Preview, and have been doing what I think is some pretty cool stuff with it (as a consumer), so I thought I’d post something about it.

First off, and let’s get this out of the way up front, this is NOT “another Ray Ozzie Notes/Groove”. What’s available today looks and feels like Groove (or FolderShare), but that’s only because this is the first implementation of something written on top of the Mesh Operating Environment (MOE). Today it gives you a way to synchronize files between machines and a “virtual desktop in the cloud”, but this is just the start. There will be a developer SDK available down the road that will open up this distributed environment to what I think could be a very interesting new class of applications (all SOA, of course!).

It is not my intention to go into detail about what it is, see the link below to Paul’s write-ups for that. It is my intention to share my experiences, good and not-so-good, and explain how I am using it.

Configuration

My configuration is:

  • My MediaCenter PC (at home) is a Live Mesh “device”
  • My notebook (also known as “my office” :)) is another Live Mesh device
  • I have created some folders on my virtual Live Mesh desktop-in-the-cloud that are synched with my devices

Project documents and artifacts

I do all of my development work inside virtual machines. Plus, I’m very mobile, and am often working in a disconnected state. How I use Live Mesh for this is:

  • from inside my virtual machine, I map a drive to a folder on my host
  • I have Live Mesh running in the host
  • When I drag project documents from inside the VM into the shared folder, they appear on the host
  • Live Mesh detects the new documents, and synchronizes them to the cloud
  • Live Mesh running on my MediaCenter PC detects the new files in the cloud and brings them down.

Presto… everything’s in synch! Pretty cool that I can do something inside my VM and it just shows up at home on my MediaCenter (complete with an RSS news feed for the folder saying who added/deleted what).

Photos

Basically the same as above, except when I plug my camera into my notebook I drag the photos into a folder that’s synched with my Live Mesh desktop. From there, they replicate down to my MediaCenter PC. I have my MediaCenter machine configured to automatically do backups to an external drive. Here Live Mesh gives me instant distributed backups, without my having to think about it.

My not-so-great experiences

My not-so-great experiences were my own fault, nothing wrong with Live Mesh.

  • I didn’t understand the concept of a “device”. It is a combination of machine + login. I have 2 logins on my MediaCenter: a low-privilege one for everyone in the family, and my admin login. I had set Live Mesh up, under both logins, to synchronize the same folder to my virtual desktop folder. Perhaps it could be a bit smarter and detect that scenario, but it didn’t, and the net effect was that I started getting duplicate file conflicts as the same files were being uploaded from different devices (even though from the same physical location) to the same virtual location. It turned into a real “Live Mess” 🙂 The solution was to set the MediaCenter machine to log in automatically on boot, so Live Mesh would always be running, and to remove the admin “device” from my mesh.
  • This one’s kind of funny, and shows what can happen when you forget what dragging actually kicks off. I was in Jordan, and had spent a weekend taking a bazillion pictures with my 10-megapixel camera. I pulled the pictures off, and it took all of a second to drag them to my synchronized photo folder. The upload to my mesh completed 5 days later 🙂

Learning more

Live Mesh is really cool and useful technology. My biggest gripe right now, and a constraint on my usage, is the 5 GB limit. As was said on CNET’s Buzz Out Loud podcast, and I love this quote, “we’re going to need a bigger cloud”.

I would encourage everyone to get the CTP, or get on the list, and start using it for real.

Last I saw there was a waiting list to get in to the tech preview. That may or may not still be the case when you read this.

If you search around, you’ll find lots of info about Live Mesh, as a lot of people are (rightfully so) pretty excited about this. Some good starting points would be:


MVP Year – #2

I’m delighted to say I have been awarded Microsoft Most Valuable Professional (MVP) for the second year in a row. My first MVP year was awesome; it’s hard to believe the level of benefits you get from Microsoft for being an MVP. I’d say the highlight of my award year was participating in the “Oslo” Software Design Review at Redmond during the MVP global summit. Without being an MVP, it’s very unlikely I would have got a chance like this to hear the future road map directly from the product team. It’s not all about hearing what’s coming soon, but also getting the opportunity to engage in heated debate and to provide feedback directly to the product team to shape the technology you love the most.

One of the other exciting things for me as part of the “Oslo” SDR was the chance to interact with industry experts. It was a limited audience (around 40 people, I believe) including people like Don Box, Juval Lowy, Michele Leroux Bustamante, Jesus Rodriguez, Jon Flanders, Sam Gentile, Brian Loesgen, Charles Young, Richard Seroter, Scott Colestock, and Stephen Thomas, to name a few (guys, I haven’t left anyone out intentionally; this list is just off the top of my head).

Not to mention, you get a free MSDN or TechNet subscription during your award year, which is great for a technical enthusiast who wants to play with different things without the barrier of buying them.

I also need to thank everyone behind the scenes who nominated me for this award year.

Nandri!

Saravana