Recently, I shed some light on how maps are compiled to .NET assemblies. Perhaps one of the most frequently asked questions on microsoft.public.biztalk.* is "How do I call a map from C# or VB.NET?". This post attempts to answer that question and clarify a few things.
It is possible to run a map produced by the BizTalk 2004 mapper outside of BizTalk, and it is even possible, under certain conditions, to run the map on a machine that does not have BizTalk installed. The steps required to use a map outside of BizTalk are outlined below:
- Extract the XSL document produced by the mapper,
- If the map uses functoids (out-of-the-box or custom functoids), extract the XSL transformation arguments,
- Create a .NET System.Xml.Xsl.XslTransform object,
- Create a .NET System.Xml.Xsl.XsltArgumentList if there were any functoids in the map and instantiate the appropriate objects,
- Call Transform() with the XSL and, optionally, the XsltArgumentList.
In the list of steps above, step 3, parts of step 4, and step 5 have nothing to do with BizTalk: they are just plain .NET programming. Creating the XsltArgumentList, however, requires us to understand how the mapper saves functoids.
Extracting the XSL and the extension objects (if any) can be achieved by at least three different methods:
- If you have the map file (.btm) and can open it in Visual Studio 2003, you can right-click on the map file in the solution explorer and select "Validate Map". The output window will give you the path(s) to the XSL and the Extension Object XML. The links can be shift-clicked to retrieve the files,
- If you only have the compiled assembly, you can use Lutz Roeder's excellent .NET Reflector to extract the required information as strings,
- If you only have the compiled assembly, you can write some code that loads the assembly, creates an instance of the map object and calls the appropriate members. See the format of map assemblies.
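The third option can be sketched in a few lines of reflection code. This is only a sketch: the assembly and type names are placeholders, and I'm assuming the property names (XmlContent and XsltArgumentListContent) exposed by the TransformBase base class that compiled maps derive from.

```csharp
using System;
using System.Reflection;

class MapExtractor
{
    static void Main()
    {
        // Placeholders: point these at your own map assembly and map type.
        Assembly mapAssembly = Assembly.LoadFrom("MyMaps.dll");
        object map = mapAssembly.CreateInstance("MyMaps.Order2Invoice");
        Type mapType = map.GetType();

        // BizTalk 2004 maps derive from Microsoft.XLANGs.BaseTypes.TransformBase,
        // which exposes the XSL and the extension objects as string properties.
        string xsl = (string) mapType.GetProperty("XmlContent").GetValue(map, null);
        string extensionObjects = (string) mapType.GetProperty("XsltArgumentListContent").GetValue(map, null);

        Console.WriteLine(xsl);
        Console.WriteLine(extensionObjects);
    }
}
```

The strings you get back are exactly the two documents the next steps need: the XSL and the Extension Object XML.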
The only speed bump is the format of the Extension Object XML document. I extracted the extension objects associated with the map Scriptor_CallExternalAssembly from the "ExtendingMapper" SDK sample and formatted them. The document has the following shape (assembly and class names shortened to placeholders here, and the version is illustrative):

<ExtensionObjects>
  <ExtensionObject Namespace="http://schemas.microsoft.com/BizTalk/2003/ScriptNS0"
                   AssemblyName="MyHelpers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=f2aaad746c3d94f5"
                   ClassName="MyHelpers.StringFunctions" />
</ExtensionObjects>
Creating the extension objects is now very simple. For each ExtensionObject node, we need to load the assembly, create an instance of the given class and add the object, along with its namespace, to the XsltArgumentList. Of course, the map will only run if all the needed assemblies are available; this is true for custom assemblies as well as for out-of-the-box functoids. The code below does exactly this and the full solution can be downloaded here:
/// Transforms XML instances using a BizTalk map.
public class BizTalkMap
{
    /// Caches the XSLT stream.
    private Stream xsltStream;
    /// Caches the XSLT arguments stream.
    private Stream xsltArguments;
    /// Caches the XslTransform.
    private XslTransform xslTransform;
    /// Caches the XsltArgumentList.
    private XsltArgumentList xslArgumentList;

    /// <param name="XsltStream">Stream of XSLT as XML.</param>
    /// <param name="XsltArguments">Stream of Extension Objects as XML (may be null).</param>
    public BizTalkMap(Stream XsltStream, Stream XsltArguments)
    {
        xsltStream = XsltStream;
        xsltArguments = XsltArguments;
    }

    /// Transforms the given instance and returns the result as a stream.
    /// <param name="inXml">Stream of the instance to transform (XML).</param>
    /// <returns>Stream of the transformed XML.</returns>
    public Stream TransformInstance(Stream inXml)
    {
        XslTransform transform = Transform;
        XmlDocument xmlInputDoc = new XmlDocument();
        // Make sure we do not destroy the formatting
        xmlInputDoc.PreserveWhitespace = true;
        xmlInputDoc.Load(inXml);
        // Output stream
        MemoryStream outStream = new MemoryStream();
        XmlTextWriter xmlWriter = new XmlTextWriter(outStream, System.Text.Encoding.UTF8);
        // Formatting options
        xmlWriter.Formatting = Formatting.Indented;
        xmlWriter.Indentation = 2;
        // Perform transformation - we do not specify a resolver
        transform.Transform(xmlInputDoc, TransformArgs, xmlWriter, null);
        // Prepare the output stream
        xmlWriter.Flush();
        outStream.Position = 0;
        return outStream;
    }

    /// Gets an instance of XslTransform for the given XSL/extension objects.
    private XslTransform Transform
    {
        get
        {
            if (xslTransform == null)
            {
                // Create a new transform
                XmlTextReader xsltReader = new XmlTextReader(xsltStream);
                XslTransform transformTemp = new XslTransform();
                transformTemp.Load(xsltReader, (XmlResolver) null, GetType().Assembly.Evidence);
                // Cache the transform
                xslTransform = transformTemp;
            }
            return xslTransform;
        }
    }

    /// Gets an XsltArgumentList from a BizTalk Extension Object XML.
    private XsltArgumentList TransformArgs
    {
        get
        {
            if (xslArgumentList == null)
            {
                XmlDocument xmlExtension = new XmlDocument();
                XsltArgumentList xslArgList = new XsltArgumentList();
                if (xsltArguments != null)
                {
                    // Load the argument list and create all the needed instances
                    xmlExtension.Load(xsltArguments);
                    XmlNodeList xmlExtensionNodes = xmlExtension.SelectNodes("//ExtensionObjects/ExtensionObject");
                    foreach (XmlNode extObjNode in xmlExtensionNodes)
                    {
                        XmlAttributeCollection extAttributes = extObjNode.Attributes;
                        XmlNode namespaceNode = extAttributes.GetNamedItem("Namespace");
                        XmlNode assemblyNode = extAttributes.GetNamedItem("AssemblyName");
                        XmlNode classNode = extAttributes.GetNamedItem("ClassName");
                        Assembly extAssembly = Assembly.Load(assemblyNode.Value);
                        object extObj = extAssembly.CreateInstance(classNode.Value);
                        // Add the object along with its namespace
                        xslArgList.AddExtensionObject(namespaceNode.Value, extObj);
                    }
                }
                // Cache the list
                xslArgumentList = xslArgList;
            }
            return xslArgumentList;
        }
    }
}
This class can be used as follows:
FileStream fsXslt = null;
FileStream fsInput = null;
FileStream fsExtensions = null;
FileStream outStream = null;
try
{
    fsXslt = new FileStream(xsltPath, FileMode.Open, FileAccess.Read);
    fsInput = new FileStream(instancePath, FileMode.Open, FileAccess.Read);
    fsExtensions = (extensionPath != null) && (extensionPath.Length > 0) ? new FileStream(extensionPath, FileMode.Open, FileAccess.Read) : null;
    BizTalkMap map = new BizTalkMap(fsXslt, fsExtensions);
    Stream sOut = map.TransformInstance(fsInput);
    // Save the stream to a file
    string destPath = Path.Combine(Path.GetDirectoryName(instancePath), Path.GetFileName(instancePath) + ".trans.xml");
    outStream = new FileStream(destPath, FileMode.Create, FileAccess.Write);
    outStream.Write(((MemoryStream) sOut).ToArray(), 0, (int) sOut.Length);
}
finally
{
    if (fsXslt != null) fsXslt.Close();
    if (fsInput != null) fsInput.Close();
    if (fsExtensions != null) fsExtensions.Close();
    if (outStream != null) outStream.Close();
}
Okay, GotDotNet is way too slow getting stuff up. I guess they have to proofread and be careful since it is a public site. So, in the meantime, my friend Scott Woodgate (not my boss, which appears to be the impression some people have; Scott and I got a good laugh out of that) has let me put the doc on a private site of his. So this doc is now available at:
Take a look, read it over, use it when appropriate, and send feedback. It has a lot of useful information on how to gather information programmatically from the MessageBox. Hopefully all of this type of information will be exposed as perf counters, through a BTS UI, or through some APIs in the next release, but for now, here you go. I believe Paul Somers has also spent some time on this and might be making available a tool that converts these queries into perf counters you can administer and monitor. I hope this helps a lot of you out.
The MSBTS_ServiceInstanceSuspendedEvent WMI event is fired by the BizTalk Engine whenever a “message” is suspended. This happens (amongst other reasons) when all send retries fail (for example when a connected system is down) or whenever exceptions occur during the pipeline execution (for example when message validation fails).
For this little article I'd like to focus on the first category: outbound suspended messages caused by systems being temporarily unavailable. Usually admins like to receive notifications whenever one of their systems goes offline. You can set up an NT service that consumes BizTalk MSBTS_ServiceInstanceSuspendedEvents and notifies the admins, for example by e-mail (to the BizTalk admin and/or the appropriate system admin). They can then fix the problem and bring the failed system back up.
Now the problem is that you still have to deal with those suspended messages that sit in the MessageBox as a result of the failure. They simply have to be resubmitted, and this can be done using the HAT tool. This can be a quite repetitive task that is very prone to errors (try resending the wrong message). In many cases an automated solution would be better. Let's see what can be done:
One could react by upping the retry count to its maximum value. OK, this would automate things a lot, but it has one very big disadvantage: you lose your notifications because the message will never suspend 🙁 Who will bring the system back online? You NEED these notifications, so this isn't a viable solution: you have to lower the retry count back to an acceptable level. Is there another solution? Yes there is: add your own custom WMI events to BizTalk Server!
What if we could add extra custom events that give us notifications on the retry itself, without requiring a message to be suspended? This sounds very complicated but is in fact a very easy task when you use the Microsoft Enterprise Instrumentation Framework to create a custom event class.
For those who have never heard of it, this is what EIF can do for you! From the EIF README:
'The Microsoft Enterprise Instrumentation Framework (EIF) enables you to instrument .NET applications to provide better manageability in a production environment. EIF is the recommended approach for instrumenting .NET applications. It provides a unified API for instrumentation that uses the existing eventing, logging, and tracing mechanisms built into the Microsoft Windows® operating system, such as Windows Event Log, Windows Trace Log, and Windows Management Instrumentation (WMI). Members of an operations team can use existing monitoring tools to diagnose application health, faults, and other conditions.
An application instrumented with EIF can provide extensive information such as errors, warnings, audits, diagnostic events, and business-specific events.’
The EIF also forms the basis/prerequisite for the Microsoft .NET Logging Application Block. This block represents the 'new' way to do logging/exception management, replacing the older EMAB (Exception Management Application Block).
Let us – by means of example – add the following events to our BizTalk Server ‘event source’:
- TransportLostEvent: a system goes down for the 1st time = First failure
- TransportRecoveredEvent: a system comes back up again = A retry succeeds
I have used very basic sample classes for these events. Feel free to e-mail me to get the sample – I haven’t got uploads working yet 🙁
The context of the message will provide us with our basic building blocks and determine when these events have to be triggered. The ‘http://schemas.microsoft.com/BizTalk/2003/system-properties’ namespace contains 3 unpromoted properties named ActualRetryCount, RetryCount and RetryInterval.
- RetryInterval is a static value representing the value set on the send port.
- RetryCount, on the other hand, is (very confusingly) not a static value but the number of retries still available; the engine lowers it by 1 for each failed transmission attempt.
- ActualRetryCount, which by the way returns no results when you look for it in the help, is also a dynamic value; it is incremented by 1 for each transmission attempt.
So how can we make this all fit together, and from where can we trigger our events? There are two possible scenarios: a custom adapter and a custom pipeline component (for integrating with existing adapters).
Here’s a sample piece for a custom adapter:
'If message transmission succeeded
oProperty = msg.OriginalMessage.Context.Read("ActualRetryCount", "http://schemas.microsoft.com/BizTalk/2003/system-properties")
If Not oProperty Is Nothing Then
    'A previously failed send operation has now succeeded
    '(ActualRetryCount > 0 and the transmission was successful)
    If System.Convert.ToInt32(oProperty) > 0 Then
        myTransportRecovered = New TransportRecoveredEvent
        obj = msg.OriginalMessage.Context.Read(InterchangeIDProperty.Name.Name, InterchangeIDProperty.Name.Namespace)
        If Not IsNothing(obj) Then myTransportRecovered.InterchangeId = System.Convert.ToString(obj)
        'Continue to build and finally raise the event
    End If
End If

'If message transmission failed
'If ActualRetryCount is 0 then the system failed for the 1st time: raise TransportLostEvent
oProperty = msg.OriginalMessage.Context.Read("ActualRetryCount", RetryIntervalProperty.Name.Namespace)
If Not oProperty Is Nothing Then
    If System.Convert.ToInt32(oProperty) = 0 Then
        myTransportLost = New TransportLostEvent
        '...and so on
    End If
End If
A very nice idea would be to develop an “EIF enhanced” version of the adapter base classes that trigger additional events. Anyone interested?
I've also succeeded in making a sample pipeline component that raises the TransportLostEvent from inside a pipeline, creating the opportunity to integrate this functionality with existing adapters. I tested this very rapidly (and effectively) thanks to the pipeline component wizard by Martijn Hoogendoorn…
The downside for pipelines is that (if I'm correct) we have no notion of the current transmission attempt's result from inside the pipeline, so we cannot raise the TransportRecoveredEvent. What we do have is the result of the previous attempt and the ActualRetryCount, which makes it possible to trigger the TransportLostEvent upon the first message retry.
As a final note, always remember that WMI events do have a performance impact. So you may want to just send them to the WTE (trace event) sink if you don't need them and performance is an issue. The nice thing about EIF is that this can be done dynamically and doesn't require restarting the 'event source' or changing the pipeline. See the configuration section in the EIF readme for more info.
Here are some stats for the curious (average events per second per event sink):
- Windows Event Log 220
- MSMQ 120
- WMI 520
- SQL Server Basic Log 300
- SQL Server Flexible Log 70
- EMAB with the Windows Event Log 120
Do you remember those good old BizTalk 2002 days when you only had to make a couple of clicks through the BizTalk messaging manager to refresh a mapping? Well it’s time to share with you a nice technique that makes BizTalk 2004 mapping updates just as easy to make and deploy:
- For this technique to work, make sure you always compile and distribute your mappings into a 'mapping-only' assembly; this simplifies things a lot. Do not put your referenced schemas inside this mapping assembly.
- Always keep track of the version number your mapping assembly has (by setting the version number in the assembly info file). Remember that version numbers are in the format "Major.Minor.Build.Revision".
- Deploy your finished solution into your production environment. This can be done in a variety of ways: using an msi or you could just deploy assemblies manually using the deployment wizard. Usually you make this decision depending on the complexity of your solution.
- Now you discover a mapping error requiring an update after everything is already deployed. You are afraid to touch the installed assemblies: you know how complicated things can get when you undeploy and redeploy, and you don't want to break any dependencies. But it's not a breaking change, just a fix changing a couple of mapping links…
- The solution: just recompile your updated mappings project, up the version number, and deploy your updated assembly using the deployment wizard while leaving your old assembly in place. You can deploy your new assembly to the GAC and, optionally (it is not mandatory), also into BizTalk. This makes a total of two versions of your mapping assembly deployed side by side into BizTalk at the same time (the original and the updated one). This doesn't affect your running solution at all: because BizTalk references mappings by strong name, it will keep using the original assembly from the GAC.
- Now change the .NET assembly binding behavior by inserting a binding redirection into the XML config file.
You can do binding redirections at both the application level (BTSNTSvc.exe.config in our case) and at the machine level by updating the machine.config file. You can update config files manually or use the .NET Framework Configuration mmc snap-in, which we will use for simplicity's sake. Here's a sample binding redirect from my machine.config file (the version numbers shown are illustrative):

<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <assemblyIdentity name="StoraEnso.Mappings.Fenix" publicKeyToken="15b134fbf6ea6bf5" />
      <bindingRedirect oldVersion="1.0.0.0" newVersion="1.0.1.0" />
    </dependentAssembly>
  </assemblyBinding>
</runtime>
- Recycle (restart) the BizTalk host instances to recycle the app domains that use the mapping assembly. Voilà: BizTalk is now using the new version!
- Not happy with the result? Remove the binding redirect again (manually or using the snap-in), recycle the hosts, and you are back to the old version. It's just that easy!
For the curious: I've also tested BizTalk assembly redirections using a publisher policy assembly and it seems to work fine. Remember to up the Major or Minor part of the version for this to work. You then have the capability to create an msi that installs/uninstalls an updated mapping assembly.
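For reference, a publisher policy assembly is just a binding-redirect config linked into a signed assembly with the assembly linker. A sketch of the command line (file names and the key file are placeholders; note that the output name must follow the policy.Major.Minor.AssemblyName pattern):

```shell
rem Link the binding-redirect XML into a publisher policy assembly.
rem policyRedirect.config contains the assemblyBinding/bindingRedirect section.
al.exe /link:policyRedirect.config ^
       /out:policy.1.1.StoraEnso.Mappings.Fenix.dll ^
       /keyfile:mykey.snk ^
       /version:1.1.0.0
```

The resulting policy assembly is then installed into the GAC, which is what makes it (un)installable via an msi.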
Additionally, I'd like to mention that binding redirections can be used for BizTalk schema assemblies too. This, though, creates extra complications, because BizTalk will always match duplicate assemblies to a given message type if you have two versions deployed into BizTalk. As a consequence, you will have to make all your receive locations/pipelines what I call 'strongly typed'; this means extra property promotion work (promoting the type), and I had to play with SchemaWithNone in my code to make FFDASM and other pipeline components work correctly…
So I guess (never tested this myself) it's better to only GAC updated schema assemblies, if you want to use this technique for schema assemblies at all…
A final word of advice: technically it's possible to keep on patching forever and ever, though this would be a very bad practice 😉
You all probably know that MSMQ stands for 'Microsoft Message Queuing' and that MSMQT is the acronym for the BizTalk MSMQ adapter. A quick reminder for those who already forgot: the "T" in MSMQT stands for "Transactional" (not "Transport").
Probably less common knowledge is what MSMQ acknowledgments (ACKs) are, so I have chosen this as the topic for my first post. What are ACKs, and in what flavors do they come?
Acknowledgments are system-generated confirmation messages that are sent to the administration queues specified by the sending application. When an application sends a message, it can request that Message Queuing return acknowledgment messages indicating the success or failure of the original message.
- A reach acknowledgment tells you that the message reached its destination queue.
- A receive acknowledgment basically tells you that some application successfully received a message from a queue.
Both acknowledgment types have positive (ACK) and negative (NACK) variants indicating, straightforwardly enough, success or failure: positive arrival acknowledgments, positive read acknowledgments, negative arrival acknowledgments, and negative read acknowledgments.
If you want to get rapidly acquainted with acknowledgments, you should definitely read this excellent article on MSDN: Reliable Messaging with MSMQ and .NET (Building Distributed Applications).
Having no prior experience with MSMQ acknowledgments myself, I started my first tests with big expectations. I built a .NET application and sent my first MSMQ message with the ACK level set to full reach and full receive, using a non-transactional admin queue. (My machine is a simple MSMQ independent client, so I send to other machines using outgoing queues.) Also, not wanting to be too haughty, I targeted my first message send at a regular MSMQ server. As expected, my message arrived quickly at its destination queue, where I subsequently could read the message through code.
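In case you want to reproduce the test, the sending side is only a few lines of System.Messaging code. A sketch, with queue paths that are placeholders for your own environment:

```csharp
using System.Messaging;

class AckSender
{
    static void Main()
    {
        // Placeholder paths: the remote destination queue and the local admin queue.
        MessageQueue destination = new MessageQueue(@"FormatName:DIRECT=OS:targetmachine\private$\orders");
        Message message = new Message("Hello MSMQ");

        // Ask MSMQ to confirm both arrival at the queue (reach)
        // and a successful receive by an application (receive).
        message.AcknowledgeType = AcknowledgeTypes.FullReachQueue | AcknowledgeTypes.FullReceive;
        message.AdministrationQueue = new MessageQueue(@".\private$\adminqueue");

        destination.Send(message);
    }
}
```

The acknowledgment messages then show up in the administration queue, where you can inspect their MessageType and Acknowledgment properties.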
Wonderful, I had two ACKs in my sending machine's admin queue:
ACK1 MSMQ message class = The message reached the queue.
ACK2 MSMQ message class = The message was received from the queue.
Since this first test was a big success, I wondered what would happen if I sent the same message again, but this time targeted at the BizTalk MSMQT adapter.
I quickly set up a BizTalk MSMQT receive location with a passthru pipeline and a file send port subscribing to the MSMQT port. After BizTalk created the flat output message, I was really surprised, even a bit disappointed, because I found only one acknowledgment in my admin queue:
ACK1 MSMQ message class = The message reached the queue.
Where the heck did the receive ACK go? Did I make a mistake? I did have a valid subscription, and the message was consumed correctly because I had a file. After doing some research, newsgroup postings and e-mailing, a very helpful person from MS gave me the answer: 'For BizTalk, multiple "applications" can receive the message and MSMQT doesn't know how many got it, so it was decided that receive ACKs would only confuse the customers.'
Slowly getting over my depression, I started focusing on the ACK I did receive and began to think about the opportunities it creates: you now have a level of traceability at the sender's side. For my next test I sent a message to an invalid MSMQT queue; more precisely, I used the name of a queue that does not exist on MSMQT. Admin queue:
NACK1 MSMQ message class = The destination queue does not exist
- Event viewer: Destination queue ‘wrong queue’ cannot be reached. For local BizTalk queues, the receive location may not exist or may be disabled.
- HAT: Nothing
Conclusion: if the sending of a message fails, the sender receives a NACK message in the admin queue, and eventually, after exhaustion of the retries, MSMQ moves the bad message from the outgoing queue to the dead-letter queue (optional: see below for more information). Darren Jefford advises always using passthru pipelines for MSMQT receive locations. Now let's ignore his advice and try the opposite: what would happen if I sent a message to an existing queue but an exception were raised from inside the receive pipeline?
I quickly configured my MSMQT receive location to use a pipeline containing the flat file disassembler (FFDASM) component. Next, I intentionally sent a malformed flat file message to the MSMQT queue, generating a parsing error. Here is what I got in my admin queue:
NACK1 MSMQ message class = The destination queue does not exist
I expected a more meaningful description, for example 'Parsing failed', but I received the same, now totally irrelevant, description. As a consequence, the message sender cannot tell the difference between these two types of errors. I asked my MS contact again and he gave me this explanation: 'because we cannot define new values for MSMQMessage.Class, we were pretty much stuck with the existing ones.' This means, I think, that they were stuck with the standard MSMQ API error enumerations.
So here's my conclusion: if you send messages to MSMQT receive locations you should always choose between
(A) Use a passthru pipeline on the receive location or
(B) Use a custom pipeline but specify a dead-letter queue for the message and/or implement acknowledgments/logic to handle failed messages due to pipeline execution errors.
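For option (B), the dead-letter part on the sender's side is just a property on the outgoing message. A sketch using System.Messaging (the queue path is a placeholder):

```csharp
using System;
using System.Messaging;

class DeadLetterSender
{
    static void Main()
    {
        // Placeholder path: the MSMQT-fronted destination queue.
        MessageQueue queue = new MessageQueue(@"FormatName:DIRECT=OS:btsserver\private$\orders");
        Message message = new Message("some payload");

        // Route the message to the dead-letter queue if delivery ultimately
        // fails, and bound how long MSMQ keeps trying to deliver it.
        message.UseDeadLetterQueue = true;
        message.TimeToReachQueue = TimeSpan.FromMinutes(5);

        queue.Send(message);
    }
}
```

Combined with the NACKs in the admin queue, this gives the sender both a notification and a place to recover the failed message from.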
For the sake of completeness, I also want to mention that MSMQT sends positive ACKs for each successful message receipt. This assures you that your message was handled fine and survived pipeline execution without any error.
Now, having all these extra bits of information regarding ACKs, I'd like to finish by issuing a little warning. An example of bad design would be to create some kind of service that sends messages one by one to an MSMQT node, waiting between each send for an acknowledgment. This would probably work from a functionality point of view, but never from a performance perspective. Read this: 'Using the Right Tool for the Right Job'.
Firstly, apologies for the lack of activity on my blog. I have several reasonably sized pieces of work that I've been trying to finish off ready for posting; this is one of those pieces, and I wanted to get it out now since I'm off to Redmond for a couple of weeks and if I don't do it now, it ain't gonna happen for another month!
So, we all understand the value and importance of testing, but there is of course a cost in developing the volume of test cases needed to get sufficient test coverage for a given BizTalk solution (or any other solution, for that matter). There are many aspects to testing enterprise systems, including unit tests, functional tests, integration tests, performance tests, user acceptance tests and stress tests. Functional and unit tests typically benefit from being automated; this provides a metric to measure progress during the development process as well as helping to prevent regressions. Of course, after a solution has gone live, regression testing should be performed on any patches applied to it, so again, a suite of automated tests can give a high degree of confidence that a regression will not take place.
Recently I started to use NUnit, and I have to say I really like it, but it didn't quite meet my needs for testing end-to-end BizTalk scenarios. What was needed was a framework to enable test cases to be created very quickly with little code. I also wanted to be able to change URLs for receive and send ports etc. without recompiling my test code, so that I could use the same test cases in my development and production environments.
Enter the "Test Framework for Rapid Test Case Development"! (I really need to think of a decent name for it 🙂). The approach the Framework takes is very simple. Firstly, it treats BizTalk as a black box for the most part. It uses the notion that test cases are composed of many discrete test steps which may be combined to form a single test case. A test case has three phases: setup, execution and tear down; each of these phases may comprise multiple test steps. Further, a test step may use a validation step, the purpose of which is to validate that certain conditions are true; for example, a stream containing Xml may be validated against a schema. The cleanup phase is always executed for each test that is started, the approach being that each test should leave the system in the same state it found it, as far as possible.
The individual test steps are .Net classes that are re-usable across different tests and even within the same test. The Framework creates them by inspecting the type name and assembly path specified in the Xml for that step. The Xml configuration is passed to the test step; the format of the configuration for each step is entirely up to the test step, so only the test step needs to understand its configuration. The interface that test steps must implement is shown below, along with the Xml for the HTTP post step:
public interface ITestStep
{
    void Execute(XmlNode testConfig);
}

<TestStep assemblyPath="" typeName="Microsoft.Services.UK.TestFramework.HttpPostStep">
  …
</TestStep>
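To make the contract concrete, here is a minimal sketch of what a driver could do with one of these TestStep elements. The real TestExecuter is of course more elaborate, and error handling is omitted here:

```csharp
using System;
using System.Reflection;
using System.Xml;

class StepRunner
{
    static void RunStep(XmlNode stepConfig)
    {
        string assemblyPath = stepConfig.Attributes["assemblyPath"].Value;
        string typeName = stepConfig.Attributes["typeName"].Value;

        // An empty assemblyPath means the type lives in an already-loaded assembly.
        Type stepType = (assemblyPath.Length == 0)
            ? Type.GetType(typeName)
            : Assembly.LoadFrom(assemblyPath).GetType(typeName);

        // Instantiate the step and hand it its own Xml configuration.
        ITestStep step = (ITestStep) Activator.CreateInstance(stepType);
        step.Execute(stepConfig);
    }
}
```

Because the step is handed its own XmlNode, new step types can be added without the driver ever knowing their configuration format.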
The advantage of this approach is that test cases can be developed very quickly by simply authoring an Xml configuration file; the Xml could even be generated. Once the Xml is created, a test is driven by constructing the TestExecuter and then calling Run. The following code snippet illustrates how it can be run from NUnit:
[Test]
public void Http_To_File_Test_01()
{
    TestExecuter testDriver = new TestExecuter(@".\TestCases\Http_To_File_Test_01.xml");
    testDriver.Run();
}
Let's look at a test case for a BizTalk scenario. An Xml file is submitted to a one-way HTTP receive location; the receive port is bound to an orchestration that is activated and then waits for a second message received on a FILE receive location, at which point the orchestration transmits a message to a FILE send port. This scenario could be tested using an HTTP post step, a FILE creation step, and a FILE validation step which reads the FILE written to disc, validates it against a specific schema and performs a number of XPath queries to check that key fields in the data are correct.
The Test Framework is driven from an Xml configuration file, the Xml for the test case described above can be seen below:
<TestStep assemblyPath="" typeName="Microsoft.Services.UK.TestFramework.HttpPostStep">
  …
</TestStep>
<TestStep assemblyPath="" typeName="Microsoft.Services.UK.TestFramework.FileCreateStep">
  …
</TestStep>
<TestStep assemblyPath="" typeName="Microsoft.Services.UK.TestFramework.FileValidateStep">
  <ValidationStep assemblyPath="" typeName="Microsoft.Services.UK.TestFramework.XmlValidationStep">
    <XPathValidation query="/*[local-name()='PurchaseOrder' and namespace-uri()='http://SendMail.PurchaseOrder']/*[local-name()='PONumber' and namespace-uri()='']">PONumber_0</XPathValidation>
    …
  </ValidationStep>
</TestStep>
<!-- Test cleanup: test cases should always leave the system in the state they found it -->
<TestStep assemblyPath="" typeName="Microsoft.Services.UK.TestFramework.FileDeleteStep">
  <!-- Clean up .\Rec_01\InDoc1.xml in case the test failed!! -->
  …
</TestStep>
The output from the test case is written to the NUnit console window and is shown below:
S T A R T
Test: Http_To_File_Test_01 started @ 15:50:18.097 18/09/2004
Setup Test: Http_To_File_Test_01
Execute Test: Http_To_File_Test_01
Step: Microsoft.Services.UK.TestFramework.HttpPostStep started @ 15:50:18.097 18/09/2004
Info: HttpRequestResponseStep about to post data from File: .\TestData\InDoc1.xml to the Url: http://localhost/TestFrameworkDemo/BTSHTTPReceive.dll
Data: HttpPostStep response data
Step: Microsoft.Services.UK.TestFramework.HttpPostStep ended @ 15:50:19.469 18/09/2004
Step: Microsoft.Services.UK.TestFramework.FileCreateStep started @ 15:50:19.469 18/09/2004
Info: FileCreateStep about to copy the data from File: .\TestData\InDoc1.xml to the File: .\Rec_01\InDoc1.xml
Step: Microsoft.Services.UK.TestFramework.FileCreateStep ended @ 15:50:19.469 18/09/2004
Step: Microsoft.Services.UK.TestFramework.FileValidateStep started @ 15:50:19.469 18/09/2004
Info: FileXmlValidateStep validating file: .\Rec_02\InDoc1.xml
Data: File data to be validated
<?xml version="1.0" encoding="utf-8"?><ns0:PurchaseOrder xmlns:ns0="http://SendMail.PurchaseOrder">
Validation: Microsoft.Services.UK.TestFramework.XmlValidationStep started @ 15:50:19.479 18/09/2004
Info: XmlValidationStep evaluting XPath /*[local-name()=’PurchaseOrder’ and namespace-uri()=’http://SendMail.PurchaseOrder’]/*[local-name()=’PONumber’ and namespace-uri()=”] equals “PONumber_0”
Validation: Microsoft.Services.UK.TestFramework.XmlValidationStep ended @ 15:50:19.479 18/09/2004
Step: Microsoft.Services.UK.TestFramework.FileValidateStep ended @ 15:50:19.479 18/09/2004
Tear Down Test: Http_To_File_Test_01
Test: Http_To_File_Test_01 ended @ 15:50:19.479 18/09/2004
P A S S
I’ve created a workspace on GotDotNet to host the Test Framework, the first version has the following test steps:
And the following validation steps:
I've used this version of the Framework to test some pretty complex BizTalk scenarios, but there are still plenty of other test steps that would be useful. I've incorporated into the Framework some very good ideas from my colleague Greg Beech, who has been testing BizTalk enterprise solutions in the Microsoft Solution Development Center for the last three years. When we compared notes, we found that we had similar approaches: one tied the steps together using code, the other using Xml.
What would be really cool is if we could get some community collaboration behind this: if other people find the Framework useful and contribute other test and validation steps, I will merge them into the Framework so that everyone benefits from them. The Readme.htm in the workspace contains more detailed information on the Framework.
Okay, so I don't have a link for it, but I have submitted it, so hopefully it will show up shortly. I have written a paper entitled BizTalk 2004 Advanced MessageBox Queries which is designed to help you automate a lot of your operational health management work and also perform advanced troubleshooting of your system. My original version had what Scott Woodgate referred to as a lot of "Lee'isms", so he edited it a bit, but still left some of my humor. 🙂 That version has for the time being been posted to GotDotNet (I don't have a link yet; I will update this with the link, but hopefully it will pop up this weekend, so look for it). An official version will be posted to MSDN shortly, but my friend Syd is now working to formalize it (which means take out all of the fun and leave it very dry; apparently my humor does not always translate well 🙂). Like I said, I will post the link when it becomes available, but look for it. Eventually I will probably take it off GotDotNet and just point to the MSDN version, as we would do updates to that doc. As the doc points out, we are attempting to automate and make easier as much of this as we can, but I and the rest of the product team realized you bought the current version and need whatever help you can get, now. So this is the first in a series of papers I will hopefully get to (probably most will be much more official and you won't see my name on them). Hope this helps all of you out there. Let me know if there are any other types of information you need to find. A special thanks to Paul Somers, who helped identify some of the useful queries. I don't know if he has a blog, but he has a cool tool you can find on GotDotNet which took 1st place in the BizTalk Dev Competition. Congrats to Paul.