BizTalk EDI Batching Custom Naming

So there are a few companies out there that want to send out a file with a standard naming convention.

I thought this would be easy: simply assign the file name to the transactions, let the batching orchestration re-attach the context properties after its batching process completes and sends the batch out, and then use %SourceFileName% in the send port.

Wrong. I got a file literally named %SourceFileName% in the folder.

To get a custom filename you have to put the filename into the payload of the message somewhere. In my case I needed to put the batch run ID in the filename.

How I did it was to put it in the ST03 element (since that element gets rewritten in the EDI send pipeline anyway).

I then created an orchestration that picks up the batched messages using the filter recommendations in the party definition. Since it is a batched message, there is no schema that can be created to actually represent the data the batching orchestration produces, so I used an XmlDocument message.

Now came the problem of extracting the value from the message. I thought it would be as easy as using the xpath function:

BatchId = System.Convert.ToString(xpath(InMsg.MessagePart, "//ST03[1]/text()"));

However, I got the error that the first argument needed to be a message; since InMsg was not a typed message but an XLANGs.BaseTypes.Any message, I converted it to an XmlDocument in an expression shape:

TempXML = new System.Xml.XmlDocument();
TempXML = InMsg.MessagePart;

However, I could not use the XLANG xpath function to extract the value because I did not have a true message, so I had to drop down to the .NET XPath classes.

I had to create the following variables (in an atomic scope, because it was late and I was tired and did not want to check whether the variables were serializable or not):

Variable Name      .NET Type
TempXML            System.Xml.XmlDocument
StringRdr          System.IO.StringReader
XPathDoc           System.Xml.XPath.XPathDocument
XPathNav           System.Xml.XPath.XPathNavigator
XPathExpression    System.Xml.XPath.XPathExpression
XPathNodeIterator  System.Xml.XPath.XPathNodeIterator

In the expression shape I have the following code:

TempXML = InMsg.MessagePart;
StringRdr = new System.IO.StringReader(TempXML.InnerXml);
XPathDoc = new System.Xml.XPath.XPathDocument(StringRdr);
XPathNav = XPathDoc.CreateNavigator();
XPathExpression = XPathNav.Compile("//ST03[1]");
XPathNodeIterator = XPathNav.Select(XPathExpression);
while (XPathNodeIterator.MoveNext())
{
    BatchId = XPathNodeIterator.Current.Value;
}

This extracts the value from the message into BatchId; now I can set the output file name to whatever is in the ST03 element.
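The last step, wiring BatchId into the outgoing file name, can be sketched roughly as follows. This is a message assignment shape; the message names and the ".xml" extension here are illustrative, and it assumes the FILE send port's file name is set to the %SourceFileName% macro, which resolves against the FILE.ReceivedFileName context property:

```csharp
// Message Assignment shape (sketch): copy the batched message and set
// the context property that the %SourceFileName% macro resolves against.
OutMsg = InMsg;
OutMsg(FILE.ReceivedFileName) = BatchId + ".xml";
```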

ACSUG February Meeting – “Oslo” modelling platform overview – Jeremy Boyd

Here are the details of the next Auckland Connected Systems User Group meeting:
"Oslo" modelling platform overview – Jeremy Boyd
Jeremy Boyd (JB) is an experienced solutions architect, developer and consultant with over 7 years' experience in the New Zealand market. He is a Microsoft MVP and Regional Director, and a Director of Mindscape.
JB will be […]

Reconfigure BizTalk with MsBuild

The other day I was having an issue with our build server, so I decided to write a script to reconfigure BizTalk on our build or development servers. My plan was to run this on a regular schedule to ensure our build servers were kept clean.

I thought I'd write a little about this as it may be useful to others. Note the script is aimed at a single machine hosting BizTalk 2006 R2 and SQL Server.

Before I started I exported the configuration of our BizTalk Group to a file and then amended the credentials etc.

The list of tasks I would perform to clean and reconfigure our server is as follows:

1. Delete the log file. When I run the BizTalk configuration tool from the command line I make it log to a file for troubleshooting any problems, so the first clean-up activity is to delete any old logs.

2. Stop the following services, i.e. any services which may want to access the BizTalk databases:

  • W3SVC
  • SQLServerAgent

3. Restart the following services:

  • MsSQLServer
  • MsDTSServer

4. Remove the EDI BAM definitions. When EDI is set up, the BAM definitions for this feature are configured, creating the usual BAM components. We need to remove these using BM.exe before we unconfigure BizTalk, because unconfiguring the group will not remove the BAM definitions.

5. Use the BizTalk Configuration Editor to unconfigure the group.

6. Stop the WMI service and any dependent services, as advised by the BizTalk documentation.

7. Delete the backup of the SSO key. Note this is just for a development machine; you need to be careful about this on any real environment. I just delete the backup file here so I can recreate it when configuring the group.

8. Unregister and delete the SSNS application for BAM Alerts, using NsControl.exe.

9. Run a SQL script to drop the BizTalk databases.

10. Delete the share and folder used by BAM Alerts. These are not removed when the group is unconfigured, so this needs to be done manually.

11. Start the following services, ensuring anything that may be used while configuring BizTalk is running:

  • Winmgmt (and any dependent services which I stopped earlier)
  • W3SVC

12. Run the BizTalk Configuration Editor, passing in the file containing the desired configuration.

To implement this I created a Visual Studio solution, wrote an MsBuild script and used some of the Microsoft.Sdc tasks. The picture below shows the files in this solution.

It's pretty simple, and I will provide a link to the sample at the end of this post.
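To give a feel for the shape of such a script, here is a minimal sketch using only core MsBuild tasks (Delete, Exec). The service names come from the task list above, but the paths, file names and Configuration.exe switches are illustrative and from memory, so check them against your own environment:

```xml
<Project DefaultTargets="Reconfigure"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <ConfigFile>C:\Build\BizTalkConfig.xml</ConfigFile>
    <LogFile>C:\Build\Logs\BtsConfig.log</LogFile>
  </PropertyGroup>

  <Target Name="Reconfigure">
    <!-- 1. Remove old configuration logs -->
    <Delete Files="$(LogFile)" />

    <!-- 2. Stop services that hold connections to the BizTalk databases -->
    <Exec Command="net stop W3SVC" ContinueOnError="true" />
    <Exec Command="net stop SQLServerAgent" ContinueOnError="true" />

    <!-- 3. Restart SQL Server (/y answers the dependent-services prompt) -->
    <Exec Command="net stop MsSQLServer /y" ContinueOnError="true" />
    <Exec Command="net start MsSQLServer" />

    <!-- ... remove the EDI BAM definitions with BM.exe, unconfigure the
         group, drop the databases, clean up BAM Alerts, etc. ... -->

    <!-- Final step: silent reconfiguration from the exported answer file -->
    <Exec Command='"%ProgramFiles%\Microsoft BizTalk Server 2006\Configuration.exe" /s "$(ConfigFile)" /l "$(LogFile)"' />
  </Target>
</Project>
```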

Some of the other ways I have extended this script since include:

  • Using an MsBuild task to register some custom adapters
  • Using an MsBuild task to register the WSE2 and Enterprise LOB adapters
  • Using an MsBuild task to add adapter handlers
  • Using the MsBuild configuration dictionary stuff I have blogged about in previous posts to configure the BizTalk Configuration xml file for different environments

The link to the sample for this post is: http://www.box.net/shared/t4k0m6ps5y

FOR XML PATH

In creating XML data from table data, I came across the need to create empty elements even when there are null values in the columns I am querying.

I first used this logic to force the generation of the tag:

SELECT EmployeeID AS "@EmpID",
       FirstName AS "EmpName/First",
       MiddleName AS "EmpName/Middle",
       LastName AS "EmpName/Last"
FROM HumanResources.Employee E, Person.Contact C
WHERE E.EmployeeID = C.ContactID
  AND E.EmployeeID = 1
FOR XML PATH, ELEMENTS XSINIL

However that returned the result:

<row xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" EmpID="1">
  <EmpName>
    <First>Gustavo</First>
    <Middle xsi:nil="true" />
    <Last>Achong</Last>
  </EmpName>
</row>

I just wanted:

<row EmpID="1">
  <EmpName>
    <First>Gustavo</First>
    <Middle></Middle>
    <Last>Achong</Last>
  </EmpName>
</row>

So I changed the code to:

SELECT EmployeeID AS "@EmpID",
       ISNULL(FirstName, '') AS "EmpName/First",
       ISNULL(MiddleName, '') AS "EmpName/Middle",
       ISNULL(LastName, '') AS "EmpName/Last"
FROM HumanResources.Employee E, Person.Contact C
WHERE E.EmployeeID = C.ContactID
  AND E.EmployeeID = 1
FOR XML PATH, ELEMENTS

‘Oslo’ SDK – January 2009 CTP

The latest CTP version of the Oslo SDK was released a few days ago. I spent some time yesterday installing it and having a look. From a functionality perspective, there is not very much difference to the October 2008 CTP released after last year’s PDC. However, if you open up the main assemblies in Reflector, it quickly becomes apparent that the code has undergone significant refactoring and improvement. The code looks much tidier and closer to production quality. For example, the October 2008 CTP contained generated parser/lexer code for MGrammar. In the latest version, this has been removed, and it appears that the parser code is now dynamically generated at runtime (the ‘preferred’ Oslo approach). There has been lots of tidying up done in terms of type and method names. Additional functionality has been added to manage various issues, and the entire code base looks tighter and better constructed.

In terms of new functionality, this has been discussed elsewhere. See, for example, http://www.alexthissen.nl/blogs/main/archive/2009/01/31/improvements-and-changes-to-oslo-sdk-and-repository-in-january-ctp.aspx. Most attention has been given to the ability to include actions on the RHS of token productions. In the work we have undertaken to date, we have come across at least one situation which requires this new feature, and which we could not properly address in the October 2008 CTP.

Perhaps the most intriguing aspect of the January 2009 CTP is the way this new feature is described in the Release Notes. I suspect that Microsoft has inadvertently let slip a feature that they didn’t mean to go public on. The Release Notes published on the web site state that:

“Any production in a token can now have a code action or a graph action (formerly known as term construction)! You can now specify a return type for a token definition in the case of code actions, similar to a syntax definition.”

In the October 2008 CTP, actions are limited only to MGraph expressions. An action is an optional part of a production that controls the output of the MGraph abstract syntax tree created by the parser. As far as I can tell, this is still the case in the new CTP. Unlike many similar technologies, the CTP version of MGrammar does not support the inclusion of code statements as semantic actions. This was discussed by Clemens Szyperski (an Oslo architect) in a comment to the blog article at http://weblogs.asp.net/fbouma/archive/2008/11/05/designing-a-language-is-hard-and-m-won-t-change-that.aspx, and the suggestion appears to be that this is a deliberate strategy in order to ensure that MGrammar remains (relatively) simple to write and focussed only on composable DSL creation.

The Release Notes statement suggests that Microsoft is looking at including code actions in MGrammar. As I say, it would appear that this feature is not actually supported in the January 2009 CTP. If it is, there is certainly no documentation explaining how to use this feature. Interestingly, this may be the explanation for a feature within the October 2008 CTP which seems to have disappeared from the current version. In the previous CTP, the MGrammar assembly included code for parsing C# statements. The parser didn’t appear to be designed as a full-blown C# parser, but looked like it was designed to parse code statements and expressions. This code appears to be missing from the January 2009 CTP.

It is generally a fool’s errand to speculate on what is happening behind the scenes, and I certainly have no special insight or knowledge about Microsoft’s intentions. However, I can’t help wondering if Microsoft has accidentally let us see that they are considering supporting code actions in MGrammar when it is released, and have built code to support this feature which they do not wish to make public at the current time. If this is the case, there is no guarantee that this feature will make it into the final release. For my part, I have been thinking about this issue for a few months now, and am undecided, myself, as to the desirability of supporting full-blown semantic actions in this fashion. The issue, I think, is about how useful this feature will really be in mainstream DSL creation. Does the reduced problem domain of a domain-specific language imply language simplicity that generally avoids the need to handle complex semantics at the parser level? Clearly, there is no fundamental reason why a DSL should not exhibit such complexity, but if the vast majority of DSLs are inherently simple, maybe it would be wiser to stick to the current labelled graph-only model employed by MGrammar parsers, and work around this restriction. As I say, I am undecided. It may be that the Oslo team are currently also uncertain of the best strategy. It would be interesting to hear views from the wider modelling community.

In a related issue, am I the only person to spot an uncanny philosophical resemblance between MGrammar and Labelled BNF (LBNF)? Did LBNF have any bearing on the Oslo team’s thinking? The mechanics of the Oslo approach are different from, and IMHO generally superior to, those of LBNF, but some of the underlying thinking is similar, including the emphasis on creation and shaping of labelled graph ASTs.