by community-syndication | Jan 16, 2008 | BizTalk Community Blogs via Syndication
It’s that wonderful time of the year again. I have four fundamental career goals for 2008:
- Provide more value to customers through improved training content and products
- Increase the effectiveness of our website
- Publish more technical/training content on this blog
- Follow the Oslo wave and explore new areas of interest
I also have one crazy personal goal for 2008:
- Run my first marathon
I have a bunch of other personal goals that I won’t bore you with but suffice it to say that 2008 is going to be an interesting year in a lot of ways — a year of huge change (hmm, perhaps I should run for President).
by community-syndication | Jan 15, 2008 | BizTalk Community Blogs via Syndication
W3C has released new drafts of the Service Modeling Language specifications. The Service Modeling Language (SML) Working Group has published the third Working Drafts of Service Modeling Language, Version 1.1 and Service Modeling Language Interchange Format…(read more)
by Richard | Jan 15, 2008 | BizTalk Community Blogs via Syndication
Why loose coupling?
There was a question in Microsoft BizTalk forums the other day about how one could implement a Scatter and Gather pattern in a more loosely coupled fashion.
Most examples of this pattern in BizTalk use the Call shape functionality available in BizTalk orchestrations. This, however, creates a hard coupling between the “Scatter orchestration” and its “partner orchestrations”. The downside is that when one adds or removes a partner, the whole solution has to be recompiled and redeployed.
If one could instead use the publish-subscribe architecture of the MessageBox to route messages between the “Gather orchestration” and its partners, it would be possible to add partners without having to worry about the rest of the solution. This post shows an example of how to implement such a solution.
The BizTalk process in steps
> NOTE: Notice the difference between the **Partners**Request, the **Partners**Response, the **Partner**Request and the **Partner**Response messages. The names are unfortunately very similar.
>
> The **Partners**Request and **Partners**Response messages are used for communication between the Scatter and the Gather orchestrations. It’s also a **Partners**Request message that activates the process.
>
> The **Partner**Request and **Partner**Response messages are used for communication between the Scatter and Gather orchestrations and all the partner orchestrations.
- Request and scatter
A PartnersRequest message is received. This message is empty and is only used to activate the process in this example scenario. The PartnersRequest message is consumed by the Scatter orchestration, which creates one PartnerRequest message. The orchestration also generates a unique key called a RequestID and starts a correlation combining that ID with the PartnersRequest MessageType. Finally it writes the generated RequestID to the request message’s context (RequestID is a MessageContextPropertyBase-based context property), posts the PartnerRequest message to the MessageBox, and dehydrates itself.
- Partners
All the enlisted Partner orchestrations pick up the PartnerRequest message from the MessageBox. These orchestrations then communicate with their specific data sources (a service, database, file or whatever) and receive a response. Finally they transform the response they received and create a PartnerResponse message that is posted to the MessageBox. Notice that the RequestID generated by the Scatter orchestration is also part of the context of the newly created PartnerResponse message.
- Gather
The PartnerResponse messages are routed to the Gather orchestration. This orchestration uses a Singleton pattern based on the RequestID that all PartnerResponse messages carry in their context. This means it receives all the PartnerResponse messages containing the same RequestID into the same orchestration instance (that is, all the Partners that were activated by the request message sent from one Scatterer). For each message it receives, it adds that message’s price to a total price variable. When the Gather orchestration has received all the PartnerResponse messages (the orchestration knows how many Partner responses to expect from one Scatterer orchestration, and we can time out if we don’t get them all within a timeframe), the calculated total price is written to a PartnersResponse message.
- Response
This message is routed back to the Scatter orchestration using the correlation it initialized at the start. It is this orchestration that finally sends the outgoing message (a PartnersResponse message).
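Taken together, the steps above amount to a correlation-keyed aggregation. As a rough, language-neutral sketch of the Gather step (written in Python purely for illustration; the class name and hard-coded partner count are invented, since in BizTalk the real work happens in orchestration shapes and promoted context properties):

```python
# Hypothetical sketch of the Gather step: responses are grouped by the
# RequestID the Scatter step stamped on each PartnerRequest, and a total
# is released only once every expected partner has answered.
from collections import defaultdict
from dataclasses import dataclass

EXPECTED_PARTNERS = 3  # the example solution hard-codes three partners

@dataclass
class PartnerResponse:
    request_id: str   # promoted context property correlating back to the Scatterer
    price: float

def gather(responses):
    """Aggregate partner prices per RequestID; return totals only for
    requests whose full set of responses has arrived."""
    buckets = defaultdict(list)
    for resp in responses:
        buckets[resp.request_id].append(resp.price)
    return {
        rid: sum(prices)
        for rid, prices in buckets.items()
        if len(prices) == EXPECTED_PARTNERS
    }

responses = [
    PartnerResponse("req-1", 10.0),
    PartnerResponse("req-1", 20.0),
    PartnerResponse("req-1", 12.5),
    PartnerResponse("req-2", 5.0),   # still waiting on two partners
]
print(gather(responses))  # {'req-1': 42.5}
```

The real orchestration would also arm a timeout around the receive loop, as the post notes, rather than silently holding incomplete requests forever.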
Example solution
An example of the implementation can be downloaded from here.
The solution contains five different schemas.
- PartnersRequest
Used for initializing the process.
- PartnersResponse
Sent from the Gather orchestration to the Scatter orchestration. It’s also the final result and outgoing message from the process.
- PartnerRequest
Picked up by, and activates, all enlisted Partner orchestrations.
- PartnerResponse
Sent from the Partner orchestrations to the Gather orchestration, containing the result from the Partner Service.
- LooselyCoupledScatterGatherExampleProperties
Property schema for storing the RequestID and for correlating all the PartnerResponses, as well as the final PartnersResponse, back to the Scatter orchestration.
Five orchestrations
- Scatterer orchestration
The “main orchestration” that receives a request message from outside and “scatters” partner requests to all the partner orchestrations.
- Gatherer orchestration
Gathers all the responses from the partners and transforms these into a reply that is routed back to the Scatterer and back out.
- Partner1, Partner2 and Partner3 orchestrations
Partner orchestrations that communicate with different services and receive price information.
Setting up and testing the example solution – it’s easy!
When the solution is built and deployed, one needs to set up and bind two ports: one outgoing port and one incoming port (the latter could also be a Request-Response port by changing the port type in the Scatterer orchestration). That’s it!
Enlist and start everything by dropping a PartnersRequest test message (you’ll find one among the zipped files) in the incoming folder. A PartnersResponse message should then be published in the outgoing folder, containing a calculated price from all the Partner orchestrations.
Test message and a binding file are part of the zipped solution.
What would be different in a “real life” solution?
I’ve made some major simplifications in this example to make the concept easy to set up and test. These areas would be very different in a “real” solution.
Partner Services
The Partner orchestrations are very simple. They actually don’t communicate with the outside world at all. All they do is set a hard-coded price and post a response. In a real solution these would not be part of the same solution as the Scatter and Gather orchestrations (otherwise we would be forced to redeploy when adding a Partner orchestration to the dll).
The Partner orchestrations would also communicate with some sort of outside source, like a web service or database. That would however complicate the setup, so I’ve skipped that part in the example.
Managing partners
One of the benefits of a loosely coupled implementation is the possibility to add and remove Partner orchestrations without having to redeploy the rest of the solution. With this implementation the Gatherer orchestration needs to know how many Partner responses it should wait for before timing out, which requires that value to be set in a config file or something similar. In this example the number of Partners is hard-coded into the Gather orchestration (it’s set to 3 Partners) to simplify the setup.
Final thoughts
Knowing how to create loosely coupled solutions like this is good knowledge to have. It’s my belief, and others’, that this architecture makes it possible to create more robust and separated solutions that one can update without a lot of work and without disturbing the current processes. It is however not the best solution performance-wise, as it adds a lot of extra hits on the MessageBox database and generates more work for the Message Agent.
There are also a few things to watch out for:
Eternal loops
It’s easy to end up in a situation where you’re subscribing to the same message you’re posting to the MessageBox. That creates an endless loop and will cause a lot of disturbance before you find it. Think through and document your subscriptions!
Correlations for promoting values
When doing a direct post in BizTalk, most properties are not promoted. To force your properties to be promoted, you’ll have to initialize a correlation on the property as you send the message. I can’t say I like this; there should be another way of saying that one wants a property promoted.
Download the example orchestration and let me know how you used it and what your solution looks like!
by community-syndication | Jan 15, 2008 | BizTalk Community Blogs via Syndication
I know that this script is ubiquitous across the ’net, but whenever I google for it I come up with elaborate stored procedures that are overkill for my needs – so here are the commands necessary to rename a SQL Server instance, for posterity:
-- Get the current name of the SQL Server instance for later […]
by community-syndication | Jan 14, 2008 | BizTalk Community Blogs via Syndication
Although the current BizTalk release does not support .NET generics, it does support the concept of genericity at the message level. It is possible, of course, through “untyped” messages, or messages that don’t have a specific type attached to their context. Such messages are represented as the System.Xml.XmlDocument type in orchestrations. To read more on the message typing aspect of BizTalk, please refer to this excellent post by Charles Young. I’d like to show practical examples of applying generic programming to typical BizTalk tasks. These examples will help to make your BizTalk solutions leaner, more flexible, and easier to maintain.
Let’s start with a very common pattern where BizTalk is used as merely a message dispatcher – the Message Broker. The goal of our generic message broker will be to dispatch any type of message to its destination, decided at runtime. So, by avoiding type dependency and early routing binding (basically, hardcoding) we get a single, very flexible orchestration which can easily handle requirement changes – one of the top goals of good application design.
There are some things to consider before creating the BizTalk orchestration. First, we need to figure out how and where to store the input-to-output destination map. Since it’s just key-value pairs, the application configuration file will work fine. In my work, I prefer to keep collections of homogeneous settings like this in separate XML config files and read them into a custom XmlSerializable map (objects like NameValueCollection or Dictionary, being non-XmlSerializable, won’t work in an orchestration unless placed inside an atomic transaction scope). Let’s keep it in the application configuration file to keep it simple. The next thing is to decide what to use as the key. Since the messages don’t have a type, we should select something that uniquely maps them to a destination. Let’s assume that per our requirements it’s the input file name, although it could be anything inside the message content. The destination will be expressed as a complete URL, i.e. [protocol]://[path]/[fileName], for example ftp://myserver/files/dest.dat. So a map entry can look like this: <add key="emp.dat" value="file://myserver/incoming/employees.dat" />. I use the .dat extension in the example just to emphasize that this is applicable to any files, not only XML ones.
Once these questions have been answered, the orchestration comes together fairly quickly. At the global level it has one input port, an Expression shape to read the routing configuration, a Decide shape to branch on the routing-availability condition, and a dynamic send port. The Receive shape receives an input message of XmlDocument type. An output message of the same type is sent through the dynamic send port after the routing address and all properties have been set:
Inside the Decide_IfRoutingAvailable shape there are two branches of execution. If a routing entry is found, the orchestration reads the destination URL from the configuration and selects the transport protocol. Otherwise, the destination is set to the backup location from the configuration file so that the message is preserved:
Once the protocol is identified, we can create the output message and set its properties and destination address. If the protocol is not supported, we route the message to the backup location using the same dynamic send port. That’s how it looks inside the Choose Protocol decide shape:
Let’s dive into the ConstructFtpMessage shape to see how the properties are set:
msgOutput = msgInput;
msgOutput(FTP.UserName) = System.Configuration.ConfigurationManager.AppSettings["FTPUserName"];
msgOutput(FTP.Password) = System.Configuration.ConfigurationManager.AppSettings["FTPPassword"];
msgOutput(FTP.PassiveMode) = System.Convert.ToBoolean(System.Configuration.ConfigurationManager.AppSettings["FTPMode"]);
msgOutput(FTP.RepresentationType) = System.Configuration.ConfigurationManager.AppSettings["FTPRepresentationType"];
msgOutput(FTP.BeforePut) = System.Configuration.ConfigurationManager.AppSettings["FTPBeforePut_" + receivedFileName.ToUpper()];
msgOutput(FTP.CommandLogFileName) = System.Configuration.ConfigurationManager.AppSettings["FTPCommandLog"];
The first line simply copies our generic input message content to the output message. Subsequent lines set FTP properties from the configuration file. Then all that’s left is to set the destination URL on the dynamic port and send the output message through it:
DestinationSendPort(Microsoft.XLANGs.BaseTypes.Address) = destUrl.ToString();
As a result, we have a single orchestration that can handle hundreds of different message schemas and multiple protocols. Another positive outcome is that deployment is greatly simplified. The application has one orchestration and two ports, but only one of them is bound at deployment time. Also note: no schemas and no maps at this point; we will add them later when we augment the application with content transformation functionality.
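The routing decision described above can be sketched outside BizTalk as a simple lookup. This Python fragment only illustrates the control flow of the Decide and Choose Protocol branches; the map entries, supported schemes, and backup URL are assumptions for the example, not part of the original solution:

```python
# Language-neutral sketch of the broker's routing decision: look the
# received file name up in a key/value map (the post keeps this in the
# application config file), pick a transport from the URL scheme, and
# fall back to a backup location when no route or no protocol matches.
from urllib.parse import urlparse

ROUTES = {  # illustrative stand-in for the <appSettings> map entries
    "emp.dat": "file://myserver/incoming/employees.dat",
    "orders.dat": "ftp://myserver/files/orders.dat",
}
SUPPORTED_SCHEMES = {"file", "ftp"}
BACKUP_URL = "file://myserver/backup/unrouted.dat"  # invented for the sketch

def resolve_destination(received_file_name: str) -> str:
    dest = ROUTES.get(received_file_name)
    if dest is None:
        return BACKUP_URL            # no routing entry: preserve the message
    if urlparse(dest).scheme not in SUPPORTED_SCHEMES:
        return BACKUP_URL            # unsupported protocol: same backup port
    return dest

print(resolve_destination("emp.dat"))     # file://myserver/incoming/employees.dat
print(resolve_destination("unknown.dat")) # file://myserver/backup/unrouted.dat
```

In the orchestration, the returned URL is what ends up on the dynamic send port’s Microsoft.XLANGs.BaseTypes.Address property.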
by community-syndication | Jan 14, 2008 | BizTalk Community Blogs via Syndication
The Connected Systems Division has announced an exciting new event that developers interested in RFID should check out!
Microsoft cordially invites you to participate in a 2 day solution extravaganza
At the first edition of our WW RFID Solution Days
18-19 February 2008
The event will showcase real-world RFID solutions being deployed across various industry verticals on the Microsoft platform today, and innovations in the field of RFID via a partner expo.
Visit the event website (click here) to learn more. We look forward to seeing you in February!
Venue:
Westin Bellevue
601 Bellevue Way NE
Bellevue, Washington 98004
+1.425.638.1000
Optional Post conference training:
Feb 20-21 @ the Microsoft Campus
Contact:
[email protected] or send mail to [email protected]
__________________________________________________________________________________
Invite courtesy of Connected Systems Team Canada
by community-syndication | Jan 14, 2008 | BizTalk Community Blogs via Syndication
We are implementing a new solution for a client that requires that we connect to an FTP site to gather their data. This is normal stuff. The challenge came when the requirement was brought to our attention that the FTP password was going to be changed every 30 days. We did not want to have to coordinate when the password was going to change and who was going to be doing it. We needed a way to programmatically change the password of the FTP receive location so we did not have to babysit it.
Taking the SDK as our base, we developed some code that interrogated the ReceivePort and its ReceiveLocations, and once it found the right one, modified IReceiveLocation.TransportTypeData.
What started off as a simple request to change the password ballooned into many other properties of the FTP adapter. I was happily coding until I got to the URI, and realized that it had a bunch of downstream effects, at which point I gave up. The following code only deals with a few of the properties that can be changed in the FTP adapter, but it gives you a starting point to develop more code against.
This code assumes a few things: that you accept the disclaimer to the right, and that you already know which port and which receive location you want to change; it will go in and change the particular property in the adapter.
We also had the requirement to change a pipeline component password (it picks up a zipped file that has a password), so the same
using System;
using System.Collections.Generic;
using System.Text;
using Microsoft.BizTalk.ExplorerOM;
namespace ChangeConfigration
{
public enum Properties
{
Password,
UserName,
RepresentationType,
MaximumNumberOfFiles,
PassiveMode,
FirewallType,
FirewallPort,
PollingUnitOfMeasure,
pollingInterval,
ErrorThreshold,
MaxFileSize
}
public class ReceiveLocations
{
static void ChangeExistingFTPLocation(string ReceivePort, string ReceiveLocation, Properties PropertyType, string NewProperty)
{
string thisPropertyType;
switch ((int)PropertyType)
{
case 0:thisPropertyType = "password";
break;
case 1:thisPropertyType = "userName";
break;
case 2:thisPropertyType = "representationType";
break;
case 3: thisPropertyType = "maximumNumberOfFiles";
break;
case 4: thisPropertyType = "passiveMode";
break;
case 5: thisPropertyType = "firewallType";
break;
case 6: thisPropertyType = "firewallPort";
break;
case 7: thisPropertyType = "pollingUnitOfMeasure";
break;
case 8: thisPropertyType = "pollingInterval";
break;
case 9: thisPropertyType = "errorThreshold";
break;
case 10: thisPropertyType = "maxFileSize";
break;
default: thisPropertyType = "unknown";
break;
}
BtsCatalogExplorer root = new BtsCatalogExplorer();
try
{
root.ConnectionString = "Server=.;Initial Catalog=BizTalkMgmtDb;Integrated Security=SSPI;";
//Enumerate the receive locations in each of the receive ports.
foreach (ReceivePort receivePort in root.ReceivePorts)
{
if (receivePort.Name == ReceivePort)
{
foreach (ReceiveLocation location in receivePort.ReceiveLocations)
if (location.Name == ReceiveLocation)
{
string OldTransportTypeData = location.TransportTypeData;
int start = OldTransportTypeData.IndexOf("<" + thisPropertyType + ">") + Convert.ToString(thisPropertyType).Length + 8;
int end = OldTransportTypeData.IndexOf("</" + thisPropertyType + ">");
int length = end - start;
string ExistingProperty = OldTransportTypeData.Substring(start, length);
string NewTransportTypeData = OldTransportTypeData.Replace( "<" + thisPropertyType + ">" + ExistingProperty + "</" + thisPropertyType + ">", "<" + thisPropertyType + ">" + NewProperty + "</" + thisPropertyType + ">");
location.TransportTypeData = NewTransportTypeData;
location.Enable = true;
}
}
}
root.SaveChanges();
}
catch (Exception) //If it fails, roll back all changes.
{
root.DiscardChanges(); // must run before rethrowing; after a throw this line would be unreachable
throw;
}
}
}
}
logic can be used; instead of modifying location.TransportTypeData, you simply modify IReceiveLocation.ReceivePipelineData.
It almost goes without saying that we picked up the file, waited a given amount of time, changed the password for the FTP receive location, and logged that password in BAM, so that if we ever needed to find out what the password was, we could look at the BAM view.
Yes, I know I have the possibility of getting hate mail by stating that I am going to log the password as a clear text value in BAM, however, using the IReceiveLocation Interface API, I can see what the password is in clear text anyway. I might as well make it easy for the support staff to log into an internal web site and see what the password currently is so they can go out to the FTP site and see if there are files outstanding than having to call up the client to find out what the password is, or having to call the BizTalk team to have them have a program that goes in and pulls the data and sees what the password is and tells them what it is. (whew, I almost ran out of breath with that run-on sentence!)
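For reference, the string surgery the C# code above performs on TransportTypeData boils down to finding the named element and swapping its value. A minimal Python sketch of that idea (the sample configuration blob here is an invented stand-in for the adapter’s real XML, which also carries attributes the real code must account for):

```python
# Rough equivalent of the find-and-replace the C# code performs on
# TransportTypeData: locate <prop>...</prop> and substitute a new value.
def replace_property(transport_type_data: str, prop: str, new_value: str) -> str:
    open_tag, close_tag = f"<{prop}>", f"</{prop}>"
    start = transport_type_data.index(open_tag) + len(open_tag)
    end = transport_type_data.index(close_tag)
    old_value = transport_type_data[start:end]
    # swap exactly the one element, leaving the rest of the blob intact
    return transport_type_data.replace(
        open_tag + old_value + close_tag,
        open_tag + new_value + close_tag,
    )

cfg = "<CustomProps><userName>ftpuser</userName><password>old</password></CustomProps>"
print(replace_property(cfg, "password", "n3w"))
# <CustomProps><userName>ftpuser</userName><password>n3w</password></CustomProps>
```

A real implementation would be safer with an XML parser rather than string offsets, which is exactly the fragility the original post runs into with the URI property.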
by community-syndication | Jan 14, 2008 | BizTalk Community Blogs via Syndication
Last Friday I delivered an internal presentation on the overall concepts of cloud computing, talking about what some of the players in this space are doing: SalesForce and its “Platform-as-a-Service” (SalesForce takes the “-as-a-Service” thing waaaay too far for my personal taste), Facebook and its app directory and SDK (over 13k apps at the moment), Google’s Apps and OpenSocial (how can this be a «standard» if Facebook is not participating?), Amazon’s fascinating web services and cloud computing offering (including Dynamo), Microsoft’s hosted services (CRM, Exchange), PopFly and BizTalk Services, and finally Yahoo Pipes.
After this long description, where the lines between enterprise and social blur, we discussed the impacts of using the cloud as a computing fabric, where distribution is everywhere, there are no transactions but rather reconciliations or consistency corrections, where “all data from distant stars is from the past” (in the words of Pat Helland), and where there is no synchronicity, only eventual replication.
One of the interesting discussions was around the question of whether these cloud platforms (especially in the MS space) will be able to give us simple, easy-to-use application interfaces, where we can forget about the fact that our application is hosted somewhere in a distributed datacenter environment, and just assume some kind of “SLA” in “ADO.NET.Cloud”, with the distribution aspects hidden from developers. Or whether, on the contrary, we will have to change the way we develop to be aware of this environment: Amazon’s Dynamo points in this latter direction, where some of the conflicts between different versions of data have to be resolved at the semantic/application level.
Another aspect of this discussion, and one that I consider especially fascinating, is around data models: how to store data, how to keep different, replicated copies of it, and how to reconcile them. There are some interesting initiatives here, especially Amazon’s, in its data store service and in SimpleDB. I had been studying the ideas of Linda tuple spaces, and found it really interesting that Amazon is using these ideas. Another interesting idea is Facebook’s, which seems to have created a “domain-specific” information store, around the notion of the user profile, for its own needs.
Anyway, it was a very interesting discussion, and eye opening in some cases. Let’s see what the future brings.
In the social space specifically, some interesting ideas came up: what if Microsoft added to SharePoint 2007+ some kind of Facebook app hosting? a Web Part to host Facebook apps, for instance, would be great! And an implementation of OpenSocial in SharePoint, again to interop with its Web Parts, would also be interesting, especially if OpenSocial ever becomes successful. 🙂

by community-syndication | Jan 13, 2008 | BizTalk Community Blogs via Syndication
We have written a custom pipeline component (code and sample projects using this pipeline component are attached) that you can use to make BizTalk orchestrations that call RFCs and BAPIs, written against Microsoft BizTalk Adapter v2.0 for mySAP Business Suite, work with the new WCF SAP Adapter. All you need to do is use custom send and receive pipelines that replace the usual XMLTransmit and XMLReceive pipelines, and plug in the new adapter in place of the old one. No other changes are needed. The source code of the pipeline component is included so you can make changes to it if you want to.
The custom pipeline component performs the following transformations:
1. Old adapter Request XML to New adapter request XML
2. New adapter Response XML to Old adapter response XML
The result? Your orchestration and messages do not have to change at all!
The major differences between the old and new adapters’ XML structure for RFCs are as follows:
1. Namespaces:
- Old Adapter: The root element is in the namespace http://schemas.microsoft.com/BizTalk/2003 while the other elements have an empty namespace
- New Adapter: Elements at depth 0 and 1 are in the namespace http://Microsoft.LobServices.Sap/2007/03/Rfc/ while elements at depth 2 and 3 are in the namespace http://Microsoft.LobServices.Sap/2007/03/Types/Rfc/
2. Name of the Root Node:
- Old Adapter: The root node is named {RFC_NAME}_Request for the request XML, and {RFC_NAME}_Response for the response XML
- New Adapter: The root node is named {RFC_NAME} for the request XML, and {RFC_NAME}Response for the response XML
3. Array of Complex types:
- Old Adapter: There are multiple nodes with the name of the parameter, each of which contains one instance of the structure as child nodes. The name of the complex type does not appear in the XML. For example:
.
.
<ComplexParameterName>
<Element1>Value1</Element1>
<Element2>Value2</Element2>
</ComplexParameterName>
<ComplexParameterName>
<Element1>Value3</Element1>
<Element2>Value4</Element2>
</ComplexParameterName>
.
.
- New Adapter: There is only one node with the name of the parameter, which contains multiple child nodes whose names are set to the name of the complex type, each containing one instance of the structure. For example:
.
.
<ComplexParameterName>
<ComplexTypeName>
<Element1>Value1</Element1>
<Element2>Value2</Element2>
</ComplexTypeName>
<ComplexTypeName>
<Element1>Value3</Element1>
<Element2>Value4</Element2>
</ComplexTypeName>
</ComplexParameterName>
.
.
4. Tables Returned:
- Old Adapter: All tables that have any output value are returned.
- New Adapter: Only the tables that were present in the request XML are present in the response XML.
How does the custom component work?
Changing the namespaces and the name of the root node is straightforward. To figure out the name of the ComplexParameter’s type, we use reflection on the RFC-specific dll created by the old adapter, and expect it to be present at “C:\Program Files\Microsoft BizTalk Adapter v2.0 for mySAP Business Suite\Bin\{RFC_Name}.dll” (the old adapter creates this dll automatically when you generate the RFC schema). To make sure the tables returned are the same as what the old adapter would have returned, we append empty nodes for all the table parameters that were not included in the request XML, using the schema in your project. That way, the response XML contains all tables. Then we simply eliminate the tables that are empty to get a response similar to that of the older adapter.
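The regrouping in difference 3 (repeated parameter nodes collapsed into one parameter wrapping per-instance complex-type elements) can be sketched as a generic XML transform. This Python fragment ignores the namespace changes and uses invented element names, purely to show the reshaping step:

```python
# Sketch of difference 3: old-adapter repeated <param> nodes become one
# <param> node wrapping one <complex_type> element per original instance.
import xml.etree.ElementTree as ET

def regroup(parent: ET.Element, param: str, complex_type: str) -> None:
    """Collapse repeated <param> children of `parent` into a single <param>
    wrapping one <complex_type> element per original occurrence."""
    occurrences = parent.findall(param)
    for node in occurrences:
        parent.remove(node)
    merged = ET.SubElement(parent, param)
    for node in occurrences:
        wrapper = ET.SubElement(merged, complex_type)
        wrapper.extend(list(node))  # move the structure's fields into the wrapper

root = ET.fromstring(
    "<Request>"
    "<ComplexParameterName><Element1>Value1</Element1></ComplexParameterName>"
    "<ComplexParameterName><Element1>Value3</Element1></ComplexParameterName>"
    "</Request>"
)
regroup(root, "ComplexParameterName", "ComplexTypeName")
print(ET.tostring(root, encoding="unicode"))
# one <ComplexParameterName> now wraps two <ComplexTypeName> children
```

The actual pipeline component does this in the opposite direction for responses as well, and additionally rewrites the namespaces and root node name described in differences 1 and 2.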
Using the custom pipeline component
1. Build the custom pipeline component or use the dll included
2. Copy the assembly to “C:\Program Files\Microsoft BizTalk Server 2006\Pipeline Components” and also add it to the GAC
3. In your existing BizTalk project, create new send and receive pipelines and include the custom component in the encode and decode stages (you’ll need to add a reference to the custom pipeline assembly and add it to the pipeline toolbox)
4. Build and deploy
5. Go to the BizTalk administration console and restart the host instance.
6. For the ports that send to or receive from SAP, select the new SAP adapter and configure it as described in the documentation. Set the ‘enableSafeTyping’ binding property to true.
7. Use the custom pipeline you created with the ports that talk to SAP
8. Start!
A few sample BizTalk projects that use schemas generated with the old adapter and include custom pipelines are included in the attachment. Sample configuration bindings are also included. To build the samples, you should change the name of the SQL server in project deployment properties.
Update: The attached Pipeline component has been updated to transform the context property ‘ConnectionType’ too. See this post for more info.