How the WCF BAM Interceptors Work Under the Hood

Business Activity Monitoring (BAM) is a feature of BizTalk Server that allows businesses to capture important pieces of information throughout a BizTalk solution. Once the information is captured, any reporting tool, such as SQL Server Reporting Services, Crystal Reports, or even PerformancePoint, can read it and display it in a graphical, statistical, and possibly analytical way.

The baseline architecture of BAM is a "wiretap" pattern. Various BAM components live inside a BizTalk solution to "spy" on various pieces of data and events. This wiretap pattern is implemented through components the BAM infrastructure calls "interceptors". Interceptors intercept data as it flows through the system, recording information such as dates and times, as well as the actual data, at various stages of execution. Interceptors implement a notification, or callback, system. Every interceptor inside BAM contains two major functions: one to intercept, or gather, the data (checkpoints), and another to listen for the interception and record it to the BAM data store. The interceptors use a non-blocking design to listen and record data, thus minimally affecting the overall performance of the system while recording.

BizTalk provides many interceptors out of the box, covering three of the four major artifact types in a BizTalk solution: an interceptor is included for adapters, pipelines, and orchestrations. Where there are gaps, BizTalk provides a framework for manually invoking the interceptors through code (for example, to handle maps).

The BizTalk BAM Architecture

Pipeline interceptors are implemented through the Messaging Event Bus, while orchestration interceptors are implemented through the Orchestration Event Bus. Currently BizTalk supports only one type of adapter interceptor, the WCF interceptor, which is implemented through a Direct Event Bus. An event bus is commonly referred to as an "event stream": simply a group of classes that stream the captured data into the BAM data store. There is also support for Windows Workflow; a WF interceptor is likewise implemented through a Direct Event Bus. If these interceptors are not enough, BizTalk also provides a framework and examples for creating your own interceptors using the BAM application programming interfaces, which support the BizTalk Messaging, Orchestration, and direct event buses that update the BAM data stores.

No matter which interceptor you choose, the design is the same. An interceptor must be created and configured. Configuring an interceptor requires information about the BAM data store and the BAM activity that will record the tracked data, and of course the actual data or events being tracked. How you configure the interceptor depends largely on its type. Messaging and orchestration interceptors use a graphical designer known as the BizTalk Tracking Profile Editor (TPE). The output of the TPE is an XML file containing the complete configuration of the pipeline and orchestration interceptors. The TPE also allows you to create the interceptors and, for lack of a better word, "execute" them, or in the words of the editor, "apply the tracking profile". Configuring the WCF and WF interceptors currently requires manual work: one must build the configuration file by hand. To help, BizTalk provides three base schemas, CommonInterceptorConfig.xsd, WCFInterceptorConfig.xsd, and WFInterceptorConfig.xsd, to validate an interceptor configuration file for WCF and WF. Configuration of the programmatic framework is done, as expected, in code.

The purpose of this article, however, is not to explain how to use the TPE or the BAM framework, for these are discussed well in other documents. The purpose is to explain how the major components of the WCF interceptor work under the hood, in hopes of eliminating some of the difficulty in using WCF interceptors.

First off, to really understand how WCF interceptors work, an understanding of WCF's extensibility points is required. Here are a couple of good architecture images to help explain:

The Architectural Model of WCF

In the image above, WCF is best explained with three basic architectural layers: Data, Service Model, and Channel. Each layer is present in both the client (proxy) application and the server (dispatcher) application. The Data layer contains the actual data and the flow of messages into and out of the system. The Service Model layer contains the endpoints, addresses, contracts, and binding elements that do not deal with protocols, encodings, or transports. The Channel layer deals with the protocol, encoding, and transport binding elements.

The Extensibility Points:

WCF contains many areas where it can be extended to include features and customizations not supported by default. These areas include behaviors (service, operation, and endpoint) as well as custom binding elements, encoders, and transports. BizTalk includes a framework for building custom binding elements, encoders, and transports known as the WCF LOB Adapter SDK (see an upcoming article). For the WCF interceptor, BizTalk has included a custom endpoint behavior, implemented specifically through the parameter inspector and message inspector classes, to gather the BAM checkpoints, listen for various events, and create the direct event stream that records the data to the BAM database.

So let’s get our hands dirty, shall we?

The WCF BAM interceptor allows a WCF service (dispatcher) or WCF client (proxy) to participate in a BAM activity by recording key performance indicators (KPIs) and events into the BAM infrastructure. The interceptor provides all the capabilities that BAM offers, such as continuations, adding references to other documents, and starting, updating, and completing BAM records. The WCF interceptor accomplishes this using a WCF dispatcher or proxy configured to use the WCF BAM endpoint behavior. The endpoint behavior attaches itself to a WCF endpoint, which is contained within a WCF channel (binding element). Once the WCF channel is opened, the endpoint loads all behaviors attached to it.

In the case of BizTalk Server, for a WCF receive adapter (a dispatcher), a WCF channel is opened by means of a ServiceHostBase-derived class that is created when you start an in-process hosted receive location. For an out-of-process hosted receive location, a ServiceHostFactory-derived class opens the WCF channel. All WCF send adapters (proxies) use a lazy-initialization pattern: the send adapter doesn’t create and open the WCF channel until a subscription is fulfilled for the adapter. At that point a dynamic ChannelFactory is created internally, using either the IOutputChannel or IRequestChannel interface, to open the WCF channel and send the message out. (See my WCF Sending Adapter blog for more details.)

The WCF endpoint behavior that is attached to one of these WCF channels is found inside Microsoft.BizTalk.Interceptors.dll. Within this assembly you can find BAMEndPointBehaviorExtension, a class that contains the implementation to read from a WCF configuration section named BAMEndPointBehavior. (The actual name of the section is customized according to how it’s configured within the Machine.config file.) This behavior extension class is used to create and apply the default values for the WCF BAM endpoint behavior, such as initializing the WCF BAM interceptor, reading the connection string to the BAM Primary Import Tables (PIT) database, and retrieving the polling interval, in seconds, at which the behavior checks for new interceptor configurations inside the BAM PIT. Once the behavior is applied, the interceptor is initialized, and the WCF BAM interceptor is available until the WCF channel is disposed. Through the WCF framework, the WCF endpoint invokes the "ApplyClientBehavior" or "ApplyDispatchBehavior" method of the BAM endpoint behavior, depending on whether a dispatcher (service) or proxy (client) endpoint has been configured to use it. Within this method call, the WCF BAM interceptor is loaded and configured.
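
As a sketch, registering and applying the behavior in configuration looks roughly like the following. The extension type name, assembly version, and attribute names here are assumptions that should be verified against your BizTalk installation; the element name matches whatever name you register in Machine.config:

```xml
<system.serviceModel>
  <extensions>
    <behaviorExtensions>
      <!-- Registers the BAM behavior extension; the name chosen here becomes
           the element name used below. Type and version are assumptions. -->
      <add name="BamEndPointBehavior"
           type="Microsoft.BizTalk.Bam.Interceptors.Wcf.BamEndpointBehaviorExtension, Microsoft.BizTalk.Bam.Interceptors, Version=3.0.1.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
    </behaviorExtensions>
  </extensions>
  <behaviors>
    <endpointBehaviors>
      <behavior name="BamTracking">
        <!-- Connection string to the BAM Primary Import (PIT) database, and the
             polling interval (seconds) for picking up new interceptor
             configurations. Attribute names are illustrative. -->
        <BamEndPointBehavior ConnectionString="Integrated Security=SSPI;Data Source=.;Initial Catalog=BAMPrimaryImport"
                             PollingIntervalSec="15" />
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>
```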

To expound on this further, the WCF BAM interceptor is initialized internally by a WCF configuration manager utility, which invokes yet another utility, the InterceptorConfigurationManagement class, which queries the BAM database using the bam_Metadata_GetLatestInterceptorConfiguration stored procedure. This stored procedure retrieves the latest version of a WCF BAM interceptor configuration. The interesting point about this call is that the procedure returns a rowset of interceptor configurations for a given Manifest and Technology. The Manifest and Technology are specified within the interceptor configuration file; when you "deploy" this file, these values, as well as the configuration itself, are stored inside the BAM database. Currently there are only two supported Technology values, WCF and WF, with more possibly in the near future; this limitation is hardcoded within the WCF BAM interceptor and the WF BAM interceptor, respectively. This effectively allows a BAM tracking profile to span multiple WCF services and workflows (WF), by loading multiple interceptor configurations (one for WCF and one for WF) for multiple BAM activities that use the same .NET assembly containing the ServiceContracts or workflows.

To be more WCF specific, you can have multiple interceptor configurations for a given ServiceContract. For WCF, the reason is simple: the manifest is the fully qualified name of the ServiceContract plus the fully qualified name of the assembly in which the ServiceContract can be found. Specifying the Manifest and Technology is done using the EventSource section of the WCF interceptor configuration, as such:

<ic:InterceptorConfiguration xmlns:ic="">
<ic:EventSource Name="OrderServiceSource" Technology="WCF" Manifest="OrderService.Contracts.IOrderService, OrderService.Contracts, version=, culture=neutral, publicKeyToken=23AFFDB4390AAD33F"/>

Above, the event source is named OrderServiceSource and the Technology being used is WCF, along with the OrderService.Contracts.IOrderService service contract located inside the OrderService.Contracts assembly.

One must take care, because a BAM activity can be deployed that shares one or more manifests and uses the exact same EventSource name as another interceptor configuration. When this happens, it is possible for the WCF BAM interceptor to record the wrong set of values to the wrong BAM activity, if a general "catch all" filter is applied within multiple interceptor configurations. An example of this is when you are using the WCF generic service contracts within a WCF send adapter as well as in a WCF client service that sends BizTalk a message using the same generic service contract. Let’s say a WCF service is acting as a proxy to one of your BizTalk WCF receive adapters. The code inside the WCF "proxy" service uses the IRequestChannel interface found inside the System.ServiceModel.Channels namespace (in the System.ServiceModel assembly). You track a BAM event inside the proxy service using an interceptor such as this:

<ic:InterceptorConfiguration xmlns:ic="">
<ic:EventSource Name="ProxySource" Technology="WCF" Manifest="System.ServiceModel.Channels.IRequestChannel, System.ServiceModel, version=, culture=neutral, publicKeyToken=…"/>

This proxy sends a message through a WCF receive adapter, which, through normal BizTalk publish-subscribe routing, forwards the message to a WCF send adapter, which also uses the IRequestChannel interface; you can easily see there’s a duplicate source here. Which source will be identified as the correct one? Now let’s add to this mix that the WCF send adapter also uses another WCF interceptor endpoint behavior for two different BAM activities. It is possible that the interceptor will catch WCF events for the client sending BizTalk a message and record them to the BAM activity that the WCF send adapter endpoint is using. This can also happen if the WCF send adapter or the client proxy service uses a filter that simply checks for a "ClientRequest" execution stage and nothing more. (We’ll discuss filters in another article.) In other words, if the Manifest, EventSource name, and BAM activity are not unique enough, incorrect data could be recorded; the same can occur if the filter is not specific enough to set a condition that applies only to the WCF send adapter or only to the proxy WCF service.

To avoid this ambiguity, first try not to use the same EventSource name and Manifest within different interceptor configuration files. Next, if you do use the same Manifest, create a different EventSource name specific to the project being implemented. Also, look at the WCF client service sending data to BizTalk: if you own the WCF service, avoid using generic contracts when sending from a WCF service to a BizTalk WCF receive adapter while using this interceptor.

Now, back to the WCF configuration and interceptor configuration utilities: this process continues to poll the BAM database at the configured polling interval, reconfiguring the BAM WCF interceptor for a given endpoint if need be.

Once the interceptor configuration is retrieved, it is returned as an XML stream, which is then used to configure the interceptor so that it can evaluate its configuration values against different WCF events, or stages of execution. WCF goes through different stages of execution in both a message inspector and a parameter inspector. A message inspector has four stages: the BeforeSendRequest and AfterReceiveReply stages occur inside a proxy, and the BeforeSendReply and AfterReceiveRequest stages occur inside a dispatcher. The WCF BAM interceptor also evaluates against the parameter inspector stages, BeforeCall and AfterCall. Evaluation is also done within the message and parameter inspectors to check endpoint names as found inside WCF configurations, operation names, the SOAP header actions of messages, the content of a message, and SOAP faults, if properly configured.

Programmatically, the stages of execution are implemented as an enumeration called ContractCallPoint. This enumeration is exposed to the developer within the interceptor configuration as a WCF operation named "GetServiceContractCallPoint". It is used within the interceptor configuration like this:

<wcf:Operation Name="GetServiceContractCallPoint" />

This method returns one of these values: ClientRequest, ClientReply, ClientFault, ServiceRequest, ServiceReply, ServiceFault, CallbackRequest, CallbackReply, or CallbackFault.

Which value the method returns is determined by fairly simple rules. If the endpoint behavior is attached to a dispatch (service) endpoint, the value will be one of the ServiceXXX values. If it’s attached to a proxy (client) endpoint, the value will be one of the ClientXXX or CallbackXXX values. Let’s deal with the ServiceXXX values first. To filter when a dispatcher, or service host, is receiving a request, use the ServiceRequest call point. To filter when a service host is sending a response/reply after receipt of a request, use ServiceReply. To filter when the channel faults (throws an exception) within a service host, use ServiceFault.

Understanding the proxy filters is just slightly more complex. If a client endpoint contains a client-side dispatcher, in other words, a client callback contract for use in duplex communication patterns, the value returned from GetServiceContractCallPoint is one of the CallbackXXX values. To filter when the duplex channel is receiving a request, use CallbackRequest; to filter when the duplex channel is sending a response/reply after receipt of a request, use CallbackReply; and use CallbackFault in case of a duplex client-side service fault. On the other hand, if a client proxy does not contain a dispatcher and you are simply sending data to another service, a ClientXXX value is returned. To filter when the proxy sends a request to another service, use ClientRequest; to filter when the proxy receives a response/reply after sending the request, use ClientReply; and use ClientFault if the proxy has a channel fault.

When you create a BAM WCF interceptor configuration, you tell the interceptor which "events", or stages of execution, you’re interested in, and the stages are then filtered using the BAM endpoint behavior, BAM parameter inspector, or BAM message inspector as appropriate. The filter is configured inside the "<Filter>" section of the interceptor configuration, such as the one below:

Code Sample 1
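
A minimal Filter section, written in Reverse Polish Notation (operands pushed first, then the operator applied), might look like the sketch below. Element and operation names follow the BAM interceptor schemas; the event name, source, and constant value are illustrative:

```xml
<ic:OnEvent Name="OrderReceived" Source="OrderServiceSource" IsBegin="true" IsEnd="false">
  <ic:Filter>
    <ic:Expression>
      <!-- Push the current stage of execution onto the stack... -->
      <wcf:Operation Name="GetServiceContractCallPoint" />
      <!-- ...push the constant to compare against... -->
      <ic:Operation Name="Constant">
        <ic:Argument>ServiceRequest</ic:Argument>
      </ic:Operation>
      <!-- ...then apply the comparison (RPN order). -->
      <ic:Operation Name="Equals" />
    </ic:Expression>
  </ic:Filter>
  <!-- Correlation and Update sections follow here -->
</ic:OnEvent>
```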

Once the stages of execution are matched through the filter, the WCF BAM interceptor retrieves the collection of BAM tracking points that are configured within the interceptor configuration, using the Correlation and Update sections of the BAM interceptor configuration, such as shown below:

Code Sample 2
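
A sketch of the CorrelationID and Update sections follows; the data item names and XPath values are illustrative and must match items in the deployed BAM activity:

```xml
<ic:OnEvent Name="OrderReceived" Source="OrderServiceSource" IsBegin="true" IsEnd="false">
  <!-- Filter section omitted for brevity -->
  <!-- The correlation value ties this event to a BAM activity instance -->
  <ic:CorrelationID>
    <wcf:Operation Name="XPath">
      <wcf:Argument>//*[local-name()='OrderId']</wcf:Argument>
    </wcf:Operation>
  </ic:CorrelationID>
  <!-- Each Update writes one checkpoint value to a BAM activity item -->
  <ic:Update DataItemName="OrderId" Type="NVARCHAR">
    <wcf:Operation Name="XPath">
      <wcf:Argument>//*[local-name()='OrderId']</wcf:Argument>
    </wcf:Operation>
  </ic:Update>
  <ic:Update DataItemName="OrderTotal" Type="FLOAT">
    <wcf:Operation Name="XPath">
      <wcf:Argument>//*[local-name()='Total']</wcf:Argument>
    </wcf:Operation>
  </ic:Update>
</ic:OnEvent>
```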

These BAM track points must relate back to key performance indicators defined within the deployed BAM activity that this interceptor configuration refers to. The BAM activity is configured in the BamActivity section just below the EventSource section within an interceptor configuration. Multiple BamActivity sections are supported. The configuration section looks like the following:

<ic:InterceptorConfiguration xmlns:ic="">
<ic:BamActivity Name="OrderActivityMonitoring">
<ic:OnEvent …>

Above, the BamActivity section defines a BAM activity named OrderActivityMonitoring. This BAM activity must already be deployed into the BAM PIT.

Now that you understand how the EventSource and BAM activities relate, and how the stages are matched, let’s revisit the ambiguity issue outlined before. If you’re still not sure why you may receive incorrect updates due to duplicate event sources, the reasoning is simple. In the example above, both a WCF proxy service sending to a BizTalk WCF receive adapter and a WCF send adapter, which happen to be running on the same computer, could be at the "ClientRequest" stage. Thus, the BAM WCF interceptors for the WCF send adapter endpoint and for the WCF proxy will each have both interceptor configurations loaded in memory: the one for the client service as well as the one for the WCF send adapter. The proxy configuration may be first in the list returned by the bam_Metadata_GetLatestInterceptorConfiguration stored procedure call; it will match the filters and possibly write its Update values to the BAM infrastructure for all activities found in the list, producing incorrect results. This doesn’t happen often and, with the correct design, can be avoided entirely. Once the match has been determined, the KPI track points are written using a DirectEventStream to the appropriate BAM activity rows.

In summary, this article addressed how the WCF BAM interceptors work under the hood of the BizTalk BAM infrastructure; however, not every aspect was covered. How the filter is matched, why WCF and WF interceptors use Reverse Polish Notation, and how correlation is applied will be discussed in a future article. Until then, happy BAM’ng!!!

New whitepaper posted!  "Modifying Messages with the WCF Adapters"

A new WCF adapter whitepaper entitled “Modifying Messages with the WCF Adapters” is now available for download. Here is a summary of its content. Enjoy!

The Windows Communication Foundation (WCF) adapters enable you to modify both incoming BizTalk® messages and outgoing WCF messages. You can use an XPath expression to specify what part of an incoming WCF message to use as the source of the BizTalk message body, and you can use an XML template to modify the content of an outbound WCF message body.


Custom XSLT and Sequential Convoy Sushi Server Webcasts

I’ve added another two webcasts to the website. One of them has been on my to-do list for a long time: there have been a lot of BizTalk mapping questions on the forums where the best answer is “Don’t use the mapper, use custom XSLT!”, so I’ve finally added a webcast to show you how you can achieve this.
The other one is taken from a demo that I have been running in the BizTalk courses I teach for a while. It looks at how BizTalk Server can be used to take a stream of RFID tag reads and create a process that produces business-related information. I’m using WPF to create an “Animated Sushi Emporium”, complete with conveyor belt and RFID reader, and then using a fairly basic sequential convoy to determine which sushi plates have been consumed, and which are too old and need to be removed.
The full list of BizTalk downloads is here.
I have quite a few more webcasts on my to-do list, but I also do requests, so if you have an idea, let me know and I’ll see what I can come up with.

Multiple ways to generate metadata / execute Stored Procedures in the SQL adapter

If you’ve used the SQL adapter in CTP3 of the BizTalk Adapter Pack V2, you’ll notice that there are two nodes in the metadata hierarchy for stored procedures: one is named “Procedures”, and the other is named “Strongly-Typed Procedures”. What’s the difference, and what are the scenarios they are meant to solve? What are their limitations? I’ll try to explain all this in this post.


1. The first method of executing stored procedures which we added in the adapter manifests as the operations under the “Procedures” node in the metadata hierarchy. The operations here have the action “Procedure/<database_schema_name>/<procedure_name>”. For these operations, the adapter reads the system tables in SQL Server, finds the parameters to the procedure, and exposes them as IN or INOUT parameters in the WSDL. However, we still needed a way to return the result sets which the stored procedure can return at execution time. The adapter exposes these result sets as a return parameter of type System.Data.DataSet[]. Being an array, multiple result sets can be returned.

At runtime, the adapter executes the stored procedure using the ADO.NET function SqlCommand::ExecuteReader(). Each result set returned by executing the stored procedure is serialized into its own DataSet; hence, together they appear as a DataSet[]. The response XML message contains both the schema for each result set/DataSet and the actual data in that result set/DataSet. When used in a .NET application, the schema and data together are used to deserialize the SOAP message into a DataSet object.

Advantages: Works for all stored procedures.

Limitations: The result sets are loosely typed (being a DataSet[]), and in BizTalk, are not helpful if you want to use the Mapper.

Workarounds: What you could do is execute the procedure once, dump the XML message to a file location, and open it in Notepad. Select the <schema> node within the result set section of the return parameter, copy-paste it into a new file, and save it with the .xsd extension. You now have a schema which you can deploy in your BizTalk orchestration/project. Also, use the Message Template feature in the WCF-Custom/WCF-SQL port configuration, with an XPath query, to select out only the data (at runtime) for that particular result set (matching the XSD which you deployed), ignoring the other result sets (if any) and out parameters. You now have an XML blob being submitted to BizTalk which conforms to the XSD that you deployed.
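
As a sketch only, on the send port’s Messages tab this corresponds to the adapter’s inbound message-body settings; the property names below follow the BizTalk WCF adapters’ “Inbound BizTalk message body” configuration, and the XPath is illustrative and must be adjusted to match your actual response message and deployed XSD:

```xml
<!-- WCF-Custom/WCF-SQL send port, Messages tab: take the inbound BizTalk
     message body from a body path rather than the whole SOAP body.
     The XPath below is a placeholder for your own response structure. -->
<InboundBodyLocation>UseBodyPath</InboundBodyLocation>
<InboundBodyPathExpression>/*[local-name()='GetOrdersResponse']//*[local-name()='NewDataSet']</InboundBodyPathExpression>
```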


2. The next approach we took was to try to expose the result sets as “strongly typed”, instead of the loosely typed DataSet[] above. These operations manifest as the operations under the “Strongly-Typed Procedures” node in the metadata hierarchy. The action is of the form “TypedProcedure/<database_schema_name>/<procedure_name>”. For these operations, the adapter reads the system tables in SQL Server, finds the parameters to the procedure, and exposes them as IN or INOUT parameters in the WSDL. In order to expose the returned result sets in a “strongly typed” fashion, the adapter needs to know what the metadata for the returned result sets will look like. For this purpose, at design time, the adapter executes the stored procedure using the ADO.NET function SqlCommand::ExecuteReader(CommandBehavior::SchemaOnly). This translates to the SET FMTONLY ON option being used. However, in order to execute the stored procedure, the adapter needs to pass in values for the parameters; for this purpose, the adapter uses DBNull.Value for each parameter. Upon execution, the adapter gets back multiple (empty) result sets, and from these the adapter can obtain the metadata for the result sets which can potentially be returned at runtime.

Note – I use the word potentially. This is because, at runtime (FYI – the adapter uses the ADO.NET function SqlCommand::ExecuteReader() at runtime), a Stored Procedure can return different result sets based on the input values. For example, if an input parameter has the value 1, the Procedure could return a result set by performing the operation “SELECT * FROM TABLE1”. If the input parameter has the value 2, it might actually execute “SELECT * FROM TABLE2”. At runtime, only one of the two result sets will be returned, depending on whether the input parameter is 1 or 2. However, at design time, when the SET FMTONLY ON statement is used, both the (empty) result sets are returned. The adapter exposes them both as complex out parameters (and names them in the metadata as TypedProcedureResultSet1, TypedProcedureResultSet2, etc). At runtime, one will be null, while the other will have the appropriate data filled in.

Note – The current design of the WCF LOB Adapter SDK (on which the SQL adapter is based) requires metadata at runtime as well. Hence, at runtime too, the adapter executes the stored procedure using SET FMTONLY ON (just once) to obtain the metadata. Then, for every message passed to the adapter, the adapter executes the stored procedure and, based on the result sets returned, tries to determine whether each one should be serialized as TypedProcedureResultSet1, TypedProcedureResultSet2, and so on, based on the metadata it obtained earlier using SET FMTONLY ON.

Advantages: Strong typing of the result sets.

Limitations:
  • These operations cannot be used to execute “complicated” Stored Procedures – for example, a hypothetical procedure which returns multiple result sets (from potentially different tables) within a loop. This is because the adapter won’t be able to figure out which result set (at runtime) needs to be serialized as which complex type (TypedProcedureResultSet1, or TypedProcedureResultSet2, or something else).
  • These operations won’t work for stored procedures where, in the procedure code, a temporary table is created and one of the returned result sets is then obtained by doing a SELECT on that temporary table. The reason is that when the SET FMTONLY ON option is used, no temporary tables are created. So when the SQL execution engine comes across the line SELECT * FROM #temptable, it throws an error (something to the effect of not finding an artifact named #temptable), since it never created this artifact in the first place (SET FMTONLY ON is not supposed to make any changes on the server).
  • I mentioned above that at metadata-retrieval time the adapter needs to pass in values for all parameters to the stored procedure, and for this purpose it passes in DBNull. Now, if the procedure internally calls a system stored procedure, passing in one of the input parameters (which in our case is NULL), and that system stored procedure returns an error when it sees a NULL value, the adapter will get an exception at metadata-retrieval time. NOTE – this won’t happen if your custom procedure itself throws an error when it sees an input parameter having the value NULL, since for user procedures, when SET FMTONLY ON is true, the execution engine skips the evaluation of the if statements (this evaluation only happens in system stored procedures).

Workarounds: Use one of the other Stored Procedure execution methods.


3. In CTP4 (releasing end of October 2008), we’ve added one more mechanism to execute Stored Procedures. This was mainly done for backward compatibility.

The earlier SQL adapter required result sets returned from Stored Procedures to use the FOR XML syntax. This was because the old adapter used SQLXML; using FOR XML would instruct the SQL Server to return the result set as a single XML value, and the SQLXML code on the client would then parse the XML returned.

In methods 1 and 2 above, you’ve seen that the new adapter uses the ADO.NET SqlCommand::ExecuteReader() function. If this function were used to execute a stored procedure which returned a result set using FOR XML, all the adapter would see is a single XML value (exposed as a string); i.e., the adapter would think that the returned result set has only one column. Even worse, if the XML value were large, it would get split into multiple rows. This is, of course, extremely cumbersome to work with.

For this purpose, a third mechanism was added, which I call “XML Procedures”. The action for such operations is “XmlProcedure/<database_schema_name>/<procedure_name>”.

There is no design time experience for such procedures. Generating metadata involves the following:

  • Generate metadata for the same procedure under the “Procedures” node – this generates the metadata for the request message as well as the response message. Here, we are only interested in the request message schema, since that’s the format in which the request message needs to be sent (though you need to use the “XmlProcedure” action).
  • In SQL Server Management Studio, edit your Stored Procedure, and add the XMLDATA keyword at the end of the FOR XML statement (similar to what you would have done if using the older SQL Adapter).
  • Execute your Stored Procedure from SQL Server Management Studio. Before the actual data, you should see a <schema> node which has the metadata for the result set. Copy-paste the schema into a .xsd file. Also, add a root node (with namespace) to encapsulate the nodes in the data/schema (you’ll see why later).
  • This .xsd file will serve as the schema for the response message in BizTalk.
  • Remember to remove the XMLDATA keyword from the Stored Procedure code.
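
To illustrate the steps above, a hypothetical FOR XML procedure (all names here are invented for the example) would temporarily look like this while you capture the schema:

```sql
-- Hypothetical procedure used only to illustrate the steps above.
ALTER PROCEDURE dbo.GetOpenOrders
AS
BEGIN
    SELECT OrderId, CustomerName, Status
    FROM dbo.Orders
    WHERE Status = N'Open'
    -- XMLDATA makes SQL Server emit an inline <schema> before the data;
    -- remove it again once you have copied the schema into a .xsd file.
    FOR XML AUTO, XMLDATA
END
```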

At runtime, you need to do this:

  • Specify values for the XmlStoredProcedureRootNodeName (mandatory) and XmlStoredProcedureRootNodeNamespace (optional) binding properties. The XML obtained by executing the “FOR XML” Stored Procedure will be wrapped within this root node, with this namespace.
  • Use the “XmlProcedure/<database_schema_name>/<procedure_name>” action during execution (instead of “Procedure/<database_schema_name>/<procedure_name>”).
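
As a sketch, the two binding properties would appear in a WCF-Custom binding configuration roughly as follows; only the two property names come from the adapter, while the binding element name and the values are illustrative:

```xml
<bindings>
  <sqlBinding>
    <!-- XmlStoredProcedureRootNodeName is mandatory for XmlProcedure actions;
         the namespace is optional. The FOR XML result is wrapped in this
         root node with this namespace. -->
    <binding name="XmlProcedureBinding"
             xmlStoredProcedureRootNodeName="OpenOrders"
             xmlStoredProcedureRootNodeNamespace="http://example.org/orders" />
  </sqlBinding>
</bindings>
```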

When the adapter sees the XmlProcedure action, it uses the ADO.NET SqlCommand::ExecuteXmlReader() function.

Advantages: You can continue using FOR XML Stored Procedures side-by-side with the old SQL Adapter. Can work with all stored procedures which return XML.

Limitations: There is no way to generate the metadata using the adapter, since the WCF LOB Adapter SDK does not have a way to accept input parameter values at design time. The adapter cannot use NULL for the parameter values (unlike in method 2 above), since obtaining metadata involves actual execution of the stored procedure, and therefore all the stored procedure logic comes into play, which might throw an error if an unexpected value is seen for an input parameter.

Workarounds: Use one of the other Stored Procedure execution methods.

How Many Remote Desktop Windows Is Too Much?? – Cool FREE Manager Apps

Folks – it’s been one of those weeks (I know it’s only Tues 🙂)

I just got to a point where I was opening up too many RDP connections and managing
them – some using Terminal Services Gateways, others not.

Configuring BTS boxes/SQL Servers/MOSS/Indexers/Search….. and the list goes on.

From client to client or even on our network internally – my head was rapidly filling
up with random IP addresses that I wished I didn’t have to remember.

So I wanted a way to simply manage all these windows (a crude version I wrote
some years back was simply to drop 6 RDP ActiveX controls onto a web page and knock
yourself out).

I needed:

– something that works on Vista and Win2008 as well as the usual list of suspects.

– the ability to set a Terminal Services Gateway on some connections.

The candidates panned out as follows:

  1. Remote Desktops – found in Win2K3 Admin Tools SP1, which is OK as
    it presents a simple tree view and you’re away.
  2. Terminals (currently 1.7) – SENSATIONAL!!! I almost wanted to get
    VNC etc. just to use those bits.
    It’s got network tools and port scanners – just absolutely brilliant, a well-polished
    application with a very, very handy toolbar.
    Only ONE problem for me……no TSG support 🙁   – the forums
    state this is planned….. 🙂

    Check out TERMINALS HERE

  3. Royal TS – Supports RDP Terminal Service Gateway Connections 🙂
    So this is the one I’m going with for the moment – just downloading .NET 3.5 SP1
    as we speak and about to fire it up on Vista (x86).

    Does a very good job of managing RDP connections, though it doesn’t support any of the
    other protocols.

    Presents a TreeView allowing groupings of connections (although I had to ‘Create a
    Document’ first).

    Check out Royal TS HERE




Terminals *would* be the one I’d go for if it supported TSG connections……I’ll have
to check back shortly.

Enhancements to the EDI Logging tool (more)

This seems to be a big complaint: many companies would like to have access to the envelope information in the map. There are a few VERY KLUDGY ways of getting data from the message into the map, and all of them are VERY BRITTLE. The EDI logger will now create an additional message part called ‘envelope’ and send it to the message box.

If you are familiar with the multi part message that the BTAHL7 accelerator creates, this will be a breeze. For those of you not in the provider world, I will walk through the instructions on how to create a process to access the envelope information (you will see that there is a LOT more than just the envelope information).

To start off with, there are now two message parts in the message that shows up in the message box. The body part is the transaction itself (so you can still use the message in a send port map if necessary).

Here is a screen shot of the multi part message:


The schema that is deployed to the GAC, which you can reference (C:\WINDOWS\assembly\GAC_MSIL\EDIArchiveProperties\\EDIArchiveProperties.dll) in the project that will host your orchestration, has this structure (click to enlarge):


I normally create at least two different projects for each solution: one that represents the schemas, and another that represents the mapping/orchestrations. For simplicity’s sake, I have called the schema project Schemas, and the location where I am going to put my maps and orchestrations will be called Logic. Here is a snapshot of the Solution Explorer:


I added the reference to EDIArchiveProperties by adding a reference and pasting in the path to the DLL mentioned above:


Let’s create an orchestration. Here you will want to create a multi-part message, making sure you define the body segment first and the envelope segment second (click to enlarge):


Next you will create the necessary port and message from this multi-part message type. I will jump ahead to the creation of the map: you have a message defined from this multi-part type, and you are mapping it to an output. Open up the map configuration dialog window and fill in two lines on the source side – one that represents the envelope and the other that represents the EDI transaction – with the output message on the destination side.



And when pressing <OK>, this is the map that is created (click to enlarge), notice that the transaction is at the bottom of the source:


This allows you to access both the envelope and context data for that transaction, giving you the ability to map everything you need to without jumping through unnecessary hoops to get data injected into the message.

For pricing on this component, please refer to our services page.