New Send Feature in BTS 2009

I am sure y’all know (yes, I can say that, I am in Texas) that on the send side, there is a new advanced property. I discovered the following new item in the administration console:

After looking into it further, I found a new feature in the advanced options called Schedule.

I am not sure, however, if there is a way to use this setting dynamically.

Sequential and Flowchart modeling styles

WF 4 ships with an activity palette that consists of many activities – some of these are control flow activities that represent the different modeling styles developers can use to model their business process. Sequence and Flowchart are a couple of modeling styles we ship in WF 4. In this post, we will present these modeling styles, learn what they are, when to use what, and highlight the main differences between them.

Sequential modeling style

A sequential workflow executes a set of contained activities in sequential order. The Sequence activity in WF 4 allows you to model workflows in the sequential modeling style. A Sequence contains a collection of activities that are scheduled in the order in which they have been added to the collection. Hence, the order of execution of the activities is predictable.

You can add any activity to a Sequence – control flow procedural constructs like ForEach, If, Switch, While, DoWhile; or parallel constructs like Parallel and ParallelForEach to model parallel execution of logic; or any other activity we provide in the WF 4 activity palette (or your own custom activity or a third party activity).
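The same idea can be sketched in code using the .NET 4 System.Activities API (a minimal sketch; the step texts are just placeholders):

```csharp
using System;
using System.Activities;
using System.Activities.Statements;

class Demo
{
    static void Main()
    {
        // Children of a Sequence run in the order they were added,
        // so the order of execution is fully predictable.
        Activity workflow = new Sequence
        {
            Activities =
            {
                new WriteLine { Text = "Step 1" },
                new WriteLine { Text = "Step 2" },
                new WriteLine { Text = "Step 3" }   // always runs last
            }
        };

        WorkflowInvoker.Invoke(workflow);
    }
}
```

Running this prints the three steps in order, every time.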

The next figure shows a Vacation Approval workflow modeled as a sequential workflow using the Sequence activity and other activities. In this workflow, we first check if the employee has enough available days, then wait for his manager's approval, and finally update his vacation information in the company’s HR database. The activity highlighted in the orange box (Get Manager Approval) is actually a While activity (collapsed in the main Sequence) that executes another Sequence of activities (AskForApproval) while the approvedByManager variable value is False.

Workflows modeled using the sequential modeling style are easy to understand and author. They can be used to model simple to moderately complex processes. Since procedural activities have strong parity with procedural statements in imperative programming languages, you can use this type of workflow to model almost any type of process. Sequential workflows are also a good fit for modeling simple processes with no human interactions (e.g. services).

As the complexity of the process increases, so does the complexity of the workflow. In comparison to code, workflows give you the benefit of seeing your process visually and of visual debugging; however, you may want to factor the logic out into reusable custom activities to improve the readability of large workflows.

Sequential Modeling Style and Sequence Activity

Sequence is not a requirement for creating workflows that use the sequential modeling style. As we will explain later in this post, any activity in WF 4 can be the root of a workflow. Therefore, we can create a workflow that does not contain a Sequence but still uses the sequential modeling style and procedural activities. In the figure below, we have a workflow that has a ForEach as its root activity and prints all the items in a list of strings that are longer than 5 characters.
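A sketch of that workflow in code rather than XAML, assuming the .NET 4 object model (the list contents here are ours):

```csharp
using System;
using System.Activities;
using System.Activities.Statements;
using System.Collections.Generic;

class Demo
{
    static void Main()
    {
        var item = new DelegateInArgument<string>();

        // ForEach<string> as the root activity -- no Sequence required.
        Activity workflow = new ForEach<string>
        {
            Values = new InArgument<IEnumerable<string>>(ctx =>
                new List<string> { "alpha", "hi", "workflow" }),
            Body = new ActivityAction<string>
            {
                Argument = item,
                Handler = new If
                {
                    // Only print items longer than 5 characters.
                    Condition = new InArgument<bool>(ctx => item.Get(ctx).Length > 5),
                    Then = new WriteLine
                    {
                        Text = new InArgument<string>(ctx => item.Get(ctx))
                    }
                }
            }
        };

        WorkflowInvoker.Invoke(workflow);
    }
}
```

Here only "workflow" is printed, since it is the only item longer than five characters.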

Flowchart

Flowchart is a well-known and intuitive paradigm for visually representing business processes. Business Analysts, Architects, and Developers often use flowcharts as a common language to express process definitions and flow of logic.

Since the release of WF 3, customers have given us feedback about what they like and don’t like.  One common point of feedback from customers using WF 3 was that “we want the simplicity of Sequence, Parallel, etc. but the flexibility of StateMachine.”  When we dug deeper to get at the scenario behind this sentiment, we found that customers have a process (or a portion of a process) that is often quite sequential in nature but which requires “loopbacks” under certain circumstances (for some customers the circumstances are “exceptional” in nature while for other customers they are “expected”, but it really doesn’t matter to this discussion). The Flowchart activity is new in WF 4 and it squarely addresses this (rather large) class of scenarios.  Flowchart is a very powerful construct since it provides the simplicity of sequence plus the ability to loop back to a previous point of execution, which is quite common in real-life business processes to model retry logic when handling external input.

A Flowchart contains a set of nodes and arcs.  The nodes are FlowNodes – which contain activities or special common constructs such as a 2-way decision or a multi-way switch.  The arcs describe potential execution paths through the nodes.  The WF 4 Flowchart has a single path of execution; that is, it does not support split/join semantics that would enable multiple interleaved paths of execution.

The next figure shows a simplified recruiting process modeled using a Flowchart. In this case, after a résumé is received, references are checked. If the references are good, the process continues; otherwise the résumé is rejected. The next step verifies that the candidate's skills are a good match for the position offered. If the candidate is a good match, she will be offered a position. If she is not a good match for this position but is interesting for future opportunities, the résumé is saved in a database and a rejection letter is sent. Finally, if the candidate is not a good match at all, she is sent a rejection letter.

Flowchart modeling style is great to represent processes that are sequential in nature (with a single path of execution), but have loops to previous states. They use a very well known approach for modeling processes (based on “boxes and arrows”) and allow representing processes in a very visual manner. Control of flow is dominated by the transitions between the nodes and by two first-class branching activities (FlowDecision and FlowSwitch). Flowchart is a good fit to model processes with human interactions (e.g. human workflows).
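The loopback idea can be sketched in code, assuming the .NET 4 Flowchart object model (the retry logic and node names here are illustrative, not from the original post):

```csharp
using System;
using System.Activities;
using System.Activities.Expressions;
using System.Activities.Statements;

class Demo
{
    static void Main()
    {
        var retries = new Variable<int>("retries", 0);

        var tryStep = new FlowStep { Action = new WriteLine { Text = "Trying..." } };
        var done = new FlowStep { Action = new WriteLine { Text = "Done" } };

        // Loop back to a previous node after bumping the retry counter.
        var bump = new FlowStep
        {
            Action = new Assign<int>
            {
                To = new OutArgument<int>(retries),
                Value = new InArgument<int>(ctx => retries.Get(ctx) + 1)
            },
            Next = tryStep
        };

        // 2-way branch: finish after three loopbacks, otherwise loop back.
        var decision = new FlowDecision
        {
            Condition = new LambdaValue<bool>(ctx => retries.Get(ctx) >= 3),
            True = done,
            False = bump
        };
        tryStep.Next = decision;

        WorkflowInvoker.Invoke(new Flowchart
        {
            Variables = { retries },
            StartNode = tryStep,
            Nodes = { tryStep, decision, bump, done }
        });
    }
}
```

The arcs (the Next, True, and False properties) dictate the single path of execution, and pointing bump.Next at an earlier node is all it takes to model the loopback.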

Using Sequence and Flowchart together

WF 3 had a notion of a root activity. Only root activities could be used as the top level activity in a WF 3 workflow. WF 4 does not have a similar restriction. There is no notion of a root activity any more. Any activity can be the root of a workflow.

Let me explain this in more detail. Activities are a unit of work in WF. Activities can be composed together into larger Activities. When an Activity is used as a top-level entry point, we call it a "Workflow", just like Main is simply another function that represents a top-level entry point to CLR programs. Hence, there is nothing special about using Sequence or Flowchart as the top-level activity; they can be composed at will.

The next figure shows a Flowchart inside a Sequence. The workflow below has three activities: a composite activity that does some work, a Flowchart (highlighted in green) and finally another composite activity that does some more work.

The same can be also done in a Flowchart. The next figure shows a Flowchart that has a Sequence (highlighted in green), a FlowDecision, and then two WriteLine activities for the True and False paths of the decision.
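In code, this composition is just containment; a minimal sketch (the activity texts are placeholders):

```csharp
using System.Activities;
using System.Activities.Statements;

class Demo
{
    // A Flowchart used as just another child activity inside a Sequence.
    static Activity Build()
    {
        return new Sequence
        {
            Activities =
            {
                new WriteLine { Text = "Do some work" },
                new Flowchart
                {
                    StartNode = new FlowStep
                    {
                        Action = new WriteLine { Text = "Inside the flowchart" }
                    }
                },
                new WriteLine { Text = "Do some more work" }
            }
        };
    }
}
```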

Beyond Sequence and Flowchart

WF 4's simplified activity-authoring story makes it easier to write your own custom composite activities to model any control-of-flow approach of your choice. In future posts, we will show how to write your own custom activities and designers.

Conclusion

In this post we have presented the Sequential and Flowchart modeling styles. We learned that Sequence is used for modeling sequential behavior and that Flowchart is used to model processes with a single path of execution and loops to previous states. We also learned that Sequence and Flowchart can be combined and used together as any other existing activity.

The main differences between Sequence and Flowchart:

  • Order of execution: in a Sequence it is explicit and close to imperative code; in a Flowchart it is expressed as a graph with nodes and arcs.
  • Loopbacks: a Sequence represents them by combining control-of-flow activities (e.g. While + If); a Flowchart models them as first-class constructs.
  • Parity: a Sequence has parity with imperative/procedural code; a Flowchart has parity with boxes-and-arrows diagrams.
  • Activity execution: in a Sequence, activities execute in sequential order; in a Flowchart, in the order dictated by the arrows between them.
  • Best fit: a Sequence suits simple processes with no human interaction (e.g. services); a Flowchart suits complex processes with human interactions (e.g. human workflows / state machine scenarios).
  • Visual clarity: in a Sequence the flow of the process is not visually obvious; in a Flowchart control of flow is visual, dominated by Boolean decisions (FlowDecision) or switches (FlowSwitch).

Introduction to Workflow Tracking in .NET Framework 4.0 Beta1

By now you must be aware of the significantly enhanced Windows Workflow Foundation (WF) scheduled to be released with .NET Framework 4.0. The Road to WF 4.0 and the .NET Framework 4.0 Beta1 documentation for WF can give you more details. As a member of the team responsible for the development of the WF tracking feature, I am excited to discuss the components that constitute this feature.

In a nutshell, tracking is a feature that gives you visibility into the execution of a workflow. The WF tracking infrastructure instruments a workflow to emit records reflecting key events during its execution. For example, when a workflow instance starts or completes, tracking records are emitted. Tracking can also extract business-relevant data associated with workflow variables. For example, if the workflow represents an order-processing system, the order ID can be extracted along with the tracking record. In general, enabling WF tracking facilitates diagnostics or business analytics over a workflow execution. For people familiar with WF tracking in .NET 3.0, the tracking components are equivalent to the tracking service in WF 3. In WF 4.0 we have improved the performance and simplified the programming model of the WF tracking feature.

A high-level view of the tracking infrastructure is shown below.

The primary components of the tracking infrastructure are:

1) Tracking records emitted from the Workflow runtime.

2) Tracking Profile to filter tracking records emitted from a workflow instance.

3) Tracking Participants that subscribe for tracking records. The tracking participants contain the logic to process the payload from the tracking records (e.g. they could choose to write to a file).

The Workflow tracking infrastructure follows the observer pattern. The workflow instance is the publisher of tracking records and subscribers of the tracking records are registered as extensions to the workflow. These extensions that subscribe to tracking records are called tracking participants. Tracking participants are extensibility points that allow a workflow developer to consume tracking records and process them. The tracking infrastructure allows the application of a filter on the outgoing tracking records such that a participant can subscribe to a subset of the records. The mechanism to apply a filter is through a tracking profile.
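A sketch of that observer pattern, following the shape of the .NET 4 API (Beta1 type names may differ slightly; ConsoleTrackingParticipant and the sample workflow are ours):

```csharp
using System;
using System.Activities;
using System.Activities.Statements;
using System.Activities.Tracking;

// A minimal participant: the runtime pushes each emitted TrackingRecord
// to the Track override; here we just write it to the console.
class ConsoleTrackingParticipant : TrackingParticipant
{
    protected override void Track(TrackingRecord record, TimeSpan timeout)
    {
        Console.WriteLine("{0}: {1}", record.EventTime, record);
    }
}

class Demo
{
    static void Main()
    {
        var participant = new ConsoleTrackingParticipant
        {
            // A tracking profile filters the outgoing records so the
            // participant subscribes to only a subset of them.
            TrackingProfile = new TrackingProfile
            {
                Queries =
                {
                    new WorkflowInstanceQuery { States = { "Started", "Completed" } }
                }
            }
        };

        // Subscribers are registered as extensions on the workflow instance.
        var app = new WorkflowApplication(new WriteLine { Text = "Hello" });
        app.Extensions.Add(participant);
        app.Run();
    }
}
```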

The workflow runtime is instrumented to emit tracking records to follow the execution of a workflow instance. The types of tracking records emitted are:

• Workflow instance tracking records: these describe the life cycle of the workflow instance. For instance, a record is emitted when the workflow starts or completes.

• Activity tracking records: these are emitted when a workflow activity executes. They indicate the state of a workflow activity (i.e. an activity is scheduled, completes, or throws a fault).

• Bookmark resumption tracking records: a bookmark resumption record tracks any bookmark that is successfully resumed.

• User tracking records: a workflow author can create custom tracking records within a custom workflow activity and emit them from within the custom activity. Custom tracking records can be populated with data to be emitted along with the records.
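For example, a custom activity might emit a user tracking record like this (a sketch; the record name "OrderReceived" and the "OrderId" key are illustrative):

```csharp
using System.Activities;
using System.Activities.Tracking;

// A custom activity that emits a user tracking record carrying business data.
class EmitOrderRecord : CodeActivity
{
    public InArgument<string> OrderId { get; set; }

    protected override void Execute(CodeActivityContext context)
    {
        var record = new CustomTrackingRecord("OrderReceived");
        record.Data.Add("OrderId", context.GetValue(OrderId));
        context.Track(record);  // handed to the registered tracking participants
    }
}
```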

Out of the box in WF 4.0, we provide an ETW (Event Tracing for Windows) based tracking participant. The ETW tracking participant writes the tracking records to an ETW session. The participant is configured on a workflow service by adding a tracking-specific behavior in a config file. Enabling the ETW tracking participant allows tracking records to be viewed in the Event Viewer. Details of using the ETW-based tracking participant will be covered in a future post. The SDK sample for ETW-based tracking is a good way to get familiar with WF tracking using the ETW-based tracking participant.
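For illustration, the shape of that behavior configuration follows the pattern below (a sketch based on the .NET 4 bits; the element and profile names may differ in Beta1):

```xml
<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <!-- Sends the workflow service's tracking records to an ETW session -->
        <etwTracking profileName="Sample Tracking Profile" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```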

In future posts we will discuss the WF tracking feature in depth. Topics covered will include tracking profiles and tracking records, ETW tracking participant, writing custom tracking participants, variable extractions and unified tracking and tracing.

Exposing Custom WCF Headers through WCF Behaviors – Part 2

In part 1 we covered how to create a custom behavior to inject header data into the dynamically created WSDL.


In this part we will look at consuming the header data passed in.


By default, BizTalk will take any custom headers it finds in the incoming WCF message and automatically map them to the Message Context. 


If it were really this simple we wouldn’t need this posting.


So, what is the issue?  The issue is that when BizTalk maps the header to the context, it posts an XML fragment.  This fragment could certainly be used as-is and parsed each time you need it, but that gets tedious quickly and certainly doesn’t do good things for the performance of your solution.


What we need is to be able to parse the key and value of the header data when the message is submitted to BizTalk so that it looks like all of the other context entries (a key and a value pair).


There are a number of options that enable you to do this, including creating a pipeline component.  We are not going to go that route.  Instead, we are going to add code directly to our behavior.  I want everything to be encapsulated inside the behavior so that if developers decide to use the behavior they don’t also have to remember to place a pipeline in the mix.  By having a separate pipeline component we would be creating an error-prone system whose failures won’t be caught until after deployment has occurred.


To promote or write to the context when the message arrives we will modify the AfterReceiveRequest method on the SoapHeaderMessageInspector class.  This class was created in Part 1 of this series.  If you go back and look at that method you will see that we originally implemented it by returning null. 


First, let's look at what is required to write or promote to the Message Context in code.  MSDN has a sample of how this can be done, which I have included below.

const string PropertiesToPromoteKey = "http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties/Promote";
const string PropertiesToWriteKey = "http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties/WriteToContext";

XmlQualifiedName PropName1 = new XmlQualifiedName("Destination", "http://tempuri.org/2007/sample-properties");
XmlQualifiedName PropName2 = new XmlQualifiedName("Source", "http://tempuri.org/2007/sample-properties");

// Create a List of KeyValuePairs that indicate properties to be promoted to the BizTalk message context.
// A property schema must be deployed, and string values have a limit of 256 characters.
List<KeyValuePair<XmlQualifiedName, object>> promoteProps = new List<KeyValuePair<XmlQualifiedName, object>>();
promoteProps.Add(new KeyValuePair<XmlQualifiedName, object>(PropName1, "Property value"));
wcfMessage.Properties[PropertiesToPromoteKey] = promoteProps;

// Create a List of KeyValuePairs that indicate properties to be written to the BizTalk message context.
List<KeyValuePair<XmlQualifiedName, object>> writeProps = new List<KeyValuePair<XmlQualifiedName, object>>();
writeProps.Add(new KeyValuePair<XmlQualifiedName, object>(PropName2, "Property value"));
wcfMessage.Properties[PropertiesToWriteKey] = writeProps;


We are going to use this code but will format it a bit differently.  As I said earlier, we need to modify the AfterReceiveRequest method to incorporate this code.


Our method implementation will look like:


public object AfterReceiveRequest(ref System.ServiceModel.Channels.Message request, System.ServiceModel.IClientChannel channel, System.ServiceModel.InstanceContext instanceContext)
{
    List<KeyValuePair<XmlQualifiedName, object>> writeProps = new List<KeyValuePair<XmlQualifiedName, object>>();
    const string PropertiesToWriteKey = "http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties/WriteToContext";

    Int32 headerPosition = OperationContext.Current.IncomingMessageHeaders.FindHeader(SoapHeaderNames.SoapHeaderName, SoapHeaderNames.SoapHeaderNamespace);

    if (headerPosition < 0)
    {
        // Fault condition: the expected header is missing.
        throw new ArgumentNullException(SoapHeaderNames.SoapHeaderNamespace + "#" + SoapHeaderNames.SoapHeaderName, "SoapHeader not found.");
    }

    // Get an XmlDictionaryReader to read the header content.
    XmlDictionaryReader reader = OperationContext.Current.IncomingMessageHeaders.GetReaderAtHeader(headerPosition);
    XmlDocument d = new XmlDocument();
    d.LoadXml(reader.ReadOuterXml());

    foreach (XmlNode node in d.DocumentElement.ChildNodes)
    {
        if ((node.Name.ToLower().Equals(SoapHeaderNames.AppName.ToLower()) ||
             node.Name.ToLower().Equals(SoapHeaderNames.UserName.ToLower())) && String.IsNullOrEmpty(node.InnerText))
        {
            throw new ArgumentNullException(node.Name, "Header value cannot be null.");
        }

        // Queue each header element for the BizTalk message context as a key/value pair.
        XmlQualifiedName propName = new XmlQualifiedName(node.Name, SoapHeaderNames.SoapHeaderNamespace);
        writeProps.Add(new KeyValuePair<XmlQualifiedName, object>(propName, node.InnerText));
    }

    if (writeProps.Count > 0)
    {
        request.Properties[PropertiesToWriteKey] = writeProps;
    }

    return null;
}


This code shows how we can locate and read the header, and then loop through each element in the header and write it to the context. 


In order to promote properties into the context you need to have a property schema.  We took the SoapHeader.xsd that we created in Part 1 of this post and used that for our property schema. 


When we take a look at the content of the incoming message after compiling and deploying our changes, we can now see that our key name appears under the Name column and our value appears under the Value column of the Context dialog box.  We no longer have an XML fragment and no longer have to parse the fragment each time we want to use it.  Now that we have this data in the context, we can utilize it in the same way we would any other data that appears in the context.  The best part is that it was all done in one location, through one artifact, and won’t require the developer to remember to utilize another artifact to make the solution work.


In the next post, we will cover creating a behavior that exposes these properties through configuration, letting you set the header items dynamically, per endpoint, and determine dynamically whether you want the values written or promoted.

BizTalk ESB Toolkit 2.0 FAQ

It’s been about 48 hours since we released the ESB Toolkit, and the news is spreading very quickly.

We wanted all the relevant information on this hot topic to be easily discoverable.
Following is a set of Frequently Asked Questions regarding the new BizTalk ESB Toolkit 2.0, (this list is also published on the new ESB page on the BizTalk Developer Center on MSDN):

  • When was the BizTalk ESB Toolkit 2.0 officially released?
    The BizTalk ESB Toolkit 2.0 along with documentation was released to the Web on Monday, June 8, 2009.
  • From where are downloads provided?
    You can download the BizTalk ESB Toolkit 2.0 and documentation from the Microsoft Download Center. The toolkit is packaged as binaries and samples in a Windows Installer.
  • What happens to those customers who are currently using ESB Guidance 1.0?
    Customers who are using ESB Guidance 1.0 are strongly encouraged to upgrade to BizTalk Server 2009 and the BizTalk ESB Toolkit 2.0.  ESB Guidance 1.0 will be deprecated in the next few months. Also, proactive monitoring of the ESB 1.0 forums will no longer take place after the BizTalk ESB Toolkit 2.0 is released.
  • What license does it use?
    It uses a standard Microsoft, free, binary-only license.
  • How is the BizTalk ESB Toolkit 2.0 packaged?
    It is packaged as a binary-only Windows Installer (32- and 64-bit).
  • Are BizTalk ESB Toolkit 2.0 bug fixes provided?
    Bugs are addressed on a best-effort basis, by the BizTalk ESB Toolkit Team.
  • Where do customers file bugs and requests?
    The BizTalk ESB Toolkit 2.0 Connect site has been created to log bugs with the product teams and to provide updates to additional tools over time. Once you log a bug, someone will respond to you within five days with an acknowledgment and status.
  • What is the BizTalk ESB Toolkit 2.0 forum commitment?
    Assistance is provided through forums, with a one-year notice of deprecation plans. Any fixes and responses to questions in the forums are best effort, and we will continue to leverage the community to provide peer assistance, though with a capability to issue critical fixes if necessary.
  • Where is the online community hosted?
    A dedicated ESB Toolkit Forum is provided.
  • Will source code for signed binaries be provided?
    Source code for signed binaries will be available as a separate download (date to be determined).
  • How does the support policy relate to the source code of the BizTalk ESB Toolkit?
    Microsoft will not support customizations to the BizTalk ESB Toolkit source code. The source for these components will be for reference only, not for making changes.  If there are critical issues that require it, we will use standard release processes to get fixes in place for the signed binaries. 

Promoting values from destination schema of map on receive port

Hi all

So, the other day, a guy asked a question on the online forums, and another guy tried helping out by stating, among other things, that the maps on receive ports are executed before the receive pipeline. This isn’t true, and I posted a reply where I tried to explain how things work. That reply ended up being slightly wrong, so I posted a correction, but now it seems I need to post another correction, and I ended up writing this post to explain how things work.

First of all, let me set one thing straight: when a message arrives on a receive location, it is first sent through the receive pipeline that is specified on the receive location. This has to happen before the map for several reasons, including converting the input to XML and promoting the MessageType, so the receive port can choose the correct map to execute. The receive pipeline also promotes all the distinguished fields and promoted properties that are specified on the schema.

Now, after the map has finished executing, the transformation engine looks up the schema for the output and instantiates the XMLDisassembler with this particular schema, so the disassembler doesn’t have to find the correct schema itself. The engine then calls the disassembler, which reads the output from the map and promotes all distinguished fields and promoted properties to the context of the message. Also, before doing the promotion, it copies all the context from the original message, so all the properties from the adapter and so on are copied to the destination message.

Now, there are a couple of issues to this, which most people don’t realize – mostly
because they will only affect you on very very rare occasions:

Distinguished fields

I found that if you have an input message with a field marked as a distinguished field and then look at the context of the output from the map, the output message also has the distinguished field from the input message in its context, which really doesn’t make sense, since you can’t use it in any way. This has NO influence at runtime and NO influence at design time, so you can go through your life without ever noticing it. Also, we usually don’t set distinguished fields on the external schemas, because we don’t want to change them and because we don’t want to use schemas exposed to trading partners in our business processes, which is the only place we can use the distinguished fields.

Promoted Properties

If you have promoted a field from both the input and the destination schema to the same property, the value from the input schema is overwritten by the value from the destination schema after the map. For the same reasons as for the distinguished fields, we rarely have promoted properties on the external schemas, and therefore you will probably never have an issue with this.

Envelopes

If the schema for the destination message is marked as an envelope, the message will fail. This is because the disassembler will recognize that the schema is an envelope, and it will then debatch the message into several messages. Only the first is returned, though, since the mapping engine calling the disassembler assumes this is not an envelope and therefore only calls the GetNext method once, whereas normally the GetNext method is called until it returns null. This first message is then looked at, but in the properties for the disassembler, the transformation engine has already set the only allowed schema, which was the envelope schema. So the disassembler only has the envelope schema as a possible schema, and the instance that comes out after debatching is no longer the output from the map, meaning that it will fail with the standard error message: Details: "Document type "http://MyNameSpace.com#Record" does not match any of the given schemas." As with the first two issues about distinguished fields and promoted properties, this should practically never happen, since you are most likely mapping the incoming message to some internal schema, which there is usually no reason to mark as an envelope.

Conclusion

So, basically, when a message arrives, the receive pipeline is executed, then the map is executed, and at the end the XML disassembler is executed by the transformation engine so that all the fields of the destination message get promoted. There are a couple of known issues with this, but they are either totally unimportant or very unlikely to occur.

I hope this helps someone out there.



eliasen

Getting up and Running with BizTalk ESB Toolkit 2.0

Microsoft BizTalk ESB Toolkit 2.0 was released to the Web yesterday evening, and if you have not been following the earlier CTPs, now is a great time to get on board. The name has changed, a support model has been introduced, and official MSDN forums for the Toolkit have been made available.

Since there are not any VMs that you can simply download and get up and running with quickly, I decided to put together a quick and dirty installation guide for installing the Toolkit on a single server in a 32-bit virtualized environment for evaluation purposes. What follows are the steps to get you from 0 to ESB with all of the sample applications.

  1. Install the Pre-Requisite Components
    • Microsoft Windows Server 2008 or Windows Server 2003 (except Web Editions) operating system
    • Internet Information Services (IIS) 7.0 with IIS 6.0 extensions or IIS 6.0 (used for Web services and the ESB Management Portal)
    • Microsoft SQL Server 2008 or Microsoft SQL Server 2005
    • Microsoft BizTalk Server 2009 Enterprise Edition, including Business Activity Monitoring (BAM)
    • Microsoft Visual Studio 2008 SP1 (required on development computers)
    • Microsoft UDDI Services 3 (required by UDDI resolver and dependent samples)
    • .NET Framework 3.5 SP1
    • Microsoft Chart Controls for Microsoft .NET Framework 3.5
    • Enterprise Library 4.1
    • Microsoft Visual Studio 2008 Software Development Kit (SDK) 1.1 (required by the Itinerary Designer when installing from source code) [optional]
  2. Install the main MSIs (Docs/Toolkit x86)
    • Currently available for download here.
  3. Import/Install C:\Program Files\Microsoft BizTalk ESB Toolkit 2.0\Microsoft.Practices.ESB.ExceptionHandling.msi
  4. Import/Install C:\Program Files\Microsoft BizTalk ESB Toolkit 2.0\Microsoft.Practices.ESB.CORE.msi
  5. Give SQL Service account permissions to all of the BAM related databases
  6. Using the bm.exe tool, deploy the BAM Activities located at C:\Program Files\Microsoft BizTalk ESB Toolkit 2.0\Bam
    • bm deploy-all -DefinitionFile:"C:\Program Files\Microsoft BizTalk ESB Toolkit 2.0\Bam\Microsoft.BizTalk.ESB.BAM.Exceptions.xml"
    • bm deploy-all -DefinitionFile:"C:\Program Files\Microsoft BizTalk ESB Toolkit 2.0\Bam\Microsoft.BizTalk.ESB.BAM.Itinerary.xml"
  7. Run C:\Program Files\Microsoft BizTalk ESB Toolkit 2.0\Bin\ESBConfigurationTool.exe
    • You have to apply settings on each page before continuing to the next page
  8. Run C:\Program Files\Microsoft BizTalk ESB Toolkit 2.0\Bin\Microsoft.Practices.ESB.UDDIPublisher.exe
    • If errors, uncheck Require SSL setting in UDDI3 MMC
  9. Extract C:\Program Files\Microsoft BizTalk ESB Toolkit 2.0\ESBSource.zip to C:\Projects\Microsoft.Practices.ESB\
  10. Create snk at C:\Projects\Microsoft.Practices.ESB\Keys\ [command below requires Visual Studio Command Prompt]
    • sn -k Microsoft.Practices.ESB.snk
  11. Mark C:\Projects\Microsoft.Practices.ESB\ as NOT read-only
  12. Run the command below
    • powershell Set-ExecutionPolicy unrestricted
  13. Run the command below
    • C:\Projects\Microsoft.Practices.ESB\Source\Samples\DynamicResolution\Install\Scripts\Setup_bin.cmd
  14. Run the command below
    • C:\Projects\Microsoft.Practices.ESB\Source\Samples\Itinerary\Install\Scripts\Setup_bin.cmd
  15. Run the command below
    • C:\Projects\Microsoft.Practices.ESB\Source\Samples\MultipleWebServices\Install\Scripts\Setup_bin.cmd
  16. Repeat for remaining samples (setup_bin.cmd or SampleName_Install.cmd for each)
  17. Open the Visual Studio solution named ESB.Portal.sln from the C:\Projects\Microsoft.Practices.ESB\Source\Samples\Management Portal\ESB.Portal folder, and then make sure that the Web.config file contains the correct connection strings for the ESBAdmin database.
  18. Open C:\Projects\Microsoft.Practices.ESB\Source\Samples\Management Portal\ESB.AlertService\ESB.AlertService.sln, and change TargetPlatform property of the Setup project to x86.
  19. Build C:\Projects\Microsoft.Practices.ESB\Source\Samples\Management Portal\ESB.AlertService\ESB.AlertService.sln
  20. Build C:\Projects\Microsoft.Practices.ESB\Source\Samples\Management Portal\ESB.UDDI.PublisherService\ESB.UDDI.PublisherService.sln
  21. Run C:\Projects\Microsoft.Practices.ESB\Source\Samples\Management Portal\ESB.AlertService.Install\Debug\setup.exe
  22. Run C:\Projects\Microsoft.Practices.ESB\Source\Samples\Management Portal\ESB.UDDI.PublisherService.Install\Debug\setup.exe
  23. Use the My Settings page in the portal to configure the main portal settings
    • Default URL: http://localhost/ESB.Portal/PortalSettings.aspx
  24. Configure the settings on the Fault Settings page in the portal
    • Default URL: http://localhost/ESB.Portal/Admin/Configuration.aspx
  25. Configure the settings on the Registry Settings page in the portal
    • Default URL: http://localhost/ESB.Portal/Uddi/UDDIAdmin.aspx

Once you have the BizTalk ESB Toolkit 2.0 installed, I recommend you read the following documents (in order) in the official documentation. They will give you a tour of what is there, take you fairly deep into the inner workings of the Toolkit, and point out extensibility opportunities:

  1. Overview of the BizTalk ESB Toolkit
  2. Architecture of the BizTalk ESB Toolkit
  3. Itinerary-Based Routing
  4. BizTalk ESB Toolkit Message Life Cycle
  5. The ESB Management Portal and Fault Message Viewer
  6. Prerequisites for the Development Activities
  7. Development Activities
  8. Running the Itinerary On-Ramp Sample
  9. Installing and Running the Scatter-Gather Sample
  10. Installing and Running the Multiple Web Services Sample
  11. The ESB Itinerary Selector Component
  12. The ESB Itinerary Forwarder Component
  13. The ESB Dispatcher Component [Most of the magic of ESB happens here]
  14. Implementing Design Patterns in Itineraries
  15. Creating a Custom Itinerary Service