by community-syndication | Jan 6, 2010 | BizTalk Community Blogs via Syndication
Recently, I used Unity v1.2, which in my opinion is a great product. I had difficulty, however, with the configuration for a scenario I encountered: I needed an enum value to be passed to a constructor. Try as I might, there was no way Unity was accepting my typeAlias, as such:
<unity>
<typeAliases>
<typeAlias
alias="contextType"
type="System.DirectoryServices.AccountManagement.ContextType,
System.DirectoryServices.AccountManagement, Version=3.5.0.0,
Culture=neutral, PublicKeyToken=b77a5c561934e089" />
</typeAliases>
<containers>
<container name="identitystores">
<types>
<type name="primary" type="IIdentityStore" mapTo="ActiveDirectoryIdentityStore" >
<lifetime type="singleton" />
<typeConfig
extensionType="Microsoft.Practices.Unity.Configuration.TypeInjectionElement,
Microsoft.Practices.Unity.Configuration">
<constructor>
<param name="context" parameterType="contextType">
<value
value="Domain"
type="contextType" />
</param>
</constructor>
</typeConfig>
</type>
</types>
</container>
</containers>
</unity>
For the life of me, I couldn’t figure out what I did wrong. I asked on an internal discussion list, and someone mentioned writing a plain old TypeConverter. He was right; getting my enum value to work was pretty easy. The TypeConverter (as always, comments removed for clarity and provided AS IS, etc.):
public class ContextTypeTypeConverter : TypeConverter
{
public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
{
return sourceType == typeof(string);
}
public override bool CanConvertTo(ITypeDescriptorContext context, Type destinationType)
{
return destinationType == typeof(ContextType);
}
public override object ConvertFrom(
ITypeDescriptorContext context,
CultureInfo culture,
object value)
{
return Enum.Parse(typeof(ContextType), (string)value);
}
public override object ConvertTo(
ITypeDescriptorContext context,
CultureInfo culture,
object value,
Type destinationType)
{
return Enum.GetName(typeof(ContextType), value);
}
}
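As a usage sketch (not from the original post): once the converter and configuration are in place, loading the named container and resolving the mapped type might look like the following. The section and container names come from the configuration above; everything else is an assumption based on the Unity v1.2 API.

```csharp
using System.Configuration;
using Microsoft.Practices.Unity;
using Microsoft.Practices.Unity.Configuration;

// Load the "identitystores" container from the <unity> configuration section.
IUnityContainer container = new UnityContainer();
var section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity");
section.Containers["identitystores"].Configure(container);

// Unity invokes ContextTypeTypeConverter to turn the string "Domain" into
// ContextType.Domain before calling the ActiveDirectoryIdentityStore constructor.
IIdentityStore store = container.Resolve<IIdentityStore>("primary");
```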
and modified configuration file:
<unity>
<typeAliases>
<typeAlias
alias="contextType"
type="System.DirectoryServices.AccountManagement.ContextType,
System.DirectoryServices.AccountManagement, Version=3.5.0.0,
Culture=neutral, PublicKeyToken=b77a5c561934e089" />
</typeAliases>
<containers>
<container name="identitystores">
<types>
<type name="primary" type="IIdentityStore" mapTo="ActiveDirectoryIdentityStore" >
<lifetime type="singleton" />
<typeConfig
extensionType="Microsoft.Practices.Unity.Configuration.TypeInjectionElement,
Microsoft.Practices.Unity.Configuration">
<constructor>
<param name="context" parameterType="string">
<value
value="Domain"
type="contextType"
typeConverter="Microsoft.AccountManagement.Extensions.ContextTypeTypeConverter,
Microsoft.AccountManagement.Extensions, Version=1.0.0.0,
Culture=neutral, PublicKeyToken=4fd564a94067c21"/>
</param>
</constructor>
</typeConfig>
</type>
</types>
</container>
</containers>
</unity>
That’s it, now everything works just fine. HTH someone out there 🙂
by community-syndication | Jan 6, 2010 | BizTalk Community Blogs via Syndication
Greetings!
Tech·Ed North America 2010 takes place in New Orleans from June 7-10, 2010.
We would like to request Breakout Session topic ideas from product experts like yourself. We hope you will consider submitting one or more session ideas for BizTalk Server, in the Application Integration (AIN) track, before the January 15 deadline.
Steps for Call for Content submissions:
- Go to: https://northamerica.msteched.com/CFT
- Enter RSVP Access Code RSVP10-AIN for Application Integration Track
- Complete all the fields and submit the topic/s you’re interested in presenting
- When returning to the Call for Content site, use the e-mail alias and password you entered when creating your Call for Content profile to review or edit your submission, or to submit another topic.
Deadline for Submissions: January 15, 2010
Following is the track description. Please ensure that your sessions align with it:
Application Integration Track
Learn how Microsoft’s latest products and technologies can help your organization leverage the flexibility and cost effectiveness of Web service-based development and composite applications. This track covers the products and technologies that are relevant to the challenges you face and how to use them in the context of building and managing business applications. BizTalk Server, Windows Server AppFabric, SharePoint and, of course, the capabilities in .NET (WCF, WF, ASP.NET) will be brought together to show you how to use them to solve real problems.
Additional Call for Content Information
Breakout Sessions are the primary way Tech·Ed attendees receive Microsoft content. These sessions are lecture-style presentations held in rooms seating anywhere from 200-1,200 people. Breakouts are 75 minutes in length and speakers use PowerPoint slides and demos, leaving 10-15 minutes at the end to answer questions. These sessions are recorded and made available at Tech·Ed Online to all paid attendees from the other worldwide Tech·Ed conferences held during the 12 months following Tech·Ed North America.
Additional conference information
The following information will be helpful as you think about the session/s you are going to submit and, if selected, present at Tech·Ed.
Tech·Ed is Microsoft’s premier global conference designed to provide developers and IT professionals with the technical education, product information and community resources they need to design, develop, manage, secure, and mobilize state-of-the-art software solutions for a connected enterprise. Content focuses on current and soon-to-be-released (before June 2011) Microsoft products, technologies and services.
At Tech·Ed North America 2009, 30% of attendees were developers: programmers (41%), architects (28%), designers (20%), and developer managers (12%) who wanted to dive deeper into the latest enterprise development solutions using Microsoft’s developer tools, frameworks, and platforms. The remaining 70 percent of Tech·Ed attendees were IT professionals, the majority being Infrastructure Managers (48%) and IT Managers (28%). They were interested in the best ways to plan, design, deploy, manage and secure connected enterprise systems.
Review and notification
- Session submissions are reviewed to determine which best meet the needs of the Tech·Ed audience, adhere to the Track framework and content focus, and fulfill the messaging requirements of the product groups.
- Session selections will be made and you will be notified by e-mail in February 2010.
Thank you for your time. We look forward to seeing your Breakout Session ideas.
http://northamerica.msteched.com/
by community-syndication | Jan 5, 2010 | BizTalk Community Blogs via Syndication
I agree entirely with Nick Heppleston’s post: you had better test your DR plan in advance of a real disaster if you really want it to work! I can speak from experience here – my company’s previous plan sounded perfectly fine to me when I worked on it. In fact, when I discussed […]
by community-syndication | Jan 5, 2010 | BizTalk Community Blogs via Syndication
Here’s a great post that explains the new pricing model announced yesterday for Windows Azure platform AppFabric usage. At first read, this new model makes sense to me and seems quite reasonable, in addition to being more predictable for customers.
http://blogs.msdn.com/netservices/archive/2010/01/04/announcing-windows-azure-platform-commercial-offer-availability-and-updated-appfabric-pricing.aspx
by community-syndication | Jan 5, 2010 | BizTalk Community Blogs via Syndication
Again another post in the series of more advanced things you can do with the PowerShell provider for BizTalk.
When debugging BizTalk solutions you find yourself many times in a situation where you need to attach the Visual Studio debugger to the running BizTalk host instance. This is very easy to do. In Visual Studio you […]
by community-syndication | Jan 5, 2010 | BizTalk Community Blogs via Syndication
I had a great start to the new year. On January 1st, I received the ’Congratulations 2010 Microsoft MVP!’ email from Microsoft. I almost missed it because it was delivered to the junk mail folder. I do not check that folder too often.
I’m really honored and excited. I would like to thank the […]
by community-syndication | Jan 5, 2010 | BizTalk Community Blogs via Syndication
[In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]
This is the thirteenth in a series of blog posts I’m doing on the upcoming VS 2010 and .NET 4 release. Today’s post covers some of the improvements being made around Search Engine Optimization (SEO) with ASP.NET 4.
Why SEO?
Search engine optimization (SEO) is important for any publicly facing web-site. A large percentage of traffic to sites now comes from search engines, and improving the search relevancy of your site will lead to more user traffic to your site from search engine queries (which can directly or indirectly increase the revenue you make through your site).
Measuring the SEO of your website with the SEO Toolkit
Last month I blogged about the free SEO Toolkit we’ve shipped that you can use to analyze your site for SEO correctness, and which provides detailed suggestions on any SEO issues it finds.
I highly recommend downloading and using the tool against any public site you work on. It makes it easy to spot SEO issues you might have in the site, and pinpoint ways to optimize it further.
ASP.NET 4 SEO Improvements
ASP.NET 4 includes a bunch of new runtime features that can help you to further optimize your site for SEO. Some of these new features include:
- New Page.MetaKeywords and Page.MetaDescription properties
- New URL Routing support for ASP.NET Web Forms
- New Response.RedirectPermanent() method
Below are details about how you can take advantage of them to further improve your search engine relevancy.
Page.MetaKeywords and Page.MetaDescription properties
One simple recommendation to improve the search relevancy of pages is to make sure you always output relevant “keywords” and “description” <meta> tags within the <head> section of your HTML. For example:
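The original post showed the markup here as a screenshot; a representative example of such <meta> tags might look like this (the content values are illustrative):

```html
<head>
    <title>ASP.NET Meta Tag Example</title>
    <meta name="keywords" content="ASP.NET, SEO, keywords, meta tags" />
    <meta name="description" content="A page that demonstrates keywords and description meta tags." />
</head>
```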
One of the nice improvements with ASP.NET 4 Web Forms is the addition of two new properties to the Page class: MetaKeywords and MetaDescription that make programmatically setting these values within your code-behind classes much easier and cleaner.
ASP.NET 4’s <head> server control now looks at these values and will use them when outputting the <head> section of pages. This behavior is particularly useful for scenarios where you are using master-pages within your site – and the <head> section ends up being in a .master file that is separate from the .aspx file that contains the page specific content. You can now set the new MetaKeywords and MetaDescription properties in the .aspx page and have their values automatically rendered by the <head> control within the master page.
Below is a simple code snippet that demonstrates setting these properties programmatically within a Page_Load() event handler:
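The original snippet was an image; a minimal code-behind sketch of the same idea (the property values are illustrative) might look like:

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    // Rendered as <meta name="keywords"> and <meta name="description">
    // by the <head runat="server"> control.
    Page.MetaKeywords = "ASP.NET, SEO, keywords, meta tags";
    Page.MetaDescription = "A page that demonstrates keywords and description meta tags.";
}
```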
In addition to setting the Keywords and Description properties programmatically in your code-behind, you can also now declaratively set them within the @Page directive at the top of .aspx pages. The below snippet demonstrates how to-do this:
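Again, the original snippet was an image; a declarative version might look like the following (the CodeBehind and Inherits values are placeholders):

```aspx
<%@ Page Language="C#" CodeBehind="Products.aspx.cs" Inherits="MySite.Products"
    MetaKeywords="ASP.NET, SEO, keywords, meta tags"
    MetaDescription="A page that demonstrates keywords and description meta tags." %>
```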
As you’d probably expect, if you set the values programmatically, they will override any values set declaratively in either the <head> section or via the @Page directive.
URL Routing with ASP.NET Web Forms
URL routing was a capability we first introduced with ASP.NET 3.5 SP1, and which is already used within ASP.NET MVC applications to expose clean, SEO-friendly “web 2.0” URLs. URL routing lets you configure an application to accept request URLs that do not map to physical files. Instead, you can use routing to define URLs that are semantically meaningful to users and that can help with search-engine optimization (SEO).
For example, the URL for a traditional page that displays product categories might look like below:
http://www.mysite.com/products.aspx?category=software
Using the URL routing engine in ASP.NET 4 you can now configure the application to accept the following URL instead to render the same information:
http://www.mysite.com/products/software
With ASP.NET 4.0, URLs like above can now be mapped to both ASP.NET MVC Controller classes, as well as ASP.NET Web Forms based pages. You can even have a single application that contains both Web Forms and MVC Controllers, and use a single set of routing rules to map URLs between them.
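A sketch of how such a route could be registered for a Web Forms page in ASP.NET 4, using the new MapPageRoute helper in Global.asax (the route name and physical file are assumptions):

```csharp
using System;
using System.Web.Routing;

public class Global : System.Web.HttpApplication
{
    void Application_Start(object sender, EventArgs e)
    {
        // Map the friendly URL /products/{category} to the physical products.aspx page.
        RouteTable.Routes.MapPageRoute(
            "products-route",       // route name (assumed)
            "products/{category}",  // URL pattern
            "~/products.aspx");     // physical Web Forms page
    }
}
```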
Please read my previous URL Routing with ASP.NET 4 Web Forms blog post to learn more about how the new URL Routing features in ASP.NET 4 support Web Forms based pages.
Response.RedirectPermanent() Method
It is pretty common within web applications to move pages and other content around over time, which can lead to an accumulation of stale links in search engines.
In ASP.NET, developers have often handled requests to old URLs by using the Response.Redirect() method to programmatically forward a request to the new URL. However, what many developers don’t realize is that the Response.Redirect() method issues an HTTP 302 Found (temporary redirect) response, which results in an extra HTTP round trip when users attempt to access the old URLs. Search engines typically will not follow across multiple redirection hops – which means using a temporary redirect can negatively impact your page ranking. You can use the SEO Toolkit to identify places within a site where you might have this issue.
ASP.NET 4 introduces a new Response.RedirectPermanent(string url) helper method that can be used to perform a redirect using an HTTP 301 (moved permanently) response. This will cause search engines and other user agents that recognize permanent redirects to store and use the new URL that is associated with the content. This will enable your content to be indexed and your search engine page ranking to improve.
Below is an example of using the new Response.RedirectPermanent() method to redirect to a specific URL:
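The example in the original post was an image; the call itself is a one-liner, for instance within the old page’s Page_Load (the target URL is illustrative):

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    // Issues an HTTP 301 (moved permanently) instead of the HTTP 302
    // that Response.Redirect() would send.
    Response.RedirectPermanent("/products/software");
}
```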
ASP.NET 4 also introduces new Response.RedirectToRoute(string routeName) and Response.RedirectToRoutePermanent(string routeName) helper methods that can be used to redirect users using either a temporary or permanent redirect using the URL routing engine. The code snippets below demonstrate how to issue temporary and permanent redirects to named routes (that take a category parameter) registered with the URL routing system.
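Sketches of the two route-based helpers (the route name and category parameter are assumptions, matching the earlier products example):

```csharp
// Temporary (HTTP 302) redirect to a named route that takes a category parameter:
Response.RedirectToRoute("products-route", new { category = "software" });

// Permanent (HTTP 301) redirect to the same named route:
Response.RedirectToRoutePermanent("products-route", new { category = "software" });
```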
You can use the above routes and methods for both ASP.NET Web Forms and ASP.NET MVC based URLs.
Summary
ASP.NET 4 includes a bunch of feature improvements that make it easier to build public facing sites that have great SEO. When combined with the SEO Toolkit, you should be able to use these features to increase user traffic to your site – and hopefully increase the direct or indirect revenue you make from them.
Hope this helps,
Scott
by community-syndication | Jan 4, 2010 | BizTalk Community Blogs via Syndication
I am going to do a series of three blog posts (and accompanying videos) about how to integrate BizTalk and Azure. As you know, Azure is in the process of going live, and BizTalk can play a very compelling role bridging between on-premise and off-premise.
This post shows two key interactions with the Windows Azure platform AppFabric Service Bus:
- receiving a message from the Service Bus through a BizTalk receive location
- publishing to the Service Bus from an InfoPath form
It is the first point that I think illustrates what will become a VERY commonly used pattern. This sample is based on the Order Demo I created previously, and you can go watch those videos first to get a sense of what the demo is. It brings together BizTalk, SharePoint, ESB Toolkit, SQL Server Analysis Services, SQL Server Reporting Services, InfoPath and more, including showing how to use the Business Rules Engine for dynamic itinerary selection (a favorite pattern of mine). I blogged about that here. I like it because it’s a “SharePoint-based ESB-driven BizTalk-powered workflow”, really leveraging the power of the Microsoft stack.
The video for this blog post is available here at the MSDN BizTalk Developer site.
My goal in this post is to take an on-premises business process and make it externally available. Traditionally, in a BizTalk environment, that would typically mean exposing a schema or orchestration as a Web service, and then reverse-proxying it so it can be reached from the outside world, probably with a load balancer in the mix.
There’s a new tool in our toolbox now: Windows Azure platform AppFabric, which includes the Service Bus and the Access Control Service. So, I set out to use that as an external relay, and an entry point into my existing order process demo.
Before we integrate with the existing system, let’s get the client->ServiceBus->BizTalk flow working.
Once you’ve signed up for the Azure service and received your developer token, the first thing you need to do is install the Windows Azure platform AppFabric SDK in your BizTalk environment. One of the things this gives you is a set of new WCF bindings, which is what I used in conjunction with BizTalk’s WCF-Custom adapter. I created a new BizTalk project, a new receive port and receive location, as shown below:
I had many options available, but as I was going .NET (BizTalk) to .NET (Service Bus), I opted to use the Net TCP binding for greater efficiency (perhaps not so important for a demo, but, might as well do it right). Note also that transport-level security is enabled.
For credentials, I used “shared secret”. This means that it will use the Access Control Service to authenticate me, and the corresponding issuerName and issuerSecret need to correspond to what you have set up using the Azure portal.
Normally, that would be it, but in my case there was a structural difference between what was being relayed to me from the Service Bus and what I needed for my existing process. So, I used the WCF adapter’s capability to reach into the message and pass on only what I had specified as the body, based on my XPath expression:
And that’s it! Enabling that receive location will create an endpoint “listener” (subscriber) in Azure’s Service Bus. Pretty remarkable really, when you consider that I needed no magic at the BizTalk end other than a new WCF binding, and I can now subscribe to messages from the cloud. This is the sort of thing where, when you first see it work, you sit back in your chair, say “wow”, and then think about what the implications are.
Next step, we need a publisher. My existing demo already had an InfoPath form, so I wanted to use that. However, how do you make InfoPath talk to the Windows Azure platform Service Bus? Now, I’m hardly an InfoPath expert, but I thought I’d give it a shot anyhow. Following the lead from samples in the Azure SDK, I knew I had to set up some contracts. I had to embed this right in the form code for the form; perhaps there’s a better way (any InfoPath experts, I’d love to know), but my goal here was “just make it work”, knowing full well that I’m dealing with CTPs and the ultimate production-ready approach would likely change anyhow. So, with that as my mindset, the following is what I ended up with. Notice the OrderRelayService class, and how it aligns with the URI in the BizTalk receive location.
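The contract itself was shown as a screenshot in the original post. A hypothetical sketch of what such a one-way contract could look like follows; all names here (IOrderRelay, SubmitOrder, the namespace) are assumptions, not taken from the original demo:

```csharp
using System.ServiceModel;

[ServiceContract(Namespace = "http://samples/orderrelay")]
public interface IOrderRelay
{
    // One-way publish: the form fires the order at the Service Bus and moves on.
    [OperationContract(IsOneWay = true)]
    void SubmitOrder(string orderXml);
}

// Channel shape used by the form code when creating a client channel.
public interface IOrderRelayChannel : IOrderRelay, IClientChannel { }
```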
Next, the “Save” button code just sets the order status, and then calls the SendForm method:
Lastly, we have the sending part. Yes, I have my credentials in the clear here, but once this post goes live those credentials will have changed 🙂
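The sending code was also shown as a screenshot. A hypothetical sketch, assuming the contract above: the namespace, path, and credentials are placeholders, and the exact API surface of the CTP-era AppFabric SDK may differ from what shipped later.

```csharp
using System.ServiceModel;
using Microsoft.ServiceBus;

string orderXml = "<Order>...</Order>";  // the form's serialized order data

// Address of the BizTalk-hosted relay endpoint (namespace and path assumed).
var address = new EndpointAddress(
    ServiceBusEnvironment.CreateServiceUri("sb", "yournamespace", "OrderRelayService"));

// Shared secret credentials, matching what was set up on the Azure portal.
var credentials = new TransportClientEndpointBehavior
{
    CredentialType = TransportClientCredentialType.SharedSecret
};
credentials.Credentials.SharedSecret.IssuerName = "YOUR_ISSUER";
credentials.Credentials.SharedSecret.IssuerSecret = "YOUR_SECRET";

var factory = new ChannelFactory<IOrderRelayChannel>(new NetTcpRelayBinding(), address);
factory.Endpoint.Behaviors.Add(credentials);

using (IOrderRelayChannel channel = factory.CreateChannel())
{
    channel.SubmitOrder(orderXml);  // publish the form's XML to the Service Bus
}
```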
And that’s it: we can now use an InfoPath form to send messages to the Service Bus, and if the Access Control Service authorizes it, the message will be published. BizTalk will have an endpoint listening and will pick up that message.
The last thing we need to do now is hook this into the existing process flow. If you watched the original Order Demo video, you’ll know that I was creating orders in InfoPath, publishing to a SharePoint document list, and from there they were picked up by a BizTalk receive location that used the SharePoint adapter. As part of that, we use one of the standard ESB Toolkit pipelines to select and apply an itinerary, and we selected the itinerary based on a rules engine decision. So, how hard would it be to integrate that functionality into the messages we’re picking up from the Service Bus? Simple, we just add that pipeline to the receive location, and we’re done.
In this post, and the accompanying video, I have shown you how to take an on-premise BizTalk application and make it available externally via the Windows Azure platform Service Bus, just by adding another receive location. And a huge benefit you get here, compared with exposing your own endpoint, is the scalability of the cloud. You don’t need to worry about having servers in a DMZ, reverse proxies, load balancers, or any of that infrastructure “goo”.
At this point you may want to sit back, say “wow”, and think about what new possibilities this opens up and the solutions you can now create, combining on-premise BizTalk ESB (and perhaps some of your existing applications) with the Windows Azure platform.
by community-syndication | Jan 4, 2010 | BizTalk Community Blogs via Syndication
As part of today’s announcement about the commercial availability of Windows Azure platform offers, we are also introducing updated pricing for the Windows Azure platform AppFabric, which helps developers connect cloud and on-premises applications. Based on discussion and feedback from hundreds of customers during the CTP process, we have made the pricing simpler and more predictable. Service Bus will now be priced at $3.99 per Connection-month, and Access Control will be $1.99 per 100,000 Transactions.
Last November at the 2009 Professional Developers’ Conference, Microsoft announced a new product offering named AppFabric. AppFabric delivers services that enable developers to build and manage composite applications more easily for both server and cloud environments. The server components, known as Windows Server AppFabric, provide caching capabilities and workflow and service management capabilities for applications that run on-premises. The cloud components, known as Windows Azure platform AppFabric (formerly called “.NET Services”), include cloud-based services that help developers connect applications and services between Windows Azure, Windows Server and a number of other platforms. Windows Azure platform AppFabric is available as a production ready service today and includes two services: Service Bus, which makes it easier to connect applications and services in the cloud or on-premises, and Access Control, which provides federated authorization as a service.
SERVICE BUS PRICING
For Service Bus, the pricing meter has changed from “Message Operations” to “Connections”. In many cases, each application instance that connects to the Service Bus will require just one Connection, which means that predicting your usage is often as simple as counting the number of application instances or devices that you need to connect. Whether your application requires two-way messaging, event distribution, protocol tunneling, or another architecture, the Connection-based model is designed to suit your business needs. Connections are charged at a rate of $3.99 per Connection per month (plus applicable data transfer charges), and will be billed on a pay-as-you-go, consumptive basis. Alternatively, for customers who are able to forecast their needs in advance, we offer the option to purchase “Packs” of Connections: a pack of 5 Connections for $9.95, a pack of 25 for $49.75, a pack of 100 for $199.00, or a pack of 500 for $995.00 per month (plus data transfer). Connection Packs represent an effective rate of $1.99 per Connection-month. Pack sizes larger than 500 may be available on request. In our FAQ, we provide more details on how Connections are defined, measured, and billed.
We expect that most customers will find this new meter to be simpler and more predictable. While the former Message Operations meter was well suited for uses such as discrete transactional messaging, it has been more complicated in other cases. For example, what happens if you want to stream a large file, tunnel a protocol persistently, or deploy a lot of devices that all “listen” idly all day? Knowing what gets counted as a message, and predicting usage from day to day, was difficult.
It turns out that our customers have all of those uses in mind, which is a good thing. Customers have asked for simpler pricing, and we are now able to deliver this for a wider range of uses, including streamed data, protocol tunneling, and transactional messaging. This new pricing structure will help make it easier for you to understand and control when and how many Connections are being used. In addition, it provides increased predictability, because under normal circumstances, your total Connection cost will stay the same whether you generate more or less “message” volume from one month to the next.
ACCESS CONTROL PRICING
For Access Control, the pricing meter has changed from “Message Operations” to “Transactions”. In practice, these meters are the same; only the name has been changed to reflect the Access Control function more accurately. As previously announced, token requests and service management operations will both be counted as Transactions, and charged at a rate of $1.99 per 100,000 Transactions.
AVAILABILITY
The Service Bus and Access Control are available today; however, to give customers more time to adjust to the new pricing structure, charges will not start to accrue until April 2010. Usage until that time will be free of charge, so we encourage you to upgrade your account and sign up for an offer today. Starting today, customers can already take advantage of the same support and benefits provided across the Windows Azure platform. SLAs will take effect when charges begin to accrue in April 2010. To help customers monitor and predict their usage before charges begin to accrue, Connection and Transaction usage reports will be made available soon on the developer portal at http://appfabric.azure.com/.
For more information, please visit our FAQ and pricing pages.
by community-syndication | Jan 4, 2010 | BizTalk Community Blogs via Syndication
In .NET 4.0, we have introduced a framework for correlation. What do I mean by correlation? I’m glad you asked. In our vocabulary, a “correlation” is actually one of two things:
- A way of grouping messages together. A classic example of this is sessions in WCF, or even more simply the relationship between a request message and its reply.
- A way of mapping a piece of data to a service instance. Let’s use sessions as an example here also, because it makes sense that I’d want all messages in a particular session (SessionId = 123) to go to the same instance (InstanceId = 456). In this case, we’ve created an implicit mapping between SessionId = 123 and InstanceId = 456.
As you can see, these patterns are related, hence why we call them both “correlations”. But sessions are inherently short-lived, tied to the lifetime of the channel. What happens if my service is long-running and the client connections aren’t? The world of workflow services reinforces the need for a broader correlation framework, which is why we’ve invested in this area in .NET 4.0.
There are two operations that can be performed on a correlation: it can be initialized, or it can be followed. This terminology is not new; in fact, BizTalk has had correlation for many releases. But what does “initializing” a correlation mean? Another great question. Well, it is simply creating this mapping between the data and the instance.
In addition, there are many types of correlation available in .NET 4.0, but let’s focus on this category of associating data with an instance. When that data comes from within the message itself (e.g. as a message header or somewhere in the body of the message), we call that content-based correlation.
Ok, too much theory and not enough application; let’s look at how this manifests in WF 4.0. With every message sent or received from your workflow, you’ve got an opportunity to create an association between a piece of data in that message and the workflow instance. That is, every messaging activity (Receive, SendReply, Send, & ReceiveReply) has a collection of CorrelationInitializers which let you create these associations. Here’s what the dialog looks like in Visual Studio 2010:
As you can see, it’s been populated with a Key and a Query. The Key is just an identifier used to differentiate queries, e.g. DocId = 123 should be different than CustomerId = 123. The Query part is how we retrieve the data from the message; in this case, it’s an XPath expression that points within the body of the ReceiveDocument request message to the Document.Id value. Some of the resulting correlation information is pushed into a CorrelationHandle variable (the DocIdCorrelation), which will be used by later activities to follow this correlation.
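The same configuration can also be expressed in code rather than in the designer dialog. Here is a hypothetical sketch; the contract name, XPath expression, and namespace prefix are assumptions matching the description above, and the prefix would still need to be registered with the query’s namespace table at runtime:

```csharp
using System.Activities;
using System.ServiceModel;
using System.ServiceModel.Activities;

// Handle that later activities use to follow this correlation.
var docIdHandle = new Variable<CorrelationHandle>("DocIdCorrelation");

var receiveDocument = new Receive
{
    OperationName = "ReceiveDocument",
    ServiceContractName = "IDocumentService",
    CorrelationInitializers =
    {
        new QueryCorrelationInitializer
        {
            CorrelationHandle = new InArgument<CorrelationHandle>(docIdHandle),
            MessageQuerySet = new MessageQuerySet
            {
                // The Key ("DocId") differentiates queries; the XPath points
                // at the Document.Id value in the request body.
                { "DocId", new XPathMessageQuery("//doc:Document/doc:Id") }
            }
        }
    }
};
```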
Now, you might be wondering: if I already have the Document.Id value in a workflow variable, why do I need this XPath expression in order to initialize my correlation? That’s a great point. In fact, we wrote another activity just for this purpose: the InitializeCorrelation activity. As you would expect, the dialog looks very similar to what we just saw (Note: this dialog is different than what is present in Visual Studio 2010 Beta2):
This activity is particularly useful in scenarios where the data to initialize the correlation comes from a database, or if you need to create a composite correlation from multiple pieces of data, like a customer’s first and last name concatenated together. For all of you BizTalk experts, this means no more “dummy send” pattern! Hooray!
OK, you’ve initialized a correlation; now what? Regardless of how you’ve initialized it, a correlation is only useful if it is followed. A Receive activity follows a correlation by correlating on the same piece of data (in this case, a DocId) and specifying the same CorrelationHandle. Imagine that the document approval process includes some opportunity to update the document before the official approval is given. An UpdateDocument request message is sent, which contains the DocumentId value in it. Here we specify the XPath expression to point to that particular piece of data in our incoming message. We also set the CorrelatesWith property to the same CorrelationHandle we specified previously; this ensures that when the Receive activity starts executing and goes idle (sets up its bookmark), the WorkflowServiceHost knows what correlation this workflow is waiting on and can resume the correct instance when the message with the corresponding DocumentId comes in.
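A sketch of what the following Receive could look like in code; as before, the operation name, XPath, and namespace prefix are assumptions, and docIdHandle is the CorrelationHandle variable initialized earlier:

```csharp
using System.Activities;
using System.ServiceModel;
using System.ServiceModel.Activities;

// A later Receive follows the correlation by correlating on the same piece of
// data and pointing CorrelatesWith at the previously initialized handle.
var updateDocument = new Receive
{
    OperationName = "UpdateDocument",
    ServiceContractName = "IDocumentService",
    CorrelatesWith = new InArgument<CorrelationHandle>(docIdHandle),
    CorrelatesOn = new MessageQuerySet
    {
        // Same Key ("DocId"), with an XPath pointing at the DocumentId
        // value inside the incoming UpdateDocument message.
        { "DocId", new XPathMessageQuery("//doc:UpdateDocument/doc:DocumentId") }
    }
};
```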
And now you can consider yourself a content-based correlation expert! No longer are you restricted to using a context-based binding for communicating with instances of your workflow services! Now that’s freedom. Give it a shot and let us know what you think!