The WCF services ecosystem

At PDC, Microsoft announced the rebranding of ADO.NET Data Services as WCF Data Services and of .NET RIA Services as WCF RIA Services. This is not just a product marketing decision – it is also a technical commitment to provide a coherent and unified services story on .NET.


The current implementations of ADO.NET Data Services (previously codenamed 'Astoria') and .NET RIA Services (previously codenamed 'Alexandria') are based on WCF. In fact, they are WCF services. Moving forward, future releases will align the technologies so that their features can be used in a mix-and-match manner as appropriate. We are currently in the early stages of investigating potential areas of deeper integration, such as enabling WCF RIA Services to support an appropriate subset of ADO.NET Data Services' Open Data Protocol (OData), and making validation features that are currently only available in WCF RIA Services available in other flavors of WCF services as well.


By unifying these services offerings on top of WCF, we are maximizing developer knowledge transfer and skill reuse in both the short term and the long term. A WCF RIA Services developer does not need to know all aspects of WCF to get a service up and running. However, if they want to add a WCF behavior to their WCF RIA Service, or take advantage of WCF's rich extensibility, they can do so in a fashion that leverages the unified communications programming model that is WCF.


Thus, as a result of this alignment, .NET will offer several different flavors of WCF services (listed below) that you can choose from based on your particular needs. The important thing to remember is that these options all build on the underlying WCF architecture. As such, these are not binary choices; they are different entry points into a single distributed programming framework rather than competing programming options. We expect many applications will leverage multiple models, and developer knowledge will easily transfer from one model to another.




  • WCF Core Services – Allows full flexibility for building operation-centric services, including industry-standard interop as well as channel and host pluggability. Use this for operation-based services which need to do things such as interoperate with Java, be consumed by multiple clients, flow transactions, use message-based security, perform advanced messaging patterns like duplex, use transports or channels in addition to HTTP, or be hosted in processes outside of IIS.


  • WCF WebHttp/AJAX Services – Best when you are exposing operation-centric HTTP services to be deployed at web scale, or are building a RESTful service and want full control over the URI/format/protocol.


  • WCF Data Services – Including a rich implementation of OData for .NET, Data Services are best when you are exposing your data model and associated logic through a RESTful interface.


  • WCF Workflow Services – Best for long-running, durable operations, or where the specification and enforcement of operation sequencing is important.


  • WCF RIA Services – Best for building an end-to-end Silverlight application.


If you want to learn more about the different WCF services at PDC please check out the following sessions:




  • FT13   What’s New for Windows Communication Foundation


  • FT55   Developing REST Applications with the .NET Framework


  • CL06   Networking and Web Services in Silverlight


  • CL07   Mastering Microsoft .NET RIA Services


  • CL21   Building Amazing Business Applications with Microsoft Silverlight and Microsoft .NET RIA Services


  • FT10   Evolving ADO.NET Entity Framework in .NET 4 and Beyond


  • FT12   ADO.NET Data Services: What’s new with the RESTful data services framework


Thanks, and looking forward to your feedback!

New Web Services features in Silverlight 4 Beta

Cross-posted from the Silverlight Web Services Team Blog. 


This morning at PDC ’09, ScottGu announced the availability of Silverlight 4 Beta. Later today I will be presenting the latest improvements around networking and web services, and I’ll link to the full talk as soon as it is available online. In this post I’ll provide a quick summary of today’s announcements, with more detail to follow.


At a high level, we are announcing an exciting alignment between the different web services stacks in Silverlight. ADO.NET Data Services and .NET RIA Services are being rebranded as WCF Data Services and WCF RIA Services to reflect the fact that both technologies are being built out as programming models on top of WCF. In a way, this is not really major news: to you as a developer, pretty much everything stays the same, and you can continue using your favorite technology, whether it is straight WCF, WCF RIA Services, or WCF Data Services.


RIA Services and Data Services give you productive patterns for specific kinds of services and applications, hiding away some of the complexity of using WCF directly. The power of WCF is still there for you under the covers, if you need to modify some setting to your liking.


Specifically within the core WCF model, Silverlight 4 Beta has support for a brand new binding: NetTcp. This binding lets Silverlight talk to WCF services over a high-performance TCP pipe, using a duplex message pattern. In Silverlight, the binding is built on top of the sockets support that has been there since Silverlight 2, so we inherit the security requirements of the Silverlight sockets API. More specifically, the service needs to be hosted in a given port range (4502–4534) and needs to expose a policy responder on port 943. One more thing to be aware of: the security support and the streamed programming model for NetTcp available in WCF on the desktop framework are not available in Silverlight 4 Beta.
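To make that concrete, here is a minimal sketch of a Silverlight 4 Beta client calling a NetTcp endpoint. The proxy type MyServiceClient, the operation DoWork, and the host name are assumptions for illustration; only the binding, the port range, and the policy-responder requirement come from the announcement above.

```csharp
// Hypothetical sketch: MyServiceClient is an assumed generated proxy and the
// host name is illustrative. The service must listen in the 4502-4534 port
// range, and a policy responder must answer on port 943 of the same host.
var binding = new NetTcpBinding(); // no transport security in the Beta
var address = new EndpointAddress("net.tcp://myserver:4502/MyService");

var client = new MyServiceClient(binding, address);
client.DoWorkCompleted += (s, e) => { /* handle the asynchronous result */ };
client.DoWorkAsync(); // Silverlight service calls are always asynchronous
```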


We’ll have a lot more content for you coming up soon, including the code from my talk today. If you want to get your hands dirty right away, go get the Silverlight 4 Beta, and then try the steps in this how-to in our MSDN documentation, which has already been updated and shows usage for NetTcp.


More information:



Thanks, and looking forward to your feedback!
Yavor Georgiev
Program Manager, Silverlight
 

Processing an Excel upload with nServiceBus



[Source: http://geekswithblogs.net/EltonStoneman]

We’ve had a couple of projects recently with similar requirements to process an Excel file as a batch upload of data. One was a BizTalk project where the FarPoint Spread pipeline component was a good fit; the other was a Web app where we put together a custom parser based on the open-source ExcelDataReader. The custom solution was appropriate for the expected size of upload files, but wouldn’t scale well to deal with large files quickly, so I wanted to look at a distributed alternative using nServiceBus. My sample implementation is on MSDN Code Gallery here: nServiceBus Excel Upload. I’ll look at a comparative BizTalk solution in a future post.

If you haven’t come across nServiceBus, it’s a queue-based messaging framework which is inherently asynchronous. “Scalability and reliability are in its DNA”, and it has some impressive case studies. Using nServiceBus you can set up a simple publish-subscribe architecture between nodes, or a load-balanced architecture with a central distributor. In the distributed version, the upload sample looks like this:

(Note that the diagram represents the bus as a separate entity; in reality it’s distributed among the queues of all the nodes. The diagram also omits the distributor.)

In nServiceBus, services are requested by publishing messages onto the bus. Requests are fulfilled by a handler which subscribes to a type of message. The Excel upload sample takes a workbook which contains a set of products and uploads them to the AdventureWorks database. There are three types of message:

  • StartBatchUpload – published when a file has been received and is ready to be processed; subscriber does some basic validation on the Excel data structure, and then for each row in the worksheet publishes an AddProduct message;
  • AddProduct – subscriber maps the product defined in the message to a stored procedure call which inserts the new product. When the last product in the batch is reached, the subscriber sends a BatchStatusChanged message to the original publisher of the StartBatchUpload message;
  • BatchStatusChanged – logs the status change and renames the Excel upload file.
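To make the handler pattern concrete, here is a hedged sketch of what the AddProduct subscriber might look like. The message members, the InsertProduct helper, and the handler interface name (which varies between nServiceBus versions) are illustrative assumptions, not the actual sample code.

```csharp
// Hypothetical sketch of the AddProduct subscriber. The message members and
// the InsertProduct helper are illustrative; the handler interface name
// differs between nServiceBus versions.
public class AddProductHandler : IHandleMessages<AddProduct>
{
    public IBus Bus { get; set; } // injected by nServiceBus

    public void Handle(AddProduct message)
    {
        InsertProduct(message); // maps the message to the uspInsertProduct call

        if (message.IsLastInBatch)
        {
            // notify the original publisher of the StartBatchUpload message
            Bus.Reply(new BatchStatusChanged { BatchId = message.BatchId });
        }
    }

    private void InsertProduct(AddProduct message)
    {
        // execute the stored procedure against AdventureWorks (omitted)
    }
}
```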

This is a basic example (more validation would be expected), but the workflow is representative. Parsing the Excel file is done quickly, allowing any number of nodes to participate in the resource-intensive work of creating the products. Using a single host with 5 threads, an Excel file with 3,500 rows takes just over 4 minutes to process on a dev laptop. That’s 13 messages per second, which is nothing special, but this is on a single host which is also running the distributor and SQL Server. The processing host has a flat memory profile (consistently around 40MB) and runs at less than 20% CPU. The distributor takes around 15% CPU, and MSMQ another 15%.

For a much larger upload – 12,000 rows – the processing and memory profile is the same, and the upload takes around 14 minutes (~14 messages per second) on the same infrastructure.

Running the Sample

Access to a SQL Server instance with the AdventureWorks sample database installed is a pre-requisite. You’ll need to add the new stored procedure with uspInsertProduct.CREATE.sql. The connection string used by the host is specified in ExcelUpload.Host.exe.config (defaults to unnamed local instance).

You’ll need MSMQ running on all nodes. Queues are specified in configuration and are created by nServiceBus if they don’t exist. An exception is the storage queue for the distributor, which needs to be created manually; this PowerShell snippet will do it:

[Reflection.Assembly]::LoadWithPartialName("System.Messaging")

[System.Messaging.MessageQueue]::Create(".\Private$\distributorStorage", $true)

Unzip the file ExcelUpload.Binaries.zip. You’ll have a batch file – start.cmd – and five subdirectories – Client, Host, Distributor, SampleFiles and Drops. Run start.cmd, check the console screens for errors, then copy one of the Excel files from SampleFiles to Drops. You should see activity in the host, client and distributor console screens, and new rows being added to the [Production].[Product] table.

If you drop the same file twice, the unique key on Products will be violated, so the upload will error. On a fresh install there are under 1000 products, so this resets the table to the default state:

delete [Production].[Product] where ProductID > 999;

Implementation Details

The sample uses the release version of nServiceBus – 1.9 – as the distributor was broken in the 2.0 beta at the time of writing.

The two console apps run the “client” (which monitors a configured file location for an Excel drop, and publishes the StartBatchUpload message), and the “host” (which subscribes to StartBatchUpload and publishes AddProduct and BatchStatusChanged messages). Both use Topshelf so they can run as a console, or can be installed as a Windows service (e.g. ExcelUpload.Client.exe /install).

If you want to run several hosts on the same machine, they will need to use different queues. Copy the whole of the Host directory, and modify ExcelUpload.Host.exe.config to specify a unique queue name:

InputQueue=ExcelUpload.Service.1.InputQueue

Then run ExcelUpload.Host.exe from all the copied locations, and you’ll see the console hosts sharing the message processing when a file is dropped.

BizTalk: xpath: How to work with empty and Null elements in Orchestration

The problem is with the three Empty/Null cases.
Is it possible to separate all these cases in Expression shapes of the Orchestration?
For example, we have a record with a <Name> element, in these flavors:

case: “NonEmpty”

<ns0:People>
<ns0:Name>Name_0</ns0:Name>
<ns0:IsDependent>IsDependent_0</ns0:IsDependent>
..

case: “Empty”

<ns0:People>
<ns0:Name></ns0:Name>
<ns0:IsDependent>IsDependent_0</ns0:IsDependent>
..

case: “OneTag”

<ns0:People>
<ns0:Name/>
<ns0:IsDependent>IsDependent_0</ns0:IsDependent>
..

case: “Null”

<ns0:People>
<!-- NO NODE: <ns0:Name>Name_0</ns0:Name> -->
<ns0:IsDependent>IsDependent_0</ns0:IsDependent>
..


There is no information in MSDN about this [http://msdn.microsoft.com/en-us/library/aa561906(BTS.10).aspx].

I tried to use the xpath() function in two variants: one with "string(xpath_expression)", the second with "xpath_expression".

Expression Shape:
[
System.Diagnostics.Trace.WriteLine("======================================================================");
System.Diagnostics.Trace.WriteLine("[" + System.Convert.ToString(xpath(msg_SourceRoot, "string(/*[local-name()='Root' and namespace-uri()='http://MapTest.IncPerson']/*[local-name()='People' and namespace-uri()='http://MapTest.IncPerson']/*[local-name()='Name' and namespace-uri()='http://MapTest.IncPerson'])")) + "]");

if ( xpath(msg_SourceRoot, "string(/*[local-name()='Root' and namespace-uri()='http://MapTest.IncPerson']/*[local-name()='People' and namespace-uri()='http://MapTest.IncPerson']/*[local-name()='Name' and namespace-uri()='http://MapTest.IncPerson'])") == null)
{ System.Diagnostics.Trace.WriteLine("Name == null"); }

else if ( xpath(msg_SourceRoot, "string(/*[local-name()='Root' and namespace-uri()='http://MapTest.IncPerson']/*[local-name()='People' and namespace-uri()='http://MapTest.IncPerson']/*[local-name()='Name' and namespace-uri()='http://MapTest.IncPerson'])") == "")
{ System.Diagnostics.Trace.WriteLine("Name == Empty"); }

else
{ System.Diagnostics.Trace.WriteLine("Name != null && Name != Empty"); }
System.Diagnostics.Trace.WriteLine("----------------------------------------------------------------------");
System.Diagnostics.Trace.WriteLine("[" + System.Convert.ToString(xpath(msg_SourceRoot, "/*[local-name()='Root' and namespace-uri()='http://MapTest.IncPerson']/*[local-name()='People' and namespace-uri()='http://MapTest.IncPerson']/*[local-name()='Name' and namespace-uri()='http://MapTest.IncPerson']")) + "]");

if ( xpath(msg_SourceRoot, "/*[local-name()='Root' and namespace-uri()='http://MapTest.IncPerson']/*[local-name()='People' and namespace-uri()='http://MapTest.IncPerson']/*[local-name()='Name' and namespace-uri()='http://MapTest.IncPerson']") == null)
{ System.Diagnostics.Trace.WriteLine("Name == null"); }

else if ( xpath(msg_SourceRoot, "/*[local-name()='Root' and namespace-uri()='http://MapTest.IncPerson']/*[local-name()='People' and namespace-uri()='http://MapTest.IncPerson']/*[local-name()='Name' and namespace-uri()='http://MapTest.IncPerson']") == "")
{ System.Diagnostics.Trace.WriteLine("Name == Empty"); }
else
{ System.Diagnostics.Trace.WriteLine("Name != null && Name != Empty"); }
]

I.e., the first section used the "string(xpath_expression)" expression, and the second used "xpath_expression".

Result is:

"NonEmpty" ======================================================================
[Name_0]
Name != null && Name != Empty
———————————————————————-
[Microsoft.XLANGs.Core.Part+ArrayBasedXmlNodeList]
Name != null && Name != Empty

"Empty" ======================================================================
[]
Name == Empty
———————————————————————-
[Microsoft.XLANGs.Core.Part+ArrayBasedXmlNodeList]
Name != null && Name != Empty

"OneTag" ======================================================================
[]
Name == Empty
———————————————————————-
[Microsoft.XLANGs.Core.Part+ArrayBasedXmlNodeList]
Name != null && Name != Empty

"Null" ======================================================================
[]
Name == Empty
———————————————————————-
[]
Name == null

Conclusion:
* I cannot separate the cases "Empty" and "OneTag" (an empty element and a self-closing element are identical in the XML information set).
* I can separate the cases "Empty" and "Null" with "xpath_expression", but not with the "string(xpath_expression)" expression.
* The "Null" case does not throw an exception.
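A pattern that does detect the "Null" case reliably (my suggestion, not part of the original post) is to ask XPath for the node count: count() returns 0 when the element is missing and 1 when it is present, and string() then distinguishes empty from non-empty. A sketch for an Expression shape, with the namespace predicates shortened for readability (the variables would be declared in the orchestration):

```csharp
// Sketch for a BizTalk Expression shape; namespace-uri() predicates omitted
// for readability. count() distinguishes a missing node from a present one;
// string() then tests for emptiness. Note that <ns0:Name></ns0:Name> and
// <ns0:Name/> are identical to XPath, so no expression can tell them apart.
nameXPath = "/*[local-name()='Root']/*[local-name()='People']/*[local-name()='Name']";

nodeCount = System.Convert.ToInt32(xpath(msg_SourceRoot, "count(" + nameXPath + ")"));
nameValue = System.Convert.ToString(xpath(msg_SourceRoot, "string(" + nameXPath + ")"));

if (nodeCount == 0)
{ System.Diagnostics.Trace.WriteLine("Name is missing (Null case)"); }
else if (nameValue == "")
{ System.Diagnostics.Trace.WriteLine("Name is present but empty (Empty/OneTag case)"); }
else
{ System.Diagnostics.Trace.WriteLine("Name = " + nameValue); }
```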

SQL Server Modelling – November CTP

Hot on the heels of one CTP (StreamInsight) and timed to coincide with the opening day of the PDC, Microsoft has just released the latest version of the technology formerly known as ‘Oslo’. SQL Server Modelling, as we must now learn to call it, has several improvements over the previous CTP, released last May. Indeed, the previous CTP was characterised by behind-the-scenes code improvements and rewrites rather than new functionality. It is therefore doubly reassuring to see the slew of new features.

I won’t spend time going into new functionality here because Kraig Brockschmidt (yes, for all you ‘techies of a certain age’… this is the same Kraig B of ‘Inside OLE’ fame, every COM developer’s bible circa 1995) has just posted an excellent summary at http://blogs.msdn.com/modelcitizen/archive/2009/11/17/announcing-the-sql-server-modeling-n-e-oslo-ctp-for-november-2009.aspx together with links to the download and various materials. In any case, it’s getting late here and I need to go to bed. However, things are moving along. I’m looking forward to getting to grips with Quadrant, which sounds like it may, at last, be beginning to make some sense. The M improvements sound great.

Like others, I was deeply alarmed by Doug Purdy’s post (see http://www.douglaspurdy.com/2009/11/10/from-oslo-to-sql-server-modeling/) a week ago, and my initial reaction was similar to that of many commentators (something along the lines of deep groans, punctuated by loud inarticulate screams). We must hope that the new name and product alignment, which appears at so many levels to indicate an abandonment of the deep platform vision many of us believed ‘Oslo’ represented, is just a temporary glitch in Microsoft’s modelling story. SQL Server may be a platform, but most of us think of it as a product. Windows is the platform, and we want Microsoft to do better at supporting modelling across their entire platform, and not just one, admittedly very powerful, corner of their universe. <end of rant>

Microsoft StreamInsight CTP 3 – New Features

Microsoft has just released the November CTP (CTP 3) of StreamInsight. See http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=01c664e4-1c98-4fc8-93ee-08cc039503c1.
I’ve encountered some confusion from more than one person about the status of StreamInsight, so let me explain briefly that a CTP is a Community Technology Preview. It’s not a beta as such. CTPs are closely aligned with the shorter iterative development cycles of agile methodologies. Many Microsoft product teams use them to provide insight into their progress from a much earlier stage than would be the case for a formal beta programme. So, StreamInsight is a work in progress. It hasn’t shipped yet. It isn’t necessarily ‘code complete’. It doesn’t have full documentation. Bits are missing or may not work. Under the terms of the license, you can’t use it in production. For some reason, CTPs seem to cause a lot of confusion, even in the Microsoft community, but the clue is mainly in the word ‘preview’ and partly in the word ‘community’.
So what is new? Well, looking at the help file…
Hopping windows. This is an additional temporal window model which was conspicuous by its absence in CTP 2. It is, very simply, a contiguous ‘batched’ window and will be very familiar to users of other CEP products. At any one time the window provides access to the set of events that are alive within that time frame. After a fixed interval the window is incremented in time to provide access to events in the next fixed period. Because events in StreamInsight are interval-based, an event may appear in more than one time frame. Note that the graphic in the help file appears to be slightly incorrect. Event e1 is unaccountably extended in the first two ‘hops’.
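As a rough illustration, a hopping-window count might be expressed in StreamInsight LINQ along these lines; the exact method names and overloads may well differ in the CTP, so treat this as a sketch rather than working code.

```csharp
// Hedged sketch of a hopping-window count; exact overloads may differ in the CTP.
// Each one-minute window hops forward by 30 seconds, so consecutive windows
// overlap and an interval event can appear in more than one window.
var counts = from win in inputStream.HoppingWindow(
                 TimeSpan.FromMinutes(1),   // window size
                 TimeSpan.FromSeconds(30))  // hop size
             select win.Count();
```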
Joins. Again, highly conspicuous by its absence in CTP 2, CTP 3 now supports joins in LINQ! The fact that we have only just got this fundamental capability (what could be more central to CEP than the ability to specify joins directly in your query language) shows just how early a view Microsoft gave us in CTP 2 (CTP 1, incidentally, was a private preview released earlier this year to company employees only).
User-Defined Aggregates (UDAs). This feature allows developers to create highly customised aggregation operators for aggregating over windows. Developers create UDAs by deriving from one of two classes, depending on whether they need access to timestamp information, and then following a reasonably straightforward pattern involving the creation of an annotated extension method that allows the UDA to be accessed directly within LINQ.
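The pattern looks roughly like the following sketch; the class, attribute, and helper names follow the documented shape but may differ in the CTP, and the geometric-mean aggregate itself is just an invented example.

```csharp
// Hedged sketch of a StreamInsight UDA; base class, attribute, and helper
// names follow the documented pattern but may differ in the CTP.
public class GeometricMean : CepAggregate<double, double>
{
    public override double GenerateOutput(IEnumerable<double> payloads)
    {
        double product = 1.0;
        int count = 0;
        foreach (var p in payloads) { product *= p; count++; }
        return count == 0 ? 0.0 : Math.Pow(product, 1.0 / count);
    }
}

public static class UdaExtensions
{
    // The annotated extension method is what makes the UDA callable in LINQ,
    // e.g.  select win.GeometricMean(e => e.Value)
    [CepUserDefinedAggregate(typeof(GeometricMean))]
    public static double GeometricMean<T>(this CepWindow<T> window,
                                          Expression<Func<T, double>> map)
    {
        // Never executed; the engine translates the call at query-binding time.
        throw CepUtility.DoNotCall();
    }
}
```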
User-Defined Operators (UDOs). UDAs map multiple events to a single scalar value. Like UDAs, UDOs take multiple events as input, but they can return multiple events. They follow a very similar design pattern to UDAs, but return an IEnumerable<T> rather than a single value.
UDAs and UDOs provide a major piece of the CEP jigsaw and greatly extend the power and expressivity offered by LINQ.
Additional Query Improvements. TopK queries now allow lambdas to be used in Take() to project values from ranked events into the payload of each of the selected events. The Group and Apply operators have also been improved.
In CTP 2, .NET methods in expressions included in LINQ were evaluated at query translation time, which wasn’t much use. In CTP 3 they are now evaluated at run time.
CTI improvements. In CTP 2, CTIs could only be enqueued in code in an input adapter. CTP 3 provides support for declarative CTI generation by allowing query template bindings to be configured with an ‘AdvanceTime’ specifier. You can also configure an adapter factory. CTIs are then generated internally by the engine. CTIs are very cool. Declarative CTIs are very, very cool :-)
Management Service API Improvements. A number of improvements have been made to the management service API to support a richer set of views at different levels within the engine.
So, in summary, Microsoft has made significant progress since CTP 2, and CTP 3, as we might hope, looks more like a complete engine. Major gaps in functionality have been filled, and we can begin to see more clearly what capabilities will be included in the first release. Keep up the good work!

Not at PDC? Come to NotAtPDC!

I wasn’t able to make it to PDC this year. Borrowing from Frank La Vigne who is in a similar situation, for me, PDC stands for “Preparing for Delivering Child.” Apparently my wife has an “in” with the State Department, and as such, I’m on the No Fly list until “deployment.”

That being said, MVP Rachel Appel and the wonderful folks at NotAtPDC have asked me to host this week’s festivities. If you’re unfamiliar with NotAtPDC, it’s a perennial virtual conference coordinated for those who weren’t able to make it to PDC this year. It includes industry and INETA speakers and MVPs focusing on real-world scenarios and topics.

Look for more information on the NotAtPDC web site in the coming days.

If you have specific questions, please drop me a line.

Enterprise Integration Pattern Part -6- Envelope Wrapper


Sometimes batches are sent with common header information related to the messages in the batch. This information is useful for routing purposes, and also for processing the messages. It may or may not be useful to client applications or processes other than the one processing the message. Sometimes […]