Thanks, BaltoMSDN
I wanted to thank the Directors, Officers, and Members of BaltoMSDN for the opportunity to speak last night. We had a great time talking about StreamInsight, and the questions that were posed were really, really great.
At PDC, Microsoft announced the rebranding of ADO.NET Data Services as WCF Data Services and of .NET RIA Services as WCF RIA Services. This is not just a product marketing decision – it is also a technical commitment to provide a coherent and unified services story on .NET.
The current implementations of ADO.NET Data Services (previously codenamed ’Astoria’) and .NET RIA Services (previously codenamed ’Alexandria’) are based on WCF; in fact, they are WCF services. Moving forward, future releases will align the technologies so that their features can be mixed and matched as appropriate. We are currently in the early stages of investigating potential areas of deeper integration, such as enabling WCF RIA Services to support an appropriate subset of ADO.NET Data Services’ Open Data Protocol (OData), and making validation features that are currently only available in WCF RIA Services available in other flavors of WCF services as well.
By unifying these services offerings on top of WCF, we are maximizing developer knowledge transfer and skill reuse in both the short term and the long term. A WCF RIA Services developer does not need to know all aspects of WCF to get their service up and running. However, if they want to add a WCF behavior to their WCF RIA Service, or take advantage of WCF’s rich extensibility, they can do so through the unified communications programming model that is WCF.
Thus, as a result of this alignment, .NET will offer several different flavors of WCF services (listed below) that you can choose from based on your particular needs. The important thing to remember is that these options all build on the underlying WCF architecture. They are not binary choices: the .NET service developer is choosing among different entry points into a single distributed programming framework, not among disjoint programming options. We expect many applications will leverage multiple models for building out their functionality, and their developers’ knowledge will easily transfer from one model to another.
· WCF Core Services – Allows full flexibility for building operation-centric services. This includes industry-standard interop, as well as channel and host pluggability. Use this for operation-based services that need to do things such as interop with Java, be consumed by multiple clients, flow transactions, use message-based security, perform advanced messaging patterns like duplex, use transports or channels in addition to HTTP, or host in processes outside of IIS.
· WCF WebHttp/AJAX Services – Best when you are exposing operation-centric HTTP services to be deployed at web scale, or are building a RESTful service and want full control over the URI/format/protocol.
· WCF Data Services – Including a rich implementation of OData for .NET, Data Services are best when you are exposing your data model and associated logic through a RESTful interface.
· WCF Workflow Services – Best for long-running, durable operations, or where the specification and enforcement of operation sequencing is important.
· WCF RIA Services – Best for building an end-to-end Silverlight application.
If you want to learn more about the different WCF services at PDC please check out the following sessions:
Thanks, and looking forward to your feedback!
Just in case you missed it, it’s now been made public. The next release for the BizTalk Server stack will be BizTalk Server 2009 R2.
More information, including a roadmap and details of integration with AppFabric, may be found here.
Cross-posted from the Silverlight Web Services Team Blog.
This morning at PDC ’09, ScottGu announced the availability of the Silverlight 4 Beta. Later today I am going to present the latest improvements around networking and web services, and I’ll link to the full talk as soon as it is available online. In this post I’ll provide a quick summary of today’s announcements, with more detail to follow.
At a high level, we are announcing an exciting alignment between the different web services stacks in Silverlight. ADO.NET Data Services and .NET RIA Services are being rebranded as WCF Data Services and WCF RIA Services to reflect the fact that both technologies are being built as programming models on top of WCF. In a way, this is not really major news; to you as a developer, pretty much everything stays the same, and you can continue using your favorite technology, whether it is straight WCF, WCF RIA Services, or WCF Data Services.
RIA Services and Data Services give you productive patterns for specific kinds of services and applications, hiding away some of the complexity of using WCF directly. The power of WCF is still there for you under the covers, if you need to modify some setting to your liking.
Specifically, within the core WCF model, the Silverlight 4 Beta has support for a brand new binding: NetTcp. This binding lets Silverlight talk to WCF services over a high-performance TCP pipe, using a duplex message pattern. In Silverlight, the binding is built on top of the sockets support available since Silverlight 2, so it inherits the security requirements of the Silverlight sockets API. More specifically, the service needs to be hosted in a given port range (4502 – 4534) and needs to expose a policy responder on port 943. One more thing to be aware of is that the security support and the streamed programming model available for NetTcp in WCF on the desktop framework are not available in the Silverlight 4 Beta.
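To make those server-side requirements concrete, here is a hedged sketch of what a WCF service configuration for a Silverlight-compatible NetTcp endpoint might look like. The service and contract names (`Example.StockTickerService`, `Example.IStockTicker`) are placeholders of mine, not from the post; only the port range and the lack of security support are taken from the text above.

```xml
<!-- Hypothetical server-side WCF configuration sketch; service and contract
     names are placeholders. The port (4502) falls within the Silverlight-
     allowed range 4502-4534, and security is set to None because the
     Silverlight 4 Beta does not support NetTcp security. A policy responder
     must also be reachable on port 943. -->
<system.serviceModel>
  <bindings>
    <netTcpBinding>
      <binding name="silverlightNetTcp">
        <security mode="None" />
      </binding>
    </netTcpBinding>
  </bindings>
  <services>
    <service name="Example.StockTickerService">
      <endpoint address="net.tcp://localhost:4502/StockTickerService"
                binding="netTcpBinding"
                bindingConfiguration="silverlightNetTcp"
                contract="Example.IStockTicker" />
    </service>
  </services>
</system.serviceModel>
```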
We’ll have a lot more content for you coming up soon, including the code from my talk today. If you want to get your hands dirty right away, go get the Silverlight 4 Beta, and then try the steps in this how-to in our MSDN documentation, which has already been updated and shows usage of NetTcp.
More information:
Thanks, and looking forward to your feedback!
Yavor Georgiev
Program Manager, Silverlight
[Source: http://geekswithblogs.net/EltonStoneman]
We’ve had a couple of projects recently with similar requirements to process an Excel file as a batch upload of data. One was a BizTalk project where the FarPoint Spread pipeline component was a good fit; the other was a Web app where we put together a custom parser based on the open-source ExcelDataReader. The custom solution was appropriate for the expected size of upload files, but wouldn’t scale well to deal with large files quickly, so I wanted to look at a distributed alternative using nServiceBus. My sample implementation is on MSDN Code Gallery here: nServiceBus Excel Upload. I’ll look at a comparative BizTalk solution in a future post.
If you haven’t come across nServiceBus, it’s a queue-based messaging framework which is inherently asynchronous. “Scalability and reliability are in its DNA”, and it has some impressive case studies. Using nServiceBus you can set up a simple publish-subscribe architecture between nodes, or a load-balanced architecture with a central distributor. In the distributed version, the upload sample looks like this:
(Note that the diagram represents the bus as a separate entity; in reality it is distributed among the queues of all the nodes. The diagram also omits the distributor.)
In nServiceBus, services are requested by publishing messages onto the bus. Requests are fulfilled by a handler which subscribes to a type of message. The Excel upload sample takes a workbook which contains a set of products and uploads them to the AdventureWorks database. There are three types of message:
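Based on the message names given later in this post (StartBatchUpload, AddProduct, BatchStatusChanged), a minimal sketch of the three message types might look like the following. The property names are illustrative assumptions of mine, not taken from the sample; in NServiceBus, message classes implement the `NServiceBus.IMessage` marker interface.

```csharp
using System;
using NServiceBus;

// Published by the client when a new Excel file appears in the drop folder.
[Serializable]
public class StartBatchUpload : IMessage
{
    public Guid BatchId { get; set; }
    public string FilePath { get; set; }
}

// Published by the host once per parsed row; handled by any available node.
[Serializable]
public class AddProduct : IMessage
{
    public Guid BatchId { get; set; }
    public string Name { get; set; }
    public string ProductNumber { get; set; }
}

// Reports progress of the batch back to interested subscribers.
[Serializable]
public class BatchStatusChanged : IMessage
{
    public Guid BatchId { get; set; }
    public string Status { get; set; }
}
```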
This is a basic example; more validation would be expected in practice, but the workflow is representative. Parsing the Excel file is done quickly, allowing any number of nodes to participate in the resource-intensive work of creating the products. Using a single host with 5 threads, an Excel file with 3,500 rows takes just over 4 minutes to process on a dev laptop. That’s around 13 messages per second, which is nothing special, but this is on a single host which is also running the distributor and SQL Server. The processing host has a flat memory profile (consistently around 40MB) and runs at less than 20% CPU. The distributor takes around 15% CPU, and MSMQ another 15%.
For a much larger upload – 12,000 rows – the processing and memory profile is the same, and the upload takes around 14 minutes (~14 messages per second) on the same infrastructure.
Running the Sample
Access to a SQL Server instance with the AdventureWorks sample database installed is a prerequisite. You’ll need to add the new stored procedure with uspInsertProduct.CREATE.sql. The connection string used by the host is specified in ExcelUpload.Host.exe.config (it defaults to the unnamed local instance).
You’ll need MSMQ running on all nodes. Queues are specified in configuration and are created by nServiceBus if they don’t exist. An exception is the storage queue for the distributor, which needs to be created manually; this PowerShell snippet will do it:
[Reflection.Assembly]::LoadWithPartialName("System.Messaging")
[System.Messaging.MessageQueue]::Create(".\Private$\distributorStorage", $true)
Unzip the file ExcelUpload.Binaries.zip. You’ll have a batch file – start.cmd – and five subdirectories – Client, Host, Distributor, SampleFiles and Drops. Run start.cmd, check the console screens for errors, then copy one of the Excel files from SampleFiles to Drops. You should see activity in the host, client and distributor console screens, and new rows being added to the [Production].[Product] table.
If you drop the same file twice, the unique key on Products will be violated, so the upload will error. On a fresh install there are under 1000 products, so this resets the table to the default state:
delete [Production].[Product] where ProductID > 999;
Implementation Details
The sample uses the release version of nServiceBus – 1.9 – as the distributor was broken in the 2.0 beta at the time of writing.
The two console apps run the “client” (which monitors a configured file location for an Excel drop, and publishes the StartBatchUpload message), and the “host” (which subscribes to StartBatchUpload and publishes AddProduct and BatchStatusChanged messages). Both use Topshelf so they can run as a console, or can be installed as a Windows service (e.g. ExcelUpload.Client.exe /install).
If you want to run several hosts on the same machine, they will need to use different queues. Copy the whole of the Host directory, and modify ExcelUpload.Host.exe.config to specify a unique queue name:
InputQueue="ExcelUpload.Service.1.InputQueue"
Then run ExcelUpload.Host.exe from all the copied locations, and you’ll see the console hosts sharing the message processing when a file is dropped.
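As a rough illustration of what the host side of this looks like, here is a hedged sketch of a handler for the AddProduct message. The handler interface name varies between NServiceBus versions (`IHandleMessages<T>` is the 2.0+ form; version 1.9, which the sample uses, has an earlier equivalent), and `ProductRepository` is a hypothetical helper standing in for the ADO.NET call to the uspInsertProduct stored procedure.

```csharp
using NServiceBus;

// Illustrative only: interface name and ProductRepository are assumptions,
// not taken from the sample code.
public class AddProductHandler : IHandleMessages<AddProduct>
{
    public void Handle(AddProduct message)
    {
        // Insert the product row via the uspInsertProduct stored procedure
        // (ADO.NET plumbing omitted for brevity).
        ProductRepository.Insert(message.Name, message.ProductNumber);
    }
}
```

Because each node runs its own copy of this handler and pulls work from its input queue, adding capacity is a matter of starting more hosts, as described above.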
case: “NonEmpty”
<ns0:People>
<ns0:Name>Name_0</ns0:Name>
<ns0:IsDependent>IsDependent_0</ns0:IsDependent>
..
case: “Empty”
<ns0:People>
<ns0:Name></ns0:Name>
<ns0:IsDependent>IsDependent_0</ns0:IsDependent>
..
case: “OneTag”
<ns0:People>
<ns0:Name/>
<ns0:IsDependent>IsDependent_0</ns0:IsDependent>
..
case: “Null”
<ns0:People>
<!-- NO NODE: <ns0:Name>Name_0</ns0:Name> -->
<ns0:IsDependent>IsDependent_0</ns0:IsDependent>
..
There is no information about this in MSDN [http://msdn.microsoft.com/en-us/library/aa561906(BTS.10).aspx].
I tried to use the xpath() function in two variants: one with "string(xpath_expression)" and one with "xpath_expression".
if ( xpath (msg_SourceRoot, "string(/*[local-name()='Root' and namespace-uri()='http://MapTest.IncPerson']/*[local-name()='People' and namespace-uri()='http://MapTest.IncPerson']/*[local-name()='Name' and namespace-uri()='http://MapTest.IncPerson'])") == null)
{ System.Diagnostics.Trace.WriteLine("Name == null"); }
That is, the first section of each result below uses the "string(xpath_expression)" form, and the second uses the "xpath_expression" form.
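For completeness, the second variant can be sketched as the same expression minus the string() wrapper; without string(), the xpath() function returns a node list (the Microsoft.XLANGs.Core.Part+ArrayBasedXmlNodeList seen in the results) rather than a string:

```
if ( xpath (msg_SourceRoot, "/*[local-name()='Root' and namespace-uri()='http://MapTest.IncPerson']/*[local-name()='People' and namespace-uri()='http://MapTest.IncPerson']/*[local-name()='Name' and namespace-uri()='http://MapTest.IncPerson']") == null)
{ System.Diagnostics.Trace.WriteLine("Name == null"); }
```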
Result is:
“NonEmpty” ======================================================================
[Name_0]
Name != null && Name != Empty
———————————————————————-
[Microsoft.XLANGs.Core.Part+ArrayBasedXmlNodeList]
Name != null && Name != Empty
“Empty”======================================================================
[]
Name == Empty
———————————————————————-
[Microsoft.XLANGs.Core.Part+ArrayBasedXmlNodeList]
Name != null && Name != Empty
“OneTag”======================================================================
[]
Name == Empty
———————————————————————-
[Microsoft.XLANGs.Core.Part+ArrayBasedXmlNodeList]
Name != null && Name != Empty
“Null”======================================================================
[]
Name == Empty
———————————————————————-
[]
Name == null
Hot on the heels of one CTP (StreamInsight), and timed to coincide with the opening day of the PDC, Microsoft has just released the latest version of the technology formerly known as ‘Oslo’. SQL Server Modelling, as we must now learn to call it, has several improvements over the previous CTP release last May. Indeed, the previous CTP was characterised by behind-the-scenes code improvements and rewrites rather than new functionality. It is therefore doubly reassuring to see the slew of new features.
I won’t spend time going into new functionality here because Kraig Brockschmidt (yes, for all you ‘techies of a certain age’, this is the same Kraig B of ‘Inside OLE’ fame, every COM developer’s bible circa 1995) has just posted an excellent summary at http://blogs.msdn.com/modelcitizen/archive/2009/11/17/announcing-the-sql-server-modeling-n-e-oslo-ctp-for-november-2009.aspx together with links to the download and various materials. In any case, it’s getting late here and I need to go to bed. However, things are moving along. I’m looking forward to getting to grips with Quadrant, which sounds like it may, at last, be beginning to make some sense. The M improvements sound great.
Like others, I was deeply alarmed by Doug Purdy’s post (see http://www.douglaspurdy.com/2009/11/10/from-oslo-to-sql-server-modeling/) a week ago, and my initial reaction was similar to that of many commentators (something along the lines of deep groans, punctuated by loud, inarticulate screams). We must hope that the new name and product alignment, which appears at so many levels to indicate an abandonment of the deep platform vision many of us believed ‘Oslo’ represented, is just a temporary glitch in Microsoft’s modelling story. SQL Server may be a platform, but most of us think of it as a product. Windows is the platform, and we want Microsoft to do better at supporting modelling across their entire platform, not just one, admittedly very powerful, corner of their universe. <end of rant>
I wasn’t able to make it to PDC this year. Borrowing from Frank La Vigne who is in a similar situation, for me, PDC stands for “Preparing for Delivering Child.” Apparently my wife has an “in” with the State Department, and as such, I’m on the No Fly list until “deployment.”
That being said, MVP Rachel Appel and the wonderful folks at NotAtPDC have asked me to host this week’s festivities. If you’re unfamiliar with NotAtPDC, it’s a perennial virtual conference coordinated for those who weren’t able to make it to PDC this year. It includes industry and INETA speakers and MVPs focusing on real-world scenarios and topics.
Look for more information on the NotAtPDC web site in the coming days.
If you have specific questions, please drop me a line.
Sometimes batches are sent with common header information that relates to the messages in the batch. This information is useful for routing purposes, and it is also useful when processing the messages. It may or may not be useful to client applications or processes other than the one processing the message. Sometimes […]