April 28th Links: ASP.NET, ASP.NET AJAX, ASP.NET MVC, Silverlight

Here is the latest in my link-listing series.  Also check out my ASP.NET Tips, Tricks and Tutorials page and Silverlight Tutorials page for links to popular articles I’ve done myself in the past.

ASP.NET

  • Displaying the Number of Active Users on an ASP.NET Site: Scott Mitchell continues his excellent series on ASP.NET’s membership, roles, and profile support.  In this article he discusses how to use ASP.NET’s Membership features to estimate and display the number of active users currently visiting a site (a one-line sketch follows this list).

  • ASP.NET Dynamic Data Update: The ASP.NET team last week released an update of the new ASP.NET Dynamic Data feature.  This update adds several new features including cleaner URL support using the same URL routing feature that ASP.NET MVC uses, as well as better confirmation, foreign-key, and template support. 

  • ASP.NET Testing with Ivonna: Travis Illig blogs about a new testing framework named Ivonna that enables unit testing of ASP.NET web forms.
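
A one-line illustration of the Membership API that the first link above builds on (a hedged sketch: the page class and label control are invented for illustration, and the comment assumes the default Membership configuration):

```csharp
using System;
using System.Web.Security;
using System.Web.UI;

public partial class OnlineCount : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Membership counts a user as "online" if their last-activity
        // timestamp falls within the userIsOnlineTimeWindow setting
        // (15 minutes by default), so this is an estimate, not an exact count.
        int onlineUsers = Membership.GetNumberOfUsersOnline();

        // OnlineUsersLabel is a hypothetical Label control on the page.
        OnlineUsersLabel.Text = onlineUsers.ToString();
    }
}
```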

ASP.NET AJAX

  • ASP.NET AJAX UI Templates: Nikhil Kothari from the ASP.NET team has a cool post that shows off a prototype he has been working on that enables clean client-side AJAX templating of UI. 

  • ASP.NET AJAX Control Toolkit TabContainer Theme Gallery: Matt Berseth has another of his excellent posts – this one shows off a bunch of cool themes you can use to style the TabContainer control in the ASP.NET AJAX Control Toolkit.

  • Why do ASP.NET AJAX page methods have to be static? Dave Ward has a useful article that talks about the page methods feature in ASP.NET AJAX, and explains why they are static methods.
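
As a minimal sketch of the feature Dave describes (the page class and method names here are illustrative):

```csharp
using System;
using System.Web.Services;
using System.Web.UI;

public partial class _Default : Page
{
    // Page methods must be static: ASP.NET AJAX invokes them directly
    // without instantiating the page, so there is no view state, no
    // control tree, and no page instance for the method to rely on.
    [WebMethod]
    public static string GetServerTime()
    {
        return DateTime.Now.ToString("F");
    }
}
```

With EnablePageMethods="true" set on the ScriptManager, the client can then call PageMethods.GetServerTime(onSuccess).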

ASP.NET MVC

  • Inversion of Control, ASP.NET MVC and Unit Testing: Fredrik Kalseth has a cool article that talks about the concepts behind inversion of control (IoC) and how you can use this with ASP.NET MVC to better isolate dependencies and enable better unit testing of your code (a minimal constructor-injection sketch follows this list).

  • MVC Contrib Project Update: Eric Hexter blogs about some of the latest updates to the open source MvcContrib project to work with the latest ASP.NET MVC interim source release.

  • Testing Action Results with ASP.NET MVC: Jeremy Skinner blogs about some cool extension method helpers he has added to MvcContrib to enable pretty sweet testing of Controller actions.

  • MVC Membership Starter Kit – 1.2 Release: Troy Goode has posted an update to his excellent MVC Membership Starter Kit.  This version works with the interim ASP.NET MVC source release.
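
As promised above, a minimal constructor-injection sketch in the spirit of Fredrik’s article (the repository interface, Product type, and controller are hypothetical, not taken from his post):

```csharp
using System.Collections.Generic;
using System.Web.Mvc;

public class Product
{
    public string Name { get; set; }
}

// The controller depends on an abstraction rather than a concrete data
// access class, so the dependency can be swapped out.
public interface IProductRepository
{
    IList<Product> GetAll();
}

public class ProductsController : Controller
{
    private readonly IProductRepository _repository;

    // An IoC container supplies the repository at runtime; a unit test can
    // pass in an in-memory fake, so no database is needed to test Index().
    public ProductsController(IProductRepository repository)
    {
        _repository = repository;
    }

    public ActionResult Index()
    {
        return View(_repository.GetAll());
    }
}
```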

Silverlight

Hope this helps,

Scott

Business Rules Engine – I WANT TO BELIEVE

This is a call to anyone who happens upon my blog: through my contact page, let me know why I would want to use the Business Rules Engine.

As I see it, it seems like a lot of work to do something that can be done using SQL, C#, etc. In many cases, you have to know SQL, C#, etc. to implement the BRE anyway.

Why would I add a layer of logic on top of existing code to get the same result?

Thanks for the feedback!

BizTalk 2006 R3 – Where Will BizTalk Go Next?

As many of you probably know by now, Microsoft has announced the next release in the BizTalk Server Family. This release will be called BizTalk Server 2006 R3.

I for one am excited for this release, as I am about almost anything BizTalk related. I am not so much excited about the new adapters and added features that will be available as about what this release represents.

Microsoft and the Connected Systems Division are actively working on Oslo (the next generation of model-driven design, i.e. cool stuff), so for them to announce a new BizTalk release before the Oslo release really shows the level of dedication Microsoft has, not only to the product but to the customers currently using it worldwide.

I keep thinking back to 2005, to the days when Windows Workflow and Windows Communication Foundation were first announced and blog posts were declaring “BizTalk is Dead”. Here we are three years later and BizTalk is stronger than ever!

With the upcoming BizTalk R3 release and the not-so-distant Oslo Platform release, I can only imagine where BizTalk will be a few years from now.


Composite Applications and Distributed "Service Networks"

For those that have managed to avoid the pitfall of SOA for SOA’s sake, composite applications are one of the biggest driving forces behind deployments.  The reason for service enablement in the first place is so that you can compose services into new applications more easily and richly. As a byproduct, you tend to get more abstraction, and therefore control, over the application logic.  Yet there are some dirty little secrets about composite apps – compared with your old-fashioned “monolithic app” they introduce some new challenges:

1. Composite apps are more complex for IT to deploy, manage, and evolve.  The fact that pieces of the composite app may be distributed across servers, platforms, and possibly organizational boundaries creates a need for more sophisticated management solutions than exist today.

2. Composite apps present new challenges around scalability, performance, and reliability.  With classic monolithic apps there are tried-and-true strategies for optimizing apps to scale in demanding environments.  But how can you predict the way a composite app will perform when the underlying services may have been built for much different levels of scale? How can you make a composite app resilient to failure, so that if one service stops working it doesn’t bring the whole app down?

3. Composite apps often require much greater cross-vendor interoperability.  The benefits of reusing services for new apps can’t be limited to services you’ve created on a single vendor platform, since most enterprises live in a fundamentally heterogeneous world.  Therefore composite apps need to be able to easily interop across .NET, Java, and legacy mainframe environments.

Despite what some vendors might tell you (claims that tend to be driven by their monetization goals), solutions to the above problems are challenging.  Customers often avoid these issues with initial SOA deployments, but as they start to pursue enterprise-wide initiatives and develop numerous new composite apps these challenges become more apparent.

However, there are existing architectural patterns for addressing these concerns.  Think about how the world’s largest distributed, service-oriented application – the Internet – addresses the above challenges.  Every second of the day the Internet gains more scale and power as new nodes are added to its fabric.  All nodes on the Internet are virtualized – you don’t need to know machine addresses, or how many boxes any website is running on. You can dynamically add new computing resources without having to take down the Internet, and if any failure occurs the Internet is resilient enough to route traffic around individual points of failure.  The Internet is also fundamentally interoperable – you can add computing nodes from any vendor platform as long as you adhere to standards-based protocols.

I’ll apply these characteristics of the Internet to the world of SOA, and describe something that I’ll call a distributed “Service Network”.  The Service Network should be able to automatically detect when you’ve added new computing nodes and take advantage of the greater power by scaling out (without requiring complex setup or configuration – it should auto-detect that more resources are available for use).  A Service Network should allow you to add new services without having to worry about physically deploying bits on servers or taking down machines – it should be dynamically “virtualized” against the underlying physical resources.  The Service Network should also let you mix and match services across vendors, as long as you stick to open standards.  And most importantly, the Service Network should insulate the composite application from hardware failures, automatically redirecting access to new services somewhere on the network without the app ever skipping a beat.
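
To make that concrete, here’s a minimal sketch of what a Service Network could look like to calling code (every type and name below is hypothetical, invented purely for illustration): callers address a logical service name, and the network resolves it to whatever healthy physical nodes currently exist.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical registry: tracks which physical endpoints currently back a
// logical service name, so callers never deal with machine addresses.
public interface IServiceRegistry
{
    IEnumerable<Uri> Resolve(string serviceName);
    void ReportFailure(string serviceName, Uri endpoint);
}

public class FailoverInvoker
{
    private readonly IServiceRegistry _registry;

    public FailoverInvoker(IServiceRegistry registry)
    {
        _registry = registry;
    }

    // Try each healthy endpoint in turn; when a node fails, report it and
    // route around it so the composite app never skips a beat.
    public T Invoke<T>(string serviceName, Func<Uri, T> call)
    {
        foreach (Uri endpoint in _registry.Resolve(serviceName))
        {
            try
            {
                return call(endpoint);
            }
            catch (Exception)
            {
                _registry.ReportFailure(serviceName, endpoint);
            }
        }
        throw new InvalidOperationException(
            "No healthy endpoints for service: " + serviceName);
    }
}
```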

Last week I wrote about how Greg Leake was going on tour to talk about his latest work on both StockTrader and the new Configuration Service.  Greg’s Configuration Service is one of the first examples of a general-purpose Service Network built using .NET managed code.  These code libraries can be leveraged as part of your SOA projects to develop new composite applications that mix and match services across both .NET and Java.  And you can take advantage of the Config Service with your existing services and apps, requiring only 20 lines of code to reap all of the advantages of the Service Network concept.

For more on this, check out a recent article in Redmond Magazine that describes how Greg’s efforts are helping shape the future of SOA and composite applications.  We’ll be using the feedback we get from this initial work to drive requirements and patterns into our ongoing product development efforts.

And remember, if a vendor tells you that you have to open a PO to “do SOA” and the money isn’t for training, start asking hard questions.

Using BAM for latency tracking in a BizTalk request response scenario

This post will try to explain how BAM tracking can be used in a SOAP-based request-response scenario in BizTalk 2006. It is important to note that some of the issues discussed in the post are specific to the SOAP adapter and are non-issues if the scenario were, for example, to use the WCF adapter or similar.

Describing the scenario

In this case we have a SOAP receive port that receives a request and returns a response. The request is routed to an orchestration that calls three different send ports. These ports then send new requests to back-end systems and return responses (communication with the back-end systems is also SOAP based). The three responses are used to build up the final response, which is then returned on the original receive port.

Our goal is to track the duration between the request and response on each of the ports. The idea is also to find a solution and tracking model that doesn’t have to change if we add or remove ports or add similar processes to track.

Defining and deploying the tracking model

We’ll start by defining our tracking model in Excel. Our activity consists of the following items:

  • InterchangeId (data item)
    As we won’t correlate all the tracking points into one single row (that would break the goal of having one model for all processes; the model would then have to be specific to one process and its specific ports), the interchange id tells us which rows belong together and describe one process.

  • ReceivePortName (data item)
    The name of the receive port.

  • Request (milestone item)
    The time the request was either sent or received (depending on whether we track the request on a receive port or a send port).

  • Response (milestone item)
    The time the response was either sent or received (depending on whether we track the response on a receive port or a send port).

  • SendPortName (data item)
    The name of the send port.

After we have described the model, it’s time to export it to an XML representation and then use the BM tool to deploy it and generate the BAM database infrastructure. You’ll find some nice info on this here.
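
For reference, the deployment step boils down to a couple of BM commands (a hedged example; the definition file name is made up):

```
bm.exe deploy-all -DefinitionFile:PortLatency.xml

rem To remove the activity and its database artifacts again:
bm.exe remove-all -DefinitionFile:PortLatency.xml
```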

Using the Tracking Profile Editor to bind the model

The next step is to bind the model to the implementation using the Tracking Profile Editor. The figure below shows the different properties that were used. Notice that none of the items is bound to the actual orchestration context; all properties are general properties that we track on the ports. This is important, as it gives us the possibility to just add and remove ports to change the tracking.


The next figure shows how the tracking of the request milestone event actually happens on either the RP01 port or on any of the three different send ports! If we developed a new process using other ports, we could just add it here – no new model required.

What about the continuation then?

Our final problem is that unless we somehow correlate our request tracking point with our response tracking point, we’ll end up with the tracking points spread over several different rows. In the example below I’ve marked the request event for the RP01 port and the response event on the same port.

The reason for this is of course that BAM doesn’t have a context for the two tracking points and doesn’t know that they actually belong together. This differs from tracking in an orchestration, where we are always in a context (the context of the orchestration); it’s then easy for BAM to understand that we’d like to view all the tracking points as one row. When tracking on ports it’s different. Continuation helps us tell BAM that we’d like to have a context and correlate these two points.

In our case ServiceID is the perfect candidate for correlating the two points: a request and its response will have the same service id. In another situation we could just as well have used a value from inside the message (say, an invoice id).
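
For those curious about what the Tracking Profile Editor sets up under the hood, here’s a hedged sketch of the same correlation expressed directly with the BAM API (the activity name, connection string, and variables are illustrative; the item names match the model above):

```csharp
using System;
using Microsoft.BizTalk.Bam.EventObservation;

public class PortLatencyTracker
{
    public void TrackRequestResponse(string requestActivityId, string serviceId)
    {
        // DirectEventStream writes synchronously to the BAMPrimaryImport
        // database; a flush threshold of 1 flushes after every event.
        EventStream es = new DirectEventStream(
            "Integrated Security=SSPI;Database=BAMPrimaryImport;Server=.", 1);

        // On the request: start the activity, record the milestone, and
        // register a continuation token -- here the shared ServiceID.
        es.BeginActivity("PortLatency", requestActivityId);
        es.UpdateActivity("PortLatency", requestActivityId,
            "Request", DateTime.UtcNow, "ReceivePortName", "RP01");
        es.EnableContinuation("PortLatency", requestActivityId,
            "SVC_" + serviceId);
        es.EndActivity("PortLatency", requestActivityId);

        // On the response: write against the continuation token instead
        // of the original id; BAM folds the data into the same row.
        es.UpdateActivity("PortLatency", "SVC_" + serviceId,
            "Response", DateTime.UtcNow);
        es.EndActivity("PortLatency", "SVC_" + serviceId);
    }
}
```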

The result is one single row for the request and response on each port. So in our case a complete process (a complete interchange) is shown on four rows (one row for each of the ports). In the example below the first row shows us the complete duration (and the other tracking data) between the request and response to the client. The other rows show the duration of the send ports’ communication with the back-end systems.

This model might not be optimal in another scenario where your processes are more fixed and you can create a tracking model that is more specific to your actual process. But this solution meets our design goal, as we’re now able to just add and remove ports using the Tracking Profile Editor to track new ports in completely new processes without having to go back and change the tracking model.

> [![note](../assets/2008/04/windowslivewriterusingbamforlatencytrackinginabiztalkrequ-9c0bnote-thumb.gif)](../assets/2008/04/windowslivewriterusingbamforlatencytrackinginabiztalkrequ-9c0bnote-2.gif) NOTE: When configuring BAM to track a port, the _[MessageType](http://msdn2.microsoft.com/en-us/library/aa561650.aspx)_ is actually promoted. This causes some problems in combination with SOAP-based ports that have been published using the Web Services Publishing Wizard. Saravana writes about this [here](http://www.digitaldeposit.net/blog/2007/08/soap-adapter-and-biztalk-web-publishing.html), and all his articles on this subject are a must-read when working with SOAP ports. The problem comes down to the fact that the Web Services Publishing Wizard generates code that puts the wrong _DocumentSpecName_ in the message context, which causes the _XmlDisassembler_ to fail (it tricks the _XmlDisassembler_ into looking for a _MessageType_ that doesn’t exist).
>
> This usually isn’t a problem (unless you’d like to use a map on a port), but as BAM will force the port to promote the _MessageType_ based on the _DocumentSpecName_, we’ll have to fix this. Saravana has two solutions to the problem, and I find that the one that replaces the _DocumentSpecName_ with a null value and lets the _XmlDisassembler_ find the _MessageType_ works well.

BizTalk 2006, MSMQ and BizTalk 2002

A little over a year ago I posted an entry about a new feature included in the MSMQ Send Adapter in BizTalk 2006 which provided a way to send messages through MSMQ from BizTalk to legacy applications that required the message contents to be formatted using the ActiveXFormatter, or that simply used the old COM-based MSMQ API.

However, I didn’t mention anything about how to enable the opposite scenario: how to receive and parse a text message in BizTalk 2006 coming from one of those legacy applications. Someone recently asked about this on the BizTalk newsgroup and, as it turns out, BizTalk Server 2002 happens to fall in this category as well.

Fortunately, this scenario is a lot simpler if we assume that the contents of the message will be plain text (say, some string content or XML). The ActiveX message serialization rules dictate that a message with a BodyType of String will simply be serialized on the wire as a set of bytes encoded using UCS-2 (UTF-16LE), without a Byte Order Mark (BOM).

BizTalk can parse UTF-16 contents without problems, but without a BOM the standard disassemblers can’t figure out the encoding of the message on their own.

Fortunately, most of the time you can work around this by setting the message Charset before parsing (for example, using my own FixEncoding pipeline component).
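
The core of such a component is tiny. Here’s a hedged sketch of just the key step (not the full FixEncoding component, and omitting the IBaseComponent/IComponentUI plumbing a real pipeline component needs):

```csharp
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public class SetUtf16Charset
{
    // Runs in a receive pipeline stage before the disassembler.
    public IBaseMessage Execute(IPipelineContext context, IBaseMessage message)
    {
        // ActiveX-serialized String bodies arrive as UTF-16LE with no BOM,
        // so stamp the charset on the body part up front; the downstream
        // disassembler then picks the right decoder without needing a BOM.
        if (message.BodyPart != null)
        {
            message.BodyPart.Charset = "utf-16";
        }
        return message;
    }
}
```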


Slides from my ASP.NET Connections Orlando Talks

Last week I presented at the ASP.NET Connections Conference in Orlando.  I gave a general session talk on Monday, and then two breakout talks later that day.  You can download my slides + samples below:

General Session

The slides for my keynote can be downloaded here.

In the talk I demonstrated how to debug the .NET Framework source code.  You can learn how to set this up with VS 2008 here.

I also demonstrated building a site using the new ASP.NET Dynamic Data support, which you can learn more about here, as well as the new ASP.NET MVC Framework, which you can learn more about here.

I also showed off the new Hard Rock Memorabilia site built with Silverlight 2.  You can try out the Hard Rock application yourself here.  You can learn more about Silverlight from my links page here.

Building .NET Applications with Silverlight

The slides + demos for my Silverlight breakout talk can be downloaded here.

You can learn more about Silverlight from my links page here.  In particular, I recommend reading my tutorial posts here and here.

ASP.NET MVC

The slides + demos for my ASP.NET MVC talk can be downloaded here.

You can learn more about the latest ASP.NET MVC source refresh here.  Stephen Walther also just posted a really good set of slides + demos from his post conference tutorial on ASP.NET MVC here.

Hope this helps,

Scott