WCF: 4 Tenets of Service Oriented Data Validation

Remember the 4 tenets of SOA?  One of them is that Boundaries are explicit.  When somebody sends data to your service it is just like when you cross an international border into another country.  Just a couple of hours drive north of us in Redmond is the border crossing to Canada.  When you cross into Canada or back into the United States you have to stop your car and the border agents do their job.  Their job is to make sure that you have proper documentation and that you aren’t smuggling something (or someone) bad into the country.

Your service has to have a similar border checkpoint and it is at the trust boundary where data enters your “country”.  At the boundary you have to validate the data before it gets down deep into your business logic or database in some invalid form.  The question I want to focus on here is one of design.  Where should the validation be done?

Service Operation

Most of the time we tend to validate data on entry to the service method.  In small applications this approach is manageable but suppose you have a number (call it a Foo) that you use in 15 different services and you always pass it as an integer.  As you review the system you find that some services reject any negative Foo value while others reject any Foo value less than 1.  Your refactoring instinct tells you that it would be a good idea to centralize the Foo validation logic so you don’t end up with a variety of different validation rules.

Take a look at this code.  It works, but could it be better?

public bool SomeOperation(int foo)
{
    if (foo < 0)
    {
        throw new FaultException("Invalid Foo");
    }

    return ProcessTheFoo(foo);
}

The problems with this code are:

  1. The validation rule (foo must not be negative) is contained within this service method.  If you use foo in another method or another service where it has the same semantics, you have to be sure that your validation rule is followed there as well.  Once you code the rule in more than one place you have a system that is difficult to maintain and prone to inconsistent validation.
  2. The response to failed validation is also contained within the service method.  In this case you throw a FaultException, but there are many options for how to fault the service (as I mentioned in my previous post), and once again the way in which you respond to a validation failure becomes spread across your codebase, resulting in a fragile system.

Ron’s 4 Tenets of Service Oriented Data Validation

Services have to send and receive data.  This data flows across the service boundary and therefore must be untrusted until validated.  I’m proposing some new tenets for service orientation.  These tenets describe validation rules.  A validation rule is an expression that tells you whether the data is valid or not.

  1. Validation Rules should be in one place
  2. Validation Rules should apply to all layers of the app
  3. Validation Rule violations should result in internal exceptions which may cause external faults
  4. Validation Rules may be shared between sender and receiver

Validation Rules should be in one place

In your system you have to validate the state of an object.  The validation rules for an object should be written once and only once.  This makes your system more maintainable.

Validation Rules should apply to all layers of the app

Service Oriented Applications consist of the service boundary and lower layers of business logic.  What makes an object valid at one layer should be the same as what makes it valid at another layer.  Lower layers of the system may use internal types which hold data in intermediate states that are not valid according to the rules.  When this is the case you should think of these types as being fundamentally different from the type that the validation rules apply to.

Validation Rule violations should result in internal exceptions which may cause external faults

If validation is called from internal service logic, validation rules should throw exception types appropriate for internal use such as ArgumentNullException or ArgumentOutOfRangeException.  At the service boundary, if you want to propagate the error to the sender, these exceptions must be converted to FaultException or FaultException<TDetail>.
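
A minimal sketch of the pattern might look like this (the fault code name and namespace below are placeholders, and FooValidator is just the centralized validator idea from the earlier tenet, not code from the sample):

// Requires System and System.ServiceModel
// Internal code throws standard argument exceptions...
public static class FooValidator
{
    public static void Validate(int foo)
    {
        if (foo < 0)
        {
            throw new ArgumentOutOfRangeException("foo", "Foo must not be negative");
        }
    }
}

// ...and the service boundary converts them to faults for the sender
public string GetData(int foo)
{
    try
    {
        FooValidator.Validate(foo);
        return "Data: " + foo;
    }
    catch (ArgumentException ex)
    {
        throw new FaultException(
            ex.Message,
            FaultCode.CreateSenderFaultCode("InvalidArgument", "http://tempuri.org/faults"));
    }
}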

Validation Rules may be shared between sender and receiver

If the sender and receiver are able and willing to accept the tighter coupling that comes from sharing assemblies, you can share validation rules between sender and receiver.  If you share validation rules, the sharing should be limited to only the types exposed at the boundary

Scenario: Self-Validating Request Message

Given

  • an entity named Foo which implements the IFoo interface
public interface IFoo
{
    int Data { get; set; }
    string Text { get; set; }
}
  • a DataContract type named GetDataRequest with a property named Foo of type IFoo
  • A class Foo that implements IFoo and does data validation in the property setter (see the sketch after this scenario)
  • A class FooValidator that validates data on behalf of Foo
  • An InternalFoo class that implements internal business logic

When

  • The service is invoked with invalid data

Then

  • WCF deserializes the message body and creates an instance of Foo
  • The Foo property setters run validation code using the FooValidator
  • the FooValidator methods throw ArgumentException
  • and the Foo class converts the ArgumentException to a FaultException<TDetail> and throws
  • The sender catches a FaultException<TDetail>
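
Here is a rough sketch of how those pieces might fit together (ValidationFault stands in for whatever TDetail type you choose; the actual sample may differ):

// Requires System, System.Runtime.Serialization and System.ServiceModel
[DataContract]
public class ValidationFault
{
    [DataMember]
    public string Message { get; set; }
}

public static class FooValidator
{
    public static void ValidateData(int value)
    {
        if (value < 0)
        {
            throw new ArgumentOutOfRangeException("value", "Data must not be negative");
        }
    }
}

[DataContract]
public class Foo : IFoo
{
    private int data;

    [DataMember]
    public int Data
    {
        get { return this.data; }
        set
        {
            try
            {
                // Runs when WCF deserializes the message body
                FooValidator.ValidateData(value);
                this.data = value;
            }
            catch (ArgumentException ex)
            {
                // Convert the internal exception into a fault for the sender
                throw new FaultException<ValidationFault>(
                    new ValidationFault { Message = ex.Message }, ex.Message);
            }
        }
    }

    [DataMember]
    public string Text { get; set; }
}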

Conclusion

Sound complex?  Sure but it is one thing to build a simple example and quite another to show an architecture style that yields some significant benefits.  Of course there are many ways to accomplish these goals – you might have a better way – if so, please share it with me.

Happy Coding!

Ron

http://blogs.msdn.com/rjacobs

Twitter: @ronljacobs

WCF Spike FaultContract, FaultException<TDetail> and Validation

Ready to have some fun? Today I spent the day investigating WCF FaultContracts and FaultException and some best practices for argument validation.  I’m going to do the same in a future post on Workflow Services but I felt it best to really understand the topic from a WCF point of view first.

Investigation Questions

  1. What happens when a service throws an exception?
  2. What happens if a service throws a FaultException?
  3. What happens if the service operation includes a FaultContract and it throws a FaultException<TDetail>?
  4. How can I centralize validation of DataContracts?

Scenario 1: WCF Service throws an exception

Given
  • A service that throws an ArgumentOutOfRangeException
  • There is either no <serviceDebug> element in web.config, or it specifies <serviceDebug includeExceptionDetailInFaults="false" />
When
  • The service is invoked with data that will cause the exception
Then
  • The client proxy will catch a System.ServiceModel.FaultException
  • The FaultCode name will be “InternalServiceFault”
  • The Fault.Reason will be

"The server was unable to process the request due to an internal error.  For more information about the error, either turn on IncludeExceptionDetailInFaults (either from ServiceBehaviorAttribute or from the <serviceDebug> configuration behavior) on the server in order to send the exception information back to the client, or turn on tracing as per the Microsoft .NET Framework 3.0 SDK documentation and inspect the server trace logs."

Conclusions

No surprises here; anyone who has done WCF for more than 5 minutes has probably run into this.  For more information see the WCF documentation Sending and Receiving Faults.  You might be tempted to just turn on IncludeExceptionDetailInFaults, but don’t do it because it can lead to security vulnerabilities.  Instead you need a better strategy for dealing with exceptions, and that means you need to understand FaultException.

Scenario 2: WCF Service throws a FaultException

As we saw in the previous example, WCF already throws a FaultException for you when it encounters an unhandled exception.  The problem in this case is that we want to let the caller know they sent an invalid argument even when they are not debugging.

Given
  • A service that throws a FaultException
public string GetDataFaultException(int data)
{
    if (data < 0)
    {
        // Let the sender know it is their problem
        var faultCode =
            FaultCodeFactory.CreateVersionAwareSenderFaultCode(
                ContractFaultCodes.InvalidArgument.ToString(), ContractConstants.Namespace);

        var faultReason = string.Format(Resources.ValueOutOfRange, "Data", Resources.ValueGreaterThanZero);

        throw new FaultException(faultReason, faultCode);
    }

    return "Data: " + data;
}

When
  • The service is invoked with a data value that will cause an exception
Then
  • The client proxy will catch a System.ServiceModel.FaultException
  • The FaultCode.Name will be “Client” (for SOAP 1.1) or “Sender” (for SOAP 1.2)
  • The Fault.Reason will be “Argument Data is out of range value must be greater than zero”

Conclusions

The main thing is that the client now gets a message saying that things didn’t work and it’s their fault.  They can tell by looking at the FaultException.Code.IsSenderFault property.  In my code you’ll notice a helper I created, FaultCodeFactory.CreateVersionAwareSenderFaultCode, to deal with the differences between SOAP 1.1 and SOAP 1.2.  You will find it in the sample.
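
On the calling side, checking for a sender fault looks something like this (client here stands for the generated proxy for the service above):

try
{
    client.GetDataFaultException(-1);
}
catch (FaultException fault)
{
    if (fault.Code.IsSenderFault)
    {
        // The service is telling us the request itself was bad - fix the input
        Console.WriteLine("Sender fault: {0}", fault.Reason);
    }
    else
    {
        // Something went wrong inside the service
        Console.WriteLine("Receiver fault: {0}", fault.Reason);
    }
}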

Some interesting things I learned while testing this

  • You should specify a Sender fault if you determine that there is a problem with the message and it should not be resubmitted
  • BasicHttpBinding uses SOAP 1.1 while the other bindings use SOAP 1.2, so if you are working with both you need the version-aware sender fault code.
  • FaultException.Action never shows up on the client side so don’t bother with it.
  • FaultException.HelpLink and FaultException.Data (and other properties inherited from Exception) do not get serialized and will show up empty on the client side so you can’t use them for anything.

While this is better than throwing unhandled exceptions, there is an even better way, and that is FaultContracts.

Scenario 3: WCF Service with a FaultContract

Given
  • A service operation with a FaultContract
  • and a service that throws a FaultException<TDetail>
var argValidationFault = new ArgumentValidationFault
{
    ArgumentName = "Data",
    Message = string.Format(Resources.ValueOutOfRange, "Data", Resources.ValueGreaterThanZero),
    HelpLink = GetAbsoluteUriHelpLink("DataHelp.aspx"),
};

throw new FaultException<ArgumentValidationFault>(argValidationFault, argValidationFault.Message, faultCode);

When
  • The service is invoked with a data value that will cause an exception
Then
  • The client proxy will catch a System.ServiceModel.FaultException
  • The FaultCode.Name will be “Client” (for SOAP 1.1) or “Sender” (for SOAP 1.2)
  • The Fault.Reason will be “Argument Data is out of range value must be greater than zero”
  • The fault detail will be of type TDetail, which is included in the generated service reference

Conclusions

This is the best choice.  It allows you to pass all kinds of information to clients and it makes your error handling capability truly first class.  In the sample code one thing I wanted to do was to use the FaultException.HelpLink property to pass a URL to a help page.  Unfortunately I learned that none of System.Exception’s properties are propagated to the sender.  No problem, I just added a HelpLink property to my ArgumentValidationFault type and used FaultException<TDetail>.Detail.HelpLink instead.
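
The detail type itself is just a DataContract; a sketch along these lines (the real sample type may carry more members) is all it takes:

[DataContract]
public class ArgumentValidationFault
{
    [DataMember]
    public string ArgumentName { get; set; }

    [DataMember]
    public string Message { get; set; }

    [DataMember]
    public string HelpLink { get; set; }
}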

Some interesting things I learned while testing this

  • You can specify an interface in the [FaultContract] attribute but it didn’t seem to work on the caller side for catching a FaultException<T>

Recommendation

Use [FaultContract] on your service operations.  You should probably create a base type for your details and perhaps have a few subclasses for special categories of things.  Remember that whatever you expose in the FaultContract is part of your public API and that versioning considerations that apply to other DataContracts apply to your faults as well.
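
As a sketch, wiring the fault contract onto an operation looks like this (IMyService and GetData are placeholder names; ArgumentValidationFault is the detail type sketched above):

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    [FaultContract(typeof(ArgumentValidationFault))]
    string GetData(int data);
}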

Happy Coding!

Ron

http://blogs.msdn.com/rjacobs

Twitter: @ronljacobs

All about the Testing Applications with Microsoft Test Manager Course

The 2-day Testing Applications with Microsoft Test Manager course is currently our most popular Visual Studio 2010 training course. For every public delivery of the course we are usually running 4 or 5 private in-house courses for companies. So why the demand for this course?

Microsoft has invested a significant amount of time and effort focusing on improving the testing capabilities of Visual Studio 2010. Here are a few of the key things you will learn by attending the Testing Applications with Microsoft Test Manager course.

Test Case Management

Learn how to create a Test Plan and configure properties for the test plan including test settings and configurations.  You’ll create Test Suites and link them to your requirements for traceability and reporting. During the course you’ll see how to write effective test cases and organize them for convenience and reporting.

Executing manual tests

Microsoft’s new Manual Test Runner is a purpose built application to allow you to step through your test cases and see how you need to interact with your application under test. You can record an action recording for a test case which will then allow you to fast forward one or more steps on subsequent test executions. This can be a huge time saving feature.

What’s changed? What do I need to retest?

Using the Test Impact Data Collector, you can select two different builds (e.g. 2.1.10.1 & 2.1.12.1) and with the click of a mouse button see exactly what changes (work items) have gone into the newer build. You can also find out which test cases should be executed based on what has changed between the builds. Why run 200 tests when you only need to run 20 of them?

Raising data-rich bugs

One of the challenges a tester has is knowing what information, and how much of it, to include in a bug report. Often testers don’t have time to include as much detail as they would like, and developers invest more time trying to reproduce a bug than they might otherwise need to. Using the Test Runner, you can raise data-rich bug reports that include a wealth of helpful information for the developer. Through the selection of data collectors, much of the relevant information is automatically added to the bug report when the tester raises the bug. This means less time writing the bug and, for developers, possibly a significant reduction in the time it takes to reproduce it.

Automated UI testing

The Mastering Testing with Visual Studio 2010 course will cover creating automated user interface tests called Coded UI tests. Coded UI tests can be generated from manual test action recordings or they can be recorded using the Coded UI test recorder. These tests have the great benefit of being able to be automated completely and included as part of the build process.

Report on test progress

Another time consuming task for testers is often creating the testing reports needed for management or to record the daily progress of your testing. In the course we’ll look at how to use some of the out-of-the-box reports and how you can quickly and easily create your own reports.

While the course covers more than I have listed above, these are some of the most important things you’ll learn how to use by attending the Mastering Testing with Visual Studio 2010 course.

Our next Testing Applications with Microsoft Test Manager course is scheduled for February 16th. Hope to see you there!

QuickLearn’s New Year’s Sale: 15% off Public Classes

Sign up for classroom training before January 31st and receive 15% off your registration.

Kick off the new year with training from QuickLearn. Update your skills, improve at your job, and make getting a raise your New Year’s resolution!

All classes taught at our training center in Redmond, WA (or through remote training from your home or office) are eligible. Register for classroom training using the promotion code: NY2011

Search the course calendar to find and register for your course.

*Sorry, this offer cannot be applied to prior registrations or combined with any other offers. Discounts are not valid at partner locations.

Announcing release of ASP.NET MVC 3, IIS Express, SQL CE 4, Web Farm Framework, Orchard, WebMatrix

I’m excited to announce the release today of several products:

  • ASP.NET MVC 3
  • NuGet
  • IIS Express 7.5
  • SQL Server Compact Edition 4
  • Web Deploy and Web Farm Framework 2.0
  • Orchard 1.0
  • WebMatrix 1.0

The above products are all free. They build upon the .NET 4 and VS 2010 release, and add a ton of additional value to ASP.NET (both Web Forms and MVC) and the Microsoft Web Server stack.

ASP.NET MVC 3

Today we are shipping the final release of ASP.NET MVC 3.  You can download and install ASP.NET MVC 3 here.  The ASP.NET MVC 3 source code (released under an OSI-compliant open source license) can also optionally be downloaded here.

ASP.NET MVC 3 is a significant update that brings with it a bunch of great features.  Some of the improvements include:

Razor

ASP.NET MVC 3 ships with a new view-engine option called “Razor” (in addition to continuing to support/enhance the existing .aspx view engine).  Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow.

Unlike most template syntaxes, with Razor you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type. 

You can learn more about Razor from some of the blog posts I’ve done about it over the last 6 months.

Today’s release includes full code intellisense for Razor (both VB and C#) with Visual Studio 2010 and the free Visual Web Developer 2010 Express.

JavaScript Improvements

ASP.NET MVC 3 enables richer JavaScript scenarios and takes advantage of emerging HTML5 capabilities.

The AJAX and Validation helpers in ASP.NET MVC 3 now use an Unobtrusive JavaScript based approach.  Unobtrusive JavaScript avoids injecting inline JavaScript into HTML, and enables cleaner separation of behavior using the new HTML 5 “data-“ attribute convention (which conveniently works on older browsers as well – including IE6). This keeps your HTML tight and clean, and makes it easier to optionally swap out or customize JS libraries. 

ASP.NET MVC 3 now includes built-in support for posting JSON-based parameters from client-side JavaScript to action methods on the server.  This makes it easier to exchange data across the client and server, and build rich JavaScript front-ends.  We think this capability will be particularly useful going forward with scenarios involving client templates and data binding (including the jQuery plugins the ASP.NET team recently contributed to the jQuery project). 
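
For example, an action method can bind its parameter directly from a JSON payload posted by client script (Product and ProductsController here are hypothetical names, not part of the framework):

public class Product
{
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class ProductsController : Controller
{
    // A body such as {"Name":"Widget","Price":9.95} posted with the
    // application/json content type is bound to the product parameter.
    [HttpPost]
    public ActionResult Save(Product product)
    {
        // ... persist the product ...
        return Json(new { saved = true });
    }
}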

Previous releases of ASP.NET MVC included the core jQuery library.  ASP.NET MVC 3 also now ships the jQuery Validate plugin (which our validation helpers use for client-side validation scenarios).  We are also now shipping and including jQuery UI by default as well (which provides a rich set of client-side JavaScript UI widgets for you to use within projects).

Improved Validation

ASP.NET MVC 3 includes a bunch of validation enhancements that make it even easier to work with data.

Client-side validation is now enabled by default with ASP.NET MVC 3 (using an unobtrusive JavaScript implementation).  Today’s release also includes built-in support for Remote Validation – which enables you to annotate a model class with a validation attribute that causes ASP.NET MVC to perform a remote validation call to a server method when validating input on the client.

The validation features introduced within .NET 4’s System.ComponentModel.DataAnnotations namespace are now supported by ASP.NET MVC 3.  This includes support for the new IValidatableObject interface – which enables you to perform model-level validation, and allows you to provide validation error messages specific to the state of the overall model, or between two properties within the model. 

ASP.NET MVC 3 also supports the improvements made to the ValidationAttribute class in .NET 4.  ValidationAttribute now supports a new IsValid overload that provides more information about the current validation context, such as what object is being validated.  This enables richer scenarios where you can validate the current value based on another property of the model.  We’ve shipped a built-in [Compare] validation attribute  with ASP.NET MVC 3 that uses this support and makes it easy out of the box to compare and validate two property values.
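
As a quick sketch of these validation features together (RegisterUser is a hypothetical view model; the attributes come from System.ComponentModel.DataAnnotations and System.Web.Mvc, and the [Remote] attribute assumes a matching IsUserNameAvailable action exists on a ValidationController):

public class RegisterUser : IValidatableObject
{
    [Required]
    [Remote("IsUserNameAvailable", "Validation")] // validated via a remote call to a server action
    public string UserName { get; set; }

    [Required]
    public string Password { get; set; }

    [Compare("Password")] // new in MVC 3: must match the Password property
    public string ConfirmPassword { get; set; }

    public DateTime? BirthDate { get; set; }

    // Model-level validation via .NET 4's IValidatableObject
    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        if (BirthDate.HasValue && BirthDate.Value > DateTime.Today)
        {
            yield return new ValidationResult("Birth date cannot be in the future", new[] { "BirthDate" });
        }
    }
}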

You can use any data access API or technology with ASP.NET MVC.  This past year, though, we’ve worked closely with the .NET data team to ensure that the new EF Code First library works really well for ASP.NET MVC applications.  These two posts of mine cover the latest EF Code First preview and demonstrate how to use it with ASP.NET MVC 3 to enable easy editing of data (with end to end client+server validation support).  The final release of EF Code First will ship in the next few weeks.

Today we are also publishing the first preview of a new MvcScaffolding project.  It enables you to easily scaffold ASP.NET MVC 3 Controllers and Views, and works great with EF Code-First (and is pluggable to support other data providers).  You can learn more about it – and install it via NuGet today – from Steve Sanderson’s MvcScaffolding blog post.

Output Caching

Previous releases of ASP.NET MVC supported output caching content at a URL or action-method level.

With ASP.NET MVC V3 we are also enabling support for partial page output caching – which allows you to easily output cache regions or fragments of a response as opposed to the entire thing.  This ends up being super useful in a lot of scenarios, and enables you to dramatically reduce the work your application does on the server. 

The new partial page output caching support in ASP.NET MVC 3 enables you to easily re-use cached sub-regions/fragments of a page across multiple URLs on a site.  It supports the ability to cache the content either on the web-server, or optionally cache it within a distributed cache server like Windows Server AppFabric or memcached.
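
One common pattern is to output cache a child action and render it from the parent view with Html.Action (the controller and action names below are illustrative):

public class NewsController : Controller
{
    // Only this fragment of the page is cached, for 60 seconds
    [ChildActionOnly]
    [OutputCache(Duration = 60)]
    public ActionResult LatestHeadlines()
    {
        // ... fetch headlines ...
        return PartialView();
    }
}

A parent view then renders the cached fragment with Html.Action("LatestHeadlines", "News") while the rest of the page is generated normally.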

I’ll post some tutorials on my blog that show how to take advantage of ASP.NET MVC 3’s new output caching support for partial page scenarios in the future.

Better Dependency Injection

ASP.NET MVC 3 provides better support for applying Dependency Injection (DI) and integrating with Dependency Injection/IOC containers.

With ASP.NET MVC 3 you no longer need to author custom ControllerFactory classes in order to enable DI with Controllers.  You can instead just register a Dependency Injection framework with ASP.NET MVC 3 and it will resolve dependencies not only for Controllers, but also for Views, Action Filters, Model Binders, Value Providers, Validation Providers, and Model Metadata Providers that you use within your application.

This makes it much easier to cleanly integrate dependency injection within your projects.
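
At its simplest, you register an IDependencyResolver once at startup. A trivial hand-rolled resolver might look like the sketch below (HomeController and SqlMessageRepository are hypothetical types; a real application would usually delegate to a container such as Ninject, StructureMap, or Unity rather than wiring types by hand):

public class SimpleDependencyResolver : IDependencyResolver
{
    public object GetService(Type serviceType)
    {
        // Build the types you know about; returning null tells MVC to fall
        // back to its default behavior for everything else.
        return serviceType == typeof(HomeController)
            ? new HomeController(new SqlMessageRepository())
            : null;
    }

    public IEnumerable<object> GetServices(Type serviceType)
    {
        return Enumerable.Empty<object>();
    }
}

// In Global.asax.cs:
protected void Application_Start()
{
    // MVC 3 will now use the resolver for controllers, filters, view engines,
    // model binders and more.
    DependencyResolver.SetResolver(new SimpleDependencyResolver());
}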

Other Goodies

ASP.NET MVC 3 includes dozens of other nice improvements that help to both reduce the amount of code you write, and make the code you do write cleaner.  Here are just a few examples:

  • Improved New Project dialog that makes it easy to start new ASP.NET MVC 3 projects from templates.
  • Improved Add->View Scaffolding support that enables the generation of even cleaner view templates.
  • New ViewBag property that uses .NET 4’s dynamic support to make it easy to pass late-bound data from Controllers to Views (see the sketch after this list).
  • Global Filters support that allows specifying cross-cutting filter attributes (like [HandleError]) across all Controllers within an app.
  • New [AllowHtml] attribute that allows for more granular request validation when binding form posted data to models.
  • Sessionless controller support that allows fine grained control over whether SessionState is enabled on a Controller.
  • New ActionResult types like HttpNotFoundResult and RedirectPermanent for common HTTP scenarios.
  • New Html.Raw() helper to indicate that output should not be HTML encoded.
  • New Crypto helpers for salting and hashing passwords.
  • And much, much more
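
To illustrate a couple of the items above, here is a minimal sketch of ViewBag and global filters in use (HomeController is a placeholder):

// In Global.asax.cs - applies [HandleError] behavior to every controller
protected void Application_Start()
{
    GlobalFilters.Filters.Add(new HandleErrorAttribute());
}

public class HomeController : Controller
{
    public ActionResult Index()
    {
        // ViewBag is dynamic - no view model class or ViewData casts required
        ViewBag.Message = "Welcome to ASP.NET MVC 3!";
        return View();
    }
}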

Learn More about ASP.NET MVC 3

We will be posting lots of tutorials and samples on the http://asp.net/mvc site in the weeks ahead.  Below are two good ASP.NET MVC 3 tutorials available on the site today:

We’ll post additional ASP.NET MVC 3 tutorials and videos on the http://asp.net/mvc site in the future. Visit it regularly to find new tutorials as they are published.

How to Upgrade Existing Projects

ASP.NET MVC 3 is compatible with ASP.NET MVC 2 – which means it should be easy to update existing MVC projects to ASP.NET MVC 3. 

The new features in ASP.NET MVC 3 build on top of the foundational work we’ve already done with the MVC 1 and MVC 2 releases – which means that the skills, knowledge, libraries, and books you’ve acquired are all directly applicable with the MVC 3 release.  MVC 3 adds new features and capabilities – it doesn’t obsolete existing ones.

You can upgrade existing ASP.NET MVC 2 projects by following the manual upgrade steps in the release notes.  Alternatively, you can use this automated ASP.NET MVC 3 upgrade tool to easily update your  existing projects.

Localized Builds

Today’s ASP.NET MVC 3 release is available in English.  We will be releasing localized versions of ASP.NET MVC 3 (in 9 languages) in a few days.  I’ll blog pointers to the localized downloads once they are available.

NuGet

Today we are also shipping NuGet – a free, open source, package manager that makes it easy for you to find, install, and use open source libraries in your projects. It works with all .NET project types (including ASP.NET Web Forms, ASP.NET MVC, WPF, WinForms, Silverlight, and Class Libraries).  You can download and install it here.

NuGet enables developers who maintain open source projects (for example, .NET projects like Moq, NHibernate, Ninject, StructureMap, NUnit, Windsor, Raven, Elmah, etc) to package up their libraries and register them with an online gallery/catalog that is searchable.  The client-side NuGet tools – which include full Visual Studio integration – make it trivial for any .NET developer who wants to use one of these libraries to easily find and install it within the project they are working on.

NuGet handles dependency management between libraries (for example: library1 depends on library2). It also makes it easy to update (and optionally remove) libraries from your projects later. It supports updating web.config files (if a package needs configuration settings). It also allows packages to add PowerShell scripts to a project (for example: scaffold commands). Importantly, NuGet is transparent and clean – and does not install anything at the system level. Instead it is focused on making it easy to manage libraries you use with your projects.

Our goal with NuGet is to make it as simple as possible to integrate open source libraries within .NET projects. 

NuGet Gallery

This week we also launched a beta version of the http://nuget.org web-site – which allows anyone to easily search and browse an online gallery of open source packages available via NuGet.  The site also now allows developers to optionally submit new packages that they wish to share with others.  You can learn more about how to create and share a package here.

There are hundreds of open-source .NET projects already within the NuGet Gallery today.  We hope to have thousands there in the future.

IIS Express 7.5

Today we are also shipping IIS Express 7.5.  IIS Express is a free version of IIS 7.5 that is optimized for developer scenarios.  It works for both ASP.NET Web Forms and ASP.NET MVC project types.

We think IIS Express combines the ease of use of the ASP.NET Web Server (aka Cassini) currently built into Visual Studio with the full power of IIS.  Specifically:

  • It’s lightweight and easy to install (less than 5Mb download and a quick install)
  • It does not require an administrator account to run/debug applications from Visual Studio
  • It enables a full web-server feature set – including SSL, URL Rewrite, and other IIS 7.x modules
  • It supports and enables the same extensibility model and web.config file settings that IIS 7.x supports
  • It can be installed side-by-side with the full IIS web server as well as the ASP.NET Development Server (they do not conflict at all)
  • It works on Windows XP and higher operating systems – giving you a full IIS 7.x developer feature-set on all Windows OS platforms

IIS Express (like the ASP.NET Development Server) can be quickly launched to run a site from a directory on disk.  It does not require any registration/configuration steps. This makes it really easy to launch and run for development scenarios.  You can also optionally redistribute IIS Express with your own applications if you want a lightweight web-server.  The standard IIS Express EULA now includes redistributable rights.

Visual Studio 2010 SP1 adds support for IIS Express.  Read my VS 2010 SP1 and IIS Express blog post to learn more about what it enables. 

SQL Server Compact Edition 4

Today we are also shipping SQL Server Compact Edition 4 (aka SQL CE 4).  SQL CE is a free, embedded, database engine that enables easy database storage.

No Database Installation Required

SQL CE does not require you to run a setup or install a database server in order to use it.  You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine.  No setup or extra security permissions are required for it to run. You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment.

SQL CE runs in-memory within your ASP.NET application and will start-up when you first access a SQL CE database, and will automatically shutdown when your application is unloaded.  SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET Applications.

Works with Existing Data APIs

SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax.  This means you can use existing data APIs like ADO.NET, as well as use higher-level ORMs like Entity Framework and NHibernate with SQL CE.  This enables you to use the same data programming skills and data APIs you know today.
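
For example, plain ADO.NET code works against a SQL CE database file in App_Data via the System.Data.SqlServerCe provider (the database file and table below are made up for illustration):

using (var connection = new SqlCeConnection(@"Data Source=|DataDirectory|\Products.sdf"))
{
    connection.Open();

    using (var command = new SqlCeCommand("SELECT Name, Price FROM Products", connection))
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            Console.WriteLine("{0}: {1}", reader["Name"], reader["Price"]);
        }
    }
}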

Supports Development, Testing and Production Scenarios

SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios.  With the SQL CE 4 release we’ve done the engineering work to ensure that SQL CE won’t crash or deadlock when used in a multi-threaded server scenario (like ASP.NET).  This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments.  Starting with SQL CE 4 you can use it in a web-server as well.

There are no license restrictions with SQL CE.  It is also totally free.

Tooling Support with VS 2010 SP1

Visual Studio 2010 SP1 adds support for SQL CE 4 and ASP.NET Projects.  Read my VS 2010 SP1 and SQL CE 4 blog post to learn more about what it enables. 

Web Deploy and Web Farm Framework 2.0

Today we are also releasing Microsoft Web Deploy V2 and Microsoft Web Farm Framework V2.  These services provide a flexible and powerful way to deploy ASP.NET applications onto either a single server, or across a web farm of machines.

You can learn more about these capabilities from my previous blog posts on them:

Visit the http://iis.net website to learn more and install them. Both are free.

Orchard 1.0

Today we are also releasing Orchard v1.0. 

Orchard is a free, open source, community based project.  It provides Content Management System (CMS) and Blogging System support out of the box, and makes it possible to easily create and manage web-sites without having to write code (site owners can customize a site through the browser-based editing tools built-into Orchard).  Read these tutorials to learn more about how you can setup and manage your own Orchard site.

Orchard itself is built as an ASP.NET MVC 3 application using Razor view templates (and by default uses SQL CE 4 for data storage).  Developers wishing to extend an Orchard site with custom functionality can open and edit it as a Visual Studio project – and add new ASP.NET MVC Controllers/Views to it. 

WebMatrix 1.0

WebMatrix is a new, free, web development tool from Microsoft that provides a suite of technologies that make it easier to enable website development.  It enables a developer to start a new site by browsing and downloading an app template from an online gallery of web applications (which includes popular apps like Umbraco, DotNetNuke, Orchard, WordPress, Drupal and Joomla).  Alternatively it also enables developers to create and code web sites from scratch.

WebMatrix is task focused and helps guide developers as they work on sites.  WebMatrix includes IIS Express, SQL CE 4, and ASP.NET – providing an integrated web-server, database and programming framework combination.  It also includes built-in web publishing support which makes it easy to find and deploy sites to web hosting providers.

You can learn more about WebMatrix from my Introducing WebMatrix blog post this summer.  Visit http://microsoft.com/web to download and install it today.

Summary

I’m really excited about today’s releases – they provide a bunch of additional value that makes web development with ASP.NET, Visual Studio and the Microsoft Web Server a lot better. 

A lot of folks worked hard to share this with you today. On behalf of my whole team – we hope you enjoy them!

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

WF4: How Tracking Helped Me Write a Better Unit Test

This morning I’ve been working on how to support cancelling a workflow via a CancellationToken.  The details of that are not important right now but what is really cool is how I was able to test this.

Scenario: Caller requests Cancellation via a CancellationToken and the UnhandledExceptionAction is Cancel

Given

  • An activity that contains a CancellationScope
  • The CancellationScope body has an activity that will create a bookmark and go idle
  • The CancellationScope has a CancelHandler with a WriteLine that has a DisplayName "CancelHandlerWriteLine"

When

  • The caller invokes the workflow asynchronously as a task with a CancellationToken
  • and in the idle callback calls CancellationTokenSource.Cancel

Then

  • A TaskCanceledException is thrown
  • The WorkflowApplication is canceled
  • The CancellationScope CancelHandler is invoked

Test Challenges

  • How can I wait until the cancel is completed after handling the exception before verifying?
  • How will I verify that the CancelHandler is invoked?

Solution

To wait until the cancel is completed after handling the exception, I simply create an AutoResetEvent (line 18) and signal it from the WorkflowApplication.Completed event callback (line 19).  Then, before verifying the tracking data, I wait for this event (line 41).

To verify that the cancel handler was invoked I use the Microsoft.Activities.UnitTesting.Tracking.MemoryTrackingParticipant.  This allows me to capture the tracking information into a collection that I can search using AssertTracking.Exists to verify that the activity named CancelHandlerWriteLine (held in the ExpectedCancelWriteLine constant) entered the Closed state.

   1: [TestMethod]
   2: public void ActivityIsCanceledViaTokenShouldInvokeCancelHandler()
   3: {
   4:     const string ExpectedCancelWriteLine = "CancelHandlerWriteLine";
   5:     var workflowApplication =
   6:         new WorkflowApplication(
   7:             new CancellationScope
   8:                 {
   9:                     Body = new TestBookmark<int> { BookmarkName = "TestBookmark" },
  10:                     CancellationHandler = new WriteLine { DisplayName = ExpectedCancelWriteLine }
  11:                 });
  12: 
  13:     // Capture tracking events in memory
  14:     var trackingParticipant = new MemoryTrackingParticipant();
  15:     workflowApplication.Extensions.Add(trackingParticipant);
  16: 
  17:     // Use this event to wait until the cancel is completed
  18:     var completedEvent = new AutoResetEvent(false);
  19:     workflowApplication.Completed = args => completedEvent.Set();
  20: 
  21:     try
  22:     {
  23:         var tokenSource = new CancellationTokenSource();
  24: 
  25:         // Run the activity and cancel in the idle callback
  26:         var task = workflowApplication.RunEpisodeAsync(
  27:             (args, bn) =>
  28:                 {
  29:                     Debug.WriteLine("Idle callback - cancel");
  30:                     tokenSource.Cancel();
  31:                     return false;
  32:                 },
  33:             UnhandledExceptionAction.Cancel,
  34:             TimeSpan.FromMilliseconds(1000),
  35:             tokenSource.Token);
  36: 
  37:         // Exception is thrown when Wait() or Result is accessed
  38:         AssertHelper.Throws<TaskCanceledException>(task);
  39: 
  40:         // Wait for the workflow to complete the cancel
  41:         completedEvent.WaitOne(this.DefaultTimeout);
  42: 
  43:         // Verify that the cancel handler was invoked
  44:         AssertTracking.Exists(
  45:             trackingParticipant.Records, ExpectedCancelWriteLine, ActivityInstanceState.Closed);
  46:     }
  47:     finally
  48:     {
  49:         // Write the tracking records to the test output
  50:         trackingParticipant.Trace();
  51:     }
  52: }

When I run this test I also get the Tracking info in the Test Results along with any Debug.WriteLine output to help me sort out what is happening.  The tracking data is nicely formatted thanks to extension methods in Microsoft.Activities.UnitTesting.Tracking that provide a Trace method for each type of tracking record which produces human readable formatting.

WaitForWorkflow waiting for workflowBusy - check for cancel
Checking cancel token
System.Activities.WorkflowApplicationIdleEventArgs
    Bookmarks count 1 (TestBookmark)
Idle callback - cancel
Checking cancel token from idle handler
Cancel requested canceling workflow
WaitForWorkflow workflowBusy is signaled - check for cancel
Checking cancel token
Cancel requested canceling workflow
WorkflowApplication.Cancel
this.CancellationToken.ThrowIfCancellationRequested()

*** Tracking data follows ***

WorkflowInstance for Activity <CancellationScope> state is <Started> at 04:13:53.7852
Activity <null> is scheduled child activity <CancellationScope> at 04:13:53.7852
Activity <CancellationScope> state is Executing at 04:13:53.7852
Activity <CancellationScope> is scheduled child activity <TestBookmark> at 04:13:53.7852
Activity <TestBookmark> state is Executing at 04:13:53.7852
{
    Arguments
        BookmarkName: TestBookmark
}
WorkflowInstance for Activity <CancellationScope> state is <Idle> at 04:13:53.7852
Activity <null> cancel is requested for child activity <CancellationScope> at 04:13:53.7852
Activity <CancellationScope> cancel is requested for child activity <TestBookmark> at 04:13:53.7852
Activity <TestBookmark> state is Canceled at 04:13:53.8008
{
    Arguments
        BookmarkName: TestBookmark
        Result: 0
}
Activity <CancellationScope> is scheduled child activity <CancelHandlerWriteLine> at 04:13:53.8008
Activity <CancelHandlerWriteLine> state is Executing at 04:13:53.8008
{
    Arguments
        Text: 
        TextWriter: 
}
Activity <CancelHandlerWriteLine> state is Closed at 04:13:53.8008
{
    Arguments
        Text: 
        TextWriter: 
}
Activity <CancellationScope> state is Canceled at 04:13:53.8008
WorkflowInstance for Activity <CancellationScope> state is <Canceled> at 04:13:53.8008