This of course is something I believe pretty intensely, having been a trainer for the
last 10 years or so.  Scott and Brian (and Tomas has
sort of piled on) have posted a couple of entries on how they think Windows Workflow
Foundation (WF) might be too complex.  In general I think they are pretty much
totally wrong (isn’t disagreement and discourse the cool thing about the web? 😉
).  First I will address their particular points directly – and then give my
overall assessment of WF.  And, in full disclosure, although sometimes people
mistake me for an MS employee – I am *not*.

First Brian’s points (not to pick on him – but since he was first to post I’ll
respond to his main points first):

1) On properties.  So here Brian has an interesting point: events are
displayed in the properties grid in a way that is not segregated from “properties”
the way other .NET objects’ events are.  But the thing he misses is that
*most* events are (and should be) backed by DependencyProperties.  Because they are DependencyProperties
they can be bound using ActivityBind, and from that point of view they really do belong in the
same part of the property grid (since you should be binding an event of one Activity
to another Activity – you aren’t using the *normal* += syntax when binding those two
objects together).  This point is pretty minor IMO and so is my response. 
When writing WF properties and events there really isn’t much difference in the syntax.
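To make the binding point concrete, here’s a minimal sketch of how a custom activity registers an event as a DependencyProperty (the activity and event names are hypothetical) – it’s that registration that makes the event a valid ActivityBind target, which is why it sits alongside data properties in the grid:

```csharp
using System;
using System.Workflow.ComponentModel;

// Hypothetical activity showing the pattern: the event is backed by a
// DependencyProperty, just like a bindable data property would be.
public class NotifyActivity : Activity
{
    public static readonly DependencyProperty CompletedEvent =
        DependencyProperty.Register("Completed", typeof(EventHandler), typeof(NotifyActivity));

    // The event wrapper routes add/remove through the DependencyProperty store.
    public event EventHandler Completed
    {
        add { base.AddHandler(CompletedEvent, value); }
        remove { base.RemoveHandler(CompletedEvent, value); }
    }

    protected override ActivityExecutionStatus Execute(ActivityExecutionContext context)
    {
        // Raise the event through the DependencyProperty so any ActivityBind
        // wiring is honored.
        base.RaiseEvent(CompletedEvent, this, EventArgs.Empty);
        return ActivityExecutionStatus.Closed;
    }
}
```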

2) Code Conditions.  So why do CodeConditions have an EventArgs?  VB.NET.  VB.NET
cannot deal with delegates that have a return value (unless this has been fixed and
I didn’t know about it).  In general he misses the point here as well. 
In *most* real WF applications you don’t want to be using CodeCondition – you want
to be using declarative rule conditions (RuleConditionReference), since you’ll want the flexibility of WF rule
execution rather than hardcoding conditions in code.  Also – since the *preferred*
model of WF is to have as little code as possible in your root workflow (which
enables more dynamic scenarios as well as XAML creation of workflows) – using CodeCondition
is really just for demos and such IMO.
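For reference, this is roughly what the CodeCondition handler shape looks like – note the void return, with the result flowing out through ConditionalEventArgs, which is exactly what keeps it VB.NET-friendly (the workflow class and field below are hypothetical):

```csharp
using System;
using System.Workflow.Activities;

public class OrderWorkflow : SequentialWorkflowActivity
{
    // Hypothetical workflow state the condition reads.
    private double orderTotal;

    // CodeCondition handler: no return value -- the answer goes out
    // through e.Result so VB.NET can consume the same delegate shape.
    private void IsLargeOrder(object sender, ConditionalEventArgs e)
    {
        e.Result = this.orderTotal > 1000;
    }
}
```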

3) HandleExternalEvent/CallExternalMethod.  Granted, communication between the
Host and running workflows isn’t perhaps the best part of WF.  But the barrier
is there for a good reason – because the model of WF supports persisting workflow
instances.  If a reference to an object could be passed directly into a workflow
instance, that could cause issues when using persistence.  Now – is using HEE/CEM
complex?  Perhaps at first, but once you get used to it – it really isn’t
all that complex – *and* generally on a particular WF project you’ll have your interface
and types defined pretty early in the process and then like magic – you
are done and can get on to writing other Activities. 
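A sketch of that early interface-definition step, with a hypothetical contract: once an interface like this exists, CallExternalMethod handles the workflow-to-host direction and HandleExternalEvent the host-to-workflow direction:

```csharp
using System;
using System.Workflow.Activities;

// Event args for host -> workflow events must derive from
// ExternalDataEventArgs and be serializable.
[Serializable]
public class OrderEventArgs : ExternalDataEventArgs
{
    public string OrderId;

    public OrderEventArgs(Guid instanceId, string orderId)
        : base(instanceId)
    {
        this.OrderId = orderId;
    }
}

// Hypothetical local-service contract for host <-> workflow communication.
[ExternalDataExchange]
public interface IOrderService
{
    // Workflow -> host: bound to a CallExternalMethod activity.
    void SubmitOrder(string orderId);

    // Host -> workflow: the host raises this, a HandleExternalEvent
    // activity in the workflow listens for it.
    event EventHandler<OrderEventArgs> OrderApproved;
}
```

The host registers an implementation of this interface with the ExternalDataExchangeService, and from then on the activities do the plumbing.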

Also – how much more complex is that than defining a WCF (Indigo) ServiceContract
interface and corresponding message types, and the configuration entries for the bindings,
etc. etc. etc.?  I think it is just about as complex – which really tells me
it is about as complex as it needs to be to be generic.

Now – you also have to remember that HEE/CEM is just *one* way to communicate between
the host and workflow instances.  The real communication mechanism (which
HEE and the ExternalDataExchangeService use) is the WF workflow queuing mechanism. 
So if HEE/CEM is too complex, or not complex enough (which is actually what I’ve run
into a number of times), then you can create Activities that listen on application-specific
queues, and create services that Activities can use to communicate with
the host.  The big thing to remember is that this indirect communication is essential
for the WF model to succeed.
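As a sketch of what “listening on an application-specific queue” looks like (the activity and queue names are hypothetical – this is the same mechanism the ExternalDataExchangeService is built on):

```csharp
using System;
using System.Workflow.ComponentModel;
using System.Workflow.Runtime;

// Hypothetical activity that creates a named WorkflowQueue and goes idle
// until the host enqueues an item into it.
public class WaitForAppDataActivity : Activity
{
    private const string QueueName = "AppDataQueue";  // assumed queue name

    protected override ActivityExecutionStatus Execute(ActivityExecutionContext context)
    {
        WorkflowQueuingService qs = context.GetService<WorkflowQueuingService>();
        WorkflowQueue queue = qs.CreateWorkflowQueue(QueueName, true);
        queue.QueueItemAvailable += this.OnItemAvailable;
        return ActivityExecutionStatus.Executing;  // stay running (and persistable) until data arrives
    }

    private void OnItemAvailable(object sender, QueueEventArgs e)
    {
        // The sender for this callback is the ActivityExecutionContext.
        ActivityExecutionContext context = (ActivityExecutionContext)sender;
        WorkflowQueuingService qs = context.GetService<WorkflowQueuingService>();
        object data = qs.GetWorkflowQueue(e.QueueName).Dequeue();
        // ...use the data, then complete the activity...
        context.CloseActivity();
    }
}
```

The host side then just calls WorkflowInstance.EnqueueItem("AppDataQueue", data, null, null) to deliver the message to the idle instance.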

Also – he fails to mention the ability (in a very simple workflow) to pass parameters
into a Workflow Instance and get parameters back out.  That is probably the mechanism
you’d use in a “WF-lite” kind of WF usage.
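That parameter mechanism is about as simple as WF hosting gets – a rough sketch, where MyWorkflow and its OrderId/Result properties are hypothetical:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Workflow.Runtime;

class Program
{
    static void Main()
    {
        using (WorkflowRuntime runtime = new WorkflowRuntime())
        {
            AutoResetEvent done = new AutoResetEvent(false);
            runtime.WorkflowCompleted += delegate(object sender, WorkflowCompletedEventArgs e)
            {
                // Output parameters are just the workflow's public properties.
                Console.WriteLine(e.OutputParameters["Result"]);
                done.Set();
            };

            // Input parameters map by name onto public properties
            // of the workflow class.
            Dictionary<string, object> args = new Dictionary<string, object>();
            args["OrderId"] = "12345";

            WorkflowInstance instance = runtime.CreateWorkflow(typeof(MyWorkflow), args);
            instance.Start();
            done.WaitOne();
        }
    }
}
```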

Now on to Scott:

Scott doesn’t really have complaints (ok, a few about the WF designer – I’m leaving
those alone for brevity) – but he does have a list of “gotchas”.  Now – I would
argue in return that every runtime (Java, .NET, ASP.NET, WCF, .NET Remoting, BizTalk)
has “gotchas” – which generally relate to understanding the model of that particular
runtime or library.  That being said – here are my responses to his points:

1) Spawned execution contexts.  These are really super important in terms of
the model.  What part of the model?   Compensation for one.  If
each child inside of a While Activity didn’t have a persistable context – then it
would be impossible to come back to that activity at some time in the future (could
be years – that is what the model is meant to support) and tell that activity to compensate,
since that activity would have no state to remember what to compensate.  Also
in persistence it is vital to remember all the activities that have executed. 
So – yes, in general you need to be really careful that your Activities are all serializable. 
This is true in other runtimes as well (like when you store objects in out-of-proc
Session state in ASP.NET, or work with .NET Remoting) – so it really isn’t anything new for
most .NET developers.
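In practice that mostly means auditing the types you hold in workflow fields – a quick sketch (the type is hypothetical):

```csharp
using System;

// Any object held in workflow state must survive persistence; marking the
// type [Serializable] is the minimum requirement.
[Serializable]
public class OrderInfo
{
    public string OrderId;
    public decimal Total;

    // Members that can't serialize (connections, handles, etc.) should be
    // marked [NonSerialized] and re-acquired after the workflow is reloaded.
    [NonSerialized]
    private IDisposable transientResource;
}
```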

2) See #1

3) So this is something I’ve argued about on the WF forums – how is this different
than any random .NET code that uses System.Transactions?  It isn’t.  If
you use more than one connection to SQL Server 2005 you get a DTC transaction. 
End of story – it happens in C#, VB.NET, ASP.NET, WCF and WF.  So the issue here
is that if you’re using the *out-of-box* Tracking and Persistence services – *and* connecting
to a database – you get DTC transactions.  Just like if you used three objects
in a .NET library and all of them used different connection objects.  The OOB
Tracking and Persistence services are supposed to be reference implementations to get your WF application
started – and if your application works with them – super, use the OOB implementations. 
How to get rid of the DTC?  Build your own Tracking and Persistence service that
uses a common connection (like the OOB ones do if you configure them that way) with your own code, and
you now get local SQL Server transactions – magic if you understand the model.
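The “common connection” configuration that parenthetical alludes to can be sketched in host code with the OOB SharedConnectionWorkflowCommitWorkBatchService – assuming connStr points at the WF tracking/persistence store:

```csharp
using System.Workflow.Runtime;
using System.Workflow.Runtime.Hosting;
using System.Workflow.Runtime.Tracking;

class HostSetup
{
    // Sketch: point the OOB tracking and persistence services at one shared
    // connection, so their batched work commits in a local SQL transaction
    // instead of being promoted to DTC.
    static WorkflowRuntime CreateRuntime(string connStr)
    {
        WorkflowRuntime runtime = new WorkflowRuntime();
        runtime.AddService(new SharedConnectionWorkflowCommitWorkBatchService(connStr));
        runtime.AddService(new SqlWorkflowPersistenceService(connStr));
        runtime.AddService(new SqlTrackingService(connStr));
        return runtime;
    }
}
```

A fully custom tracking or persistence service takes the same idea further when the OOB implementations don’t fit.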

So what’s the real upshot here?  Are Scott and Brian right or am I right? 
I think I’m right of course 😉 And here’s why – I think I understand the WF Model.   Coming
from BizTalk has made me understand the power of this model (since BizTalk orchestrations
have the ability to do many of the same things WF workflows can do).  Design-time
and runtime visibility, the ability to model many different kinds of short and long-running
processes (I can go on and on about the features) – are IMO really powerful ways
to model real world processes.

You have to remember the charter of the WF Team – they aren’t just building a
visual way to write random .NET code – they are creating a way to write applications
that are workflow enabled, that need all or some of the potential services that the
WF model provides.  The workflow runtime is based on a certain set of assumptions
about how applications should be put together (although almost all of those
assumptions are pluggable pieces of the infrastructure that you can change if you
like).

Perhaps your application won’t do well with the model that WF provides. 
But I think with more and more people writing services – there is going to be a big
need to tie those services together (not to mention all the applications that people
write today which really are workflows, whether people realize it or not).  And
I think WF will prove to be the best way to write those kinds of applications. 

Is WF easy?  No – I do not think WF is easy.  If it were easy, it would hardly
be powerful enough to be very useful.

Flame away 🙂