.NET Services – Cloud Interoperability

Speaking of standards – I’m thrilled to report that we will release the “M5” (Milestone 5) CTP (Community Technology Preview – think Beta) for .NET Services (part of the Azure Services Platform) tomorrow!  For those who aren’t familiar with this effort, here’s the primer. Almost two years ago, we introduced these services – Service Bus (secure messaging across networks and firewalls), Access Control (user access to web apps and services across multiple standards-based identity providers), and Workflow (for orchestrating and routing Service Bus messages).  From the beginning, .NET Services was designed for multi-cloud, multi-platform use. Developers can use the .NET Services in conjunction with ANY programming language (using support for industry-standard protocols, or via available SDKs for .NET, Java and Ruby) on ANY platform to create or extend federated applications. A good overview of .NET Services is available here.


This milestone contains enhancements to all of the services, including expanded support for standards like REST, ATOM, SOAP and HTTP.  As I mentioned previously, we demonstrated cloud-to-cloud interop in action at MIX.  Specifically, we showed how the Access Control Service and Service Bus could be integrated with a Python application deployed into Google App Engine using just two lines of code. As always, feedback from developers is critical to us. So, please take time to sign up for the CTP, and tell us what you think. We’re on our way to commercial availability later this year and we need your help to get there.
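To make the interop claim concrete, here is a minimal Python sketch of what calling such a service over plain HTTP looks like from any platform. The endpoint URL and token value are hypothetical placeholders, not the real Service Bus address or Access Control token format; the point is only that a standards-based token travels in an ordinary `Authorization` header on an ordinary HTTP request, so no Microsoft SDK is required.

```python
import urllib.request

# Hypothetical values for illustration only -- real values come from your
# Service Bus namespace and the Access Control Service.
ENDPOINT = "https://example.servicebus.windows.net/myqueue/messages"
ACS_TOKEN = "token-issued-by-access-control"

def build_send_request(body: bytes) -> urllib.request.Request:
    """Build (but do not send) an authenticated HTTP POST to the endpoint."""
    request = urllib.request.Request(ENDPOINT, data=body, method="POST")
    # The token issued by the identity provider rides in a standard header.
    request.add_header("Authorization", f'WRAP access_token="{ACS_TOKEN}"')
    request.add_header("Content-Type", "text/plain")
    return request

req = build_send_request(b"hello from any platform")
print(req.get_method(), req.get_header("Authorization"))
```

Sending the request (e.g. with `urllib.request.urlopen`) is the only remaining step, which is how a Google App Engine app can talk to the Service Bus in a couple of lines.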


If you haven’t already, you can follow our cloud efforts by adding @Azure on Twitter.

Practical Oslo Examples Starting to Emerge

In recent days there’s been some interesting movement in Oslo-land:

  • Doug Purdy and Chris Sells did a presentation at Mix09 that showed a domain-specific language for invoking RESTful services, and how to use M to create RESTful services
  • BizTalk MVP Yossi Dahan has published (Part 1, Part 2, Part 3) details about his domain-specific approach to BizTalk deployments
  • Kris Horrocks has started blogging, and posted his domain-specific approach to interacting with X10 home automation

What these three things have in common is that they are the start of practical examples of how you can author a DSL using M, and how the DSL can be used to simplify the effort required to do things (i.e., reduce the amount of code you need to write). Until now it’s been interesting to watch, but there hasn’t been much in the way of practical examples. The fact that these are starting to emerge speaks to the increasing maturity of the project.

The path to learning this stuff starts with M, then you move on to creating DSLs using MGrammar, and then the final phase is creating a runtime that is actually doing something with the data produced by your DSL. This is where Doug and Chris’s presentation (I highly recommend you click the link above and watch it) really resonates well. The “Murl” effort they’ve been working on (available here) makes it really easy to create a REST client.

The sample also shows how extensible Intellipad is, by adding a new “mode” that enables actually USING the DSL (parsing *and* executing) from inside Intellipad and calling RESTful services. Very cool! The code you see below is the syntax of the DSL used to call REST services.

But, in my opinion, the most exciting part of their presentation was the MService part, which is a (work-in-progress, not released) way to create a RESTful service, using a DSL written in M. The screen shot below shows the service definition in the left-hand pane, and the generated SQL in the right-hand pane (red arrows indicate some of the relationships between the two).

They saved the file out to c:\inetpub\wwwroot, and made a REST request (from Intellipad, using Murl-mode, just because they could :)). IIS routes any request for a “*.m” resource to a handler. The operation is entirely self-bootstrapping: if the service sees no storage, it creates the storage.
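That self-bootstrapping idea – create your own backing store the first time you're asked to serve a request – is worth dwelling on. Here's a tiny conceptual sketch in Python (using SQLite as a stand-in store; the table name and schema are invented for illustration, not anything MService actually generates):

```python
import sqlite3

def open_store(path: str = ":memory:") -> sqlite3.Connection:
    """Open the backing store, creating the schema on first use.
    Same idea as MService: if no storage exists, create it."""
    conn = sqlite3.connect(path)
    # Idempotent: a no-op when the table already exists.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS resources ("
        " id INTEGER PRIMARY KEY, name TEXT NOT NULL)"
    )
    return conn

conn = open_store()
conn.execute("INSERT INTO resources (name) VALUES (?)", ("demo",))
rows = conn.execute("SELECT name FROM resources").fetchall()
print(rows)
```

Because the create step is idempotent, the handler can run it on every request without caring whether this is the first hit or the millionth.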

M is all about writing down data, and the ultimate output is structured data in the form of MGraph. The payoff becomes tangible when we use a runtime to do interesting things with that data, and MService is the best example of this that I’ve seen thus far. It’s a tantalizing view of how developers will create applications in the future.

Technorati Tags: Oslo,M,DSL

Moving beyond the “Manifesto”

As you might expect, several of us spent most of Thursday and Friday last week in conversations with developers, standards body members and other vendors regarding open standards for cloud computing and how we get there collaboratively. Being in this industry for so many years, I remember a time when new technologies and platforms did not produce much interest in standards and interoperability. It was great this time around to see broad support for openness in the cloud and transparency on the approach to interoperability.  I was also happy to see a number of community-driven efforts spin up last week, which will provide enormously valuable feedback in defining the desired end-state. It’s important for everyone to take a step back and remember this isn’t about vendors; it’s about developers and end-users.


As I indicated on Wednesday night, Microsoft welcomes the opportunity for open dialogue on cloud standards. To that end, we have accepted an invitation to meet on Monday at 4pm in New York at the Cloud Computing Expo with other vendors and members of standards bodies.  From our perspective, this represents a fresh start on the conversation – a collaborative “do-over” if you will.

BizTalk 2009 Webcasts

This webcast will look at implementing a service aggregator pattern to call three WCF services that will book a hotel, flight and conference for attendees of a conference. The design will be kept simple for now, and will be optimized and made more reliable in future webcasts.
Level: 200
In this webcast you will see:
  • Adding WCF service references
  • Creating a service aggregator
  • Calling WCF services from an orchestration
  • Creating WCF send ports
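The aggregator pattern the webcast describes can be sketched language-agnostically: one operation fans out to the three booking services and merges their results. The sketch below uses Python with stub functions standing in for the WCF services; the function names and return shapes are invented for illustration (in the webcast the calls would come from a BizTalk orchestration via WCF send ports).

```python
# Stub "services" standing in for the three WCF services.
def book_hotel(attendee):
    return {"hotel": f"room for {attendee}"}

def book_flight(attendee):
    return {"flight": f"seat for {attendee}"}

def book_conference(attendee):
    return {"conference": f"pass for {attendee}"}

def book_trip(attendee):
    """Aggregate the three bookings behind a single operation."""
    result = {}
    for service in (book_hotel, book_flight, book_conference):
        result.update(service(attendee))
    return result

print(book_trip("Alice"))
```

The caller sees one booking operation; reliability concerns (what happens when the flight books but the hotel fails) are exactly what the later webcasts in the series set out to address.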

Oslo based solution for deploying BizTalk applications – the runtime

This is the third post in a series describing my Oslo-based solution for deploying BizTalk applications. I’ve used this exercise to play around with ’M’, but it was important for me to work on a real solution, with real benefits – something I could actually use. In Part I, I discussed the concept and presented both the “source code” of my app and the output I was working toward; Part II was all about the MGrammar part of the solution.

In this third part I will discuss the last missing piece – the runtime.

Before I start, though, I should say that I found getting into ’M’ somewhat confusing at first; and while it’s more than possible I’m still missing some things, I hope this series can help one or two people in their journey with Oslo – which is, without a doubt, an exciting one!

There are two things, I believe, that contributed to my confusion. The first is the fact that ’M’ is really many things – quite different things, actually. From what I hear, Microsoft has identified the challenge some of us (me) are having getting a grasp on ’M’ and is hard at work bringing things [closer] together; hopefully it won’t be long before we know what the converged language will look like. In the meantime, one simply has to remember that –

There’s MSchema – which you could use to define models, a bit like XML Schema, or declaring your classes in code, or even tables in SQL; I haven’t really touched on MSchema in this series but I might come back to that later.

Then there’s MGraph, which is a way to define instances of things – possibly ones that have been modelled using MSchema, but, as is evident from my little project, not necessarily; MGraph can be very useful even if you don’t have a model, as long as you have your grammar. In comes MGrammar, the third aspect of ’M’, which can be used to define a syntax for your very own [domain-specific] language for describing things.
A ’runtime’ can then be used to process the MGraph instances produced by parsing inputs in your language.
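To make the grammar → graph → runtime split concrete, here is a toy stand-in for the whole pipeline, in Python rather than M. The "grammar" is just line-splitting and the "graph" is a labelled dict – both invented for illustration – but the division of labour is the same: the parser turns source text into a labelled tree, and the runtime walks that tree and acts on it.

```python
def parse(source: str) -> dict:
    """Toy parser: an 'Application X' header followed by 'Reference y' lines."""
    lines = [l.strip() for l in source.strip().splitlines() if l.strip()]
    label, name = lines[0].split(maxsplit=1)
    children = [tuple(l.split(maxsplit=1)) for l in lines[1:]]
    return {"label": label, "name": name, "children": children}

def run(graph: dict) -> list:
    """Toy runtime: walk the graph and emit deployment steps."""
    steps = ["create application " + graph["name"]]
    for kind, value in graph["children"]:
        steps.append(f"add {kind.lower()} {value}")
    return steps

source = """
Application MyApp
Reference SomeAssembly.dll
"""
print(run(parse(source)))
```

In the real solution, MGrammar plays the role of `parse` (producing MGraph) and the console application described below plays the role of `run`.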

And that is the second thing that really confused me – what is that ’runtime’? In all the ’M’ presentations I’ve seen, the ’runtime’ was merely mentioned and never received enough “floor space”, and yet an MGrammar without a runtime is, in the majority of cases, quite useless; you have to have a runtime that acts on your source code. In fact, the runtime acts on the MGraph resulting from your language, which is what makes it all so brilliant, because in a sense this is where everything comes together – your runtime can work on instances described in your language, on MGraph instances stored in the repository created using MSchema, and possibly even ones defined using Quadrant.

The point is that there must be a runtime that understands the model behind your language, can parse its graph, and then do whatever you need it to do; and it is your job to build that runtime.

So what have I done for my runtime? Here’s a quick overview (reminder: the full source code will find its way shortly onto CodePlex) –

My runtime is a console application, one that takes a source code file path as an argument and outputs MSBuild files (and dependencies) that can be used to deploy the application described in the source code onto BizTalk Server.

The first part of my runtime – which I will not bore you with – is about validating the command line arguments; standard stuff.

The second part is about creating the parser for my language, where, thankfully, the Oslo SDK does all of the heavy lifting – it includes a class called DynamicParser which, once created, you can use to parse your source code.

To create the DynamicParser you must first compile your language, and that’s easy enough to do – you start by creating a compiler

MGrammarCompiler compiler = new MGrammarCompiler();

and continue by supplying your grammar

compiler.SourceItems = new SourceItem[] {
        new SourceItem {
            Name="BTSDeploy",
            ContentType = ContentType.Mg,
            TextReader = new StreamReader(GetLanguageDefinition())
        }
    };

(GetLanguageDefinition() is a simple helper method I wrote to get the grammar file embedded as a resource in the exe)

Now you’re ready to compile your language, but to make things manageable you want to provide it with an error reporter; the compiler will report any errors to the stream you provide. I’ve naturally used the console –

TextWriterReporter errorReporter = new TextWriterReporter(Console.Out);
if (compiler.Compile(errorReporter) != 0 || errorReporter.HasErrors)
{
    Log("Failed to compile language definition\nSee above for details");
    return null;
}

If the compilation succeeded you are ready to create your parser –

DynamicParser parser = new DynamicParser();
compiler.LoadDynamicParser(parser);

That’s part one of three done.

The next step is to use the dynamic parser to parse your source code, the output of which would be a graph representation of the source; luckily the SDK does virtually all the lifting here as well, and it comes down to one line –

object rootNode = parser.Parse<object>(sourceCodeFileName, null, errorReporter);

Note that the output type is object – which, as you will find out if you try this, is quite painful. Currently all the types used in the graph are internal, which makes debugging quite difficult (you can’t look at any variables you hold in any meaningful way; you have to keep calling methods, as you’ll see next). Hopefully this will change in one of the next updates to the SDK.

In any case, rootNode is now pointing at the root of a graph – a tree-like structure you can ’walk’ to extract the pieces of information you care about in the source code. Here you’re expected to use methods like GetLabel, GetSequenceElements and GetSuccessors to reach nodes and their values in the graph, and, of course, to do that you need to know exactly what your graph looks like. My first instinct was to look at the preview pane in Intellipad (usually the right-most pane when working with MGrammar), as it shows a representation of the MGraph created for the source code and language used; this worked quite well but, as I found out, wasn’t the most trivial thing – the two didn’t align completely and I ended up having to resort to trial-and-error to get the parsing logic right.

The reason is that M has a few shortcuts one could take, but the graph you would be working on is the very basic, more verbose format; some information on this is mentioned here.

Then, on a recent visit to Redmond, Dana Kaufman passed on a great tip – if you ’compile’ your grammar using mg.exe to create the mgx file (basically a ZIP file containing a XAML representation of the language) and then use mgx.exe on your source file, adding a reference to the mgx file you just created, you end up with an ’M’ file which is exactly the graph your runtime will be working on; so useful!

So – here are a few examples of how I worked the graph. To start with, I knew my root node should be a node with the label ’Application’, so I checked it this way –

string label = graph.GetLabel(rootNode).ToString();
if (label != "Application")

I then knew that the application name would be a child element of the root node, so I extracted it like this

// extract the application's data - this should contain two nodes - the application name and the list of items in the application
List<object> appData = graph.GetSequenceElements(rootNode).ToList<object>();
//first line should be the application name, make sure it is not a node and extract the label
if (!graph.IsNode(appData[0]))
    Contents.AppName = appData[0].ToString();

the second node in the appData collection is where the graph ’continues’, so to get the list of things that compose my application I needed to walk down that path –

//the second element should be the list of lines
foreach (object section in graph.GetSuccessors(appData[1]))
{
    //each successor would be a category (reference, import binding, resource, etc.), with a list of items
    processSection(graph, section);
}

with processSection starting with –

string sectionName = graph.GetSequenceLabel(graph.GetSuccessors(section).First()).ToString();
List<object> items = graph.GetSuccessors(graph.GetSuccessors(section).First()).ToList<object>();

Log("Found section '{0}'", sectionName);
switch (sectionName)

I hope that from these few examples you can see what it takes to work the graph – the graphBuilder (a somewhat confusing name, as I’m using it to walk the graph, not build it) has all the methods you need to access the various nodes (though there’s no XPath-like support), but as all the types are (currently) internal to the Microsoft assembly, you’re always working with objects, which is less than ideal.

Again – my full source code is on its way to CodePlex; I just want to make sure it’s commented well enough to be well understood, and I’m struggling with time. The bottom line is that once you figure out how the graph builder works, learn how to see your graph visually (using mg.exe and mgx.exe) and get used to the fact that you’re dealing with objects for now, parsing the source code is very easy.

Obviously it is completely down to you what you then do with all the information you’ve extracted from the source code. In my case, my runtime uses a plug-in model, so the first part is all about using the Oslo SDK to get an instance of a BizTalkDeployment class populated based on the contents of the input file; this class looks like –

    public class BizTalkDeployment
    {
        public string AppName { get; set; }
        public List<object> References { get; set; }
        public List<object> Build { get; set; }
        public List<BizTalkAssembly> BizTalkAssemblies { get; set; }
        public List<object> ImportBindings { get; set; }
        public List<Binding> AddBindings { get; set; }
        public List<Assembly> Resources { get; set; }
    }

I then use late binding and configuration to load a plug-in that takes an instance of this class and does the work, be it generating MSBuild scripts, deploying to the local machine using BTSTask, or anything else.
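As a conceptual parallel to that late-bound, configuration-driven plug-in loading (sketched here in Python rather than the C# reflection the solution actually uses), the idea is to resolve a "module:attribute" string from configuration into a callable and hand it the populated deployment object. The `"module:attribute"` convention is my own illustration, not part of the solution:

```python
import importlib

def load_plugin(spec: str):
    """Resolve a 'module:attribute' configuration string to a callable."""
    module_name, attr = spec.split(":")
    module = importlib.import_module(module_name)
    return getattr(module, attr)

# Any importable callable works; here the "plug-in" is just json.dumps,
# standing in for a real deployment plug-in that consumes the object.
plugin = load_plugin("json:dumps")
print(plugin({"AppName": "MyApp"}))
```

Swapping the deployment strategy then means changing one configuration string, not recompiling the runtime – the same benefit the C# late-binding approach buys.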

DevWeek 2009 – it’s a wrap


We just wrapped up another great year at DevWeek 2009 in London. It’s one of my favorite shows to attend each year because of the central location, the attendees, the great conference organizers (thanks Nick!), and the chance to hang out with good friends in one of the best cities in the world. Numerous Pluralsight instructors speak here each year – it’s something of a yearly retreat for us. You can check out some of the #devweek action on Twitter.

I did the keynote this year on cloud computing, and then I gave several breakout sessions on WCF, REST, ADO.NET Data Services, .NET 4.0, and Dublin.  Here’s the code from my demos.

And here’s the new Pluralsight On-Demand! shirt that we gave away this week:

[shirt photos: IMG_0440, IMG_0441]

Time to head home now, and time to get back to recording the next few Pluralsight On-Demand! titles I’m working on.  Next up – the Azure Services Platform.