by community-syndication | Nov 2, 2010 | BizTalk Community Blogs via Syndication
“Yet another post on AppFabric Monitoring”, I hear you saying. Indeed, and one that you may want to read, especially if you use AppFabric mostly for management and monitoring of code-based WCF services. If you have already looked at, or better yet used, the monitoring features of AppFabric, you will know that WF-based services are pretty well covered. With different monitoring levels, associated default tracking profiles, the ability to assign a custom tracking profile, and more, AppFabric can easily be configured to provide comprehensive and insightful information about what is happening inside your WF services. For pure WCF services, however, the story is slightly different: at the Health Monitoring level, you get a single tracking record for each WCF service call (for a successful call, it is the OperationCompleted record). This is all good, as we do get data about the number of WCF calls and whether they completed successfully or resulted in an error. But what if we need more? What if we want to understand the business meaning of each call; for example, how many and which of the PurchaseOrder transactions were approved or rejected by the ApprovePO() service operation? This is where AppFabric lacks functionality, and this is what we are going to address in this post with the help of user-defined events.
User-Defined Events
So, the first thing we need to look at is the flow of tracking data through the AppFabric Monitoring infrastructure. This is pretty well described in the AppFabric core docs. Here I’m borrowing the diagram from the core docs, only to highlight that in order to emit custom tracking records for WCF services we will need to plug into the WCF Analytic Tracing infrastructure marked as (1) below:
The idea is simple – from within our service implementation we will be emitting our own analytic tracing events containing the business data that we are interested to capture and report on. Once our custom events are placed on the ETW session used by AppFabric monitoring, they will find their way, via the AppFabric Event Collector service, through to the AppFabric Monitoring database. To achieve this we will use the EventProvider class defined in the System.Diagnostics.Eventing namespace. Luckily for us, the WCF team has a sample of using the EventProvider, whereby they encapsulate the mechanics of interacting with this API into a custom class called WCFUserEventProvider. Here is the code for this class:
public class WCFUserEventProvider
{
const string DiagnosticsConfigSectionName = "system.serviceModel/diagnostics";
const int ErrorEventId = 301;
const int WarningEventId = 302;
const int InfoEventId = 303;
const int Version = 0;
const int Task = 0;
const int Opcode = 0;
const long Keywords = (long)0x20000000001e0004;
const byte Channel = 0x12;
const int ErrorLevel = 0x2;
const int WarningLevel = 0x3;
const int InfoLevel = 0x4;
EventDescriptor errorDescriptor;
EventDescriptor warningDescriptor;
EventDescriptor infoDescriptor;
bool hostReferenceIsComplete;
string hostReference;
private String HostReference
{
get
{
if (hostReferenceIsComplete == false)
{
CreateHostReference();
}
return hostReference;
}
}
EventProvider innerEventProvider;
public WCFUserEventProvider()
{
Guid providerId;
if (HostingEnvironment.IsHosted)
{
DiagnosticSection config = (DiagnosticSection)WebConfigurationManager.GetSection(DiagnosticsConfigSectionName);
providerId = new Guid(config.EtwProviderId);
hostReferenceIsComplete = false;
}
else
{
DiagnosticSection config = (DiagnosticSection)ConfigurationManager.GetSection(DiagnosticsConfigSectionName);
providerId = new Guid(config.EtwProviderId);
hostReference = string.Empty;
hostReferenceIsComplete = true;
}
innerEventProvider = new EventProvider(providerId);
errorDescriptor = new EventDescriptor(ErrorEventId, Version, Channel, ErrorLevel, Opcode, Task, Keywords);
warningDescriptor = new EventDescriptor(WarningEventId, Version, Channel, WarningLevel, Opcode, Task, Keywords);
infoDescriptor = new EventDescriptor(InfoEventId, Version, Channel, InfoLevel, Opcode, Task, Keywords);
}
public bool WriteErrorEvent(string name, string payload)
{
if (!innerEventProvider.IsEnabled(errorDescriptor.Level, errorDescriptor.Keywords))
{
return true;
}
return innerEventProvider.WriteEvent(ref errorDescriptor, name, HostReference, payload);
}
public bool WriteWarningEvent(string name, string payload)
{
if (!innerEventProvider.IsEnabled(warningDescriptor.Level, warningDescriptor.Keywords))
{
return true;
}
return innerEventProvider.WriteEvent(ref warningDescriptor, name, HostReference, payload);
}
public bool WriteInformationEvent(string name, string payload)
{
if (!innerEventProvider.IsEnabled(infoDescriptor.Level, infoDescriptor.Keywords))
{
return true;
}
return innerEventProvider.WriteEvent(ref infoDescriptor, name, HostReference, payload);
}
private void CreateHostReference()
{
if (OperationContext.Current != null)
{
ServiceHostBase serviceHostBase = OperationContext.Current.Host;
VirtualPathExtension virtualPathExtension = serviceHostBase.Extensions.Find<VirtualPathExtension>();
if (virtualPathExtension != null && virtualPathExtension.VirtualPath != null)
{
// HostReference Format
// <SiteName><ApplicationVirtualPath>|<ServiceVirtualPath>|<ServiceName>
string serviceName = serviceHostBase.Description.Name;
string applicationVirtualPath = HostingEnvironment.ApplicationVirtualPath;
string serviceVirtualPath = virtualPathExtension.VirtualPath.Replace("~", string.Empty);
hostReference = string.Format("{0}{1}|{2}|{3}", HostingEnvironment.SiteName, applicationVirtualPath, serviceVirtualPath, serviceName);
hostReferenceIsComplete = true;
return;
}
}
// If the entire host reference is not available, fall back to the site name and application virtual path.
// This happens if you emit a trace from outside an operation (e.g. at startup) before an in-operation trace has been emitted.
hostReference = string.Format("{0}{1}", HostingEnvironment.SiteName, HostingEnvironment.ApplicationVirtualPath);
}
}
Simply put, you call WriteInformationEvent(), WriteWarningEvent(), or WriteErrorEvent() from WCFUserEventProvider to emit an event of information, warning, or error type respectively. Each of these methods takes a Payload parameter, which is where you put the custom data you want to be logged in the AppFabric Monitoring store. Here’s a sample code snippet using the WCFUserEventProvider class within a WCF service:
Microsoft.Samples.WCFAnalyticTracingExtensibility.WCFUserEventProvider ev = new Microsoft.Samples.WCFAnalyticTracingExtensibility.WCFUserEventProvider();
ev.WriteInformationEvent("Get completed", string.Format("{0}", Trace.CorrelationManager.ActivityId));
OK, the user-defined event is in the monitoring database. Now what?
Querying User-Defined Events
AppFabric Dashboard
Well, to start with – you could use the AppFabric Dashboard to view user events. From the dashboard, click on the Tracked Events link in the Action Pane:
This takes us to the Query window, where we’ll see all tracked WCF events, including any User-defined records with their payload:
AppFabric Dashboard limitations
With this approach however, the following limitations apply:
- If your service call emits multiple events and you are using a monitoring level lower than End-to-End Monitoring, there is no way to correlate these events with each other as belonging to the same service operation call. This is because the end-to-end activity id used to correlate multiple events is only populated at the End-to-End Monitoring level and above
- The AppFabric tooling doesn’t offer functionality to query for user events using custom criteria – for example, based on the payload content
We can easily work around both limitations.
Correlation of multiple user events originating from a given service operation call
In the WCF service code, we'll ensure that an end-to-end activity id is generated, regardless of the monitoring level configured for the service. An end-to-end activity id is a Guid token that, if present in the SOAP headers of a WCF request, automatically flows through the chain of WCF service operation executions. This end-to-end activity id is recorded as part of the tracking events emitted by these operations. As mentioned previously, AppFabric generates an end-to-end activity id only at the End-to-End Monitoring and Troubleshooting levels.
So, to ensure that an end-to-end activity id is present, we will use the following code snippet in the very beginning of the service operation implementation:
if (Trace.CorrelationManager.ActivityId == Guid.Empty)
{
Trace.CorrelationManager.ActivityId = Guid.NewGuid();
}
This code generates a new ActivityId if one is not already present.
In addition to the activity id, when generating user events we will also need to indicate which operation generated the event – this information is not part of the auto-populated properties of a user event. To get this metadata into our user events, we will construct the Name parameter of the Write<EventType>Event() methods in the following format:
<OperationName>: <Business step>
The following snippet shows how to retrieve the operation name using the OperationContext class and then log the event (note that <Business step> is arbitrary text indicating the logical step in the implementation):
string action = OperationContext.Current.IncomingMessageHeaders.Action;
string operationName = OperationContext.Current.EndpointDispatcher.DispatchRuntime.Operations.FirstOrDefault(o => o.Action == action).Name;
Microsoft.Samples.WCFAnalyticTracingExtensibility.WCFUserEventProvider ev = new Microsoft.Samples.WCFAnalyticTracingExtensibility.WCFUserEventProvider();
ev.WriteInformationEvent(string.Format("{0}: {1}", operationName, "Approval completed"), "APPROVED");
I’m sure by now you’re tired of snippets. So, here is the full WCF service operation implementation:
public string GetData(int value)
{
//Ensure we have an end-to-end ActivityId
if (Trace.CorrelationManager.ActivityId == Guid.Empty)
Trace.CorrelationManager.ActivityId = Guid.NewGuid();
//Get the current operation name
string action = OperationContext.Current.IncomingMessageHeaders.Action;
string operationName = OperationContext.Current.EndpointDispatcher.DispatchRuntime.Operations.FirstOrDefault(o => o.Action == action).Name;
//Implementation goes here
//Log an information event that we "approved" the transaction
Microsoft.Samples.WCFAnalyticTracingExtensibility.WCFUserEventProvider ev = new Microsoft.Samples.WCFAnalyticTracingExtensibility.WCFUserEventProvider();
ev.WriteInformationEvent(string.Format("{0}: {1}", operationName, "Approval completed"), "APPROVED");
return "Done";
}
To close on this implementation, I should also note that the ActivityId generation and the operation name retrieval can easily be rolled into the WCFUserEventProvider class implementation. This moves these repetitive, plumbing-level lines of code out of each WCF method, leaving only the calls to the Write<EventType>Event() methods in your code.
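Such a wrapper might look along these lines (the class and method names here are illustrative, not part of the WCF sample):

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.ServiceModel;

// Hypothetical helper that rolls the ActivityId and operation-name plumbing
// into a single call; it reuses the WCFUserEventProvider shown earlier.
public static class BusinessEventLogger
{
    static readonly Microsoft.Samples.WCFAnalyticTracingExtensibility.WCFUserEventProvider
        provider = new Microsoft.Samples.WCFAnalyticTracingExtensibility.WCFUserEventProvider();

    public static void LogInformation(string businessStep, string payload)
    {
        // Ensure an end-to-end ActivityId exists regardless of the monitoring level
        if (Trace.CorrelationManager.ActivityId == Guid.Empty)
        {
            Trace.CorrelationManager.ActivityId = Guid.NewGuid();
        }

        // Resolve the current operation name, exactly as in the snippets above
        string action = OperationContext.Current.IncomingMessageHeaders.Action;
        string operationName = OperationContext.Current.EndpointDispatcher
            .DispatchRuntime.Operations.FirstOrDefault(o => o.Action == action).Name;

        // Emit the event using the "<OperationName>: <Business step>" name format
        provider.WriteInformationEvent(
            string.Format("{0}: {1}", operationName, businessStep), payload);
    }
}
```

With this in place, the body of each service operation shrinks to a single line such as BusinessEventLogger.LogInformation("Approval completed", "APPROVED");.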
Custom Reporting Queries
If you’ve read my previous article on creating custom AppFabric reports using Excel and PowerPivot, you would already know how to write T-SQL queries against the AppFabric Monitoring API (the AppFabric Monitoring public database views). The following T-SQL can be used to query for both the AppFabric out-of-the-box WCF events and our user-defined events:
SELECT
ES.Name,
CASE
WHEN E.EventTypeId IN (214, 222, 223) THEN E.OperationName
WHEN E.EventTypeId IN (301, 302, 303) THEN E.Name
END as OperationName,
CASE
WHEN E.EventTypeId = 214 THEN 'Success'
WHEN E.EventTypeId IN (222, 223) THEN 'Error'
WHEN E.EventTypeId = 301 THEN 'User Event (Exception)'
WHEN E.EventTypeId = 302 THEN 'User Event (Warning)'
WHEN E.EventTypeId = 303 THEN 'User Event (Information)'
END as EventType,
E.E2EActivityId,
E.TimeCreated,
E.Duration / 1000.0 as Duration,
E.Payload
FROM
ASWcfEvents E
JOIN ASEventSources ES ON ES.Id = E.EventSourceId
WHERE EventTypeId IN (214, 222, 223, 301, 302, 303)
ORDER BY TimeCreated DESC
This query will return the following result for a single WCF service call based on the code sample above:
It is not hard to imagine how the second limitation can be addressed: the WHERE clause in the above query can be extended to include criteria based on the payload, looking for specific business data or status, and the T-SQL can then be used in any database-bound reporting tool such as SQL Server Reporting Services or Microsoft Excel, with or without PowerPivot.
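As a sketch, here is how the query might be narrowed to count the approved PurchaseOrder transactions recorded by the code above (the ApprovePO operation name and APPROVED payload value follow the examples in this post):

```sql
-- Count user-defined information events whose payload marks the call as approved
SELECT COUNT(*) AS ApprovedCount
FROM ASWcfEvents E
JOIN ASEventSources ES ON ES.Id = E.EventSourceId
WHERE E.EventTypeId = 303           -- User Event (Information)
  AND E.Name LIKE 'ApprovePO:%'     -- events emitted by the ApprovePO operation
  AND E.Payload = 'APPROVED'
```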
Conclusion
Despite some limitations in AppFabric Monitoring with regard to code-based WCF services, the technology is easily extensible with user-defined events that capture information about service execution in as much detail as necessary. Coupled with custom reports on top of the AppFabric monitoring data, user events provide the IT Pro with a comprehensive monitoring infrastructure spanning both pure WCF and WF-based services.
Thanks for reading!
by community-syndication | Nov 1, 2010 | BizTalk Community Blogs via Syndication
In the middle tier, AppFabric (AF) Caching can be set up in High Availability (HA) mode, a feature that improves resiliency in case a cache server goes down. For the back end, when using SQL Server for your configuration database, the simplest answer is to use mirroring. You may use a SQL cluster, but this blog will go over the easiest way to improve resilience of the back end for the AppFabric configuration database. The SQL Server-Based Cluster Configuration article covers this subject, but the following is a more holistic, step-by-step walkthrough, including things to be mindful of and hints on how to go about testing, all based on a recent customer engagement.
Setup steps and basic concepts
Regardless of whether the AF Caching setup is a single server or a cluster of servers, only one configuration database is required, and hence this is the one that needs SQL mirroring. For other, more specific articles on SQL mirroring, review this MSDN article, as well as these SQLCAT articles: Implementing Application Failover with Database Mirroring and Database Mirroring Best Practices and Performance Considerations.
For a full SQL mirroring setup, a principal, a mirror, and a witness SQL server are required. It is true that a witness is not required to have a mirror setup but, in that case, failing over from the principal to the mirror will require an administrator to first become aware of the database failure, then manually switch to the mirror database and redirect all clients to it (DNS switching); while all of this is taking place, caching requests will not be satisfied. To avoid this, the following only considers a setup including a witness. These are the general steps to take in order to set up a full SQL failover database for AF Caching; the main purpose is to focus on the order of the steps and on what AF Caching requires.
1. Identify the 3 different SQL Server instances for each required role; the obvious guidance is to host these on separate servers. For this example, they will be labeled SVR01 (principal), SVR02 (mirror) and SVR03 (witness).
2. Create an empty DB on SVR01 (principal); this will be the principal DB, the one to be used by the AF cache server(s). For this sample it will be named AF4.
3. Set up all the necessary logins as needed for AF Caching and then configure the AF cache server(s) to point to AF4.
4. SVR01 will now have new logins (under Security) created by the setup, one for each cache host machine; note that this will also be the case when running the AddCacheHost script. As shown below, 4 cache hosts are set up, one for each account used by the AppFabric Caching Service running on each host (Redmond is the domain name). These logins need to be replicated on SVR02 (mirror); this allows the mirror DB to properly function as the principal DB when a failure of the principal takes place.
Also note that these entries will be deleted when running the RemoveCacheHost script.
5. Stop the AppFabric Caching Service on all hosts, then generate a full backup (data and log) of the AF4 DB (principal) and restore it to the SVR02 server as the SVR02.AF4 database; this will be your mirror DB.
6. Take a transaction log backup of SVR01.AF4, and then restore it on SVR02.AF4. This is important so that the log sequence chain is properly set up.
7. Right-click on SVR01.AF4 and click Tasks -> Mirror… to run the mirroring wizard. Click "Configure Security" and follow the wizard to set up SVR01 as the principal, SVR02 as the mirror and SVR03 as the witness (no DB needs to be set up on this server), and choose a synchronous setup. Once completed, start the mirroring session. The final setup should look similar to this, except that it should show three different names for each SQL Server instance.
8. Go back to all the cache hosts and change their connection strings to include the property "Failover Partner=SVR02". Note that on 64-bit machines, the DistributedCacheService.exe.config file will be found under the C:\Windows\winsxs\ directory.
However, a better alternative is to leverage the script mentioned on this MSDN page (under the Tip section of the "Availability Considerations" heading), which automates this process.
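For reference, a connection string with the failover partner included would look along these lines (server and database names follow this post's example; your authentication settings may differ):

```
Data Source=SVR01;Failover Partner=SVR02;Initial Catalog=AF4;Integrated Security=True
```

With this in place, the SqlClient provider should transparently connect to SVR02 once it has been promoted to principal.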
9. Turn the cache host services back on and start testing
Testing
The original plan for testing what would happen when SVR01 went offline was to remove its network cable, but since getting to the physical servers in a data center can be challenging, the resolution was to simply stop the SQL Server (SVR01) service, which gives the same effect.
Once the SVR01 (principal) goes offline, the witness will promote SVR02 to be the new principal.
Tests showed that SVR02 took over rather fast; however, small interruptions in service could be seen. Running in-house tests while the cache hosts are under load should be done to understand the behavior of your particular mirroring setup.
Failback: in a scenario where an actual production failover takes place, the standard procedure should be to get back to a full SQL mirroring setup as soon as possible (i.e. to recover the failed machine), since right after the failure the full mirroring setup no longer exists (a mirror SQL server is not present; only the principal, SVR02, and the witness, SVR03, remain). To mimic placing the failed machine (SVR01) back, understanding that it was repaired, simply restart the SVR01 SQL service. Once this takes place, the witness will turn SVR01 into the mirror server (the role that SVR02 used to have). Hence, with each failback a switch of roles takes place.
Allow a few minutes between each of these operations so the witness can make all the appropriate changes. Again, as in the first failover, no lasting interruption in service should occur.
To take into account
A potential maintenance issue can be seen in step #4 above: the AppFabric Caching Service runs under machine accounts and not a domain account, which therefore requires creating/deleting accounts on the mirror server every time a cache host machine is added/removed.
If this step is not executed as cache hosts are added or removed, a cache host whose machine account is missing on the mirror server will go offline in the case of a failover, since the machine would not be allowed into the new principal SQL server (no login exists for it to access the server). If the AppFabric Caching Service runs under a domain group, this is not an issue, because the group account only needs to be added to the mirror server logins once, and no addition or removal of cache hosts affects access to the database.
Reviewed by: James Podgorski
by community-syndication | Nov 1, 2010 | BizTalk Community Blogs via Syndication
At PDC, in my session “Building Web APIs for the Highly Connected Web”, we announced WCF Web APIs, new work we are doing to make HTTP first class in WCF. In this post I am going to describe what we are doing and why. If you are saying, “just show me the bits”, then head on over to wcf.codeplex.com, our new site that we just launched!
Why HTTP?
HTTP is ubiquitous and it’s lightweight. Every consumer that connects to the web understands it, every browser supports it, and the infrastructure of the world wide web is built around it. That means when you travel over HTTP, you get carte blanche status throughout the world wide web; it’s like a credit card that is accepted everywhere. In the past, HTTP’s primary usage was serving up HTML pages. Over time, however, our web applications have evolved. This newer breed is much more dynamic, aggregating data not only from the company server but from a multitude of services hosted in the cloud. Many of those services are themselves now exposed directly over HTTP in order to have maximum reach.
Whereas in the past the primary consumer was a desktop / laptop PC, we’ve now moved into the age of devices, including phones like the iPhone, Android and Windows Phone, as well as portable tablets like the iPad and the upcoming Slate. Each of these devices (including the PC) has different capabilities; however, one thing is consistent: they all talk HTTP.
WCF and HTTP, we’re going much further.
As the industry evolves, our platform needs to evolve. Since .NET 3.5, we have been continually evolving WCF to provide better support for surfacing services and data over HTTP. We’ve made good progress, but there is more we can do. Developers using WCF have said they want more control over HTTP. We’ve also heard developers asking for better support for consuming WCF services with web toolkits like jQuery. Additionally, we’ve heard requests for simpler configuration, less ceremony, more testability, and just an overall simplified model. We hear you and we’re taking action. We’re making significant enhancements to our platform to address these concerns. Below is a list of some of the improvements we are focusing on specific to HTTP, which we just made available on Codeplex (in WCF HTTP Preview 1.zip). For the jQuery work, check out this excellent post by Tomek and this post by Yavor.
Our HTTP focus areas
Media Types and Formats
HTTP is extremely flexible, allowing the body to be presented in many different media types (content types), with HTML, pure XML, JSON, Atom and OData being just a few. With WCF Web APIs we’re going to make it very easy for services to support multiple formats on a single service. Out of the box, we are planning to support XML, JSON and OData; however, we’re also making it very easy to add support for additional media types, including those that contain hypermedia (see my talk for an example). This gives WCF the flexibility to serve a variety of clients based on their needs and capabilities.
Below is a snippet which demonstrates taking a contact returned from an operation and representing it as a png file stored in an images folder. PngProcessor derives from MediaTypeProcessor. Processors are a new extensibility point in WCF Web APIs. MediaTypeProcessor is a special processor that you derive from to support a new format.
public class PngProcessor : MediaTypeProcessor
{
public PngProcessor(HttpOperationDescription operation, MediaTypeProcessorMode mode)
: base(operation, mode)
{
}
public override IEnumerable<string> SupportedMediaTypes
{
get
{
yield return "image/png";
}
}
public override void WriteToStream(object instance, Stream stream, HttpRequestMessage request)
{
var contact = instance as Contact;
if (contact != null)
{
var path = string.Format(CultureInfo.InvariantCulture, @"{0}bin\Images\Image{1}.png", AppDomain.CurrentDomain.BaseDirectory, contact.ContactId);
using (var fileStream = new FileStream(path, FileMode.Open))
{
byte[] bytes = new byte[fileStream.Length];
fileStream.Read(bytes, 0, (int)fileStream.Length);
stream.Write(bytes, 0, (int)fileStream.Length);
}
}
}
public override object ReadFromStream(Stream stream, HttpRequestMessage request)
{
throw new NotImplementedException();
}
}
This same PNG formatter can sit side by side with formatters for other media types like JSON, XML, and Atom. WCF will automatically select the right processor by matching the accept headers passed from the client against the SupportedMediaTypes.
To see different media types in action, check the ContactManager sample that ships with WCF Web APIs.
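From the client side, content negotiation is driven purely by the Accept header; for example, using the HttpClient described later in this post (the address here is illustrative):

```csharp
// Request the PNG representation of a contact; WCF matches the Accept header
// against each registered processor's SupportedMediaTypes.
var client = new HttpClient();
client.DefaultHeaders.Accept.Add("image/png");
var resp = client.Get("http://localhost:8081/contacts/1");
// resp.Content now holds the bytes written by PngProcessor.WriteToStream
```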
Registering formatters / processors
In order to register processors, we’re exploring a new programmatic configuration model which allows you to configure all processors (including formatters) at a single place within your application. To configure processors, you derive from a HostConfiguration class and override a few methods. You then pass your custom configuration class to the WebHttpServiceHost or WebHttpServiceHostFactory.
In the ContactManager sample we’re shipping on Codeplex, you will see the following in the Global.asax.
protected void Application_Start(object sender, EventArgs e)
{
var configuration = new ContactManagerConfiguration();
RouteTable.Routes.AddServiceRoute<ContactResource>("contact", configuration);
RouteTable.Routes.AddServiceRoute<ContactsResource>("contacts", configuration);
}
Both the ContactResource and ContactsResource are configured with a ContactManagerConfiguration instance. That class registers the processors for each operation.
public class ContactManagerConfiguration : HostConfiguration
{
public override void RegisterRequestProcessorsForOperation(HttpOperationDescription operation,
IList<Processor> processors, MediaTypeProcessorMode mode)
{
processors.Add(new JsonProcessor(operation, mode));
processors.Add(new FormUrlEncodedProcessor(operation, mode));
}
public override void RegisterResponseProcessorsForOperation(HttpOperationDescription operation,
IList<Processor> processors, MediaTypeProcessorMode mode)
{
processors.Add(new JsonProcessor(operation, mode));
processors.Add(new PngProcessor(operation, mode));
}
}
Notice above that the request is configured to support JSON and form URL encoding, while the response supports JSON and PNG. These Register methods are called per operation, so the configuration of processors can be even more fine-grained. Plus, you can reuse your configuration classes across applications.
JsonProcessor / FormUrlEncodedProcessor
In addition to supporting multiple formats for typed operations, we also support untyped operations with the new JSON primitives that come out of our jQuery work, which I mentioned above. The JsonValueSample we’ve included illustrates how this works.
[ServiceContract]
public class ContactsResource
{
private static int nextId = 1;
[WebInvoke(UriTemplate = "", Method = "POST")]
public JsonValue Post(JsonValue contact)
{
var postedContact = (dynamic)contact;
var contactResponse = (dynamic)new JsonObject();
contactResponse.Name = postedContact.Name;
contactResponse.ContactId = nextId++;
return contactResponse;
}
}
In the snippet above you can see that the Post method accepts a JsonValue and returns a JsonValue. Within it, it casts the incoming parameter to dynamic (there is actually an AsDynamic extension method you can use), pulls out the name, and then creates a new JsonObject, sets some properties on it, and returns it.
If you look in the JsonValueSampleConfiguration, you will see that it accepts form URL encoding for the request (something not previously possible in WCF without a lot of work) and returns JSON.
public class JsonValueSampleConfiguration : HostConfiguration
{
public override void RegisterRequestProcessorsForOperation(HttpOperationDescription operation, IList<Processor> processors, MediaTypeProcessorMode mode)
{
processors.Add(new FormUrlEncodedProcessor(operation, mode));
}
public override void RegisterResponseProcessorsForOperation(HttpOperationDescription operation, IList<Processor> processors, MediaTypeProcessorMode mode)
{
processors.ClearMediaTypeProcessors();
processors.Add(new JsonProcessor(operation, mode));
}
}
This is extremely powerful for folks working solely with form URL encoding and JSON who are comfortable with, or prefer, working without a concrete type.
Queryability
One challenge when exposing data over HTTP is how to allow clients to filter that data. In WCF Web APIs we’re introducing IQueryable support on the client and server for addressing these challenges.
Making the service queryable
On the server side, your service operation returns an IQueryable<T> and you annotate it with a [QueryComposition] attribute. Once you do that, your service lights up and is now queryable using the OData uri format.
We’ve included a QueryableSample which illustrates how this works. Below is a snippet from the ContactsResource in that sample.
[WebGet(UriTemplate = "")]
[QueryComposition]
public IEnumerable<Contact> Get()
{
return contacts.AsQueryable();
}
The Get method above returns the contacts as a queryable sequence. (Today the method signature must be IEnumerable<Contact>, but this will be fixed in the near future.)
With query composition enabled, the host will now accept requests like “http://localhost:8081/contacts?$filter=Id%20eq%201”, which says “find me the contact with an Id equal to 1”.
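Other query options from the OData URI format can be composed in the same way; for example (these URIs are illustrative, and you should check the sample for the exact set of options the preview supports):

```
http://localhost:8081/contacts?$filter=Name%20eq%20'Bob'    find contacts named Bob
http://localhost:8081/contacts?$orderby=Name                sort the contacts by name
http://localhost:8081/contacts?$top=2                       take the first two contacts
```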
Note: Currently this feature is not compatible with our new WebHttpServiceHost / Processors, it only works with our existing WebServiceHost. This is temporary as we are planning to migrate over to the new host / processor model.
Querying the service, LINQ to WCF
On the client side we’re introducing the ability to run LINQ queries directly against resources exposed through query composition. We’ve added a CreateQuery<T> extension method which you can use with the new HttpClient (next section) to create a WebQuery<T>. Once you have that query, you can apply a Where or an OrderBy. Once you start to iterate through the result, we automatically issue a GET request to the server using the correct URI based on the filter. The results come back properly ordered and filtered based on your query.
Below is a snippet that shows querying an Orders resource
public IEnumerable<Order> GetApprovedOrders()
{
string address = "http://contoso.com/orders";
HttpClient client = new HttpClient(address);
WebQuery<Order> orders = client.CreateQuery<Order>();
return orders.Where(o => o.State == OrderState.Approved).OrderBy(o => o.OrderID);
}
Getting first class support for HTTP
HTTP is more than a transport; it is a rich application layer protocol. There’s a lot more interesting information than just the body, and it lives in the headers. It is the headers that most of the web infrastructure actually cares about. For example, if you want to allow requests to be cached throughout the web, you need to use entity tags, which live where? In the headers. Point blank, if you want to access the full richness of HTTP, you need to access those headers.
HTTP Messages
We’re introducing support for HttpRequestMessage and HttpResponseMessage. These classes, which originally shipped in the REST Starter Kit, allow unfettered and strongly typed access to the underlying HTTP request and response. With these new APIs you can access HTTP wherever you are: whether you are authoring a service or extending the stack, and whether you are on the server or the client. Another nice thing about these messages is that they are easy to use in unit testing. They have no implicit dependencies on WCF, as WebOperationContext does, nor are they accessed statically. They are lightweight data containers that are very easy to create.
For example, you can author a service which receives the HttpRequestMessage and HttpResponseMessage and directly accesses the headers and the body. The HelloWorldResource below supports caching on the client side, as it returns an entity tag “HW” which the client can send in an IfNoneMatch header in subsequent requests. The resource can then return a status 304 to tell the client to use its cached copy. The client in this case might not be the browser but a proxy server sitting in the middle.
[ServiceContract]
public class HelloWorldResource {
    [WebGet(UriTemplate = "")]
    public void Get(HttpRequestMessage req, HttpResponseMessage resp) {
        if (req.IfNoneMatch.Contains("HW")) {
            resp.StatusCode = HttpStatusCode.NotModified;
            return;
        }
        resp.Content = HttpContent.Create("Hello World Resource", "text/html");
        resp.StatusCode = HttpStatusCode.OK;
        resp.Headers.Tag = "HW"; //set the entity tag
    }
}
The code above would likely be factored into a common set of utility functions rather than being redundantly coded for each operation. The important thing is we’re providing the messages which enables that refactoring.
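As a hedged sketch of what such a refactoring might look like (the extension method below is hypothetical, not part of the shipped bits), the entity-tag check from the Get operation above could be pulled into a helper:

```csharp
public static class EntityTagExtensions
{
    //Returns true (and sets 304) when the client's cached copy is still valid,
    //so the operation can return without building a body.
    public static bool TryNotModified(this HttpResponseMessage resp,
                                      HttpRequestMessage req, string tag)
    {
        if (req.IfNoneMatch.Contains(tag))
        {
            resp.StatusCode = HttpStatusCode.NotModified;
            return true;
        }
        resp.Headers.Tag = tag;
        return false;
    }
}
```

With that in place, each operation body shrinks to `if (resp.TryNotModified(req, "HW")) return;` followed by the happy path.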
You can also mix and match using messages with strongly typed objects representing the body. For example you might want to do a redirect on a Get request for a document that has moved.
[ServiceContract]
public class DocumentResource
{
    [WebGet(UriTemplate = "{name}")]
    public Document Get(string name, HttpResponseMessage resp)
    {
        //Foo has moved
        if (name == "Foo")
        {
            resp.StatusCode = HttpStatusCode.MovedPermanently;
            resp.Headers.Location = new Uri("http://someplace/Foo");
            return null;
        }
        //find the document (lookup elided in this sample)
        Document doc = null;
        return doc;
    }
}
Within the ContactManager sample you will see other examples of mixing messages with concrete types.
HTTP Client
Providing a client for consuming HTTP is just as important as being able to expose it. For that reason we’re also bringing forward the HttpClient we shipped in the REST starter kit. You can use the new client within desktop applications, or within services themselves, to consume other HTTP services. We’re also providing extensions to the client to support queryability, as shown in the query snippet above.
Below is a simple example of using HttpClient.
var client = new HttpClient();
client.DefaultHeaders.Accept.Add("text/xml");
var resp = client.Get("http://contoso.com/contacts/1");
var contact = resp.Content.ReadAsXmlSerializable<Contact>();
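Writes work the same way. Below is a hedged sketch of creating a contact with the same client; the Post overload, HttpContent.Create, and EnsureStatusIsSuccessful follow the shapes from the REST starter kit, but the exact signatures in the new bits may differ:

```csharp
var client = new HttpClient();
//build an XML body by hand here just to keep the sketch self-contained
var body = HttpContent.Create("<Contact><Name>Jane</Name></Contact>", "text/xml");
var resp = client.Post("http://contoso.com/contacts", body);
resp.EnsureStatusIsSuccessful(); //throws unless the server returned 2xx
```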
Request and response processing
When you work with HTTP, there are various parts of the request and response that need to be processed or transformed. With HttpRequestMessage and HttpResponseMessage we’re allowing you to do this processing within the actual operation, as there are places where that is appropriate. However, other concerns are cross-cutting and don’t belong in the operation. Take formatting, for example: it’s very convenient to have the ContactResource simply return and accept a Contact, rather than having to drop down to a message and do the formatting manually. In the same way, the ContactResource operation may depend on certain values extracted from segments of the request URI, like the ID. In the past we dealt with each of these concerns on a one-off basis. With WCF Web APIs we’re exploring a more general-purpose way to handle them. We’re introducing a request and response pipeline of what we’re currently calling Processors. A Processor has a simple execute method which takes inputs and produces outputs. The inputs can be things like the request or response, or outputs from other processors. In this way processors are composable.
Out of the box we use processors today mainly for extracting values from the URI, for content negotiation (selecting the format), and for media type formatters. Processors are extensible, however, and you can introduce your own to add custom processing to the request or the response.
We’ve already seen above how to create processors specific to formatting. Here is an example of a different kind of processor: it takes a latitude and longitude from a URI such as “http://contoso/map/12.3456,-98.7654” and converts them into a Location object. Once the location processor is registered, the MapResource.Get method will automatically get a Location object passed in.
public class Location
{
    public double Latitude { get; set; }
    public double Longitude { get; set; }
}

public class LocationProcessor : Processor<string, string, Location>
{
    public LocationProcessor()
    {
        this.OutArguments[0].Name = "Location";
    }

    public override ProcessorResult<Location> OnExecute(string latitude, string longitude)
    {
        var lat = double.Parse(latitude);
        var lon = double.Parse(longitude);
        return new ProcessorResult<Location> { Output = new Location { Latitude = lat, Longitude = lon } };
    }
}
[ServiceContract]
public class MapResource {
    [WebGet(UriTemplate = "{latitude},{longitude}")]
    public Stream Get(Location location) {
        //return the map
        return null;
    }
}
The processor above inherits from Processor<T1, T2, TOutput>, meaning it takes two inputs (strings in this case) and outputs a Location. In the execute method, the parameter names conventionally match against outputs coming from other processors; in this case the method expects “latitude” and “longitude” parameters. You might be wondering where these values come from. If you look at the MapResource.Get method, its UriTemplate declares two variables named latitude and longitude respectively. A special processor, UriTemplateHttpProcessor, automatically extracts values from the URI and returns those values as outputs – in this case latitude and longitude – thus making them available to the LocationProcessor (or any other processor).
The logic above is very simple in that it just parses numbers. However, you could imagine expanding the processor to do more. For example, it could be rewritten to also accept a more expressive URI like “http://contoso/map/12 deg 34’ 56” N, 98 deg 76’ 54” W”.
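Parsing such a degrees-minutes-seconds coordinate is plain string work and needs nothing from the preview bits. Here is a minimal sketch; the DmsParser class and its regular expression are illustrative, not part of any shipped API:

```csharp
using System;
using System.Text.RegularExpressions;

public static class DmsParser
{
    //Parses a coordinate like "12 deg 34' 56\" N" into signed decimal degrees.
    public static double Parse(string input)
    {
        var match = Regex.Match(input.Trim(),
            @"^(\d+)\s*deg\s*(\d+)'\s*(\d+)""\s*([NSEW])$");
        if (!match.Success)
            throw new FormatException("Not a recognized DMS coordinate: " + input);

        double degrees = double.Parse(match.Groups[1].Value);
        double minutes = double.Parse(match.Groups[2].Value);
        double seconds = double.Parse(match.Groups[3].Value);
        double value = degrees + minutes / 60.0 + seconds / 3600.0;

        //south and west hemispheres are negative by convention
        string hemisphere = match.Groups[4].Value;
        return (hemisphere == "S" || hemisphere == "W") ? -value : value;
    }
}
```

An expanded LocationProcessor could simply call DmsParser.Parse on each segment before falling back to double.Parse; `DmsParser.Parse("12 deg 34' 56\" N")` yields roughly 12.5822 decimal degrees.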
This is just a small illustration of the kinds of things you can do with processors. You could imagine handling concerns related to entity tags, like If-Match / If-None-Match, in processors, for example.
There’s a lot more to say about processors and their configuration. Look for more on both topics in future posts. Darrel Miller also has a nice post where he talks about processors here.
Conventions, Resources and Testability
We’ve heard plenty of feedback from folks in the community that they would like to see us offer configuration alternatives to attributes and provide more out of the box conventions. We’ve also heard developers asking for us to ensure that we provide better support for test driven development, and using tools like IoC containers. As we move forward we are definitely thinking about all of the above.
Our current focus for the platform has been to enhance the existing Web HTTP programming model to provide richer support for HTTP. These enhancements will likely roll into the framework soon and will provide a very smooth migration path for existing WCF HTTP customers.
Longer term, we are also exploring a new convention-based programming model for configuring HTTP resources (services). With this new model we are also looking at how we can make it more resource oriented, for example by allowing specification of child resources so that URIs can be constructed dynamically rather than being hardcoded. This new model will make its way to Codeplex soon, where we’d like to incubate it with the community.
With the new bits we are delivering, we are also being intentional about designing things in a more testable manner. For example, HttpRequestMessage and HttpResponseMessage allow developers to move away from static calls, which are difficult to test. Processors are also easy to test, as each does a single thing and has no static dependencies. In addition to the new bits, we are looking at investments we can make in our existing bits to better support testability. For example, we are exploring allowing you to plug in an IoC container for service instantiation.
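Because the messages are plain objects, a test can exercise the HelloWorldResource from earlier without spinning up a host. The sketch below assumes the messages can simply be constructed and the IfNoneMatch collection populated directly, as in the service sample above; the test framework attributes are MSTest’s:

```csharp
//A hypothetical unit test for HelloWorldResource: no WCF host,
//no static context, just two message objects and a direct call.
[TestMethod]
public void Get_WithMatchingEntityTag_Returns304()
{
    var request = new HttpRequestMessage();
    request.IfNoneMatch.Add("HW");   //client claims it has a cached copy
    var response = new HttpResponseMessage();

    new HelloWorldResource().Get(request, response);

    Assert.AreEqual(HttpStatusCode.NotModified, response.StatusCode);
}
```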
It’s still early, we want your help
We’re still early in the development of these new features! Not all of these features will make it in the box, but many definitely will. You can help us prioritize by checking out our new bits on Codeplex, participating in the forums and adding work items so others can vote.
OK, what are you waiting for? Head on over to wcf.codeplex.com. The future awaits!!!