Good post explaining the new pricing model for Windows Azure AppFabric

Here’s a great post that explains the new pricing model announced yesterday for Windows Azure platform AppFabric usage. At first read, this new model makes sense to me and seems quite reasonable, in addition to being more predictable for customers.

http://blogs.msdn.com/netservices/archive/2010/01/04/announcing-windows-azure-platform-commercial-offer-availability-and-updated-appfabric-pricing.aspx

ASP.NET 4 SEO Improvements (VS 2010 and .NET 4.0 Series)

[In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

This is the thirteenth in a series of blog posts I’m doing on the upcoming VS 2010 and .NET 4 release.  Today’s post covers some of the improvements being made around Search Engine Optimization (SEO) with ASP.NET 4.

Why SEO?

Search engine optimization (SEO) is important for any publicly facing website.  A large percentage of traffic to sites now comes from search engines, and improving the search relevancy of your site will lead to more user traffic from search-engine queries (which can directly or indirectly increase the revenue you make through your site).

Measuring the SEO of your website with the SEO Toolkit

Last month I blogged about the free SEO Toolkit we’ve shipped that you can use to analyze your site for SEO correctness, and which provides detailed suggestions on any SEO issues it finds. 

I highly recommend downloading and using the tool against any public site you work on.  It makes it easy to spot SEO issues you might have in the site, and pinpoint ways to optimize it further.

ASP.NET 4 SEO Improvements

ASP.NET 4 includes a bunch of new runtime features that can help you to further optimize your site for SEO.  Some of these new features include:

  • New Page.MetaKeywords and Page.MetaDescription properties
  • New URL Routing support for ASP.NET Web Forms
  • New Response.RedirectPermanent() method

Below are details about how you can take advantage of them to further improve your search engine relevancy.

Page.MetaKeywords and Page.MetaDescription properties

One simple recommendation to improve the search relevancy of pages is to make sure you always output relevant “keywords” and “description” <meta> tags within the <head> section of your HTML.  For example:

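The original post embedded a screenshot here; a representative example of the markup it showed (keyword and description values are illustrative):

```html
<head>
    <meta name="keywords" content="ASP.NET, SEO, Web Forms, meta tags" />
    <meta name="description" content="An overview of the SEO improvements in ASP.NET 4." />
    <title>ASP.NET 4 SEO Improvements</title>
</head>
```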

One of the nice improvements with ASP.NET 4 Web Forms is the addition of two new properties to the Page class, MetaKeywords and MetaDescription, which make programmatically setting these values from your code-behind classes much easier and cleaner.

ASP.NET 4’s <head> server control now looks at these values and will use them when outputting the <head> section of pages.  This behavior is particularly useful for scenarios where you are using master-pages within your site – and the <head> section ends up being in a .master file that is separate from the .aspx file that contains the page specific content.  You can now set the new MetaKeywords and MetaDescription properties in the .aspx page and have their values automatically rendered by the <head> control within the master page.

Below is a simple code snippet that demonstrates setting these properties programmatically within a Page_Load() event handler:

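The original screenshot is missing here; a minimal sketch of what such a Page_Load() handler looks like (the keyword and description values are illustrative):

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    // These values flow into the <meta> tags rendered by the <head> server control.
    Page.MetaKeywords = "ASP.NET, SEO, Web Forms";
    Page.MetaDescription = "An overview of the SEO improvements in ASP.NET 4.";
}
```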

In addition to setting the Keywords and Description properties programmatically in your code-behind, you can also now declaratively set them within the @Page directive at the top of .aspx pages.  The below snippet demonstrates how to do this:

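The original screenshot is missing here; a sketch of the declarative form (the page, code-behind, and class names are hypothetical):

```aspx
<%@ Page Language="C#" CodeBehind="Products.aspx.cs" Inherits="MySite.Products"
    MetaKeywords="ASP.NET, SEO, Web Forms"
    MetaDescription="An overview of the SEO improvements in ASP.NET 4." %>
```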

As you’d probably expect, if you set the values programmatically they will override any values declaratively set in either the <head> section or via the @Page directive.

URL Routing with ASP.NET Web Forms

URL routing was a capability we first introduced with ASP.NET 3.5 SP1, and which is already used within ASP.NET MVC applications to expose clean, SEO-friendly “web 2.0” URLs.  URL routing lets you configure an application to accept request URLs that do not map to physical files. Instead, you can use routing to define URLs that are semantically meaningful to users and that can help with search-engine optimization (SEO).

For example, the URL for a traditional page that displays product categories might look like below:

http://www.mysite.com/products.aspx?category=software

Using the URL routing engine in ASP.NET 4 you can now configure the application to accept the following URL instead to render the same information:

http://www.mysite.com/products/software

With ASP.NET 4.0, URLs like above can now be mapped to both ASP.NET MVC Controller classes, as well as ASP.NET Web Forms based pages.  You can even have a single application that contains both Web Forms and MVC Controllers, and use a single set of routing rules to map URLs between them.
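As a sketch of how such a route might be registered for a Web Forms page (the route name and file name are illustrative; registration happens in Application_Start in Global.asax):

```csharp
using System.Web.Routing;

void Application_Start(object sender, EventArgs e)
{
    // Map the friendly URL pattern to the physical .aspx page.
    RouteTable.Routes.MapPageRoute(
        "ProductsRoute",          // route name
        "products/{category}",    // URL pattern, e.g. /products/software
        "~/products.aspx");       // page that handles the request
}
```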

Please read my previous URL Routing with ASP.NET 4 Web Forms blog post to learn more about how the new URL Routing features in ASP.NET 4 support Web Forms based pages.

Response.RedirectPermanent() Method

It is pretty common within web applications to move pages and other content around over time, which can lead to an accumulation of stale links in search engines.

In ASP.NET, developers have often handled requests to old URLs by using the Response.Redirect() method to programmatically forward a request to the new URL.  However, what many developers don’t realize is that the Response.Redirect() method issues an HTTP 302 Found (temporary redirect) response, which results in an extra HTTP round trip when users attempt to access the old URLs.  Search engines typically will not follow across multiple redirection hops – which means using a temporary redirect can negatively impact your page ranking.  You can use the SEO Toolkit to identify places within a site where you might have this issue.

ASP.NET 4 introduces a new Response.RedirectPermanent(string url) helper method that can be used to perform a redirect using an HTTP 301 (moved permanently) response.  This will cause search engines and other user agents that recognize permanent redirects to store and use the new URL that is associated with the content.  This will enable your content to be indexed and your search engine page ranking to improve.

Below is an example of using the new Response.RedirectPermanent() method to redirect to a specific URL:

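The original screenshot is missing here; the call is simply (the URL is illustrative):

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    // Issues an HTTP 301 Moved Permanently response pointing at the new location.
    Response.RedirectPermanent("/products/software");
}
```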

ASP.NET 4 also introduces new Response.RedirectToRoute(string routeName) and Response.RedirectToRoutePermanent(string routeName) helper methods that can be used to redirect users using either a temporary or permanent redirect using the URL routing engine.  The code snippets below demonstrate how to issue temporary and permanent redirects to named routes (that take a category parameter) registered with the URL routing system.

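The original screenshots are missing here; sketches of the two calls against a hypothetical route named “ProductsRoute”:

```csharp
// Temporary (HTTP 302) redirect to a named route:
Response.RedirectToRoute("ProductsRoute", new { category = "software" });

// Permanent (HTTP 301) redirect to the same route:
Response.RedirectToRoutePermanent("ProductsRoute", new { category = "software" });
```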

You can use the above routes and methods for both ASP.NET Web Forms and ASP.NET MVC based URLs.

Summary

ASP.NET 4 includes a bunch of feature improvements that make it easier to build public facing sites that have great SEO.  When combined with the SEO Toolkit, you should be able to use these features to increase user traffic to your site – and hopefully increase the direct or indirect revenue you make from them.

Hope this helps,

Scott

Azure Integration – Part 1: Creating an ESB on-ramp that receives from Azure’s AppFabric Service Bus

I am going to do a series of three blog posts (and accompanying videos) about how to integrate BizTalk and Azure. As you know, Azure is in the process of going live, and BizTalk can play a very compelling role bridging between on-premise and off-premise.

This post shows two key interactions with the Windows Azure platform AppFabric Service Bus:

  • receiving a message from the Service Bus through a BizTalk receive location
  • publishing to the Service Bus from an InfoPath form

It is the first point that I think illustrates what will become a VERY commonly used pattern. This sample is based on the Order Demo I created previously, and you can go watch those videos first to get a sense of what the demo is. It brings together BizTalk, SharePoint, ESB Toolkit, SQL Server Analysis Services, SQL Server Reporting Services, InfoPath and more, including showing how to use the Business Rules Engine for dynamic itinerary selection (a favorite pattern of mine). I blogged about that here. I like it because it’s a “SharePoint-based ESB-driven BizTalk-powered workflow”, really leveraging the power of the Microsoft stack.

The video for this blog post is available here at the MSDN BizTalk Developer site.

My goal in this post is to take an on-premises business process and make it externally available. Traditionally in a BizTalk environment that would typically mean exposing a schema or orchestration as a web service, and then reverse-proxying it so it can be reached from the outside world, probably with a load balancer in the mix.

There’s a new tool in our toolbox now: Windows Azure platform AppFabric, which includes the Service Bus and the Access Control Service. So, I set out to use that as an external relay and an entry point into my existing order process demo.

Before we integrate with the existing system, let’s get the client->ServiceBus->BizTalk flow working.

Once you’ve signed up for the Azure service and received your developer token, the first thing you need to do is install the Windows Azure platform AppFabric SDK in your BizTalk environment. One of the things this gives you is a set of new WCF bindings, which is what I used in conjunction with BizTalk’s WCF-Custom adapter. I created a new BizTalk project, a new receive port and receive location, as shown below:

 

 

I had many options available, but as I was going .NET (BizTalk) to .NET (Service Bus), I opted to use the Net TCP binding for greater efficiency (perhaps not so important for a demo, but, might as well do it right). Note also that transport-level security is enabled.

 

For credentials, I used “shared secret”. This means it will use the Access Control Service to authenticate me, and the issuerName and issuerSecret need to match what you have set up in the Azure portal.
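For reference, the endpoint behavior configuration looked roughly like the following sketch, based on the AppFabric SDK samples of that era (the behavior name and credential values are placeholders, and the exact schema may vary by SDK version):

```xml
<behaviors>
  <endpointBehaviors>
    <behavior name="sharedSecretCredentials">
      <!-- Authenticates against the Access Control Service using a shared secret. -->
      <transportClientEndpointBehavior credentialType="SharedSecret">
        <clientCredentials>
          <sharedSecret issuerName="YOUR_ISSUER_NAME" issuerSecret="YOUR_ISSUER_SECRET" />
        </clientCredentials>
      </transportClientEndpointBehavior>
    </behavior>
  </endpointBehaviors>
</behaviors>
```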

 

Normally, that would be it, but in my case there was a structural difference between what was being relayed to me from the Service Bus and what I needed for my existing process. So, I used the WCF adapter’s capability to reach into the message and pass on only what I had specified as the body, based on my XPath expression:

And that’s it! Enabling that receive location will create an endpoint “listener” (subscriber) in Azure’s Service Bus. Pretty remarkable, really, when you consider that I needed no magic at the BizTalk end other than a new WCF binding, and I can now subscribe to messages from the cloud. This is the sort of thing where, when you first see it work, you sit back in your chair, say “wow”, and then think about what the implications are.

Next step: we need a publisher. My existing demo already had an InfoPath form, so I wanted to use that. But how do you make InfoPath talk to the Windows Azure platform Service Bus? Now, I’m hardly an InfoPath expert, but I thought I’d give it a shot anyhow. Following the lead from samples in the Azure SDK, I knew I had to set up some contracts. I had to embed this right in the form code; perhaps there’s a better way (any InfoPath experts, I’d love to know), but my goal here was “just make it work”, knowing full well that I’m dealing with CTPs and that production-ready approaches would likely change anyhow. So, with that as my mindset, the following is what I ended up with. Notice the OrderRelayService class, and how that aligns with the URI in the BizTalk receive location.
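As a rough sketch (the actual contract from the demo isn’t reproduced here; the interface name, namespace, and operation are hypothetical), the embedded contract followed the usual WCF shape:

```csharp
using System.ServiceModel;

// One-way contract for relaying an order message through the Service Bus.
[ServiceContract(Namespace = "http://samples/orderrelay")]
public interface IOrderRelay
{
    [OperationContract(IsOneWay = true)]
    void SubmitOrder(string orderXml);
}
```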

 

Next, the “Save” button code just sets the order status, and then calls the SendForm method:

 

Lastly, we have the sending part. Yes, I have my credentials in the clear here, but once this post goes live those credentials will have changed 🙂

And that’s it: we can now use an InfoPath form to send messages to the Service Bus, and if the Access Control Service authorizes it, the message will be published. BizTalk will have an endpoint listening and will pick up that message.

The last thing we need to do now is hook this into the existing process flow. If you watched the original Order Demo video, you’ll know that I was creating orders in InfoPath, publishing to a SharePoint document list, and from there they were picked up by a BizTalk receive location that used the SharePoint adapter. As part of that, we use one of the standard ESB Toolkit pipelines to select and apply an itinerary, and we selected the itinerary based on a rules engine decision. So, how hard would it be to integrate that functionality into the messages we’re picking up from the Service Bus? Simple: we just add that pipeline to the receive location, and we’re done.

 

In this post, and the accompanying video, I have shown you how to take an on-premises BizTalk application and make it available externally via the Windows Azure platform Service Bus, just by adding another receive location. And a huge benefit you get here, compared with exposing your own endpoint, is the scalability of the cloud. You don’t need to worry about having servers in a DMZ, reverse proxies, load balancers, or any of that infrastructure “goo”.

At this point you may want to sit back, say “wow”, and think about what new possibilities this opens up and the solutions you can now create, combining on-premise BizTalk ESB (and perhaps some of your existing applications) with the Windows Azure platform.

Announcing Windows Azure platform commercial offer availability and updated AppFabric pricing

As part of today’s announcement about the commercial availability of Windows Azure platform offers, we are also introducing updated pricing for the Windows Azure platform AppFabric, which helps developers connect cloud and on-premises applications. Based on discussion and feedback from hundreds of customers during the CTP process, we have made the pricing simpler and more predictable. Service Bus will now be priced at $3.99 per Connection-month, and Access Control will be $1.99 per 100,000 Transactions.


Last November at the 2009 Professional Developers’ Conference, Microsoft announced a new product offering named AppFabric. AppFabric delivers services that enable developers to build and manage composite applications more easily for both server and cloud environments. The server components, known as Windows Server AppFabric, provide caching capabilities and workflow and service management capabilities for applications that run on-premises. The cloud components, known as Windows Azure platform AppFabric  (formerly called “.NET Services”),  include cloud-based services that help developers connect applications and services between Windows Azure, Windows Server and a number of other platforms. Windows Azure platform AppFabric is available as a production ready service today and includes two services: Service Bus, which makes it easier to connect applications and services in the cloud or on-premises, and Access Control, which provides federated authorization as a service.


SERVICE BUS PRICING


For Service Bus, the pricing meter has changed from “Message Operations” to “Connections”. In many cases, each application instance that connects to the Service Bus will require just one Connection, which means that predicting your usage is often as simple as counting the number of application instances or devices that you need to connect. Whether your application requires two-way messaging, event distribution, protocol tunneling, or another architecture, the Connection-based model is designed to suit your business needs. Connections are charged at a rate of $3.99 per Connection per month (plus applicable data transfer charges), and will be billed on a pay-as-you-go, consumptive basis. Alternatively, for customers who are able to forecast their needs in advance, we offer the option to purchase “Packs” of Connections: a pack of 5 Connections for $9.95, a pack of 25 for $49.75, a pack of 100 for $199.00, or a pack of 500 for $995.00 per month (plus data transfer). Connection Packs represent an effective rate of $1.99 per Connection-month. Pack sizes larger than 500 may be available on request. In our FAQ, we provide more details on how Connections are defined, measured, and billed.


We expect that most customers will find this new meter to be simpler and more predictable. While the former Message Operations meter was well suited for uses such as discrete transactional messaging, it has been more complicated in other cases. For example, what happens if you want to stream a large file, tunnel a protocol persistently, or deploy a lot of devices that all “listen” idly all day? Knowing what gets counted as a message, and predicting usage from day to day, was difficult.


It turns out that our customers have all of those uses in mind, which is a good thing. Customers have asked for simpler pricing, and we are now able to deliver this for a wider range of uses, including streamed data, protocol tunneling, and transactional messaging. This new pricing structure will help make it easier for you to understand and control when and how many Connections are being used. In addition, it provides increased predictability, because under normal circumstances, your total Connection cost will stay the same whether you  generate more or less “message” volume from one month to the next.


ACCESS CONTROL PRICING


For Access Control, the pricing meter has changed from “Message Operations” to “Transactions”. In practice, these meters are the same; only the name has been changed to reflect the Access Control function more accurately. As previously announced, token requests and service management operations will both be counted as Transactions, and charged at a rate of $1.99 per 100,000 Transactions.


AVAILABILITY


The Service Bus and Access Control are available today; however, to give customers more time to adjust to the new pricing structure, charges will not start to accrue until April 2010. Usage until that time will be free of charge, so we encourage you to upgrade your account and sign up for an offer today. Starting today, customers can already take advantage of the same support and benefits provided across the Windows Azure platform.  SLAs will take effect when charges begin to accrue in April 2010.  To help customers monitor and predict their usage before charges begin to accrue, Connection and Transaction usage reports will be made available soon on the developer portal at http://appfabric.azure.com/.


For more information, please visit our FAQ and pricing pages.

What’s a Correlation and why do I want to Initialize it?

In .NET 4.0, we have introduced a framework for correlation.  What do I mean by correlation?  I’m glad you asked.  In our vocabulary, a “correlation” is actually one of two things:

  1. A way of grouping messages together.  A classic example of this is sessions in WCF, or even more simply the relationship between a request message and its reply. 
  2. A way of mapping a piece of data to a service instance.  Let’s use sessions as an example here also, because it makes sense that I’d want all messages in a particular session (SessionId = 123) to go to the same instance (InstanceId = 456).  In this case, we’ve created an implicit mapping between SessionId = 123 and InstanceId = 456.

As you can see, these patterns are related, which is why we call them both “correlations”.  But sessions are inherently short-lived, tied to the lifetime of the channel.  What happens if my service is long-running and the client connections aren’t?  The world of workflow services reinforces the need for a broader correlation framework, which is why we’ve invested in this area in .NET 4.0.

There are two operations that can be performed on a correlation: it can be initialized, or it can be followed.  This terminology is not new; in fact, BizTalk has had correlation for many releases.  But what does “initializing” a correlation mean?  Another great question.  Well, it is simply creating this mapping between the data and the instance.

In addition, there are many types of correlation available in .NET 4.0, but let’s focus on this category of associating data with an instance.  When that data comes from within the message itself (e.g. in a message header or somewhere in the body of the message), we call that content-based correlation.

OK, too much theory and not enough application; let’s look at how this manifests in WF 4.0.  With every message sent or received from your workflow, you’ve got an opportunity to create an association between a piece of data in that message and the workflow instance.  That is, every messaging activity (Receive, SendReply, Send, and ReceiveReply) has a collection of CorrelationInitializers that let you create these associations.  Here’s what the dialog looks like in Visual Studio 2010:

As you can see, it’s been populated with a Key and a Query.  The Key is just an identifier used to differentiate queries, e.g. DocId = 123 should be different than CustomerId = 123.  The Query part is how we retrieve the data from the message; in this case, it’s an XPath expression that points within the body of the ReceiveDocument request message to the Document.Id value.  Some of the resulting correlation information is pushed into a CorrelationHandle variable (the DocIdCorrelation), which will be used by later activities to follow this correlation.
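For readers who prefer code to the designer, the same initializer can be expressed roughly like the sketch below (the activity setup, XPath expression, and namespace prefixes are hypothetical; the designer generates the equivalent for you):

```csharp
using System.Activities;
using System.ServiceModel.Activities;
using System.ServiceModel.Dispatcher;

var docIdCorrelation = new Variable<CorrelationHandle>("DocIdCorrelation");

var receiveDocument = new Receive
{
    OperationName = "ReceiveDocument",
    CorrelationInitializers =
    {
        new QueryCorrelationInitializer
        {
            CorrelationHandle = new InArgument<CorrelationHandle>(docIdCorrelation),
            MessageQuerySet = new MessageQuerySet
            {
                // Key "DocId" paired with an XPath query into the message body.
                { "DocId", new XPathMessageQuery("sm:body()/xg0:Document/xg0:Id") }
            }
        }
    }
};
```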

Now, you might be wondering: if I already have the Document.Id value in a workflow variable, why do I need this XPath expression in order to initialize my correlation?  That’s a great point.  In fact, we wrote another activity just for this purpose: the InitializeCorrelation activity.  As you would expect, the dialog looks very similar to what we just saw (Note: this dialog is different than what is present in Visual Studio 2010 Beta2):

This activity is particularly useful in scenarios where the data to initialize the correlation comes from a database, or if you need to create a composite correlation from multiple pieces of data, like a customer’s first and last name concatenated together.  For all of you BizTalk experts, this means no more “dummy send” pattern!  Hooray!

OK, you’ve initialized a correlation; now what?  Regardless of how you’ve initialized it, a correlation is only useful if it is followed.  A Receive activity follows a correlation by correlating on the same piece of data (in this case, a DocId) and specifying the same CorrelationHandle.  Imagine that the document approval process includes some opportunity to update the document before the official approval is given.  An UpdateDocument request message is sent, which contains the DocumentId value.  Here we specify the XPath expression to point to that particular piece of data in our incoming message.  We also set the CorrelatesWith property to the same CorrelationHandle we specified previously; this ensures that when the Receive activity starts executing and goes idle (sets up its bookmark), the WorkflowServiceHost knows what correlation this workflow is waiting on and can resume the correct instance when the message with the corresponding Document Id comes in.

  

And now you can consider yourself a content-based correlation expert!  No longer are you restricted to using a context-based binding for communicating with instances of your workflow services!  Now that’s freedom.  Give it a shot and let us know what you think!

WCF, Data Services & RIA Services Alignment Questions and Answers

Windows Communication Foundation (WCF) is the heart of the Microsoft Developer story around services. WCF is the unified programming model for working with services in the enterprise and across the Internet in .NET applications.


As highlighted in sessions at this year’s PDC, .NET Framework 4 makes it easier for developers to work with services from their managed code applications. With .NET 4, the WCF technology provides several different types of services to start from, based on your particular needs, but they all share the same underlying infrastructure. 


  • SOAP Services – Allows full flexibility for building operation-centric services. This includes industry-standard interoperability, as well as channel and host pluggability. Use this for operation-based services that need to do things such as interoperate with Java, be consumed by multiple clients, flow transactions, use message-based security, use transports or channels in addition to HTTP, or host in processes outside of IIS. WCF has supported SOAP services since its initial release in .NET Framework 3.0, and .NET 4 adds additional WS-* support and default bindings to make it easier than ever to build SOAP services using WCF.


  • WebHttp Services – Best when you are exposing operation-centric HTTP services to be deployed at web scale, or are building a RESTful service and want full control over the URI/format/protocol. Support for HTTP services was added to WCF in .NET Framework 3.5 and is generally referred to as “WCF REST”. A number of significant improvements, such as content negotiation, have been added in .NET Framework 4.


  • Data Services – Best when you are exposing your data model and associated logic through a RESTful interface; it includes a full implementation of the Open Data Protocol (OData) for .NET (more on this below) to make this process very easy. WCF Data Services was originally released as “ADO.NET Data Services” with .NET Framework 3.5 SP1.


  • Workflow Services – Best for long-running, durable operations, or where the specification and enforcement of operation sequencing is important. Workflow services are implemented using Windows Workflow Foundation (WF) activities that can make use of WCF for sending and receiving service requests.


  • RIA Services – Best for building an end-to-end Silverlight application. WCF RIA Services will be released with Silverlight 4, and is built upon WCF.


The important thing to remember is that each of the above service options builds on the WCF service model and should not be thought of as binary choices. These service options provide developers with a starting point that makes the job of accomplishing common tasks much easier; allowing any of the above services to be customized and extended using the power of the same WCF service model. We expect many applications will leverage multiple models for building out their applications and their developer’s knowledge will easily transfer from one model to the other, allowing developers to use one skill set across solution types.


So what does this mean to you as a developer? It means that WCF can be conceptualized using the following stack diagram:



Also of note in the above diagram is the appearance of the Open Data (OData) Protocol in the protocol box of the channel model. The OData Protocol made its initial splash as the protocol used by the Data Services programming model to present and interact with data. It is a protocol that defines how a client can query, navigate, and make changes to data in an open, RESTful manner. Data Services fully implements this protocol today, and RIA Services will (soon) expose an endpoint that supports this protocol.


Feedback from folks at PDC, after the sessions and at the booth, was very positive. The general comment was that this made service development in .NET much more straightforward, changing the decision from “which .NET technology do I use?” to “which part of WCF fits best with what I want to do?” This also provides more messaging consistency across products that share a common technology foundation. Over the coming years, this indicates the direction towards tighter technical consistency for the .NET developer.


That being said, we did receive a bunch of common questions at PDC; and we thought it was worth sharing them, along with the answers, here.


Why did we rebrand ADO.NET Data Services and .NET RIA Services at PDC? And how do they relate to WCF?


You told us that it was confusing having different communication technologies being built and shipped by different groups. In this case, ADO.NET Data Services (also known by the codename “Astoria”) fell under the ADO.NET brand and came out of the Data group, while .NET RIA Services was being built and shipped with Silverlight. At PDC, and with the release of .NET Framework 4 and Windows Server AppFabric, we are working to better align and simplify the developer experience on the .NET stack. This rebranding is the first step in bringing better alignment and simplification to the .NET communication stack that is WCF. Unifying these service offerings around WCF allows developers to use one set of skills when working with services: in the short term, this means being able to use these different service types with a dramatically shortened learning curve; in the long term, it means getting full reuse of skills and code as the entire stack is aligned across these technologies.


Technically speaking, these technologies share the same underlying programming model provided by WCF, but they are designed to meet different use cases. Data Services focuses on exposing your data model and associated logic through a RESTful interface, and RIA Services focuses on building end-to-end Silverlight applications.


Both WCF WebHTTP Services and WCF Data Services expose RESTful Services; how do they relate to one another?


The WebHTTP Service and Data Service technologies both share the same underlying programming model provided by WCF, but they are intended to meet different use cases. WebHTTP services are focused on operation-centric HTTP services to be deployed at web scale or when users are building a RESTful service and want full control over the URI/format/protocol.


Data Services on the other hand are focused on exposing your data model and associated logic through a RESTful interface. Today, Data Services is where you’ll find the richest implementation of the Open Data Protocol (OData), though in future releases of the .NET Framework, we plan on bringing OData support, as well as the declarative data model of Data Services, to WebHttp WCF Services.


How does a developer decide which service type a client should use to communicate with a service?


If you are writing a “RIA Services Enabled” Silverlight application, the RIA Services tooling will automatically create the right client components for your RIA Services Domain Service.


If you are writing a client to an existing service using Visual Studio, use “Add Service Reference” and the right client will be selected for you: a WCF Data Services client for accessing a WCF Data Service, or a WCF client for accessing a WCF SOAP service. You can also use “Add Service Reference” against a WCF RIA Services Domain Service.


Are we deprecating any of these service types {SOAP Services, RIA Services, Data Services or Workflow Services}?


No; all of these technologies are core to our vision moving forward, and each has a clearly defined use case for developers today. By bringing them all together within WCF, we provide developers with a clear place to start when building and using services. We also provide the developer with a collection of common concepts and extensibility points across these technologies, allowing you to be quickly productive using technologies with simple starting points, while providing a powerful foundation should you need the additional flexibility of the larger WCF stack.


If you are using one of these stacks today, we would like to reassure you that it still makes sense for you to continue to use what you are developing with. The stacks will get improved in the future, and the continuing unification story will provide you with additional benefits of the others, but none of these stacks are currently planned for deprecation.


If a key value proposition of RIA Services is its simplicity, how is this balanced with the flexibility of WCF?


WCF is a very powerful and flexible technology. What RIA Services provides is a prescriptive pattern that defaults many of those options for the best experience in the common cases. Early feedback on RIA Services has been that it makes it extremely easy to get an end-to-end solution up and running, by effectively hiding a lot of the WCF flexibility.  However, if need arises, developers are able to configure a RIA Services Domain Service like any WCF service with all the power and expressiveness they have come to expect from WCF. 

WCF extensions: The type ‘x’ registered for extension ‘y’ could not be loaded.

Quick note: if you configure a custom extension in WCF, don’t forget to add its assembly as a reference to your project. If you don’t, the configuration will parse just fine, but at runtime the following exception occurs:

System.Configuration.ConfigurationErrorsException: The type ‘x’ registered for extension ‘y’ could not be loaded. (config information, line z)

   at System.Configuration.BaseConfigurationRecord.EvaluateOne(String[] keys, SectionInput input, Boolean isTrusted, FactoryRecord factoryRecord, SectionRecord sectionRecord, Object parentResult)
   at System.Configuration.BaseConfigurationRecord.Evaluate(FactoryRecord factoryRecord, SectionRecord sectionRecord, Object parentResult, Boolean getLkg, Boolean getRuntimeObject, Object& result, Object& resultRuntimeObject)
   at System.Configuration.BaseConfigurationRecord.GetSectionRecursive(String configKey, Boolean getLkg, Boolean checkPermission, Boolean getRuntimeObject, Boolean requestIsHere, Object& result, Object& resultRuntimeObject)
   at System.Configuration.BaseConfigurationRecord.GetSectionRecursive(String configKey, Boolean getLkg, Boolean checkPermission, Boolean getRuntimeObject, Boolean requestIsHere, Object& result, Object& resultRuntimeObject)
   at System.Configuration.BaseConfigurationRecord.GetSectionRecursive(String configKey, Boolean getLkg, Boolean checkPermission, Boolean getRuntimeObject, Boolean requestIsHere, Object& result, Object& resultRuntimeObject)
   at System.Configuration.BaseConfigurationRecord.GetSection(String configKey)
   at System.Configuration.ClientConfigurationSystem.System.Configuration.Internal.IInternalConfigSystem.GetSection(String sectionName)
   at System.Configuration.ConfigurationManager.GetSection(String sectionName)
   at System.ServiceModel.Configuration.ConfigurationHelpers.UnsafeGetSectionFromConfigurationManager(String sectionPath)
   at System.ServiceModel.Configuration.ConfigurationHelpers.UnsafeGetAssociatedSection(ContextInformation evalContext, String sectionPath)
   at System.ServiceModel.Configuration.ServicesSection.UnsafeGetSection()
   at System.ServiceModel.Description.ConfigLoader.LookupService(String serviceConfigurationName)
   at System.ServiceModel.ServiceHostBase.ApplyConfiguration()
   at System.ServiceModel.ServiceHostBase.InitializeDescription(UriSchemeKeyedCollection baseAddresses)
   at System.ServiceModel.ServiceHost..ctor(Type serviceType, Uri[] baseAddresses)
   at Microsoft.Tools.SvcHost.ServiceHostHelper.CreateServiceHost(Type type, ServiceKind kind)
   at Microsoft.Tools.SvcHost.ServiceHostHelper.OpenService(ServiceInfo info)
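For reference, a registration that triggers this error looks like the sketch below (all names are placeholders): the type attribute must be an assembly-qualified name, and the named assembly must be resolvable at runtime, which is why the missing project reference bites.

```xml
<system.serviceModel>
  <extensions>
    <behaviorExtensions>
      <!-- 'type' must be assembly-qualified; the assembly must be loadable at runtime. -->
      <add name="myExtension"
           type="MyCompany.Wcf.MyExtensionElement, MyCompany.Wcf, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
    </behaviorExtensions>
  </extensions>
</system.serviceModel>
```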