BizTalk FTP Adapter – How to send an FTP message with a specified filename

There are many approaches to this; I’ll show the simple (or basic) way to do it: set the FILE.ReceivedFileName property and use the %SourceFileName% macro. In the orchestration Construct Message shape, add a Message Assignment shape, where you can set the FILE.ReceivedFileName property for your flat file message like so: OutputMsg(FILE.ReceivedFileName) = System.DateTime.Now.ToString("yyyyMMdd") + ".txt"; This […]

Hosting BizTalk Server on Azure VM Role

Another great announcement at PDC was the Virtual Machine Role feature.  This feature was added to Azure with the primary goal of moving existing applications to the cloud.

The feature allows us to upload our own VHD (virtual hard disk) with the Operating System on it (Windows 2008 R2 Enterprise).  This machine could have your application pre-packaged and installed.  After doing this, you are now able to spin up multiple instances of that machine.

BizTalk on VM Role?

Being a BizTalk Server architect that is highly interested in the Azure platform, I immediately thought about a scenario where I could have my own BizTalk Server in the cloud, on Azure.  But, knowing some of the limitations of the Azure platform, I knew I would have a lot of potential issues. 

I listed these issues and added the various workarounds or solutions for them.  This post is based on the PDC information and may contain incorrect information.  Consider it early thinking and considerations.

No MSDTC support with SQL Azure

  • Problem description
    • BizTalk Server relies heavily on SQL Server and uses MSDTC to keep transactional state across databases and with the adapters.
    • SQL Azure does not support distributed transactions and also introduces more latency to the database queries.
  • Solution
    • SQL Server will need to be installed on the VHD image locally
  • Consequences
    • It won’t be possible to build a multi-machine BizTalk Server Group through the VM role.

The OS image is not durable

  • Problem description
    • All changes that are being made after a Virtual Machine instance is started will be lost, once the instance shuts down or fails over.  (there is only one VHD, but multiple instances are possible -> concurrency issues)
    • This means all database state (configuration, tracking, process state) will be lost if an instance fails.
  • Consequences
    • It won’t be possible to have a stateful BizTalk Server configured or to host long running processes on a VM Role BizTalk Server
    • We will need to expose BizTalk Server capabilities as services to a stateful engine (Workflow?)

The Virtual Machine name will be assigned by the Windows Azure fabric controller

  • Problem description
    • Since it is possible to have multiple instances of a VM running, these instances will get a specific Computer Name assigned by the Azure Fabric controller. 
    • It is very hard to change the computer name of a BizTalk Server machine
  • Solution
    • We will need to automate the BizTalk Server configuration, using a silent install, when the VM initializes.
  • Consequences
    • One of the biggest pain points in setting up BizTalk Server in a VM role will be configuring the BizTalk Server instance on the fly as a startup task.
    • Starting / restarting a BizTalk VM Role instance will take a considerable amount of time
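As a sketch, such a startup task could drive BizTalk’s silent configuration from a previously exported answer file. The paths and file names below are assumptions, not a tested deployment script:

```bat
REM Hypothetical VM Role startup task: configure BizTalk silently from an
REM exported configuration file, logging to a file for diagnostics.
cd /d "C:\Program Files (x86)\Microsoft BizTalk Server 2010"
Configuration.exe /s "C:\setup\BizTalkConfig.xml" /l "C:\setup\config.log"
```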

Licenses are needed

  • Problem description
    • BizTalk Server and SQL Server licenses are needed for each instance that is running
  • Solution
    • Since everything will be installed on a single box, we could use a standard edition of BizTalk & SQL
  • Consequences
    • There is no big pricing advantage, except for the operational cost
    • Only 5 applications will be supported when using the Standard edition

General conclusion

If we succeed in setting up BizTalk Server on VM Role at all, it will be a BizTalk Server with the following limitations:

  • No support for long running transactions
  • Single box machine
  • Stateless BizTalk box

Considering that integration as a service is on the roadmap of Microsoft (see the session at PDC), we should only consider this as a temporary solution to have BizTalk Server configured on a VM Role.  If we do this, then we should just see it as a BizTalk Server that exposes its various ‘short running / isolated’ capabilities as a service (flat file parsing, transformation, pub/sub, connectivity, EDI).

Sam Vanhoutte, Codit

Using the SAML CredentialType to Authenticate to the Service Bus

The Windows Azure AppFabric Service Bus uses a class called TransportClientEndpointBehavior to specify the credentials for a particular endpoint.  There are four options available to you: Unauthenticated, SimpleWebToken, SharedSecret, and SAML.  For details, take a look at the CredentialType member.  The first three are pretty well described and documented – in fact, if you’ve spent […]


BizTalk – List of Macros

I’m always forgetting the list of macros that I use, which leads me to always be looking for them, so here’s a list of send macros that you can use:

  • %datetime% – Coordinated Universal Time (UTC) date time in the format YYYY-MM-DDThhmmss (for example, 1997-07-12T103508).
  • %datetime_bts2000% – UTC date time in the format […]

Streaming over HTTP with WCF

Recently I had a customer email me looking for information on how to send and receive large files with a WCF HTTP service. WCF supports streaming specifically for these types of scenarios.  Basically, with streaming support you can create a service operation which receives a stream as its incoming parameter and returns a stream as its return value (a few other types, like Message and anything implementing IXmlSerializable, are also supported). MSDN describes how streaming in WCF works here, and how to implement it here. There are a few gotchas, however, if you are dealing with sending large content with a service that is hosted in ASP.NET. If you scour the web you can find the answers, such as in the comments here.

In this post I’ll bring everything together and walk you through building a service exposed over HTTP which uses WCF streaming. I’ll also touch on supporting file uploads with ASP.NET MVC, something I am sure many are familiar with. The sample which we will discuss requires .NET 4.0 and ASP.NET MVC 3 RC. If you don’t have MVC you can skip right to the section “Enabling streaming in WCF”. Also, it’s very easy to adapt the code to work for Web Forms.

The scenario

For the sample we’re going to use a document store. To keep things simple and stay focused on the streaming, the store allows you to do two things: post documents and retrieve them over HTTP. Exposing it over HTTP means I can use multiple clients/devices to talk to the repo.  Here are more detailed requirements.

1. A user can POST documents to the repository, with the uri indicating the location where the document will be stored, using the uri format “http://localhost:8000/documents/{file}”. The file part can include folder information; for example, the following is a valid uri: “http://localhost:8000/documents/pictures/lighthouse.jpg”.

Below is what the full request looks like in Fiddler. Note: you’ll notice that the uri (and several below) has a “.” in it after localhost; this is a trick to get Fiddler to pick up the request, as I am running in Cassini (Visual Studio’s web server).

And the response

2. On a POST, the server creates the necessary folder structure to support the POST. This means that if the Pictures sub-folder does not exist, it will get created.

3. A user can GET a document from the repository using the same uri format for the POST. Below is the request for retrieving the document we just posted.

And the response: (to keep things simple I am not wrestling with setting the appropriate content type, so application/octet-stream is the default).

4. The last requirement was that I needed a simple front end for uploading files.  For this I decided to use ASP.NET MVC 3, which also gave me a chance to dip my feet in the new Razor syntax.

Creating the application

The first thing I did was create a new MVC 3 application in Visual Studio. I called the application “DocumentStore” and selected to create an empty application when prompted.

The next thing I did was add a HomeController and a new view, Index.cshtml, which I put in the Views/Home folder. The view, shown below, allows me to upload a document and specify the folder. I did tell you it was simple, right?

Next I added an Upload view, also in the Views/Home folder. That view shows the result of the upload.

Then in the HomeController I implemented the Upload action, which contains the logic for POSTing to the service. I used HttpClient from the REST Starter Kit to do the actual POST.

Here’s the flow

  • Grab the uploaded file and the path.
  • Create the uri for the resource I am going to post.
  • Post the resource using HttpClient.
  • Set the filename info to return in the view.
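Following that flow, the Upload action might be sketched as below. This is not the original code: the service base address, view plumbing, and the exact REST Starter Kit HttpClient calls are assumptions.

```csharp
// Hypothetical sketch of the Upload action (ASP.NET MVC 3).
[HttpPost]
public ActionResult Upload(HttpPostedFileBase file, string folder)
{
    // 1. Grab the uploaded file and the target folder.
    var fileName = Path.GetFileName(file.FileName);

    // 2. Create the uri for the resource I am going to post.
    var uri = new Uri(
        string.Format("http://localhost:8000/documents/{0}/{1}", folder, fileName));

    // 3. Post the resource using the REST Starter Kit HttpClient.
    using (var client = new Microsoft.Http.HttpClient())
    {
        client.Post(uri, Microsoft.Http.HttpContent.Create(file.InputStream));
    }

    // 4. Set the filename info to return in the view.
    ViewBag.FileName = folder + "/" + fileName;
    return View();
}
```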

Up until this point I’ve simply created the front end for uploading the document. Now we can get to the streaming part.

Enabling streaming over HTTP

Streaming is enabled in WCF through the transferMode property on the binding. In this case I set transferMode to “Streamed”, as I want to support streaming bi-directionally. The setting accepts other values which can be used to enable streaming on the request OR the response only. I also set the maxReceivedMessageSize to the maximum size that I want to allow for transfers. Finally, I set the maxBufferSize, though this is not required. WCF 4.0 introduced default bindings, which greatly simplifies this configuration. In this case, because I’m going to use WCF HTTP, I can use the standard webHttpBinding as shown.
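The binding configuration could look roughly like this; the size limits are illustrative values, not the original ones. Because of WCF 4.0 default bindings, the unnamed binding applies to every webHttpBinding endpoint:

```xml
<system.serviceModel>
  <bindings>
    <webHttpBinding>
      <!-- Unnamed binding = WCF 4.0 default for all webHttpBinding endpoints.
           1073741824 bytes = 1 GB maximum transfer size (illustrative). -->
      <binding transferMode="Streamed"
               maxReceivedMessageSize="1073741824"
               maxBufferSize="65536" />
    </webHttpBinding>
  </bindings>
</system.serviceModel>
```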

If you read the MSDN articles I cited above, you might think I’m done, however because I’m hosting our service on ASP.NET, there’s one more critical setting I need to mess with. This one definitely threw me for a loop.

ASP.NET doesn’t know about WCF, and it has its own limits for the request size, which I have now increased to match my limit. Without setting this, the request will be killed if it exceeds the limits. This setting and the previous settings have to be kept in sync: if not, either ASP.NET will kill the request, or WCF will return a Status 400: Invalid Request. Service tracing is definitely your friend in figuring out any issues on the WCF side.
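The ASP.NET side of that setting is the httpRuntime element; the value here is an illustrative sketch:

```xml
<system.web>
  <!-- maxRequestLength is in kilobytes: 1048576 KB = 1 GB, chosen to match
       the WCF maxReceivedMessageSize limit (illustrative value). -->
  <httpRuntime maxRequestLength="1048576" />
</system.web>
```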

Building the DocumentsResource

Now that streaming is enabled, I can build the resource. First I created a DocumentsResource class which I marked as a ServiceContract and configured to allow hosting in ASP.NET.
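A minimal sketch of such a class declaration might look like this; the class name and body are assumptions rather than the original code:

```csharp
// Sketch of the resource class, marked as a service contract and allowed
// to run in the ASP.NET compatibility pipeline.
[ServiceContract]
[AspNetCompatibilityRequirements(
    RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class DocumentsResource
{
    // GET and POST operations follow below.
}
```

ASP.NET compatibility also needs to be switched on in config, via `<serviceHostingEnvironment aspNetCompatibilityEnabled="true" />`.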

 

Registering the resource through routes (.NET 4.0 Goodness!)

Because I am using .NET 4.0, I can take advantage of the new ASP.NET routing integration to register my new resource without needing an .SVC file. To do this, I create a ServiceRoute in the Global.asax specifying the base uri and the service type (resource).
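A sketch of that registration, assuming the route is added in Application_Start alongside the MVC routes:

```csharp
// Global.asax.cs: register the resource at /documents without an .svc file.
// The ServiceRoute should be added before the MVC routes so WCF, not MVC,
// handles requests under /documents.
RouteTable.Routes.Add(new ServiceRoute(
    "documents",
    new WebServiceHostFactory(),
    typeof(DocumentsResource)));
```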

Document retrieval (GET)

Our document store has to allow retrieval of documents. Below is the implementation.

As I mentioned earlier, when you enable streaming your operations only work with a limited set of input / return values. In this case my GET method returns a stream which represents the document. Notice the method has no parameters. When building a streamed service, the method can only have a single parameter, which must be of type Stream or Message, or implement IXmlSerializable. This forces me to jump through a small hoop to get access to the path info for the document, but it’s manageable.

Here’s the logic in the method.

  • The method is annotated with a WebGetAttribute to indicate that it is a GET. The UriTemplate is set to “*” in order to match against any uri that matches the base uri for this service.
  • Next I grab all the uri segments and concatenate them to create the relative file location. For example, the uri “http://localhost:8000/documents/pictures/lighthouse.jpg” will result in “pictures\lighthouse.jpg”, assuming the base address is “documents”.
  • If a file path was actually passed in, I then create the full file path by concatenating the relative location with the server root and the relative path for the doc store.
  • Finally I create a stream pointing to the file on disk and I return it. The resulting stream is then returned back to the client.
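The steps above can be sketched as the following GET operation; the store root folder and the lack of error handling are assumptions for illustration:

```csharp
// Sketch of the GET operation.
[WebGet(UriTemplate = "*")]
public Stream GetFile()
{
    var match = WebOperationContext.Current.IncomingRequest.UriTemplateMatch;

    // Concatenate the wildcard uri segments into a relative path,
    // e.g. "pictures\lighthouse.jpg".
    var relativePath = string.Join(@"\", match.WildcardPathSegments);

    // Map it under the (hypothetical) document store root on the server.
    var fullPath = Path.Combine(
        HostingEnvironment.MapPath("~/App_Data/Documents"), relativePath);

    // Return a stream over the file; WCF streams it back to the client.
    return File.OpenRead(fullPath);
}
```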

Document upload (POST)

Whereas the GET method returns a stream, in the POST case we simply accept one. Here we also need to create the directory if it does not exist and write the document to disk, so there’s a bit more work.

Here’s the logic:

  • The method is annotated with WebInvoke with the method set to POST. Similar to the GET, the UriTemplate is set to “*” to match everything.
  • Next I grab on to the uri of the request so that I can return it later as part of the Location header in the response, as is appropriate according to the HTTP spec. It’s important that I grab it before I actually stream the file: I found that if I wait until after, the request is disposed and I get an exception.
  • Similar to the GET I also grab the path segments to create the file name and create the local path.
  • Next I check to see if the directory that was requested actually exists. If it does not, I create it.
  • I create a stream to write the file to disc and use the convenient CopyTo method to copy the incoming stream to it.
  • Finally I set the status code to Created and set the location header.
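Those steps can be sketched as the following POST operation; again, the store root path is an assumption, not the original code:

```csharp
// Sketch of the POST operation.
[WebInvoke(Method = "POST", UriTemplate = "*")]
public void AddFile(Stream document)
{
    var context = WebOperationContext.Current;
    var match = context.IncomingRequest.UriTemplateMatch;

    // Grab the request uri *before* reading the stream; afterwards the
    // request may already be disposed.
    var requestUri = match.RequestUri;

    // Build the local path from the wildcard uri segments.
    var relativePath = string.Join(@"\", match.WildcardPathSegments);
    var fullPath = Path.Combine(
        HostingEnvironment.MapPath("~/App_Data/Documents"), relativePath);

    // Create the folder structure if it does not exist yet.
    Directory.CreateDirectory(Path.GetDirectoryName(fullPath));

    // Copy the incoming stream to disk.
    using (var file = File.Create(fullPath))
    {
        document.CopyTo(file);
    }

    // 201 Created, plus a Location header pointing at the new resource.
    context.OutgoingResponse.StatusCode = HttpStatusCode.Created;
    context.OutgoingResponse.Location = requestUri.AbsoluteUri;
}
```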

Testing out the app

That’s it, the app is done. Now all there is to do is launch it and see the streaming in action as well.

Like I said, simple UI. Go press “Browse” and grab an image or a video. In this case I’ll grab the sample Wildlife.wmv included in Windows 7.

Next enter “Videos” for the folder. You can also put a deep hierarchy if you want like Foo/Bar/Baz.

Select Upload and you will see the following once it completes.

Now that the file is uploaded, it can be retrieved from the browser.

Which in this case will cause Windows Media Player to launch and show our video. w00t!

Now that our document store is in place, we can use different clients to access it. In this example we used a browser / MVC application to keep things simple. The same store can also be accessed from a mobile device like an iPad, a Silverlight application, or a desktop application. All it needs is a way to talk HTTP.

Why not just MVC?

I know some folks are going to see this and say “why didn’t you just use MVC?”. I could have accomplished the upload / retrieval completely through MVC. The main point of this post, however, was to show how you can use streaming with WCF. Let’s move on.

Get the code here: http://cid-f8b2fd72406fb218.office.live.com/self.aspx/blog/DocumentStore.zip

FREE 30-day Azure developer accounts! Hurry!

If you’ve worked with Azure you know that the SDK provides a simulation environment that lets you do a lot of local Azure development without even needing an Azure account. You can even go hybrid (something I like to do), where you’re running code locally but using Azure storage, queues, and SQL Azure.

However, wouldn’t it be nice if you had a credit-card free way that you could get a developer account to really push stuff up to Azure, for FREE? Now you can! I’m not sure how long this will last, or if there’s a limit on how many will be issued, but right now, you can get (FOR FREE):

The Windows Azure platform 30 day pass includes the following resources:

  • Windows Azure
    • 4 small compute instances
    • 3GB of storage
    • 250,000 storage transactions

  • SQL Azure
    • Two 1GB Web Edition databases

  • AppFabric
    • 100,000 Access Control transactions
    • 2 Service Bus connections

  • Data Transfers (per region)
    • 3 GB in
    • 3 GB out

Important: At the end of the 30 days all of your data will be erased. Please ensure you move your application and data to another Windows Azure Platform offer.

You may review the Windows Azure platform 30 day pass privacy policy here.

 

Sign up at:

http://www.windowsazurepass.com/?campid=C4624604-86EE-DF11-B2EA-001F29C6FB82

promo code DP001

 

Enjoy!

ACSUG December Meeting: Microsoft BizTalk Server 2010 Demo with Thiago Almeida

The next Auckland Connected Systems User Group meeting is set for Thursday the 2nd of December at 5:30pm. Presentation starts at 6:00pm. It’s free and we will have the usual pizza, drinks and giveaways. Register here. Microsoft BizTalk Server 2010 Demo Presentation BizTalk Server is Microsoft’s premier server for building business process and integration solutions. […]

Getting started with WCF Discovery

One of the cool new features in WCF 4 is the support for WS-Discovery.

 

So what makes WCF Discovery so cool?

Normally when a client application wants to connect to a WCF service, it has to know several things about the service, like the operations it supports and where it is located. Knowing the operations at design time makes sense; after all, this is functionality you are going to call. But the address makes less sense: that is a deployment thing and something we don’t really want to be concerned with at design time. So we can specify the service address in the configuration file of the client app and change it as needed. However, that does mean that whenever we move the WCF service to a different location, for example a new server, we need to go into each client’s configuration file and update it with the new address. Not so nice, and error prone, as we are manually editing a bunch of XML.

Decoupling the client from the actual address of the WCF service is what WCF Discovery does, meaning we can move our WCF service to a new machine and each client will automatically start using the new address. Now how cool is that?

 

The two modes of WCF Discovery

There are two different ways of using WCF Discovery. The first is ad hoc, which is really simple because there is no central server involved; the second is managed, which is more capable but also depends on a central service being available. In this post I will be taking a closer look at the simple ad hoc mode.

 

The beauty of ad hoc WCF Discovery

The really nice thing about ad hoc WCF Discovery is that a service can just add a discovery endpoint, and the client can check for it whenever it needs to call the service. This is done over the UDP protocol through a broadcast message, meaning that a service will be discoverable as long as it is on the same subnet. In lots of cases where a service is used on a LAN, this will just work without any extra configuration of a central discovery service. Nice and simple, just the way I like it.

 

Adding ad hoc WCF Discovery to a service

To demonstrate how to use ad hoc discovery, I wrote a really simple service and self-host it using the following code:

using System;
using System.ServiceModel;
 
namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var host = new ServiceHost(typeof(MyService)))
            {
                host.Open();
                Console.WriteLine("The host is listening at:");
                foreach (var item in host.Description.Endpoints)
                {
                    Console.WriteLine("{0}\n\t{1}", item.Address, item.Binding);
                }
                Console.WriteLine();
                Console.ReadLine();
            }
        }
    }
 
    [ServiceContract]
    interface IMyService
    {
        [OperationContract]
        string GetData(int value);
    }
 
 
    class MyService : IMyService
    {
        public string GetData(int value)
        {
            var result = string.Format("You entered \"{0}\" via \"{1}\"", 
                value, 
                OperationContext.Current.RequestContext.RequestMessage.Headers.To);
            
            Console.WriteLine(result);
            return result;
        }
    }
}

Note there is nothing about WCF Discovery here at all, it is just a standard WCF service.

 

This WCF service has a configuration file that is almost standard. It exposes the service using three different endpoints, each with a different binding. The WCF Discovery extensions are added by adding an extra endpoint of kind udpDiscoveryEndpoint and the <serviceDiscovery/> element in the service behavior section.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.serviceModel>
    <services>
      <service name="ConsoleApplication1.MyService">
        <host>
          <baseAddresses>
            <add baseAddress="http://localhost:9000"/>
            <add baseAddress="net.tcp://localhost:9001/"/>
          </baseAddresses>
        </host>
        <endpoint address="basic"
                  binding="basicHttpBinding"
                  contract="ConsoleApplication1.IMyService" />
        <endpoint address="ws"
                  binding="wsHttpBinding"
                  contract="ConsoleApplication1.IMyService" />
        <endpoint address="tcp"
                  binding="netTcpBinding"
                  contract="ConsoleApplication1.IMyService" />
        <endpoint kind="udpDiscoveryEndpoint" />
      </service>
    </services>
    <behaviors>
      <serviceBehaviors>
        <behavior>
          <serviceDiscovery/>
          <serviceMetadata httpGetEnabled="true"/>
        </behavior>
      </serviceBehaviors>
    </behaviors>
  </system.serviceModel>
</configuration>

 

If we run this service it prints a message to the console for each endpoint it is listening on.

Most of these endpoints are very standard; it’s the last one, urn:docs-oasis-open-org:ws-dd:ns:discovery:2009:01, that stands out: that is the UDP discovery endpoint.

 

Using this discovery enabled WCF service from a client application

The first option on the client is to use the DiscoveryClient and have it determine all possible endpoints for the service. This type lives in the System.ServiceModel.Discovery.dll assembly, so you will need to add a reference to that first. Using the DiscoveryClient and a FindCriteria, we can now search for published endpoints and use one like this:

var client = new DiscoveryClient(new UdpDiscoveryEndpoint());
var criteria = new FindCriteria(typeof(IMyService));
var findResult = client.Find(criteria);
 
foreach (var item in findResult.Endpoints)
{
    Console.WriteLine(item.Address);
}
Console.WriteLine();
var address = findResult.Endpoints.First(ep => ep.Address.Uri.Scheme == "http").Address;
Console.WriteLine(address);
var factory = new ChannelFactory<IMyService>(new BasicHttpBinding(), address);
var proxy = factory.CreateChannel();
Console.WriteLine(proxy.GetData(42));
((ICommunicationObject)proxy).Close();

This code will work just fine, but there are two problems with it. First of all, the client.Find() call will take 20 seconds. The reason is that it wants to find as many endpoints as it can, and the default time allowed is 20 seconds. You can set this to a shorter time, but then you run the risk of making the timeout too short and the service not being discovered. The other option is to set criteria.MaxResults to one, so the client.Find() call returns as soon as a single endpoint is found.
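Both knobs live on FindCriteria; as a sketch, with illustrative values:

```csharp
// Stop searching as soon as one endpoint is found, and cap the wait time.
var criteria = new FindCriteria(typeof(IMyService))
{
    MaxResults = 1,
    Duration = TimeSpan.FromSeconds(5)   // the default is 20 seconds
};
var findResult = client.Find(criteria);
```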

The bigger problem here is the binding. I am exposing a BasicHttpBinding, a WSHttpBinding and a NetTcpBinding from my service and the DiscoveryClient doesn’t tell me which binding needs to be used with which address.

 

The DynamicEndpoint to the rescue

There is a different way of using WCF Discovery from the client that is far easier if all we want to do is call a specific service and we don’t really care about all the addresses exposed: the DynamicEndpoint. Using the DynamicEndpoint we can specify which service we want to call and which binding we want to use, and it will just do it for us if there is an endpoint for that binding. The following code calls the service with each of the three bindings supported.

foreach (var binding in new Binding[] { new BasicHttpBinding(), new WSHttpBinding(), new NetTcpBinding() })
{
    var endpoint = new DynamicEndpoint(ContractDescription.GetContract(typeof(IMyService)), binding);
    var factory = new ChannelFactory<IMyService>(endpoint);
    var proxy = factory.CreateChannel();
    Console.WriteLine(proxy.GetData(Environment.TickCount));
    ((ICommunicationObject)proxy).Close();
}

This results in the following output where we can see the different endpoints being used.

 

What happens if there is no endpoint with the required binding varies depending on which binding is missing. If I remove the BasicHttpBinding from the service and run the client, it will find the WSHttpBinding instead and try to call that, resulting in a System.ServiceModel.ProtocolException with the message “Content Type text/xml; charset=utf-8 was not supported by service http://localhost:9000/ws.  The client and service bindings may be mismatched.”. The fact that DynamicEndpoint tries to use the address with a different HTTP binding sounds like a bug in WCF Discovery to me. On the other hand, removing the WSHttpBinding from the service and running the client produces a much better System.ServiceModel.EndpointNotFoundException with the message “2 endpoint(s) were discovered, but the client could not create or open the channel with any of the discovered endpoints.”.

 

Conclusion

WCF Discovery is a nice way to decouple clients from the physical address of a service, giving us an extra bit of flexibility when deploying our WCF services.

 

Enjoy!

www.TheProblemSolver.nl

Wiki.WindowsWorkflowFoundation.eu