BizTalk 2010 on Windows 2008 64-bit: watch out for default installation challenges

We are in the process of migrating some of our existing solutions from BizTalk 2006 to BizTalk 2010. As part of the process I was prototyping a web service call with some complex data models using the WCF-BasicHttp adapter.

I was consistently getting the following error message despite various attempts at a fix: restarting host instances, redeploying the whole solution, uninstalling and reinstalling the assembly in the GAC, and so on.

xlang/s engine event log entry: Uncaught exception (see the ‘inner exception’ below) has suspended an instance of service

……

Exception type: TargetInvocationException

Source: mscorlib

Target Site: System.Object _InvokeMethodFast(System.IRuntimeMethodInfo, System.Object, System.Object[], System.SignatureStruct ByRef, System.Reflection.MethodAttributes, System.RuntimeType)

The following is a stack trace that identifies the location where the exception occured

……

Exception type: TypeInitializationException

Source: EA.BizTalk.Framework.WCF.Orchestrations

Target Site: Microsoft.XLANGs.BaseTypes.SchemaBase get_PartSchema()

The following is a stack trace that identifies the location where the exception occured

……

Loading this assembly would produce a different grant set from other instances. (Exception from HRESULT: 0x80131401)

……

Exception type: FileLoadException

Source: EA.BizTalk.Framework.WCF.Orchestrations

Target Site: Void .cctor()

Some research took me to the article http://support.microsoft.com/kb/2282372, which explains the error “Loading this assembly would produce a different grant set from other instances.” Even though the KB article is not directly related to BizTalk, it gave me the clue that the problem was something to do with 64-bit processes and the version of the .NET Framework.

I checked the settings of the default BizTalk host configuration and, to my surprise, it was configured as 32-bit, as shown in the picture below.

Similarly, when you create a new BizTalk host, the “32-bit only” option is checked by default.

Solution:

So the solution is to create a new 64-bit host (simply uncheck the “32-bit only” option), create the required host instances, and then configure your orchestration to run inside the newly created 64-bit host as shown below.
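If you have a lot of hosts to audit, the bitness flag is also visible programmatically. This wasn’t part of my original fix, just a convenience: a minimal sketch that lists each host and its “32-bit only” setting via the BizTalk WMI provider, assuming a reference to System.Management on a machine with BizTalk installed.

using System;
using System.Management;

class HostBitnessCheck
{
    static void Main()
    {
        // BizTalk exposes host settings through WMI in the MicrosoftBizTalkServer namespace.
        var scope = new ManagementScope(@"root\MicrosoftBizTalkServer");
        var query = new ObjectQuery("SELECT Name, IsHost32BitOnly FROM MSBTS_HostSetting");

        using (var searcher = new ManagementObjectSearcher(scope, query))
        {
            foreach (ManagementObject host in searcher.Get())
            {
                // IsHost32BitOnly == true means instances of this host run as 32-bit processes.
                Console.WriteLine("{0}: {1}",
                    host["Name"],
                    (bool)host["IsHost32BitOnly"] ? "32-bit only" : "64-bit capable");
            }
        }
    }
}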

Once the orchestration issue is sorted, you’ll hit something similar on the WCF-BasicHttp send port, as shown in the figure below.

There was a failure executing the response(receive) pipeline: “Microsoft.BizTalk.DefaultPipelines.XMLReceive, Microsoft.BizTalk.DefaultPipelines, Version=3.0.1.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35” Source: “XML disassembler” Send Port: “WcfSendPort_CustomerService_BasicHttpBinding_ICustomerService” URI: “http://localhost:7684/CustomerService.svc” Reason: Loading this assembly would produce a different grant set from other instances. (Exception from HRESULT: 0x80131401)

The solution is the same; this time you need to create a new 64-bit send handler for the WCF adapter you are using and reconfigure your send port to use it, as shown in the pictures below.

To the Cloud……

I have just returned from the MVP summit, held in Redmond each year. There were some highlights and some lowlights, as with every summit. The highlights are always all the cool new stuff, most of which I’m not allowed to talk about publicly, which is great (NOT) – it does mean I can’t say much on the blog. I can only say what has already been announced publicly at PDC: Microsoft is doing the cloud thing, and the next thing off the conveyor belt is composite apps. What’s in the box? Wait and see. I will say it’s interesting, and even more interesting when you add in the comments of the MVPs present when they told us how interesting it was.

We are talking about the cloud here, and of course I want to run apps on it. I want to run workflow in the cloud – I’ve wanted this since they had it a few years back, before they took it away because it was so limited. It makes sense, in the right scenario.

I also want to access my applications inside the organisation (on premise) and provide a rich integration layer over them, enabling my cloud apps to communicate with my on-premise systems.

This kind of application is called a hybrid model, and it is, or soon will be, very common. I would like to use the same technology for integration and workflow in the cloud as I use to access my on-premise applications. Currently I use BizTalk for my on-premise applications and then have to write something different to enable my cloud applications to do the same – hence the term “hybrid”: it uses a bit of both. This is currently possible by various means: the Service Bus in Windows Azure, plus the new bits that enable BizTalk to expose a port or orchestration on the Service Bus to accept connections. I can then establish a communications pattern into my organisation’s “legacy” on-premise applications.

My problem with this approach is detailed in my recent webcast at http://www.cloudcasts.net/Default.aspx?category=BizTalk+Light+and+Easy: my on-premise middleware still needs to exist, and it needs to scale in line with my cloud system. The ratio isn’t 1:1 – more like 2-3 cloud instances to 1 on-premise – but it does need to scale, so I still need to invest in on-premise hardware. I don’t want to have to scale that myself; I want to leverage the cloud to scale on demand, and scale back when I don’t need it. That elasticity is one of the key selling factors for using the cloud.

Whilst I cannot put everything in the cloud – that’s never going to happen – I want the option of scaling out to the cloud and then scaling back to on premise when I have low load levels, thereby justifying my on-premise costs.

I do have this for websites in the cloud; it’s a little more difficult for an integration platform that needs to access legacy systems and is written in a non-cloud-friendly way.

I would love to provide this in the cloud, but it’s one ask that is some time away, whichever provider you look at. My view is that whoever cracks this will dominate the cloud market.

The rest of the detail will come. I don’t know when and I can’t say how, but it’ll come – wait and see, with more announcements on the way. It’s how you leverage the cloud to work for you that will make the real difference in whether you adopt a cloud/hybrid model or not.

Integration is a hard enough sell on its own; adding the cloud to the mix makes it even harder. I’m not the only one out there trying this on: customers are not buying yet, and the amount of convincing, assurance and explaining needed is staggering.

Connecting the device to the cloud: iPhone & Azure

My blog has been quiet recently, mainly because I’ve been spending a bunch of time putting together an iPhone app which talks to a set of RESTful WCF services hosted in Azure, backing onto SQL Azure for storage. This post is a technical walkthrough of that architecture and some of the learning experiences, but after this it will be back to normal. I have a couple of nice open-source projects which are coming soon, including a log4net appender which writes to Event Tracing for Windows (ETW), and a plug-in caching behavior for WCF services.

Anyway, the iPhone app: “iFormula1 2011”. Background – it’s a Fantasy Formula 1 game, where you put together your own F1 team from the real Formula 1 paddock. Then as the season progresses, your team’s points go up in line with the race results, and you watch and see how badly you do compared to other players in the main league, and in the private leagues you can set up. If that’s up your street, check out the app site: http://bit.ly/iF1-2011. If not, the rest of this post is technical. </plug>

Technically, there are some pretty standard disconnected rich-client patterns in use, but the mixture of platforms made for some interesting challenges. The majority of the approaches are equally applicable for client solutions exposing system functionality within the Enterprise. Almost all the pain points were around the learning curve for Objective-C and the iOS framework. The Azure offerings for WCF service hosting and SQL storage work superbly, URL rewriting enables real REST leverage, and switching between on-premise dev boxes and the cloud is seamless (albeit slow – publishing to Azure from Visual Studio is a 20-minute cycle, so you need to manage your release plan).

Challenges & Approaches

Reference data updates

The app is bundled with a baseline set of data – all the drivers and team options – but the attributes are not static. After each race, the total number of points for each option needs to be updated. Less frequently, the options themselves may change – e.g. Robert Kubica is injured and is replaced with Nick Heidfeld; HRT take forever to sign up their second driver so there’s a “TBC” to replace. These updates apply to all users, so they need to be distributed as efficiently as possible.

REST supports efficient loading, allowing you to leverage the caching of the Internet without any effort other than some response headers. See Udi Dahan’s excellent post: Building Super-Scalable Web Systems with REST. The key is structuring your service such that many clients use the same request URL. You only want to get deltas, so you need to record a “last updated” stamp on the client and send that to the server to get changes since that stamp. A date/time stamp is the obvious choice, but it means clients will all be sending different requests, each with their own timestamp, e.g. x/y/z/lastUpdated/20110318T013002168 so they won’t share a URL and you won’t benefit from HTTP caching. If you use an integer data version instead, then the URL becomes x/y/z/dataVersion/2, so all clients on the same version will use the same request. The tradeoff is the data updates on the server need to increment the version correctly.
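To make that concrete, here’s a rough sketch of what the versioned URL looks like in WCF. The names (IReferenceDataService, DriverUpdate, GetChangesSince) are made up for illustration, not the actual iF1 contracts; the point is that the integer version lives in the UriTemplate, so every client on version 2 issues the identical request:

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;

[DataContract]
public class DriverUpdate
{
    [DataMember] public int DriverId { get; set; }
    [DataMember] public int TotalPoints { get; set; }
}

[ServiceContract]
public interface IReferenceDataService
{
    // All clients on data version 2 request ".../reference/dataVersion/2",
    // so shared HTTP caches can answer most of them without hitting the service.
    // (WCF requires UriTemplate path variables to be strings.)
    [OperationContract]
    [WebGet(UriTemplate = "reference/dataVersion/{version}",
        ResponseFormat = WebMessageFormat.Json)]
    List<DriverUpdate> GetChangesSince(string version);
}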

Transactional data updates

The ranking for a fantasy F1 team will change whenever anyone signs up a new team, or joins a league, so it’s not as predictable as reference data. The stats also get updated after every race, and the result of that update is specific to an individual user.

As with the reference data load, it’s useful to try and maximise the number of shared URLs. So I could use one method to get the user’s F1 team and nest all their leagues in the response, but that’s not reusable. Instead I have a very slim “TeamSnapshot” service which contains data unique to one user, and a “LeagueSnapshot” service which contains data shared across a league. The client then makes multiple calls – it becomes chattier (which I see as a benefit, given the potential for connection loss), but a client is more likely to request a URL which has already been requested by another user in the same league.
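Sketching that split (the shapes here are hypothetical beyond the TeamSnapshot/LeagueSnapshot names): the team call is keyed per user, while the league call is keyed only by league, so one member’s request can populate the cache for everyone else in that league.

using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;

[DataContract]
public class TeamSnapshot
{
    [DataMember] public int Rank { get; set; }
    [DataMember] public int Points { get; set; }
}

[DataContract]
public class LeagueSnapshot
{
    [DataMember] public string Name { get; set; }
}

[ServiceContract]
public interface ISnapshotService
{
    // Unique to one user - low cache value, so keep the response as slim as possible.
    [OperationContract]
    [WebGet(UriTemplate = "team/{teamId}", ResponseFormat = WebMessageFormat.Json)]
    TeamSnapshot GetTeamSnapshot(string teamId);

    // Shared by every member of the league - high cache value.
    [OperationContract]
    [WebGet(UriTemplate = "league/{leagueId}", ResponseFormat = WebMessageFormat.Json)]
    LeagueSnapshot GetLeagueSnapshot(string leagueId);
}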

Encryption

There is some sensitive data which needs to go across the wire (users select a username and password), so we need to be able to round-trip encrypt and decrypt between client and server.

Given the standardised nature of encryption, this should be a non-issue, but cross-platform that’s not necessarily the case. With iOS and .NET, AES-256 is provided in both frameworks, so it’s a matter of carefully coding the encryptors so all the parameters (padding, key size etc.) match. DotMac has a good starting point – AES interoperability between .NET and iPhone. I started by working this up into simple iPhone and Windows clients, so you can quickly see that ciphers match on both sides – I’ll put that work up on github soon.
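Here’s a sketch of the .NET half of that round trip, assuming the key and IV are agreed out-of-band and the iOS side uses CommonCrypto with matching settings. Every parameter – key size, mode, padding, IV, text encoding – has to line up exactly, or decryption produces garbage:

using System.Security.Cryptography;
using System.Text;

public static class AesInterop
{
    // AES-256, CBC mode, PKCS7 padding: a common CommonCrypto-compatible setup.
    public static byte[] Encrypt(string plainText, byte[] key, byte[] iv)
    {
        using (var aes = Aes.Create())
        {
            aes.KeySize = 256;               // key must be 32 bytes
            aes.Mode = CipherMode.CBC;
            aes.Padding = PaddingMode.PKCS7; // kCCOptionPKCS7Padding on iOS
            aes.Key = key;
            aes.IV = iv;                     // 16 bytes, shared with the client

            using (var encryptor = aes.CreateEncryptor())
            {
                byte[] plainBytes = Encoding.UTF8.GetBytes(plainText);
                return encryptor.TransformFinalBlock(plainBytes, 0, plainBytes.Length);
            }
        }
    }

    public static string Decrypt(byte[] cipherBytes, byte[] key, byte[] iv)
    {
        using (var aes = Aes.Create())
        {
            aes.KeySize = 256;
            aes.Mode = CipherMode.CBC;
            aes.Padding = PaddingMode.PKCS7;
            aes.Key = key;
            aes.IV = iv;

            using (var decryptor = aes.CreateDecryptor())
            {
                byte[] plainBytes = decryptor.TransformFinalBlock(cipherBytes, 0, cipherBytes.Length);
                return Encoding.UTF8.GetString(plainBytes);
            }
        }
    }
}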

Securing the server

Although there’s only an iPhone client so far, the services are built with the potential for multiple client types. But they are RESTful services which expose CRUD operations to the Internet at large, so we need a simple and efficient way of identifying a type of client and knowing that it is valid to process their request.

This is an interesting one. I’ve adopted the API key approach that the social networking sites use – you register your app, they give you a unique key, and you send the key in every request. If you own all the clients, then you own all the keys and it’s a simple matter to append one to all the URLs, and you have the option to kill all instances of a client server-side if you need to. It’s a pretty flimsy option security-wise, but assuming your service provider is securing the infrastructure, and your concern is making sure that expired clients don’t continue to work, this is fine.
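In WCF REST terms the check can be as small as this sketch (the query parameter name apiKey and the key value are hypothetical – whatever you issue per client build):

using System.Collections.Generic;
using System.Net;
using System.ServiceModel.Web;

public static class ApiKeyGuard
{
    // Keys issued per client build; remove one here to kill that client server-side.
    private static readonly HashSet<string> ValidKeys =
        new HashSet<string> { "iphone-client-2011" };

    // Call at the top of each service operation.
    public static void Validate()
    {
        string apiKey = WebOperationContext.Current
            .IncomingRequest.UriTemplateMatch.QueryParameters["apiKey"];

        if (apiKey == null || !ValidKeys.Contains(apiKey))
        {
            // Reject before doing any work - expired or unknown clients stop here.
            throw new WebFaultException(HttpStatusCode.Forbidden);
        }
    }
}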

Dealing with network issues

Clients could be disconnected, connected on slow or unreliable networks, the service could be down, the user could terminate the client after receiving an update but before the changes are persisted. All these permutations need to be catered for.

This will depend a lot on what you’re doing, but for my domain I’ve mitigated this by making the client deliberately chatty, and making the service responses as small as possible. This way we’re dealing with a large number of small network transfers, rather than a small number of large network transfers. The chunky approach which is typically preferred in enterprise design has greater exposure to losing a connection, and more complex compensation needed for interruptions.

JSON is to be preferred over XML, as you immediately halve the overhead of the response, and the lack of a strong (XSD-style) contract may be beneficial if you have clients on different versions – they need only extract the attributes they’re interested in from the response. Attribute names are worth considering, too – in the iF1 app most of the responses consist of a handful of strings and integers. Verbose attribute names can easily mean the JSON overhead is 3x or 4x the size of the actual data you want to transfer, so there’s a balance between readability and transfer size. JSON is native in WCF with the DataContractJsonSerializer, and there’s a tried-and-trusted open source JSON Framework for iOS on Google Code. Both are compliant with the JSON standard, so interop is painless.
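As an illustration of that name-length tradeoff (a hypothetical contract, not the actual iF1 one): with DataContractJsonSerializer you can keep readable C# property names and still put terse names on the wire.

using System.Runtime.Serialization;

// Serializes as {"d":5,"p":86} instead of {"DriverId":5,"TotalPoints":86} -
// the C# names stay readable while the JSON payload stays small.
[DataContract]
public class DriverPoints
{
    [DataMember(Name = "d")]
    public int DriverId { get; set; }

    [DataMember(Name = "p")]
    public int TotalPoints { get; set; }
}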

Client-specific issues

Two stand out on the iPhone – memory management and the App Store. iOS does not have a garbage collector, so you need to look after memory yourself, freeing up allocations when an object is no longer needed. I’ve taken a couple of different approaches to fixing up memory leaks, and I’m not happy with either of them. From a .NET point of view, you can think of reference-counted memory management in this way:

  • consider all iOS objects as IDisposable
  • if you create an object using a constructor – [[Type alloc] init], then you own it and need to dispose (“release”) it
  • if you don’t dispose it, the memory will not be reclaimed
  • if you get the object from any way other than alloc-init (e.g. from a static method on the class), then you don’t own it
  • if you dispose an object you don’t own, your app will crash gracelessly.

In the early days when I was picking up Objective-C I kept over-aggressively releasing objects and crashing the app, which slowed me down a good deal. The second approach was to not release anything, let all the memory leak, and then at the point of having a release candidate, run it under instrumentation (Xcode has the “Leaks” option, which does this very well) to find and fix the leaks. That option meant I could focus on development, but then I had to spend the best part of a day fixing leaks. Option 3 – having a good enough understanding of Objective-C to do memory management right first time – is the aim…

On the non-functional side, if you’re putting out a commercial app then the sales channel has to be accounted for, and with the App Store, Apple make the final decision on whether your app goes in. That can impact functionality, as Apple will reject apps that don’t account for connectivity loss, or leak megabytes of memory, so you need to understand those requirements early in the dev cycle.

And everything takes longer than expected, from signing up as a developer to posting your app for review. So leave as much time as you can, or you’ll be cutting it close if there’s a deadline for getting out there.

String comparison: StartsWith() slower than Contains()

How can that be? We’re in the realm of micro-micro-optimisation here, but I’m working on a log4net appender which writes to Event Tracing for Windows. The combination gives you ETW output with the runtime configuration and easy use of log4net. ETW logging is ultra-efficient, so I want to impact it as little as possible, which is why I’m optimising string comparison.

The appender will let you capture different levels of logging per assembly or per type, and I need to check where the call comes from to identify the correct level. So I can specify INFO level for anything in the namespace Sixeyed.Logging, and DEBUG for the class Sixeyed.Logging.Log.

In that comparison, equality comes first (so a type can be specified more granularly than an assembly), but if they’re not equal I need to check whether the type name starts with the specified namespace. StartsWith has to be more efficient than Contains, of course, because we’re only comparing the start of the string, not the whole thing. But actually no: for positive matches, StartsWith is consistently 40-50% slower than Contains. The likely culprit is that the single-argument StartsWith overload does a culture-sensitive comparison by default, while Contains does a straight ordinal scan:

10,000,000 iterations may seem excessive, but not in the context of enterprise logging. For non-matches (e.g. ascertaining that Acme.PetShop.Entities isn’t in the Sixeyed.Logging assembly), the two calls are more equally matched – Contains is typically slower, but usually by less than 10%:

A very simple optimization is the fastest: comparing the lengths of the strings first, then, if the test string is at least as long as the pattern, checking the substring:

var match = (typeName.Length >= lookup.Length);
if (match)
{
    match = typeName.Substring(0, lookup.Length) == lookup;
}
Assert.IsTrue(match);

That’s the fastest for positive and negative checks:
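If you want to reproduce the shape of these numbers, a rough harness along these lines will do it – Stopwatch around 10,000,000 calls per approach. (Worth noting: StartsWith with StringComparison.Ordinal also closes most of the gap, which supports the culture-sensitive-default explanation.)

using System;
using System.Diagnostics;

class ComparisonBenchmark
{
    static void Main()
    {
        const int iterations = 10000000;
        string typeName = "Sixeyed.Logging.Log";
        string lookup = "Sixeyed.Logging";

        Time("StartsWith (culture)", () => typeName.StartsWith(lookup), iterations);
        Time("StartsWith (ordinal)", () => typeName.StartsWith(lookup, StringComparison.Ordinal), iterations);
        Time("Contains", () => typeName.Contains(lookup), iterations);
        Time("Length + Substring", () =>
            typeName.Length >= lookup.Length &&
            typeName.Substring(0, lookup.Length) == lookup, iterations);
    }

    static void Time(string name, Func<bool> check, int iterations)
    {
        var timer = Stopwatch.StartNew();
        bool result = false;
        for (int i = 0; i < iterations; i++)
        {
            result = check();
        }
        timer.Stop();
        Console.WriteLine("{0}: {1}ms (match: {2})", name, timer.ElapsedMilliseconds, result);
    }
}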

I’m expecting positive matches to be the dominant scenario, so this optimization saves a second every 10,000,000 logs. That efficient ETW appender for log4net is coming soon…

BizTalk 2010: Integration Roadshow hits Sydney

April 4th folks, April 4th.

Quick background: The BizTalk team have been travelling the globe on a ’Microsoft Integration Roadshow’ covering countless countries and cities.

On April 4th the bus stops in Sydney. Here’s the official blurb, and I’ll be presenting – let me know if there’s anything you’d like covered in my demo and I’ll try and accommodate.

Enjoy,

Mick.


REGISTER TODAY >>

Date: Monday, 4th April, 2011

Location: The Menzies Sydney, 14 Carrington Street, Sydney NSW 2000

Time: 8:30am-12:30pm


Sydney | Monday April 4th, 2011

Microsoft Integration Road Show

Worldwide events running Feb – Apr 2011

Overview

Enterprises today typically work in a fairly heterogeneous environment with disparate systems. Connecting the systems and applications sitting across the diverse platforms and tying them to the business processes has become one of the top priorities for most organisations. As they continue to evolve towards a cloud strategy – to take advantage of the economic and scale benefits – the need to have a robust Integration Platform escalates. Microsoft offers a tremendous opportunity for customers to make a paradigm shift in the way they do business to maximize their benefits and profitability whilst maintaining an optimized cost structure.

Don’t miss this exciting opportunity to learn how we can help you beat the demands of today’s difficult economy, about our commitment to BizTalk Server and how we plan to continue to innovate in the integration space, helping you begin your journey to the Cloud.

Agenda

8:30am – 9:00am: Light Breakfast and Registration

9:15am – 10:00am: Keynote – “Innovations in Integration – Begin your journey to the Cloud”
Speaker: Paul Larsen, Group Program Manager, Microsoft Corporation

10:00am – 11:00am: Customer session
Caltex is Australia’s leading oil refiner and supplies products via a network of pipelines, terminals, depots and the company-owned and contracted transport fleet. Caltex made the business decision to acquire many of their independent resellers, who were spread across every state of Australia. In this session you’ll learn how the Caltex COSMOS project integrated those different reseller businesses into a single operating entity, now called Caltex Petroleum Services.
Robin Brown, IT Project Manager, Caltex Australia

11:00am – 11:30am: Break

11:30am – 12:30pm: Technical Drilldown
Mick Badran, CTO, Breeze
This session is for those who want to delve into the technology to see the latest integration best practices and products, including BizTalk Server 2010, AppFabric and Azure.


Target Audience

CIO/TDM/BDM, IT Directors/Managers, Architects, IT Pro & Developers

To Register

Click here to register. Space is limited, so register today to ensure your attendance at this event.

Microsoft confidential information. © 2011 Microsoft Corporation. All rights reserved.

Blog Post by: Mick Badran