The Architecture Journal – Edition 21

I co-authored an article titled ‘Design Considerations for S+S and Cloud Computing’ in this month’s Architecture Journal, along with eight other architects at Microsoft: Fred Chong, Alejandro Miguel, Jason Hogg, Ulrich Homann, Brant Zwiefel, Danny Garber, Joshy Joseph, and Scott Zimmerman.

Here is the summary: The purpose of this article is to share our thoughts about the design patterns for a new generation of applications that are referred to as Software plus Services, cloud computing, or hybrid computing. The article provides a view into S+S architectural considerations and patterns as they affect common architectural domains such as enterprise, software, and infrastructure architecture.


Check it out at http://msdn.microsoft.com/en-us/architecture/aa699439.aspx


Also be sure to check out the other great articles in the 21st edition of the magazine.  This edition is focused on SOA today and tomorrow.

Orchestrating the Cloud: Part II – Creating and Consuming a Salesforce.com Service From BizTalk Server

In my previous post, I explained my cloud orchestration scenario where my on-premises ESB coordinated calls to the Google App Engine, Salesforce.com and a local service, and returned a single data entity to a caller. That post looked at creating and consuming a Google App Engine service from BizTalk.
In this post, I’ll show you how […]

Book Review: Pro BAM in BizTalk 2009

A while back I got the Pro BAM in BizTalk Server 2009 book. I have always liked BAM, and we always try to use it in our solutions, if nothing else then for infrastructural logging purposes. However, BAM has never been described in any detail or highlighted within the BizTalk documentation. There are also a great many BizTalk solutions and developers out there that have never used BAM, perhaps in part because they haven’t had a good source to learn about it. When we had a user group meeting and talked about BAM last year, we did a short put-your-hand-up poll and, if my memory serves, only about one in five put their hand up. And this in a group that I would largely judge as pretty progressive. I didn’t ask how many had used BAM outside of BizTalk, but I am pretty sure that if I had, the answer might have been one or two out of the whole group, if that.

If the issue is that it’s hard to find a source that covers BAM with decently complete coverage, then that issue is now resolved. Pro BAM in BizTalk Server 2009 succeeds in being that source. It covers the development, administration, and business aspects of BAM. And by business I don’t solely mean the Business Analyst role, but also where BAM fits, where it makes sense, how you can get your data into the observation model, and how you can get it out to report and research on it.

Although BAM is presently a BizTalk-bundled technology, the book approaches BAM in a BizTalk-independent way, and talks as much about BAM in relation to other connected-systems technologies like WCF and WF as it does about BizTalk. But that’s in line with the trends of BizTalk in general, where WCF is more and more taking on a very central role. Not everything is 100% up to date, but that’s not to be expected; change happens so fast that yesterday can be old news today. The book still strives to put things in the context of the latest technology and concepts, and touches on topics such as Dublin and Oslo.

The book also goes into great detail about how to use the different types of tooling that come along with BAM, aimed at the different roles of Business Analyst, Developer, Administrator, and Information Worker (or Data Consumer, as the book calls it). I also like how the book has specific sections on troubleshooting, should everything not work as expected, and tips that go beyond just configuring BAM to actually living with it.

It’s a really complete book in its coverage of BAM, and pointing out what’s missing is not an easy task, nor really fair to the authors. If anything, a discussion of BAM and performance could have been present. Although BAM has a highly performant infrastructure, a performance discussion is always of interest, especially from a BizTalk perspective when comparing it to, for example, DTA tracking. The book also doesn’t go into much detail about when the different tables are used, what they contain, or what flags have what meaning. Such things are, however, not need-to-know for you to call yourself a BAM wiz, something this book may very well help you become.

Thanks Jeff and Geoff, it’s a great addition to my library. And I’m a better BizTalker for reading it 😉

Book update: SOA with .NET and Azure

Well folks, it’s been a very long time coming, but I’m happy to say that the end is near! As you can see below, we’re heading towards availability in Q1 of 2010.

“SOA with .NET and Azure” is an addition to the highly successful and respected Thomas Erl Service-Oriented Computing series from Prentice Hall. It’s been a pleasure working with Thomas and the entire team as we’ve gone through this. Some pre-release content will be available at the SOA Symposium in Rotterdam that I’ll be speaking at in a couple of weeks.

To sign up for notifications, visit http://soabooks.com/book.asp?book=soa_net&page=overview

 

BAM and SSIS notes worth repeating

SQL Server Integration Services (SSIS) is not a clustered resource. Connecting to SSIS generally means connecting directly to one of the SQL Server machines in your cluster environment, not to a virtual cluster address. Configuring SSIS for a named BAM instance still involves configuring it against your clustered BAM database instance. The BizTalk install documentation (Multicomputer) also claims that SSIS needs to be installed on the BizTalk servers. That’s incorrect. You could install SSIS and that’d be fine, but I’d much rather install just the management tools, as explained and outlined here.

While talking about BAM I can’t help but to mention an excellent article recently written by Saravana Kumar here.

Doesn’t everyone want to be the one that chooses?

Lazy? Perhaps. But bad? Unfair!

IT departments and consulting companies alike are not populated by bad developers, or lazy developers, or dispassionate ones for that matter. The word passionate is appropriate for developers or architects who do keep in sync with all the new choices available to us. Pragmatic may very well be a good definition for the rest. But calling them bad developers won’t motivate anyone and, in my opinion, is unfair. Keeping up to date is not a task necessary for all developers, but all developers could benefit and grow from doing so.

The developer isn't the problem

However, as I see it, the problem isn't with the developers; the problem is with management. Developers want to learn. I think that applies to most if not all developers. The problem, however, is twofold. One: developers are not given the time they need by management to learn enough to make educated choices. You really have to be passionate to take that learning outside of your working hours, and push that passion onto your family and friends, to the point where it's not just your job, it's become a much bigger part of your life. That's why I think the word passionate fits.

What choice is there?

So if you aren't given time to learn as part of your job, you really have very little choice. The choice left is to do it in your free time, or not at all. Two: even though Microsoft may sometimes claim that new choices are driven by business demand, and I'm sure they often are, they are often not driven by the business that you as a developer are supporting. What I mean is that the people manning your business will not always (and do not often) see how the new technologies benefit the business. The use for the business is often visualized for them by the developers, and this is where the real issue and catch-22 lies…

It will never be the same again

This increased flow of choices is in itself the root of the problem. Developers used to know it all, and management has gotten used to that. Today, the technologies to learn are far more numerous and diverse. We will never know it all again. But we can become fairly good and know enough to do our jobs well, if we are given the time and the possibility. Given that, I think everyone would choose to learn.

Focus on management

So, my call to action is to shift the focus from the developers, who I believe in general want to learn, to management and the business, and to make them understand how enabling people to learn new technology will help them realize their business goals. Because I do firmly believe they will benefit.

This post was my thoughts on the topic initiated by the duo-blog done by Johan Lindfors and Patrik Löwendahl. Oh, and incidentally, we’ve been here before. I wrote about this topic, or one very close to it, as a result of things said or written by close to the same people a year ago; see here and here, if interested.

Converting InfoPath to PDF in BizTalk

Hi all

So, the other day I had this requirement for a BizTalk pipeline component:

Take an InfoPath form and convert it into a PDF that is to be sent out via email.

This seemed easy enough. I searched a bit and found that three simple steps were needed:

  1. Install the 2007 Microsoft Office Add-in: Microsoft Save as PDF
  2. In my code, reference Microsoft.Office.InfoPath.dll and Microsoft.Office.InfoPath.FormControl.dll
  3. Write these lines of code:
FormControl formControl = new FormControl();
formControl.Open(pInMsg.Data);
string output = Path.GetTempFileName();
formControl.XmlForm.CurrentView.Export(output, Microsoft.Office.InfoPath.ExportFormat.Pdf);

Of course, this would also mean some code to read the PDF file back in and then create the output message. But hey, that was just the price I had to pay.

BUT I was being naive. As the cleverer of my readers have probably already realized, if something is called FORMcontrol, then it is for programs that have a UI. The code crashed big time at runtime with some ActiveX exception 🙁

Then I remembered that I have a colleague who had previously told me that she had
done this at some point, so I emailed her for her code.

Unfortunately, her code involved taking the form, extracting the XSL from the XSN file, performing a transformation on the XML using the XSL to generate HTML, and then using some utility to convert this into PDF. This was more complex than I had hoped, but I saw no other way. On top of that, her code had this line in it:

StreamReader stream = new StreamReader(XmlFormView.XmlForm.Template.OpenFileFromPackage("View1.xsl"));

which, as you might have guessed, also requires a UI; in her case it was used in a web application. So, no go.

So, it seems that I will have to do a lot of dirty work myself 🙁

This turned into quite a list of subtasks:

  • Take the XML document that comes through the pipeline component.
  • Take the value of the processing instruction called “mso-infoPathSolution”. This processing
    instruction is always present in an InfoPath form, and it looks something like this:

    <?mso-infoPathSolution
    solutionVersion="1.0.0.2" productVersion="12.0.0" PIVersion="1.0.0.0"
    href="http://path.to/form.xsn" name="urn:schemas-microsoft-com:office:infopath:MyForm:-myXSD-2009-09-21T15-43-10" ?>
  • Take the value of the href “attribute” that is in the value of the processing instruction.
    The href is a URI that points to the XSN that this XML is an instance of.
  • Get the XSN file that is located at the URI.
  • Extract the XSL file that matches the view of the form you want to convert into PDF.
  • Perform the transformation.
  • Convert the result into PDF.

So I am now going from the few lines of code I was hoping for to a more complex solution. Let’s look at the code:

First of all, I need the value of the processing instruction. This is easily done:

private static string GetHrefFromXml(XmlDocument infoPathForm)
{
    XmlNode piNode = infoPathForm.SelectSingleNode("/processing-instruction(\"mso-infoPathSolution\")");
    if (piNode != null && piNode is XmlProcessingInstruction)
    {
        var pi = (XmlProcessingInstruction)piNode;
        string href = pi.Value;
        int location = href.IndexOf(Href);
        if (location != -1)
        {
            href = href.Substring(location + Href.Length);
            href = href.Substring(0, href.IndexOf("\""));
            return href;
        }
        throw new ApplicationException("No href attribute was found in the processing instruction (mso-infoPathSolution). Without this, the location of the form cannot be detected and without the form no PDF can be generated.");
    }
    throw new ApplicationException("Required XML processing instruction (mso-infoPathSolution) not found. Without this, the location of the form cannot be detected and without the form no PDF can be generated.");
}

The most annoying part is that the value of a processing instruction can be anything. In this case, it appears to be a list of attributes like “normal” XML, but since this is not guaranteed, there is no language support for getting the value of the href “attribute”. So I chose to use string manipulation to get the value.
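As a language-neutral illustration of the same idea (this is my sketch, not code from the original component), here is a small Python snippet that pulls the href out of the PI value with a regular expression, treating the value as opaque text just as the C# code above does; the sample PI value is abbreviated:

```python
import re

def get_href(pi_value):
    # The PI value merely *looks* like XML attributes; since that is not
    # guaranteed, treat it as plain text and pull out the href="..." part.
    match = re.search(r'href="([^"]*)"', pi_value)
    if match is None:
        raise ValueError("No href found in mso-infoPathSolution value")
    return match.group(1)

pi = ('solutionVersion="1.0.0.2" productVersion="12.0.0" PIVersion="1.0.0.0" '
      'href="http://path.to/form.xsn" name="urn:schemas-microsoft-com:office:infopath:MyForm"')
print(get_href(pi))  # http://path.to/form.xsn
```

A regex sidesteps the substring/index bookkeeping, at the cost of the same underlying assumption: that the PI value happens to contain an attribute-like `href="..."` token.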

After getting the href, I need to get the XSN file from the SharePoint server where the form is published. This turned out to be a challenge as well.

My first approach was quite simple:

private static byte[] GetFormByUrl(string href)
{
    var wc = new WebClient
    {
        Credentials = CredentialCache.DefaultCredentials
    };
    return wc.DownloadData(href);
}

This turned out to be somewhat naive, though. When SharePoint and Forms Server get a request for the XSN file, they assume someone is trying to fill out the form. So what I got back was the HTML that Forms Server sends to a user who wants to fill out the form. Then I thought I’d try this:

private static byte[] GetFormByUrl(string href)
{
    HttpWebRequest wr = (HttpWebRequest)WebRequest.Create(href);
    wr.AllowAutoRedirect = false;
    WebResponse resp = wr.GetResponse();
    Stream stream = resp.GetResponseStream();
    using (MemoryStream ms = new MemoryStream())
    {
        byte[] buffer = new byte[1024];
        int bytes;
        // Stream.Read returns 0 at end of stream (never -1), so test for > 0
        while ((bytes = stream.Read(buffer, 0, buffer.Length)) > 0)
            ms.Write(buffer, 0, bytes);
        return ms.ToArray();
    }
}

Basically, using an HttpWebRequest I could ask it not to redirect. This didn’t work either, since what I then got back was some HTML that basically just said that the page had moved. Bummer.
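One way to detect this failure mode programmatically (a hypothetical safety check on my part, not something the original component does) is to sniff the first bytes of the response: XSN files are cabinet files, which begin with the four-byte ASCII signature MSCF, while the redirect page begins with HTML. A small Python sketch:

```python
def looks_like_xsn(data):
    # XSN files are Microsoft cabinet files, which begin with the 4-byte
    # ASCII signature "MSCF"; an HTML fill-out-this-form page does not.
    return data[:4] == b"MSCF"

print(looks_like_xsn(b"MSCF\x00\x00\x00"))              # True
print(looks_like_xsn(b"<html><body>The page has moved"))  # False
```

Checking the signature before handing the bytes to the extraction step turns the silent wrong-content case into an explicit, diagnosable error.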

But then another colleague, who apparently is better at searching than I am, found out that I can add a noredirect parameter to my request that instructs SharePoint not to redirect. This is different from my earlier approach: that one instructed .NET not to follow redirects, whereas this one instructs SharePoint not to ask me to redirect in the first place.

So I ended up with something as simple as this:

private static byte[] GetFormByUrl(string href)
{
    string url = href + "?noredirect=true";
    var wc = new WebClient
    {
        Credentials = CredentialCache.DefaultCredentials
    };
    return wc.DownloadData(url);
}

Simple and beautiful 🙂
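As a small aside (my sketch, not part of the original component): appending `?noredirect=true` verbatim would break if the href ever already carried a query string, so a slightly more defensive version of the trick picks the separator first. In Python:

```python
def with_noredirect(href):
    # Append SharePoint's noredirect flag, respecting any existing query string.
    sep = "&" if "?" in href else "?"
    return href + sep + "noredirect=true"

print(with_noredirect("http://path.to/form.xsn"))
# http://path.to/form.xsn?noredirect=true
```

For hrefs taken straight from the mso-infoPathSolution processing instruction this rarely matters, but it costs one line to be safe.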

Now I have the XSN file, and the next issue naturally pops up: how do I get the XSL extracted from the XSN file? The XSN file is just a cabinet file with another extension, so I thought this must be easy. I found out it is not. I searched and searched and ended up finding all sorts of weird stuff where people used P/Invoke and whatnot. I am puzzled that Microsoft has not added at least extraction functionality for cabinet files to the .NET Framework, but it hasn’t.

I ended up doing this:

private static string ExtractCabFile(string cabFile)
{
    string destDir = CreateTmp(true, "");

    var sh = new Shell();
    Folder fldr = sh.NameSpace(destDir);
    foreach (FolderItem f in sh.NameSpace(cabFile).Items())
        fldr.CopyHere(f, 0);
    return destDir;
}

This code assumes that the XSN file has been written to a temporary file with the extension .CAB. This is very important, since the shell opens the file with the default program for its extension, which for .CAB is Explorer. After that, all files in the cabinet file are copied to destDir, which is just a directory created in the user’s Temp directory.
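The step that writes the downloaded XSN bytes to a temporary .CAB file is mentioned but not shown in the post; a hypothetical sketch of it (in Python, with names of my own choosing, not the author’s) could look like this:

```python
import os
import tempfile

def write_xsn_as_cab(xsn_bytes):
    # Write the downloaded XSN bytes to a fresh temp file that carries a
    # .cab extension, so the Shell treats it as a cabinet folder.
    fd, path = tempfile.mkstemp(suffix=".cab")
    with os.fdopen(fd, "wb") as f:
        f.write(xsn_bytes)
    return path

cab_path = write_xsn_as_cab(b"MSCF...cabinet bytes...")
print(cab_path.endswith(".cab"))  # True
os.remove(cab_path)
```

The only essential point is the extension: the bytes are untouched, and the file name is otherwise irrelevant to the extraction.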

I am quite annoyed to have to go through all this, but that’s how things go sometimes.

So now I have found the href of the form, downloaded the form and extracted its files.
Time for the transformation:

private static MemoryStream PerformTransformation(XmlDocument xmldoc, string destDir, string view)
{
    var transform = new XslCompiledTransform();
    var stream = new StreamReader(destDir + @"\" + view + ".xsl");
    XmlReader xmlReader = XmlReader.Create(stream);
    transform.Load(xmlReader);

    var outputMemStream = new MemoryStream();
    transform.Transform(xmldoc, null, outputMemStream);
    stream.Close();
    xmlReader.Close();
    outputMemStream.Seek(0, SeekOrigin.Begin);
    return outputMemStream;
}

So just a normal XSLT transformation, resulting in some HTML that is returned in a
stream.

After this, I need to convert it into PDF, which is really simple using a tool we
bought for this:

private static byte[] GetPdfFromHtml(Parameters param)
{
    var pdfConverter = new PdfConverter
    {
        LicenseKey = "SomethingElse - You are not getting the correct License Key :-)"
    };

    byte[] pdfBytes = pdfConverter.GetPdfBytesFromHtmlStream(param.HtmlStream, Encoding.UTF8,
        param.DestDir.EndsWith(@"\") ? param.DestDir : param.DestDir + @"\");
    return pdfBytes;
}

We are using the ExpertPDF library for this. The third parameter to the GetPdfBytesFromHtmlStream method call is the directory the cabinet file was extracted to, since this is where all images used in the form are kept, and the converter needs them to include them in the PDF.

All in all; the component now works, but it turned out to be a lot more difficult
than I had hoped.

As a last detail, I added a property to my pipeline component that the developer can use to decide which view to use for the transformation from XML to HTML.

The complete code for the pipeline component will not be available for download, since
this was done for a customer, but I might do something a bit smaller and simpler and
add it to my pipeline
component collection later on.

eliasen

BizTalk 2009 – Configuring High Receiving Throughput

While on a current project and needing to tweak (as always) how well BTS processes these receives, I came across a performance document on BTS 2009 receiving.

The document below deals mainly with netTCP receive locations: one-way ports + one-way orchestrations.
Enjoy.

——

BizTalk Server 2009 Performance Optimization Guide

Brief Description

The BizTalk Server 2009 Performance Optimization Guide provides prescriptive guidance
on the best practices and techniques that should be followed to optimize BizTalk Server
performance.

http://www.microsoft.com/downloads/details.aspx?FamilyID=24660797-0C8F-4687-9D5F-B76D99B37EC2&displaylang=en

BizTalk 2006 R2 SP1 Beta hits

The BizTalk team are pleased to announce the availability of the beta release
of Service Pack 1 for BizTalk Server 2006 R2. We would like to offer you the opportunity
to download this early preview of the Service Pack and encourage you to test it out
and let us have any feedback before we release it to the BizTalk community.

Microsoft BizTalk Server 2006 R2 Service Pack 1 (SP1) is an update for BizTalk Server
2006 R2.  It includes fixes to issues that have been reported through our customer
feedback platforms, as well as internally discovered issues. To see a listing of the
customer-reported issues that are fixed in this service pack, go to http://go.microsoft.com/fwlink/?LinkId=164985. 
For a description of some of the other updates included in this service pack, see What’s
new in BizTalk Server 2006 R2 SP1
(http://go.microsoft.com/fwlink/?LinkId=163958). 

A guide for this service pack is also available on the download page.  This guide
contains important information to read before you install SP1.  It also provides
installation instructions and a section on troubleshooting installation problems.
Finally, it contains a section on known issues in this service pack release.

The service pack can be downloaded from here, and any feedback or issues you encounter can be reported here.

Thank you in advance!

Regards

BizTalk Product Group