by community-syndication | Apr 16, 2009 | BizTalk Community Blogs via Syndication
We had a little problem a few days ago when we were reviewing the testing of a B2B solution implemented with BizTalk.
This implementation was basically the collection of a file via FTP, followed by a splitter pattern which would break up a batch and cascade updates to the appropriate systems. The file which was received was a moderately complex positional file containing multiple rows of different record types.
We had implemented this as per the specification and were moving from testing this internally to test it with the business partner. One of our limitations was that we could not do integration testing as early as we would like due to some external constraints.
Our internal tests were fine, but during integration testing we found some unexpected behaviour from the partner's FTP service. A summary of this behaviour is:
- The server would store multiple files with the exact same file name. If the partner uploaded 2 files we could see them as separate instances with separate file creation dates, but they would both have the same name.
- When we executed a GET command on a file, the server would return the file, mark it as collected, and prevent us from downloading it again. To get a file again after a failed transmission, we needed the partner to make the file available again.
- We were unable to delete files from the server.
- If 2 files with the same name could be seen on the remote server, executing a GET would actually merge them together, giving us one local file containing all of the data from both remote files, with the content ordered by date.
When we experienced our little problem and became aware of this unexpected behaviour, our first thought was that we would need to do some custom coding to deal with it. On closer inspection our problem turned out to be something else, and BizTalk was actually dealing with this FTP behaviour in a way that worked well for us. Our setup basically had the FTP adapter polling the remote server; when files were downloaded they were streamed through a custom pipeline containing the FFDasm (flat file disassembler) pipeline component, which used a schema based on the partner's message specification.
The way BizTalk was dealing with these additional FTP behaviours was as follows:
- It didn't matter that BizTalk couldn't delete the file from the remote server, because once the file was downloaded it was no longer available anyway.
- When 2 or more files were merged together on download, the FFDasm component recognised this and still broke the message up correctly. If 2 files were downloaded and merged together, we would actually get 2 disassembled messages in the MessageBox, each handled correctly by FFDasm.
I guess in hindsight this kind of makes sense, but it was nice to come across a situation where something like this happens and you don't end up having to pull together some hack to work around it.
In terms of the partner's FTP service, we didn't get confirmation of the vendor-specific setup, but based on some googling (sorry, I mean Live searching) I believe it could be based on a VAX file system, where you can have multiple versions of a file, and the FTP service could be either HP-UX Enterprise Secure FTP or possibly an offering from Sterling.
by community-syndication | Apr 16, 2009 | BizTalk Community Blogs via Syndication
The Australian BizTalk User Groups (Sydney Connected Systems User Group, Brisbane BizTalk Community (BrizTalk) and Melbourne BizTalk User Group) would like to invite you to attend one of the BizTalk Server 2009 Hands On Days being presented in May and June 2009.
The event is targeted at those using previous versions of BizTalk Server and those wishing to learn more about BizTalk Server. Attendees will have a dedicated BizTalk 2009 development environment to use during the event, and can either work on the hands-on labs or experiment with the features of BizTalk 2009. The BizTalk 2009 development environment will include BizTalk 2009, RFID, ESB 2.0, Windows 2008, SQL 2008, Team Foundation Server 2008 and Visual Studio 2008 Team Suite.
The Hands On Days will be on the following dates:

| City | Date | Registration |
| --- | --- | --- |
| Melbourne | Saturday May 30th | Registration Opens Soon |
| Brisbane | Saturday June 13th | Register Your Interest |
| Sydney | Saturday June 20th | Register Your Interest |
| Canberra | TBA | Register Your Interest |
| Perth | TBA | |
| Adelaide | TBA | |

Register early as there are limited seats.
The event cost is $200* (inc GST) and will include lunch.
The event fee will be used to cover the venue and the presenters' travel expenses; any leftover funds will be used for food and drinks at upcoming user group events.
* Please note that the registration fee will be processed via PayPal by Chesnut Consulting Services.
Agenda for the Days (the Melbourne, Brisbane and Sydney columns list the presenter for each city):

| Time | Topic | Duration | Melbourne | Brisbane | Sydney |
| --- | --- | --- | --- | --- | --- |
| 9:00 AM | Intro | 15 minutes | Bill Chesnut | Daniel Toomey | Mick Badran |
| 9:15 AM | What’s New in 2009 | 60 minutes* | Bill Chesnut | Daniel Toomey | Mick Badran |
| 10:15 AM | TFS Integration (Unit Testing & Automated Build) | 60 minutes | Bill Chesnut | Dean Robertson | Bill Chesnut |
| 11:15 AM | RFID | 60 minutes | Mick Badran | Mick Badran | Mick Badran |
| 12:15 PM | Lunch / Networking | 45 minutes | | | |
| 1:00 PM | Enterprise Integration (WCF LOB adapters, EDI, AS2 and Accelerators) | 60 minutes | Miguel Herrera | Miguel Herrera | Miguel Herrera |
| 2:00 PM | ESB 2.0 | 60 minutes | Bill Chesnut | Bill Chesnut | Bill Chesnut |
| 3:00 PM | Trouble Shooting & Problem Determination | 60 minutes | Miguel Herrera | Miguel Herrera | Miguel Herrera |
| 4:00 PM | Q & A | 30 minutes | All | All | All |
| 4:30 PM | End of Day | | | | |
* Each presentation will finish early enough to give attendees a chance to put what they have learned to use on the BizTalk 2009 environments that will be provided.
by stephen-w-thomas | Apr 16, 2009 | Stephen's BizTalk and Integration Blog
With BizTalk Server 2009, setting up integration with Team Foundation Server (TFS) has become much simpler. While setting up continuous integration, automated unit tests, and MSI packaging was possible before BizTalk 2009, it was a huge pain.
Below I will walk through the steps to set this up with BizTalk 2009. I was able to get it up and running in about 30 hours, including the 15 hours it took to create the Virtual Machine. It took 47 build attempts in TFS before all the bugs were worked out of the process. While I cut some corners for the demo, it would not take much more time to develop a true production-ready solution.
We can start by taking a look at the Virtual Machine setup:
- Windows 2008 SP1
- TFS 2008 SP1 with Build Server installed
- SQL 2008 with all optional components installed
- Visual Studio 2008 SP1
- BizTalk Server 2009 with MSBuild tools installed
As you read through the steps below, keep in mind I have about 10 hours of experience with MSBuild and TFS 2008. This was my very first time setting up automated unit tests and continuous integration with BizTalk, and this is just one approach, for demo purposes. In real life, for example, all these systems would not be on the same server, which would surely make the process harder.
At a high level, this is what is happening:
An update to a file is checked in -> A build is kicked off -> The build completes with no errors -> Unit tests are run -> (Verification – not shown) -> An MSI is created
Key Pain Points:
- Setting up TFS 2008 with SQL 2008 is many times more complex than you would think. Make sure you Google this before starting.
- Remember user permissions. These will affect your share permissions, other folders, and the ability to run scripts. For example, to run the Create MSI process below, the user running the Build Agent will need to be a BizTalk Admin.
- Relative and absolute file paths are a killer. I spent a lot of time finding temp locations and getting relative paths to work. Looking at the Build Log and MSBuild Targets was a huge help.
- Keep in mind TFS will use the version of the code checked into TFS. If you update a file or the build project, make sure you check it in, or the new settings will not be used.
Download the Solution Code
Setting Up Continuous Integration, Unit Tests, and MSI Creation in BizTalk 2009 Sample Code
Setting Up Unit Tests for use in Continuous Integration
Step 1: Setup a Unit Test project following the help guide instructions at http://msdn.microsoft.com/en-us/library/dd224279.aspx
Step 2: Create a Test List in the .vsmdi file that was added to the solution when the Unit Test project was created. Right-click on the Lists of Tests. Creating the new list, named RunAllUnitTests, is shown below.
Step 3: Add the test methods from Step 1 to the new test list. Drag and drop the test into the test list. This is shown below.
Pointers: The hardest part of setting up the unit tests is getting the file paths correct for schema and map testing. I finally got tired of trying to figure it out and hard-coded the paths to known local files. This is not the right way to do it.
Setting Up Continuous Integration
Step 1: Add Solution to Source Control in its own folder tree. In this case it is called CIDemo as shown below.
Step 2: Create a new Build Definition inside Team Explorer.
Step 3: Give the Build Definition a name. In this case it is called CIDemoBuildDefinition.
Step 4: Set the Workspace to the CIDemo solution folder created in Step 1. This is shown below.
Step 5: Go to the Project File section of the wizard. Select Create on the Project File screen to make a new Build Project. A new wizard will open.
Step 6: Select the CIDemo solution. This will be the solution that the build project will build.
Step 7: Select the build type. In this case it is Release.
Step 8: Since we already created the Unit Tests and a Test List, select the RunAllUnitTests test list to have the unit tests run when a build is performed. This is shown below. This can always be updated in the build project later on if the Unit Tests are not ready. Click Finish to end this wizard.
Step 9: Back on the main wizard, leave the Retention Policy items unchanged.
Step 10: Under Build Defaults, select New to create a new Build Agent. Name the build agent and set the computer name. In this case the name is CIDemoBuildAgent and the computer name is Win2008Ent-Base, as seen below.
Step 11: Set the share location for the builds to be copied to, also known as the Drop Location. A local share called Builds was created. To avoid permission problems, Everyone was added to this share; this is not what should be done in real life, and most likely the share would be on another server.
Step 12: Under Trigger, select the Build each check-in radio button. This will create a new build with each check-in. Click OK to create the Build Definition.
Step 13: Test the process. Check in a file.
Creating A BizTalk MSI
The process used here to build an MSI package first installs the BizTalk assemblies and binding file to a local BizTalk Server, then exports the MSI package. While other approaches can be used that do not require a local BizTalk instance, this approach allows additional BizUnit-style unit tests (or build verification tests) to be run against the deployed code.
Step 1: Modify the CreateFullandPartialMSI.bat sample file found in the CreateApp folder under Application Deployment in the BizTalk 2009 SDK. This file is called BuildMSI.bat in the Helper folder in the solution. Changes made to the file include changing paths, dll names, and application names. Make sure the dlls are listed in the correct deploy order, i.e. Schemas before Maps.
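For context, the heart of such a bat file is a series of BTSTask calls along these lines. This is only a rough sketch under assumptions: the application name (CIDemo), dll names and paths are placeholders, and the real SDK script does considerably more (error handling, partial MSIs, etc.):

```shell
rem Sketch only: application name, dll names and paths are hypothetical.
rem BTSTask ships with BizTalk and must be run by a BizTalk Administrator.

rem Add the assemblies in deploy order: schemas before maps.
BTSTask AddResource /ApplicationName:CIDemo /Type:BizTalkAssembly /Overwrite /Source:"%~dp0Release\CIDemo.Schemas.dll" /Options:GacOnAdd,GacOnInstall
BTSTask AddResource /ApplicationName:CIDemo /Type:BizTalkAssembly /Overwrite /Source:"%~dp0Release\CIDemo.Maps.dll" /Options:GacOnAdd,GacOnInstall

rem Add the binding file, then export the whole application as an MSI.
BTSTask AddResource /ApplicationName:CIDemo /Type:BizTalkBinding /Overwrite /Source:"%~dp0Release\Bindings\CIDemo_Bindings_Dev.xml"
BTSTask ExportApp /ApplicationName:CIDemo /Package:"%~dp0CIDemo.msi"
```

Note how the assemblies are added in deploy order (schemas before maps), matching the point above.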
Step 2: Modify the Build Project created in Step 5 above. This file is an MSBuild file that controls the build and the tests run against it. At the end of the file, right before the closing </Project> tag, add the following:
<!-- This Target creates a directory for the binding files and copies them and the build bat file to the temp directory. -->
<Target Name="AfterTest">
  <MakeDir Directories="$(BinariesRoot)/Release/Bindings"></MakeDir>
  <Copy SourceFiles="$(SolutionRoot)/Bindings/CIDemo_Bindings_Dev.xml" DestinationFiles="$(BinariesRoot)/Release/Bindings/CIDemo_Bindings_Dev.xml"></Copy>
  <Copy SourceFiles="$(SolutionRoot)/Helper/BuildMSI.bat" DestinationFiles="$(BinariesRoot)/BuildMSI.bat"></Copy>
</Target>
<!-- This Target runs the build bat file, copies the completed MSI, and deletes the bat file from the file share. -->
<Target Name="AfterEndToEndIteration">
  <Exec Command="$(BinariesRoot)/BuildMSI.bat" WorkingDirectory="$(BinariesRoot)"></Exec>
  <Copy SourceFiles="$(BinariesRoot)/CIDemo.msi" DestinationFiles="$(DropLocation)/$(BuildNumber)/CIDemo.msi"></Copy>
  <Delete Files="$(DropLocation)/$(BuildNumber)/BuildMSI.bat"></Delete>
</Target>
This code copies the binding files and the bat file used to build the MSI, and does some clean-up. It can be customized as needed; the possibilities are almost endless. Make sure the updated file is checked into TFS.
Step 3: Ensure the user account running the build agent is a member of the BizTalk Admin group. This user can be found inside TFS by starting a build and viewing its properties, as seen below. The account is set up when you install TFS.
Step 4: Watch for output in the folder share when a file is checked in or a build is manually started.
This outlines at a high level the process to create automated unit tests, set up continuous integration, and create a BizTalk MSI package. I hope to put this together into a video shortly. Until then, best of luck.
by community-syndication | Apr 16, 2009 | BizTalk Community Blogs via Syndication
I am building a BizTalk 2009 book library for my team at work and thought I’d share the list of books with some comments. They can all be pre-ordered from Amazon (with the exception of Pro Mapping in BizTalk 2009, which is already shipping).
Must have books:
SOA Patterns with BizTalk Server 2009 by Richard Seroter
Very much […]
by stephen-w-thomas | Apr 15, 2009 | Downloads
This sample code outlines how to set up Continuous Integration and Automated Unit Tests, and how to create an MSI Package, using Team Foundation Server 2008 (TFS) with BizTalk Server 2009.
At a high level, this is what happens in the end-to-end process:
A file is updated and checked in -> A build is started -> The build completes -> Defined unit tests are run -> An MSI package is created
This sample code goes along with a step-by-step blog post that outlines this process.
That blog post can be found here: http://www.biztalkgurus.com/blogs/biztalk-integration/2009/04/16/setting-up-continuous-integration-automated-unit-tests-and-msi-packaging-in-biztalk-2009/
by community-syndication | Apr 15, 2009 | BizTalk Community Blogs via Syndication
Hi all
I have had posts about the context accessor functoid here and here.
Just a couple of notes about the context accessor functoids (plural, because there are two functoids on CodePlex):
- One of the functoids will only work when called from a map that is executed inside an orchestration.
- The other functoid will only work when called from a map in a receive port, AND only if the pipeline component that ships with the functoid has been used in the receive pipeline.
As you can see, building a map on either of these functoids ties the map to either an orchestration or a receive port, depending on which functoid you chose, and makes it impossible to use in the other.
So you are creating a pretty hard coupling between your map and where it should be used. This can be OK, but if other developers mess around with your solution in a year or so, they won't know that, and things can start breaking.
Myself, I am a user of the functoids – I would use them instead of assigning values inside an orchestration using a message assignment shape – but this discussion is pretty much academic and about religion 🙂
Anyway, beware the limitations!
—
eliasen
by community-syndication | Apr 15, 2009 | BizTalk Community Blogs via Syndication
Hi all
I had a post about one of the context accessor functoids, which can be seen here: http://blog.eliasen.dk/2009/04/01/TheContextAccessorFunctoidPartI.aspx
This post is about the other one – the one that can only be used in a map that is used in a receive port.
Basically, the functoid takes three inputs: the first is the name of the property, the second is the namespace of the property schema the property belongs to, and the third is an optional string that is returned in case the promoted property could not be read.
This functoid only works in a map that is called in a receive port, and only if the receive location uses a pipeline that uses the ContextAccessorProvider pipeline component included in the same DLL as the functoids.
What the pipeline component does is take the context of the incoming message and save it in a public static member. The functoid can then access this static member of the pipeline component and read the promoted properties from it.
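Stripped of the BizTalk plumbing, the handoff between the pipeline component and the functoid might look like the following sketch. All names and the key format here are illustrative assumptions, not the actual CodePlex implementation:

```csharp
using System;
using System.Collections.Generic;

// Stands in for the pipeline component's public static member that
// holds the context of the most recently received message.
public static class ContextStore
{
    public static IDictionary<string, object> Context { get; set; }
}

public static class ContextAccessor
{
    // Mirrors the functoid's three inputs: the property name, the
    // namespace of its property schema, and a fallback string returned
    // when the promoted property cannot be read.
    public static string GetContextProperty(string name, string ns, string fallback)
    {
        object value;
        if (ContextStore.Context != null
            && ContextStore.Context.TryGetValue(ns + "#" + name, out value)
            && value != null)
        {
            return value.ToString();
        }
        return fallback;
    }
}
```

The static member is the crux: the trick only works because the receive pipeline and the map run one after the other in the same process, which is exactly why the functoid is tied to receive ports that use the companion pipeline component.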
Good luck using it.
—
eliasen
by community-syndication | Apr 15, 2009 | BizTalk Community Blogs via Syndication
[Source: http://geekswithblogs.net/EltonStoneman]
Managing concurrency within an application boundary can be straightforward where you own the database schema and the application’s data representation. By adding an incrementing lock sequence to tables and holding the current sequence in entity objects, you can implement optimistic locking at the database level without a significant performance hit. At the service level, the situation is more complicated. Even where the database schema can be extended, you wouldn’t want the internals of concurrency management to be exposed in service contracts, so the lock sequence approach isn’t suitable.
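The lock-sequence idea reduces to "the write succeeds only if the sequence you read is still current". A minimal in-memory sketch, with hypothetical type and member names (at the database level this would be an UPDATE whose WHERE clause checks the original sequence and which increments the column):

```csharp
using System;

// Entity carrying the incrementing lock sequence alongside its data.
public class CustomerEntity
{
    public string Name;
    public int LockSequence;   // bumped on every successful write
}

public static class OptimisticStore
{
    // Returns false (a concurrency violation) when another writer has
    // incremented the sequence since originalLockSequence was read.
    public static bool TryUpdate(CustomerEntity current, string newName, int originalLockSequence)
    {
        if (current.LockSequence != originalLockSequence)
            return false;

        current.Name = newName;
        current.LockSequence++;   // invalidates everyone else's stale reads
        return true;
    }
}
```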
An alternative pattern is to compute a data signature representing the retrieved state of an entity at the service level, and flow the signature alongside the entity in Get services. On Update calls, the original data signature is passed back and compared to the current signature of the data; if they differ then there’s been a concurrency violation and the update fails. The signature can be passed as a SOAP header across the wire so it’s not part of the contract and the optimistic locking strategy is transparent to consumers.
The level of transparency will depend on the consumer, as it needs to retrieve the signature from the Get call, retain it, and pass it back on the Update call. In WCF the DataContract versioning mechanism can be used to extract the signature from the header and retain it in the ExtensionData property of IExtensibleDataObject. The contents of the ExtensionData property are not directly accessible, so if the same DataContract is used on the Get and the Update, and the signature management is done through WCF extension points, then concurrency control is transparent to users.
I’ve worked through a WCF implementation for this pattern on MSDN Code Gallery here: Optimistic Locking over WCF. The sample uses a WCF behavior on the server side to compute a data signature (as a hash of the serializable object – generating a deterministic GUID from the XML string) and adds it to outgoing message headers for all services which return a DataContract object. On the consumer side, a parallel behaviour extracts the data signature from the header and adds it to ExtensionData, by appending it to the XML payload and using the standard DataContractSerializer to extract it.
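The signature computation described above – hash the serialized object and treat the 16-byte hash as a deterministic GUID – might be sketched like this. This is illustrative only; the Code Gallery sample's actual code may differ in detail:

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Security.Cryptography;

public static class DataSignature
{
    // Serialize the entity to XML and hash the bytes; MD5 conveniently
    // yields exactly the 16 bytes a Guid requires. Equal state in, equal
    // signature out - which is all the optimistic locking check needs.
    public static Guid Sign<T>(T entity)
    {
        var serializer = new DataContractSerializer(typeof(T));
        using (var stream = new MemoryStream())
        {
            serializer.WriteObject(stream, entity);
            using (var md5 = MD5.Create())
            {
                return new Guid(md5.ComputeHash(stream.ToArray()));
            }
        }
    }
}
```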
The update service checks the data signature passed in the call with the current signature of the object and throws a known FaultException if there’s been a concurrency violation, which the WCF client can catch and react to:
Sixeyed.OptimisticLockingSample
The sample solution consists of four projects providing a SQL database for Customer entities, WCF Get and Update services, a WCF client and the ServiceModel library which contains the data signature behaviors. DataSignatureServiceBehavior adds a dispatch message formatter to each service operation, which computes the hash for any DataContract objects being returned, and adds it to the message headers. DataSignatureEndpointBehavior on the client adds a client message formatter to each endpoint operation, which extracts the hash from incoming calls, stores it in ExtensionData and adds it back to the header on outgoing calls.
Concurrency checking is done on the server side in the Update call, by comparing the given data signature to the signature from the current object state:
Guid dataSignature = DataSignature.Current;
if (dataSignature == Guid.Empty)
{
//this is an update method, so no data signature to
//compare against is an exception:
throw new FaultException<NoDataSignature>(new NoDataSignature());
}
Customer currentState = CustomerEntityService.Load(customer);
Guid currentDataSignature = DataSignature.Sign(currentState);
//if the data signatures match then update:
if (currentDataSignature == dataSignature)
{
CustomerEntityService.Update(customer);
}
else
{
//otherwise, throw concurrency violation exception:
throw new FaultException<ConcurrencyViolation>(new ConcurrencyViolation());
}
A limitation of the sample is the use of IExtensibleDataObject to store the data signature at the client side. Although this is fully functional and allows a completely generic solution, it relies on reflection to extract the data signature and add it to the message headers for the update call, which is a brittle option. Where you have greater control over the client, you can use a custom solution which will be more suitable – e.g. creating and implementing an IDataSignedEntity interface, or if consuming the services in BizTalk, by using context properties.
by community-syndication | Apr 15, 2009 | BizTalk Community Blogs via Syndication
[Source: http://geekswithblogs.net/EltonStoneman]
The venerable log4net library enables cheap instrumentation with configured logging levels, so logs are only written if the log call is on or above the active level. However, the evaluation of the log message always takes place, so there is some performance hit even if the log is not actually written. You can get over this by using delegates for the log message, which are only evaluated based on the active log level:
public static void Log(LogLevel level, Func<string> fMessage)
{
if (IsLogLevelEnabled(level))
{
LogInternal(level, fMessage.Invoke());
}
}
Making the delegate call with a lambda expression makes the code easy to read, as well as giving a performance saving:
Logger.Log(LogLevel.Debug,
    () => string.Format("Time: {0}, Config setting: {1}",
        DateTime.Now.TimeOfDay,
        ConfigurationManager.AppSettings["configValue"]));
For simple log messages the saving may be minimal, but if the log involves walking the current stack to retrieve parameter values, it may be worth having. The sample above writes the current time and a configuration value to the log if the level is set to Debug; with the log level set to Warn, the log isn't written. Executing the call 1,000,000 times at Warn level consistently takes over 3.7 seconds if the logger call is made directly, and less than 0.08 seconds if the lambda delegate is used.
With a Warn call the log is active, and the direct and lambda variants both run 5,000 calls in 8.6 seconds, writing to a rolling log file appender.
I’ve added the logger and test code to the MSDN Code Gallery sample: Lambda log4net Sample, if you’re interested in checking it out.
by community-syndication | Apr 15, 2009 | BizTalk Community Blogs via Syndication
I’ve added some functionality to the PGP Pipeline component to enable it to Sign and Encrypt files.
Properties Explained:
ASCIIArmorFlag – writes out the file in ASCII or binary
Extension – the final file's extension
Operation – Decrypt, Encrypt, and now Sign and Encrypt
Passphrase – the private key's password, for decrypting and signing
PrivateKeyFile – absolute path to the private key file
PublicKeyFile – absolute path to the public key file
TempDirectory – temporary directory used for file processing
Email me if you could use this.