by community-syndication | Jan 29, 2006 | BizTalk Community Blogs via Syndication
Last year a customer had a requirement to process DBF files in BizTalk. I created a custom pipeline component that saved the incoming binary stream to a physical file on the BizTalk machine and then used basic ADO.NET to parse the DBF File into an XML document. I then modified/extended this pipeline component to accept and parse other ODBC files to XML, such as:
DBF
Excel
FoxPro
Possibly others such as Access Files.
At present, this custom pipeline component only parses Excel and DBF files, but it is possible to modify it to process other ODBC types.
When used in a BizTalk Receive Pipeline, the component does the following:
Raw DBF or Excel messages are delivered to BizTalk by any transport, such as:
File
FTP
MSMQ
etc. etc.
The raw message is parsed to XML in the BizTalk Receive Pipeline, and the parsed XML message is published into the MsgBox.
This component requires no special APIs and uses basic ADO.NET code to parse the ODBC type files into XML.
You can download the full source code for the Custom Pipeline component at the end of this entry.
The component works as below:
1) The incoming file is saved to a temporary file on the BizTalk machine.
2) An OLEDB connection is opened against the file from step 1.
3) A SQL query is executed against the OLEDB data source.
4) The results of the query are loaded into an ADO.NET DataSet/DataTable.
5) The XML is extracted from the DataTable and modified to apply the configured root node name and target namespace.
6) The temporary file from step 1 is deleted.
7) The XML from step 5 is written back to the pipeline message stream.
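To make steps 1-7 a little more concrete, here is a rough sketch of what the Decode method could look like. The helper name ParseToXml and the exact stream handling are illustrative only – the real code is in the download at the end of this entry:

// Rough sketch of the Decode flow in steps 1-7 above. ParseToXml is an
// illustrative helper name, not the actual method in the component.
using System.IO;
using System.Text;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
    // 1) Save the incoming stream to a temporary file on the BizTalk machine
    string extension = (this.TypeToProcess == odbcType.Excel) ? ".xls" : ".dbf";
    string tempFile = Path.Combine(this.TempDropFolderLocation, Path.GetRandomFileName() + extension);
    Stream inStream = pInMsg.BodyPart.GetOriginalDataStream();
    using (FileStream fileStream = File.Create(tempFile))
    {
        byte[] buffer = new byte[8192];
        int read;
        while ((read = inStream.Read(buffer, 0, buffer.Length)) > 0)
            fileStream.Write(buffer, 0, read);
    }

    // 2) - 5) Connect with OLEDB, run the query and build the XML
    //         (see the parsing snippet later in this post)
    string xml = ParseToXml(tempFile);

    // 6) Delete the temporary file, if configured to do so
    if (this.DeleteTempMessages)
        File.Delete(tempFile);

    // 7) Write the parsed XML back to the pipeline message stream
    MemoryStream outStream = new MemoryStream(Encoding.UTF8.GetBytes(xml));
    outStream.Position = 0;
    pInMsg.BodyPart.Data = outStream;
    return pInMsg;
}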
The custom pipeline component was coded as a Decoder pipeline component, but it could be modified to implement a Disassembler pipeline component.
The Custom Pipeline Component exposes a number of properties for dynamic configuration.
The connection string and query differ slightly between an Excel and a DBF file, so the configuration for each is discussed separately:
Excel
The incoming Excel file to be parsed looks as below:
The resultant parsed XML file will look as below:
Note: Only two Employee nodes are present in the XML file due to a filter condition in the configuration (see below).
The Configuration for this Pipeline is as below:
1) ConnectionString -> The OLEDB Connection string for the Excel file.
The following is set for the ConnectionString property:
Provider=Microsoft.Jet.OLEDB.4.0;Extended Properties=Excel 8.0;
But, the final Connection String that is produced by the code looks like below:
Provider=Microsoft.Jet.OLEDB.4.0;Extended Properties=Excel 8.0;Data Source=C:\Temp\afgd1234.xls
This is because the code dumps the Excel file to the TempDropFolderLocation and must dynamically add the Data Source section to the connection string.
Note: Other connection properties for an Excel file:
“HDR=Yes;” indicates that the first row contains column names, not data.
“IMEX=1;” tells the driver to always read “intermixed” data columns as text.
(Above From: http://www.connectionstrings.com/ )
2) DataNodeName -> The XML Node name for the Data. In this case Employee
3) DeleteTempMessages -> If set to True, will delete the Excel file that is dropped to the TempDropFolderLocation after processing.
4) Filter -> Filter for the SqlStatement. In this case, only rows with LastName Like %B% are selected.
Note: This is optional. If all data is to be returned, leave blank.
5) Namespace -> NameSpace for the resultant XML message.
6) RootNodeName -> Root Node Name for the resultant XML Message.
7) SqlStatement -> OLEDB Select Statement.
SQL syntax: SELECT * FROM [sheet1$] – i.e. worksheet name followed by a “$” and wrapped in “[” “]” brackets.
(Above From: http://www.connectionstrings.com/ )
Note: The SqlStatement could also look as below:
Select FirstName,LastName FROM [sheet1$] (only bring back selected columns)
Select FirstName as FName, LastName as LName FROM [sheet1$] (rename the column Names in the resultant XML)
8) TypeToProcess -> In this case Excel File.
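To make the Excel settings concrete, the fragment below shows roughly how the configured values end up being combined at runtime (the variable names are illustrative; the real component builds an equivalent string – see the parsing snippet later in this post):

// Illustrative only: how the configured Excel values are combined at runtime
string connectionString = this.ConnectionString           // Provider=Microsoft.Jet.OLEDB.4.0;Extended Properties=Excel 8.0;
                        + "Data Source=" + tempFile;       // e.g. C:\Temp\afgd1234.xls, added by the component
string whereClause = (this.Filter.Trim() == "") ? "" : " Where " + this.Filter.Trim();
string query = this.SqlStatement.Trim() + whereClause;     // e.g. SELECT * FROM [sheet1$] Where LastName Like '%B%'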
DBF
The incoming DBF file to be parsed looks as below:
The resultant parsed XML file will look as below:
Note: Only two Items nodes are present in the XML file due to a filter condition in the configuration (see below).
The Configuration for this Pipeline is as below:
Note: The above is an example of Per Instance Pipeline Configuration for BizTalk 2006.
1) ConnectionString -> The OLEDB Connection string for the DBF file.
The following is set for the ConnectionString property:
Provider=Microsoft.Jet.OLEDB.4.0;Extended Properties=dBASE IV;
But, the final Connection String that is produced by the code looks like below:
Provider=Microsoft.Jet.OLEDB.4.0;Extended Properties=dBASE IV;Data Source=C:\Temp\
This is because the code dumps the DBF file to the TempDropFolderLocation and must dynamically add the Data Source section to the connection string.
2) DataNodeName -> The XML Node name for the Data. In this case Items
3) DeleteTempMessages -> If set to True, will delete the DBF file that is dropped to the TempDropFolderLocation after processing.
4) Filter -> Filter for the SqlStatement. In this case, only rows with PRICE >= 200 and PRICE <= 500 are selected.
Note: This is optional. If all data is to be returned, leave blank.
5) Namespace -> NameSpace for the resultant XML message.
6) RootNodeName -> Root Node Name for the resultant XML Message.
7) SqlStatement -> OLEDB Select Statement.
In this case, only the column portion of the Select statement is configured, as below:
Select *
This is because the code dumps the DBF file to the TempDropFolderLocation and must dynamically add the FROM clause, as below:
SELECT * FROM i0lb1gcr.dbf
Note: The SqlStatement could also look as below:
Select COD, PRICE (only bring back selected columns)
Select COD as Id, Price as Amount (rename the Node Names in the resultant XML)
8) TypeToProcess -> In this case DBF File.
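The DBF case is assembled slightly differently: the Data Source in the connection string is the folder (the TempDropFolderLocation), and the temporary file name becomes the table in the FROM clause. Roughly (variable names illustrative):

// Illustrative only: for dBASE the Data Source is the folder and the file name is the table
string connectionString = this.ConnectionString                        // Provider=Microsoft.Jet.OLEDB.4.0;Extended Properties=dBASE IV;
                        + "Data Source=" + this.TempDropFolderLocation; // e.g. C:\Temp\
string whereClause = (this.Filter.Trim() == "") ? "" : " Where " + this.Filter.Trim();
string query = this.SqlStatement.Trim() + " From " + filename + whereClause;  // e.g. SELECT * FROM i0lb1gcr.dbf Where PRICE >= 200 and PRICE <= 500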
Note: When configuring the pipeline component in the BizTalk Server 2006 Administration console, the values for TypeToProcess are:
0 -> Excel
1 -> DBF
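These numeric values map to the odbcType enum referenced in the parsing snippet below; it is along these lines (the exact member names are an assumption, apart from Excel which appears in the snippet):

// Assumed shape of the enum behind the TypeToProcess property
public enum odbcType
{
    Excel = 0,
    DBF = 1
}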
You can download the code Here. Before installing, look at the Readme.
Note: This code was written in VS2005. If you want to use it in VS2003, create a new Pipeline project in VS2003 and then just copy the code from DecodeODBC.cs into the VS2003 class. Also, thoroughly test the code before using it.
Finally:
The not-so-good things about this component are:
1) It has to write the ODBC file locally to disk before parsing, which creates extra disk I/O. I did test it with multiple submissions of 1 MB DBF files and the performance still seemed pretty good.
2) The types of Excel files it can process are flat. If your Excel files are complex, I’m not sure how well this component will parse them to XML.
The good things about this component are:
1) The code to parse the ODBC files is dead simple; it looks something like this:
// oConn is the OleDbConnection built from the ConnectionString property described above
OleDbDataAdapter oCmd;
// Get the filter if there is one
string whereClause = "";
if (Filter.Trim() != "")
    whereClause = " Where " + Filter.Trim();
if (this.TypeToProcess == odbcType.Excel)
    oCmd = new OleDbDataAdapter(this.SqlStatement.Trim() + whereClause, oConn);
else // DBF: the FROM clause is added dynamically using the temporary file name
    oCmd = new OleDbDataAdapter(this.SqlStatement.Trim() + " From " + filename + whereClause, oConn);
oConn.Open();
// Run the Select statement and fill a DataSet; the table name becomes the DataNodeName
DataSet odbcDataSet = new DataSet();
oCmd.Fill(odbcDataSet, this.DataNodeName);
oConn.Close();
// Write the XML from this DataSet into a StringBuilder
System.Text.StringBuilder stringBuilder = new StringBuilder();
System.IO.StringWriter stringWriter = new System.IO.StringWriter(stringBuilder);
odbcDataSet.Tables[0].WriteXml(stringWriter);
(A sketch of how this XML is then wrapped with the configured RootNodeName and Namespace follows after this list.)
2) This code can be modified to process other types of ODBC files. The modifications may be minor.
3) You can filter the data in an incoming Excel or DBF file.
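For completeness, here is a rough sketch of how the XML produced by the snippet in point 1 might then be wrapped with the configured RootNodeName and Namespace (step 5 of the flow). This is illustrative only – the actual component may do the wrapping differently:

// Illustrative only: wrap the rows written by WriteXml with the configured root node and namespace
System.Xml.XmlDocument dataDoc = new System.Xml.XmlDocument();
dataDoc.LoadXml(stringBuilder.ToString());              // e.g. <NewDataSet><Employee>...</Employee></NewDataSet>

System.Xml.XmlDocument outDoc = new System.Xml.XmlDocument();
System.Xml.XmlElement root = outDoc.CreateElement(this.RootNodeName, this.Namespace);
outDoc.AppendChild(root);

// Copy each data node (e.g. Employee) under the new root
foreach (System.Xml.XmlNode row in dataDoc.DocumentElement.ChildNodes)
    root.AppendChild(outDoc.ImportNode(row, true));

string finalXml = outDoc.OuterXml;                      // this is what goes back onto the message stream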
by community-syndication | Jan 28, 2006 | BizTalk Community Blogs via Syndication
I had an interesting problem yesterday, and Google wasn’t able to find anything to help me, so I thought I’d post it here for the benefit of others that may experience the same thing.
I had a project that built and compiled fine, but when I went to enlist the orchestration, I got a “parameter is incorrect” error. Not much to go on! (this was with BizTalk 2004, BizTalk 2006 may work differently)
I found some references that implied this could be caused by a missing dependency, but that didn’t seem to fit here as I had quintuple-checked all dependencies.
After some digging, it turns out that what was failing was that there was a namespace issue with a promoted property. I was using that promoted property as part of a filter condition in a dynamically bound activating receive port. When the enlistment process ran, it was unable to create the subscription, resulting in this error.
It all makes perfect sense (now!)
by community-syndication | Jan 25, 2006 | BizTalk Community Blogs via Syndication
By way of background, let me take you way back to the proof-of-concept (POC), which is where the project I’m on had its genesis.
It was a competitive situation, where various vendors were given a set of requirements and a month to deliver an ESB-like solution that met them. Our team consisted of three people: Marty Wasznicky (Microsoft), Curt Peterson (Neudesic) and Todd Sussman (Neudesic). I was off doing other things at the time. Our team met all the objectives well ahead of the deadline, and then in fact proceeded to exceed expectations. At the end of the POC process, our team was the clear winner.
The basic capabilities of the POC were:
- Dynamic transformation (selecting and applying a map based on some external criteria, in our case the rules engine)
- Dynamic routing (contacting a UDDI directory [SOA Software] to get a SOAP endpoint URI)
- Integration with SalesForce.com
- Integration with AmberPoint
Transports involved were:
- SOAP (calling a SalesForce.com Web service)
- MQSeries
- File drop (as in all BizTalk demos and POCs :))
Although functionally the POC was very simple, it did prove out the various technologies, and showed that BizTalk was more than capable of playing a pivotal role in the client’s heterogeneous environment.
There were some interesting challenges with the SalesForce.com integration. Here’s what Curt has to say:
Integration with SalesForce.com’s Web Services interface, at first glance, appeared pretty straightforward. Unfortunately, the WSDL that was produced by SalesForce included nested Schema references that confused the BizTalk Web Services adapter wizard; we ended up creating the artifacts manually in BizTalk. The good thing is, all of this could be done manually, the wizards didn’t do anything “magic”.
This is of course all fluid, and that statement was true in June 2005. Your results may vary now. Here is an internal Neudesic Field Note that Curt wrote up that you may find of interest if you’re up against this.
This is the last background/intro post, after this we’ll start delving into architectural issues, and the timelines will converge with my somewhat hectic reality.
by community-syndication | Jan 24, 2006 | BizTalk Community Blogs via Syndication
Flat file parsing in BTS2006 is pretty nice in general – you get a very specific error back in the error log if your incoming file is in the wrong format for the schema (or, put a more likely way, if you have buggered up creating the schema).
However, this vague error stumped us for a little while:
There was a failure executing the receive pipeline: “Orders.ContoFFPipeline.ContoFF, Orders.ContoFFPipeline, Version=1.0.0.0, Culture=neutral, PublicKeyToken=d5d6f7acdeb815af” Source: “Flat file disassembler” Receive Port: “RcvEnvelopedDocument” URI: “C:\FFIN\*.TXT” Reason: Unrecognized data in remaining stream
In our case this was due to a rogue carriage return at the end of the file (we had two instead of one!). Hope this saves someone some time!
by community-syndication | Jan 24, 2006 | BizTalk Community Blogs via Syndication
Here’s a braindump that I did for people with 2004 experience who are moving across to 2006. It’s just a very quick run through some of the new bits and pieces to look out for. It’s not complete, but it might be handy as the 50,000 ft view. It’s also written with a dev audience in mind and is not necessarily the view of my employer either! (Is that enough caveats?).
New Features
Functionally very similar to 2k4 – nothing like the shift in experience from 2k2 to 2k4. It’s a “tidying up/more functionality” release, not a re-write.
The #1 new feature: You can zoom orchestrations in the development environment 😉
New BTS Admin tool which is actually useful (cf current admin console) and can be used to make modifications to your server. Also contains really good reporting, far nicer than HAT, for investigating failed messages.
New security group aimed at system operators who need to maintain, not modify, a server install
Includes lots and lots more adapters out of the box – e.g. MSMQ, POP3 – as well as all the iWay adapters such as JD Edwards.
Really nice flat-file schema wizard, should save hours for anyone who needs to work with flat files.
Solution Deployment
Really, really, really, really nice. Collections of assemblies/bindings/resource files/etc are grouped into “Applications” within BTS. These applications appear as their own controllable group within server administrator. You can start and stop the whole thing (including orchs, send/receive ports) simply with a right click. You can also right-click and generate an MSI ready for import to another BTS2k6 box – and yes you can include multiple bindings files for multiple environments and it prompts you at MSI import time as to which environment you are setting up. Interesting to see how Scott Colestock makes use of this.
Server Installation
This is now much easier. All pre-requisites are supplied in a CAB file and installed for you, so no more hunting around for Patch A and Service Pack B.
Install is a two-phase process: 1 – Install, 2 – Configure. This means you can install to multiple boxes, then just run the configuration wizard to set up multiple identically configured BizTalk boxes. The config tool is now robust and usable (cf. ConfigFramework.exe).
You can install on XP – but don’t, as you get a slightly different feature set and the install is slightly different. Better to stick to 2k3, unless your client’s servers will be running XP of course 😉
You can pre-create BTS databases and the installer will use and configure them. Just create an empty DB with the right name, and off you go. This is good for live deployments where you can create the database on the server and disks that you want, and pre-create it at a sensible size instead of forcing it to autogrow up to 2-3GB – a good way to boost performance.
Apparently, renaming of a biztalk server will be supported using a UI. This has yet to make an appearance in beta though.
Migration
BTS2004 projects “will” open fine in 2006. There have been reports of one single schema file that had issues, but it took 5 minutes to fix.
BAM
New BAM portal, which runs on ASP.NET. BAM can now hook into pipelines and message properties, not just orchestrations, so you can use it for pure messaging solutions too out of the box without dropping down into custom pipelines, custom components and the BAM API.
Can maybe use BAM for large-scale message tracking instead of TPE, with potential big performance gains as you get a dedicated database for it. This is just an idea some people are floating at the moment, and worth looking into for advantages/disadvantages
Samples
BTS2k6 ships with “Scenarios” – showing, eg, b2b, messaging, and so on. These are enterprise quality, complex applications showing best practice. Significant dev effort from MS into making these, so leverage them. Also probably contain useful components and utilities.
Random
Apparently XBox Live runs on BizTalk 2006 🙂