In an effort to standardize the consumption of EDI documents, I have come across the same pattern again and again:
- Create flat file schema
- Bring in an EDI transaction
- Create a map to a fixed-length flat file
- Deploy
- Start Testing
This is pretty standard, and there is nothing wrong with the approach; it is easy and relatively quick to implement. The problem is that the backend changes, the business decides it wants different data, or someone comes back and reports that something is missing. The typical request is: “we need to be getting field X, can you rerun the past {month/year} data?”
That means the following pattern must be followed:
- Modify flat file schema
- Modify existing map
- Undeploy map project
- Redeploy modified flat file project
- Redeploy modified mapping project
- Apply bindings
- Run all files through updated process
This becomes very tedious when the output is not clearly defined or when there are a lot of transactions to process.
What we have created is an EDI warehouse that stores all EDI data. Below is the data flow:
Once the data is loaded with its associated primary keys, it waits until the next transaction in the interchange is processed. Once all transactions have been processed, a stored procedure is called that extracts the data into whatever form the business needs.
The nice part of this design is that when the business needs different data, all that is required is changing the query. No more code changes, redeploys, etc.
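The load-then-query idea can be sketched in a few lines. This is a minimal illustration using Python's built-in sqlite3, not the actual warehouse: the table name `edi_850_item`, its columns, and the sample values are all hypothetical stand-ins for the package's real schema and stored procedure.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Hypothetical warehouse table for 850 line items. Each row carries the
# interchange and transaction control numbers as part of its primary key,
# mirroring how loaded data is tied back to its interchange.
conn.execute("""
    CREATE TABLE edi_850_item (
        interchange_id  TEXT,     -- ISA13 control number
        transaction_id  TEXT,     -- ST02 control number
        line_number     INTEGER,
        item_id         TEXT,
        quantity        INTEGER,
        unit_price      REAL,
        PRIMARY KEY (interchange_id, transaction_id, line_number)
    )
""")

# Load phase: every transaction lands in the warehouse with its keys.
conn.executemany(
    "INSERT INTO edi_850_item VALUES (?, ?, ?, ?, ?, ?)",
    [
        ("000000101", "0001", 1, "WIDGET-A", 10, 2.50),
        ("000000101", "0001", 2, "WIDGET-B", 4, 9.99),
    ],
)

# Extract phase: when the business later asks for another field
# (say, unit_price), only this SELECT changes -- no schema edits,
# no map changes, no redeploys.
rows = conn.execute(
    "SELECT item_id, quantity, unit_price FROM edi_850_item "
    "WHERE interchange_id = ?",
    ("000000101",),
).fetchall()
print(rows)  # [('WIDGET-A', 10, 2.5), ('WIDGET-B', 4, 9.99)]
```

Rerunning a past month of data is then just rerunning the query over rows that are already in the warehouse, rather than pushing old files back through a redeployed BizTalk process.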
Items that come with the EDI Warehouse package:
- A custom bulk load adapter, because the SQL adapter runs out of memory on large transactions
- The 820, 850, and 856 transactions, each with table definitions, a map into the warehouse, and the mapping schema for the load process
Any other ANSI X12 transaction can be purchased at the prices listed on the Products/Services page.
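The memory problem the bulk load adapter works around comes from materializing an entire large transaction before inserting it. One common fix, sketched below under assumptions, is to stream rows into the database in fixed-size batches so memory use stays flat regardless of interchange size. The `edi_segment` table, the batch size, and the generator of segments are illustrative, not the adapter's actual implementation.

```python
import sqlite3
from itertools import islice

def load_in_batches(conn, rows, batch_size=500):
    """Insert rows in fixed-size chunks instead of all at once."""
    it = iter(rows)
    total = 0
    while True:
        batch = list(islice(it, batch_size))  # pull at most batch_size rows
        if not batch:
            break
        conn.executemany("INSERT INTO edi_segment VALUES (?, ?)", batch)
        total += len(batch)
    conn.commit()
    return total

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE edi_segment (seq INTEGER, payload TEXT)")

# A generator stands in for a large EDI file parsed segment by segment;
# nothing is held in memory beyond the current 500-row batch.
segments = ((i, f"SEG*{i}~") for i in range(10_000))
print(load_in_batches(conn, segments))  # 10000
```

The same shape carries over to SQL Server with a `bcp`/`SqlBulkCopy`-style sink: the key point is that the producer is a stream, not an in-memory list.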