In our integration projects, especially nowadays with Azure Integration Services, we sometimes need to work with Base64-encoded strings. This is very common with some connectors inside Logic Apps, where the request or response is in Base64, like the Service Bus or HTTP connectors. And when we need to debug or troubleshoot our solution or business process, we need to understand the request and response payloads, which means that most of the time we need to decode the Base64 string.
And I know what you guys are thinking… I used to think the same way! Why do you need a Windows tool when we have plenty of online tools, like https://www.base64decode.org/, that can easily and elegantly do the job?
And my straightforward and honest answer is privacy/security! The problem with using these online tools is that we never know what they are doing behind the scenes. Are you sure they are not keeping logs of the inputs we provide and the resulting outputs? That is the magic question, because those Base64 strings often contain sensitive (private) information, like a connection string or usernames and passwords, and we need to be careful about where we put that information.
I have an amazing ethical-hacking friend, Nino Crudele, and every time I speak with him about security, I become more suspicious about specific tools available on the web and, in general, about how to secure my personal stuff and my solutions. Speaking with Michael Stephenson, something we try to do regularly, we share these concerns, and it was Michael who raised my suspicions about the online decoding tools. Since that talk, I have stopped using them and decided to create my own tool.
Base64 Decode Windows tool
This is a very simple Windows tool that allows you to decode your data locally. It is a handy tool if you have to deal with the Base64 format.
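Under the hood, the core of such a tool is tiny. Here is a minimal offline sketch of what it does (in Python for illustration; the actual tool is a Windows application, and this is not its source code):

```python
import base64

def decode_base64(encoded: str) -> str:
    """Decode a Base64 string locally, so sensitive payloads
    (connection strings, credentials) never leave your machine."""
    return base64.b64decode(encoded).decode("utf-8")

# Example: decoding a payload offline instead of pasting it into a website
print(decode_base64("SGVsbG8sIEJpelRhbGsh"))  # -> Hello, BizTalk!
```

Running something like this on your own machine removes the privacy concern entirely: the encoded data never touches a third-party server.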
To avoid raising the same suspicions about this tool, the source code is available on GitHub!
Hope you find this useful! So, if you liked the content or found it useful and want to help me write more content, you can buy (or help buy) my son a Star Wars Lego!
Luis Rigueira | Member of my team and one of the people responsible for developing this tool.
Author: Sandro Pereira
Sandro Pereira lives in Portugal and works as a consultant at DevScope. In the past years, he has been working on implementing Integration scenarios both on-premises and cloud for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO etc.
He is a regular blogger, international speaker, and technical reviewer of several BizTalk books all focused on Integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices”. He has been awarded MVP since 2011 for his contributions to the integration community.
For those who are not familiar with it, this project is a set of custom pipeline components (libraries) that can be used in receive and send pipelines to extend BizTalk Server's out-of-the-box pipeline capabilities.
BizTalk PDF2Xml Pipeline Component
BizTalk PDF2Xml Pipeline Component is, as the name suggests, a decode component that transforms the content of a PDF document into an XML message that BizTalk Server can understand and process. The component uses the iTextSharp library to extract the PDF content. The original source code was available on CodePlex (pdf2xmlbiztalk.codeplex.com), but I couldn't validate who the original creator was. The component first transforms the PDF content to HTML and then, using an external XSLT, applies a transformation to convert the HTML into a known XML document that BizTalk Server can process.
My team and I kept that behavior, but we extended the component and added the capability to convert the content, by default, into a well-known XML format without the need to apply an XSLT transformation directly in the pipeline.
How does this component work?
This is the list of properties that you can set up on the PDF2XML pipeline component:
A value that decides whether you want the component to transform the PDF content to HTML or to XML
A value that decides whether you want to apply a transformation in the pipeline component or not
The path to an XSLT transformation file
Once you pass a PDF through this component, and depending on how you configure it, the outcome can be:
All the PDF content in HTML format;
All the PDF content in XML format;
Part of the PDF content in XML format (if you apply a transformation)
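The decision flow described above can be sketched as follows (a simplified Python illustration; the real component is a .NET pipeline component that extracts the content with iTextSharp, which is omitted here, and the function name and XML envelope are my own, not the component's actual API):

```python
from xml.sax.saxutils import escape

def pdf2xml(extracted_html: str, to_xml: bool = True,
            apply_transform: bool = False, xslt_path: str = "") -> str:
    """Sketch of the component's decision flow. 'extracted_html' stands in
    for the HTML that iTextSharp would produce from the PDF content."""
    if not to_xml:
        return extracted_html                       # outcome 1: HTML
    if apply_transform and xslt_path:
        # In the real component an external XSLT file is applied here to
        # keep only part of the content; Python's standard library has no
        # XSLT engine, so this branch is only indicated.
        raise NotImplementedError("apply the XSLT at xslt_path")
    # outcome 2: wrap the full content in a well-known XML envelope
    return f"<PdfContent>{escape(extracted_html)}</PdfContent>"
```

The sketch only shows how the three configuration properties select between the three possible outcomes.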
Unfortunately, in my initial tests, this component worked well with some PDF files but simply ignored the content of others. Nevertheless, I am making it available as a proof of concept.
THIS COMPONENT IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND.
You can download BizTalk PDF2Xml Pipeline Component from GitHub here:
For those who follow me, you may already know my BizTalk Pipeline Components Extensions Utility Pack project, available on GitHub. The project is a set of custom pipeline components (libraries) that can be used in receive and send pipelines to extend BizTalk Server's out-of-the-box pipeline capabilities.
This month, my team and I updated this project with another new component: the ODBC File Decoder Pipeline Component.
ODBC File Decoder Pipeline Component
ODBC File Decoder Pipeline Component is, as the name suggests, a decode component that you can use in a receive pipeline to process DBF or Excel files. It may also be possible to process other ODBC types (perhaps with minor adjustments). The component uses basic ADO.NET to parse the incoming DBF or Excel files into an XML document.
While consuming DBF files is not a typical scenario, we can't say the same about Excel files. We often find these requirements, and there isn't any out-of-the-box way to process these files.
Honestly, I don't know who the original creator of this custom component is. I came across this old project, which I found interesting, while organizing my hard drives. However, when I tested it on BizTalk Server 2020, it wasn't working correctly, so my team and I improved and reorganized the code of this component to make it work as expected.
How does this component work?
If we take as an example an Excel file (.xls) that has a table with:
We can use the ODBC File Decoder Pipeline Component to process these documents. First, we need to create a custom pipeline component and add this component to the decode stage. Once we publish this pipeline, we can configure it as follows to be able to process these types of Excel documents:
ConnectionString: ODBC Connection String
For Excel documents: Provider=Microsoft.Jet.OLEDB.4.0;Extended Properties=Excel 8.0;
For DBF: Provider=Microsoft.Jet.OLEDB.4.0;Extended Properties=dBASE IV;
DataNodeName: Rows node name for the generated XML message
For example: Line
Filter: Filter for Select Statement
Leave it empty
NameSpace: Namespace for the generated XML message
For example: http://ODBCTest.com
RootNodeName: Root node name for the generated XML message
For example: TesteXMLResult
SqlStatement: Select Statement to Read ODBC Files
For example: SELECT * FROM [Sheet1$]
TempDropFolderLocation: Support temp folder for processing the ODBC Files
For example: C:\Temp\odbcfiles
TypeToProcess: Type of file being processed
0 to process Excel
1 to process DBF
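The row-to-XML mapping that the component performs can be sketched like this (a Python illustration using sqlite3 as a stand-in for the ADO.NET/ODBC data source; the parameter names mirror the configuration properties above, but this is not the component's actual source code):

```python
import sqlite3
from xml.sax.saxutils import escape

def rows_to_xml(cursor, root_node="TesteXMLResult",
                data_node="Line", namespace="http://ODBCTest.com"):
    """Turn the rows returned by a SELECT statement into the kind of
    XML message the pipeline component produces."""
    cols = [d[0] for d in cursor.description]
    lines = [f'<ns0:{root_node} xmlns:ns0="{namespace}">']
    for row in cursor:
        fields = "".join(f"<{c}>{escape(str(v))}</{c}>"
                         for c, v in zip(cols, row))
        lines.append(f"  <{data_node}>{fields}</{data_node}>")
    lines.append(f"</ns0:{root_node}>")
    return "\n".join(lines)

# Stand-in for the Excel sheet queried by "SELECT * FROM [Sheet1$]"
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Sheet1 (Address TEXT)")
con.executemany("INSERT INTO Sheet1 VALUES (?)",
                [("187 Main Street",), ("182 Front Street",),
                 ("183 Main Street",)])
print(rows_to_xml(con.execute("SELECT * FROM Sheet1")))
```

Each row of the query result becomes one DataNodeName element (here, Line) under the configured root node and namespace.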
The outcome will be an XML message similar to this:
<?xml version="1.0" encoding="utf-8"?>
<ns0:TesteXMLResult xmlns:ns0="http://ODBCTest.com">
  <Line>
    <Address>187 Main Street</Address>
  </Line>
  <Line>
    <Address>182 Front Street</Address>
  </Line>
  <Line>
    <Address>183 Main Street</Address>
  </Line>
</ns0:TesteXMLResult>
Does it work with Xlsx files?
Honestly, I haven't tried it yet. I didn't have that requirement, and I only remembered this scenario now that I'm writing this post, but the component should be able to process them. The only thing I know is that we would need to use a different connection string, something similar to this:
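For reference only, and as an assumption on my part rather than something tested with this component, the commonly documented OLE DB connection string for the .xlsx format uses the ACE provider instead of Jet:

```
Provider=Microsoft.ACE.OLEDB.12.0;Extended Properties="Excel 12.0 Xml";
```

Note that the ACE provider must be installed on the BizTalk Server machine (it ships with the Microsoft Access Database Engine redistributable), so treat this as a starting point for your own tests.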
This library includes a suite of data-encoding functoids that you can use inside the BizTalk mapper.
The main purpose of encoding is to transform data so that it can be properly (and somewhat safely) consumed by a different type of system. Don't get me wrong here: the goal is not to keep information secret, but rather to ensure that it can be properly consumed.
Encoding transforms data into another format using a scheme that is publicly available, so that it can easily be reversed. It does not require a key: the only thing required to decode it is the algorithm that was used to encode it, such as encoding/decoding to ASCII, BASE64, or UNICODE.
And of course, decoding is the opposite: it is the process of converting an encoded format back into the original sequence of characters.
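The key point, that encoding is reversible with nothing more than knowledge of the scheme, is easy to see in code (a Python sketch, not part of the functoid library itself):

```python
# Encoding is reversible without any key: the scheme is all you need.
text = "Olá, integração!"            # includes non-ASCII characters
encoded = text.encode("utf-8")       # str -> bytes using the UTF-8 scheme
decoded = encoded.decode("utf-8")    # bytes -> str, same scheme, no secret
assert decoded == text
print(encoded)                       # the publicly reversible byte form
```

Anyone who knows the scheme (UTF-8 here) can reverse the transformation, which is exactly why encoding must never be confused with encryption.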
This project includes the following Custom Functoids:
BASE64 Encoder Functoid: This functoid allows you to convert a string into a BASE64-encoded string.
The functoid takes one mandatory input parameter:
A string that represents the text that you want to encode to BASE64
The output of the functoid is a string. Example: U2FuZHJvIFBlcmVpcmE=
BASE64 Decoder Functoid: This functoid allows you to decode BASE64-encoded text strings back into the original sequence of characters.
The functoid takes one mandatory input parameter:
A BASE64 string representation that you want to decode to a text string
The output of the functoid is a string. Example: Sandro Pereira
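The behavior of these two functoids maps directly onto standard Base64 routines. In Python terms (an illustration only; the functoids themselves are .NET code, and these function names are mine):

```python
import base64

def base64_encoder(text: str) -> str:
    """Equivalent of the BASE64 Encoder Functoid."""
    return base64.b64encode(text.encode("utf-8")).decode("ascii")

def base64_decoder(encoded: str) -> str:
    """Equivalent of the BASE64 Decoder Functoid."""
    return base64.b64decode(encoded).decode("utf-8")

print(base64_encoder("Sandro Pereira"))        # U2FuZHJvIFBlcmVpcmE=
print(base64_decoder("U2FuZHJvIFBlcmVpcmE="))  # Sandro Pereira
```

The two operations are exact inverses, matching the encoder/decoder pairing of the functoids in the mapper.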
BizTalk Mapper Extensions UtilityPack
BizTalk Mapper Extensions UtilityPack is a set of libraries with several useful functoids to include and use in maps, which provide an extension of the BizTalk Mapper's capabilities.
Where to download?
You can download this functoid, along with all the existing ones, from the BizTalk Mapper Extensions UtilityPack project here: