Session Recap: Microsoft Azure Day – Gurgaon

After the success of organizing events in Coimbatore and Bengaluru under the TechMeet360 branding, BizTalk360 organized its first event in Gurgaon on February 26th (Sunday). The event “Microsoft Azure Day – Gurgaon” was a full-day session on Microsoft Azure technologies.

Microsoft Azure Day – 26th Feb 2017

There were about 77 registrations for this event on Meetup, and 28 attendees turned up on a Sunday morning ready to learn something new.

Azure Web Apps – Deep Dive

The sessions started off at 10:30 am with my session on Azure Web Apps – Deep Dive.

It was a learn-by-doing workshop. In the first few minutes, I covered topics such as WebJobs, Deployment Slots, Application Insights, Continuous Delivery and Traffic Manager, and then went ahead with full-on demos. The session was very interactive and got stretched to two and a half hours on audience demand. It was great fun, and we learned a lot together, as everyone was very interested.

After the first session, it was time for a much needed lunch. Many thanks to Microsoft for sponsoring our lunches, courtesy Subway :-).

We all missed Dhananjay Kumar as he fell sick and couldn’t join us for the event (as a speaker). We wish him a “Speedy Recovery” 🙂

After lunch, at 2:00 PM, Abhimanyu K Vatsa started his talk on Azure CDN. Abhimanyu discussed the fundamentals of Azure CDN, its features and how to use it.

Azure CDN – Accelerated Availability & Performance

The Azure CDN session concluded with a live demo and was appreciated by everyone.

Learn building Nano Services using Azure Functions

I took the podium one more time and presented on Azure Functions. The session covered the features, reliability and scalability of Azure Functions, with a couple of quick demos.

We did some Q&A where I shared some curated links from internet helpful for Azure Certifications preparations. The list is available here on my github repo here: AzureCertification Prep Guidelines

We had some nice goodies, courtesy of BizTalk360. We did a raffle and distributed them among the attendees. The raffle round was great fun as everyone was eager to recall everything we discussed throughout the day and answer first! The best part of the day!

We concluded the event at 4:30 pm with cheerful goodbyes! Thank you to all the attendees for making it happen. Special thanks to Deepak Rajendran for the accommodation and all the support throughout the event.

See you all at the next event! Happy Learning 🙂

Author: Sunny Sharma

Sunny Sharma works at BizTalk360 as a Senior Software Engineer. He is a Microsoft MVP with 6 years of experience in development using Microsoft .NET technologies. View all posts by Sunny Sharma

Microsoft Integration Weekly Update: Feb 27

Do you find it difficult to keep up to date with all the frequent updates and announcements in the Microsoft Integration platform?

The Integration weekly update can be your solution. It’s a weekly update on topics related to Integration – enterprise integration, robust & scalable messaging capabilities and Citizen Integration capabilities – empowered by the Microsoft platform to deliver value to the business.

If you want to receive these updates weekly, then don’t forget to Subscribe!

On-Premise Integration:

Cloud and Hybrid Integration:

Feedback

Hope this is helpful. Please feel free to let me know your feedback on the Integration weekly series.

Great Week at Ignite Australia!

Last week I had the opportunity to attend Microsoft Ignite on the Gold Coast, Australia. Even better – I had a free ticket on account of agreeing to serve as a Technical Learning Guide (TLG) in the hands-on labs. This opportunity is only open to Microsoft Certified Trainers (MCTs) and competition was evidently keen this year – so I am glad to have been chosen. Catching up with fellow MCTs like Mark Daunt and meeting new ones such as Michael Schmitz was a real pleasure. Of course the downside was that I missed quite a few breakout sessions during the times I was rostered. Nevertheless, I still got to see some of the sessions most important to me, particularly those centred around Azure and integration technologies. Please have a read of my summary of these on my employer’s blog.

By far, this was my best Australian Ignite/Tech-Ed event experience, for many reasons, including:

  1. The Pro-Integration team from Redmond came all the way out to Australia to show everyone what the product group is doing with Logic Apps, Flow, Service Bus, and BizTalk Server
  2. I was chosen to present an Instructor-Led Lab in Service Fabric – my first ever speaking engagement at Ignite
  3. I had the rare opportunity to catch up with some fellow MVPs from Perth and Europe.

It was truly phenomenal to see enterprise integration properly represented at an Australian conference, as it is typically overlooked at these events. In addition to at least four breakout sessions on hybrid integration, Scott Guthrie actually performed a live demo of Logic Apps in his keynote! This was a good shout-out to the product team that has worked so hard to bring this technology up to the usability level it now enjoys. I’m glad that Jim Harrer, Jeff Hollan, Jon Fancey and Kevin Lam were there to see it!

Teaching the lab in Service Fabric was a thrilling experience, but not without some challenges. The lab itself was broken and required a re-write of the second half, which I had pre-prepared and uploaded to OneDrive here so the students could progress. The main lab content is only available to Ignite attendees, however if you want to have a go at a similar lab you can try these ones available from Microsoft:

Despite the frustration that some attendees expressed about the lab errata and the poor performance of the environment, I was pleased that all the submitted feedback relating to the speaker was very positive! 🙂

Finally, perhaps the best part of events like these is the ability to catch up with old friends and meet some new ones. It was a pleasure to hang out with Azure MVP Martin Abbott from Perth and meet a few of his colleagues. It was also great to see Eldert Grootenboer and Steef-Jan Wiggers from the Netherlands, who happened to travel to Australia this month on holiday and to speak at some events. Steef-Jan also took time to include me in a V-Log series he’s been working on with various integration MVPs, recording his 3-minute interview with me at the top of Mount Coot-tha on a sunny Brisbane Saturday! And Mexia’s CEO Dean Robertson and I got to enjoy a nice dinner out with the Microsoft product group and the MVPs.

All good things must come to an end, but it was definitely a memorable week! Now it’s time to start getting ready for the Brisbane edition of the Global Integration Bootcamp on Saturday, 25th March, to be followed not long after by the Global Azure Bootcamp on Saturday 22nd April! I’ve got a few demos and presentations to prepare – but now with plenty of inspiration from Ignite!

Building a composite service using Logic Apps

Microsoft made Logic Apps generally available (GA) a few months ago (end of July 2016) and this Azure service has evolved ever since. Each month the product group responsible for the service – Jeff Hollan and Kevin Lam, together with Jon Fancey – presents updates through a YouTube channel. The latest update (26 January 2017) included, for instance, browsing enhancements for connectors, the switch case condition (similar to a C# switch), JSON Parse to tokenize properties, new SaaS connectors, operations and upcoming new features.

Besides this evolution, and thus an increase in maturity, the adoption of Logic Apps is rising. A colleague at Macaw and I have built and deployed a Logic App solution for a customer that has now been running in production for over a month without any issues. And we are not the only ones, as there are many more customer cases. Businesses are starting to see the added value brought by Logic Apps, with its connectors, short lead times and no requirement for servers. Logic Apps run in Azure and are serverless, i.e. you can have an enterprise-ready integration solution running in the cloud for low cost and maintenance (TCO).

This blog post will provide some additional insight into the power and value of Logic Apps, which can be seen as a game changer for integration in the cloud. Logic Apps fall under the moniker integration Platform as a Service (iPaaS), which is exactly what the service is, and I will quote Wikipedia here to emphasize this:

“The most common cloud-based integration service model is iPaaS (Integration Platform as a Service), which is a suite of cloud services enabling customers to develop, execute and govern integration flows between disparate applications. Under the cloud-based iPaaS integration model, customers drive the development and deployment of integrations without installing or managing any hardware or middleware.”

You provision a service/host, define the logic and execute it in Azure. An iPaaS service is characterized as follows (according to Wikipedia), with the Azure counterpart of each characteristic:

  • Deployed on a multi-tenant, elastic cloud infrastructure – the Logic App service
  • Subscription model pricing (OPEX, not CAPEX) – the Consumption plan
  • No software development (required connectors should already be available) – the Logic App definition, the available connectors (managed and/or custom) and actions
  • Users do not perform deployment or manage the platform itself – ARM templates and the Azure Portal for management
  • Presence of integration management & monitoring features – the Logic App overview (blades) and integration with OMS

Scenario

In this blog post I would like to share another use case, or scenario, for a Logic App to demonstrate its value and some of the new features. We will build a scenario that demonstrates API integration, where the focus is on connecting different APIs. One common reason for doing this is to automate more of a business process, and Logic Apps can facilitate the integration of one or multiple APIs.

A Logic App will act as a composite service that, based on a set of parameters, returns a result containing weather and event details. The parameters will be the city for which you would like to know the weather details for the coming 7 days and the events in that time frame. The Logic App will call two APIs hosted in API Management, and combine the result of each call into one single response to the client. The complexity of calling the APIs is abstracted away in API Management, and the composition of the responses of both APIs is done in the Logic App.

Building the API

The APIs are generally available public APIs, which will be abstracted away from the Logic App by creating a proxy in API Management. In API Management, we create an API, add operations to it and tie them to the actual API and its operations, combined with policies to manage security and other quality aspects. We provide a name for the API, set the Web service URL (the endpoint of the API) and can observe the resulting Web API URL, i.e. the API Management instance name with the standard DNS addition .azure-api.net plus a suffix.

The Web API will have an operation daily that connects to the daily forecast operation of the weather API (api.openweathermap.org) and its parameters.

A policy will be applied to the operation to add an APPID key to the query parameters.

<policies>
        <inbound>
                <set-query-parameter name="APPID" exists-action="append">
                        <value>67b0fe39a73fdf8c5045f9538270cee4</value>
                        <!-- for multiple parameters with the same name add additional value elements -->
                </set-query-parameter>
                <rewrite-uri template="daily" />
        </inbound>
        <backend>
                <forward-request />
        </backend>
        <outbound />
        <on-error />
</policies>

The policy ensures that the operation can be executed, as the APPID is mandatory in the query parameters. However, the consumer of the Azure API is unaware of this, as it is abstracted away by the policy. The same process will be followed for the event API (api.eventful.com).
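
As a sketch of what that could look like – the events/search operation and the app_key query parameter are assumptions based on the public Eventful API, and the key is referenced from an API Management named value rather than hard-coded – the inbound policy for the event API would follow the same pattern:

<policies>
        <inbound>
                <!-- Sketch: inject the Eventful application key; the parameter name and operation are assumptions -->
                <set-query-parameter name="app_key" exists-action="append">
                        <value>{{eventful-app-key}}</value>
                </set-query-parameter>
                <rewrite-uri template="events/search" />
        </inbound>
        <backend>
                <forward-request />
        </backend>
        <outbound />
        <on-error />
</policies>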

Building the function

An HTTP function will merge the two bodies, i.e. the JSON strings, into one, since currently there is no built-in action for this. Hence, we will build a function that accepts a string containing the two JSON bodies.

In general, when it comes to certain operations on your data within a Logic App, you will notice that a function can help provide that specific capability, such as merging JSON strings or perhaps date/time manipulation. In our scenario, we’ll merge the two JSON responses into one using a function that can be called through HTTP. It is one big string that will be sent to the function, and a webhook-type function expects JSON! An HTTP function, however, can accept a string, manipulate it and return JSON.

The two JSON response bodies are concatenated when calling the function, which means the braces }{ need to be replaced by a comma ‘,’ to form valid JSON again.

Building the Logic App Definition

The Logic App itself is a service, i.e. the infrastructure is in Azure, and once it has been provisioned you can define your logic. Logic in this context is a definition with a trigger and one or more actions. To define the trigger and actions, a designer is available in which you add the trigger followed by actions and/or workflow-specific actions, for instance a condition, switch case or scope.
The switch case is one of the new additions.

The Logic App definition of our solution will have an HTTP request trigger, followed by an action to consume the weather API and later the event API. It contains a switch to handle the possible responses from the API, i.e. HTTP statuses like 200, 400 or other; based on those responses, the switch controls the various outcomes of your Logic App. A 200 means proceed to call the next API or create some sort of response to the caller of your Logic App. In the status 200 branch, a JSON Parse action is used to tokenize the JSON response from the weather API; JSON Parse is another recently added action in Logic Apps, as discussed in the introduction. Subsequently, the event API operation search is called and, if the HTTP status is 200, the left branch calls the function through HTTP, presenting both response bodies, to create a JSON response, which is returned to the client. If the call to search fails, the right branch returns a custom response.

Test the Logic App

To test the Logic App, we will use Postman to send a request (POST) with a payload. This payload will contain a city, a start date and an end date.
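
As an illustration only – the endpoint is the callback URL of the request trigger, shown here with placeholders, and the property names city, startDate and endDate are assumptions rather than the names used in the actual solution – such a request could look like this:

POST https://prod-00.westeurope.logic.azure.com/workflows/{workflow-id}/triggers/manual/paths/invoke?api-version=2016-06-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig={sas-signature} HTTP/1.1
Content-Type: application/json

{
  "city": "Amsterdam",
  "startDate": "2017-02-27",
  "endDate": "2017-03-05"
}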

In the Azure Portal we can observe the details of the Logic App run and drill down into each of the actions. This is a great feature of Logic Apps: you can see what happens and even resubmit a run, which is very useful to investigate where failures happen, or to change the flow in the designer and then resubmit. You do have to use a client like Postman to resubmit!

The Logic App also provides monitoring capabilities in its blade, such as metrics. You can monitor your runs and select metrics to see how your Logic App definition, i.e. the flow, performs.

Metrics are not the only feature within the monitoring section of a Logic App; you also have diagnostics and logs that you can examine. Thus, it adheres to the iPaaS characteristic of “Presence of integration management & monitoring features”.

Considerations

Logic Apps can add value to the business by automating processes, which are agile by the nature of this Azure service. A process can be created and operational in hours instead of days, and it can be changed on the fly or replaced by a new process, i.e. a new Logic App. Logic Apps will bring IT closer to the business than ever before. The tremendous number of connectors lowers the connectivity barrier, and APIs can be tied together, as shown in this blog post, into a composite service or into a business flow/process (orchestration).
However, the power and agility of Logic Apps also bring the risk of losing grip on your processes if everyone can easily build and run these flows, i.e. you will need some form of governance to prevent the business from going completely wild with this service. With a great deal of flexibility and agility and a lower barrier to building integrations, some form of governance will be necessary to prevent overlap, over-consumption and conflicts.

Logic Apps can be viewed as a counterpart to BizTalk Server, a server product from Microsoft suitable for application integration. In BizTalk, business processes can be automated by connecting two or more applications or services. In this blog post we have seen two API calls combined to create one response, and Logic Apps, with their various connectors, can do the same things we used to do with BizTalk on premises. With BizTalk, we can create a composite application similar to what we have shown in this post by providing a common front end to a group of existing applications. Hence, the value BizTalk provides on premises is now available through Logic Apps. The most important differences are hosting and pricing, i.e. cloud versus on premises, and license fees versus pay-as-you-go.

Logic Apps can be called directly through HTTP and are thus exposed to the world wide web, i.e. everyone, which means they need some form of security. Out of the box, a shared access signature (SAS) is bound to the endpoint and clients are required to provide it in the request. Some see this as a form of security by obscurity; to further enhance security you can use API Management. API Management abstracts away the security and lets you enforce other means of securing access to the endpoint, such as OpenID Connect or OAuth 2.0.
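
For example, a minimal sketch of such a policy – assuming Azure AD as the OpenID provider; the tenant and audience values are placeholders, not part of this solution – could validate a JWT on the API Management operation that fronts the Logic App:

<inbound>
        <base />
        <!-- Sketch: require a valid OAuth 2.0 / OpenID Connect token before forwarding to the Logic App -->
        <validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized">
                <openid-config url="https://login.microsoftonline.com/{tenant}/.well-known/openid-configuration" />
                <audiences>
                        <audience>{client-application-id}</audience>
                </audiences>
        </validate-jwt>
</inbound>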

Supporting a solution such as the one described in this post can be a challenge, since you need to keep an eye on the API keys at the API suppliers (the weather API and the event API), the key(s) for the function, the SAS signature in the Logic App endpoint, and the consumption of the Logic App. Each action in a Logic App counts as Azure consumption and has a price, which can easily be monitored by observing the resource costs in a resource group; these are the operational costs of executing Logic Apps. You also have to factor notifications and alerting into your solution, which is easy to set up using alert rules on your resources.

Call to Action

You can learn to work with the service yourself, learn more about it, or express your wishes:

Author: Steef-Jan Wiggers

Steef-Jan Wiggers has over 15 years’ experience as a technical lead developer, application architect and consultant, specializing in custom applications, enterprise application integration (BizTalk), Web services and Windows Azure. Steef-Jan is very active in the BizTalk community as a blogger, Wiki author/editor, forum moderator, writer and public speaker in the Netherlands and Europe. For these efforts, Microsoft has recognized him as a Microsoft MVP for the past 5 years. View all posts by Steef-Jan Wiggers

XXVIII Porto.Data Community Meeting | February 28, 2017 | SharePoint, PowerApps & Microsoft Flow

This post is for the BizTalk Server and other Portuguese communities: the XXVIII Porto.Data Community meeting will be held on February 28, 2017, between 18:45 and 21:30, at the Science and Technology Park of the University of Porto (UPTEC) in Oporto.

For me it is a pleasure to return once again to this community, this time with a topic about SharePoint, PowerApps and Microsoft Flow: “SharePoint integration: How can PowerApps and Microsoft Flow give power to your SharePoint users”.

Abstract

Every organization faces constant pressure to do more with less. While technology is often key to operating more effectively and efficiently, cost and complexity have often prevented organizations from taking maximum advantage of the potential benefits. The growth of SaaS (software as a service) has lowered barriers – no need to deploy servers or to install and configure complex software systems. Just sign up and go.

Microsoft Flow and Microsoft PowerApps will help these people (normally business users) achieve more.

We know not every business problem can be solved with off-the-shelf solutions. But developing custom solutions has traditionally been too costly and time consuming for many of the needs teams and departments face, especially those projects that integrate across multiple data sources or require delivery across multiple devices from desktop to mobile. As a result, too many technology needs end up unsolved or under-optimized. We piece together spreadsheets, email chains, SharePoint and/or manual processes to fill in the gaps.

PowerApps and Microsoft Flow are both aimed squarely at these gaps. They give people who best understand their needs and challenges the power to quickly meet them, without the time, complexity and cost of custom software development.

In this session, we will look at these two new offerings from Microsoft: PowerApps and Flow. What are they? How can I use them? In particular, we will walk through and create from scratch some live demos showing how to create a new PowerApp that connects to a list stored in SharePoint Online, and how to create a new Microsoft Flow that connects to a list stored in SharePoint Online.

Agenda

18:45 – Welcome reception

18:50 – Community News

19:00 – “Elasticsearch – First Glance” – Vitor Saraiva – Software Architect at Farfetch & Mário Barbosa – Engineering Lead at Farfetch

20:10 – Coffee break

20:30 – “SharePoint integration: How can PowerApps and Microsoft Flow give power to your SharePoint users” – Sandro Pereira – Azure MVP – DevScope

21:15 – Closure

21:20 – Prize draw

21:30 – Dinner (optional)

This is a free event with very limited places that you will not want to miss – register now!

Join us and come to spend a different carnival night! We are waiting for you.

Author: Sandro Pereira

Sandro Pereira lives in Portugal and works as a consultant at DevScope. In the past years, he has been working on implementing Integration scenarios both on-premises and cloud for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO etc. He is a regular blogger, international speaker, and technical reviewer of several BizTalk books all focused on Integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices”. He has been awarded MVP since 2011 for his contributions to the integration community. View all posts by Sandro Pereira

TUGA IT 2017 Integration Track: Call for speakers

After the success of the first year, TUGA IT is back for its second edition on May 18-20 at Microsoft Portugal’s headquarters in Lisbon. It is one of the broadest and largest IT conferences in Portugal, covering topics like SharePoint, Office 365, Data Platform, Azure, Integration (BizTalk, API Management, App Service, Logic Apps, IoT, Functions, …) and Programming. This 3-day conference is organized by the TUGA Association, with the involvement of several technical communities and Microsoft MVPs, of which I am one. The TUGA Association, a non-profit association whose main purpose is to promote knowledge, is also behind events such as SharePoint Saturday and SQL Saturday.

The Call for Speakers is now open until March 1st, 2017. So, if you’re interested in delivering a session, hurry up and follow these steps:

  • Register on the website as a speaker
  • Log in to the website using the account you have just registered
  • Submit your sessions

There are already several sessions submitted to the event, but I encourage everyone – whether they are MVPs, Microsoft product group members or community enthusiasts – who wants to talk about the following topics:

  • BizTalk Server 2016: new capabilities, open source tools or frameworks, …
  • Logic Apps and Enterprise Integration Pack
  • Building reliable and scalable cloud messaging and hybrid integration solutions with Service Bus
  • Event Hubs
  • IoT
  • API Management
  • Hybrid Integration using BizTalk Server and Logic Apps, or connecting to on-prem resources using the On-Premise Gateway
  • Microsoft Flow

or any other related integration topic, to submit a session to the Integration track. Everyone can have a shot at speaking at one of the best IT conferences in Portugal.

The organization is preparing a lot of new stuff for this second edition, but it is still too early for announcements. Stay tuned!

Author: Sandro Pereira

Sandro Pereira lives in Portugal and works as a consultant at DevScope. In the past years, he has been working on implementing Integration scenarios both on-premises and cloud for various clients, each with different scenarios from a technical point of view, size, and criticality, using Microsoft Azure, Microsoft BizTalk Server and different technologies like AS2, EDI, RosettaNet, SAP, TIBCO etc. He is a regular blogger, international speaker, and technical reviewer of several BizTalk books all focused on Integration. He is also the author of the book “BizTalk Mapping Patterns & Best Practices”. He has been awarded MVP since 2011 for his contributions to the integration community. View all posts by Sandro Pereira

Integrating Apache Active Message Queue (AMQ) with BizTalk Server – Publishing messages Part 2 (extra linefeed)

In a previous blog post I wrote about publishing a message to an AMQ. The documentation says “ActiveMQ implements a RESTful API to messaging which allows any web capable device to publish or consume messages using a regular HTTP POST or GET.”. In this article I want to share an issue that we have not been able to solve: an HTTP POST of the string “blah” to an Apache AMQ, in our hands, always ends up with an extra linefeed character appended, i.e. “blahLF”.

We posted a positional flat file, which does not contain any end-of-line (EOL) character, to an AMQ as shown below.

[Screenshot: the posted flat file message, containing no EOL character]

We also used Fiddler to confirm that the payload did not have an EOL character. The capture is shown below:

POST http://server01d:8161/api/message/MISC.MAX.DATA?type=queue&clientId=misc_data_biztalk&message_type=7228&message_version=1&branch_number=0&message_length=1268 HTTP/1.1
Content-Type: text/plain; charset=utf-8
Authorization: Basic bRlzY19kYXnotRealRhbGs6cUFUMUpLQ2dZUQ==
User-Agent: Microsoft (R) BizTalk (R) Server 2013 R2
Host: server01d:8161
Content-Length: 1268
Expect: 100-continue
Connection: Keep-Alive

T201701160700102017011620001020170116NTheGarbageseEftpos              NTWLPalmerstonNorth              NPN19.25     N1 Y                                                                Y            NNZDN483741……28  N0317N0317                            NL A/ARAVENA ARAVENA                                                                                                             000000032d028299NTXN100717                                                                       N00N00      NAPPROVED                                Y                N0000030170532858N9030200048573514    Y                                                                Y                                                                Y                                                                Y                                NSCR200          NPXULETSL_LIVE                   N895038         N89503801N4640      N146XA                           N4020618221          N0 NUnknown   NJY                                Y  YNZN281       Y                                                                Y                                Y                                N483741    NContactEMV                      N250       N3028      N755139225           N20170116NBEAEA6171CE0C6FF

When we retrieve the message from the queue using an HTTP GET, or programmatically using STOMP, we always get a line feed as an EOL character, as shown below.

[Screenshot: the retrieved message, ending with a line feed character]

We believe it is the Apache REST API that is adding the additional LF character. Is this normal behaviour for the Apache AMQ REST API? We do not know how the REST API is implemented, but if it is using the STOMP protocol it may be normal to add a line feed character. I’m wondering if the STOMP frame is adding the LF. I don’t know enough about ActiveMQ to know for sure. See https://svn.apache.org/repos/asf/activemq/stomp/trunk/webgen/src/stomp10/specification.page which says:

We will use the augmented Backus-Naur Form (BNF) used in the HTTP/1.1 (rfc2616) to define a valid stomp frame:

LF                  = <US-ASCII new line (line feed) (octet 10)>
CHAR                = <any US-ASCII character (octets 0 - 127)>
OCTET               = <any 8-bit sequence of data>
DIGIT               = <any US-ASCII digit "0".."9">
NULL                = <octet 0>

frame-stream        = 1*frame

frame               = command LF
                      *( header LF )
                      LF
                      [ content ]
                      NULL
                      *( LF )

In summary, we have found that if we use the Apache REST API to POST a message to an AMQ, an extra line feed character is appended to the message. Does anyone know whether this is normal behaviour, or whether there is a way to suppress it?

BizTalk team wants your feedback

The blog has gone a little quiet (I am sorry for that), but I still want to share more with you as we go forward. I’ve moved from consultancy, to MVP for Integration, to Microsoft in technical sales, and I’m currently working as a Program Manager for BizTalk at Microsoft Corp. My passion for BizTalk is still there, and I feel the purpose of this blog remains to make your life as a BizTalk operator easier.

As BizTalk moves forward and we in the product group have committed to vNext, we continue our investment and want more feedback from our customers.

Please go to https://aka.ms/BTsUserVoice to cast your vote or add new suggestions so that we can make BizTalk better for you as an operator, customer, developer, or passionate integration expert.

Understanding the BizTalk Deployment Framework

Part 2 – Advanced setup

This is an article in a series of 2 about the BizTalk Deployment Framework (BTDF). This open source framework is often used to create installation packages for BizTalk applications. More information about this software can be found on CodePlex.
With this series of articles, we aim to provide a useful set of hands-on documentation about the framework.

The series consists of the following parts:

  • Part 1 – Introduction and basic setup
  • Part 2 – Advanced setup (this article)
    • Introduction
    • Custom Targets
    • Conditions
    • Host Groups
    • Executing third party executables
    • Tips and Tricks
    • Conclusion

Introduction

In the first part of this series we saw how you can set up a BizTalk Deployment project, add artefacts to the project, deploy variable settings for the different DTAP environments and generate distributable MSI packages.
In this second article we’ll show somewhat more advanced topics like Custom Targets, Conditions and Host Groups, and we’ll see how you can run third party executables from within your deployment. We’ll finish with a couple of Tips and Tricks.

Custom Targets

To be able to customize your deployments, the BizTalk Deployment Framework uses a concept called Custom Targets. These Custom Targets allow you to execute commands during certain phases of the deployment. We will describe a couple of Custom Targets and show how you can use them later on as well. The currently available Custom Targets are shown below.

Adding Custom Targets to your Deployment project

Before describing the Custom Targets, let’s first have a quick look on how to add Custom Targets to your Deployment project.
To keep your Deployment project nicely organized, it is a good idea to keep all the custom targets you will be using grouped together below all the BizTalk artefacts you will deploy. So the Custom Targets will be added at the end of your Deployment project.
Below is a picture which shows where the CustomRedist target is added to the Deployment project. This target was actually already added to the project during creation of the Deployment project.

As you can see, you can simply type the text <Target Name="..."></Target> and, at the ellipsis, provide the name of the Custom Target you are going to use. Everything you want to perform during the execution of the target can be entered between the start tag and the end tag. We’ll see examples of the kinds of commands which can be performed later in this article.

CustomRedist

This target is executed right after BTDF has finished copying files, so this is the ideal moment to, for example, add additional files to the MSI package, which will then be copied to the installation folder on deployment.
Below is a sample in which a folder is created and some files are copied to it:

    <MakeDir Directories="$(RedistDir)SQLScripts" />
    <CreateItem Include="SQLScripts*.*">
          <Output TaskParameter="Include" ItemName="SQLScripts" />
    </CreateItem>
    <Copy DestinationFolder="$(RedistDir)SQLScripts%(RecursiveDir)" SourceFiles="@(SQLScripts)"/>

CustomDeployTarget and CustomPostDeployTarget

The CustomDeployTarget target runs just before deployment starts, while the CustomPostDeployTarget target runs directly after deployment has finished. Either can be used if you want to deploy additional resources. Later we’ll describe how these targets are used to run third party executables, but you might also use these targets to create, for example, an EventLog source, as sketched below.
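
A minimal sketch of that EventLog example – the source name is hypothetical and the PowerShell call is just one possible approach, not part of the standard BTDF templates – could look like this:

    <Target Name="CustomPostDeployTarget">
        <!-- Hypothetical example: register an EventLog source for the application if it does not exist yet -->
        <Exec Command="powershell -NoProfile -Command &quot;if (-not [System.Diagnostics.EventLog]::SourceExists('MyBizTalkApplication')) { New-EventLog -LogName Application -Source 'MyBizTalkApplication' }&quot;" />
    </Target>
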
CustomUndeployTarget and CustomPostUndeployTarget

The CustomUndeployTarget target runs just before any undeployment is performed, while the CustomPostUndeployTarget target runs right after the undeployment has taken place. These targets can be used if you would like to clean up after undeployment, for example by removing any files that were created during deployment, just to be sure that you keep your environment nice and clean.

Conditions

The concept of Conditions can be used both in binding files and in the BTDF project file. With binding files this might be used to create different ports in different environments.
In the BTDF project file this feature can be used when, for example, you want to execute certain targets only in certain environments.
As applying Conditions is almost the same for both kinds of files, let’s just have a look at the latter.
Let’s assume that we want to execute an .exe file only for the Test and Production environments. To be able to execute certain targets just for certain environments, you need to do the following:
  • Add an Environment variable to your BTDF Settings file
  • Add the variable to your BTDF project file
  • Add a filter to the CustomPostDeployTarget

Add an Environment variable to your BTDF Settings file

From Visual Studio, open the SettingsFileGenerator.xml file in Excel and add a variable called Environment. Each environment will get a unique value. The picture below shows a settings file which has been extended with such a setting and contains unique values for all environments, namely DEV, SDEV, TEST and PROD.

Add the variable to your BTDF project file

To be able to use the Environment variable, you need to do the following:
• Open the Deployment.btdfproj file and look for an ItemGroup which contains the value PropsFromEnvSettings. Typically this setting contains the values SSOAppUserGroup and SSOAppAdminGroup.
• Add a semicolon and the new variable Environment. Afterwards the ItemGroup will look like the example below.

    <ItemGroup>
      <PropsFromEnvSettings Include="SsoAppUserGroup;SsoAppAdminGroup;Environment" />
    </ItemGroup>

Add a filter to the CustomPostDeployTarget

In the final step you’ll arrange that the step which executes the .exe file only gets executed when you’re deploying to the TEST or PROD environment. Navigate to the target called CustomPostDeployTarget and add the condition $(Environment)=='TEST' OR $(Environment)=='PROD' to it.
When you’ve done that, the target will look as follows:

    <Target Name="CustomPostDeployTarget" Condition="$(Environment)=='TEST' OR $(Environment)=='PROD'">
        <MakeDir Directories="$(RedistDir)SQLScripts" />
        <CreateItem Include="SQLScripts*.*">
                <Output TaskParameter="Include" ItemName="SQLScripts" />
        </CreateItem>
        <Copy DestinationFolder="$(RedistDir)SQLScripts%(RecursiveDir)" SourceFiles="@(SQLScripts)"/>
    </Target>

Now, when you save the project file, create an MSI and execute that MSI, the CustomPostDeployTarget will only be executed when you are deploying to the Test or the Production environment.

Host Groups

In most cases a BizTalk application uses only a couple of Host Instances. After a deployment it is good practice to restart all relevant Host Instances. BTDF enables you to achieve this by using a BizTalkHosts ItemGroup. If you omit this ItemGroup, all Host Instances will be restarted.
You can put such an ItemGroup in the BTDF project file. To keep the project file nicely organized, consider placing that ItemGroup below the BizTalk artefacts you are deploying.

    <ItemGroup>
        <BizTalkHosts Include="SendHost;ReceiveHost;OrchestrationHost" />
    </ItemGroup>

As you can see, you can easily add multiple hosts by using semicolons between the separate hosts.

Executing third party executables

The BizTalk Deployment Framework enables you to execute third party executables. Below you find brief examples to show what you can achieve with this capability.

Executing SQL Server scripts

Say you have a custom database which is used by your BizTalk solution and this database needs to be extended with additional tables and stored procedures. Without using the BizTalk Deployment Framework, you would need to run the necessary SQL Server scripts on each environment by hand. This is a time-consuming and error-prone task which can be automated with BTDF.
To be able to execute SQL Server commands, you need to run an executable called sqlcmd. Below you find an example of how to execute SQL scripts:

  <Target Name="CustomPostDeployTarget">
    <Exec Command="sqlcmd -S $(SQLServer) -i&quot;SQLScripts<Name of T-SQL deploy script>&quot;" />
  </Target>

As you can see, you can simply add an Exec command to the deployment target.
For all the ins and outs on deploying SQL Server scripts, check this article on TechNet Wiki.

Create BizTalk360 alerts

Another example of running third party executables is the creation of BizTalk360 alarms. Normally you’ll have to do this manually after each deployment of a new or changed BizTalk application. This too can be a time-consuming and error-prone task. By using BTDF and a tool called BT360Deploy, you can automate the creation of alarms during the deployment of your BizTalk application.

In the picture here above, you have an example of the output of BT360Deploy. More information on BT360Deploy can be found on CodePlex.

Tips and Tricks

We’ll end this article by providing you with a couple of Tips and Tricks.

Do not automatically start the Application after deployment

BTDF allows you to automatically start the deployed BizTalk application, i.e. start all ports and orchestrations after deployment. Although that sounds like a convenient feature, especially in Dev and Test environments, your BizTalk administrator probably prefers that you leave the just-deployed BizTalk application stopped in Production environments. The reason for this is that this way the BizTalk administrator has better control over the moment at which the BizTalk application is activated.

Consider (not) restarting Host Instance

We have seen that BTDF can restart the Host Instance(s) after deployment. But in case you use scripting for unattended deployment of multiple BizTalk applications at the same time, you don’t want the Host Instances restarted after each deployment package, as this is a waste of time and resources. In that case, you remove the ItemGroup that contains the BizTalkHosts and instead set the SkipHostInstancesRestart property in the first PropertyGroup of the project file to True:

<SkipHostInstancesRestart>False</SkipHostInstancesRestart>

ItemGroups per artefact

To keep your BTDF project file organized, it is preferable that you use separate ItemGroups for all the separate types of artefacts you will be deploying.

  <ItemGroup>
    <Schemas Include="Kovai.BTDF.Schemas.dll">
      <LocationPath>..$(ProjectName).Schemasbin$(Configuration)</LocationPath>
    </Schemas>
    <Schemas Include="Kovai.BTDF.MoreSchemas.dll">
      <LocationPath>..$(ProjectName).MoreSchemasbin$(Configuration)</LocationPath>
    </Schemas>
  </ItemGroup>

  <ItemGroup>
    <Transforms Include=" Kovai.BTDF.Maps.dll ">
      <LocationPath>..$(ProjectName).Mapsbin$(Configuration)</LocationPath>
    </Transforms>
    <Transforms Include=" Kovai.BTDF.MoreMaps.dll ">
      <LocationPath>..$(ProjectName).MoreMapsbin$(Configuration)</LocationPath>
    </Transforms>
  </ItemGroup>

  <ItemGroup>
    <Orchestrations Include=" Kovai.BTDF.Orchestrations.dll ">
      <LocationPath>..$(ProjectName).Orchestrationsbin$(Configuration)</LocationPath>
    </Orchestrations>
    <Orchestrations Include=" Kovai.BTDF.MoreOrchestrations.dll ">
      <LocationPath>..$(ProjectName).MoreOrchestrationsbin$(Configuration)</LocationPath>
    </Orchestrations>
  </ItemGroup>

Conclusion

In this series of 2 articles we have seen quite a lot of the capabilities of the BizTalk Deployment Framework. In part 1 we started by pointing out the advantages of using BTDF over the out-of-the-box capabilities of BizTalk Server, and we created a simple Deployment project which showed how to set up a deployment with basic artefacts like Schemas, Maps and Orchestrations. We extended this project by creating variables to be used for environment-specific settings like the URLs of endpoints.
In part 2 we have seen some more advanced features like Conditions and Host Groups. We have also seen how we can run third party executables to automate your deployment as far as possible and to reduce the amount of manual and error-prone labour as much as possible.
Still, there are many more possibilities with BTDF which we did not discuss. For example, you can also use it to create virtual directories, deploy SSO settings, create EventLog event sources, create Registry keys, and so on.

It must be clear by now that the BizTalk Deployment Framework is a very powerful framework that can be used for anything from straightforward to complex deployments. So, if you are already familiar with the framework and are using it for the deployment of standard BizTalk artefacts, why not expand your experience with BTDF and start deploying less common artefacts like SQL scripts? When you do that properly, your BizTalk Administrator might become your new best friend!

Happy deploying!

Author: Lex Hegt

Lex Hegt has worked in the IT sector for more than 25 years, mainly in roles as a developer and administrator. He has worked with BizTalk since BizTalk Server 2004. Currently he is a Technical Lead at BizTalk360. View all posts by Lex Hegt