Elevating permissions for BizTalk Server Operators Group

December 1, 2016

Filed under: BizTalk — mbrimble @ 4:54 pm

I found this blog article by Toon Vanhoutte very useful today: https://www.codit.eu/blog/2012/07/02/elevating-permissions-for-biztalk-server-operators-group/. Adding the link to my blog so I don't forget it.


Failed to stop service WINMGMT while trying to apply BizTalk Server Cumulative Update

Being in the Netherlands for the first time, and probably inspired by the cold outside my hotel room that discourages any "reasonable person" from walking the streets of Arnhem at night, let's talk about another familiar problem that can occur in BizTalk Server. The error that I will address today quite commonly appears […]
Blog Post by: Sandro Pereira

Using Concourse to continuously deliver a Service Bus-powered Java app to Pivotal Cloud Foundry on Azure

Guess what? Deep down, cloud providers know you’re not moving your whole tech portfolio to their public cloud any time soon. Oh, your transition is probably underway, but you’ve got a whole stash of apps, data stores, and services that may not move for a while. That’s cool. There are more and more patterns and services available to squeeze value out of existing apps by extending them with more modern, scalable, cloudy tech. For instance, how might you take an existing payment transfer system that did B2B transactions and open it up to consumers without requiring your team to do a complete rewrite? One option might be to add a load-leveling queue in front of it, and take in requests via a scalable, cloud-based front-end app. In this post, I’ll show you how to implement that pattern by writing a Spring Boot app that uses Azure Service Bus Queues. Then, I’ll build a Concourse deployment pipeline to ship the app to Pivotal Cloud Foundry running atop Microsoft Azure.

[Image: 2016-11-28-azure-boot-01]

Ok, but why use a platform on top of Azure?

That’s a fair question. Why not just use native Azure (or AWS, or Google Cloud Platform) services instead of putting a platform overlay like Pivotal Cloud Foundry atop it? Two reasons: app-centric workflow for developers, and “day 2” operations at scale.

Most every cloud platform started off by automating infrastructure. That’s their view of the world, and it still seeps into most of their cloud app services. There’s no fundamental problem with that, except that many developers (“full stack” or otherwise) aren’t infrastructure pros. They want to build and ship great apps for customers. Everything else is a distraction. A platform such as Pivotal Cloud Foundry is entirely application-focused. Instead of the developer finding an app host, packaging the app, deploying the app, setting up a load balancer, configuring DNS, hooking up log collection, and configuring monitoring, the Cloud Foundry dev just cranks out an app and does a single action to get everything correctly configured in the cloud. And it’s an identical experience whether Pivotal Cloud Foundry is deployed to Azure, AWS, OpenStack, or whatever. The smartest companies realized that their developers should be exceptional at writing customer-facing software, not configuring firewall rules and container orchestration.

Secondly, it’s about “day 2” operations. You know, all the stuff that happens to actually maintain apps in production. I have no doubt that any of you can build an app and quickly get it to cloud platforms like Azure Web Sites or Heroku with zero trouble. But what about when there are a dozen apps, or thousands? How about when it’s not just you, but a hundred of your fellow devs? Most existing app-centric platforms just aren’t set up to be org-wide, and you end up with costly inconsistencies between teams. With something like Pivotal Cloud Foundry, you have a resilient, distributed system that supports every major programing language, and provides a set of consistent patterns for app deployment, logging, scaling, monitoring, and more. Some of the biggest companies in the world deploy thousands of apps to their respective environments today, and we just proved that the platform can handle 250,000 containers with no problem. It’s about operations at scale.

With that out of the way, let’s see what I built.

Step 1 – Prerequisites

Before building my app, I had to set up a few things.

  • Azure account. This is kind of important for a demo of things running on Azure. Microsoft provides a free trial, so take it for a spin if you haven’t already. I’ve had my account for quite a while, so all my things for this demo hang out there.
  • GitHub account. The Concourse continuous integration software knows how to talk to a few things, and git is one of them. So, I stored my app code in GitHub and had Concourse monitoring it for changes.
  • Amazon account. I know, I know, an Azure demo shouldn’t use AWS. But, Amazon S3 is a ubiquitous object store, and Concourse made it easy to drop my binaries there after running my continuous integration process.
  • Pivotal Cloud Foundry (PCF). You can find this in the Azure marketplace, and technically, this demo works with PCF running anywhere. I’ve got a full PCF on Azure environment available, and used that here.
  • Azure Service Broker. One fundamental concept in Cloud Foundry is a “service broker.” Service brokers advertise a catalog of services to app developers, and provide a consistent way to provision and de-provision the service. They also “bind” services to an app, which puts things like service credentials into that app’s environment variables for easy access. Microsoft built a service broker for Azure, and it works for DocumentDB, Azure Storage, Redis Cache, SQL Database, and the Service Bus. I installed this into my PCF-on-Azure environment, but you can technically run it on any PCF installation.
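As a small aside on what "binding" looks like in practice, the cf CLI exposes it directly; the app and instance names below are placeholders, and later in this post I let the app manifest do the binding for me instead:

cf bind-service my-app seroterservicebus
cf restage my-app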

Step 2 – Build Spring Boot App

In my fictitious example, I wanted a Java front-end app that mobile clients interact with. That microservice drops messages into an Azure Service Bus Queue so that the existing on-premises app can pull messages from it at its own pace, and thus avoid getting swamped by all this new internet traffic.

Why Java? Java continues to be very popular in enterprises, and Spring Boot along with Spring Cloud (both maintained by Pivotal) have completely modernized the Java experience. Microsoft believes that PCF helps companies get a first-class Java experience on Azure.

I used Spring Tool Suite to build a new Spring Boot MVC app with “web” and “thymeleaf” dependencies. Note that you can find all my code in GitHub if you’d like to reproduce this.

To start with, I created a model class for the web app. This “web payment” class represents the data I collected from the user and passed on to the Service Bus Queue.

package seroter.demo;

public class WebPayment {
        private String fromAccount;
        private String toAccount;
        private long transferAmount;

        public String getFromAccount() {
                return fromAccount;
        }

        public void setFromAccount(String fromAccount) {
                this.fromAccount = fromAccount;
        }

        public String getToAccount() {
                return toAccount;
        }

        public void setToAccount(String toAccount) {
                this.toAccount = toAccount;
        }

        public long getTransferAmount() {
                return transferAmount;
        }

        public void setTransferAmount(long transferAmount) {
                this.transferAmount = transferAmount;
        }
}

Next up, I built a bean that my web controller used to talk to the Azure Service Bus. Microsoft has an official Java SDK in the Maven repository, so I added this to my project.

[Image: 2016-11-28-azure-boot-03 (the Azure Java SDK Maven dependency)]

Within this object, I referred to the VCAP_SERVICES environment variable that I would soon get by binding my app to the Azure service. I used that environment variable to yank out the credentials for the Service Bus namespace, and then created the queue if it didn’t exist already.

@Configuration
public class SbConfig {

 @Bean
 ServiceBusContract serviceBusContract() {

   //grab env variable that comes from binding CF app to the Azure service
   String vcap = System.getenv("VCAP_SERVICES");

   //parse the JSON in the environment variable
   JsonParser jsonParser = JsonParserFactory.getJsonParser();
   Map<String, Object> jsonMap = jsonParser.parseMap(vcap);

   //create map of values for service bus creds
   Map<String,Object> creds = (Map<String,Object>)((List<Map<String, Object>>)jsonMap.get("seroter-azureservicebus")).get(0).get("credentials");

   //create service bus config object
   com.microsoft.windowsazure.Configuration config =
        ServiceBusConfiguration.configureWithSASAuthentication(
                creds.get("namespace_name").toString(),
                creds.get("shared_access_key_name").toString(),
                creds.get("shared_access_key_value").toString(),
                ".servicebus.windows.net");

   //create object used for interacting with service bus
   ServiceBusContract svc = ServiceBusService.create(config);
   System.out.println("created service bus contract ...");

   //check if queue exists
   try {
        ListQueuesResult r = svc.listQueues();
        List<QueueInfo> qi = r.getItems();
        boolean hasQueue = false;

        for (QueueInfo queueInfo : qi) {
          System.out.println("queue is " + queueInfo.getPath());

          //queue exist already?
          if(queueInfo.getPath().equals("demoqueue"))  {
                System.out.println("Queue already exists");
                hasQueue = true;
                break;
           }
         }

        if(!hasQueue) {
        //create queue because we didn't find it
          try {
            QueueInfo q = new QueueInfo("demoqueue");
            CreateQueueResult result = svc.createQueue(q);
            System.out.println("queue created");
          }
          catch(ServiceException createException) {
            System.out.println("Error: " + createException.getMessage());
          }
        }
    }
    catch (ServiceException findException) {
       System.out.println("Error: " + findException.getMessage());
     }
    return svc;
   }
}
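For reference, the VCAP_SERVICES document this code parses looks roughly like the following once the app is bound. The values are illustrative, but the keys are the ones the code above reads (the service label, the credentials block, and the namespace and SAS key fields):

{
  "seroter-azureservicebus": [
    {
      "name": "seroterservicebus",
      "credentials": {
        "namespace_name": "seroter-boot",
        "shared_access_key_name": "RootManageSharedAccessKey",
        "shared_access_key_value": "<SAS key value>"
      }
    }
  ]
}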

Cool. Now I could connect to the Service Bus. All that was left was my actual web controller that returned views, and sent messages to the Service Bus. One of my operations returned the data collection view, and the other handled form submissions and sent messages to the queue via the @autowired ServiceBusContract object.

@SpringBootApplication
@Controller
public class SpringbootAzureConcourseApplication {

   public static void main(String[] args) {
     SpringApplication.run(SpringbootAzureConcourseApplication.class, args);
   }

   //pull in autowired bean with service bus connection
   @Autowired
   ServiceBusContract serviceBusContract;

   @GetMapping("/")
   public String showPaymentForm(Model m) {

      //add webpayment object to view
      m.addAttribute("webpayment", new WebPayment());

      //return view name
      return "webpayment";
   }

   @PostMapping("/")
   public String paymentSubmit(@ModelAttribute WebPayment webpayment) {

      try {
         //convert webpayment object to JSON to send to queue
         ObjectMapper om = new ObjectMapper();
         String jsonPayload = om.writeValueAsString(webpayment);

         //create brokered message wrapper used by service bus
         BrokeredMessage m = new BrokeredMessage(jsonPayload);
         //send to queue
         serviceBusContract.sendMessage("demoqueue", m);
         System.out.println("message sent");

      }
      catch (ServiceException e) {
         System.out.println("error sending to queue - " + e.getMessage());
      }
      catch (JsonProcessingException e) {
         System.out.println("error converting payload - " + e.getMessage());
      }

      return "paymentconfirm";
   }
}

With that, my microservice was done. Spring Boot makes it silly easy to crank out apps, and the Azure SDK was pretty straightforward to use.

Step 3 – Deploy and Test App

Developers use the “cf” command line interface to interact with Cloud Foundry environments. Running a “cf marketplace” command shows all the services advertised by registered service brokers. Since I added the Azure Service Broker to my environment, I could provision an instance of the Service Bus service in my Cloud Foundry org. To tell the Azure Service Broker what to actually create, I built a simple JSON document that outlined the Azure resource group, region, and service.

{
  "resource_group_name": "pivotaldemorg",
  "namespace_name": "seroter-boot",
  "location": "westus",
  "type": "Messaging",
  "messaging_tier": "Standard"
}

By using the Azure Service Broker, I didn’t have to go into the Azure Portal for any reason. I could automate the entire lifecycle of a native Azure service. The command below created a new Service Bus namespace, and made the credentials available to any app that binds to it.

cf create-service seroter-azureservicebus default seroterservicebus -c sb.json

After running this, my PCF environment had a service instance (seroterservicebus) ready to be bound to an app. I also confirmed that the Azure Portal showed a new namespace, and no queues (yet).

[Image: 2016-11-28-azure-boot-06 (new Service Bus namespace in the Azure Portal)]

Awesome. Next, I added a “manifest” that described my Cloud Foundry app. This manifest specified the app name, how many instances (containers) to spin up, where to get the binary (jar) to deploy, and which service instance (seroterservicebus) to bind to.

---
applications:
- name: seroter-boot-azure
  memory: 256M
  instances: 2
  path: target/springboot-azure-concourse-0.0.1-SNAPSHOT.jar
  buildpack: https://github.com/cloudfoundry/java-buildpack.git
  services:
    - seroterservicebus

When I did a “cf push” to my PCF-on-Azure environment, the platform took care of all the app packaging, container creation, firewall updates, DNS changes, log setup, and more. After a few seconds, I had a highly available front-end app bound to the Service Bus. Below, you can see that the app started with two instances and that the service was bound to my new app.

[Image: 2016-11-28-azure-boot-07 (app running with two instances and the bound service)]

All that was left was to test it. I fired up the app’s default view, and filled in a few values to initiate a money transfer.

[Image: 2016-11-28-azure-boot-08 (the payment form)]

After submitting, I saw that there was a new message in my queue. I built another Spring Boot app (to simulate an extension of my legacy “payments” system) that pulled from the queue. This app ran on my desktop and logged the message from the Azure Service Bus.
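That receiver app isn't the focus of this post, but for completeness, a minimal sketch of it might look like the code below. It reuses the same com.microsoft.windowsazure Service Bus classes as the sender above; since it ran on my desktop there was no VCAP_SERVICES to read, so the namespace and SAS credentials are hard-coded placeholders here.

public class QueueReceiverApp {

   public static void main(String[] args) throws Exception {

      //configure the Service Bus client with the namespace and SAS credentials
      //(placeholder values - these would match the namespace the service broker created)
      com.microsoft.windowsazure.Configuration config =
          ServiceBusConfiguration.configureWithSASAuthentication(
              "seroter-boot",
              "RootManageSharedAccessKey",
              "<shared access key value>",
              ".servicebus.windows.net");

      ServiceBusContract svc = ServiceBusService.create(config);

      //receive-and-delete keeps the sample simple (no explicit complete/abandon step)
      ReceiveMessageOptions opts = ReceiveMessageOptions.DEFAULT;
      opts.setReceiveMode(ReceiveMode.RECEIVE_AND_DELETE);

      while (true) {
         ReceiveQueueMessageResult result = svc.receiveQueueMessage("demoqueue", opts);
         BrokeredMessage message = result.getValue();

         if (message != null && message.getMessageId() != null) {
            //read the JSON payload from the message body stream and log it
            String payload = new java.util.Scanner(message.getBody(), "UTF-8")
                  .useDelimiter("\\A").next();
            System.out.println("received payment: " + payload);
         }
         else {
            //queue was empty - wait a bit before polling again
            Thread.sleep(1000);
         }
      }
   }
}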

[Image: 2016-11-28-azure-boot-09 (receiver app logging the queued message)]

That’s great. I added a mature, highly-available queue in between my cloud-native Java web app, and my existing line-of-business system. With this pattern, I could accept all kinds of new traffic without overloading the backend system.

Step 4 – Build Concourse Pipeline

We’re not done yet! I promised continuous delivery, and I deliver on my promises, dammit.

To build my deployment process, I used Concourse, a pipeline-oriented continuous integration and delivery tool that’s easy to use and amazingly portable. Instead of wizard-based tools that use fixed environments, Concourse uses pipelines defined in configuration files and executed in ephemeral containers. No conflicts with previous builds, no snowflake servers that are hard to recreate. And, it has a great UI that makes it obvious when there are build issues.

I downloaded a Vagrant virtual machine image with Concourse pre-configured. Then I downloaded the lightweight command line interface (called Fly) for interacting with pipelines.
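If you want to follow along, the Concourse quickstart at the time was roughly the commands below (the concourse/lite box name lines up with the "lite" target I use with fly later on); the fly binary itself is downloaded from the Concourse web UI and just needs to be made executable and put on your PATH:

vagrant init concourse/lite
vagrant up

chmod +x fly
sudo mv fly /usr/local/bin/fly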

My “build and deploy” process consists of four files: bootpipeline.yml, which contains the core pipeline; build.yml, which sets up the Java build task; build.sh, which actually performs the build; and secure.yml, which holds my credentials (and isn’t checked into GitHub).

The build.sh file clones my GitHub repo (defined as a resource in the main pipeline) and does a Maven install.

#!/usr/bin/env bash

# fail fast and echo each command as it runs
set -e -x

# copy the pipeline's git input into the declared output directory
git clone resource-seroter-repo resource-app

cd resource-app

# build the Spring Boot jar
mvn clean

mvn install

The build.yml file shows that I’m using the Maven Docker image to build my code, and points to the build.sh file to actually run the build.

---
platform: linux

image_resource:
  type: docker-image
  source:
    repository: maven
    tag: latest

inputs:
  - name: resource-seroter-repo

outputs:
  - name: resource-app

run:
  path: resource-seroter-repo/ci/build.sh

Finally, let’s look at my build pipeline. Here, I defined a handful of “resources” that my pipeline interacts with. I’ve got my GitHub repo, an Amazon S3 bucket to store the JAR file, and my PCF-on-Azure environment. Then, I have two jobs: one that builds my code and puts the result into S3, and another that takes the JAR from S3 (and manifest from GitHub) and pushes to PCF on Azure.

---
resources:
# resource for my GitHub repo
- name: resource-seroter-repo
  type: git
  source:
    uri: https://github.com/rseroter/springboot-azure-concourse.git
    branch: master
#resource for my S3 bucket to store the binary
- name: resource-s3
  type: s3
  source:
    bucket: spring-demo
    region_name: us-west-2
    regexp: springboot-azure-concourse-(.*).jar
    access_key_id: {{s3-key-id}}
    secret_access_key: {{s3-access-key}}
# resource for my Cloud Foundry target
- name: resource-azure
  type: cf
  source:
    api: {{cf-api}}
    username: {{cf-username}}
    password: {{cf-password}}
    organization: {{cf-org}}
    space: {{cf-space}}

jobs:
- name: build-binary
  plan:
    - get: resource-seroter-repo
      trigger: true
    - task: build-task
      privileged: true
      file: resource-seroter-repo/ci/build.yml
    - put: resource-s3
      params:
        file: resource-app/target/springboot-azure-concourse-0.0.1-SNAPSHOT.jar

- name: deploy-to-prod
  plan:
    - get: resource-s3
      trigger: true
      passed: [build-binary]
    - get: resource-seroter-repo
    - put: resource-azure
      params:
        manifest: resource-seroter-repo/manifest-ci.yml

I was now ready to deploy my pipeline and see the magic.

After spinning up the Concourse Vagrant box, I hit the default URL and saw that I didn’t have any pipelines. NOT SURPRISING.

[Image: 2016-11-28-azure-boot-10 (Concourse with no pipelines yet)]

From my Terminal, I used Fly CLI commands to deploy a pipeline. Note that I referred again to the “secure.yml” file containing credentials that get injected into the pipeline definition at deploy time.

fly -t lite set-pipeline --pipeline azure-pipeline --config bootpipeline.yml --load-vars-from secure.yml
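For completeness, secure.yml is just a flat YAML file whose keys line up with the {{placeholders}} in the pipeline definition above, something like this (values obviously swapped for real ones):

s3-key-id: <AWS access key id>
s3-access-key: <AWS secret access key>
cf-api: <PCF API endpoint, e.g. https://api.system.mydomain.com>
cf-username: <cf username>
cf-password: <cf password>
cf-org: <cf org>
cf-space: <cf space>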

In a second or two, a new (paused) pipeline popped up in Concourse. As you can see below, this tool is VERY visual. It’s easy to see how Concourse interpreted my pipeline definition and connected resources to jobs.

[Image: 2016-11-28-azure-boot-11 (pipeline visualization in Concourse)]

I then un-paused the pipeline with this command:

fly -t lite unpause-pipeline --pipeline azure-pipeline

Immediately, the pipeline started up, retrieved my code from GitHub, built the app within a Docker container, dropped the result into S3, and deployed to PCF on Azure.

[Image: 2016-11-28-azure-boot-12 (pipeline running)]

After Concourse finished running the pipeline, I checked the PCF Application Manager UI and saw my new app up and running. Think about what just happened: I didn’t have to muck with any infrastructure or open any tickets to get an app from dev to production. Wonderful.

[Image: 2016-11-28-azure-boot-14 (app running in PCF Apps Manager)]

The way I built this pipeline, I didn’t version the JAR when I built my app. In reality, you’d want to use the semantic versioning resource to bump the version on each build. Because of the way I designed this, the second job (“deploy to PCF”) won’t fire automatically after the first build, since there technically isn’t a new artifact in the S3 bucket. A cool side effect of this is that I could constantly do continuous integration, and then choose to manually deploy (clicking the “+” button below) when the company was ready for the new version to go to production. Continuous delivery, not deployment.
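If I did want to go all the way, a hedged sketch of that change is below: add a semver resource to the pipeline (here backed by S3, with a hypothetical key name), bump it in the build-binary job, and include the version in the JAR name so the S3 resource sees a genuinely new artifact on each build.

- name: version
  type: semver
  source:
    driver: s3
    bucket: spring-demo
    key: current-version
    initial_version: 0.0.1
    access_key_id: {{s3-key-id}}
    secret_access_key: {{s3-access-key}}

# then, in the build-binary job plan:
#   - put: version
#     params: { bump: patch }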

[Image: 2016-11-28-azure-boot-13 (manually triggering the deploy job)]

Wrap Up

Whew. That was a big demo. But in the scheme of things, it was pretty straightforward. I used some best-of-breed services from Azure within my Java app, and then pushed that app to Pivotal Cloud Foundry entirely through automation. Now, every time I check in a code change to GitHub, Concourse will automatically build the app. When I choose to, I take the latest build and tell Concourse to send it to production.


A platform like PCF helps companies solve their #1 problem with becoming software-driven: improving their deployment pipeline. Try to keep your focus on apps not infrastructure, and make sure that whatever platform you use, you focus on sustainable operations at scale!


Categories: Cloud, Cloud Foundry, DevOps, General Architecture, Messaging, Microservices, Microsoft Azure, Pivotal, Windows Azure Service Bus

Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)) while trying to deploy a BizTalk Solution from Visual Studio

Continuing with “Errors and Warnings, Causes and Solutions”, let’s talk today about a classic (and annoying) one: “Error 87 Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)) Error 88 at Microsoft.BizTalk.Gac.Fusion.IAssemblyCache.InstallAssembly(AssemblyCacheInstallFlag flags, String manifestFilePath, IntPtr referenceData) at Microsoft.BizTalk.Gac.Gac.InstallAssembly(String assemblyPathname, Boolean force) at Microsoft.BizTalk.Deployment.BizTalkAssembly.GacInstall(String assemblyLocation) at Microsoft.BizTalk.Deployment.BizTalkAssembly.PrivateDeploy(String server, String database, String assemblyPathname, String applicationName) at Microsoft.BizTalk.Deployment.BizTalkAssembly.Deploy(Boolean […]
Blog Post by: Sandro Pereira

An error occurred while attempting to connect to a remote SQL Server database: The local MS DTC detected that the MS DTC on <server name> has the same unique identity as the local MS DTC

Today I’m returning to one of my favorite topics (no, not transformations); this time it’s all about “Errors and Warnings, Causes and Solutions”. Today I encountered the following issue when I was trying to connect to a remote SQL Server database: “The local MS DTC detected that the MS DTC on <server name> has […]
Blog Post by: Sandro Pereira

Assigning an Integration Account to an Azure Logic App inside Visual Studio

I have been working heads down for a few weeks now with Azure Logic Apps. While I have worked with them off and on for over a year now, it is amazing how far things have evolved in such a short amount of time. You can put together a rather complex EDI scenario in just a few hours with no up-front hardware and licensing costs.

I have been creating Logic Apps both using the web designer and using Visual Studio 2015.

Recently I was trying to use the Transform Shape that is part of Azure Integration Accounts (still in Technical Preview). I was able to set all the properties and manually enter a map name. Then I ran into issues.

I found if I switched to code view I was not able to get back to the Designer without manually removing the Transform Shape.  I kept getting the following error:  The ‘inputs’ of workflow run action ‘Transform_XML’ of type ‘Xslt’ is not valid. The workflow must be associated with an integration account to use the property ‘integrationAccount’.

What I was missing was setting the Integration Account for this Logic App. Using the web interface, it’s very easy to set the Integration Account. But I looked all over the JSON file and Visual Studio for how to set the Integration Account for a Logic App inside Visual Studio.

With the help of Jon Fancy, it turns out it is super simple.  It is just like an Orchestration property.

To set the Integration Account for a Logic App inside Visual Studio, do the following:

1. Ensure you have an Integration Account already created inside the subscription and Azure Location.

2. Make sure you set the Integration Account BEFORE trying to use any shapes that depend on it, like the Transform Shape.

3. Click anywhere in the white space of the Visual Studio Logic App designer.

4. Look inside the Properties window for the Integration Account selection.

5. Select the Integration Account you want to use and save your Logic App.

It’s that simple! 
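Incidentally, if you are curious what that setting corresponds to in the underlying workflow JSON, it is roughly the fragment below. The property name comes straight from the error message above; the integration account name and the exact expression are illustrative and will depend on your template:

"properties": {
    "integrationAccount": {
        "id": "[resourceId('Microsoft.Logic/integrationAccounts', 'my-integration-account')]"
    }
}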

Enjoy.

Installing (and Configure) BizTalk Server 2016 in a Standalone Machine (whitepaper)

Microsoft announced on October 27, 2016 the release of Microsoft BizTalk Server 2016 – the 10th major release of BizTalk Server – and, as usual with previous versions, I updated my installation and configuration manual for BizTalk Server 2016. This whitepaper will explain in detail, with a step-by-step guideline, how to install and configure Microsoft […]
Blog Post by: Sandro Pereira

Announcing the private preview of the new SAP Connector for Logic Apps

We are happy to announce the private preview of the new SAP Connector for Logic Apps. If you have an installation of SAP and want to utilize Logic Apps to create integration scenarios using RFC, TRFC, BAPI or IDOCs, this is your opportunity. The nominations for the private preview are currently open via this survey….
Blog Post by: BizTalk Team

Microsoft Teams for Integration Teams

Today we have so many technologies available when it comes to developing integration solutions. In some ways things are a lot easier, and in other ways they are harder. One thing is for sure: there has been a lot of change in technology. For many organisations, one thing that definitely has not changed is the set of challenges they face with the non-technical side of integration projects. For most companies, the technology you use for the implementation of the project isn’t the decisive factor that determines success or failure; whether you choose vendor A or vendor B, as long as your team knows how to use the technology they will usually be able to build things successfully. With that said, the thing organisations still struggle with is “how do we get the technology people to build something to do what we want it to do”, and the IT organisation then has the challenge of how to live with that solution through its life span.

These are not technology problems; they are problems of communication, collaboration, documentation and giving people the time to do things properly.

In my opinion two of the most common organisational challenges facing integration teams within a business today are:

  1. Shitty Requirements

Whenever I meet people around the world who are doing integration, the one thing that seems to be a common challenge is that integration projects generally start off with a one-line requirement: “I want to get this data from here to there”.

  2. Lack of Knowledge Sharing

The world of an IT department is generally a chaotic place, so the idea of giving people time to do things properly is never really an option in many organisations. Think of the poor developers who barely finish writing code for one solution before they are shipped off to the next project; the department generally survives on all of the information in people’s heads.

For many organisations, the thing that is really needed is the ability to collaborate around projects in a way that brings people together and pulls artefacts and information into one place. In some ways this is a big culture shift for some organisations, and for others the problem is a lack of tooling. For quite a while now I have been a fan of combining TFS for source code, work items and other assets with Confluence and a few other tools, but the challenge with that tooling is often licensing, procurement processes and the fragmented nature of using a number of different tools. Recently, however, I have been playing with Microsoft Teams and I think it is a really good package which could help a lot of organisations. There are many ways your company could use it, but in this post I would like to talk about how it could help an integration team. Before going any further, here are a couple of links which are useful:

How can my Integration Team Use Teams

First off, in Microsoft Teams you create a team and include the people in your integration team. I would recommend not storing sensitive data in the team’s area, because what you want to do is open up your team’s transparency so that your business users can work with you. Include your team members, but your stakeholders and key business contacts should be included too. These are the people you will need to capture information from, and you want them contributing to the team.

In terms of structuring your team in MS Teams, I went for something like what is shown in the picture below.

Under the team you have channels. I am thinking of using channels as follows:

  • One for architecture related to the integration platform
  • One for infrastructure related to the integration platform
  • One channel per interface or integration solution you develop

You may also choose to put in channels for guidance and training and other stuff like that.

What’s in a Channel

The cool thing about a channel is that you have a few customization options for what it contains. Out of the box you get the following:

  • Conversations – This is a bit like a Slack/Yammer-style conversation thread
  • Files – This is a place to upload documents related to the channel
  • Notes – This is a OneNote notebook for the channel

Those are some really handy things, and you can also add other tabs to your channel, as shown in the graphic below:

My first thought is that you could use a SharePoint site as a tab, linking to a site where you might store any sensitive content. You could also use Planner as a lightweight task board for to-do items related to the channel, or maybe link to Team Services for more complex planning.

In general, the basic channel provides a way to keep conversations, documents and related artefacts in a single place for a given context. Hallelujah; if we could have no more projects managed via email, the world would be a far less stressful place.

Channel Per Interface?

I mentioned above a few general channels for the bigger areas such as architecture and infrastructure, but one of the biggest wins could be a channel per interface. Imagine we had an interface which did a B2B-style integration with a partner, sending a list of customer marketing preferences so they could do outsourced marketing for us. Think how many companies you may have worked with who have delivered such an interface. They will often have an interface catalogue, but it is usually just a spreadsheet listing the interfaces they have (or, more often, it doesn’t exist). If you asked the question “tell me everything about this interface”, my guess is that in Average Company Inc the answer would be to make you sit with the one person who is the subject matter expert on it, then two more people who are stakeholders and know a bit about it. If you’re lucky there might also be some documents, but I bet they get emailed to you, and there are probably a few other documents which kind of say the same stuff in a different way.

With MS Teams, having one channel per interface means this list can be our interface catalogue, but it can also be the holder for everything about that interface. Let’s have a look at what we could do:

Conversations

First off, with conversations, imagine all discussion about the interface happening in one place. No more email threads. The conversation would still be available two years in the future, when the original people on the project have left, and the new people can see the history of discussion around the interface. Below are some example conversations. Granted, this is just me, but the example comments are from a real project. In many projects, the history of how a project or interface got from start to implementation is a goldmine of knowledge which often leaves the organisation when the project is over. This can be avoided by using conversations.

Files

Files provides a simple place for any documentation which relates to the interface. Ideally, any internal documentation produced by the team would live in the OneNote notes, which we will talk about in a minute. Often documentation is produced before your team is involved, or gets supplied by vendors, and it’s often a challenge to find somewhere to keep it. This file store alongside the interface is a great option.

Below is an example:

You might ask why MS Teams and not SharePoint. First off, I’m not a big fan of documents; they are often old, obsolete and incorrect. I much prefer the wiki, OneNote and Confluence style of approach. That said, documents do still exist on projects, and keeping them close to the context where they are used just means they don’t get lost or forgotten about. Using SharePoint is fair enough if you need the added security it brings, but in many cases it’s probably a bit over the top and just adds more steps to maintaining effective documentation.

Notes

Having a OneNote notebook in the team is really cool. I’m a big fan of using it for elaborating on the interface, fleshing out requirements, and then maintaining it for the support team and dev team over the long term. OneNote encourages lightweight, effective documentation. This is a view of how we can use it; the page structure could look like the below:

I think this is a minimum set of pages which will help you structure your information effectively.

The high-level requirements page can just be a table of requirements which are teased out of stakeholders and taken from conversations in the team space. It might look like the below:

The features and scenarios page would help us to write Gherkin-style stories of what we want the interface to do. These stories should be simple enough for everyone in the team to understand.

Next we might have message specifications. They could be JSON, XML, flat file, EDI, etc. The key thing is to include sample messages and definitions of the messages so we know the data is in the correct formats.

When it comes to the architecture element of your interface, I am a fan of the context, containers and components approach as a lightweight way of expressing the architecture of an interface. Although the diagrams below could probably be fleshed out a bit more, they will do for this example. In the OneNote page I can start with some simple pen-drawn diagrams to illustrate the key points. This is shown below.

Later, when the project starts to stabilise, I might choose to draw the diagrams in a more formal way using Visio or Lucidchart, but early in the project you spend lots of time redrawing the diagram as things evolve, so let’s keep it simple and use pen. You can open the OneNote page in the full OneNote client to get the richer drawing experience.

In the interface design section we can elaborate on the interface further and include some specifics on the implementation. Again, in the early stages I can just use pen-drawn diagrams if I want and replace them later.

In the code and deployment pages I’d simply document what it does and how to deploy it. I’m also a fan of using videos, so we can record a video walkthrough of the code, upload it to the Files section and provide a link to watch it.

The support page will be a two-way set of documentation between your ops/DevOps team and everyone else who is a stakeholder in the support aspects of the interface. Everyone should be able to contribute, from things developers learn in development to things ops people learn post go-live. An example is below:

The notes should really be the living documentation to support the interface through its lifecycle.

Plan

I like the idea of being able to have planning options associated with the interface, and I have some options here. First off, for a higher-level plan I can link to a Team Services project and see this at team level, but another option I really like is Planner. If you’re not familiar with it, it is a feature in Office 365 which is a bit like Trello. It gives me a basic task board, and if I consider this to be at interface level it’s a great way to keep an eye on tasks at that level. You could include delivery tasks, bugs, technical debt clean-up and loads of things specific to this interface. I think this is especially important post go-live for the initial release of the interface, as it gives you a place to keep tasks that may not be done until some future time as an optimisation activity.

The picture below shows the simple Planner task board created directly from our team channel.

Power BI

One of the challenges of changing the culture is how to get people contributing to the team, and one of the best ways to do this is to connect any reporting or MI related to the interface to the team channel. MS Teams lets you have multiple Power BI tabs in the channel, and you can then bring in team dashboards. In the picture below I have chosen to bring in a UAT and a Production dashboard for the interface, as this lets the team see how things are performing in test and live.

I mean how cool is that!!

What about Cross Functional Teams

If you watched my talk at Integrate in 2016, you may remember I spoke about how organisations are changing from centralised integration teams to cross-functional teams, which means the people doing integration are spread all over the organisation. The reality is that this approach takes all of the non-technical challenges organisations face and makes them worse, with multiple teams possibly doing their own thing in their own way.

With MS Teams you can treat the integration team as a virtual team comprised of people who sit in different delivery teams. What you want is for them to work on integration in an aligned manner that allows teams to form, change and disappear without knowledge leaving the organisation or your interfaces becoming orphaned. Using MS Teams to collaborate as a virtual team is a great way to support your organisation doing cross-functional, team-based development while allowing your integration specialists to have visibility and to apply governance across those teams.

Conclusion

I’m really excited about how MS Teams could be applied by integration teams to help solve some of the problems we see with many customers and organisations, problems which fall into the culture, communication and collaboration space and which, let’s face it, have a much bigger impact on the overall success of your project than anything technical will.

I’d love to hear how others are looking to use it.