My Pluralsight course—Getting Started with Concourse—is now available!

Design software that solves someone’s “job to be done”, build it, package it, ship it, collect feedback, learn, and repeat. That’s the dream, right? For many, shipping software is not fun. It’s downright awful. Too many tickets, too many handoffs, and too many hours waiting. Continuous integration and delivery offer some relief, as you keep producing tested, production-ready artifacts. Survey data shows that we’re not all adopting this paradigm as fast as we should. I figured I’d do my part by preparing and delivering a new video training course about Concourse.

I’ve been playing a lot with Concourse recently, and published a 3-part blog series on using it to push .NET Core apps to Kubernetes. It’s an easy-to-use CI system with declarative pipelines and stateless servers. Concourse runs jobs on Windows or Linux, and works with any programming language you use.

My new hands-on Pluralsight course is ~90 minutes long, and gives you everything you need to get comfortable with the platform. It’s made up of three modules. The first module looks at key concepts, the Concourse architecture, and user roles, and we set up our local environment for development.

The second module digs deep into the primitives of Concourse: tasks, jobs, and resources. I explain how to configure each one, and then we go hands-on with all three. There are aspects that took me a while to understand, so I worked hard to explain these well!

The third and final module looks at pipeline lifecycle management and building manageable pipelines. We explore troubleshooting and more.

Believe it or not, this is my 20th course with Pluralsight. Over these past 8 years, I’ve switched job roles many times, but I’ve always enjoyed learning new things and sharing that information with others. Pluralsight makes that possible for me. I hope you enjoy this new course, and most importantly, start doing CI/CD for more of your workloads!

Building an Azure-powered Concourse pipeline for Kubernetes – Part 3: Deploying containers to Kubernetes

So far in this blog series, we’ve set up our local machine and cloud environment, and built the initial portion of a continuous delivery pipeline. That pipeline, built using the popular OSS tool Concourse, pulls source code from GitHub, generates a Docker image that’s stored in Azure Container Registry, and produces a tarball that’s stashed in Azure Blob Storage. What’s left? Deploying our container image to Azure Kubernetes Service (AKS). Let’s go.

Generating AKS credentials

Back in blog post one, we set up a basic AKS cluster. For Concourse to talk to AKS, we need credentials!

From within the Azure Portal, I started up an instance of the Cloud Shell. This is a hosted Bash environment with lots of pre-loaded tools. From here, I used the AKS CLI to get the administrator credentials for my cluster.

az aks get-credentials --name seroter-k8s-cluster --resource-group demos --admin

This command generated a configuration file with URLs, users, certificates, and tokens.

I copied this file locally for use later in my pipeline.
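
For reference, the kubeconfig that az aks get-credentials produces looks roughly like the sketch below. The entry and context names vary by cluster (the ones shown here are placeholders), and the values are redacted.

apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64-encoded CA certificate>
    server: https://<cluster-dns-name>:443
  name: seroter-k8s-cluster
contexts:
- context:
    cluster: seroter-k8s-cluster
    user: clusterAdmin_demos_seroter-k8s-cluster
  name: seroter-k8s-cluster-admin
current-context: seroter-k8s-cluster-admin
users:
- name: clusterAdmin_demos_seroter-k8s-cluster
  user:
    client-certificate-data: <base64-encoded client certificate>
    client-key-data: <base64-encoded client key>
    token: <bearer token>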

Creating a role-binding for permission to deploy

The administrative user doesn’t automatically have rights to do much in the default cluster namespace. Without explicitly allowing permissions, you’ll get some gnarly “does not have access” errors when doing most anything. Enter role-based access controls. I created a new rolebinding named “admin” with admin rights in the cluster, and mapped it to the existing clusterAdmin user.

kubectl create rolebinding admin --clusterrole=admin --user=clusterAdmin --namespace=default

Now I knew that Concourse could effectively interact with my Kubernetes cluster.

Giving AKS access to Azure Container Registry

Right now, Azure Container Registry (ACR) doesn’t support an anonymous access strategy. Everything happens via authenticated users. The Kubernetes cluster needs access to its container registry, so I followed these instructions to connect ACR to AKS. Pretty easy!
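
If you’d rather script it, one common approach is to grant the cluster’s service principal pull-only rights on the registry. A minimal sketch, assuming the resource names used elsewhere in this series (demos, seroter-k8s-cluster, myrepository):

# look up the AKS service principal and the ACR resource ID
CLIENT_ID=$(az aks show --resource-group demos --name seroter-k8s-cluster --query "servicePrincipalProfile.clientId" --output tsv)
ACR_ID=$(az acr show --resource-group demos --name myrepository --query "id" --output tsv)

# grant the cluster pull access to the registry
az role assignment create --assignee $CLIENT_ID --scope $ACR_ID --role acrpull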

Creating Kubernetes deployment and service definitions

Concourse is going to apply a Kubernetes deployment to create pods of containers in the cluster. Then, Concourse will apply a Kubernetes service to expose my pod with a routable endpoint.

I created a pair of configurations and added them to the ci folder of my source code.

The deployment looks like:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: demo-app
  namespace: default
  labels:
    app: demo-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: myrepository.azurecr.io/seroter-api-k8s:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
      restartPolicy: Always

This is a pretty basic deployment definition. It points to the latest image in the ACR and deploys a single instance (replicas: 1).

My service is also fairly simple, and AKS will provision the necessary Azure Load Balancer and public IP addresses.

apiVersion: v1
kind: Service
metadata:
  name: demo-app
  namespace: default
  labels:
    app: demo-app
spec:
  selector:
    app: demo-app
  type: LoadBalancer
  ports:
  - name: web
    protocol: TCP
    port: 80
    targetPort: 80

I now had all the artifacts necessary to finish up the Concourse pipeline.

Adding Kubernetes resource definitions to the Concourse pipeline

First, I added a new resource type to the Concourse pipeline. Because Kubernetes isn’t a baked-in resource type, we need to pull in a community definition. No problem. This one’s pretty popular. It’s important that the Kubernetes client and server versions match, so I set the tag to match my AKS version.

resource_types:
- name: kubernetes
  type: docker-image
  source:
    repository: zlabjp/kubernetes-resource
    tag: "1.13"

Next, I had to declare my resource itself. It has references to the credentials we generated earlier.

resources:
- name: azure-kubernetes-service
  type: kubernetes
  icon: azure
  source:
    server: ((k8s-server))
    namespace: default
    token: ((k8s-token))
    certificate_authority: |
      -----BEGIN CERTIFICATE-----
      [...]
      -----END CERTIFICATE-----

There are a few key things to note here. First, the “server” refers to the cluster DNS server name in the credentials file. The “token” refers to the token associated with the clusterAdmin user. For me, it’s the last “user” called out in the credentials file. Finally, let’s talk about the certificate authority. This value comes from the “certificate-authority-data” entry associated with the cluster DNS server. HOWEVER, this value is base64 encoded, and I needed a decoded value. So, I decoded it, and embedded it as you see above.
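
Decoding it is a one-liner; for example, from the Cloud Shell (substitute the real certificate-authority-data value):

echo '<certificate-authority-data value>' | base64 --decode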

The last part of the pipeline? The job!

jobs:
- name: run-unit-tests
  [...]
- name: containerize-app
  [...]
- name: package-app
  [...]
- name: deploy-app
  plan:
  - get: azure-container-registry
    trigger: true
    passed:
    - containerize-app
  - get: source-code
  - get: version
  - put: azure-kubernetes-service
    params:
      kubectl: apply -f ./source-code/seroter-api-k8s/ci/deployment.yaml -f ./source-code/seroter-api-k8s/ci/service.yaml
  - put: azure-kubernetes-service
    params:
      kubectl: |
        patch deployment demo-app -p '{"spec":{"template":{"spec":{"containers":[{"name":"demo-app","image":"myrepository.azurecr.io/seroter-api-k8s:'$(cat version/version)'"}]}}}}' 

Let’s unpack this. First, I “get” the Azure Container Registry resource. When it changes (because it gets a new version of the container), it triggers this job. It only fires if the “containerize app” job passes first. Then I get the source code (so that I can grab the deployment.yaml and service.yaml files I put in the ci folder), and I get the semantic version.

Next I “put” to the AKS resource, twice. In essence, this resource executes kubectl commands. The first command does a kubectl apply for both the deployment and service. On the first run, it provisions the pod and exposes it via a service. However, because the container image tag in the deployment file is set to “latest”, Kubernetes actually won’t retrieve new images with that tag after I apply a deployment. So, I “patched” the deployment in a second “put” step and set the deployment’s image tag to the semantic version. This triggers a pod refresh!
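
If you want to confirm that the patch actually rolled new pods, a standard kubectl check (run from the Cloud Shell or anywhere with the cluster credentials) will do it:

# watch the rollout triggered by the patched image tag
kubectl rollout status deployment/demo-app --namespace default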

Deploy and run the Concourse pipeline

I deployed the pipeline as a new revision with this command:

fly -t rs set-pipeline -c azure-k8s-final.yml -p azure-k8s-final

I unpaused the pipeline and watched it start up. It quickly reached and completed the “deploy to AKS” stage.

But did it actually work? I jumped back into the Azure Cloud Shell to check it out. First, I ran a kubectl get pods command. Then, a kubectl get services command. The first showed our running pod, and the second showed the external IP assigned to my pod.

I also issued a request to that URL in the browser, and got back my ASP.NET Core API results.

Also to prove that my “patch” command worked, I ran the kubectl get deployment demo-app --output=yaml command to see which container image my deployment referenced. As you can see below, it no longer references “latest” but rather, a semantic version number.

With all of these settings, I now have a pipeline that “just works” whenever I update my ASP.NET Core source code. It tests the code, packages it up, and deploys it to AKS in seconds. I’ve added all the pipelines we created here to GitHub so that you can easily try this all out.

Whatever CI/CD tool you use, invest in automating your path to production.

Building an Azure-powered Concourse pipeline for Kubernetes – Part 2: Packaging and containerizing code

Let’s continuously deliver an ASP.NET Core app to Kubernetes using Concourse. In part one of this blog series, I showed you how to set up your environment to follow along with me. It’s easy; just set up Azure Container Registry, Azure Storage, Azure Kubernetes Service, and Concourse. In this post, we’ll start our pipeline by pulling source code, running unit tests, generating a container image that’s stored in Azure Container Registry, and generating a tarball for Azure Blob Storage.

We’re building this pipeline with Concourse. Concourse has three core primitives: tasks, jobs, and resources. Tasks form jobs, jobs form pipelines, and state is stored in resources. Concourse is essentially stateless, meaning there are no artifacts on the server after a build. You also don’t register any plugins or extensions. Rather, the pipeline is executed in containers that go away after the pipeline finishes. Any state — be it source code or Docker images — resides in durable resources, not Concourse itself.

Let’s start building a pipeline.

Pulling source code

A Concourse pipeline is defined in YAML. Concourse ships with a handful of “known” resource types including Amazon S3, git, and Cloud Foundry. There are dozens and dozens of community ones, and it’s not hard to build your own. Because my source code is stored in GitHub, I can use the out-of-the-box resource type for git.

At the top of my pipeline, I declared that resource.

---
resources:
- name: source-code
  type: git
  icon: github-circle
  source:
    uri: https://github.com/rseroter/seroter-api-k8s
    branch: master

I gave the resource a name (“source-code”) and identified where the code lives. That’s it! Note that when you deploy a pipeline, Concourse produces containers that “check” resources on a schedule for any changes that should trigger a pipeline.
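
You can also trigger that check yourself from the CLI, which is handy when testing a pipeline. Assuming the pipeline name used later in this post (azure-k8s-rev1):

fly -t rs check-resource --resource azure-k8s-rev1/source-code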

Running unit tests

Next up? Build a working version of a pipeline that does something. Specifically, it should execute unit tests. That means we need to define a job.

A job has a build plan. That build plan contains any of three things: get steps (to retrieve a resource), put steps (to push something to a resource), and task steps (to run a script). Our job below has one get step (to retrieve source code), and one task (to execute the xUnit tests).

jobs:
- name: run-unit-tests
  plan:
  - get: source-code
    trigger: true
  - task: first-task
    config: 
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: mcr.microsoft.com/dotnet/core/sdk}
      inputs:
      - name: source-code
      run:
          path: sh
          args:
          - -exec
          - |
            dotnet test ./source-code/seroter-api-k8s/seroter-api-k8s.csproj 

Let’s break it down. First, my “plan” gets the source-code resource. And because I set “trigger: true” Concourse will kick off this job whenever it detects a change in the source code.

Next, my build plan has a “task” step. Tasks run in containers, so you need to choose a base image that runs the user-defined script. I chose the Microsoft-provided .NET Core image so that I’d be confident it had all the necessary .NET tooling installed. Note that my task has an “input.” Since tasks are like functions, they have inputs and outputs. Anything I input into the task is mounted into the container and is available to any scripts. So, by making the source-code an input, my shell script can party on the source code retrieved by Concourse.

Finally, I embedded a short script that invokes the “dotnet test” command. If I were being responsible, I’d refactor this embedded script into an external file and reference that file. But hey, this is easier to read.
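
If you do refactor it, the embedded config moves into a task file in the repo and the job references it. A minimal sketch, assuming a hypothetical file at seroter-api-k8s/ci/unit-test-task.yml:

# seroter-api-k8s/ci/unit-test-task.yml
platform: linux
image_resource:
  type: docker-image
  source: {repository: mcr.microsoft.com/dotnet/core/sdk}
inputs:
- name: source-code
run:
  path: sh
  args:
  - -exec
  - |
    dotnet test ./source-code/seroter-api-k8s/seroter-api-k8s.csproj

The job’s task step then points at that file instead of embedding the config:

- task: first-task
  file: source-code/seroter-api-k8s/ci/unit-test-task.yml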

This is now a valid pipeline. In the previous post, I had you install the fly CLI to interact with Concourse. From the fly CLI, I deploy pipelines with the following command:

fly -t rs set-pipeline -c azure-k8s-rev1.yml -p azure-k8s-rev1

That command says to use the “rs” target (which points to a given Concourse instance), use the YAML file holding the pipeline, and name this pipeline azure-k8s-rev1. It deployed instantly, and looked like this in the Concourse web dashboard.

After unpausing the pipeline so that it came alive, I saw the “run unit tests” job start running. It’s easy to view what a job is doing, and I saw that it loaded the container image from Microsoft, mounted the source code, ran my script and turned “green” because all my tests passed.

Nice! I had a working pipeline. Now to generate a container image.

Producing and publishing a container image

A pipeline that just runs tests is kinda weird. I need to do something when tests pass. In my case, I wanted to generate a Docker image. Another of the built-in Concourse resource types is “docker-image” which generates a container image and puts it into a registry. Here’s the resource definition that worked with Azure Container Registry:

resources:
- name: source-code
  [...]
- name: azure-container-registry
  type: docker-image
  icon: docker
  source:
    repository: myrepository.azurecr.io/seroter-api-k8s
    tag: latest
    username: ((azure-registry-username))
    password: ((azure-registry-password))

Where do you get those Azure Container Registry values? From the Azure Portal, they’re visible under “Access keys.” I grabbed the Username and one of the passwords.

Next, I added a new job to the pipeline.

jobs:
- name: run-unit-tests
  [...]
- name: containerize-app
  plan:
  - get: source-code
    trigger: true
    passed:
    - run-unit-tests
  - put: azure-container-registry
    params:
      build: ./source-code
      tag_as_latest: true

What’s this job doing? Notice that I “get” the source code again. I also set a “passed” attribute meaning this will only run if the unit test step completes successfully. This is how you start chaining jobs together into a pipeline! Then I “put” into the registry. Recall from the first blog post that I generated a Dockerfile from within Visual Studio for Mac, and here, I point to it. The resource does a “docker build” with that Dockerfile, tags the resulting image as the “latest” one, and pushes to the registry.

I pushed this as a new pipeline to Concourse:

fly -t rs set-pipeline -c azure-k8s-rev2.yml -p azure-k8s-rev2

I now had something that looked like a pipeline.

I manually triggered the “run unit tests” job, and after it completed, the “containerize app” job ran. When that was finished, I checked Azure Container Registry and saw a new repository with an image in it.

Generating and storing a tarball

Not every platform wants to run containers. BLASPHEMY! BURN THE HERETIC! Calm down. Some platforms happily take your source code and run it. So our pipeline should also generate a single artifact with all the published ASP.NET Core files.

I wanted to store this blob in Azure Storage. Since Azure Storage isn’t a built-in Concourse resource type, I needed to reference a community one. No problem finding one. For non-core resources, you have to declare the resource type in the pipeline YAML.

resource_types:
- name: azure-blobstore
  type: docker-image
  source:
    repository: pcfabr/azure-blobstore-resource

A resource type declaration is fairly simple; it’s just a type (often docker-image) and then the repo to get it from.

Next, I needed the standard resource definition. Here’s the one I created for Azure Storage:

- name: azure-blobstore
  type: azure-blobstore
  icon: azure
  source:
    storage_account_name: ((azure-storage-account-name))
    storage_account_key: ((azure-storage-account-key))
    container: coreapp
    versioned_file: app.tar.gz

Here the “type” matches the resource type name I set earlier. Then I set the credentials (retrieved from the “Access keys” section in the Azure Portal), container name (pre-created in the first blog post), and the name of the file to upload. Regex is supported here too.
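
For example, if you wanted each build to upload a uniquely named artifact instead of overwriting app.tar.gz, you could swap versioned_file for a pattern. The parameter name below is an assumption based on the resource’s S3-style conventions; check the resource’s README before relying on it.

- name: azure-blobstore
  type: azure-blobstore
  icon: azure
  source:
    storage_account_name: ((azure-storage-account-name))
    storage_account_key: ((azure-storage-account-key))
    container: coreapp
    regexp: app-(.*).tar.gz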

Finally, I added a new job that takes source code, runs a “publish” command, and creates a tarball from the result.

jobs:
- name: run-unit-tests
  [...]
- name: containerize-app
  [...]
- name: package-app
  plan:
  - get: source-code
    trigger: true
    passed:
    - run-unit-tests
  - task: first-task
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: mcr.microsoft.com/dotnet/core/sdk}
      inputs:
      - name: source-code
      outputs:
      - name: compiled-app
      - name: artifact-repo
      run:
          path: sh
          args:
          - -exec
          - |
            dotnet publish ./source-code/seroter-api-k8s/seroter-api-k8s.csproj -o .././compiled-app
            tar -czvf ./artifact-repo/app.tar.gz ./compiled-app
            ls
  - put: azure-blobstore
    params:
      file: artifact-repo/app.tar.gz

Note that this job is also triggered when unit tests succeed. But it’s not connected to the containerization job, so it runs in parallel. Also note that in addition to an input, I also have outputs defined on the task. This generates folders that are visible to subsequent steps in the job. I dropped the tarball into the “artifact-repo” folder, and then “put” that file into Azure Blob Storage.

I deployed this pipeline as yet another revision:

fly -t rs set-pipeline -c azure-k8s-rev3.yml -p azure-k8s-rev3

Now this pipeline’s looking pretty hot. Notice that I have parallel jobs that fire after I run unit tests.

I once again triggered the unit test job, and watched the subsequent jobs fire. After the pipeline finished, I had another updated container image in Azure Container Registry and a file sitting in Azure Storage.

Adding a semantic version to the container image

I could stop there and push to Kubernetes (next post!), but I wanted to do one more thing. I don’t like publishing Docker images with the “latest” tag. I want a real version number. It makes sense for many reasons, not the least of which is that Kubernetes won’t pick up changes to a container if the tag doesn’t change! Fortunately, Concourse has a default resource type for semantic versioning.

There are a few backing stores for the version number. Since Concourse is stateless, we need to keep the version value outside of Concourse itself. I chose a git backend. Specifically, I added a branch named “version” to my GitHub repo, and added a single file (no extension) named “version”. I started the version at 0.1.0.
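
One way to seed that branch and file from the command line (a sketch; adjust the branch handling to taste):

# create an empty orphan branch holding only the version file
git checkout --orphan version
git rm -rf .
echo '0.1.0' > version
git add version
git commit -m "seed semantic version"
git push origin version
git checkout master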

Then, I ensured that my GitHub account had an SSH key associated with it. I needed this so that Concourse could write changes to this version file sitting in GitHub.

I added a new resource to my pipeline definition, referencing the built-in semver resource type.

- name: version  
  type: semver
  source:
    driver: git
    uri: git@github.com:rseroter/seroter-api-k8s.git
    branch: version
    file: version
    private_key: |
        -----BEGIN OPENSSH PRIVATE KEY-----
        [...]
        -----END OPENSSH PRIVATE KEY-----

In that resource definition, I pointed at the repo URI, branch, file name, and embedded the private key for my account.

Next, I updated the existing “containerization” job to get the version resource, use it, and then update it.

jobs:
- name: run-unit-tests
  [...] 
- name: containerize-app
  plan:
  - get: source-code
    trigger: true
    passed:
    - run-unit-tests
  - get: version
    params: {bump: minor}
  - put: azure-container-registry
    params:
      build: ./source-code
      tag_file: version/version
      tag_as_latest: true
  - put: version
    params: {file: version/version}
- name: package-app
  [...]

First, I added another “get” for version. Notice that its parameter increments the number by one minor version. Then, see that the “put” for the container registry uses “version/version” as the tag file. This ensures our Docker image is tagged with the semantic version number. Finally, notice I “put” the incremented version file back into GitHub after using it successfully.

I deployed a fourth revision of this pipeline using this command:

fly -t rs set-pipeline -c azure-k8s-rev4.yml -p azure-k8s-rev4

You see the pipeline, post-execution, below. The “version” resource comes into and out of the “containerize app” job.

With the pipeline done, I saw that the “version” value in GitHub was incremented by the pipeline, and most importantly, our Docker image has a version tag.

In this blog post, we saw how to gradually build up a pipeline that retrieves source and prepares it for downstream deployment. Concourse is fun and easy to use, and its extensibility made it straightforward to deal with managed Azure services. In the final blog post of this series, we’ll take the pipeline-generated Docker image and deploy it to Azure Kubernetes Service.


Building an Azure-powered Concourse pipeline for Kubernetes – Part 1: Setup

Isn’t it frustrating to build great software and helplessly watch as it waits to get deployed? We don’t just want to build software in small batches, we want to ship it in small batches. This helps us learn faster, and gives our users a non-stop stream of new value.

I’m a big fan of Concourse. It’s a continuous integration platform that reflects modern cloud-native values: it’s open source, container-native, stateless, and developer-friendly. And all pipeline definitions are declarative (via YAML) and easily source controlled. I wanted to learn how to build a Concourse pipeline that unit tests an ASP.NET Core app, packages it up and stashes a tarball in Azure Storage, creates a Docker container and stores it in Azure Container Registry, and then deploys the app to Azure Kubernetes Service. In this three part blog series, we’ll do just that! Here’s the final pipeline:

This first post looks at everything I did to set up the scenario.

My ASP.NET Core web app

I used Visual Studio for Mac to build a new ASP.NET Core Web API. I added NuGet package dependencies to xunit and xunit.runner.visualstudio. The API controller is super basic, with three operations.

[Route("api/[controller]")]
[ApiController]
public class ValuesController : ControllerBase
{
    [HttpGet]
    public ActionResult<IEnumerable<string>> Get()
    {
        return new string[] { "value1", "value2" };
    }

    [HttpGet("{id}")]
    public string Get(int id)
    {
        return "value1";
    }

    [HttpGet("{id}/status")]
    public string GetOrderStatus(int id)
    {
        if (id > 0 && id <= 20)
        {
            return "shipped";
        }
        else
        {
            return "processing";
        }
    }
}

I also added a Testing class for unit tests.

    public class TestClass
    {
        private ValuesController _vc;

        public TestClass()
        {
            _vc = new ValuesController();
        }

        [Fact]
        public void Test1()
        {
            Assert.Equal("value1", _vc.Get(1));
        }

        [Theory]
        [InlineData(1)]
        [InlineData(3)]
        [InlineData(9)]
        public void Test2(int value)
        {
            Assert.Equal("shipped", _vc.GetOrderStatus(value));
        }
    }

Next, I right-clicked my project and added “Docker Support.”

What this does is add a Docker Compose project to the solution, and a Dockerfile to the project. Due to relative paths and such, if you try to run “docker build” directly within the project directory containing the Dockerfile, Docker gets angry. It’s meant to be invoked from the parent directory with a path to the project’s directory, like:

docker build -f seroter-api-k8s/Dockerfile .

I wasn’t sure if my pipeline could handle that nuance when containerizing my app, so I just went ahead and moved the generated Dockerfile to the parent directory like in the screenshot below. From here, I could just execute the docker build command.
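
I’m not reproducing the exact file Visual Studio generated, but a minimal multi-stage Dockerfile for a project laid out this way might look like the sketch below (the image tags and paths are assumptions):

# build stage: restore, build, and publish the app
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
WORKDIR /src
COPY . .
RUN dotnet publish seroter-api-k8s/seroter-api-k8s.csproj -c Release -o /app

# runtime stage: copy only the published output into the smaller ASP.NET Core runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2 AS runtime
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "seroter-api-k8s.dll"]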

You can find the complete project up on my GitHub.

Instantiating an Azure Container Registry

Where should we store our pipeline-created container images? You’ve got lots of options. You could use the Docker Hub, self-managed OSS projects like VMware’s Harbor, or cloud-specific services like Azure Container Registry. Since I’m trying to use all-things Azure, I chose the latter.

It’s easy to set up an ACR. Once I provided a couple of parameters via the Azure Dashboard, I had a running, managed container registry.

Provisioning an Azure Storage blob

Container images are great. We may also want the raw published .NET project package for archival purposes, or to deploy to non-container runtimes. I chose Azure Storage for this purpose.

I created a blob storage account named seroterbuilds, and then a single blob container named coreapp. This isn’t a Docker container, but just a logical construct to hold blobs.

Creating an Azure Kubernetes Cluster

It’s not hard to find a way to run Kubernetes. I think my hair stylist sells a distribution. You can certainly spin up your own vanilla server environment from the OSS bits. Or run it on your desktop with minikube. Or run an enterprise-grade version anywhere with something like VMware PKS. Or run it via managed service with something like Azure Kubernetes Service (AKS).

AKS is easy to set up, and I provided the version (1.13.9), node pool size, service principal for authentication, and basic HTTP routing for hosted containers. My 3-node cluster was up and running in a few minutes.

Starting up a Concourse environment

Finally, Concourse. If you visit the Concourse website, there’s a link to a Docker Compose file you can download and start up via docker-compose up. This starts up the database, worker, and web node components needed to host pipelines.
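
The quick start amounts to two commands (the URL comes from the Concourse site; check there for the current version):

curl -O https://concourse-ci.org/docker-compose.yml
docker-compose up -d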

Once Concourse is up and running, the web-based Dashboard is available on localhost:8080.

From there you can find links (bottom left) to downloads for the command line tool (called fly). This is the primary UX for deploying and troubleshooting pipelines.

With fly installed, we create a “target” that points to our environment. Do this with the following statement. Note that I’m using “rs” (my initials) as the alias, which gets used for each fly command.

fly -t rs login -c http://localhost:8080

Once I request a Concourse login (default username is “test” and password is “test”), I’m routed to the dashboard to get a token, which gets loaded automatically into the CLI.

At this point, we’ve got a functional ASP.NET Core app, a container registry, an object storage destination, a managed Kubernetes environment, and a Concourse. In the next post, we’ll build the first part of our Azure-focused pipeline that reads source code, runs tests, and packages the artifacts.


Which of the 295,680 platform combinations will you create on Microsoft Azure?

Microsoft Azure isn’t a platform. Like every other public cloud, it’s an environment and set of components you’ll use to construct your platform. It’s more like an operating system that you build on, versus some integrated product you just start using. And that’s exciting. Which platform will you create? Let’s look at what a software platform is, how I landed on 295,680 unique configurations — 10,036,224,000 configurations if you add a handful of popular 3rd party products — and the implications of all this.

What’s a software platform?

Let’s not overthink this. When I refer to a software platform, I’m thinking of the technologies that come together to help you deploy, run, and manage software. This applies whether we’re talking about stuff in your data center or in the public cloud. It applies to serverless systems or good ol’ virtual machines.

I made a map of the non-negotiable capabilities. One way or another, you’re stitching these things together. You need an app runtime (e.g. VMs, FaaS, containers), databases, app deployment tools, infrastructure/config management, identity systems, networking, monitoring, and more. Pretty much all your running systems depend on this combination of things.

How many platform combinations are there in Microsoft Azure?

I’m not picking on Microsoft here; you face the same decision points in every cloud. Each customer (realistically, each tenant in your account) chooses among the various services to assemble the platform they want. Let’s first talk about only using Microsoft’s first-party services. So, assume you aren’t using any other commercial or OSS products to build and run your systems. Unlikely, but hey, work with me here.

For math purposes, let’s calculate unique combinations by assuming your platform uses one thing from each category. The exception? App runtimes and databases. I think it’s more likely you’re using at least a pair of runtimes (e.g. Azure App Service AND Azure Kubernetes Service, or, Windows VMs AND Linux VMs), and a pair of databases. You may be using more, but I’m being conservative.

295,680 unique platforms you can build on Azure using only native services. Ok, that’s a lot. Now that said, I suspect it’s VERY unlikely you’ll only use native cloud services to build and run software. You might love using Terraform to deploy infrastructure. Or MongoDB Atlas for a managed MongoDB environment. Kong may offer your API gateway of choice. Maybe Prometheus is your standard for monitoring. And don’t forget about all the cool managed services like Twilio, Auth0, or Algolia. The innovation outside of any one cloud will always be greater than the innovation within that cloud. You want in on some of that action! So what happens if I add just a few leading non-Azure services to your platform?

Yowza. 10 billion platform combinations. You can argue with my math or approach, but hopefully you get my point.

And again, this really isn’t about any particular cloud. You can end up in the same place on AWS, Digital Ocean, or Alibaba. It’s simply the result of all the amazing choices we have today.

Who cares?

If you’re working at a small company, or simply an individual using cloud services, this doesn’t really matter. You’ll choose the components that help you deliver software best, and go with it.

Where this gets interesting is in the enterprise. You’ve got many distinct business units, lots of existing technology in use, and different approaches to using cloud computing. Done poorly, your cloud environment stands to be less secure, harder to maintain, and more chaotic than anything you have today. Done well, your cloud environment will accelerate time-to-market and lower your infrastructure costs so that you can spend more time building software your customers love.

My advice? Decide on your tenancy model up-front, and purposely limit your choices.

In the public cloud, you can separate user pools (“tenants”) in at least three different ways:

  1. Everyone in one account. Some companies start this way, and use role-based access or other in-account mechanisms (e.g. resource groups) to silo individual groups or apps. On the positive side, this model is easier to audit and developers can easily share access to services. Challenges here include a bigger blast radius for mistakes, and greater likelihood of broad processes (e.g. provisioning or policy changes) that slow individuals down.
  2. Separate-but-equal sub accounts. I’ve seen some large companies use child accounts or subscriptions for each individual tenant. Each tenant’s account is set up in the same way, with access to the same services. Every tenant owns their own resources, but they operate a standard “platform.” On the plus side, this makes it easier to troubleshoot problems across the org, and enforce a consistent security profile. It also makes engineer rotation easier, since each team has the same stack. On the downside, this model doesn’t account for unique needs of each team and may force suboptimal tech choices in the name of consistency.
  3. Independent sub accounts. A few weeks ago, I watched my friend Brian Chambers explain that each of 30 software teams at Chick-fil-A has their own AWS account, with a recommended configuration that tenants can use, modify, or ignore. Here, every tenant can do their own thing. One of the benefits is that each group can cater their platform to their needs. If a team wants to go entirely serverless, awesome. Another may be all in on Kubernetes. The downside of this model is that you can’t centralize things like security patching, and you can end up with snowflake environments that increase your costs over time.

Finally, it’s wise to purposely limit choices. I wrote about this topic last month. Facing too many choices can paralyze you, and all that choice adds only incremental benefits. For mature categories like databases, pick two and stop there. If it’s an emerging space, offer more freedom until commoditization happens. Give your teams some freedom to build their platform on the public cloud, but put some guardrails in place.

Connecting your Java microservices to each other? Here’s how to use Spring Cloud Stream with Azure Event Hubs.

You’ve got microservices. Great. They’re being continuous delivered. Neato. Ok … now what? The next hurdle you may face is data processing amongst this distributed mesh o’ things. Brokered messaging engines like Azure Service Bus or RabbitMQ are nice choices if you want pub/sub routing and smarts residing inside the broker. Lately, many folks have gotten excited by stateful stream processing scenarios and using distributed logs as a shared source of events. In those cases, you use something like Apache Kafka or Azure Event Hubs and rely on smart(er) clients to figure out what to read and what to process. What should you use to build these smart stream processing clients?

I’ve written about Spring Cloud Stream a handful of times, and last year showed how to integrate with the Kafka interface on Azure Event Hubs. Just today, Microsoft shipped a brand new “binder” for Spring Cloud Stream that works directly with Azure Event Hubs. Event processing engines aren’t useful if you aren’t actually publishing or subscribing to events, so I thought I’d try out this new binder and see how to light up Azure Event Hubs.

Setting Up Microsoft Azure

First, I created a new Azure Storage account. When reading from an Event Hubs partition, the client maintains a cursor. This cursor tells the client where it should start reading data from. You have the option to store this cursor server-side in an Azure Storage account so that when your app restarts, you can pick up where you left off.

There’s no need for me to create anything in the Storage account, as the Spring Cloud Stream binder can handle that for me.

Next, the actual Azure Event Hubs account! First I created the namespace. Here, I chose things like a name, region, pricing tier, and throughput units.

Like with the Storage account, I could stop here. My application will automatically create the actual Event Hub if it doesn’t exist. In reality, I’d probably want to create it first so that I could pre-define things like partition count and message retention period.

Creating the event publisher

The event publisher takes in a message via web request, and publishes that message for others to process. The app is a Spring Boot app, and I used the start.spring.io experience baked into Spring Tools (for Eclipse, Atom, and VS Code) to instantiate my project. Note that I chose “web” and “cloud stream” dependencies.

With the project created, I added the Event Hubs binder to my project. In the pom.xml file, I added a reference to the Maven package.

<dependency>
  <groupId>com.microsoft.azure</groupId>
  <artifactId>spring-cloud-azure-eventhubs-stream-binder</artifactId>
  <version>1.1.0.RC5</version>
</dependency>

Now before going much farther, I needed a credentials file. Basically, it includes all the info needed for the binder to successfully chat with Azure Event Hubs. You use the az CLI tool to generate it. If you don’t have it handy, the easiest option is to use the Cloud Shell built into the Azure Portal.

From here, I ran az account list to show all my Azure subscriptions. I chose the one that holds my Azure Event Hub and copied the associated GUID. Then, I set that account as my default one for the CLI with this command:

az account set -s 11111111-1111-1111-1111-111111111111

With that done, I issued another command to generate the credential file.

az ad sp create-for-rbac --sdk-auth > my.azureauth

I opened up that file within the Cloud Shell, copied the contents, and pasted the JSON content into a new file in the resources directory of my Spring Boot app.
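
The generated file is JSON. Stripped down, it looks roughly like this; the real file also includes several Azure endpoint URLs, and the values below are placeholders:

{
  "clientId": "<service principal application id>",
  "clientSecret": "<service principal password>",
  "subscriptionId": "<subscription guid>",
  "tenantId": "<azure ad tenant guid>"
}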

Next up, the code. Because we’re using Spring Cloud Stream, there’s no specific Event Hubs logic in my code itself. I only use Spring Cloud Stream concepts, which abstracts away any boilerplate configuration and setup. The code below shows a simple REST controller that takes in a message, and publishes that message to the output channel. Behind the scenes, when my app starts up, Boot discovers and inflates all the objects needed to securely talk to Azure Event Hubs.

@EnableBinding(Source.class)
@RestController
@SpringBootApplication
public class SpringStreamEventhubsProducerApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringStreamEventhubsProducerApplication.class, args);
    }

    @Autowired
    private Source source;

    @PostMapping("/messages")
    public String postMsg(@RequestBody String msg) {

        this.source.output().send(new GenericMessage<>(msg));
        return msg;
    }
}

How simple is that? All that’s left is the application properties used by the app. Here, I set a few general Spring Cloud Stream properties, and a few related to the Event Hubs binder.

 #point to credentials
spring.cloud.azure.credential-file-path=my.azureauth
#get these values from the Azure Portal
spring.cloud.azure.resource-group=demos
spring.cloud.azure.region=East US
spring.cloud.azure.eventhub.namespace=seroter-event-hub

#choose where to store checkpoints
spring.cloud.azure.eventhub.checkpoint-storage-account=serotereventhubs

#set the name of the Event Hub
spring.cloud.stream.bindings.output.destination=seroterhub

#be lazy and let the app create the Storage blobs and Event Hub
spring.cloud.azure.auto-create-resources=true

With that, I had a working publisher.

Creating the event subscriber

It’s no fun publishing messages if no one ever reads them. So, I built a subscriber. I walked through the same start.spring.io experience as above, this time ONLY choosing the Cloud Stream dependency. And then added the Event Hubs binder to the pom.xml file of the created project. I also copied the my.azureauth file (containing our credentials) from the publisher project to the subscriber project.

It’s criminally simple to pull messages from a broker using Spring Cloud Stream. Here’s the full extent of the code. Stream handles things like content type transformation, and so much more.

@EnableBinding(Sink.class)
@SpringBootApplication
public class SpringStreamEventhubsConsumerApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringStreamEventhubsConsumerApplication.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void handleMessage(String msg) {
        System.out.println("message is " + msg);
    }
}

The final step involved defining the application properties, including the Storage account for checkpointing, and whether to automatically create the Azure resources.

 #point to credentials
spring.cloud.azure.credential-file-path=my.azureauth
#get these values from the Azure Portal
spring.cloud.azure.resource-group=demos
spring.cloud.azure.region=East US
spring.cloud.azure.eventhub.namespace=seroter-event-hub

#choose where to store checkpoints
spring.cloud.azure.eventhub.checkpoint-storage-account=serotereventhubs

#set the name of the Event Hub
spring.cloud.stream.bindings.input.destination=seroterhub
#set the consumer group
spring.cloud.stream.bindings.input.group=system3

#read from the earliest point in the log; default val is LATEST
spring.cloud.stream.eventhub.bindings.input.consumer.start-position=EARLIEST

#be lazy and let the app create the Storage blobs and Event Hub
spring.cloud.azure.auto-create-resources=true

And now we have a working subscriber.

Testing this thing

First, I started up the producer app. It started up successfully, and I can see in the startup log that it created the Event Hub automatically for me after connecting.

To be sure, I checked the Azure Portal and saw a new Event Hub with 4 partitions.

Sweet. I called the REST endpoint on my app three times to get a few messages into the Event Hub.
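
Any HTTP client works for this; for example, assuming the app is listening on the default port 8080:

curl -X POST http://localhost:8080/messages -H "Content-Type: text/plain" -d "hello event hubs"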

Now remember, since we’re dealing with a log versus a queuing system, my consumers don’t have to be online (or even registered anywhere) to get the data at their leisure. I can attach to the log at any time and start reading it. So that data is just hanging out in Event Hubs until its retention period expires.

I started up my Spring Boot subscriber app. After a couple moments, it connected to Azure Event Hubs, and read the three entries that it hadn’t ever seen before.

Back in the Azure Portal, I checked and saw a new blob container in my Storage account, with a folder for my consumer group, and checkpoints for each partition.

If I sent more messages into the REST endpoint, they immediately appeared in my subscriber app. What if I defined a new consumer group? Would it read all the messages from the beginning?

I stopped the subscriber app, changed the application property for “consumer group” to “system4” and restarted the app. After Spring Cloud Stream connected to each partition, it pumped out whatever it found, and responded immediately to any new entries.

Whether you’re building a change-feed listener off of Cosmos DB, sharing data between business partners, or doing data processing between microservices, you’ll probably be using a broker. If it’s an event bus like Azure Event Hubs, you now have an easy path with Spring Cloud Stream.

Want to yank configuration values from your .NET Core apps? Here’s how to store and access them in Azure and AWS.

Creating new .NET apps, or modernizing existing ones? If you’re following the 12-factor criteria, you’re probably keeping your configuration out of the code. That means not stashing feature flags in your web.config file, or hard-coding connection strings inside your classes. So where’s this stuff supposed to go? Environment variables are okay, but not a great choice; no version control or access restrictions. What about an off-box configuration service? Now we’re talking. Fortunately AWS, and now Microsoft Azure, offer one that’s friendly to .NET devs. I’ll show you how to create and access configurations in each cloud, and as a bonus, throw out a third option.

.NET Core has a very nice configuration system that makes it easy to read configuration data from a variety of pluggable sources. That means that for the three demos below, I’ve got virtually identical code even though the back-end configuration stores are wildly different.

AWS

Setting it up

AWS offers a parameter store as part of the AWS Systems Manager service. This service is designed to surface information and automate tasks across your cloud infrastructure. While the parameter store is useful to support infrastructure automation, it’s also a handy little place to cram configuration values. And from what I can tell, it’s free to use.

To start, I went to the AWS Console, found the Systems Manager service, and chose Parameter Store from the left menu. From here, I could see, edit or delete existing parameters, and create new ones.

Each parameter gets a name and value. For the name, I used a “/” to define a hierarchy. The parameter type can be a string, list of strings, or encrypted string.

The UI was smart enough that when I went to go add a second parameter (/seroterdemo/properties/awsvalue2), it detected my existing hierarchy.
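
The console works fine, but the same thing is a one-liner with the AWS CLI (the value here is just a placeholder):

aws ssm put-parameter --name "/seroterdemo/properties/awsvalue2" --value "some configuration value" --type "String"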

Ok, that’s it. Now I was ready to use it in my .NET Core web app.

Using from code

Before starting, I installed the AWS CLI. I tried to figure out where to pass credentials into the AWS SDK, and stumbled upon some local introspection that the SDK does. Among other options, it looks for files in a local directory, and those files get created for you when you install the AWS CLI. Just a heads up!

I created a new .NET Core MVC project, and added the Amazon.Extensions.Configuration.SystemsManager package. Then I created a simple “Settings” class that holds the configuration values we’ll get back from AWS.

public class Settings
{
    public string awsvalue { get; set; }
    public string awsvalue2 { get; set; }
}

In the appsettings.json file, I told my app which AWS region to use.

{
  "Logging": {
    "LogLevel": {
      "Default": "Warning"
    }
  },
  "AllowedHosts": "*",
  "AWS": {
    "Profile": "default",
    "Region": "us-west-2"
  }
}

In the Program.cs file, I updated the web host to pull configurations from Systems Manager. Here, I’m pulling settings that start with /seroterdemo.

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration(builder =>
            {
                builder.AddSystemsManager("/seroterdemo");
            })
            .UseStartup<Startup>();
}

Finally, I wanted to make my configuration properties available to my app code. So in the Startup.cs file, I grabbed the configuration properties I wanted, inflated the Settings object, and made it available to the runtime container.

public void ConfigureServices(IServiceCollection services)
{
    services.Configure<Settings>(Configuration.GetSection("properties"));

    services.Configure<CookiePolicyOptions>(options =>
    {
        options.CheckConsentNeeded = context => true;
        options.MinimumSameSitePolicy = SameSiteMode.None;
    });
}

Last step? Accessing the configuration properties! In my controller, I defined a private variable that would hold a local reference to the configuration values, pulled them in through the constructor, and then grabbed out the values in the Index() operation.

private readonly Settings _settings;

public HomeController(IOptions<Settings> settings)
{
    _settings = settings.Value;
}

public IActionResult Index()
{
    ViewData["configval"] = _settings.awsvalue;
    ViewData["configval2"] = _settings.awsvalue2;

    return View();
}

After updating my View to show the two properties, I started up my app. As expected, the two configuration values showed up.

What I like

You gotta like that price! AWS Systems Manager is available at no cost, and there appears to be no cost to the parameter store. Wicked.

Also, it’s cool that you have an easily-visible change history. You can see below that the audit trail shows what changed for each version, and who changed it.

The AWS team built this extension for .NET Core, and they added capabilities for reloading parameters automatically. Nice touch!

Microsoft Azure

Setting it up

Microsoft just shared the preview release of the Azure App Configuration service. This managed service is specifically created to help you centralize configurations. It’s brand new, but seems to be in pretty good shape already. Let’s take it for a spin.

From the Microsoft Azure Portal, I searched for “configuration” and found the preview service.

I named my resource seroter-config, picked a region and that was it. After a moment, I had a service instance to mess with. I quickly added two key-value combos.

That was all I needed to do to set this up.
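
If you prefer the CLI, the equivalent call looks like this, assuming the az appconfig commands are available in your shell; the key name mirrors the configuration section my app reads later, and the value is a placeholder:

az appconfig kv set --name seroter-config --key "seroterdemo:properties:azurevalue1" --value "some configuration value"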

Using from code

I created another new .NET Core MVC project and added the Microsoft.Extensions.Configuration.AzureAppConfiguration package. Once again I created a Settings class to hold the values that I got back from the Azure service.

public class Settings
{
    public string azurevalue1 { get; set; }
    public string azurevalue2 { get; set; }
}

Next up, I updated my Program.cs file to read the Azure App Configuration. I passed the connection string in here, but there are better ways available.

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration((hostingContext, config) =>
            {
                var settings = config.Build();
                config.AddAzureAppConfiguration("[con string]");
            })
            .UseStartup<Startup>();
}

I also updated the ConfigureServices() operation in my Startup.cs file. Here, I chose to only pull configurations that started with seroterdemo:properties.

public void ConfigureServices(IServiceCollection services)
{
    //added
    services.Configure<Settings>(Configuration.GetSection("seroterdemo:properties"));

    services.Configure<CookiePolicyOptions>(options =>
    {
        options.CheckConsentNeeded = context => true;
        options.MinimumSameSitePolicy = SameSiteMode.None;
    });
}

To read those values in my controller, I’ve got just about the same code as in the AWS example. The only difference was what I called my class members!

private readonly Settings _settings;

public HomeController(IOptions<Settings> settings)
{
    _settings = settings.Value;
}

public IActionResult Index()
{
    ViewData["configval"] = _settings.azurevalue1;
    ViewData["configval2"] = _settings.azurevalue2;

    return View();
}

I once again updated my View to print out the configuration values, and not shockingly, it worked fine.

What I like

For a new service, there’s a few good things to like here. The concept of labels is handy, as it lets me build keys that serve different environments. See here that I created labels for “qa” and “dev” on the same key.

I saw a “compare” feature which looks handy. There’s also a simple search interface here too, which is valuable.

Pricing isn’t yet available, so I’m not clear on how I’d have to pay for this.

Spring Cloud Config

Setting it up

Both of the above services are quite nice. And super convenient if you’re running in those clouds. You might also want a portable configuration store that offers its own pluggable backing engines. Spring Cloud Config makes it easy to build a config store backed by a file system, git, GitHub, Hashicorp Vault, and more. It’s accessible via HTTP/S, supports encryption, is fully open source, and much more.

I created a new Spring project from start.spring.io. I chose to include the Spring Cloud Config Server and generate the project.

Literally all the code required is a single annotation (@EnableConfigServer).

@EnableConfigServer
@SpringBootApplication
public class SpringBlogConfigServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringBlogConfigServerApplication.class, args);
    }
}

In my application properties, I pointed my config server to the location of the configs to read (my GitHub repo), and which port to start up on.

server.port=8888
spring.cloud.config.server.encrypt.enabled=false
spring.cloud.config.server.git.uri=https://github.com/rseroter/spring-demo-configs

My GitHub repo has a configuration file called blogconfig.properties with the following content:
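
As a sketch, it’s a plain properties file whose keys match the Settings class shown later in this post; the values below are placeholders:

property1=value one
property2=value two
property3=value three
property4=value four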

With that, I started up the project, and had a running configuration server.
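
A quick way to prove it’s serving configs is to hit the standard /{application}/{profile} endpoint:

curl http://localhost:8888/blogconfig/default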

Using from code

To talk to this configuration store from my .NET app, I used the increasingly-popular Steeltoe library. These packages, created by Pivotal, bring microservices patterns to your .NET (Framework or Core) apps.

For the last time, I created a .NET Core MVC project. This time I added a dependency to Steeltoe.Extensions.Configuration.ConfigServerCore. Again, I added a Settings class to hold these configuration properties.

public class Settings
{
    public string property1 { get; set; }
    public string property2 { get; set; }
    public string property3 { get; set; }
    public string property4 { get; set; }
}

In my appsettings.json, I set my application name (to match the config file’s name I want to access) and URI of the config server.

{
  "Logging": {
    "LogLevel": {
      "Default": "Warning"
    }
  },
  "AllowedHosts": "*",
  "spring": {
    "application": {
      "name": "blogconfig"
    },
    "cloud": {
      "config": {
        "uri": "http://localhost:8888"
      }
    }
  }
}

My Program.cs file has a “using” statement for the Steeltoe.Extensions.Configuration.ConfigServer package, and uses the “AddConfigServer” operation to add the config server as a source.

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .AddConfigServer()
            .UseStartup<Startup>();
}

I once again updated the Startup.cs file to load the target configurations into my typed object.

public void ConfigureServices(IServiceCollection services)
{
    services.Configure<CookiePolicyOptions>(options =>
    {
        options.CheckConsentNeeded = context => true;
        options.MinimumSameSitePolicy = SameSiteMode.None;
    });

    services.Configure<Settings>(Configuration);
}

My controller pulled the configuration object, and I used it to yank out values to share with the View.

public HomeController(IOptions<Settings> mySettings)
{
    _mySettings = mySettings.Value;
}

Settings _mySettings { get; set; }

public IActionResult Index()
{
    ViewData["configval"] = _mySettings.property1;
    return View();
}

Updating the view, and starting the .NET Core app yielded the expected results.

What I like

Spring Cloud Config is a very mature OSS project. You can deliver this sort of microservices machinery along with your apps in your CI/CD pipelines — these components are software that you ship versus services that need to be running — which is powerful. It offers a variety of backends, OAuth2 for security, encryption/decryption of values, and much more. It’s a terrific choice for a consistent configuration store on every infrastructure.

But realistically, I don’t care which of the above you use. Just use something to extract environment-specific configuration settings from your .NET apps. Use these robust external stores to establish some rigor around these values, and make it easier to share configurations, and keep them in sync across all of your application instances.



Go “multi-cloud” while *still* using unique cloud services? I did it using Spring Boot and MongoDB APIs.

What do you think of when you hear the phrase “multi-cloud”? Ok, besides stupid marketing people and their dumb words. You might think of companies with on-premises environments who are moving some workloads into a public cloud. Or those who organically use a few different clouds, picking the best one for each workload. While many suggest that you get the best value by putting everything on one provider, that clearly isn’t happening yet. And maybe it shouldn’t. Who knows. But can you get the best of each cloud while retaining some portability? I think you can.

One multi-cloud solution is to do the lowest-common-denominator thing. I really don’t like that. Multi-cloud management tools try to standardize cloud infrastructure but always leave me disappointed. And avoiding each cloud’s novel services in the name of portability is unsatisfying and leaves you at a competitive disadvantage. But why should we choose the cloud (Azure! AWS! GCP!) and runtime (Kubernetes! VMs!) before we’ve even written a line of code? Can’t we make those into boring implementation details, and return our focus to writing great software? I’d propose that with good app frameworks, and increasingly-standard interfaces, you can create great software that runs on any cloud, while still using their novel services.

In this post, I’ll build a RESTful API with Spring Boot and deploy it, without code changes, to four different environments:

  1. Local environment running MongoDB software in a Docker container.
  2. Microsoft Azure Cosmos DB with MongoDB interface.
  3. Amazon DocumentDB with MongoDB interface.
  4. MongoDB Enterprise running as a service within Pivotal Cloud Foundry.

Side note: Ok, so multi-cloud sounds good, but it seems like a nightmare of ops headaches and nonstop dev training. That’s true, it sure can be. But if you use a good multi-cloud app platform like Pivotal Cloud Foundry, it honestly makes the dev and ops experience virtually the same everywhere. So, it doesn’t HAVE to suck, although there are still going to be challenges. Ideally, your choice of cloud is a deploy-time decision, not a design-time constraint.

Creating the app

In my career, I’ve coded (poorly) with .NET, Node, and Java, and I can say that Spring Boot is the fastest way I’ve seen to build production-quality apps. So, I chose Spring Boot to build my RESTful API. This API stores and returns information about cloud databases. HOW VERY META. I chose MongoDB as my backend database, and used the amazing Spring Data to simplify interactions with the data source.

From start.spring.io, I created a project with dependencies on spring-boot-starter-data-rest (auto-generated REST endpoints for interacting with databases), spring-boot-starter-data-mongodb (to talk to MongoDB), spring-boot-starter-actuator (for “free” health metrics), and spring-cloud-cloudfoundry-connector (to pull connection details from the Cloud Foundry environment). Then I opened the project and created a new Java class representing a CloudProvider.

package seroter.demo.cloudmongodb;

import org.springframework.data.annotation.Id;

public class CloudProvider {
        
        @Id private String id;
        
        private String providerName;
        private Integer numberOfDatabases;
        private Boolean mongoAsService;
        
        public String getProviderName() {
                return providerName;
        }
        
        public void setProviderName(String providerName) {
                this.providerName = providerName;
        }
        
        public Integer getNumberOfDatabases() {
                return numberOfDatabases;
        }
        
        public void setNumberOfDatabases(Integer numberOfDatabases) {
                this.numberOfDatabases = numberOfDatabases;
        }
        
        public Boolean getMongoAsService() {
                return mongoAsService;
        }
        
        public void setMongoAsService(Boolean mongoAsService) {
                this.mongoAsService = mongoAsService;
        }
}

Thanks to Spring Data REST (which is silly powerful), all that was left was to define a repository interface. If all I did was create and annotate the interface, I’d get full CRUD interactions with my MongoDB collection. But for fun, I also added an operation that would return all the clouds that did (or did not) offer a MongoDB service.

package seroter.demo.cloudmongodb;

import java.util.List;

import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

@RepositoryRestResource(collectionResourceRel = "clouds", path = "clouds")
public interface CloudProviderRepository extends MongoRepository<CloudProvider, String> {
        
        //add an operation to search for a specific condition
        List<CloudProvider> findByMongoAsService(Boolean mongoAsService);
}

That’s literally all my code. Crazy.

Run using Dockerized MongoDB

To start this test, I wanted to use “real” MongoDB software. So I pulled the popular Docker image and started it up on my local machine:

docker run -d -p 27017:27017 --name serotermongo mongo

When starting up my Spring Boot app, I could provide database connection info either (1) in an application.properties file, or (2) as startup parameters that keep everything out of the compiled package itself. I chose the file option for readability and demo purposes, and it looked like this:

#local configuration
spring.data.mongodb.uri=mongodb://0.0.0.0:27017
spring.data.mongodb.database=demodb

#port configuration
server.port=${PORT:8080}

After starting the app, I issued a base request to my API via Postman. Sure enough, I got a response. As expected, no data in my MongoDB database. Note that Spring Data automatically creates a database if it doesn’t find the one specified, so the “demodb” now existed.

I then issued a POST command to add a record to MongoDB, and that worked great too. I got back the URI for the new record in the response.
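
If you’d rather script it than click around Postman, a curl call along these lines does the same thing. The host and port match the local setup above, the /clouds path comes from the @RepositoryRestResource annotation, and the field values are just sample data:

curl -X POST http://localhost:8080/clouds \
  -H "Content-Type: application/json" \
  -d '{"providerName":"Microsoft Azure","numberOfDatabases":25,"mongoAsService":true}'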

I also tried calling that custom “search” interface to filter the documents where “mongoAsService” is true. That worked.
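
Spring Data REST exposes derived finder methods under a /search path, so that custom query should be reachable with something like this (the query parameter name matches the method argument):

curl "http://localhost:8080/clouds/search/findByMongoAsService?mongoAsService=true"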

So, running my Spring Boot REST API with a local MongoDB worked fine.

Run using Microsoft Azure Cosmos DB

Next up, I pointed this application to Microsoft Azure. One of the many databases in Azure is Cosmos DB. This underrated database offers some pretty amazing performance and scale, and is only available from Microsoft in their cloud. NO PROBLEM. It serves up a handful of standard interfaces, including Cassandra and MongoDB. So I can take advantage of all the crazy-great hosting features, but not lock myself into any of them.

I started by visiting the Microsoft Azure portal. I chose to create a new Cosmos DB instance, and selected which API (SQL, Cassandra, Gremlin, MongoDB) I wanted.

After a few minutes, I had an instance of Cosmos DB. If I had wanted to, I could have created a database and collection from the Azure portal, but I wanted to confirm that Spring Data would do it for me automatically.

I located the “Connection String” properties for my new instance, and grabbed the primary one.

With that in hand, I went back to my application.properties file, commented out my “local” configuration, and added entries for the Azure instance.

#local configuration
#spring.data.mongodb.uri=mongodb://0.0.0.0:27017
#spring.data.mongodb.database=demodb

#port configuration
server.port=${PORT:8080}

#azure cosmos db configuration
spring.data.mongodb.uri=mongodb://seroter-mongo:<password>@seroter-mongo.documents.azure.com:10255/?ssl=true&replicaSet=globaldb
spring.data.mongodb.database=demodb

I could publish this app to Azure, but because it’s also easy to test it locally, I just started up my Spring Boot REST API again, and pinged the database. After POSTing a new record to my endpoint, I checked the Azure portal and sure enough, saw a new database and collection with my “document” in it.

Here, I’m using a super-unique cloud database but don’t need to manage my own software to remain “portable”, thanks to Spring Boot and MongoDB interfaces. Wicked.

Run using Amazon DocumentDB

Amazon DocumentDB is the new kid in town. I wrote up an InfoQ story about it, which frankly inspired me to try all this out.

Like Azure Cosmos DB, this database isn’t running MongoDB software, but offers a MongoDB-compatible interface. It also offers some impressive scale and performance capabilities, and could be a good choice if you’re an AWS customer.

For me, trying this out was a bit of a chore. Why? Mainly because the database service is only accessible from within an AWS private network. So, I had to properly set up a Virtual Private Cloud (VPC) network and get my Spring Boot app deployed there to test out the database. Not rocket science, but something I hadn’t done in a while. Let me lay out the steps here.

First, I created a new VPC. It had a single public subnet, and I added two more private ones. This gave me three total subnets, each in a different availability zone.

Next, I switched to the DocumentDB console in the AWS portal and created a new subnet group. Each DocumentDB cluster is spread across AZs for high availability, and this subnet group contained both of the private subnets in my VPC.

I also created a parameter group. This group turned off the requirement for clients to use TLS. I didn’t want my app to deal with certs, and also wanted to mess with this capability in DocumentDB.

Next, I created my DocumentDB cluster. I chose an instance class to match my compute and memory needs. Then I chose a single instance cluster; I could have chosen up to 16 instances of primaries and replicas.

I also chose my pre-configured VPC and the DocumentDB subnet group I created earlier. Finally, I set my parameter group, and left default values for features like encryption and database backups.

After a few minutes, my cluster and instance were up and running. While this console doesn’t expose the ability to create databases or browse data, it does show me health metrics and cluster configuration details.

Next, I took the connection string for the cluster, and updated my application.properties file.

#local configuration
#spring.data.mongodb.uri=mongodb://0.0.0.0:27017
#spring.data.mongodb.database=demodb

#port configuration
server.port=${PORT:8080}

#azure cosmos db configuration
#spring.data.mongodb.uri=mongodb://seroter-mongo:<password>@seroter-mongo.documents.azure.com:10255/?ssl=true&replicaSet=globaldb
#spring.data.mongodb.database=demodb

#aws documentdb configuration
spring.data.mongodb.uri=mongodb://seroter:<password>@docdb-2019-01-27-00-20-22.cluster-cmywqx08yuio.us-west-2.docdb.amazonaws.com:27017
spring.data.mongodb.database=demodb

Now to deploy the app to AWS. I chose Elastic Beanstalk as the application host. I selected Java as my platform, and uploaded the JAR file associated with my Spring Boot REST API.

I had to set a few more parameters for this app to work correctly. First, I set a SERVER_PORT environment variable to 5000, because that’s what Beanstalk expects. Next, I ensured that my app was added to my VPC, provisioned a public IP address, and chose to host on the public subnet. Finally, I set the security group to the default one for my VPC. All of this should ensure that my app is on the right network with the right access to DocumentDB.

After the app was created in Beanstalk, I queried the endpoint of my REST API. Then I created a new document, and yup, it was added successfully.

So again, I used a novel, interesting cloud-only database, but didn’t have to change a lick of code.

Run using MongoDB in Pivotal Cloud Foundry

The last place to try this app out? A multi-cloud platform like PCF. If you did use something like PCF, the compute layer is consistent regardless of what public/private cloud you use, and connectivity to data services is through a Service Broker. In this case, MongoDB clusters are managed by PCF, and I get my own cluster via a Broker. Then my apps “bind” to that cluster.

First up, provisioning MongoDB. PCF offers MongoDB Enterprise from Mongo themselves. To a developer, this looks like a database-as-a-service, because clusters are provisioned, optimized, backed up, and upgraded via automation. Via the command line or portal, I could provision clusters. I used the portal to get myself a happy little instance.
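
If you prefer the cf CLI to the portal, provisioning and naming that instance is a one-liner. A sketch, with the service offering and plan names as placeholders (run cf marketplace to see what your foundation actually exposes):

# provision a MongoDB cluster instance named seroter-mongo (offering/plan names are illustrative)
cf create-service mongodb-odb replica_set_small seroter-mongo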

After giving the service a name, I was set. As with all the other examples, no code changes were needed. I actually removed any MongoDB-related connection info from my application.properties file, because that spring-cloud-cloudfoundry-connector dependency grabs the credentials from the environment variables set by the service broker.

One thing I *did* create for this environment — which is entirely optional — is a Cloud Foundry manifest file. I could pass these values into a command line instead of creating a declarative file, but I like writing them out. These properties simply tell Cloud Foundry what to do with my app.

---
applications:
- name: boot-seroter-mongo
  memory: 1G
  instances: 1
  path: target/cloudmongodb-0.0.1-SNAPSHOT.jar
  services:
  - seroter-mongo

With that, I jumped to a terminal, navigated to a directory holding that manifest file, and typed cf push. About 25 seconds later, I had a containerized, reachable application that connected to my MongoDB instance.

Fortunately, PCF treats Spring Boot apps specially, so it used the Spring Boot Actuator to pull health metrics and more. Above, you can see that for each instance, I got extra health information for my app and for MongoDB itself.

Once again, I sent some GET requests into my endpoint, saw the expected data, did a POST to create a new document, and saw that succeed.

Wrap Up

Now, obviously there are novel cloud services without “standard” interfaces like the MongoDB API. Some of these services are IoT, mobile, or messaging related (although Azure Event Hubs has a Kafka interface now, and Spring Cloud Stream keeps message broker details out of the code). Other unique cloud services are in emerging areas like AI/ML where standardization doesn’t really exist yet. So some applications will have a hard coupling to a particular cloud, and of course that’s fine. But increasingly, where you run, how you run, and what you connect to don’t have to be things you choose up front. Instead, first you build great software. Then, you choose a cloud. And that’s pretty cool.

Categories: AWS, Cloud, Cloud Foundry, DevOps, Docker, General Architecture, Microservices, Microsoft Azure, Pivotal, Spring

2018 in Review: Reading and Writing Highlights

Well, that was quite a year. Maybe for you too. I ended up speaking at a half-dozen events, delivered 2+ new Pluralsight courses, helped organize a conference, kept writing news and such for InfoQ.com, blogged a bit, was granted the Microsoft MVP award again, and wrote a book. At work, I played a small part in helping Pivotal become a public company, and somehow got promoted to Vice President.

For the 11th year in a row, I thought it’d be fun to list out what I enjoyed writing and reading this year.

Things I Wrote

I kept up a decent pace of writing this year across InfoQ, my personal blog, and the Pivotal blog. Here are a few things I enjoyed writing.

[Book] Modernizing .NET Applications. I’ve told anyone that would listen that I’d never write another book. Clearly, I’m a filthy liar. But I couldn’t pass up the opportunity this year, and it gave me a chance to write down some things bouncing around in my head.

[InfoQ] Recap of AWS re:Invent 2018 Announcements. re:Invent is such an industry-defining event each year. It was a lot of work to read through the announcements and synthesize it all into one piece. I think it turned out ok.

[InfoQ] The InfoQ eMag: Testing Your Distributed (Cloud) Systems. I need to do more of these, but here’s a collection of articles I commissioned and curated into a single downloadable asset.

[Blog] 10 Characteristics of a Successful Internal IT Champion. Being a champion for change can be a challenging experience. Especially in IT departments. Here’s some advice for you.

[Blog] Wait, THAT runs on Pivotal Cloud Foundry? This was a series of five blog posts in five days that looked at unexpected workloads that run on Pivotal’s flagship platform. I look for any excuse to keep up my coding chops!

[Blog] Creating a continuous integration pipeline in Concourse for a test-infused ASP.NET Core app. Whoever gets good code to production fastest, wins! Here, it was fun to play with one of my favorite continuous integration tools (Concourse) and my favorite programming framework (.NET).

[Blog] How to use the Kafka interface of Azure Event Hubs with Spring Cloud Stream. Microsoft’s been sticking standard facades in front of their proprietary services, so I thought it’d be useful to try one out.

[Pivotal Blog] You Deserve a Continuously Integrated Platform. Here’s Why It Matters. Build it, or buy it? It’s an age-old debate. In this post, I explained why certain things (like app platforms) should be bought from those who have expertise at building them.

[Pivotal Blog] You’re Investing In .NET, and so Are We. Pivotal Is Now a Corporate Sponsor of The .NET Foundation. I was happy to see Pivotal take a bigger step to help .NET developers. This post looks at what we’re doing about it, besides just throwing money at the .NET Foundation.

Things I Read

Over the course of 2018, I read 35 books on a variety of topics. Here’s a subset of the ones that stood out to me the most.

Grant, by Ron Chernow. This book is a monster in size, but maybe the best thing I read in 2018. Meticulously researched but entirely readable, the book walks through U.S. Grant’s incredible life. Unbelievable highs, crushing lows.

Thanks for the Feedback: The Science and Art of Receiving Feedback Well, by Douglas Stone and Sheila Heen. Safe to say this was the most impactful business-y book that I read in 2018. While attention was paid to effectively giving feedback, the bulk of the book was about receiving feedback. I’ve read very little about that, and this book completely changed my thinking on the topic.

Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations, by Nicole Forsgren (@nicolefv), Jez Humble (@jezhumble), and Gene Kim (@RealGeneKim). Clearly put, this book explains the findings and methods behind a long-standing DevOps survey. Besides laying out their methods, Forsgren and team thoroughly explain the key capabilities that separate the best from the worst. If you’ve been lacking the ammunition to initiate a software revolution at your company, it’s time to purchase and devour this book.

Bearing the Cross: Martin Luther King, Jr., and the Southern Christian Leadership Conference, by David Garrow. I was historically ignorant about this time period, and needed to correct that. What a powerful book about a complex, flawed, courageous individual who carried an unparalleled burden while under constant scrutiny. Well-written, eye-opening book.

Discrimination and Disparities, by Thomas Sowell. I followed up the previous book with this one. Wanted to hear more on this topic. Sowell is an insightful economist, and explores the role that discrimination does, and doesn’t, play in disparities we see around us. He attempts to escape emotional appeals and find rational answers.

Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems, by Martin Kleppmann (@martinkl). This should be foundational reading for anyone designing or building distributed systems. It’s wonderfully written, comprehensive, and extremely useful. Buy this book.

Streaming Systems: The What, Where, When, and How of Large-Scale Data Processing, by Tyler Akidau (@takidau), Slava Chernyak, and Reuven Lax (@reuvenlax). This book has some complex ideas, but it’s a must-read for those shifting their mindset from static data to data in motion. The authors have an approachable writing style that makes even the densest topics relatable.

The Boys in the Boat: Nine Americans and Their Epic Quest for Gold at the 1936 Berlin Olympics, by Daniel James Brown (@DJamesBrown). I think this was my favorite book of 2018. The author made me care deeply about the characters, and the writing was legitimately thrilling during the races. It was also insightful to read about a Seattle that was barely in the public consciousness at the time.

Meg: A Novel of Deep Terror, by Steve Alten (@meg82159). I admittedly didn’t know about this book series until the movie came out in 2018, but who doesn’t love giant sharks? The book wasn’t the same as the movie; it was better. Fun read.

The Messy Middle: Finding Your Way Through the Hardest and Most Crucial Parts of Any Bold Venture, by Scott Belsky (@scottbelsky). Good book. Really, a series of short, advice-filled chapters. Belsky deftly uses his own experiences as an entrepreneur and investor to help the reader recognize the leader’s role during the highs and lows of a company/project/initiative.

Martin Luther: The Man Who Rediscovered God and Changed the World, by Eric Metaxas (@ericmetaxas). I read a book about a freakin’ prehistoric shark, yet this book about a 16th century monk had some of the most thrilling moments I read this year. My knowledge of Martin Luther was limited to “something something Reformation” and “something something 95 theses”, but this book helped me recognize his massive contribution to the world. We owe many of the freedoms we have today to Luther’s courageous stand.

Bad Blood: Secrets and Lies in a Silicon Valley Startup, by John Carreyrou (@JohnCarreyrou). Wow, what a book. I was peripherally aware of this company (Theranos) and storyline already, but this book is an eye-opening account of what really happened. I’m not even sure there’s a “good intentions” angle here, as it just seems like bad people doing bad things.

Five Stars: The Communication Secrets to Get from Good to Great, by Carmine Gallo (@carminegallo). If you’re wondering why your career is stagnant, there’s a chance it’s because you’re a subpar communicator. Gallo makes the case that moving others to action is the quintessential skill of this century. His book uses tons of anecdotes to drive the point home, and he sprinkles in plenty of actionable advice.

The Manager’s Path: A Guide for Tech Leaders Navigating Growth and Change, by Camille Fournier (@skamille). This is a book aimed at new managers of technical teams. There’s some general advice in here that’s useful for managers of any type of team, but it’s most helpful if you lead (or want to empathize with) a tech team. Fournier’s experience shines through in this easy-to-read book.

How Christianity Changed the World, by Alvin J. Schmidt. Good book that outlined the massive impact of Christianity on Western society. Regardless of what you believe (or don’t!), it’s impressive to recognize that Christianity elevated the value of human life (by bringing an end to infanticide, child abandonment, gladiator battles) and women’s rights, introduced orphanages, old-age homes, and hospitals, erected the first universities, colleges, and public school systems, and ushered in a new era of art, science, and government. Wild stuff.

To Sell is Human: The Surprising Truth About Moving Others, by Daniel H. Pink (@DanielPink). Most of us are selling something, whether we recognize it or not. Pink lays out the reality, and why our ability to move others is key to success. And importantly, he offers some legitimately useful ways to make that happen.

The F*It List: Lessons from a Human Crash Test Dummy, by Eric Byrnes (@byrnes22). I knew Byrnes, the colorful professional baseball player. I did not know Eric Byrnes, the uber-athlete and ultra-marathoner. He’s got a refreshing approach to life, and wrote a compelling autobiography. Thanks to @wearsy for the recommendation.

Essentialism: The Disciplined Pursuit of Less, by Greg McKeown (@GregoryMcKeown). This book stands to have the biggest impact on my life in 2019. It’s about constantly stopping and evaluating whether we’re working on the right things. On purpose. I’m giving myself permission to be more selective in 2019, and will be more mindful about where I can make the biggest impact.

The Interstellar Age: Inside the Forty-Year Voyager Mission, by Jim Bell (@Jim_Bell). Thanks to this book and the movie “The Farthest”, I became obsessed with the Voyager missions. I even did a couple talks about it to tech audiences this year. Bell’s book is detailed, exciting, and inspirational.

The Industrial Revolutionaries: The Making of the Modern World, 1776-1914, by Gavin Weightman. I read about how artificial intelligence will upend the world and economy as we know it, so I thought it’d be smart to look at the last world-changing revolution. This was a fascinating book and I learned a lot. It still blows me away that many comforts of our current life are fairly recent inventions.

Playing to Win: How Strategy Really Works, by A.G. Lafley and Roger L. Martin. The authors call out five questions at the heart of the business strategy: what’s your winning aspiration, where to play, how to win, what capabilities must be in place, and what management systems are required. I found the book engaging, relevant, and motivational.

The Hard Thing About Hard Things: Building a Business When There Are No Easy Answers, by Ben Horowitz (@bhorowitz). This book stressed me out, but in a good way. Horowitz doesn’t tiptoe around the tension of building and sustaining something. The book is absolutely full of useful advice that really landed for me.

High Output Management, by Andy Grove. Best business book that I read all year. It’s also one that Horowitz refers to a lot in his book (above). Every manager should read this. Grove focuses on value and outcomes, and lays out what good management looks like.

The Problem of God: Answering a Skeptic’s Challenges to Christianity, by Mark Clark (@markaclark). Mark’s a mesmerizing speaker that I’ve heard in-person a few times. Here, he makes a well-rounded, logical, compelling case for Christian faith.

When: The Scientific Secrets of Perfect Timing, by Daniel H. Pink (@DanielPink). Sheesh, two Pink books in the same year? The man writes a good tome, what can I say? This is a book about timing. What time of day we should perform certain activities. When and how to “start” things, when to end. And how to synchronize teams. As always, an engaging read that made me rethink my approaches.

Showstopper!: The Breakneck Race to Create Windows NT and the Next Generation at Microsoft, by G. Pascal Zachary. If you didn’t think a story about creating Windows NT could be compelling, THINK AGAIN. Part history lesson and part adventure tale, this book provides a fascinating behind-the-scenes look featuring some characters you’ll recognize.

I probably say this every year, but sincerely, thank you all for being on this journey with me. Each day, I learn so much from the people (virtually and physically) around me. Let’s all have a 2019 where we learn a lot and make those around us better.

Categories: Year in Review

Deploying a platform (Spring Cloud Data Flow) to Azure Kubernetes Service

Platforms should run on Kubernetes, apps should run on PaaS. That simple heuristic seems to resonate with the companies I talk to. When you have access to both environments, it makes sense to figure out what runs where. PaaS is ideal when you have custom code and want an app-aware environment that wires everything together. It’s about velocity, and straightforward Day 2 management. Kubernetes is a great choice when you have closely coordinated, distributed components with multiple exposed network ports and a need to access infrastructure primitives. You know, a platform! Things like databases, message brokers, and hey, integration platforms. In this post, I see what it takes to get a platform up and running on Azure’s new Kubernetes service.

While Kubernetes itself is getting to be a fairly standard component, each public cloud offers it up in a slightly different fashion. Some clouds manage the full control plane, others don’t. Some are on the latest version of Kubernetes, others aren’t. When you want a consistent Kubernetes experience in every infrastructure pool, you typically use an installable product like Pivotal Container Service (PKS).  But I’ll be cloud-specific in this demo, since I wanted to take Azure Kubernetes Service (AKS) for a spin. And we’ll use Spring Cloud Data Flow as our “platform” to install on AKS.

To start with, I went to the Azure Portal and chose to add a new instance of AKS. I was first asked to name my cluster, choose a location, pick a Kubernetes version, and set my initial cluster size.

For my networking configuration, I turned on “HTTP application routing” which gives me a basic (non-production grade) ingress controller. Since my Spring Cloud Data Flow is routable and this is a basic demo, it’ll work fine.

After about eleven minutes, I had a fully operational Kubernetes cluster.

Now, this is a “managed” service from Microsoft, but they definitely show you all the guts of what’s stood up to support it. When I checked out the Azure Resource Group that AKS created, it was … full. So, this is apparently the hooves and snouts of the AKS sausage. It’s there, but I don’t want to know about it.

The Azure Cloud Shell is a hidden gem of the Microsoft cloud. It’s a browser-based shell that’s stuffed with powerful components. Instead of prepping my local machine to talk to AKS, I just used this. From the Azure Portal, I spun up the Shell, loaded my credentials to the AKS cluster, and used the kubectl command to check out my nodes.
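
That whole “load credentials and poke at the cluster” step is just a couple of commands. A sketch, with made-up cluster and resource group names:

# merge the AKS credentials into the kubeconfig that kubectl uses
az aks get-credentials --name scdf-cluster --resource-group scdf-demo

# verify the worker nodes are registered and Ready
kubectl get nodes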

Groovy. Let’s install stuff. Spring Cloud Data Flow (SCDF) makes it easy to build data pipelines. These pipelines are really just standalone apps that get stitched together to form a sequential data processing pipeline. SCDF is a platform itself; it’s made up of a server, a Redis node, a MySQL node, and a messaging broker (RabbitMQ, Apache Kafka, etc.). It runs atop a number of different engines, including Cloud Foundry and Kubernetes. Spring Cloud Data Flow for Kubernetes has simple instructions for installing it via Helm.

I issued a Helm command from the Azure Cloud Shell (as Helm is pre-installed there) and in moments, had SCDF deployed.
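
A Helm 2-era install of the chart looked roughly like the sketch below. Treat the chart repo, release name, and --set flag as assumptions; the SCDF-for-Kubernetes instructions are the source of truth:

# add the incubator repo that hosted the chart at the time
helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com

# install SCDF with a LoadBalancer service in front of the Data Flow server
helm install --name scdf incubator/spring-cloud-data-flow --set server.service.type=LoadBalancer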

When it finished, I saw that I had new Kubernetes pods running, and a load balancer service for routing traffic to the Data Flow server.

SCDF offers up a handful of pre-built “apps” to bake into pipelines, but the real power comes from building your own apps. I showed that off a few weeks ago, so for this demo, I’ll keep it simple. This streaming pipeline simply takes in an HTTP request and drops the payload into a log file. THRILLING!
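
In SCDF’s stream DSL, that pipeline is literally the two pre-built apps piped together. If you define it from the Data Flow shell instead of the dashboard, it looks something like this (the stream name is arbitrary):

dataflow:>stream create --name http-to-log --definition "http | log"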

The power of a platform like SCDF comes out during deployment of a pipeline. See here that I chose Kubernetes as my underlying engine, created a load balancer service (to make my HTTP component routable) via a property setting, and could have optionally chosen different instance counts for each component in the pipeline. Love that.

If you have GUI-fatigue, you can always set these deploy-time properties via free text. I won’t judge you.
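
The free-text version is just a list of key=value deployment properties. A sketch of what I mean; the property names below are from memory of the Kubernetes deployer options of that era, so double-check them against the current reference. The first line makes the http source routable via a LoadBalancer service, and the second scales the log sink to two instances:

deployer.http.kubernetes.createLoadBalancer=true
deployer.log.count=2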

After deploying my streaming pipeline, I saw new pods show up in AKS: one pod for each component of my pipeline.

I ran the kubectl get services command to confirm that SCDF built out a load balancer service for the HTTP app and assigned a public IP.

SCDF reads runtime information from the underlying engine (AKS, in this case) and showed me that my HTTP app was running, and its URL.

I spun up Postman and sent a bunch of JSON payloads to the first component of the SCDF pipeline running on AKS. 

I then ran a kubectl logs [log app’s pod name] command to check the logs of the pipeline component that’s supposed to write logs.

And that’s it. In a very short period of time, I stood up a Kubernetes cluster, deployed a platform on top of it, and tested it out. AKS makes this fairly easy, and the fact that it’s vanilla Kubernetes is nice. When using public cloud container-as-a-service products or installable software that runs everywhere, consider Kubernetes a great choice for running platforms. 

Categories: Cloud, DevOps, Microservices, Microsoft Azure, OSS, Pivotal, Spring