Deploying a platform (Spring Cloud Data Flow) to Azure Kubernetes Service

Platforms should run on Kubernetes, apps should run on PaaS. That simple heuristic seems to resonate with the companies I talk to. When you have access to both environments, it makes sense to figure out what runs where. PaaS is ideal when you have custom code and want an app-aware environment that wires everything together. It's about velocity and straightforward Day 2 management. Kubernetes is a great choice when you have closely coordinated, distributed components with multiple exposed network ports and a need to access infrastructure primitives. You know, a platform! Things like databases, message brokers, and hey, integration platforms. In this post, I look at what it takes to get a platform up and running on Azure's new Kubernetes service.

While Kubernetes itself is getting to be a fairly standard component, each public cloud offers it up in a slightly different fashion. Some clouds manage the full control plane, others don’t. Some are on the latest version of Kubernetes, others aren’t. When you want a consistent Kubernetes experience in every infrastructure pool, you typically use an installable product like Pivotal Container Service (PKS).  But I’ll be cloud-specific in this demo, since I wanted to take Azure Kubernetes Service (AKS) for a spin. And we’ll use Spring Cloud Data Flow as our “platform” to install on AKS.

To start with, I went to the Azure Portal and chose to add a new instance of AKS. I was first asked to name my cluster, choose a location, pick a Kubernetes version, and set my initial cluster size.

For my networking configuration, I turned on “HTTP application routing” which gives me a basic (non-production grade) ingress controller. Since my Spring Cloud Data Flow is routable and this is a basic demo, it’ll work fine.

After about eleven minutes, I had a fully operational Kubernetes cluster.

Now, this is a “managed” service from Microsoft, but they definitely show you all the guts of what’s stood up to support it. When I checked out the Azure Resource Group that AKS created, it was … full. So, this is apparently the hooves and snouts of the AKS sausage. It’s there, but I don’t want to know about it.

The Azure Cloud Shell is a hidden gem of the Microsoft cloud. It’s a browser-based shell that’s stuffed with powerful components. Instead of prepping my local machine to talk to AKS, I just used this. From the Azure Portal, I spun up the Shell, loaded my credentials to the AKS cluster, and used the kubectl command to check out my nodes.
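
If you haven't wired up kubectl to AKS before, it's just a couple of commands. The resource group and cluster names below are placeholders for whatever you picked during setup:

az aks get-credentials --resource-group my-aks-rg --name my-aks-cluster
kubectl get nodes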

Groovy. Let's install stuff. Spring Cloud Data Flow (SCDF) makes it easy to build data pipelines. These pipelines are really just standalone apps that get stitched together to form a sequential data processing pipeline. SCDF is a platform itself; it's made up of a server, a Redis node, a MySQL node, and a messaging broker (RabbitMQ, Apache Kafka, etc). It runs atop a number of different engines, including Cloud Foundry and Kubernetes. Spring Cloud Data Flow for Kubernetes has simple instructions for installing it via Helm.

I issued a Helm command from the Azure Cloud Shell (as Helm is pre-installed there) and in moments, had SCDF deployed.
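
For reference, the install looked something like this. At the time, the SCDF chart lived in the Helm "incubator" repository, and the release name below is just what I'd pick; check the chart's docs for the current repo location and configurable values (broker choice, service type, and so on) before copying this:

helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com
helm install --name scdf incubator/spring-cloud-data-flow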

When it finished, I saw that I had new Kubernetes pods running, and a load balancer service for routing traffic to the Data Flow server.

SCDF offers up a handful of pre-built “apps” to bake into pipelines, but the real power comes from building your own apps. I showed that off a few weeks ago, so for this demo, I'll keep it simple. This streaming pipeline simply takes in an HTTP request and drops the payload into a log file. THRILLING!
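
If you'd rather define that from the SCDF shell instead of the dashboard, it's just two registered starters piped together. A sketch (the stream name is arbitrary):

stream create --name http-to-log --definition "http | log"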

The power of a platform like SCDF comes out during deployment of a pipeline. See here that I chose Kubernetes as my underlying engine, created a load balancer service (to make my HTTP component routable) via a property setting, and could have optionally chosen different instance counts for each component in the pipeline. Love that.

If you have GUI-fatigue, you can always set these deploy-time properties via free text. I won't judge you.
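
As a rough sketch, the free-text equivalent of what I set above would look something like the properties below. The exact names come from the SCDF Kubernetes deployer of this era, so treat them as an approximation and check the reference docs for your version:

deployer.http.kubernetes.createLoadBalancer=true
deployer.log.count=2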

After deploying my streaming pipeline, I saw new pods show up in AKS: one pod for each component of my pipeline.

I ran the kubectl get services command to confirm that SCDF built out a load balancer service for the HTTP app and assigned a public IP.

SCDF read runtime information from the underlying engine (AKS, in this case) and showed me that my HTTP app was running, along with its URL.

I spun up Postman and sent a bunch of JSON payloads to the first component of the SCDF pipeline running on AKS. 
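
If Postman isn't your thing, a quick curl does the same job. The external IP, port, and payload below are placeholders; use whatever kubectl get services reported for the HTTP app:

curl -X POST http://<external-ip>:<port> -H "Content-Type: application/json" -d '{"message": "hello SCDF"}'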

I then ran a kubectl logs [log app’s pod name] command to check the logs of the pipeline component that’s supposed to write logs.

And that's it. In a very short period of time, I stood up a Kubernetes cluster, deployed a platform on top of it, and tested it out. AKS makes this fairly easy, and the fact that it's vanilla Kubernetes is nice. Whether you use a public cloud container-as-a-service product or installable software that runs everywhere, Kubernetes is a great choice for running platforms.

Categories: Cloud, DevOps, Microservices, Microsoft Azure, OSS, Pivotal, Spring

My new book on modernizing .NET applications is now available!

I might be the first person to write a technical book because of peer pressure. Let me back up. 

I'm fortunate to be surrounded by smart folks at Pivotal. Many of them write books. We usually buy copies of them to give out at conferences. After one of our conferences in May, my colleague Nima pointed out that folks wanted a book about .NET. He then pushed all the right buttons to motivate me.

So, I signed a contract with O’Reilly Media in June, started writing in July, and released the book yesterday.

Modernizing .NET Applications is a 100-page book that, for now, is free from Pivotal. At some point soon, O'Reilly will put it on Safari (and other channels). So what's in this book, before you part with your hard-earned email address?

Chapter 1 looks at why app modernization actually matters. I define “modernization” and give you a handful of reasons why you should do it. Chapter 2 offers an audit of what .NET software you’re running today, and why you’re motivated to upgrade it. Chapter 3 takes a quick look at the types of software your stakeholders are asking you to create now. Chapter 4 defines “cloud-native” and explains why you should care. I also define some key characteristics of cloud-native software and what “good” looks like. 

Chapter 5 helps you decide between using the .NET Framework or .NET Core for your applications. Then in Chapter 6, I lay out the new anti-patterns for .NET software and what things you have to un-learn. Chapter 7 calls out some of the new components that you’ll want to introduce to your modernized .NET apps. Chapter 8 helps you decide where you should run your .NET apps, with an assessment of all the various public/private software abstractions to choose from. Chapter 9 digs into five specific recipes you should follow to modernize your apps. These include event storming, externalized configuration, remote session stores, token-based security schemes, and apps on pipelines. Finally, Chapter 10 leaves you with some next steps.

I’ve had the pleasure/pain of writing books before, and have held off doing it again since our tech information consumption patterns have changed. But, it seems like there’s still a hunger for long-form content, and I’m passionate about .NET developers. So, I invested in a topic I care about, and hopefully wrote a book in a way that you find enjoyable to read.

Go check it out, and tell me what you think!

Categories: .NET, Cloud, Cloud Foundry, DevOps, Microsoft Azure, Pivotal

Wait, THAT runs on Pivotal Cloud Foundry? Part 5 – .NET Framework apps

Looking for a host suitable for .NET Framework apps? Windows Server virtual machines are almost your only option. The only public cloud PaaS product that offers a higher abstraction than virtual machines is Azure’s App Service. And that’s not really meant to run an entire enterprise portfolio. So … what to do? Don’t say “switch to .NET Core and run on all the Linux-based platforms” because that’s cheating. What can you do today? The best option you don’t know about is Pivotal Cloud Foundry (PCF). In this post, I’ll show you how to easily deploy and operate .NET apps in PCF on any infrastructure.

This is part five of a five part series. Hopefully you’ve enjoyed my exploration of workloads you might not expect to see on a cloud-native platform like PCF.

About PAS for Windows

Quickly, I want to tell you about Pivotal Application Service (PAS) for Windows. Recall that PCF is really made up of two software abstractions atop a sophisticated infrastructure management platform (BOSH): Pivotal Application Service (for apps) and Pivotal Container Service (for raw containers). PAS for Windows extends PAS with managed Windows Server instances. As an operator, you can deploy, patch, upgrade, and operate Windows Server instances entirely through automation. For developers, you get an on-demand, scalable host that supports remote debugging and much more. I feel pretty safe saying that this is better than whatever you're doing today for Windows workloads!

PAS for Windows extends PAS and uses all the same machinery

Deploying a WCF application to PCF

Let’s do this. First, I confirmed that I had a Windows “stack” available to me. In my PCF environment, I ran a cf stacks command.

Yup, all good. I created a new Windows Communication Foundation (WCF) application targeting .NET Framework 4.0. Not all of your apps are on the latest framework, so why should my sample be? Note that you can run all types of classic .NET projects in PCF: ASP.NET Web Forms, MVC, Web API, WCF, console, and more.

My WCF service doesn’t need to change at all to run in PCF. To publish to PCF, I just need to provide a set of command line parameters, or, write a manifest with those parameters. My manifest looked like this:

---
applications:
- name: blog-demo-wcf
  memory: 256M
  instances: 1
  buildpack: hwc_buildpack
  stack: windows2016
  env:
    betaflag: on

There’s a buildpack just for .NET apps on Windows and all I have to do is push the code itself. About fifteen seconds after typing cf push, my WCF service was packaged up and loaded into a Windows Server container.

Browsing the endpoint returned that familiar page of WCF service metadata. 

Operating your .NET app on PCF

It’s one thing to deploy an app, it’s another thing to manage it. PCF makes that pretty easy. After deploying a .NET app, I see some helpful metadata. It shows me the stack, buildpack, and any environment variables visible to the app.

How long does it take you to get a new instance of your .NET app into production today? Weeks? Months? I just scaled up from one to three Windows container instances in less than ten seconds. I just love that.
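
That scale-up is a single command (or a click in Apps Manager), using the app name from the manifest above:

cf scale blog-demo-wcf -i 3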

Any app written in any language gets access to the same set of PCF functionality. Your .NET Framework apps get built-in log aggregation, metrics and monitoring, autoscaling, and more. All in a multi-tenant environment. And with straightforward access to anything in the marketplace through the Service Broker interface. Want your .NET Framework app to talk to Azure’s Cosmos DB or Google Cloud Spanner? Just use the broker.

Oh, and don’t forget that because PAS for Windows uses legit Windows Server containers, each app instance gets its own copy of the file system, registry, and GAC. You can see this by SSH-ing into the container. Yes, I said you could SSH in. It’s just a cf ssh command.
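
Here's the gist of it, again using my app name from earlier. Once you're in, you're sitting inside a real Windows Server container and can poke around with dir or launch PowerShell:

cf ssh blog-demo-wcf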

That’s a full Windows file system, and I can even spin up Powershell in there. Crazy times.

Categories: .NET, Cloud, Cloud Foundry, DevOps, Pivotal, WCF/WF

Wait, THAT runs on Pivotal Cloud Foundry? Part 4 – Data pipelines

Streaming is all the rage! No, not binge-watching Arrested Development on Netflix. Rather, I mean data stream processing: ingesting and handling infinite datasets. Instead of chewing through a nightly or weekly batch of records, you’re doing near real-time processing. Done correctly, this helps you improve data quality and make faster decisions. But how do you arrange the sequence of steps to process that data? Data pipelines! In this post, I’ll show you that this is yet another unexpected workload that runs pretty darn well on Pivotal Cloud Foundry (PCF).

So far in this series, we’ve looked at other workloads ranging from Docker images to batch jobs.

Let’s build a pipeline that processes a stream of shipment data that flows out of a relational database, gets enriched with additional info, and finally gets written to a log.

Spinning up Spring Cloud Data Flow on PCF

You could do streaming a few ways in PCF. You could manually deploy a PCF-managed instance of RabbitMQ, Solace PubSub+, or Apache Kafka. Or connect to a cloud-based broker like Azure Service Bus or Google Pub/Sub through a Service Broker. Any of those options give you a messaging backbone, but a data pipeline often involves a sequence of orchestrated steps. One turnkey solution that combines lightweight messaging with smart orchestration is Spring Cloud Data Flow (SCDF).

While it’s not that challenging to install SCDF yourself, PCF bundles it all up into a single package. All it takes is deploying the “Data Flow Server” from the PCF marketplace.

After BOSH built and deployed the Spring Cloud Data Flow server and dependent services (database, Redis cache, RabbitMQ instance), I also provisioned an instance of PostgreSQL from Crunchy Data. This is the source for my data stream.

That was easy. From this screen in PCF Apps Manager, I could click through and log into the SCDF dashboard. From here, I loaded all the Spring Cloud Stream App Starters. These are “just” Spring Boot apps, but we can use these to build data streams. We can build our own apps too, but it's great to pre-load these starters. Note that everything I'm doing with this dashboard you can also do with a CLI.
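
Registering the starters is a one-time bulk import. From the SCDF shell it looks like the command below; the URI is a placeholder for the Rabbit/Maven starter list that the Spring team publishes for your SCDF version:

app import --uri <bulk-import-uri-for-the-rabbitmq-maven-app-starters>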

With that, I had everything I needed to build out my data pipeline. 

Building and deploying a data pipeline

Before building my pipeline, I wanted to prep my PostgreSQL database. To do this, I built a simple ASP.NET Core app that created a data table and added records. I deployed this to PCF, bound it to the Crunchy Data instance, and now had a way to instantiate my relational database and add rows.

I wanted to enrich data as part of my data pipeline. When a “shipment” record comes out of PostgreSQL, it has an identifier for which warehouse it came from. I wanted to use that ID to look up the US state associated with the warehouse. I could try and use an out-of-the-box App Starter to do it, or just build my own. I chose the latter. What’s wicked is these are just Spring Cloud Stream apps. I created a new app from start.spring.io, created a POJO that represents a “warehouse shipment”, added an annotation and a method, and assembled the jar file. No other configurations needed! 

// Imports shown for completeness (Spring Boot plus the Spring Cloud Stream annotation model)
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Processor;
import org.springframework.messaging.handler.annotation.SendTo;

@EnableBinding(Processor.class)
@SpringBootApplication
public class DemoPipelineEnricherApplication {

  public static void main(String[] args) {
    SpringApplication.run(DemoPipelineEnricherApplication.class, args);
  }

  // Take a "shipment" POJO off the input channel, tag it with a warehouse
  // location, and send it along to the output channel.
  @StreamListener(Processor.INPUT)
  @SendTo(Processor.OUTPUT)
  public shipment EnrichShipment(shipment s) {
    switch (s.warehouse_id) {
      case 400:
        s.warehouse_location = "CA";
        break;
      case 401:
        s.warehouse_location = "WA";
        break;
      case 402:
        s.warehouse_location = "TX";
        break;
      case 403:
        s.warehouse_location = "FL";
        break;
    }
    return s;
  }
}

To make this app available to my new data pipeline, I needed to register it with the SCDF server. That means the jar file needed to be visible to the server. I uploaded the jar file to GitHub (better choices include the Maven repo, or another legit artifact repository) and registered it:
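
From the SCDF shell, that registration is a one-liner. The URI below is a stand-in for wherever the jar actually lives:

app register --name demo-enricher --type processor --uri <uri-to-the-enricher-jar>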

It’s pipeline time! I designed a pipeline that started with a JDBC source, sent the individual rows to my “enricher” app, and then routed the results to the application log. For fun, I also tapped that result stream to count how many messages came in for each US state.

The pipeline definition is something you can add to source control and version like any other deployment artifact. My pipeline looks like:

warehouse-stream=jdbc
--spring.datasource.username='[username]'
--spring.datasource.url='jdbc:postgresql://[url]:5432/shipments'
--jdbc.max-rows-per-poll=5 --jdbc.query='SELECT * FROM WarehouseShipments WHERE
is_read=FALSE' --jdbc.update='UPDATE WarehouseShipments SET is_read=TRUE WHERE
is_read=FALSE;' --spring.datasource.password='[password]' |
demo-enricher | log 

What’s cool is that after creating the stream, I had all sorts of deployment options for each app in the pipeline. That means that each app could have its own instance count and resource allocation. Much better than coarsely scaling the whole pipeline when just one component needs to scale! 

After deploying the streams, I saw the underlying Spring Boot apps deployed to my PCF environment. SCDF is pretty sophisticated but still an easy-to-use platform!

I continually added records to my PostgreSQL database, and saw them immediately stream through SCDF on PCF. Each individual message got enriched with additional details before printing out to the log.

In this post, we saw that data pipelines have a natural home in PCF. Spring Cloud Data Flow is an ideal replacement for heavyweight ESB products in certain scenarios, and a replacement for ETL in others. Give it a try on PCF, Kubernetes, or other runtimes.

Categories: Cloud, Cloud Foundry, OSS, Pivotal, Spring

Wait, THAT runs on Pivotal Cloud Foundry? Part 3 – Background, batch, and scheduled jobs

So far in this series of posts, we’ve seen that Pivotal Cloud Foundry (PCF) runs a lot more than just web applications. Not every app has a user-facing front-end component. Some of your systems run in the background or on a schedule and perform a variety of important tasks. In this post, I’ll take a look at how to deploy background workers, on-demand batch tasks, and scheduled jobs.

This is the third in a five-part series of posts.

Deploying and running background workers

Pivotal Cloud Foundry makes it easy to run workers that don’t have a routable address. These background jobs might listen to a database and respond to data changes, or respond to messages in a work queue. Let’s demonstrate the latter. 

I built a .NET Core console app that's responsible for pulling “loan” records from RabbitMQ and processing them. You can build these background jobs in any programming language supported by Cloud Foundry.

What’s nice is that background jobs have access to all the useful PCF capabilities that web apps do. One such capability? Service Brokers! Devs love using Service Brokers to provision and access backing services. My background job needs access to RabbitMQ and I don’t want to hard-code any connection details. No big deal. I first spun up an on-demand RabbitMQ instance via the PCF Service Broker.
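
Provisioning that instance is just a create-service call against the marketplace. The offering and plan names vary by foundation, so the ones below are examples; the instance name matches what my manifest binds to later:

cf create-service p.rabbitmq single-node seroter-rmq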

My .NET Core app uses the Steeltoe Service Connector (and the RabbitMQ .NET Client) to load service broker connection info and talk to my instance.

static void Main(string[] args)
{
    //pull service broker configuration
    var builder = new ConfigurationBuilder()
        .AddEnvironmentVariables()
        .AddCloudFoundry();

    var configuration = builder.Build();

    //get our fully loaded service
    var services = new ServiceCollection();
    services.AddRabbitMQConnection(configuration);
    var provider = services.BuildServiceProvider();
    ConnectionFactory f = provider.GetService<ConnectionFactory>();

    //connect to RMQ
    using (var connection = f.CreateConnection())
    using (var channel = connection.CreateModel())
    {
        channel.QueueDeclare(queue: "loans", durable: true, exclusive: false, autoDelete: false, arguments: null);
        var consumer = new EventingBasicConsumer(channel);

        //fire up when a new message comes in
        consumer.Received += (model, ea) =>
        {
            var body = ea.Body;
            var message = Encoding.UTF8.GetString(body);
            Console.WriteLine("[x] Received loan data: {0}", message);
        };
        channel.BasicConsume(queue: "loans", autoAck: true, consumer: consumer);
        Console.ReadLine();
    }
}

Apps deployed to Cloud Foundry are typically accompanied by a YAML manifest. You can provide the parameters on the CLI, but versioned, source-controlled manifests are a better way to go. For these background jobs, the manifests are simple. Note two key things: the no-route parameter is “true” so that we don’t get a route assigned, and the health-check-type is set to “process” so that the orchestrator monitors process availability and doesn’t try to ping a non-existent web endpoint. Also notice that I bound my app to the previously-created RabbitMQ service instance.

---
applications:
- name: core-demo-background
  memory: 256M
  no-route: true
  health-check-type: process
  services:
  - seroter-rmq

After a quick cf push, my background app was running, and bound to the RabbitMQ instance.

This job quietly sits and waits for work to do. What’s neat is this can also take advantage of PCF’s autoscale capability, and scale by monitoring RabbitMQ queue depth, for example. For now, one instance is plenty. I logged into RabbitMQ and sent in a couple sample “loan” messages.

Sure enough, when I viewed the aggregated application logs for my background job, I saw the content of each read message printed out. 

These sorts of workers are a useful part of most systems, and PCF offers a resilient, manageable place to run them.

Deploying and running on-demand batch tasks

How many useful, random scripts do your system administrators have sitting around? You know, the ones that create users, reset demo environments, or purge FTP shares. Instead of having those scripts buried on administrator desktops, you can run these one-off batch jobs in PCF.

I created another .NET Core console application. This one pretends to sweep expired files from a shared folder. I deployed this application to PCF with a --no-start command since I want to trigger it on demand.

cf push --no-start

Now, to trigger the job, I need to know the start command. This depends on how you deployed it. Since I used the .NET Core buildpack, I started the app one time to discover the start command that PCF generates.

That command showed me where the .NET Core executable lives in the container. I stopped the app again, and switched over to the “Tasks” view in the PCF Apps Manager interface. I can do all these things via the CLI as well, but I'm a sucker for a nice UX. There's a “run task” button that lets me define a one-off task definition.

Here I gave the task a name, pasted the start command I found above, and that was it! When I hit “run”, PCF instantiated a new container instance and shut down the container when the task was complete. And that's what I saw. There was a log entry indicating a successful job run, and the application logs showed the output of the task. Nice!
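
The CLI equivalent is cf run-task. Here's a sketch, with a made-up app name and the start command you discovered earlier standing in for the real one:

cf run-task core-demo-batch "<start command copied from the app>" --name purge-expired-files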

This is a great option for one-off jobs and scripts. Consolidate them in PCF, and get all the availability and auditing you need.

Deploying and running scheduled jobs

Finally, some of those one-off jobs may not be as one-off as you thought! Instead of asking your admin to trigger a task once a day to purge expired files, how about putting the job on a schedule?

PCF also offers a scheduling component to trigger tasks (or API calls!) on a recurring basis. On the same “tasks” tab of the PCF Apps Manager UX, there's a “jobs” section for scheduled tasks. Besides giving the job a name and a command (the same as the task command above), you enter a cron expression for the schedule itself. The expression is in a MIN HOUR DAY-OF-MONTH MONTH DAY-OF-WEEK format. For example, “*/15 * * * *” runs the job every 15 minutes, and “30 10 * * 5” runs the job at 10:30am every Friday. My job below is set to run every minute.
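
If you'd rather script this than click through Apps Manager, the Scheduler's CLI plugin exposes the same thing. A rough sketch, with the same made-up names as before (command names may differ slightly by plugin version):

cf create-job core-demo-batch purge-job "<start command copied from the app>"
cf schedule-job purge-job "*/1 * * * *"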

We’re all building lots of web apps nowadays, but you have lots of need for event-driven or scheduled background work. PCF may surprise you as an entirely suitable platform for those workloads.

Categories: .NET, Cloud, Cloud Foundry, Messaging, Pivotal

Wait, THAT runs on Pivotal Cloud Foundry? Part 2 – TCP-routable services

Platform-as-a-Service products typically run web apps. That is, apps that accept HTTP traffic and listen on ports 80, 8080 or 443. As you survey the landscape today, you’ll find that’s still the case in the most popular public cloud application runtimes. That’s not a bad thing, but sometimes you have workloads with different routing needs. In this post, I’m going to demonstrate TCP Routing in Pivotal Cloud Foundry (PCF), and show Redis running directly in the platform.

As a reminder, this is the 2nd post in a series about “unexpected” workloads running on PCF.

  • Part 1 – Deploying and running Docker images
  • Part 2 – Setting up TCP routable services
  • Part 3 – Running batch and scheduled jobs
  • Part 4 – Configuring data streaming apps
  • Part 5 – Deploying .NET Framework apps to Windows Server

About TCP Routing in PCF

TCP Routing has been part of Cloud Foundry for two years now. Basically, TCP Routing lets your app handle traffic over non-HTTP TCP protocols. This is valuable for custom-built apps or packaged software that communicate with binary payloads or specialized transports.

By default, custom-built apps are set to always listen on port 8080 in Cloud Foundry. The buildpack process (mentioned in part 1 of the series) configures that, although you can change this behavior. Even if your app does listen on port 8080, TCP Routing makes it easy to expose a non-HTTP port to the outside world via network address translation.

Source: https://docs.cloudfoundry.org/adminguide/enabling-tcp-routing.html

Assuming your Cloud Foundry admins configured TCP Routing in your environment(s), you can set up this type of per-app routing entirely via self-service.

Deploying a TCP routable workload

Instead of demonstrating with an app I wrote myself, I thought it'd be more fun to deploy a well-known software product. Enter Redis! Redis is a wildly-popular key-value store, and there are many ways to install it. One of the easiest options is the Docker image. Note that Redis typically exposes access over port 6379. When deploying Docker images to Cloud Foundry, the port defined in the EXPOSE directive is what's actually exposed by the Cloud Foundry app container. I didn't know that until this week!

After logging into my PCF environment, I ran the cf domains command to see what routable domains were available to me.

I’ve got the “standard” domain for my regular web apps (here, apps.pcfone.io), a domain for TCP routing (tcp.apps.pcfone.io) and one for private traffic (apps.internal) that we’ll mess with shortly.

I started by pushing a Redis image to PCF. I'm purposely using the --no-route flag to ensure it doesn't get a default web route in the apps.pcfone.io domain.

cf push redisdocker --docker-image redis -i 1 -m 256M --no-route -u process

After about ten seconds, the container is up and running. Notice however, that it’s currently not routable.

Let's change that. Now, because all apps sit behind the same edge router and TCP routes don't have a path component, I can't have two apps listening on the same TCP port. So, there's a good chance that the default Redis port of 6379 is already in use somewhere. That's cool; we can tell PCF to assign a random port at the edge router that forwards traffic to port 6379 on the app container.

cf map-route redisdocker tcp.apps.pcfone.io --random-port

The result? I get a TCP route assigned on port 10011.

Again, note that the app container is still listening on 6379, because that’s what was set by the Docker image at deploy time. But through network address translation, the external facing port is a different value. Let’s prove that Redis is actually running and addressable.

I spun up the redis-cli and issued a command.

Ok, clearly it’s reachable via the public Internet over a non-HTTP connection. That’s neat. I did a LITTLE more with Redis than that, by also adding and retrieving a key.
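
For reference, the whole exchange looks something like this, using the TCP domain and the random port PCF assigned above (the key and value are just examples):

redis-cli -h tcp.apps.pcfone.io -p 10011
PING
SET greeting "hello from PCF"
GET greeting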

With this pattern, my apps running in PCF (or anywhere) can send requests to PCF-hosted software that handles all kinds of payloads and protocols. But what if you don’t want these workloads to be Internet accessible?

Setting up private TCP routing

The above demo is cool, but you might not like having your cache, MQTT bus, or whatever, exposed to public traffic. This is where the relatively-new container-to-container networking is pretty darn neat.

By default, app instances in Cloud Foundry talk to each other through the shared router. That’s not awful, but for performance reasons, or to access private services, you may want to communicate directly with another app container. With polyglot service discovery now part of PCF, it’s easy to do this via DNS, versus hard-coded container addresses. Let me show you.

First, I removed the publicly-accessible TCP route from my Redis instance.
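
Removing that route is the mirror image of mapping it, using the port assigned earlier:

cf unmap-route redisdocker tcp.apps.pcfone.io --port 10011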

Now, you can no longer reach it. Next up, I wanted to map my Redis instance to the apps.internal domain that’s ONLY accessible within a Cloud Foundry.

cf map-route redisdocker apps.internal --hostname redisdocker

Because we’re not dealing with any extra NAT action, I can directly hit Redis on port 6379. I built a Node.js app that connects to Redis, adds a key, and reads a key. I set the connection details to the internal domain and standard port.

var options = { host: "redisdocker.apps.internal", port: 6379 };
var redis = require("redis"),
    client = redis.createClient(options);

Then I pushed this app to PCF with a --no-start command so that I could set up connectivity between my app and Redis. Apps can't automatically reach other apps on the apps.internal domain unless we give permission. It's easy to do. Via the Cloud Foundry CLI, I can create, delete, and list network policies. A network policy determines which apps can directly talk to each other (without going through the router), over which port and protocol.

cf add-network-policy demo-app --destination-app redisdocker --protocol tcp --port 6379

Notice that in that command, all I said was that one app (demo-app) could talk to another app (redisdocker). I didn’t have to map IP addresses, or anything like that. As app instances scale in and out, there’s no need to change the policies to reflect that. That’s a considerate UX.

After executing the above command, my Node.js app (demo-app) could “see” the redisdocker app instance. And notice that I’ve allowed traffic to the default Redis port, 6379.

With that policy in place, I loaded the Node.js app, and it directly routed requests over port 6379 to my Redis instance.

Unlike most PaaS-like products, PCF offers TCP routing over non-HTTP channels. While you may still (wisely) choose to run certain workloads—clustered services, apps that need multiple IPs exposed per container, or workloads with complex persistence needs—in an environment outside of PCF, it’s useful to know that you can leverage PCF to host and orchestrate a wide variety of publicly or privately routable workloads. Keep an eye out tomorrow for the next post, where we investigate batch jobs.

Categories: Cloud, Cloud Foundry, Node.js, Pivotal

Wait, THAT runs on Pivotal Cloud Foundry? Part 1 – Docker images

When I say “PaaS” what comes to mind? If you’re like most people I talk to, you think of public cloud platforms for modern web apps. So I’ll forgive you if you didn’t realize that things are different now!

The first generation of PaaS products had a few things in common. They were public cloud only. You had to build apps with the runtime constraints in mind. They only ran stateless web apps. Linux was the only runtime. When Cloud Foundry first came out, it checked most of those boxes. But over the years, Pivotal Cloud Foundry (PCF) evolved to do much more.

Many people still think of those first-generation PaaS constraints when considering PCF, and specifically, the Pivotal Application Service (PAS). So, I thought it’d be fun to look at non-traditional workloads. In this brief five-part series, I’m going to show off the following scenarios:

  • Part 1 – Deploying and running Docker images
  • Part 2 – Setting up TCP routable services
  • Part 3 – Running batch and scheduled jobs
  • Part 4 – Configuring data streaming apps
  • Part 5 – Deploying .NET Framework apps to Windows Server

Deploying and running Docker images

Most Cloud Foundry users depend on buildpacks. Developers push source code, and the buildpack pulls in dependencies, frameworks, and runtimes, then builds a tarball that’s deployed as an OCI-compatible container in Cloud Foundry.  One major benefit of the buildpacks model is that the platform brings the root file system to your app. You’re not responsible for finding secure base images or maintaining that “layer” of the stack. But all that said, some folks like using Docker images as their packaging unit whether manually created (don’t do that) or as the output from a continuous integration pipeline.

It doesn't matter whether Cloud Foundry builds the container or you send in a Docker image; it's all treated the same by the platform. At runtime, the orchestrator executes all containers using runC, the same container runtime used by Docker and Kubernetes. Let's see this in action.

You can try this for free on Pivotal Web Services if you don’t have a Cloud Foundry available. I’m using a different environment, but they all behave the same. That’s the point! After you cf login to Cloud Foundry, it’s time to push a container.

How about we start with a Node.js web app. Here’s an Express app built by the folks at Bitnami. We can actually push this to Cloud Foundry with a single command.

cf push nodedocker --docker-image bitnami/node-example:0.0.1 -i 2 -m 128M

In that command, notice a couple things. First, I'm using the --docker-image flag. Since I'm hitting a public image in the public Docker Hub, no credentials or anything are needed. PCF also works with private images, and private registries. Otherwise, it's a standard command that asks for two instances, and 128M of memory for each instance. Within ten seconds, you'll have two routable instances ready to process traffic.

Seriously. That’s amazing. And PCF doesn’t “mess with” the image. Whatever layers are in your Docker image are what run in Cloud Foundry. One thing PCF *does* do is volume mount a directory that contains a unique certificate for the container. This regularly-rotated credential (up to hourly!) is used for things like mTLS. You can see it by SSH-ing into the container and doing printenv or browsing the file system. Yes, you can actually SSH into containers whether built by the platform or via Docker images. No black boxes here.
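
If you want to see that for yourself, something like this works. The environment variable names below are the standard Cloud Foundry instance identity ones, though exact paths can vary a bit by platform version:

cf ssh nodedocker
printenv CF_INSTANCE_CERT CF_INSTANCE_KEY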

Deploying an app’s only half the story. Does PCF treat the running app the same way if it was packaged as a Docker image? Yup. Jumping to the PCF Apps Manager UX, you see our running app.

If you look closely, you see that we indicate the app type, in this case, that it’s from a Docker image.

More importantly, the platform bestows all the operational goodness on this app as any other. For example, all the logs from each app instance are collected and aggregated.

You can add environment variables. Configure auto-scaling. Monitor app and container health metrics. Bind to marketplace services. All the things that make PCF a great runtime for apps make it a great runtime for apps packaged as Docker images.

So try it out yourself. If you're building custom apps, PCF is a great destination regardless of how you want to ship code. Stay tuned tomorrow for a fun network routing demonstration.

Categories: Cloud, Cloud Foundry, DevOps, Docker, General Architecture, Microservices, Node.js

Four Spring Cloud Projects That You Should Be Using

Since I moved up to the Seattle area three years ago, there's been a hole in my life. No longer. The Habit just opened up around the corner from me. I missed that place! While most of the attention (including my own) is on the Charburger, they actually have a pretty deep menu. I thought about that this week when looking at the latest Spring Cloud portfolio. While it's used millions of times per month by Java developers—and usage grew 137% over the past year alone—Spring Cloud is best known for its Config Server and packaging of NetflixOSS tech. You know, things for service discovery, load balancing, circuit breakers, etc. THEY DESERVE THE GLORY. But there are four other interesting packages that you shouldn't overlook.

Spring Cloud Stream

It’s no secret that I’m a big fan of this library. It abstracts away all the complexity of dealing with message brokers like RabbitMQ and Apache Kafka. Spring Cloud Stream has a straightforward programming model that makes it simple to do complex things. Content-based routing? Dead letter queuing? Content-type conversions? Partitioned processing, even on brokers that don’t natively support it? You get all that.

I've seen more and more companies move away from the heavy, centralized ESB and towards a federated messaging model. With Stream, you can use your choice of message broker, but make it a late-binding decision for developers. And if you're doing event processing and want to chain a series of actions together, Spring Cloud Stream works great with Spring Cloud Data Flow.

If you want to dig in, check out my Pluralsight course that has a whole module on Spring Cloud Stream. Or just go build something!

Spring Cloud Contract

Consumer-driven contracts are a fresh take on testing APIs. You know the classic way we share API info: create an API and expose operations and payloads for teams to model and test against. With consumer-driven contracts, the API creator builds and tests their service against a set of consumer expectations. But this has traditionally been a bit difficult to pull off. Enter Spring Cloud Contract.

It’s described in the docs as moving “TDD to the level of software architecture.” It does this by “covering a range of options for writing tests, publishing them as assets, and asserting that a contract is kept by producers and consumers.”

You write contracts in Groovy or YAML, and testing stubs get generated and used by both producers and consumers. This enables fast feedback for both sides. What's cool about these generated testing stubs (and the associated “stub runner”) is that you can mock complex distributed systems with a few code annotations. This includes the messaging layer as well. Powerful stuff, and a big deal for today's software developers.

Read this insightful interview with Spring Cloud Contract’s creator, Marcin Grzejszczak. And check out the docs for all the gory details.

Spring Cloud Gateway

This one’s pretty new, so I’ll excuse you if you haven’t heard of it. BUT ONLY THIS ONCE! Spring Cloud Gateway gives you a powerful API gateway based on Spring components.

Use (built-in or custom) route predicates to determine how requests are handled. Built-in ones include datetime (before, after, or between), cookies, headers, host, method, path, query, and more. Combine them to get whatever behavior you need. You can also modify incoming or outgoing traffic. Add headers, integrate with Hystrix for circuit breaker behavior, check a rate limiter, do redirects, among other things. As you'd expect, this is quite extensible and scalable.

API gateways form an important part of your architecture, and having a bunch of mini-gateways deployed (instead of a single, monolithic one) might give you extra flexibility. Check out the docs and try it out.

Spring Cloud Function

Unless you’ve awoken from a three-year slumber, you’re probably familiar with “serverless” tech. Spring Cloud Function is a pretty wicked (generally available) framework that does a few things you might not expect.

It’s not just about making Spring Boot friendly to function platforms like AWS Lambda or Azure Functions. That’s there (see AWS and Azure adapter guidance). But besides providing a consistent programming model across clouds, it also has stuff you need to run it standalone.

Decorate your code with annotations that result in HTTP or stream-processing endpoints getting attached to your function. Your function can take part in messaging as a source, processor (takes data in, publishes data out), or sink. Or it can be a standalone web app that gets activated upon HTTP request. Neato. Read the docs and see how easy it is to get started.

Spring Cloud is a pretty unique collection of projects, and the Spring team is constantly upgrading and improving them. The whole point is to make it simple to incorporate proven distributed systems patterns in your apps. From what I can tell, it's achieving that mission.


Categories: Cloud, DevOps, General Architecture, Messaging, Spring

Creating a continuous integration pipeline in Concourse for a test-infused ASP.NET Core app

Trying to significantly improve your company’s ability to build and run good software? Forget Docker, public cloud, Kubernetes, service meshes, Cloud Foundry, serverless, and the rest of it. Over the years, I’ve learned the most important place you should start: continuous integration and delivery pipelines. Arguably, “apps on pipeline” is the most important “transformation” metric to track. Not “deploys per day” or “number of microservices.” It’s about how many apps you’ve lit up for repeatable, automated deployment. That’s a legit measure of how serious you are about being responsive and secure.

All this means I needed to get smarter with Concourse, one of my favorite tools for CI (and a little CD). I decided to build an ASP.NET Core app, and continuously integrate and deliver it to a Cloud Foundry environment running in AWS. Let’s go!

First off, I needed an app. I spun up a new ASP.NET Core Web API project with a couple REST endpoints. You can grab the source code here. Most of my code demos don’t include tests because I’m in marketing now, so YOLO, but a trustworthy pipeline needs testable code. If you’re a .NET dev, xUnit is your friend. It’s maintained by my friend Brad, so I basically chose it because of peer pressure. My .csproj file included a few references to bring xUnit into my project:

  • "Microsoft.NET.Test.Sdk" Version="15.7.0"
  • "xunit" Version="2.3.1"
  • "xunit.runner.visualstudio" Version="2.3.1"

Then, I created a class to hold the tests for my web controller. I included one test with a basic assertion, and another “theory” with an input data set. These are comically simple, but prove the point!

public class TestClass
{
    private ValuesController _vc;

    public TestClass()
    {
        _vc = new ValuesController();
    }

    [Fact]
    public void Test1()
    {
        Assert.Equal("pivotal", _vc.Get(1));
    }

    [Theory]
    [InlineData(1)]
    [InlineData(3)]
    [InlineData(20)]
    public void Test2(int value)
    {
        Assert.Equal("public", _vc.GetPublicStatus(value));
    }
}

When I ran dotnet test against the above app, I got an expected error because the third inline data source led to a test failure, since my controller only returns "public" companies when the input value is between 1 and 10. Commenting out the offending inline data source led to a successful test run.

Ok, the app was done. Now, to put it on a pipeline. If you've ever used shameful swear words when wrangling your CI server, maybe it's worth joining all the folks who switched to Concourse. It's a pretty straightforward OSS tool that uses a declarative model and containers for defining and running pipelines, respectively. Getting started is super simple. If you're running Docker on your desktop, that's your easiest route. Just grab this Docker Compose file from the Concourse GitHub repo. I renamed mine to docker-compose.yml, jumped into a Terminal session, switched to the folder holding this YAML file, and ran docker-compose up -d. After a second or two, I had a PostgreSQL server (for state) and a Concourse server. PROVE IT, you say. Hit localhost:8080, and you'll see the Concourse dashboard.

Besides this UX, we interface with Concourse via a CLI tool called fly. I downloaded it from here. I then used fly to add my local environment as a “target” to manage. Instead of plugging in the whole URL every time I interacted with Concourse, I created an alias (“rs”) using fly -t rs login -c http://localhost:8080. If you get a warning to sync your version of fly with your version of Concourse, just enter fly -t rs sync and it gets updated. Neato.

Next up? The pipeline. Pipelines are defined in YAML and are made up of resources and jobs. One of the great things about a declarative model is that I can run my CI tests against any Concourse by just passing in this (source-controlled) pipeline definition. No point-and-click configurations, no prerequisite components to install. Love it. First up, I defined a couple resources. One was my GitHub repo, the second was my target Cloud Foundry environment. In the real world, you'd externalize the Cloud Foundry credentials, and call out to files to build the app, etc. For your benefit, I compressed it all to a single YAML file.

resources:
- name: seroter-source
  type: git
  source:
    uri: https://github.com/rseroter/xunit-tested-dotnetcore
    branch: master
- name: pcf-on-aws
  type: cf
  source:
    api: https://api.run.pivotal.io
    skip_cert_check: false
    username: XXXXX
    password: XXXXX
    organization: seroter-dev
    space: development

Those resources tell Concourse where to get the stuff it needs to run the jobs. The first job used the GitHub resource to grab the source code. Then it used the Microsoft-provided Docker image to run the dotnet test command.

jobs:
- name: aspnetcore-unit-tests
  plan:
    - get: seroter-source
      trigger: true
    - task: run-tests
      privileged: true
      config:
        platform: linux
        inputs:
        - name: seroter-source
        image_resource:
            type: docker-image
            source:
              repository: microsoft/aspnetcore-build
        run:
            path: sh
            args:
            - -exc
            - |
                cd ./seroter-source
                dotnet restore
                dotnet test

Concourse isn’t really a CD tool, but it does a nice basic job of getting code to a defined destination. The second job deploys the code to Cloud Foundry. It also uses the source code resource and only fires if the test job succeeds. This ensures that only fully-tested code makes its way to the hosting environment. If I were being more responsible, I’d take the results of the test job, drop it into an artifact repo, and then use that artifact for deployment. But hey, you get the idea!

jobs:
- name: aspnetcore-unit-tests
  [...]
- name: deploy-to-prod
  plan:
    - get: seroter-source
      trigger: true
      passed: [aspnetcore-unit-tests]
    - put: pcf-on-aws
      params:
        manifest: seroter-source/manifest.yml

That was it! I was ready to deploy the pipeline (pipeline.yml) to Concourse. From the Terminal, I executed fly -t rs set-pipeline -p test-pipeline -c pipeline.yml. Immediately, I saw my pipeline show up in the Concourse Dashboard.

After I unpaused my pipeline, it fired up automatically.

Remember, my job specified a Microsoft-provided container for building the app. Concourse started this job by downloading the Docker image.

After downloading the image, the job kicked off the dotnet test command and confirmed that all my tests passed.

Terrific. Since my next job was set to trigger when the first one succeeded, I immediately saw the “deploy” job spin up.

This job knew how to publish content to Cloud Foundry, and used the provided parameters to deploy the app in a few seconds. Note that there are other resource types if you’re not a Cloud Foundry user. Nobody’s perfect!

The pipeline run was finished, and I confirmed that the app was actually deployed.

Finished? Yes, but I wanted to see a failure in my pipeline! So, I changed my xUnit tests and defined inline data that wouldn’t pass. After committing code to GitHub, my pipeline kicked off automatically. Once again it was tested in the pipeline, and this time, failed. Because it failed, the next step (deployment) didn’t happen. Perfect.

If you’re looking for a CI tool that people actually like using, check out Concourse. Regardless of what you use, focus your energy on getting (all?) apps on pipelines. You don’t do it because you have to ship software every hour, as most apps don’t need it. It’s about shipping whenever you need to, with no drama. Whether you’re adding features or patching vulnerabilities, having pipelines for your apps means you’re actually becoming a customer-centric, software-driven company.


Categories: .NET, ASP.NET Web API, Cloud, Cloud Foundry, DevOps, Docker, General Architecture, Microservices, OSS, Pivotal

Here’s where you’ll find me over the Summer

I mean, you’ll mainly find me in Seattle, where I actually live. But, I’m also speaking on a variety of topics at a few shows over the next few months, and thought I’d point those out.

If the thing that connects your other things together isn’t resilient, you’re in trouble. In this talk, I’ll take a look at some core availability patterns for application/data integration, and then review how to configure Azure’s integration services for high availability. This is always a terrific show with compelling speakers, and the hosts at BizTalk360 always do a bang-up job putting it on.

I know a few things about product ownership, such as how to be a good product owner, and a bad one. Primarily because I’ve been both. In this talk at the “big Agile” show, I’ll look at the role and what a product owner should do. I’m starting to work on this presentation now, and will likely list 10+ things that you should do, and a few things to avoid. This will be my second time speaking at this conference, and I enjoy hearing from so many folks focused on software teams and getting code to production.

I’ve been geeking out on space exploration books and movies lately, and thought it’d be fun to translate the lessons from an iconic NASA mission to the everyday challenges faced by software engineers. Here, I’ll show how some of the key ideas applied by NASA engineers reinforce some of the best practices when designing complex software systems. I didn’t attend the inaugural edition of this conference last year, but I’m jazzed to be part of it this time around.

I hope I’ll see you at some of these! If you’ll be at any one of them (or all three, if you’re my stalker!), do let me know.


Categories: BizTalk, Cloud, DevOps, General Architecture, Messaging, Microservices, Microsoft Azure