Creating an Azure VM Scale Set from a legacy, file-sharing, ASP.NET app


In an ideal world, all your apps have good test coverage, get deployed continuously via pipelines, scale gracefully, and laugh in the face of component failure. That is decidedly not the world we live in. Yes, cloud-native apps are the goal for many, but that’s not what most people have stashed in their data center. Can those apps take some advantage of cloud platforms? For example, what if I had a classic ASP.NET Web Forms app that depends on local storage, but needs better scalability? I could refactor the app—and that might be the right thing to do—or do my best to take advantage of VM-level scaling options in the public cloud. In this demo, I’ll take the aforementioned app, and get it running in Azure VM Scale Sets without any code changes.

I’ve been messing with Azure VM Scale Sets as part of a new Pluralsight course that I’m almost done building. The course is all about creating highly-available architectures on Microsoft Azure. Scale Sets make it easy to build and manage fleets of identical virtual machines. In our case here, I want to take an ASP.NET app and throw it into a Scale Set. This exercise requires four steps:

  1. Create and configure a Windows virtual machine in Microsoft Azure. Install IIS, deploy the app, and make sure everything works.
  2. Turn the virtual machine into an image. Sysprep the machine and create an image in Azure for the Scale Set to use.
  3. Create the Azure VM Scale Set. Run a command, watch it go. Configure the load balancer to route traffic to the fleet.
  4. Create a custom extension to update the configuration on each server in the fleet. IIS gets weird on sysprep, so we need Azure to configure each existing (and new) server.

Ok, let’s do this.

Step 1: Create and configure a Windows virtual machine in Microsoft Azure.

While I could take a virtual machine from on-premises and upload it, let’s start from scratch and build a fresh environment.

First off, I went to the Microsoft Azure portal and initiated the build of a new Windows Server VM.

2018.04.17-azvmss-01

After filling out the required fields and triggering the build, I had a snazzy new VM after a few minutes. I clicked the “connect” button on the portal to get a local RDP file with connection details.

2018.04.17-azvmss-04

Before connecting to the VM, I needed to set up a file share. This ASP.NET app reads files from a file location, then submits the content to an endpoint. If the app uses local storage, then that’s a huge problem for scalability. If that VM disappears, so does the data! So we want to use a durable network file share that a bunch of VMs can share. Fortunately, Azure has such a service.

I went into the Azure Portal and provisioned a new storage account, and then set up the file structure that my app expects.

2018.04.17-azvmss-03
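If you’d rather script that setup than click through the portal, the Azure CLI can do the same thing. A rough sketch, where the share name (“appfiles”) and folder name (“inbound”) are just placeholders for whatever structure your app expects, and you’d add --account-key to the data-plane calls if the CLI isn’t already authorized against the account:

az storage account create -n seroterpluralsight -g pluralsight-practice -l eastus2 --sku Standard_LRS
az storage share create --name appfiles --account-name seroterpluralsight
az storage directory create --share-name appfiles --name inbound --account-name seroterpluralsight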

How do I get my app to use this? My ASP.NET app gets its target file location from a configuration property in its web.config file. No need to chase down source code to use a network file share instead of local storage! We’ll get to that shortly.

With my storage set up, I proceeded to connect to my virtual machine. Before starting the RDP session, I added a link to my local machine so that I could transfer the app’s code to the server.

2018.04.17-azvmss-05

Once connected, I proceeded to install the IIS web server onto the box. I also made sure to add ASP.NET support to the web server, which I forget to do roughly 84% of the time.

2018.04.17-azvmss-07

Now I had a web server ready to go. Next up? Copying files over. Here, I just took content from a local folder and put it into the wwwroot folder on the server.

2018.04.17-azvmss-08

My app was almost ready to go, but I still needed to update the web.config to point to my Azure file storage.

2018.04.17-azvmss-09
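The change itself is just an appSettings entry that points at the UNC path of the Azure file share. The key name below is hypothetical (use whatever key your app already reads), and the share and folder names match the placeholders from earlier:

<appSettings>
  <!-- hypothetical key name; the value is the Azure Files UNC path -->
  <add key="FileDropPath" value="\\seroterpluralsight.file.core.windows.net\appfiles\inbound" />
</appSettings>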

Now, how does my app authenticate with this secure file share? There are a few ways you could do it. I chose to create a local user with access to the file share, and run my web app in an application pool acting as that user. That user was named seroterpluralsight.

2018.04.17-azvmss-10

What are the credentials? The name of the user should be the name of the Azure storage account, and the user’s password is the account key.

2018.04.17-azvmss-11
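Creating that user on the VM is a one-liner from an elevated command prompt; a sketch, with the storage account key standing in as the password:

net user seroterpluralsight "[storage account key]" /add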

Finally, I created a new IIS application pool (pspool) and set the identity to the seroterpluralsight user.

2018.04.17-azvmss-12

With that, I started up the app, and sure enough, was able to browse the network file share without any issue.

2018.04.17-azvmss-13

Step 2: Turn the virtual machine into an image

The whole point of a Scale Set is that I have a scalable set of uniform servers. When the app needs to scale up, Azure just adds another identical server to the pool. So, I need a template!

Note: There are a couple ways to approach this feature. First, you could just build a Scale Set from a generic OS image, and then bootstrap it by running installers to prepare it for work. This means you don’t have to build and maintain a pre-built image. However, it also means it takes longer for the new server to become a useful member of the pool. Bootstrapping or pre-building images are both valid options. 

To create a template from a Windows machine, I needed to sysprep it. Doing this removes lots of user-specific things, including mapped drives. So while I could have created a mapped drive from Azure File Storage and accessed files from the ASP.NET app that way, the drive goes away when I sysprep. I decided to just access the file share via the network path and not deal with a mapped drive.
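For reference, generalizing the machine means running the built-in Sysprep tool on the VM and letting it shut the box down:

%WINDIR%\system32\sysprep\sysprep.exe /generalize /oobe /shutdown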

2018.04.17-azvmss-14

With the machine now generalized and shut down, I returned to the Azure Portal and clicked the “capture” button. This creates an Azure image from the VM and (optionally) destroys the original VM.

2018.04.17-azvmss-15

Step 3: Create the Azure VM Scale Set

I now had everything needed to build the Scale Set. If you’re bootstrapping a server (versus using a pre-built image) you can create a Scale Set from the Azure Portal. Since I am using a pre-built image, I had to dip down to the CLI. To make it more fun, I used the baked-in Azure Cloud Shell instead of the console on my own machine. Before crafting the command to create the Scale Set, I grabbed the ID of the VM template. You can get this by copying the Resource ID from the Azure image page on the Portal.

2018.04.17-azvmss-16
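You can also grab that Resource ID from the Cloud Shell instead of the portal; a sketch, assuming the captured image was named psvmimage:

az image show -g pluralsight-practice -n psvmimage --query id -o tsv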

With that ID, I put together the command for instantiating the Scale Set.


az vmss create -n psvmss -g pluralsight-practice --instance-count 2 --image /subscriptions/[subscription id]/resourceGroups/pluralsight-practice/providers/Microsoft.Compute/images/[image id] --authentication-type password --admin-username legacyuser --admin-password [password] --location eastus2 --upgrade-policy-mode Automatic --load-balancer ps-loadbalancer --backend-port 3389

Let’s unpack that. I specified a name for my Scale Set (“psvmss”) told it which resource group to add this to (“pluralsight-practice”), set a default number of VM instances, pointed it to my pre-built image, set password authentication for the VMs and provided credentials, set the geographic location, told the Scale Set to automatically apply changes, and defined a load balancer (“ps-loadbalancer”). After a few minutes, I had a Scale Set.

2018.04.17-azvmss-19

Neato. Once that Scale Set is in place, I could still RDP into individual boxes, but they’re meant to be managed as a fleet.

Step 4: Create a custom extension to update the configuration on each server in the fleet.

As I mentioned earlier, we’re not QUITE done yet. When you sysprep a Windows box that has an IIS app pool with a custom user, the server freaks out. Specifically, it still shows that user as the pool’s identity, but the password gets corrupted. Seems like a known thing. I could cry about it, or do something to fix it. Fortunately, Azure VMs (and Scale Sets) have the idea of “custom script extensions.” These are scripts that can apply to one or many VMs. In my case, what I needed was a script that reset the credentials of the application pool user.

First, I created a new PowerShell script (“config-app-pool.ps1”) that set the pool’s identity.


Import-Module WebAdministration

Set-ItemProperty IIS:\AppPools\pspool -name processModel -value @{userName="seroterpluralsight"; password="[password]"; identityType=3}

I uploaded that file to my Azure Storage account. This gives me a storage location that the Scale Set can pull the script from later.
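Uploading the script is one more CLI call; the “scripts” container name matches the URL referenced in the settings file below:

az storage blob upload --account-name seroterpluralsight --account-key "[account key]" --container-name scripts --name config-app-pool.ps1 --file config-app-pool.ps1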

Next, I went back to the Cloud Shell to create a couple of local files used by the extension command. First, I created a file called public-settings.json that stored the location of the above PowerShell script.


{
  "fileUris": ["https://seroterpluralsight.blob.core.windows.net/scripts/config-app-pool.ps1"]
}

Then I created a protected-settings.json file. These values get encrypted and are only decrypted on the VM when the script runs.


{
  "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File config-app-pool.ps1",
  "storageAccountName": "seroterpluralsight",
  "storageAccountKey": "[account key]"
}

That file tells the extension what to actually do with the file it downloaded from Azure Storage, and what credentials to use to access Azure Storage.

Ok, now I could set up the extension. Once the extension is in place, it applies to every VM in the Scale Set, now or in the future.


az vmss extension set --resource-group pluralsight-practice --vmss-name psvmss --name customScriptExtension --publisher Microsoft.Compute --settings ./public-settings.json --protected-settings ./protected-settings.json

Note that if you’re doing this against Linux boxes, the “name” and “publisher” have different values.

That’s pretty much it. Once I extended the generated load balancer with rules to route traffic on port 80, I had everything I needed.

2018.04.17-azvmss-20
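The rule itself can also be added from the CLI. This is only a sketch; if the command can’t infer them, pass --frontend-ip-name and --backend-pool-name explicitly, using the names that az network lb show reports for the generated load balancer:

az network lb rule create -g pluralsight-practice --lb-name ps-loadbalancer -n http-rule --protocol Tcp --frontend-port 80 --backend-port 80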

After pinging the load balanced URL, I saw my “legacy” ASP.NET application served up from multiple VMs, all with secure access to the same file share. Terrific!

2018.04.17-azvmss-21

Long term, you’ll be better off refactoring many of your apps to take advantage of what the cloud offers. A straight-up lift-and-shift often resembles transferring debt from one credit card to another. But some apps don’t need many changes at all to get incremental benefits from the cloud, and Scale Sets could be a useful route for you.

2017 in Review: Reading and Writing Highlights


What a fun year. Lots of things to be grateful for. Took on some more responsibility at Pivotal, helped put on a couple conferences, recorded a couple dozen podcast episodes, wrote news/articles/eMags for InfoQ.com, delivered a couple Pluralsight courses (DevOps, and Java related), received my 10th straight Microsoft MVP award, wrote some blog posts, spoke at a bunch of conferences, and added a third kid to the mix.

Each year, I like to recap some of the things I enjoyed writing and reading. Enjoy!

I swear that I’m writing as much as I ever have, but it definitely doesn’t all show up in one place anymore! Here are a few things I churned out that made me happy.

I plowed through thirty-four books this year, mostly on my wonderful Kindle. As usual, I chose a mix of biographies, history, sports, religion, leadership, and mystery/thriller. Here’s a handful of the ones I enjoyed the most.

  • Apollo 8: The Thrilling Story of the First Mission to the Moon, by Jeffrey Kluger (@jeffreykluger). Brilliant storytelling about our race to the moon. There was a perfect mix of character backstory, science, and narrative. Really well done.
  • Boyd: The Fighter Pilot Who Changed the Art of War, by Robert Coram (@RobertBCoram). I had mixed feelings after finishing this. Boyd’s lessons on maneuverability are game-changing. His impact on the world is massive. But this well-written story also highlights a man obsessed; one who grossly neglected his family. Important book for multiple reasons.
  • The Game: Inside the Secret World of Major League Baseball’s Power Brokers, by Jon Pessah (@JonPessah). Gosh, I love baseball books. This one highlights the Bud Selig era as commissioner, the rise of steroid usage, complex labor negotiations, and the burst of new stadiums. Some amazing behind-the-scenes insight here.
  • Not Forgotten: The True Story of My Imprisonment in North Korea, by Kenneth Bae. One might think that an American held in captivity by North Koreans longer than anyone since the Korean War would be angry. Rather, Bae demonstrates sympathy and compassion for people who aren’t exposed to a better way. Good story.
  • Shoe Dog: A Memoir by the Creator of Nike, by Phil Knight (@NikeUnleash). I went and bought new Nikes after this. MISSION ACCOMPLISHED PHIL KNIGHT. This was a fantastic book. Knight’s passion and drive to get Blue Ribbon (later, Nike) off the ground was inspiring. People can create impactful businesses even if they don’t feel an intense calling, but there’s something special about those that do.
  • Dynasty: The Rise and Fall of the House of Caesar, by Tom Holland. This is somewhat of a “part 2” from Holland’s previous work. Long, but engaging, this book tells the tale of the first five emperors. It’s far from a dry history book, as Holland does an admirable job weaving specific details into an overarching story. Books like this always remind me that nothing happens in politics today that didn’t already happen thousands of years ago.
  • Avenue of Spies: A True Story of Terror, Espionage, and One American Family’s Heroic Resistance in Nazi-Occupied Paris, by Alex Kershaw. Would you protect the most vulnerable, even if your life was on the line as a result? Many during WWII faced that choice. This book tells the story of one family’s decision, the impact they had, and the hard price they paid.
  • Stalling for Time: My Life as an FBI Hostage Negotiator, by Gary Noesner. Fascinating book that explains the principles of hostage negotiation, but also lays out the challenge of introducing it to an FBI conditioned to respond with force. Lots of useful nuggets in here for people who manage complex situations and teams.
  • The Things Our Fathers Saw: The Untold Stories of the World War II Generation from Hometown, USA, by Matthew Rozell. Intensely personal stories from those who fought in WWII, with a focus on the battles in the Pacific. Harrowing, tragic, inspiring. Very well written.
  • I Don’t Have Enough Faith to Be an Atheist, by Norman Geisler and Frank Turek. Why are we here? Where did we come from? This book outlines the beautiful intersection of objective truth, science, philosophy, history, and faith. It’s a compelling arrangement of info.
  • The Late Show, by Michael Connelly. I’d read a book on kangaroo mating rituals if Connelly wrote it. Love his stuff. This new cop-thriller introduced a multi-dimensional lead character. Hopefully Connelly builds a new series of books around her.
  • The Toyota Way: 14 Management Principles from the World’s Greatest Manufacturer, by Jeffrey Liker. Ceremonies and “best practices” don’t matter if you have the wrong foundation. Liker’s must-read book lays out, piece by piece, the fundamental principles that help Toyota achieve operational excellence. Everyone in technology should read this and absorb the lessons. It puts weight behind all the DevOps and continuous delivery concepts we debate.
  • One Mission: How Leaders Build a Team of Teams, by Chris Fussell. I read, and enjoyed, Team of Teams last year. Great story on the necessity to build adaptable organizations. The goal of this book is to answer *how* you create an adaptable organization. Fussell uses examples from both military and private industry to explain how to establish trust, create common purpose, establish a shared consciousness, and create spaces for “empowered execution.”
  • Win Bigly: Persuasion in a World Where Facts Don’t Matter, by Scott Adams. What do Obama, Steve Jobs, Madonna, and Trump have in common? Remarkable persuasion skills, according to Adams. In his latest book, Adams deconstructs the 2016 election, and intermixes a few dozen persuasion tips you can use to develop more convincing arguments.
  • Value Stream Mapping: How to Visualize Work and Align Leadership for Organizational Transformation, by Karen Martin and Mike Osterling. How does work get done, and are you working on things that matter? I’d suspect that most folks in IT can’t confidently answer either of those questions. That’s not the way IT orgs were set up. But I’ve noticed a change during the past year+, and there’s a renewed focus on outcomes. This book does a terrific job helping you understand how work flows, techniques for mapping it, where to focus your energy, and how to measure the success of your efforts.
  • The Five Dysfunctions of a Team, by Patrick Lencioni. I’ll admit that I’m sometimes surprised when teams of “all stars” fail to deliver as expected. Lencioni spins a fictitious tale of a leader and her team, and how they work through the five core dysfunctions of any team. Many of you will sadly nod your head while reading this book, but you’ll also walk away with ideas for improving your situation.
  • Setting the Table: The Transforming Power of Hospitality in Business, by Danny Meyer. How does your company make people feel? I loved Meyer’s distinction between providing a service and displaying hospitality in a restaurant setting, and the lesson is applicable to any industry. A focus on hospitality will also impact the type of people you hire. Great book that leaves you hungry and inspired.
  • Extreme Ownership: How U.S. Navy SEALs Lead and Win, by Jocko Willink and Leif Babin. As a manager, are you ready to take responsibility for everything your team does? That’s what leaders do. Willink and Babin explain that leaders take extreme ownership of anything impacting their mission. Good story, with examples, of how this plays out in reality. Their advice isn’t easy to follow, but the impact is undeniable.
  • Strategy: A History, by Sir Lawrence Freedman. This book wasn’t what I expected—I thought it’d be more about specific strategies, not strategy as a whole. But there was a lot to like here. The author looks at how strategy played a part in military, political, and business settings.
  • Radical Candor: Be a Kick-Ass Boss Without Losing Your Humanity, by Kim Scott. I had a couple hundred highlights in this book, so yes, it spoke to me. Scott credibly looks at how to guide a high performing team by fostering strong relationships. The idea of “radical candor” altered my professional behavior and hopefully makes me a better boss and colleague.
  • The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses, by Eric Ries. A modern classic, this book walks entrepreneurs through a process for validated learning and figuring out the right thing to build. Ries sprinkles his advice with real-life stories as proof points, and offers credible direction for those trying to build things that matter.
  • Hooked: How to Build Habit-Forming Products, by Nir Eyal. It’s not about tricking people into using products, but rather, helping people do things they already want to do. Eyal shares some extremely useful guidance for those building (and marketing) products that become indispensable.
  • The Art of Action: How Leaders Close the Gap between Plans, Actions, and Results, by Stephen Bungay. Wide-ranging book that covers a history of strategy, but also focuses on techniques for creating an action-oriented environment that delivers positive results.

Thank you all for spending some time with me in 2017, and I look forward to learning alongside you all in 2018.

Categories: General Architecture, .NET, Cloud, Cloud Foundry, Microsoft Azure, Messaging, OSS, Microservices, Pivotal, Spring

Can’t figure out which SpringOne Platform sessions to attend? I’ll help you out.


Next week is SpringOne Platform (S1P). This annual conference is where developers from around the world learn about Spring, Cloud Foundry, and modern architecture. It’s got a great mix of tech talks, product demos, and transformational case studies. Hear from software engineers and leaders that work at companies like Pivotal, Boeing, Mastercard, Microsoft, Google, FedEx, HCSC, The Home Depot, Comcast, Accenture, and more.

If you’re attending (and you are, RIGHT?!?), how do you pick sessions from the ten tracks over three days? I helped build the program, and thought I’d point out the best talks for each type of audience member.

The “multi-cloud enthusiast”

Your future involves multiple clouds. It’s inevitable. Learn all about the tech and strategies to make it more successful.

The “bleeding-edge developer”

The vast majority of S1P attendees are developers who want to learn about the hottest technologies. Here are some highlights for them.

The “enterprise change agent”

I’m blown away by the number of real case studies at this show. If you’re trying to create a lasting change at your company, these are the talks that prep you for success.

The “ambitious operations pro”

Automation doesn’t spell the end of Ops. But it does change the nature of it. These are talks that forward-thinking operations folks want to attend to learn how to build and manage future tech.

The “modern app architect”

What a fun time to be an architect! We’re expected to deliver software with exceptional availability and scale. That requires a new set of patterns. You’ll learn them in these talks.

The “curious data pro”

How we collect, process, store, and retrieve data is changing. It has to. There’s more data, in more formats, with demands for faster access. These talks get you up to speed on modern data approaches.

The “plugged-in manager”

Any engineering lead, manager, or executive is going to spend considerable time optimizing the team, not building software. But that doesn’t mean you shouldn’t be up-to-date on what your team is working with. These talks will make you sound hip at the water cooler after the conference.

Fortunately all these sessions will be recorded and posted online. But nothing beats the in-person experience. If you haven’t bought a ticket, it’s not too late!

Trying Out Microsoft’s Spring Boot Starters


Do you build Java apps? If so, there’s a good chance you’re using Spring Boot. Web apps, event-driven apps, data processing apps, you name it, Spring Boot has libraries that help. It’s pretty great. I’m biased; I work for the company that maintains Spring, and taught two Pluralsight courses about Spring Cloud. But there’s no denying the momentum.

If a platform matters, it works with Spring Boot. Add Microsoft Azure to the list. Microsoft and Pivotal engineers created some Spring Boot “starters” for key Azure services. Starters make it simple to add jars to your classpath. Then, Spring Boot handles the messy dependency management for you. And with built-in auto-configuration, objects get instantiated and configured automatically. Let’s see this in action. I built a simple Spring Boot app that uses these starters to interact with Azure Storage and Azure DocumentDB (CosmosDB). 

I started at Josh Long’s second favorite place on the Internet: start.spring.io. Here, you can bootstrap a new project with all sorts of interesting dependencies, including Azure! I defined my app’s group and artifact IDs, and then chose three dependencies: web, Azure Storage, and Azure Support. “Azure Storage” brings the jars in for storage, and “Azure Support” activates other Azure services when you reference their jars.

2017.11.02-boot-01

I downloaded the resulting project and opened it in Spring Tool Suite. Then I added one new starter to my Maven POM file:

<dependency>
        <groupId>com.microsoft.azure</groupId>
        <artifactId>azure-documentdb-spring-boot-starter</artifactId>
</dependency>

That’s it. From these starters, Spring Boot pulls in everything our app needs. Next, it was time for code. This basic REST service serves up product recommendations. I wanted to store each request for recommendations as a log file in Azure Storage and a record in DocumentDB. I first modeled a RecommendationItem class that goes into DocumentDB. Notice the topmost annotation and reference to a collection.

package seroter.demo.bootazurewebapp;
import com.microsoft.azure.spring.data.documentdb.core.mapping.Document;

@Document(collection="items")
public class RecommendationItem {

        private String recId;
        private String cartId;
        private String recommendedProduct;
        private String recommendationDate;
        public String getCartId() {
                return cartId;
        }
        public void setCartId(String cartId) {
                this.cartId = cartId;
        }
        public String getRecommendedProduct() {
                return recommendedProduct;
        }
        public void setRecommendedProduct(String recommendedProduct) {
                this.recommendedProduct = recommendedProduct;
        }
        public String getRecommendationDate() {
                return recommendationDate;
        }
        public void setRecommendationDate(String recommendationDate) {
                this.recommendationDate = recommendationDate;
        }
        public String getRecId() {
                return recId;
        }
        public void setRecId(String recId) {
                this.recId = recId;
        }
}

Next I defined an interface that extends DocumentDbRepository.

package seroter.demo.bootazurewebapp;
import org.springframework.stereotype.Repository;
import com.microsoft.azure.spring.data.documentdb.repository.DocumentDbRepository;
import seroter.demo.bootazurewebapp.RecommendationItem;

@Repository
public interface RecommendationRepo extends DocumentDbRepository<RecommendationItem, String> {
}

Finally, I built the REST handler that talks to Azure Storage and Azure DocumentDB. Note a few things. First, I have a pair of autowired variables. These reference beans created by Spring Boot and injected at runtime. In my case, they should be objects that are already authenticated with Azure and ready to go.

@RestController
@SpringBootApplication
public class BootAzureWebAppApplication {
        public static void main(String[] args) {
          SpringApplication.run(BootAzureWebAppApplication.class, args);
        }

        //for blob storage
        @Autowired
        private CloudStorageAccount account;

        //for Cosmos DB
        @Autowired
        private RecommendationRepo repo;

In the method that actually handles the HTTP POST, I first referenced the Azure Storage blob container and added a file there. I got to use the autowired CloudStorageAccount here. Next, I created a RecommendationItem object and loaded it into the autowired DocumentDB repo. Finally, I returned a message to the caller.

@RequestMapping(value="/recommendations", method=RequestMethod.POST)
public String GetRecommendedProduct(@RequestParam("cartId") String cartId) throws URISyntaxException, StorageException, IOException {

        //create log file and upload to an Azure Storage Blob
        CloudBlobClient client = account.createCloudBlobClient();
        CloudBlobContainer container = client.getContainerReference("logs");
        container.createIfNotExists();

        String id = UUID.randomUUID().toString();
        String logId = String.format("log - %s.txt", id);
        CloudBlockBlob blob = container.getBlockBlobReference(logId);
        //create the log file and populate with cart id
        blob.uploadText(cartId);

        //add to DocumentDB collection (doesn't have to exist already)
        RecommendationItem r = new RecommendationItem();
        r.setRecId(id);
        r.setCartId(cartId);
        r.setRecommendedProduct("Y777-TF2001");
        r.setRecommendationDate(new Date().toString());
        repo.save(r);

        return "Happy Fun Ball (Y777-TF2001)";
}

Excellent. Next up, creating the actual Azure services! From the Azure Portal, I created a new Resource Group called “boot-demos.” This holds all the assets related to this effort. I then added an Azure Storage account to hold my blobs.

2017.11.02-boot-02

Next, I grabbed the connection string to my storage account.

2017.11.02-boot-03

I took that value, and added it to the application.properties file in my Spring Boot app.

azure.storage.connection-string=DefaultEndpointsProtocol=https;AccountName=bootapplogs;AccountKey=[KEY VALUE];EndpointSuffix=core.windows.net

Since I’m also using DocumentDB (part of CosmosDB), I needed an instance of that as well.

2017.11.02-boot-04

Can you guess what’s next? Yes, it’s credentials. Specifically, I needed the URI and primary key associated with my Cosmos DB account.

2017.11.02-boot-05

I snagged those values and also put them into my application.properties file.

azure.documentdb.database=recommendations
azure.documentdb.key=[KEY VALUE]
azure.documentdb.uri=https://bootappdocs.documents.azure.com:443/

That’s it. Those credentials get used when activating the Azure beans, and my code gets access to pre-configured objects. After starting up the app, I sent in a POST request.

2017.11.02-boot-06

I got back a “recommended product”, but more importantly, I didn’t get an error! When I looked back at the Azure Portal, I saw two things. First, I saw a new log file in my newly created blob container.

2017.11.02-boot-07

Secondly, I saw a new database and document in my Cosmos DB account.

2017.11.02-boot-08

That was easy. Spring Boot apps, consuming Microsoft Azure services with no fuss.

Note that I let my app automatically create the Blob container and DocumentDB database. In real life you might want to create those ahead of time in order to set various properties and not rely on default values.

Bonus Demo – Running this app in Cloud Foundry

Let’s not stop there. While the above process was simple, it can be simpler. What if I don’t want to go to Azure to pre-provision resources? And what if I don’t want to manage credentials in my application itself? Fret not. That’s where the Service Broker comes in.

Microsoft created an Azure Service Broker for Cloud Foundry that takes care of provisioning resources and attaching those resources to apps. I added that Service Broker to my Pivotal Web Services (hosted Cloud Foundry) account.

2017.11.02-boot-09

When creating a service instance via the Broker, I needed to provide a few parameters in a JSON file. For the Azure Storage account, it’s just the (existing or new) resource group, account name, location, and type.

{
  "resourceGroup": "generated-boot-demo",
  "storageAccountName": "generatedbootapplogs",
  "location": "westus",
  "accountType": "Standard_LRS"
}

For DocumentDB, my JSON file called out the resource group, account name, database name, and location.

{
  "resourceGroup": "generated-boot-demo",
  "docDbAccountName": "generatedbootappdocs",
  "docDbName": "recommendations",
  "location": "westus"
}

Sweet. Now to create the services. It’s just a single command for each service.

cf create-service azure-documentdb standard bootdocdb -c broker-documentdb-config.json

cf create-service azure-storage standard bootstorage -c broker-storage-config.json

To prove it worked, I snuck a peek at the Azure Portal, and saw my two new accounts.

2017.11.02-boot-10

Finally, I removed all the credentials from the application.properties file, packaged my app into a jar file, and added a Cloud Foundry manifest. This manifest tells Cloud Foundry where to find the deployable asset, and which service(s) to attach to. Note that I’m referencing the ones I just created.

---
applications:
- name: boot-azure-web-app
  memory: 1G
  instances: 1
  path: target/boot-azure-web-app-0.0.1-SNAPSHOT.jar
  services:
  - bootdocdb
  - bootstorage

With that, I ran a “cf push” and the app was deployed and started up by Cloud Foundry. I saw that it was successfully bound to each service, and the credentials for each Azure service were added to the environment variables. What’s awesome is that the Azure Spring Boot Starters know how to read these environment variables. No more credentials in my application package. My environment variables for this app in Cloud Foundry are shown here.

2017.11.02-boot-11

I called my service running in Cloud Foundry, and as before, I got a log file in Blob storage and a document in Document DB.

These Spring Boot Starters offer a great way to add Azure services to your apps. They work like any other Spring Boot Starter, and also have handy Cloud Foundry helpers to make deployment of those apps super easy. Keep an eye on Microsoft’s GitHub repo for these starters. More good stuff coming.

Categories: Cloud, Cloud Foundry, Microsoft Azure, Spring

Adding circuit breakers to your .NET applications


Apps fail. Hardware fails. Networks fail. None of this should surprise you. As we build more distributed systems, these failures create unpredictability. Remote calls between components might experience latency, faults, unresponsiveness, or worse. How do you keep a failure in one component from creating a cascading failure across your whole environment?

In his seminal book Release It!, Michael Nygard introduced the “circuit breaker” software pattern. Basically, you wrap calls to downstream services, and watch for failure. If there are too many failures, the circuit “trips” and the downstream service isn’t called any longer. Or at least it isn’t called for a period of time, until the service heals itself.
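Before reaching for a library, it helps to see the gist of the pattern in plain code. Here’s a minimal C# sketch (no separate threads, no rolling metrics window, and the names are my own), just to show the trip-and-recover loop that Hystrix industrializes:

using System;

public class SimpleCircuitBreaker
{
    private readonly int _failureThreshold;
    private readonly TimeSpan _openDuration;
    private int _failureCount;
    private DateTime _openedAt;
    private bool _open;

    public SimpleCircuitBreaker(int failureThreshold, TimeSpan openDuration)
    {
        _failureThreshold = failureThreshold;
        _openDuration = openDuration;
    }

    public T Execute<T>(Func<T> call, Func<T> fallback)
    {
        // circuit is open: skip the downstream call until the cool-off period passes
        if (_open && DateTime.UtcNow - _openedAt < _openDuration)
            return fallback();

        try
        {
            T result = call();
            _failureCount = 0;   // a success resets the count and closes the circuit
            _open = false;
            return result;
        }
        catch (Exception)
        {
            if (++_failureCount >= _failureThreshold)
            {
                _open = true;    // too many failures: trip the circuit
                _openedAt = DateTime.UtcNow;
            }
            return fallback();   // fail fast with the fallback result
        }
    }
}

Hystrix (and Steeltoe’s port of it) layers thread isolation, rolling statistics, and a half-open probe state on top of that basic loop.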

How do we use this pattern in our apps? Enter Hystrix from Netflix OSS. Released in 2012, this library executes each call on a separate thread, watches for failures in Java calls, invokes a fallback operation upon failure, trips a circuit if needed, and periodically checks to see if the downstream service is healthy. And it has a handy dashboard to visualize your circuits. It’s wicked. The Spring team worked with Netflix and created an easy-to-use version for Spring Boot developers. Spring Cloud Hystrix is the result. You can learn all about it in my most recent Pluralsight course.

But why do Java developers get to have all the fun? Pivotal released an open-source library called Steeltoe last year. This library brings microservices patterns to .NET developers. It started out with things like a Git-backed configuration store, and service discovery. The brand new update offers management endpoints and … an implementation of Hystrix for .NET apps. Note that this is for .NET Framework OR .NET Core apps. Everybody gets in on the action.

Let’s see how Steeltoe Hystrix works. I built an ASP.NET Core service, and then called it from a front-end app. I wrapped the calls to the service using Steeltoe Hystrix, which protects my app when failures occur.

Dependency: the recommendation service

This service returns recommended products to buy, based on your past purchasing history. In reality, it returns four products that I’ve hard-coded into a controller. LOWER YOUR EXPECTATIONS OF ME.

This is an ASP.NET Core MVC Web API. The code is in GitHub, but here’s the controller for review:

namespace core_hystrix_recommendation_service.Controllers
{
    [Route("api/[controller]")]
    public class RecommendationsController : Controller
    {
        // GET api/recommendations
        [HttpGet]
        public IEnumerable<Recommendations> Get()
        {
            Recommendations r1 = new Recommendations();
            r1.ProductId = "10023";
            r1.ProductDescription = "Women's Triblend T-Shirt";
            r1.ProductImage = "https://cdn.shopify.com/s/files/1/0692/5669/products/charcoal_pivotal_grande_43987370-6045-4abf-b81c-b444e4c481bc_1024x1024.png?v=1503505687";

            Recommendations r2 = new Recommendations();
            r2.ProductId = "10040";
            r2.ProductDescription = "Men's Bring Back Your Weekend T-Shirt";
            r2.ProductImage = "https://cdn.shopify.com/s/files/1/0692/5669/products/m2_1024x1024.png?v=1503525900";

            Recommendations r3 = new Recommendations();
            r3.ProductId = "10057";
            r3.ProductDescription = "H2Go Force Water Bottle";
            r3.ProductImage = "https://cdn.shopify.com/s/files/1/0692/5669/products/Pivotal-Black-Water-Bottle_1024x1024.png?v=1442486197";

            Recommendations r4 = new Recommendations();
            r4.ProductId = "10059";
            r4.ProductDescription = "Migrating to Cloud Native Application Architectures by Matt Stine";
            r4.ProductImage = "https://cdn.shopify.com/s/files/1/0692/5669/products/migrating_1024x1024.png?v=1458083725";

            return new Recommendations[] { r1, r2, r3, r4 };
        }
    }
}

Note that the dependency service has no knowledge of Hystrix or how the caller invokes it.

Caller: the recommendations UI

The front-end app calls the recommendation service, but it shouldn’t tip over just because the service is unavailable. Rather, bad calls should fail quickly, and gracefully. We could return cached or static results, as an example. Be aware that a circuit breaker is much more than fancy exception handling. One big piece is that each call executes in its own thread. This implementation of the bulkhead pattern prevents runaway resource consumption, among other things. Besides that, circuit breakers are also machinery to watch failures over time, and allow the failing service to recover before allowing more requests.

This ASP.NET Core app uses the mvc template. I’ve added the Steeltoe packages to the project. There are a few Nuget packages to choose from. If you’re running this in Pivotal Cloud Foundry, there’s a set of packages that make it easy to integrate with Hystrix dashboard embedded there. Here, let’s assume we’re running this app somewhere else. That means I need the base package “Steeltoe.CircuitBreaker.Hystrix” and “Steeltoe.CircuitBreaker.Hystrix.MetricsEvents” which gives me a stream of real-time data to analyze.

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNet.WebApi.Client" Version="5.2.3" />
    <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration" Version="2.0.0" />
    <PackageReference Include="Steeltoe.CircuitBreaker.Hystrix" Version="1.1.0" />
    <PackageReference Include="Steeltoe.CircuitBreaker.Hystrix.MetricsEvents" Version="1.1.0" />
  </ItemGroup>
  <ItemGroup>
    <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="2.0.0" />
  </ItemGroup>
</Project>

I built a class (“RecommendationService”) that calls the dependent service. This class inherits from HystrixCommand. There are a few ways to use these commands in calling code. I’m adding it to the ASP.NET Core service container, so my constructor takes in an IHystrixCommandOptions.

//HystrixCommand means no result, HystrixCommand<string> means a string comes back
public class RecommendationService: HystrixCommand<List<Recommendations>>
{
  public RecommendationService(IHystrixCommandOptions options):base(options) {
     //nada
  }

I’ve got inherited methods to use thanks to the base class. I call my dependent service by overriding Run (or RunAsync). If failure happens, the RunFallback (or RunFallbackAsync) is invoked and I just return some static data. Here’s the code:

protected override List<Recommendations> Run()
{
  var client = new HttpClient();
  var response = client.GetAsync("http://localhost:5000/api/recommendations").Result;

  var recommendations = response.Content.ReadAsAsync<List<Recommendations>>().Result;

  return recommendations;
}

protected override List<Recommendations> RunFallback()
{
  Recommendations r1 = new Recommendations();
  r1.ProductId = "10007";
  r1.ProductDescription = "Black Hat";
  r1.ProductImage = "https://cdn.shopify.com/s/files/1/0692/5669/products/hatnew_1024x1024.png?v=1458082282";

  List<Recommendations> recommendations = new List<Recommendations>();
  recommendations.Add(r1);

  return recommendations;
}

My ASP.NET Core controller uses the RecommendationService class to call its dependency. Notice that I’ve got an object of that type coming into my constructor. Then I call the Execute method (that’s part of the base class) to trigger the Hystrix-protected call.

public class HomeController : Controller
{
  public HomeController(RecommendationService rs) {
  this.rs = rs;
  }

  RecommendationService rs;

  public IActionResult Index()
  {
    //call Hystrix-protected service
    List<Recommendations> recommendations = rs.Execute();

    //add results to property bag for view
    ViewData["Recommendations"] = recommendations;

    return View();
  }

Last thing? Tying it all together. In the Startup.cs class, I added two things to the ConfigureServices operation. First, I added a HystrixCommand to the service container. Second, I added the Hystrix metrics stream.

// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
  services.AddMvc();

  //add RecommendationService to the service container, and inject into controller so it gets config values
  services.AddHystrixCommand<RecommendationService>("RecommendationGroup", Configuration);

  //added to get Metrics stream
  services.AddHystrixMetricsStream(Configuration);
}

In the Configure method, I added a couple of pieces to the application pipeline.

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
   if (env.IsDevelopment())
   {
     app.UseDeveloperExceptionPage();
   }
   else
   {
     app.UseExceptionHandler("/Home/Error");
   }

   app.UseStaticFiles();

   //added
   app.UseHystrixRequestContext();

   app.UseMvc(routes =>
   {
     routes.MapRoute(
       name: "default",
       template: "{controller=Home}/{action=Index}/{id?}");
   });

   //added
   app.UseHystrixMetricsStream();
}

That’s it. Notice that I took advantage of ASP.NET Core’s dependency injection, and known extensibility points. Nothing unnatural here.

You can grab the source code for this from my GitHub repo.

Testing the circuit

Let’s test this out. First, I started up the recommendation service. Pinging the endpoint proved that I got back four recommended products.

2017.09.21-steeltoe-01

Great. Next I started up the MVC app that acts as the front-end. Loading the page in the browser showed the four recommendations returned by the service.

2017.09.21-steeltoe-02

That works. No big deal. Now let’s turn off the downstream service. Maybe it’s down for maintenance, or just misbehaving. What happens?

2017.09.21-steeltoe-03

The Hystrix wrapper detected a failure, and invoked the fallback operation. That’s cool. Let’s see what Hystrix is tracking in the metrics stream. Just append /hystrix/hystrix.stream to the URL and you get a data stream that’s fully compatible with Spring Cloud Hystrix.

2017.09.21-steeltoe-04

Here, we see a whole bunch of data that Hystrix is tracking. It’s watching request count, error rate, and lots more. What if you want to change the behavior of Hystrix? Amazingly, the .NET version of Hystrix in Steeltoe has the same broad configuration surface that classic Hystrix does. By adding overrides to the appsettings.json file, you can tweak the behavior of commands, the thread pool, and more. In order to see the circuit actually open, I stretched the evaluation window (from 10 to 20 seconds) and lowered the request volume threshold that must be met before the circuit can trip (from 20 requests to 3). Here’s what that looked like:

{
"hystrix": {
  "command": {
      "default": {
        "circuitBreaker": {
          "requestVolumeThreshold": 3
        },
        "metrics" : {
          "rollingStats": {
            "timeInMilliseconds" : 20000
          }
        }
      }
    }
  }
}

Restarting my service shows the new thresholds in the Hystrix stream. Super easy, and very powerful.

2017.09.21-steeltoe-05

BONUS: Using the Hystrix Dashboard

Look, I like reading gobs of JSON in the browser as much as the next person with too much free time. However, normal people like dense visualizations that help them make decisions quickly. Fortunately, Hystrix comes with an extremely data-rich dashboard that makes it simple to see what’s going on.

This is still a Java component, so I spun up a new project from start.spring.io and added a Hystrix Dashboard dependency to my Boot app. After adding a single annotation to my class, I spun up the project. The Hystrix dashboard asks for a metrics endpoint. Hey, I have one of those! After plugging in my stream URL, I can immediately see tons of info.
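For the curious, that single annotation is @EnableHystrixDashboard, so the whole dashboard app is roughly this (the class name here is arbitrary):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.hystrix.dashboard.EnableHystrixDashboard;

@SpringBootApplication
@EnableHystrixDashboard
public class HystrixDashboardApplication {
    public static void main(String[] args) {
        SpringApplication.run(HystrixDashboardApplication.class, args);
    }
}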

2017.09.21-steeltoe-06.png

As a service owner or operator, this is a goldmine. I see request volumes, circuit status, failure counts, number of hosts, latency, and much more. If you’ve got a couple services, or a couple hundred, visualizations like this are a life saver.

Summary

As someone who started out their career as a .NET developer, I’m tickled to see things like this surface. Steeltoe adds serious juice to your .NET apps and the addition of things like circuit breakers makes it a must-have. Circuit breakers are a proven way to deliver more resilient service environments, so download my sample apps and give this a spin right now!

Categories: .NET, Cloud, Microservices, Pivotal, Spring

Surprisingly simple messaging with Spring Cloud Stream


You’ve got a lot of options when connecting microservices together. You could use service discovery and make direct calls. Or you might use a shared database to transfer work. But message brokers continue to be a popular choice. These range from single purpose engines like Amazon SQS or RabbitMQ, to event stream processors like Azure Event Hubs or Apache Kafka, all the way to sophisticated service buses like Microsoft BizTalk Server. When developers choose any one of those, they need critical knowledge to use them effectively. How can you shrink the time to value and help developers be productive, faster? For Java developers, Spring Cloud Stream offers a valuable abstraction.

Spring Cloud Stream offers an interface for developers that requires no knowledge of the underlying broker. That broker, either Apache Kafka or RabbitMQ, gets configured by Spring Cloud Stream. Communication to and from the broker is also done via the Stream library.

What’s exciting to me is that all brokers are treated the same. Spring Cloud Stream normalizes behavior, even if it’s not native to the broker. For example, want a competing consumer model for your clients, or partitioned processing? Those concepts behave differently in RabbitMQ and Kafka. No problem. Spring Cloud Stream makes it work the same, transparently. Let’s actually try both of those scenarios.

Competing consumers through “consumer groups”

By default, Spring Cloud Stream sets up everything as a publish-subscribe relationship. This makes it easy to share data among many different subscribers. But what if you want multiple instances of one subscriber (for scale out processing)? One solution is consumer groups. These don’t behave the same in both messaging brokers. Spring Cloud Stream don’t care! Let’s build an example app using RabbitMQ.

Before writing code, we need an instance of RabbitMQ running. The most dead-simple option? A Docker container for it. If you’ve got Docker installed, the only thing you need to do is run the following command:

docker run -d --hostname local-rabbit --name demo-rmq -p 15672:15672 -p 5672:5672 rabbitmq:3.6.11-management

2017.09.05-stream-02

After running that, I have a local cache of the image, and a running container with port mapping that makes the container accessible from my host.

How do we get messages into RabbitMQ? Spring Cloud Stream supports a handful of patterns. We could publish on a schedule, or on-demand. Here, let’s build a web app that publishes to the bus when the user issues a POST command to a REST endpoint.

Publisher app

First, build a Spring Boot application that leverages spring-cloud-starter-stream-rabbit (and spring-boot-starter-web). This brings in everything I need to use Spring Cloud Stream, and RabbitMQ as a destination.

2017.09.05-stream-01

Add a new class that acts as our REST controller. A simple @EnableBinding annotation lights this app up as a Spring Cloud Stream project. Here, I’m using the built-in “Source” interface that defines a single communication channel, but you can also build your own.

@EnableBinding(Source.class)
@RestController
public class BriefController {

In this controller class, add an @Autowired variable that references the bean that Spring Cloud Stream adds for the Source interface. We can then use this variable to directly publish to the bound channel! Same code whether talking to RabbitMQ or Kafka. Simple stuff.

@EnableBinding(Source.class)
@RestController
public class BriefController {
  //refer to instance of bean that Stream adds to container
  @Autowired
  Source mysource;

  //take in a message via HTTP, publish to broker
  @RequestMapping(path="/brief", method=RequestMethod.POST)
  public String publishMessage(@RequestBody String payload) {

    System.out.println(payload);

    //send message to channel
    mysource.output().send(MessageBuilder.withPayload(payload).build());

    return "success";
  }

Our publisher app is done, so all that’s left is some basic configuration. This configuration tells Spring Cloud Stream how to connect to the right broker. Note that we don’t have to tell Spring Cloud Stream to use RabbitMQ; it happens automatically by having that dependency in our classpath. No, all we need is connection info to our broker, an explicit reference to a destination (without it, the RabbitMQ exchange would be called “output”), and a command to send JSON.

server.port=8080

#rabbitmq settings for Spring Cloud Stream to use
spring.rabbitmq.host=127.0.0.1
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest

spring.cloud.stream.bindings.output.destination=legalbriefs
spring.cloud.stream.default.contentType=application/json

Consumer app

This part’s almost too easy. Here, build a new Spring Boot application and only choose the spring-cloud-starter-stream-rabbit dependency.

In the default class, decorate it with @EnableBinding and use the built-in Sink interface. Then, all that’s left is to create a method to process any messages found in the broker. To do that, we decorate the operation with @StreamListener, and all the content type handling is done for us. Wicked.

@EnableBinding(Sink.class)
@SpringBootApplication
public class BlogStreamSubscriberDemoApplication {

  public static void main(String[] args) {
    SpringApplication.run(BlogStreamSubscriberDemoApplication.class, args);
  }

  @StreamListener(target=Sink.INPUT)
  public void logfast(String msg) {
    System.out.println(msg);
  }
}

The configuration for this app is straightforward. Like above, we have connection details for RabbitMQ. Also, note that the binding now references “input”, which was the name of the channel in the default “Sink” interface. Finally, observe that I used the SAME destination as the source, to ensure that Spring Cloud Stream wires up my publisher and subscriber successfully. For kicks, I didn’t yet add the consumer group settings.

server.port=0

#rabbitmq settings for Spring Cloud Stream to use
spring.rabbitmq.host=127.0.0.1
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest

spring.cloud.stream.bindings.input.destination=legalbriefs

Test the solution

Let’s see how this works. First, start up three instances of the subscriber app. I generated a jar file, and started up three instances in the shell.

2017.09.05-stream-04

When you start up these apps, Spring Cloud Stream goes to work. Log into the RabbitMQ admin console, and notice that one exchange got generated. This one, named “legalbriefs”, maps to the name we put in our configuration file.

2017.09.05-stream-12

We also have three queues that map to each of the three app instances we started up.

2017.09.05-stream-13

Nice! Finally, start up the publisher, and post a message to the /brief endpoint.

2017.09.05-stream-05
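If you don’t have a REST client handy, a curl call works just as well; the payload fields are only an example (the controller accepts any JSON body, and the attorney field becomes useful later when we partition on it):

curl -X POST http://localhost:8080/brief -H "Content-Type: application/json" -d '{"attorney": "Anna Smith", "caseNumber": "2017-418", "text": "Draft brief for review"}'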

What happens? As expected, each subscriber gets a copy of the message, because by default, everything happens in a pub/sub fashion.

2017.09.05-stream-06

Add consumer group configuration

We don’t want each instance to get a copy. Rather, we want these instances to share the processing load. Only one should get each message. In the subscriber app, we add a single line to our configuration file. This tells Spring Cloud Stream that all the instances form a single consumer group that share work.

#adds consumer group processing
spring.cloud.stream.bindings.input.group=briefProcessingGroup

After regenerating the subscriber jar file and starting up each instance, we see a different setup in RabbitMQ. What you see is a single, named queue, but three “consumers” of the queue.

2017.09.05-stream-07

Send in two different messages, and see that each is only processed by a single subscriber instance. This is a simple way to use a message broker to scale out processing.

2017.09.05-stream-08

Doing stateful processing using partitioning

Partitioning feels like a related, but different scenario than consumer groups. Partitions in Kafka introduce a level of parallel processing by writing data to different partitions. Then, each subscriber pulls from a given partition to do work. Here in Spring Cloud Stream, partitioning is useful for parallel processing, but also for stateful processing. When setting it up, you specify a characteristic that steers messages to a given partition. Then, a single app instance processes all the data in that partition. This can be handy for event processing or any scenario where it’s useful for related messages to get processed by the same instance. Think counters, complex event processing, or time-sensitive calculations.

Unlike with consumer groups, partitioning requires configuration changes to both publishers AND subscribers. On the publisher side, all that’s needed is (a) the number of partitions, and (b) the expression that describes how data is partitioned. That’s it. No code changes.

#adding configuration for partition processing
spring.cloud.stream.bindings.output.producer.partitionKeyExpression=payload.attorney
spring.cloud.stream.bindings.output.producer.partitionCount=3

On the subscriber side, you set the number of partitions, and set the “partitioned” property equal to “true.” What’s also interesting, but logical, is that as each subscriber starts, you need to give it an “index” so that Spring Cloud Stream knows which partition it should read from.

#add partition processing
spring.cloud.stream.bindings.input.consumer.partitioned=true
#spring.cloud.stream.instanceIndex=0
spring.cloud.stream.instanceCount=3

Let’s start everything up again. The publisher starts up the same as before. Now though, each subscriber instance starts up with a “spring.cloud.stream.instanceIndex=X” flag that specifies which index applies.
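Concretely, that just means overriding the property on the command line as each instance starts (the jar name here is assumed):

java -jar subscriber.jar --spring.cloud.stream.instanceIndex=0
java -jar subscriber.jar --spring.cloud.stream.instanceIndex=1
java -jar subscriber.jar --spring.cloud.stream.instanceIndex=2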

2017.09.05-stream-09

In RabbitMQ, the setup is different than before. Now, we have three queues, each with a different “routing key” that corresponds to its partition.

2017.09.05-stream-10

Send in a message, and notice that all messages with the same attorney name go to one instance. Change the case number, and see that all messages still go to the same place. Switch the attorney ID, and observe that a different partition (likely) gets it. If you have more data varieties than you do partitions, you’ll see a partition handle more than one set of data. No problem, just know that happens.

2017.09.05-stream-11

Summary

It shouldn’t have to be hard to deal with message brokers. Of course there are plenty of scenarios where you want to flex the advanced options of a broker, but there are also many cases where you just want a reliable intermediary. In those cases, Spring Cloud Stream makes it super easy to abstract away knowledge of the broker, while still normalizing behavior across the unique engines.

In my latest Pluralsight course, I spent over an hour digging into Spring Cloud Stream, and another ninety minutes working with Spring Cloud Data Flow. That project helps you quickly string together Stream applications. Check it out for a deeper dive!

Categories: BizTalk, Cloud, DevOps, General Architecture, Messaging, Microservices, Pivotal, Spring

My new Pluralsight course about coordinating Java microservices with Spring Cloud is out!

My new Pluralsight course about coordinating Java microservices with Spring Cloud is out!

No microservice is an island. Oh sure, we all talk about isolated components and independent teams. But your apps and services get combined and used in all sorts of ways. It’s inevitable. As you consider how your microservices interact with each other, you’ll uncover new challenges. Fortunately, there’s technology that’s made those challenges easier to solve.

Spring Boot is hot. Like real hot. At this point, it’s basically the de facto way for Java developers to build modern apps.

Spring Cloud builds on Spring Boot and introduces all sorts of distributed systems capabilities to your code. Last year I delivered a Pluralsight course that looked at building services with it. But that was only half of the equation. A big portion of Spring Cloud simplifies interactions between services. So, I set out to do a sequel course on Spring Cloud, and am thrilled that the course came out today.

2017.08.26-pluralsight-01

This course, Java Microservices with Spring Cloud: Coordinating Services, focuses on helping you build a more effective, maintainable microservices architecture. How do you discover services? Can you prevent cascading failures? What are your routing options? How does messaging play a role? Can we rethink data integration and do it better? All those questions get answered in this ~6 hour course. The six modules of the course include:

  1. Introducing Spring Cloud and microservices coordination scenarios. A chat about the rise of microservices, problems that emerge, and what Spring Cloud is all about.
  2. Locating services at runtime using service discovery. The classic “configuration management DB” can’t keep up with the pace of change in a microservices architecture. No, you need a different way to get a live look at what services exist, and where they are. Enter Spring Cloud Eureka. We take a deep look at how to use it to register and discover services.
  3. Protecting systems with circuit breakers. What happens when a service dependency goes offline? Bad things. But you can fail fast and degrade gracefully by using the right approach. We dig into Spring Cloud Hystrix in this module and look at the interesting ways to build a self-healing environment.
  4. Routing your microservices traffic. A centralized load balancer may not be the right fit for a fast-changing environment. Same goes for API gateways. This module looks at Spring Cloud Ribbon for client-side load balancing, and Spring Cloud Zuul as a lightweight microproxy.
  5. Connecting microservices through messaging. Message brokers offer one convenient way to link services in a loosely coupled way. Spring Cloud Stream is a very impressive library that makes messaging easy. Add competing consumers, partition processing, and more, whether you’re using RabbitMQ or Apache Kafka underneath. We do a deep dive in this module.
  6. Building data processing pipelines out of microservices. It’s time to rethink how we process data, isn’t it? This shouldn’t be the realm of experts, and require bulky, irregular updates. Spring Cloud Data Flow is one of the newest parts of Spring Cloud, and promises to mess with your mind. Here, we see how to do real-time or batch processing by connecting individual microservices together. Super fun.

It’s always a pleasure creating content for the Pluralsight audience, and I do hope you enjoy the course!

Categories: Cloud, DevOps, General Architecture, Messaging, Microservices, Spring

Introducing cloud-native integration (and why you should care!)

Introducing cloud-native integration (and why you should care!)

I’ve got three kids now. Trying to get anywhere on time involves heroics. My son is almost ten years old and he’s rarely the problem. The bottleneck is elsewhere. It doesn’t matter how much faster my son gets himself ready, it won’t improve my family’s overall speed at getting out the door. The Theory of Constraints says that you improve the throughput of your process by finding and managing the bottleneck, or constraint. Optimizing areas outside the constraint (e.g. my son getting ready even faster) don’t make much of a difference. Does this relate to software, and application integration specifically? You betcha.

Software delivery goes through a pipeline. Getting from “idea” to “production” requires a series of steps. And then you repeat it over and over for each software update. How fast you get through that process dictates how responsive you can be to customers and business changes. Your development team may operate LIKE A MACHINE and crank out epic amounts of code. But if your dedicated ops team takes forever to deploy it, then it just doesn’t matter how fast your devs are. Inventory builds up, value is lost. My assertion is that the app integration stage of the pipeline is becoming a bottleneck. And without making changes to how you do integration, your cloud-native efforts are going to waste.

What’s “cloud native” all about? At this month’s Integrate conference, I had the pleasure of talking about it. Cloud-native refers to how software is delivered, not where. Cloud-native systems are built for scale, built for continuous change, and built to tolerate failure. Traditional enterprises can become cloud natives, but only if they make serious adjustments to how they deliver software.

Even if you’ve adjusted how you deliver code, I’d suspect that your data, security, and integration practices haven’t caught up. In my talk, I explained six characteristics of a cloud-native integration environment, and mixed in a few demos (highlighted below) to prove my points.

#1 – Cloud-native integration is more composable

By composable, I mean capable of assembling components into something greater. Contrast this to classic integration solutions where all the logic gets embedded into a single artifact. Think ETL workflows where every step of the process is in one deployable piece. Need to change one component? Redeploy the whole thing. Does one step require a ton of CPU processing? Find a monster box to host the process in.

A cloud-native integration gets built by assembling independent components. Upgrade and scale each piece independently. To demonstrate this, I built a series of Microsoft Logic Apps. Two of them take in data. The first takes in a batch file from Microsoft OneDrive, the other takes in real-time HTTP requests. Both drop the results to a queue for later processing.

2017.07.10-integrate-01

The “main” Logic App takes in order entries, enriches the order via a REST service I have running in Azure App Service, calls an Azure Function to assign a fraud score, and finally dumps the results to a queue for others to grab.

2017.07.10-integrate-02

My REST API sitting in Azure App Service is connected to a GitHub repo. This means that I should be able to upgrade that individual service without touching the data pipeline sitting in Logic Apps. So that’s what I did. I sent in a steady stream of requests, modified my API code, pushed the change to GitHub, and within a few seconds, the Logic App was emitting a slightly different payload.

2017.07.10-integrate-03

#2 – Cloud-native integration is more “always on”

One of the best things about early cloud platforms being less than 100% reliable was that it forced us to build for failure. Instead of assuming the infrastructure was magically infallible, we built systems that ASSUMED failure, and architected accordingly.

For integration solutions, have we really done the same? Can we tolerate hardware failure, perform software upgrades, or absorb downstream dependency hiccups without stumbling? A cloud-native integration solution can handle a steady load of traffic while staying online under all circumstances.

#3 – Cloud-native integration is built for scale

Elasticity is a key attribute of cloud. Don’t build out infrastructure for peak usage; build for easy scale when demand dictates. I haven’t seen too many ESB or ETL solutions that scale transparently, on demand, with no special considerations. No, in most cases, scaling is a carefully designed part of an integration platform’s lifecycle. It shouldn’t be.

If you want cloud-native integration, you’ll look to solutions that support rapid scale (in, or out), and let you scale individual pieces. Is event ingestion suddenly overwhelmed by unexpected volume? Scale that, and that alone. You’ll also want to avoid too much shared capacity, as that creates unexpected coupling and makes scaling the environment more difficult.

#4 – Cloud-native integration is more self-service

The future is clear: there will be more “citizen integrators” who don’t need specialized training to connect stuff. IFTTT is popular, as are a whole new set of iPaaS products that make it simple to connect apps. Sure, they aren’t crazy sophisticated integrations; there will always be a need for specialists there. But integration matters more than ever, and we need to democratize the ability to connect our stuff.

One example I gave here was Pivotal Cloud Cache and Pivotal GemFire. Pivotal GemFire is an industry-leading in-memory data grid. Awesome tech, but not trivial to properly set up and use. So, Pivotal created an opinionated slice of GemFire with a subset of features, but an easier on-ramp. Pivotal Cloud Cache supports specific use cases, and an easy self-service provisioning experience. My challenge to the Integrate conference audience? Why couldn’t we create a simple facade for something powerful, but intimidating, like Microsoft BizTalk Server? What if you wanted a self-service way to let devs create simple integrations? I decided to use the brand new Management REST API from BizTalk Server 2016 Feature Pack 1 to build one.

I used the incomparable Spring Boot to build a Java app that consumed those REST APIs. This app makes it simple to create a “pipe” that uses BizTalk’s durable bus to link endpoints.

2017.07.10-integrate-05

I built a bunch of Java classes to represent BizTalk objects, and then created the required API payloads.

2017.07.10-integrate-04
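
To give a flavor of what those calls look like, here’s a stripped-down sketch using Spring’s RestTemplate. The base URL, endpoint path, and payload fields are placeholders for illustration only, not the documented BizTalk Management API contract:

import java.util.HashMap;
import java.util.Map;

import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

// Illustrative only: posts a JSON payload to a (placeholder) BizTalk Management
// REST endpoint to create a send port as part of a new "pipe."
public class BizTalkPipeClient {

    private final RestTemplate restTemplate = new RestTemplate();
    private final String baseUrl = "http://biztalkhost/BizTalkManagementService"; // placeholder

    public ResponseEntity<String> createSendPort(String portName, String fileDestination) {
        Map<String, Object> payload = new HashMap<>(); // field names are illustrative
        payload.put("name", portName);
        payload.put("transportType", "FILE");
        payload.put("address", fileDestination);

        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.APPLICATION_JSON);

        HttpEntity<Map<String, Object>> request = new HttpEntity<>(payload, headers);
        return restTemplate.postForEntity(baseUrl + "/SendPorts", request, String.class);
    }
}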

The result? Devs can create a new pipe that takes in data via HTTP and drops the result to two file locations.

2017.07.10-integrate-06

When I click the button above, I use those REST APIs to create a new in-process HTTP receive location, two send ports, and the appropriate subscriptions.

2017.07.10-integrate-07

Fun stuff. This seems like one way you could unlock new value in your ESB, while giving it a more cloud-native UX.

#5 – Cloud-native integration supports more endpoints

There’s no turning back. Your hippest integration offered to enterprise devs CANNOT be SharePoint. Nope. Your teams want to creatively connect to Slack, PagerDuty, Salesforce, Workday, Jira, and yes, enterprisey things like SQL Server and IBM DB2.

These endpoints may punish your integration platform with a constant data stream, or process data irregularly, in bulk. Doing newish patterns like Event Sourcing? Your apps will talk to an integration platform that offers a distributed commit log. Are you ready? Be ready for new endpoints, with new data streams, consumed via new patterns.

#6 – Cloud-native integration demands complete automation

Are you lovingly creating hand-crafted production servers? Stop that. And devs should have complete replicas of production environments, on their desktop. That means packaging and automating the integration bus too. Cloud-natives love automation!

Testing and deploying integration apps must be automated. Without automated tests, you’ll never achieve continuous delivery of your whole system. Additionally, if you have to log into one of your integration servers, you’re doing it wrong. All management (e.g. monitoring, deployments, upgrades) should be done via remote tools and scripts. Think fleets of servers, not long-lived named instances.

To demonstrate this concept, I discussed automating the lifecycle of your integration dependency. Specifically, through the use of a service broker. Initially part of Cloud Foundry, the service broker API has caught on elsewhere. A broad set of companies are now rallying around a single API for advertising services, provisioning, de-provisioning, and more. Microsoft built a Cloud Foundry service broker that handles lifecycle and credential sharing for services like Azure SQL Database, Azure Service Bus, Azure CosmosDB, and more. I installed this broker into my Pivotal Web Services account, and it advertised the available services.

2017.07.10-integrate-08

Simply by typing cf create-service azure-servicebus standard integratesb -c service-bus-config.json, I kicked off a fast, automated process to generate an Azure Resource Group and create a Service Bus namespace.

2017.07.10-integrate-09

Then, my app automatically gets access to environment variables that hold the credentials. No more embedding creds in code or config, no need to go to the Azure Portal. This makes integration easy, developer-friendly, and repeatable.

2017.07.10-integrate-10
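
To make that concrete, here’s a rough sketch of pulling those credentials out of Cloud Foundry’s VCAP_SERVICES environment variable by hand (in practice, libraries like Spring Cloud Connectors do this for you). The service label and credential field names below are assumptions that depend on the broker:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Rough sketch: read the broker-provided Service Bus credentials from the
// VCAP_SERVICES JSON. The "azure-servicebus" label and field names are assumed.
public class ServiceBusCredentials {

    public static void main(String[] args) throws Exception {
        JsonNode root = new ObjectMapper().readTree(System.getenv("VCAP_SERVICES"));

        // First bound instance of the (assumed) azure-servicebus service
        JsonNode credentials = root.path("azure-servicebus").get(0).path("credentials");

        String namespace = credentials.path("namespace_name").asText();          // assumed field
        String sharedKey = credentials.path("shared_access_key_value").asText(); // assumed field

        System.out.println("Connecting to Service Bus namespace: " + namespace);
    }
}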

Summary

It’s such an exciting time to be a software developer. We’re solving new problems in new ways, and making life better for so many. The last thing we want is to be held back by a bottleneck in our process. Don’t let integration slow down your ambitions. The technology is there to help you build integration platforms that are more scalable, resilient, and friendly-to-change. Go for it!

Categories: BizTalk, Cloud, Cloud Foundry, DevOps, General Architecture, Messaging, Microservices, Microsoft Azure, Node.js, Pivotal, Spring, Windows Azure Service Bus

Using speaking opportunities as learning opportunities

Using speaking opportunities as learning opportunities

Over this summer, I’ll be speaking at a handful of events. I sign myself up for these opportunities—in addition to teaching courses for Pluralsight—so that I commit time to learning new things. Nothing like a deadline to provide motivation!

Do you find yourself complaining that you have a stale skill set, or your “brand” is unknown outside your company? You can fix that. Sign up for a local user group presentation. Create a short “course” on a new technology and deliver it to colleagues at lunch. Start a blog and share your musings and tech exploration. Pitch a talk to a few big conferences. Whatever you do, don’t wait for others to carve out time for you to uplevel your skills! For me, I’m using this summer to refresh a few of my own skill areas.

In June, I’m once again speaking in London at Integrate. Application integration is arguably the most important/interesting part of Azure right now. Given Microsoft’s resurgence in this topic area, the conference matters more than ever. My particular session focuses on “cloud-native integration.” What is it all about? How do you do it? What are examples of it in action? I’ve spent a fair amount of time preparing for this, so hopefully it’s a fun talk. The conference is nearly sold out, but I know there are a handful of tickets left. It’s one of my favorite events every year.

Coming up in July, I’m signed up to speak at PerfGuild. It’s a first-time, online-only conference 100% focused on performance testing. My talk is all about distributed tracing and using it to uncover (and resolve) latency issues. The talk builds on a topic I covered in my Pluralsight course on Spring Cloud, with some extra coverage for .NET and other languages. As of this moment, you can add yourself to the conference waitlist.

Finally, this August I’ll be hitting balmy Orlando, FL to speak at the Agile Alliance conference. This year’s “big Agile” conference has a track centered on foundational concepts. It introduces attendees to concepts like agile project delivery, product ownership, continuous delivery, and more. My talk, DevOps Explained, builds on things I’ve covered in recent Pluralsight courses, as well as new research.

Speaking at conferences isn’t something you do to get wealthy. In fact, it’s somewhat expensive. But in exchange for incurring that cost, I get to allocate time for learning interesting things. I then take those things, and share them with others. The result? I feel like I’m investing in myself, and I get to hang out at conferences with smart people.

If you’re just starting to get out there, use a blog or user groups to get your voice heard. Get to know people on the speaking circuit, and they can often help you get into the big shows! If we connect at any of the shows above, I’m happy to help you however I can.

Categories: BizTalk, General Architecture, .NET, Cloud, Cloud Foundry, DevOps, Microsoft Azure, Messaging, Microservices, Pivotal, Spring

How should you model your event-driven processes?

How should you model your event-driven processes?

During most workdays, I exist in a state of continuous partial attention. I bounce between (planned and unplanned) activities, and accept that I’m often interrupt-driven. While that’s not an ideal state for humans, it’s a great state for our technology systems. Event-driven applications act based on all sorts of triggers: time itself, user-driven actions, system state changes, and much more. Often, these batch-or-realtime, event-driven activities are asynchronous and coordinated in some way. What options do you have for modeling event-driven processes, and what trade-offs do you make with each option?

Option #1 – Single, Deterministic Process

In this scenario, the event handler is monolithic in nature, and any embedded components are purpose-built for the process at hand. Arguably, it’s just a visually modeled code class. While initiated via events, the transition between internal components is pre-determined. The process is typically deployed and updated as a single unit.

What would you use to build it?

You’ve seen (and built?) these before. A traditional ETL job fits the bill. Made up of components for each stage, it’s a single process executed as a linear flow. I’d also put some ESB workflows—like BizTalk orchestrations—in this category. Specifically, those with send/receive ports bound to the specific orchestration, embedded code, or external components built JUST to support that orchestration’s flow.

2017.05.02-event-01

In any of these cases, it’s hard (or impossible) to change part of the event handler without re-deploying the entire process.

What are the benefits?

Like with most monolithic things, there’s value in (perceived) simplicity and clarity. A few other benefits:

  • Clearer sense of what’s going on. When there’s a single artifact that explains how you handle a given event, it’s fairly simple to grok the flow. What happens when sensor data comes in? Here you go. It’s predictable.
  • Easier to zero-in on production issues. Did a step in the bulk data sync job fail? Look at the flow and see what bombed out. Clean up any side effects, and re-run. This doesn’t necessarily mean things are easy to fix—frankly, it could be harder—but you do know where things went wrong.
  • Changing and testing everything at once. If you’re worried about the side effects of changing a piece of a flow, that risk may lessen when you’re forced to test the whole thing when one tiny thing changes. One asset to version, one asset to track changes for.
  • Accommodates companies with teams of specialists. From what I can tell, many large companies still have centers-of-excellence for integration pros. That means most ETL and ESB workflows come out of here. If you like that org structure, then you’ll prefer more integrated event handlers.

What are the risks?

These processes are complicated, not complex. Other risks:

  • Cumbersome to change individual parts of the process. Nowadays, our industry prioritizes quick feedback loops and rapid adjustments to software. That’s difficult to do with monolithic event-driven processes. Is one piece broken? Better prioritize, upgrade, compile, test, and deploy the whole thing!
  • Non-trivial to extend the process to include more steps. When we think of event-driven activities, we often think of fairly dynamic behavior. But when all the event responses are tightly coordinated, it’s tough to add/adjust/remove steps.
  • Typically centralizes work within a single team. While your org may like siloed teams of experts, that mode of working doesn’t lend itself to agility or customer focus. If you build a monolithic event-driven process, expect delivery delays as the work queues up behind constrained developers.
  • Process scales as one single unit. Each stage of an event-driven workflow will have its own resource demands. Some will be CPU intensive. Others produce heavy disk I/O. If you have a single ETL or ESB workflow to handle events, expect to scale that entire thing when any one component gets constrained. That’s pretty inefficient and often leads to over-provisioning.

Option #2 – Orchestrated Components

In this scenario, you’ve got a fairly loose wrapper around independent services that respond to events. These are individual components, built and delivered on their own. While still somewhat deterministic—you are still modeling a flow—the events aren’t trapped within that flow.

What would you use to build it?

Without a doubt, you can still use traditional ESB tools to construct this model. A BizTalk orchestration that listens to events and calls out to standalone services? That works. Most iPaaS products also fit the bill here. If you build something with Azure Logic Apps, you’re likely going to be orchestrating a set of services in response to an event. Those services could be REST-based APIs backed by API Management, Azure Functions, or a Service Bus queue that may trigger a whole other event-driven process!

2017.05.02-event-02.png

You could also use tools like Spring Cloud Data Flow to build orchestrated, event-driven processes. Here, you chain together standalone Spring Boot apps atop a messaging backbone. The services are independent, but with a wrapper that defines a flow.
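
As a concrete (if simplified) illustration, each link in such a chain is just a Spring Boot app with an input and an output. The sketch below uses the annotation-based Processor binding; the "enrichment" logic is obviously made up:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Processor;
import org.springframework.messaging.handler.annotation.SendTo;

// Minimal sketch of one standalone step in an orchestrated flow: consume from the
// input binding, publish a transformed result to the output binding. A tool like
// Spring Cloud Data Flow wires the bindings of several such apps together.
@SpringBootApplication
@EnableBinding(Processor.class)
public class EnrichmentProcessorApplication {

    public static void main(String[] args) {
        SpringApplication.run(EnrichmentProcessorApplication.class, args);
    }

    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public String enrich(String payload) {
        return payload.toUpperCase(); // stand-in for real enrichment logic
    }
}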

What are the benefits?

The main benefits of this model stem from the decoupling and velocity that comes with it. Others are:

  • Distributed development. While you still have someone stitching the process together, the components get developed independently. And hopefully, you get more people in the mix who don’t even need to know the “wrapper” technology. Or in the case of Spring Cloud Data Flow or Logic Apps, the wrapper technology is dev-oriented and easier to understand than traditional integration systems. Either way, this means more parallel development and faster turnaround of the entire workflow.
  • Composable processes. Configure or reconfigure event handlers based on what’s needed. Reuse each step of the event-driven process (e.g. source channel, generic transformation component) in other processes.
  • Loose grip on the event itself. There could be many parties interested in a given event. Your flow may be just one. While you could reuse the inbound channel to spawn each event-driven process, you can also wiretap orchestrated processes.

What are the risks?

You’ve got some risks with a more complex event-driven flow. Those include:

  • Complexity and complicated-ness. Depending on how you build this, your solution might not only be complicated but also complex! Many moving parts, many distributed components. This might result in trickier troubleshooting and less certainty about how the system behaves.
  • Hidden dependencies. While the goal may be to loosely orchestrate services in an event-driven flow, it’s easy to have leaky abstractions. “Independent” services may share knowledge between each other, or depend on specific underlying infrastructure. This means that you need good documentation, and services that don’t assume that dependencies exist.
  • Breaking changes and backwards compatibility. Any time you have a looser federation of coordinated pieces, you increase the likelihood of one bad actor causing cascading problems. If you have a bunch of teams that build/run services on their own, and one team combines them into an event-driven workflow, it’s possible to end up with unpredictable behavior. Mitigation options? Strong continuous integration practices to catch breaking changes, and a runtime environment that catches and isolates errors to minimize impact.

Option #3 – Choreographed Components

In this scenario, your event-driven processes are extremely fluid. Instead of anything dictating the flow, services collaborate by publishing and subscribing to messages. It’s fully decentralized. Any given service has no idea who or what is upstream or downstream of it. Each one does its job, and if another service wants to do subsequent work with the result, great.

What would you use to build it?

In these cases, you’re often working with low-level code, not high level abstractions. Makes sense. But there are frameworks out there that make it easier for you if you don’t crave writing to or reading from queues. For .NET developers, you have things like MassTransit or NServiceBus. Those provide helpful abstractions. If you’re a Java developer, you’ve got something like Spring Cloud Stream. I’ve really fallen in love with it. Stream provides an elegant abstraction atop RabbitMQ or Apache Kafka where the developer doesn’t have to know much of anything about the messaging subsystem.
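
For instance, a choreography-style publisher just drops an event onto its output channel and has no idea who, if anyone, consumes it. A minimal sketch using the annotation-based Source binding (the HTTP endpoint and plain String payload are simplifications of my own, not code from a specific demo):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Minimal sketch of a choreographed publisher: emit the event and let any
// interested subscribers react, with no knowledge of who is downstream.
@SpringBootApplication
@EnableBinding(Source.class)
@RestController
public class EventPublisherApplication {

    @Autowired
    private Source source;

    public static void main(String[] args) {
        SpringApplication.run(EventPublisherApplication.class, args);
    }

    @PostMapping("/events")
    public void publish(@RequestBody String event) {
        source.output().send(MessageBuilder.withPayload(event).build());
    }
}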

What are the benefits?

Some of the biggest benefits come from the velocity and creativity that stem from non-deterministic event processing.

  • Encourages adaptable processes. With choreographed event processors, making changes is simple. Deploy another service, and have it listen for a particular event type.
  • Makes everyone an integration developer. Done right, this model lessens the need for a siloed team of experts. Instead, everyone builds apps that care about events. There’s not much explicit “integration” work.
  • Reflects changing business dynamics. Speed wins. Not speed for the sake of it, but the speed of learning from customers and incorporating feedback into experiments. Scrutinize anything that adds friction to your learning process. Fixed workflows owned by a single team? Increasingly, that’s an anti-pattern for today’s software-driven, event-powered businesses. You want to be able to handle the influx of new data and events and quickly turn that into value.

What are the risks?

Clearly, there are risks to this sort of “controlled chaos” model of event processing. These include:

  • Loss of cross-step coordination. There’s value in event-driven workflows that manage state between stages, compensate for failed operations, and sequence key steps. Now, there’s nothing that says you can’t have some processes that depend on orchestration, and some on choreography. Don’t adopt an either-or mentality here!
  • Traceability is hairy. When an event can travel any number of possible paths, and those paths are subject to change on a regular basis, auditability can’t be taken for granted! If it takes a long time for an inbound event to reach a particular destination, you’ve got some forensics to do. What part was slow? Did something get dropped? How come this particular step didn’t get triggered? These aren’t impossible challenges, but you’ll want to invest in solid logging and correlation tools.

You’ve got lots of options for modeling event-driven processes. In reality, you’ll probably use a mix of all three options above. And that’s fine! There’s a use case for each. But increasingly, favor options #2 and #3 to give you the flexibility you need.

Did I miss any options? Are there benefits or risks I didn’t list? Tell me in the comments!

Categories: BizTalk, General Architecture, .NET, Cloud, DevOps, Microsoft Azure, Pivotal, Spring