I’ve always enjoyed building systems that use flexible messaging platforms. Before joining Microsoft, I was a member of a small ISV team that built a commercial .NET-based ESB product (a rare creature in and of itself) that was primarily a WCF-based messaging system. We had a number of capabilities that were useful and somewhat unique in their execution, but one of my favorites was the ability to provide transactional messaging across multiple platforms by leveraging ActiveMQ in our adapter model.

I’ve always admired the Apache folks. More often than not they seem to just get it (unless you’re talking about docs … but hey, their stuff is free, and even we ginormous mega-power companies screw the pooch on docs sometimes). And ActiveMQ is no exception. It has pretty much all of the capabilities you’d expect from a full-featured broker, including:

- Persistent, transactional messaging over queues and topics
- Durable subscriptions
- Master/slave and network-of-brokers topologies for high availability
- Multiple wire protocols (OpenWire, STOMP) and cross-language clients

The list goes on. But you get it. It’s an actual message broker with tons of features, the most important of which is its .NET support via the .NET Messaging API (NMS).

There’s only one problem. ActiveMQ, like pretty much every other messaging broker, was developed pre-cloud. Granted, some of the AMQP brokers such as RabbitMQ have caught on with folks who make all of their money “off prem,” but none of the brokers I’ve worked with are what I would call cloud friendly. Maybe there’s one out there, and if there is, please add a comment because I’d love to check it out.

So where does that leave us? Well, if you’re just trying to get some things done in Windows Azure between roles and instances and you don’t need a full featured broker, stop right now and run to see my colleague Valery Mizonov’s incredible blog about rolling your own. It’s the best thing I’ve seen on the subject, and it was derived from real folks working on real challenges while deploying on Windows Azure. Valery is a genius, though, and he was built based on the design of the Cyberdyne Systems Model 101, so it’s not a surprise that he was able to achieve such a masterpiece of engineering and bloggery.

If, on the other hand, you’re a simpler life form like me and feel a certain, shall we say, irritation that $58 billion in revenue and $14 billion in profit occurred without something more “out of the box” in the cloud for us messaging lovers, then read on. But first, a warning:

WHAT YOU ARE ABOUT TO READ IS NOT IN ANY WAY A “BEST PRACTICE” OR EVEN SOMETHING NORMAL PEOPLE MIGHT TRY. IT IS AN ENVELOPE PUSHING EXERCISE THAT TURNED OUT PRETTY COOL AND LEFT ME WITH AN APPRECIATION OF THE POWER OF THE WINDOWS AZURE WORKER ROLE AND CONFIRMED MY APPRECIATION OF ACTIVEMQ.

Now that the warning is out of the way, let’s begin the journey…

 

Step 1: Figure Out How To Develop

I’d seen the posts on running Tomcat in Windows Azure and downloaded the Tomcat Accelerator. It gave me hope that what I wanted to do was possible. I wanted a more “integrated” experience, however, than all of the cmd files in that project, not to mention wanting to avoid having to build my own.

So, I decided to use the technique I had used long ago as a Java Developer dealing with classpaths and a bazillion jar files: sheer brute force.

[Screenshot: Solution Explorer showing the jre6 and activemq folders embedded in the worker role project]

Yes, that really is a JRE and ActiveMQ embedded, with the files set to be copied into the project. It took a few monotonous minutes to set up, but it was well worth it in the long run.

Step 2: Figure Out How to Deal With Local Storage and Dynamic IPs

Anyone who has spent time with Azure knows they need to deal with the rules of Local Storage and the way endpoints are assigned. When you’re starting a JVM and running something like ActiveMQ, which has a lot of startup options, things get even more harrowing.

My first goal, then, was figuring out how to get all the things I needed over to Local Storage and to edit the config files so that when ActiveMQ started up, the dynamically assigned ports were used.

This problem is ugly no matter how you slice it (the Tomcat example I downloaded had robocopy in a .bat), but after I was done I didn’t feel too bad.

private void ConfigureDirectories()
{
    // The package contents are deployed under %RoleRoot%\approot.
    workingDir = Environment.GetEnvironmentVariable("RoleRoot") + @"\approot";
    javaHome = workingDir + @"\jre6";
    var activeMqFiles = workingDir + @"\activemq";

    // Copy the broker into writable Local Storage so it can create its data files.
    runtimeDir = RoleEnvironment.GetLocalResource("Storage").RootPath + "activemq";
    Microsoft.VisualBasic.FileIO.FileSystem.CopyDirectory(activeMqFiles, runtimeDir, true);
}

 

The awesomeness of the CLR shows up here as I leverage VB.NET for a little recursive copy assist.

Once everything was copied over, it was just a matter of setting up the environment and writing the IPs to the respective config files.

private void ConfigureEnvironment(Process proc)
{
    // Point the child process at the embedded JRE and the copy of ActiveMQ
    // that now lives in Local Storage.
    proc.StartInfo.EnvironmentVariables.Remove("JAVA_HOME");
    proc.StartInfo.EnvironmentVariables.Remove("ACTIVEMQ_HOME");
    proc.StartInfo.EnvironmentVariables.Add("JAVA_HOME", javaHome);
    proc.StartInfo.EnvironmentVariables.Add("ACTIVEMQ_HOME", runtimeDir);
    ManipulateIpAddresses();
}

       

The rest of the setup is just XPath boilerplate and Process setup, so I won’t walk through all of it here (a sketch follows below). One note: as of SDK 1.3 you have more options re: pinning the IP.
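For flavor, here’s a minimal sketch of what those two pieces look like. It assumes an endpoint named “OpenWire” declared in the service definition; the endpoint name, XPath, method names, and file layout here are illustrative, not the project’s exact code:

private void ManipulateIpAddresses()
{
    // Look up the IP and port Windows Azure assigned to this instance.
    // "OpenWire" is an assumed endpoint name from the service definition.
    var endpoint = RoleEnvironment.CurrentRoleInstance
                                  .InstanceEndpoints["OpenWire"].IPEndpoint;

    // Rewrite the transport connector URI in conf\activemq.xml so the
    // broker binds to that address when it starts.
    var configPath = Path.Combine(runtimeDir, @"conf\activemq.xml");
    var doc = new XmlDocument();
    doc.Load(configPath);

    // ActiveMQ's config elements are namespace-qualified.
    var ns = new XmlNamespaceManager(doc.NameTable);
    ns.AddNamespace("amq", "http://activemq.apache.org/schema/core");

    var connector = doc.SelectSingleNode("//amq:transportConnector", ns);
    connector.Attributes["uri"].Value =
        string.Format("tcp://{0}:{1}", endpoint.Address, endpoint.Port);

    doc.Save(configPath);
}

private void StartBroker()
{
    // Launch the JVM that runs ActiveMQ, with the environment set up above.
    var proc = new Process();
    proc.StartInfo.FileName = Path.Combine(runtimeDir, @"bin\activemq.bat");
    proc.StartInfo.UseShellExecute = false; // required to set environment variables
    ConfigureEnvironment(proc);
    proc.Start();
}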

The important thing is that after this part was done, I had a broker firing up in the dev fabric!

Step 3: Writing the Demo Code

This part was the easiest, because the Apache guys have pretty much written all the hard stuff, which at the end of the day was the reason for trying to do this in the first place. I also wanted to use the durable and guaranteed delivery features of ActiveMQ, because there are already plenty of non-durable, non-guaranteed, multiple-delivery options available without going to the pain of installing a 3rd-party message broker.

My producer used the following simple driver logic and ran in one instance:

// Connect (with retries), then send a transacted batch of messages.
AttemptConnection();
using (var session = con.CreateSession(AcknowledgementMode.Transactional))
{
    ITopic topic = session.GetTopic("AzureHostedTest");
    var producer = session.CreateProducer(topic);
    var start = DateTime.Now;
    for (int i = 0; i < messageCount; i++)
    {
        ITextMessage msg = producer.CreateTextMessage("Hello world " + msgNumber++);
        producer.Send(msg);
    }
    // Commit the transaction so the whole batch becomes visible at once.
    session.Commit();
    Thread.Sleep(sleepTime);
}

I just used the NMS libraries directly and didn’t use any of the Spring overlays.

My consumer logic was similarly simple:

var topic = new ActiveMQTopic("AzureHostedTest");

// Durable subscription: the broker stores messages for this ClientId
// even while the consumer is offline.
var consumer = session.CreateDurableConsumer(topic, con.ClientId, null, false);
consumer.Listener += (message) =>
{
    ITextMessage msg = message as ITextMessage;
    Trace.WriteLine(message.NMSMessageId + " " + msg.Text);
    session.Commit();
};

 

 

Again, what I was aiming for was the durable options, not perf. Like everything else with the product, you have a ton of options in terms of perf tuning.

Much of the logic in the solution was concerned with attaining a connection. In a cloud world you need to be very concerned with things spinning up out of order or disappearing. 
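The AttemptConnection call in the producer above boils down to a retry loop. Here’s a minimal sketch of the idea, assuming the broker addresses, retry count, and delay shown (all illustrative, not the project’s exact values):

private void AttemptConnection()
{
    // The failover transport will retry the listed brokers on its own, but
    // the outer loop also survives the window where neither role instance
    // has opened its endpoint yet.
    var factory = new Apache.NMS.ActiveMQ.ConnectionFactory(
        "failover:(tcp://instance0:61616,tcp://instance1:61616)");

    for (int attempt = 0; attempt < 30; attempt++)
    {
        try
        {
            con = factory.CreateConnection();
            con.Start();
            return;
        }
        catch (NMSException)
        {
            // Broker not up yet; back off and try again.
            Thread.Sleep(TimeSpan.FromSeconds(10));
        }
    }
    throw new InvalidOperationException("No broker reachable after 30 attempts.");
}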

OK, what are the odds of this running at this point? Pretty good, actually, provided you wanted to use Local Storage as your durability option.

But we all know that’s just crazy. Nope, you need something enterprise. Something serious. You need SQL Azure!

Step 4: Fighting Through the Topology Deployment Options

Once again, ActiveMQ shines here in the features and flexibility department. However, this was also the part of the project where I knew the path that led me from being a pure open source guy to working for the evil empire was driven not by naked greed or any malicious desire, but instead by the need to get something @#$#$%#$$%^$ done! We have a lot of great products that let you actually do stuff. Sometimes the other nerd ecosystems out there seem to place productivity a distant second to coolness or “elegance” or their consultant-friendly version of “enterprise,” which results in a miserable experience for anyone outside the tiny circle of wonder nerds who grok/exploit their systems and style.

Oh, it looks like an innocuous enough exercise if you read this or this, except for the pesky fact that if you use those pages as a guide with the latest ActiveMQ bits and the latest SQL Server JDBC drivers, your chances of it working are nothing. Thus, I put myself into the mindset I used to live in 24/7 back in the old days and started to dig my way through bug lists and blogs and forums and experimentation until finally: eureka! Or, as I tend to think of it now, for #$%#$%#$ sake!

Before sharing the tricks and tidbits, however, let’s take a short break to describe the deployment I planned on. The topology I chose to deploy was what ActiveMQ calls JDBC Master Slave. This topology uses a sophisticated algorithm that basically says: if I get the lock first, then I’m the master; otherwise, you are. I know, I know. I should back off the rocket science a little bit here and make things a little easier to understand by means of a salient and powerful diagram!

My favorite part of this choice was this quote:

By default if you use the <jdbcPersistenceAdapter/> to avoid the high performance journal you will be using JDBC Master Slave by default. You just need to run more than one broker and point the client side URIs to them to get master/slave. This works because they both try and acquire an exclusive lock on a shared table in the database and only one will succeed.
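For concreteness, here’s what “point the client side URIs to them” looks like from the NMS side; a minimal sketch, assuming two brokers reachable as instance0 and instance1 on the default OpenWire port (names illustrative):

// The failover transport lists both brokers. Whichever instance holds the
// database lock is the master; when it dies, the transport reconnects to
// the other one automatically.
var factory = new Apache.NMS.ActiveMQ.ConnectionFactory(
    "failover:(tcp://instance0:61616,tcp://instance1:61616)");
using (var connection = factory.CreateConnection())
{
    connection.Start();
    // ... create sessions, producers, and consumers as shown earlier
}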

Heck, it practically springs off the page as the right option. Now, before I go on, I would be remiss if I didn’t mention the “high performance journal” referred to in the quote. In order to take advantage of it in Windows Azure, however, you’re likely going to have to use a complex topology that works around the limitations of Windows Azure Drives and multiple instances. That’s the kind of thing done in our team’s Design Wins program, though, and is beyond the scope of this effort.

Besides, doing it this way you can use SQL Azure (of course, in true open source fashion, you could combine journals and JDBC… all you have to do is get through the docs, etc.).

SQL Azure is an oasis of comfort and predictability in an otherwise turbulent world. It’s your solid and reliable friend in a cloud universe where we’re told that you might as well throw state out the window, abandon the thought of any guarantees, and make everything idempotent because… well, things are just going to happen in a way you don’t want them to… so you need to be prepared. Granted, there are some folks who have said people developing distributed systems should have been doing that all along, even when they used on-premises technologies, but just because they were right doesn’t mean we should have listened to them!

Now onto the secrets. After downloading the latest JDBC drivers, I promptly waded through the XML docs and arrived at the following for my JDBC-related settings:

<persistenceAdapter>
  <jdbcPersistenceAdapter createTablesOnStartup="false" dataDirectory="${activemq.base}/data" dataSource="#mssql-ds"/>
</persistenceAdapter>

<bean id="mssql-ds" class="org.apache.commons.dbcp.BasicDataSource">
  <property name="driverClassName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
  <property name="url" value="jdbc:sqlserver://[replace].database.windows.net;databaseName=ActiveMQ;"/>
  <property name="username" value="[replace]@[replace].database.windows.net"/>
  <property name="password" value="[replace]"/>
  <property name="maxActive" value="200"/>
  <property name="poolPreparedStatements" value="true"/>
</bean>

I set createTablesOnStartup to true initially, then false after the tables were created, which seemed logical. Once you get past the unintuitive choice of class for the datasource, that’s not so bad. Unfortunately, when you start ActiveMQ it won’t work, because you’ll run into a bug similar to this one, but with the added fun that the recommendations on the page don’t work for the 5.4.2 broker.

The answer to the issue is hinted at in this bug report. I’ll leave finding the hint to you, gentle reader, and skip right to its actual implementation, because the hint will not help in isolation. You would need to carefully read the resulting stack trace after applying the hint and pair it with the knowledge gleaned to fully get the answer.

Here it is in a nutshell, courtesy of 7-Zip:

[Screenshot: 7-Zip view inside the jar, showing the renamed copies alongside the highlighted original files]

See those files that are named slightly differently but have exactly the same content as the highlighted ones? That’s the key. All you have to do is copy the file you need to base your new file on out of the jar, rename it using the slightly different convention, then stuff the renamed file back into the jar, and away you go! What could be more natural?
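If you’d rather script that dance than click through 7-Zip, remember that a jar is just a zip archive. A minimal sketch using .NET 4.5’s System.IO.Compression (the jar path and entry names are placeholders; finding the actual file is the exercise left above):

using System.IO.Compression; // reference System.IO.Compression.FileSystem

// Copy an existing entry to a renamed one inside the jar (paths hypothetical).
using (var jar = ZipFile.Open(@"path\to\the.jar", ZipArchiveMode.Update))
{
    var original = jar.GetEntry("path/inside/jar/original.properties");
    var renamed = jar.CreateEntry("path/inside/jar/renamed.properties");
    using (var source = original.Open())
    using (var target = renamed.Open())
    {
        source.CopyTo(target);
    }
}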

 

Step 5: Deploy and Savor the Payoff

Once all the above was figured out, all that was left was to grow a tree from an acorn while I waited for the project to upload to Windows Azure. Once it landed, I monitored things with SSMS and a simple query.
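Something along these lines; a hedged sketch, assuming the broker’s default JDBC message table name, ACTIVEMQ_MSGS, and placeholder connection values:

using (var conn = new SqlConnection(
    "Server=tcp:[replace].database.windows.net;Database=ActiveMQ;" +
    "User ID=[replace]@[replace];Password=[replace];Encrypt=True;"))
{
    conn.Open();
    // ACTIVEMQ_MSGS is the default message store table the JDBC adapter creates.
    using (var cmd = new SqlCommand("SELECT COUNT(*) FROM ACTIVEMQ_MSGS", conn))
    {
        Console.WriteLine("Messages in store: {0}", cmd.ExecuteScalar());
    }
}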

Then I got real brave and decided to cycle instance 0. Sure enough, the pause in activity told me it was the master. When things picked up again after a few seconds, I knew that the scruffy demo code had actually worked with failover. After that I forget, because of the tears and the bourbon.

Here’s the code (be warned, it’s fairly large because of the embedding).

 

Special thanks to Steve Marx, Paolo Salvatori and James Podgorski for their feedback and review.