BizTalk Server Backup, DR and Log Shipping – BizTalk360 can make your life easy

Original article: http://blog.biztalk360.com (cross-posted).

Note: This capability will be added in our next release of BizTalk360, v5.0, due in 6 weeks' time.

One of the design goals of BizTalk360 is to abstract away the complexity of some of the harder concepts in BizTalk Server. We want BizTalk environments to be managed, operated and administered by someone with only basic BizTalk knowledge, not necessarily a BizTalk expert.

In our experience, only very few companies have dedicated BizTalk administrators. It may not be feasible to have a dedicated BizTalk admin resource, for reasons such as not having enough work to keep the resource busy all the time, cost, etc. In the majority of cases, organisations choose to cross-train either DBAs or Windows infrastructure staff to pick up some of the BizTalk administrative tasks.

For us here at BizTalk360, the goal is to provide both experienced and new BizTalk people (operators, administrators) with sophisticated tooling to make their day-to-day life easier. We have already addressed some of these challenges with features like the graphical message flow viewer, fine-grained authorization, the graphical throttling analyser, the Advanced Event Viewer, etc. On the same theme, the next big module is BizTalk backup/disaster recovery visualization.

Introduction

SQL Server, especially the MessageBox database, is the heart of your BizTalk environment. All of the BizTalk servers in the group are typically stateless, and a great deal of the logic required for the proper functioning of your BizTalk environment lives in the SQL layer.
For that same reason, Microsoft wants SQL Server treated as a black box when it comes to BizTalk Server: it is Microsoft's property, and you are not supposed to perform traditional database maintenance activities on it. BizTalk Server comes out of the box with predefined backup jobs and a clear procedure for configuring log shipping, a standby server and restore procedures. In fact, this is the only supported way of doing backup and restore in BizTalk Server. There is very good documentation on MSDN, and a couple of great articles by Nick Heppleston, about BizTalk backup/DR.

At a very high level, this is how it works:

image

  • You configure the backup job in the live environment, which takes periodic full backups and log backups and stores them in a highly available UNC path.
  • You configure a standby SQL Server in a remote location, which has access to both the UNC share and the live SQL environment (as a linked server).
  • Both environments maintain a set of SQL tables in the management and master databases respectively, keeping the history of backup and restore activities.
  • The standby environment reads the history from the live environment, picks up the data and log files from the UNC share, and restores them on the standby environment with the NORECOVERY option.
  • In the event of a failure, you perform a set of activities, clearly documented on MSDN, to bring the standby environment live.

Challenges for BizTalk Administrators/DBA

To make sure the backup, log shipping and standby restore activities are working correctly, a BizTalk administrator or DBA needs to perform various checks periodically. There are many touch points to verify, including:

  • The SQL Server Agents are running correctly on both sides
  • The SQL jobs are configured correctly and enabled
  • There are no errors in the backup/restore job history on the live and standby environments
  • Understanding the adm_BackupHistory table to see whether backups are working as expected
  • Understanding the bts_LogShippingHistory table in the standby environment to check whether restores are working as expected
  • Checking the backup job configuration (full backup frequency, log backup schedule, UNC path, etc.)

All these steps make BizTalk backup/log shipping/DR an expert activity, and it is nearly impossible for a non-BizTalk person to understand all of them.

How are we resolving these issues?

BizTalk360 solves these challenges by providing a BizTalk backup/disaster recovery visualizer, as shown in the picture below.

image

BizTalk360 understands the configuration and displays all the details in a simpler, graphical way. Let's take a look at each section:

  • Backup job configuration right in the UI
  • Graphical view of Live/UNC/Standby environment
  • Live and Standby backup/Log shipping, job history

Backup job configuration right in the UI

You can expand the pane "Backup SQL Job configuration (Live Environment)", which displays your current backup job configuration in an easy-to-understand way. Behind the scenes, BizTalk360 parses the backup job configuration (all the job steps, parameters, schedule, etc.) and presents it in a clean UI.

image

Graphical view of Live/UNC/Standby environment

The next section shows a graphical representation of your live/UNC/standby setup, clearly showing how the environments are configured.

image

The live and standby environment boxes clearly show the health of important parameters, such as:

  • Whether the SQL Server Agents are started
  • Whether the backup/restore jobs are enabled correctly
  • Whether there are any errors in the job history

Note: One of the SQL restore jobs is supposed to be in a stopped state; it is only enabled during a real disaster. BizTalk360 understands that configuration and ignores that job while calculating the health.
The UNC share box shows the path and the names assigned to the full and log backups in the backup job configuration.

Live and Standby backup/Log shipping, job history

As mentioned earlier, it's important to keep an eye on the backup/log shipping history records to see whether backups are working correctly and data/logs are being restored correctly in the standby environment. Some of the things to note:

image

  • You can visualize the databases being backed up/restored
  • The frequency of full backup
  • The frequency of log backup
  • Last set of successful full backup records
  • Last set of successful log backup records
  • Last set of successful full restore records
  • Last set of successful log restore records
  • Notes clearly showing what you need to look for (e.g. the restored backup set ID must be the live one minus 1 to be healthy)
  • The history records for the backup/restore jobs

With this single view, BizTalk360 represents your whole backup/disaster recovery setup and helps you keep an eye on the health of your DR plan. The important thing to note here is that anyone, without any prior BizTalk knowledge, can understand the settings. Thanks to BizTalk360.

If you have any feedback, please feel free to contact us; we want to make everyone's life easy.

Nandri
Saravana Kumar

Windows Phone 8 – What is new?

The presenter, Augusto Valdez, started by stating something to get everyone in a good mood, as we all know that WP is a very good product but it still lags in sales. They did their own research by going to the Amazon US website and looking at which phones people like. The top 3 are Windows Phones, and 7 of the top 9 are.

Windows Phone 8 will release at the same time as Windows 8. The different teams are working together, collaborating and trying to get the same experience on the phone as on the desktop or tablet.

So here are 8 new features in Windows Phone 8:

1. The latest and greatest hardware. It will support dual cores and more, and three different resolutions, the highest being 1280×720 (16:9). They will continue to support MicroSD and even expand on that functionality by allowing you to install apps from a MicroSD card!

2. IE 10. This will be the same code that runs on Windows 8, so it will have great JavaScript and HTML5 performance. It will also include anti-phishing, since that is a big problem on mobile devices at the moment.

3. Native code support. The same code that runs on Windows 8 will run on the phone. Think about the time this little gem might save you.

4. Full support of NFC (near field communication). Now the words “full support” might mean different things to different people but that is what he said. NFC is to me pure science fiction, which either makes it cool or me seem really old.

5. The most complete wallet. Well, if you say so. I won't hold my breath, but if we could do away with all these membership cards and cash I would be a very happy guy. Also, the security will sit in the SIM card and not in the hardware. That means the security is portable and you can move your identity between devices.

6. Nokia map technology. This means a lot of things, but mostly it means offline maps. Download all the maps for, let's say, Amsterdam, and use them all day without roaming charges.

7. Windows Phone 8 for business

IMG_1619

If you are using Windows 8 and Windows Phone 8, there should not be any reason not to use the same apps on all your devices. This is where that shared core comes into play. The phone can now be encrypted, so you can treat it (nearly) like any other laptop in the business, and push different apps to different phones, perhaps also enforcing security policies and restrictions.

Another important thing is that you can install applications on your phone without using the Marketplace. This is of course important to business users. (That little fact won a guy in the audience a Nokia 900, by the way!)

8. The start screen

IMG_1620

Once again the shared core comes into play, and the extended functionality of live tiles on Windows 8 will come to Windows Phone 8. The picture is actually from a prototype phone the presenter used to demo features.

The old version

So what will happen to Windows Phone 7.5? Many already know that you will not be able to upgrade a WP7.5 device to WP8. Mr Valdez told us that there will be a WP7.8 that will come close to what WP8 can do, but not all the way.


Blog Post by: Mikael Sand

Windows Azure with Scott Gu

I wonder if Scott Gu can sing? If so he has a lovely basso.

There was very little news for me in this session, as I am a frequent attendee of the Swedish Azure User Group, but a little repetition might improve my knowledge.

There are a couple of things that still amaze me when it comes to Azure. The first one is the 99.95% monthly SLA. This means that Microsoft guarantees that your servers are up all but about 22 minutes during a 30-day month.
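As a quick sanity check on what that SLA figure allows (simple arithmetic, not an official Microsoft calculation):

```python
# Downtime allowed by a 99.95% monthly SLA over a 30-day month.
allowed_minutes = (1 - 0.9995) * 30 * 24 * 60
print(round(allowed_minutes, 1))  # about 21.6 minutes
```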

The next thing that amazes me is the cost of hosting a server. Two small instances (1.6 GHz processor, 1.75 GB of RAM, 225 GB of storage) with 100 GB of data transfer cost 90 per month from the first month! I can easily name a couple of providers that will charge you 600 for the same service.

Also remember this: MSDN Premium and Ultimate come with Azure! So there is nothing stopping you from at least giving it a try; perhaps time constraints, but not cost. You only pay for what you use. Start small and scale up, or use it heavily for a few hours and then shut the instances down. You don't pay any more.

Virtual private networking is finally here. They have talked about it for a while, but now you can have a network within the cloud and then connect it to your local network using VPN tunneling. They even provide a way of scripting the virtual network so that the local network can use VPN to access it (and vice versa).

Since all machines running in Windows Azure are VHDs, you can use VHDs that you already have on premise or perhaps at other providers.

I started thinking about something: what can you do locally that you cannot do in Windows Azure? There was no time for questions at the end, but perhaps someone can give me a suggestion on Twitter.

The next cool thing is Azure Websites, something I really wish I had had access to back in the day so I could focus on content and not on building the actual infrastructure. You get 10 free with MSDN. They are very easy to deploy using VS 2012. You can also connect them to TFS (the online version as well) and make use of continuous build and deploy.


Blog Post by: Mikael Sand

TechEd Europe 2012 keynote-Key takeaways

Welkom in Amsterdam! http://www.iamsterdam.com/

Windows is now big! And when I say big I mean HUGE. It not only powers smaller and smaller devices, but larger and larger ones as well. Some specs for the new Windows Server 2012 (yes, they are calling it that, so stop calling it Windows Server "8"): you can run 64 nodes in a single cluster, use up to 4 TB of memory per server, run 4,000 VMs, and a single VM can support 1 TB of memory and 64 TB of virtual disk!

Given those figures, we are very close to seeing the end of the physical server as the go-to solution for information- and transaction-heavy solutions such as BizTalk tends to be. All this virtualization also makes environments easier to maintain and move around as the specs of the different applications change. Good news for us.

The most impressive part was the way they got more than 1 million I/Os per second from a single virtual machine. Compare this number to your fairly standard (and fast) SSD drive, which manages about 8,000. I can safely say that physical servers are no longer the primary way to go. Even SQL Server can deliver very near its maximum performance in a virtual environment; one Microsoft speaker said for about 99% of all tasks.

The other thing they focused on was the strong capability to utilize a hybrid cloud. They even provisioned an AMS server using Windows System Center. They also talked a lot about how to integrate different versions of the cloud and how it can all be monitored from the same place, including that AMS server. For those of us familiar with other cloud providers that focus mainly on IaaS, this is a very good thing, because most of the time it simply comes down to maintainability.

In Berlin in 2010 I blogged about the keynote as well, and in that post I "predicted" that we might see a future in which we buy desktops in the cloud for our company, and they look and behave just as they normally would. We are not quite there yet in some respects, but in others Microsoft has surpassed my predictions and my expectations! We can now run servers in the cloud just as we would run them on-prem.


Blog Post by: Mikael Sand

Contest: Win a copy of (MCTS): Microsoft BizTalk Server 2010 (70-595) Certification Guide book

Great news for the BizTalk community! I first spoke about this book on March 27, 2012, here in my blog, and once again: I have 2 copies of the new book, (MCTS): Microsoft BizTalk Server 2010 (70-595) Certification Guide, to give away, courtesy of PACKT Publishing, and for the first time one of these copies is […]
Blog Post by: Sandro Pereira

BizTalk Server futures-Presentations from TechEd North America

I have already relayed this information to so many people, and given out the links to even more, that I thought I'd put them up here for easy access. There is much more to be written about the content, but I'll settle for this. Information about BizTalk Server 2010 R2 has been available for some time, but at TechEd it got much more real, and some things not previously mentioned or detailed were unveiled. In short:

Application Integration Futures: The Road Map and What's Next on Windows Azure: Video | Slides

Building Integration Solutions Using Microsoft BizTalk On-Premises and on Windows Azure: Video | Slides


Blog Post by: Johan Hedberg

BizTalk and Windows Azure Service Bus as Relay

Recently, I presented a webinar, "Secure Integration Messaging with BizTalk and Windows Azure Service Bus" (formerly known as AppFabric Service Bus). The webinar and slide deck can be viewed here.
I'd like to take some time and walk through creating the solution. In this webinar, I demonstrated how to process inbound orders from external trading partners […]
Blog Post by: Stan Kennedy

256 Windows Azure Worker Roles, Windows Kinect and a 90’s Text-Based Ray-Tracer

For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of a presentation I deploy an Azure application that uses 16 worker roles to render a 1,500-frame 3D ray-traced animation. At the end of the presentation, when the animation is complete, I play the animation and delete the Azure deployment. The standing joke with the audience was that it was a "$2 demo": the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it's a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, Windows Azure can work out very cost-effective.

The "$2 demo" was great for presenting at user groups and conferences, in that it could be deployed to Azure, used to render an animation, and then removed, all within a one-hour session. I had always had the idea of doing something a bit more impressive with the demo, scaling it from a "$2 demo" to a "$30 demo". The challenge was to create a visually appealing animation in high-definition format while keeping the demo time down to one hour. This article runs through how I achieved this.

image

Ray Tracing

Ray tracing, a technique for generating high-quality photorealistic images, gained popularity in the 90s, with companies like Pixar creating feature-length computer animations and shareware text-based ray tracers emerging that could run on a home PC. To render a ray-traced image, a ray is traced from the viewpoint through each pixel until it intersects with an object. At the intersection, the color, reflectiveness, transparency and refractive index of the object are used to calculate whether the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine its color in the rendered image.
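At the core of every such renderer is an intersection test. A minimal, hypothetical Python ray-sphere test is sketched below for illustration; it is not PolyRay's actual code:

```python
import math

# Return the distance along the ray to the nearest sphere hit, or None for a miss.
# The ray is origin + t * direction; the sphere has the given center and radius.
def ray_sphere(origin, direction, center, radius):
    ox, oy, oz = (o - c for o, c in zip(origin, center))
    dx, dy, dz = direction
    # Substitute the ray equation into the sphere equation -> quadratic in t.
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                        # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)   # nearest of the two roots
    return t if t > 0 else None
```

A ray fired from <0, 0, -5> straight down the Z axis hits a unit sphere at the origin at distance 4, which is the kind of calculation repeated millions of times per frame.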

Pin-Board Toys

Having very little artistic talent and only a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I've always liked the pin-board desktop toys that became popular in the 80s, and when I was working as a 3D animator back in the 90s I had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.

PolyRay

Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours of processing on a 486, produces a high-quality ray-traced image.

The following is an example of a basic PolyRay scene file.

background Midnight_Blue
static define matte surface { ambient 0.1 diffuse 0.7 }
define matte_white texture { matte { color white } }
define matte_black texture { matte { color dark_slate_gray } }
define position_cylindrical 3
define lookup_sawtooth 1
define light_wood <0.6, 0.24, 0.1>
define median_wood <0.3, 0.12, 0.03>
define dark_wood <0.05, 0.01, 0.005>
define wooden texture { noise surface { ambient 0.2 diffuse 0.7 specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1 lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } }
define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }}
define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7 }
define steely_blue texture { shiny { color black } }
define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }
viewpoint
{
    from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60
    resolution 640, 480 aspect 1.6 image_format 0
}
light <-10, 30, 20>
light <-10, 30, -20>
object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }
object { sphere <0.000, 0.000, 0.000>, 1.00 chrome }
object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }

After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates.

The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced.

image

The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later.

Modeling the Pin Board

The frame of the pin-board is made up of three boxes and six cylinders. The front box is modeled using a clear, slightly reflective solid with the same refractive index as glass. The other shapes are modeled as metal.

object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass }
object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue }
object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue }
object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue }
object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue }
object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue }
object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue }
object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue }
object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }

To create the matrix of pins that makes up the pin board, I used a basic console application with a few nested loops to create two intersecting matrices of pins, which models the layout used in real pin boards. The resulting image is shown below.

image

The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. For the complete animation 2,000 scene files will be created, which is over 47 million lines of code.
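The nested-loop generation described above can be sketched as follows. This is a hypothetical Python version for illustration (the original was a C# console application that was not published in full); the function names, pin dimensions and spacing are assumptions:

```python
# Emit PolyRay scene lines for two intersecting matrices of pins, the second
# matrix offset by half the pin spacing, mimicking a real pin-board toy.
# Radii, shaft length and spacing below are illustrative values.
def pin(x, y, z=0.0):
    head = f"object {{ sphere <{x:.3f}, {y:.3f}, {z:.3f}>, 0.100 chrome }}"
    shaft = (f"object {{ cylinder <{x:.3f}, {y:.3f}, {z:.3f}>, "
             f"<{x:.3f}, {y:.3f}, {z - 0.400:.3f}>, 0.030 chrome }}")
    return head + "\n" + shaft

def pin_board(rows, cols, spacing=0.2):
    lines = []
    for r in range(rows):
        for c in range(cols):
            lines.append(pin(c * spacing, r * spacing))                   # first matrix
            lines.append(pin((c + 0.5) * spacing, (r + 0.5) * spacing))   # offset matrix
    return "\n".join(lines)
```

Each grid point yields two pins of two scene lines each, which is how a modest grid quickly adds up to tens of thousands of lines of scene description.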

Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of the pin can be set based on the color of the pixel at the appropriate position in the image.
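That pixel-to-depth mapping is a one-liner in any language; the sketch below is illustrative (the function name and the unit pin-travel range are assumptions, not the original code):

```python
# Map a grayscale pixel value (0-255) to the Z coordinate of a pin, so a
# brighter pixel pushes the pin further out of the board.
def pin_z(brightness, max_travel=1.0):
    return (brightness / 255.0) * max_travel
```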

image

When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated.

image

The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not correspond to the depth of the objects from the camera. To simulate the pin board accurately, a series of frames from a depth camera could be used.

Windows Kinect

The Kinect controllers for the Xbox 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for the Kinect, giving .NET developers easy access to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring the Kinect from a developer's perspective. Both the Xbox 360 Kinect and the Kinect for Windows will work with the Kinect SDK; the Kinect for Windows is required for commercial applications, but the Xbox Kinect can be used for hobby projects. The Kinect for Windows has the advantage of providing a mode that allows depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

Creating a Depth Field Animation

The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. To simulate the pin board accurately, only a small section of the depth range from the depth sensor is used. Any part of the object in front of the depth range results in a white pixel; anything behind the depth range is black. Within the depth range, the pixels in the image are set to RGB values from 0,0,0 to 255,255,255.
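The mapping just described can be sketched as follows; the near/far window values are illustrative (in the sensor's millimetre units), not the ones used in the demo:

```python
# Map a raw depth reading to a grayscale value: closer than the selected
# window -> white (255), further -> black (0), linear ramp in between.
def depth_to_gray(depth_mm, near_mm=800, far_mm=1200):
    if depth_mm <= near_mm:
        return 255
    if depth_mm >= far_mm:
        return 0
    return round(255 * (far_mm - depth_mm) / (far_mm - near_mm))
```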

image

A screen shot of the modified Kinect Explorer application is shown below.

image

The Kinect Explorer sample application was modified to include slider controls that set the depth range forming the image from the depth stream. This allows the fine-tuning of the depth image required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording.

An example of one of the depth images is shown below.

image

Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

Rendering a Test Frame

In order to test the creation of frames, and to get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below.

image

The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board. The 1280×720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below.

image

The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view and refine their work. It would be much better if the animation could be rendered in less than one hour.
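The figures quoted above check out:

```python
# 4 minutes 27 seconds per frame, 2,000 frames in the animation.
frame_seconds = 4 * 60 + 27
total_hours = frame_seconds * 2000 / 3600
print(round(total_hours, 1))  # 148.3 hours, a little over 6 days
```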

Windows Azure Worker Roles

The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

Number of Servers    Cost
1                    $500
16                   $8,000
256                  $128,000

As well as the cost of the servers, there would be additional costs for networking, racks etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling.

The Windows Azure compute services provide worker roles, which are ideal for processor-intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete can be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

Number of Worker Roles    Render Time    Cost
1                         256 hours      $30.72
16                        16 hours       $30.72
256                       1 hour         $30.72

Using worker roles in Windows Azure provides the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.
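The pricing in the table follows from per-instance-hour billing. At the small-instance rate implied by the totals above ($30.72 / 256 hours = $0.12 per hour), the cost is the same however the 256 instance-hours are split:

```python
# Total cost = roles x hours x hourly rate; 256 instance-hours either way.
rate = 30.72 / 256  # $0.12 per instance-hour, implied by the article's totals
for roles in (1, 16, 256):
    hours = 256 // roles
    print(f"{roles} roles x {hours} h = ${roles * hours * rate:.2f}")
```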

Creating a Render Farm in Windows Azure

The architecture of the render farm is shown in the following diagram.

image

The render farm is a hybrid application with the following components:

• On-Premise

o Windows Kinect – Used in combination with the Kinect Explorer to create a stream of depth images.

o Animation Creator – This application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.

o Process Monitor – This application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.

o Image Downloader – This application polls the image queue and downloads the rendered animation files once they are complete.

• Windows Azure

o Azure Storage – Queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

The architecture of each worker role is shown below.

image

The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform its format.

The service definition for the worker role with the local storage configuration highlighted is shown below.

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
    <Imports>
    </Imports>
    <ConfigurationSettings>
      <Setting name="DataConnectionString" />
    </ConfigurationSettings>
    <LocalResources>
      <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
    </LocalResources>
  </WorkerRole>
</ServiceDefinition>

The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project with the Copy Always property set. PolyRay takes the scene description file and renders it to a Truevision TGA file. As the TGA format has not seen much use since the mid-90s, it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90s.

Each worker role uses the following process to render the animation frames.

1. The worker process polls the job queue; if a job is available, the scene description file is downloaded from blob storage to local storage.

2. PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.

3. DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.

4. The JPG file is uploaded from local storage to the images blob container.

5. A message is placed on the images queue to indicate a new image is available for download.

6. The job message is deleted from the job queue.

7. The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.

The code for this is shown below.

public override void Run()
{
    // Set the paths to the executables and the local storage folder.
    string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
    string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);
    LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
    string localStorageRootPath = rayStorage.RootPath;

    JobQueue jobQueue = new JobQueue("renderjobs");
    JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
    CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
    CloudRayBlob imageBlob = new CloudRayBlob("images");
    RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

    Frames = 0;

    while (true)
    {
        // Get the render job from the queue
        CloudQueueMessage jobMsg = jobQueue.Get();

        if (jobMsg != null)
        {
            // Get the file details
            string sceneFile = jobMsg.AsString;
            string tgaFile = sceneFile.Replace(".pi", ".tga");
            string jpgFile = sceneFile.Replace(".pi", ".jpg");
            string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
            string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
            string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

            // Copy the scene file to local storage
            sceneBlob.DownloadFile(sceneFilePath);

            // Run the ray tracer (polyRayPath already includes RoleRoot)
            string polyrayArguments =
                string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
            Process polyRayProcess = new Process();
            polyRayProcess.StartInfo.FileName = polyRayPath;
            polyRayProcess.StartInfo.Arguments = polyrayArguments;
            polyRayProcess.Start();
            polyRayProcess.WaitForExit();

            // Convert the image
            string dtaArguments =
                string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
            Process dtaProcess = new Process();
            dtaProcess.StartInfo.FileName = dtaPath;
            dtaProcess.StartInfo.Arguments = dtaArguments;
            dtaProcess.Start();
            dtaProcess.WaitForExit();

            // Upload the image to blob storage
            imageBlob.UploadFile(jpgFilePath);

            // Add a download job
            downloadQueue.Add(jpgFile);

            // Delete the render job message
            jobQueue.Delete(jobMsg);

            Frames++;
        }
        else
        {
            Thread.Sleep(1000);
        }

        // Log the worker role activity
        roleLifecycleDataSource.Alive(
            "CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
    }
}

Monitoring Worker Role Instance Lifecycle

In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.

public class RoleLifecycle : TableServiceEntity
{
    public string ServerName { get; set; }
    public string Status { get; set; }
    public DateTime StartTime { get; set; }
    public DateTime EndTime { get; set; }
    public long SecondsRunning { get; set; }
    public DateTime LastActiveTime { get; set; }
    public int Frames { get; set; }
    public string Comment { get; set; }

    public RoleLifecycle()
    {
    }

    public RoleLifecycle(string roleName)
    {
        PartitionKey = roleName;
        RowKey = Utils.GetAscendingRowKey();
        Status = "Started";
        StartTime = DateTime.UtcNow;
        LastActiveTime = StartTime;
        EndTime = StartTime;
        SecondsRunning = 0;
        Frames = 0;
    }
}

A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame, to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine how effectively the render farm resources are being used.

Rendering the Animation

The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows the application to be tested in the cloud environment and its performance measured. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="CloudRayWorkerRole">
    <Instances count="16" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
               value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=…" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.


Five minutes after the first worker role became active the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.


With 16 worker roles up and running, it can be seen that one hour and 45 minutes of CPU time has been used to render 32 frames, with a render time of just under 10 minutes.


At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour more processing power will be required.
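This projection is easy to check with a back-of-the-envelope calculation using the figures above (a quick sketch in Python, not part of the CloudRay code):

```python
# Figures from the monitor: 32 frames rendered in just under 10 minutes.
frames_rendered = 32
render_minutes = 10
total_frames = 2000

frames_per_minute = frames_rendered / render_minutes   # 3.2 frames/min
estimated_hours = total_frames / frames_per_minute / 60

print(f"{frames_per_minute:.1f} frames/min, "
      f"about {estimated_hours:.1f} hours for the full animation")
# -> 3.2 frames/min, about 10.4 hours for the full animation
```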

Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected.


Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="CloudRayWorkerRole">
    <Instances count="256" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
               value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=…" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Six minutes after the new configuration has been applied 75 new worker roles have activated and are processing their first frames.


Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.


We are now halfway through the rendering, with 1,000 frames complete. This has used just under three days of CPU time in a little over 35 minutes.


The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute.
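The quoted rates can be verified from the times above (a quick Python check; the 52-minute figure is approximate, as noted):

```python
total_frames = 2000
total_minutes = 52                    # "a little over 52 minutes"
last_phase_frames = 1000
last_phase_minutes = 16 + 27 / 60     # 16 minutes 27 seconds

average_rate = total_frames / total_minutes           # ~38.5 frames/min
final_rate = last_phase_frames / last_phase_minutes   # ~60.8 frames/min

print(f"average {average_rate:.1f}, final phase {final_rate:.1f} frames/min")
```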


The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled up have rendered between 6 and 9 frames each.
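This pull-based behaviour is easy to model: each worker takes the next message only when it becomes idle, so instances that start earlier naturally process more jobs. The sketch below is a simplified simulation (uniform frame times, one batch of workers joining late), not the actual CloudRay code, so the exact counts differ from those observed.

```python
import heapq

def simulate(start_times, num_jobs, job_time=100):
    """Pull-based queue: each idle worker takes the next job.
    start_times[i] is the time (seconds) when worker i comes online."""
    frames = [0] * len(start_times)
    # Priority queue of (time the worker next becomes free, worker id).
    ready = [(t, w) for w, t in enumerate(start_times)]
    heapq.heapify(ready)
    for _ in range(num_jobs):
        t, w = heapq.heappop(ready)   # next worker to become idle
        frames[w] += 1
        heapq.heappush(ready, (t + job_time, w))
    return frames

# 16 workers start immediately; 240 more join 1200 seconds later.
frames = simulate([0] * 16 + [1200] * 240, num_jobs=2000)
early, late = frames[:16], frames[16:]
print(min(early), max(early), min(late), max(late))
# -> 20 20 7 7 (the workers that started first render more frames)
```

Even in this idealised model the queue balances the remaining work evenly, with the head start fully accounting for the difference in per-worker frame counts.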

Completed Animation

I've uploaded the completed animation to YouTube; a low-resolution preview is shown below.

Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles

The animation can be viewed in 1280×720 resolution at the following link:

http://www.youtube.com/watch?v=n5jy6bvSxWc

Effective Use of Resources

According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes of CPU time to render, which works out at 152 hours of compute time, rounded up to the nearest hour. As worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding on the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more cost-effective to start the application with maybe 200 worker roles and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application.
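The arithmetic behind this is worth making explicit (a sketch; the per-clock-hour billing model is as described above):

```python
import math

# Total CPU time reported by the monitor: 6 days, 7 hours and 22 minutes.
cpu_minutes = (6 * 24 + 7) * 60 + 22          # 9082 minutes
compute_hours = math.ceil(cpu_minutes / 60)   # 151.4 rounds up to 152

# Each instance is billed for every clock hour in which it runs, so the
# 256 instances cost at least 256 instance-hours for the ~52 minute render.
instances = 256
billed_hours = instances * 1

print(compute_hours, billed_hours)  # -> 152 256
```

The gap between the 152 hours of compute actually used and the 256 or more instance-hours billed is what leaves room for running fewer instances closer to the full hour.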

The new management portal displays the CPU usage across the worker roles in the deployment.

The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

Grid Computing Scenarios

Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.

· Windows Azure can provide massive compute power, on demand, in a matter of minutes.

· The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.

· Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.

· No charges for inbound data transfer make the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

Tips for using Windows Azure for Grid Computing Scenarios

I found the implementation of a render farm using Windows Azure a fairly simple scenario to implement. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes.

The following tips may be useful when implementing a grid computing project in Windows Azure.

· Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.

· Windows Azure accounts are typically limited to 20 cores. If you need to use more than this, a call to support and a credit card check will be required.

· Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.

· Monitor the utilization of the resources you are provisioning, and ensure that you are not paying for worker roles that are idle.

· If you are deploying third-party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.

· Third-party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a start-up task and a possible reboot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.

· Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal it could get very expensive!

Creating a new BizTalk machine from a sysprep image in Windows Azure Virtual Machines, and making it work


For its simplest, single-machine, default configuration, BizTalk Server 2010 comes with a sample of how to use sysprep, which resides in the C:\Program Files (x86)\Microsoft BizTalk Server 2010\SDK\Samples\Admin\Sysprep folder. It uses an unattended installation answer file that, among other things, tells Windows to run a number of scripts that update the databases, SSO, BAM, etc., wherever references to the machine name exist. A full description of that "sample" is available here.

Windows Azure on the other hand does not (as far as I can figure out) support running sysprep and supplying an unattend file. The way to properly sysprep a Windows Server VHD running on Windows Azure Virtual Machines is described here. If developing a machine locally the same general steps apply.

The problem is that the two approaches clash; they mix like cream and lemon, which is to say not at all. (I am sure there are ways to combine them, but it's not readily apparent how; if someone knows, feel free to comment.)

To make a BizTalk Server image created on Windows Azure fully usable, we can run the scripts and commands that the unattended configuration would have run. When running these scripts you need to know the name the computer had when sysprep was run and the image was captured.

There are three steps to this:

  1. Update SQL Server User Groups
  2. Update SQL Server metadata
  3. Update BizTalk Server (including SSO and other components)

Update SQL Server User Groups

Simply rename the groups so that they reference the new machine name instead of the old one.

Updating SQL Server Metadata

The problem when logging on to SQL Server after the name change is that you no longer have access with the administrative account, Administrator, since that login is really oldmachine\Administrator. So you need another administrative account, i.e. SYSTEM, to fix your login for you.

One way to run as SYSTEM and perform these fixes is to add a Scheduled Task. Set it to run as SYSTEM, since that account will be sysadmin in the database, and set it to run a script that fixes the necessary logins, e.g.:

REM UpdateSqlLogins.bat

SET oldcomputername=JEHBTS5
pushd %programfiles(x86)%\Microsoft BizTalk Server 2010\SDK\Samples\Admin\Sysprep\scripts

SqlCmd -s . -d master -A -Q "sp_dropserver %oldcomputername%" -o UpdateSQLLogins.log
SqlCmd -s . -d master -A -Q "sp_addserver %computername%, local" -o UpdateSQLLogins.log
SqlCmd -s . -d master -A -Q "drop login [%oldcomputername%\Administrator]" -o UpdateSQLLogins.log
SqlCmd -s . -d master -A -Q "create login [%computername%\Administrator] from windows" -o UpdateSQLLogins.log
REM SqlCmd -s . -d master -A -Q "EXEC sp_changedbowner [%computername%\Administrator]" -o UpdateSQLLogins.log
SqlCmd -s . -A -Q "EXEC sp_addsrvrolemember @loginame = N'%computername%\Administrator', @rolename = N'sysadmin'" -o UpdateSQLLogins.log

popd

Update BizTalk Server

As the start of the article mentions, Microsoft has briefly covered applying sysprep to a BizTalk Server machine, but since that procedure does not map to this we need to take a somewhat different approach.

  1. Update the file UpdateInfo.xml. In particular remove the <DeploymentUnit Name="Alert"> section, since I am not using BAM Alerts.
  2. Create a file called Replace.vbs and insert the following code.


    'Usage: Replace.vbs <text file to open> <string to be replaced> <string to replace it with>
    
    Dim sOutput, reader, readerStream, writer, writerStream, WshShell
    
    Set WshShell = WScript.CreateObject("WScript.Shell")
    
    Set reader = CreateObject("Scripting.FileSystemObject")
    Set readerStream = reader.OpenTextFile(WScript.Arguments(0), 1, False, -2)
    
    'Replace the old name in its original, upper-case and lower-case forms
    sOutput = Replace(readerStream.ReadAll, WScript.Arguments(1), WScript.Arguments(2))
    sOutput = Replace(sOutput, UCase(WScript.Arguments(1)), WScript.Arguments(2))
    sOutput = Replace(sOutput, LCase(WScript.Arguments(1)), WScript.Arguments(2))
    
    readerStream.Close
    
    Set writer = CreateObject("Scripting.FileSystemObject")
    Set writerStream = writer.CreateTextFile(WScript.Arguments(0), True, False) 'Write the file in ASCII
    writerStream.Write(sOutput)
    writerStream.Close
  3. Create a file called BizTalkSysPrepRestore.bat and place the following code in it.

    REM BizTalkSysPrepRestore.bat
    
    REM SET /P oldcomputername=<test.txt
    SET oldcomputername=JEHBTS5
    pushd %programfiles(x86)%\Microsoft BizTalk Server 2010\SDK\Samples\Admin\Sysprep\scripts
    
    REM First run UpdateSQLLogins.bat. Once. To provision the account as Admin to be allowed to do the below.
    
    net stop BTSSvc$BizTalkServerApplication
    
    net stop RuleEngineUpdateService
    
    net stop ENTSSO
    
    cscript.exe "Replace.vbs" "UpdateInfo.xml" $(NEWCOMPUTERNAME) %computername%
    cscript.exe "Replace.vbs" "UpdateInfo.xml" $(OLDCOMPUTERNAME) %oldcomputername%
    cscript.exe "Replace.vbs" "UpdateInfo.xml" $(OLDCOMPUTERENAME) %oldcomputername%
    
    cscript.exe "UpdateRegistry.vbs" "UpdateInfo.xml" > "UpdateRegistry.log"
    cscript.exe "UpdateDatabase.vbs" "UpdateInfo.xml" > "UpdateSqlServerDatabase.log"
    cscript.exe "UpdateBAMDb.vbs" "UpdateInfo.xml" > "UpdateBAMDb.log"
    
    "UpdateSSO.cmd" > "SSO.log"
    
    REM Update path to SSOXXXX.bak or place in local folder with this name
    "%CommonProgramFiles%\Enterprise Single Sign-On\ssoconfig.exe" -restoreSecret SSO.bak
    
    net stop SQLAgent$SQLEXPRESS
    net stop sqlserveragent
    
    net stop MSSQL$SQLEXPRESS
    net stop mssqlserver
    
    cscript.exe "Replace.vbs" "%programfiles(x86)%\Microsoft BizTalk Server 2010\Tracking\bm.exe.config" %oldcomputername% %computername%
    
    net start mssqlserver
    net start MSSQL$SQLEXPRESS
    
    net start sqlserveragent
    net start SQLAgent$SQLEXPRESS
    
    net start RuleEngineUpdateService
    
    net start BTSSvc$BizTalkServerApplication
    
    popd
    
    pause
  4. Run the BizTalkSysPrepRestore.bat as Administrator.
  5. Open SQL Server Management Studio and run the following SQL script to update the Agent jobs.

    USE [msdb]
    GO
    EXEC msdb.dbo.sp_update_job @job_name=N'Backup BizTalk Server (BizTalkMgmtDb)', @owner_login_name=N'sa'
    EXEC msdb.dbo.sp_update_job @job_name=N'CleanupBTFExpiredEntriesJob_BizTalkMgmtDb', @owner_login_name=N'sa'
    EXEC msdb.dbo.sp_update_job @job_name=N'DTA Purge and Archive (BizTalkDTADb)', @owner_login_name=N'sa'
    EXEC msdb.dbo.sp_update_job @job_name=N'MessageBox_DeadProcesses_Cleanup_BizTalkMsgBoxDb', @owner_login_name=N'sa'
    EXEC msdb.dbo.sp_update_job @job_name=N'MessageBox_Message_Cleanup_BizTalkMsgBoxDb', @owner_login_name=N'sa'
    EXEC msdb.dbo.sp_update_job @job_name=N'MessageBox_Message_ManageRefCountLog_BizTalkMsgBoxDb', @owner_login_name=N'sa'
    EXEC msdb.dbo.sp_update_job @job_name=N'MessageBox_Parts_Cleanup_BizTalkMsgBoxDb', @owner_login_name=N'sa'
    EXEC msdb.dbo.sp_update_job @job_name=N'MessageBox_UpdateStats_BizTalkMsgBoxDb', @owner_login_name=N'sa'
    EXEC msdb.dbo.sp_update_job @job_name=N'Monitor BizTalk Server (BizTalkMgmtDb)', @owner_login_name=N'sa'
    EXEC msdb.dbo.sp_update_job @job_name=N'Operations_OperateOnInstances_OnMaster_BizTalkMsgBoxDb', @owner_login_name=N'sa'
    EXEC msdb.dbo.sp_update_job @job_name=N'PurgeSubscriptionsJob_BizTalkMsgBoxDb', @owner_login_name=N'sa'
    EXEC msdb.dbo.sp_update_job @job_name=N'Rules_Database_Cleanup_BizTalkRuleEngineDb', @owner_login_name=N'sa'
    EXEC msdb.dbo.sp_update_job @job_name=N'TrackedMessages_Copy_BizTalkMsgBoxDb', @owner_login_name=N'sa'
    GO
  6. You should now be set to run BizTalk.

In Summary

I am sure there are better ways to do some of these things. This was a PoC of exactly this, and nothing else. I know there are ways to simplify and automate further. I am sure this is not the best possible solution, but it is one possible solution. It is also a work in progress. I do not have time to take it further right now, but I still wanted to release this post to perhaps help someone else along.

I have hardcoded the old computer name in these scripts, you need to replace that with whatever your original machines name was when you created the image.

There are a couple of things here where I have taken the easy road. All services run as Administrator, which of course is an administrator. The same account is a member of the groups SSO Administrators, SSO Affiliate Administrators, BizTalk Server Application Users and BizTalk Server Administrators, as well as being assigned sysadmin in the scripts above. That is not ideal, best-practice security, but it was outside my scope to make it so.

For the moment, you can also create an image from a virtual machine in Windows Azure without using sysprep. This will work as well, but it’s a quirk, not really what we want here for several reasons.

Since the sysprep support for BizTalk Server is a SAMPLE, I am not sure how “supported” using sysprep on BizTalk Server really is at the moment. The team will have to solve this on their way to offering BizTalk Server as stock images on Windows Azure Virtual Machines, but we are not there yet.

SQL Server does not officially support sysprep in this manner. Instead another procedure is detailed, which includes not fully installing SQL at all before you sysprep. This does not seem to have changed with SQL Server 2012. It will be interesting to see how the team works around this limitation for BizTalk Server 2010 R2. I am guessing that is what the “provisioning” stage that virtual machines go through is for – finalizing installations.

Perhaps not fully installing SQL Server, following that product's official way of doing it, together with installing but not configuring BizTalk Server, is the easiest way to do it at the moment. You be the judge.


Blog Post by: Johan Hedberg