o Windows Kinect – Used in combination with the Kinect Explorer to create a stream of depth images.
o Animation Creator – This application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages are added to the jobs queue.
o Process Monitor – This application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
o Image Downloader – This application polls the image queue and downloads the rendered animation files once they are complete.
· Windows Azure
o Azure Storage – Queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.
1. The worker process polls the job queue; if a job is available, the scene description file is downloaded from blob storage to local storage.
2. PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.
3. DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.
4. The JPG file is uploaded from local storage to the images blob container.
5. A message is placed on the images queue to indicate a new image is available for download.
6. The job message is deleted from the job queue.
7. The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.
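Steps 1–7 can be sketched as a single processing loop. The following is a minimal, self-contained simulation in Python (the production workers are .NET worker roles): the queue and blob containers are stood in for by an in-memory `queue.Queue` and dictionaries, the PolyRay and DTA invocations appear only as comments, and the function name `process_jobs` is illustrative.

```python
import queue

def process_jobs(job_queue, blob_store, image_queue, stats):
    """Drain the job queue, simulating steps 1-7 of the worker loop."""
    frames = 0
    while True:
        try:
            job = job_queue.get_nowait()              # 1. poll the job queue
        except queue.Empty:
            break
        scene = blob_store["jobs"][job]               # 1. download scene description
        tga = f"{job}.tga"
        # 2. render: subprocess.run(["PolyRay.exe", scene, "-o", tga])
        jpg = f"{job}.jpg"
        # 3. convert: subprocess.run(["DTA.exe", tga, jpg])
        blob_store["images"][jpg] = b"<jpeg bytes>"   # 4. upload JPG to images container
        image_queue.put(jpg)                          # 5. notify the Image Downloader
        job_queue.task_done()                         # 6. delete the job message
        frames += 1
    # 7. update the role instance lifecycle table (here, a plain dict)
    stats["frames_rendered"] = stats.get("frames_rendered", 0) + frames
```

In the real worker role the queue and blob calls go through the Azure Storage client, and the message is deleted only after the upload succeeds, so a crashed instance's job reappears on the queue for another worker.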
The animation can be viewed in 1280x720 resolution at the following link:
· Windows Azure can provide massive compute power, on demand, in a matter of minutes.
· The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
· Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
· The absence of charges for inbound data transfer makes uploading large data sets to the Windows Azure Storage services cost effective. (Transaction charges still apply.)
· The queue-based design I have used in this scenario could easily scale to many thousands of worker role instances.
· Windows Azure accounts are typically limited to 20 cores. If you need to use more than this, a call to support and a credit card check will be required.
· Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed, so schedule the workload to start just after the clock hour has started.
· Monitor the utilization of the resources you are provisioning, and ensure that you are not paying for worker roles that are idle.
· If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
· Third party software may also require installation onto the worker roles, which can be accomplished using startup tasks. Bear in mind that adding a startup task and a possible reboot will add to the time required for the worker role instance to start and activate. An alternative may be to use VM roles with a prepared VM image.
· Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, utilization must be carefully monitored; if the scaling algorithms are not optimal, it could get very expensive!
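The queue-based load balancing noted above is the classic competing-consumers pattern: every worker role instance simply takes the next job message off the shared queue, so faster or idler instances naturally pick up more work with no central scheduler. A minimal sketch, using Python threads to stand in for worker role instances:

```python
import queue
import threading

def worker(name, jobs, results):
    """Competing consumer: keep taking the next available job until the queue is empty."""
    while True:
        try:
            job = jobs.get_nowait()
        except queue.Empty:
            return
        results.append((name, job))   # record which "instance" rendered which frame
        jobs.task_done()

jobs = queue.Queue()
for frame in range(100):              # 100 frames to render
    jobs.put(frame)

results = []
threads = [threading.Thread(target=worker, args=(f"role-{i}", jobs, results))
           for i in range(4)]         # 4 competing worker instances
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Each frame is processed exactly once, and the split between instances falls out of the queue rather than any explicit assignment.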
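Because billing is per full clock hour, an instance deployed at five minutes to the hour pays for an almost-empty hour. A small helper (the name `seconds_until_next_hour` is illustrative) shows the delay calculation for scheduling the workload to start just after the hour ticks over:

```python
from datetime import datetime, timedelta

def seconds_until_next_hour(now):
    """Seconds to wait so a deployment starts just after the clock hour begins."""
    next_hour = (now.replace(minute=0, second=0, microsecond=0)
                 + timedelta(hours=1))
    return (next_hour - now).total_seconds()
```

For example, at 10:59:00 the helper returns 60 seconds, so a scheduler sleeping for that long would kick off the deployment at 11:00:00 and get the full billed hour of useful work.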
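As a rough illustration of the kind of rule an autoscaler applies (a hand-rolled sketch, not WASABi's actual API), the following picks an instance count sized to clear the current job-queue backlog within about one billing hour, capped at the 20-core default account limit mentioned above; the function name and parameters are assumptions for the example:

```python
import math

def target_instances(queue_length, frames_per_hour_per_instance,
                     min_instances=1, max_instances=20):
    """Instance count sized to clear the backlog in roughly one billing hour."""
    needed = math.ceil(queue_length / frames_per_hour_per_instance)
    return max(min_instances, min(max_instances, needed))
```

With 90 frames queued and each instance rendering 30 frames per hour, this asks for 3 instances; an empty queue falls back to the minimum, and a huge backlog is clamped at the cap rather than scaling without bound, which is exactly the failure mode that makes a bad scaling algorithm expensive.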