This past Tuesday and Wednesday, Microsoft Argentina hosted the Going Deep with Windows Azure training event. We took care of the second-day sessions:
- Azure Service Bus
- Hadoop on Azure
- Node.js and Java on Windows Azure
- Azure Websites
- Azure Virtual Machines
I was in charge of the third session. I started by talking about the different technologies supported by Azure and then focused on just two of them: Node.js and Java. For each one I gave a brief explanation and then ran a demo.
For Node.js, I deployed the Pictionary sample developed by my colleague David Frassoni (alias Harry) to Azure. I did it from a machine running Ubuntu, using the new Git Publishing feature offered by Azure.
In the case of Java, I created a Java web application using Tomcat 7, Eclipse for JEE Developers, and the Eclipse Plug-in for Windows Azure. I ran the application in the local emulator and then showed how to deploy it to Azure.
Here is the slide deck I used in the session.
During this week I have been working on an application that relies on Azure Blob Storage. In a few words, the application runs some processes and uploads the resulting files to Azure. After each run, some files change while others remain the same. To optimize the application, JuanAr suggested adding logic to upload only the files that changed.
So I invested some time implementing what we called the SmartUploader. This class is very simple: it acts as a wrapper around the Azure Storage Client API. Each time it uploads a file, it calculates a checksum and stores it in the blob metadata. The next time, before uploading the file, it compares the current checksum of the file with the checksum stored on the previously uploaded blob; if they match, it skips the upload, since the file has not changed.
The code snippet below shows the interesting part of the SmartUploader.
And here is the method that calculates the checksum.
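The same idea can be sketched in Java as a minimal, self-contained version. Note the assumptions: the `BlobStore` interface and the in-memory `MemoryStore` fake stand in for the real Azure Storage Client API calls, and the metadata key name `"checksum"` is illustrative, not the actual key used in the original code.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the blob service; the real SmartUploader
// wraps the Azure Storage Client API instead of this interface.
interface BlobStore {
    Map<String, String> getMetadata(String blobName); // null if the blob does not exist
    void upload(String blobName, byte[] content, Map<String, String> metadata);
}

class SmartUploader {
    // Metadata key holding the checksum (the actual key name is an assumption).
    static final String CHECKSUM_KEY = "checksum";

    private final BlobStore store;

    SmartUploader(BlobStore store) {
        this.store = store;
    }

    // Uploads only when the content's checksum differs from the one stored
    // in the existing blob's metadata. Returns true if an upload happened.
    boolean uploadIfChanged(String blobName, byte[] content) {
        String checksum = checksum(content);
        Map<String, String> existing = store.getMetadata(blobName);
        if (existing != null && checksum.equals(existing.get(CHECKSUM_KEY))) {
            return false; // unchanged since the last run: skip the upload
        }
        Map<String, String> metadata = new HashMap<>();
        metadata.put(CHECKSUM_KEY, checksum);
        store.upload(blobName, content, metadata);
        return true;
    }

    // MD5 checksum of the content, rendered as a lowercase hex string.
    static String checksum(byte[] content) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            StringBuilder hex = new StringBuilder();
            for (byte b : md5.digest(content)) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}

// In-memory fake, useful for exercising the logic without hitting Azure.
class MemoryStore implements BlobStore {
    final Map<String, byte[]> blobs = new HashMap<>();
    final Map<String, Map<String, String>> meta = new HashMap<>();
    int uploads = 0;

    public Map<String, String> getMetadata(String blobName) {
        return meta.get(blobName);
    }

    public void upload(String blobName, byte[] content, Map<String, String> metadata) {
        blobs.put(blobName, content);
        meta.put(blobName, metadata);
        uploads++;
    }
}
```

Keeping the checksum in the blob metadata means the comparison only needs a metadata fetch, not a download of the blob itself, so the skipped uploads cost almost nothing.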
Hope this helps you.