The Cloud Series – Part 3

This is the third post in this series, and this time let’s talk about development and cloud-native applications, or to be more precise, web applications.

I had a very interesting chat with a friend and colleague of mine the other day about an application they had built. There were some problems with their current setup, one big problem being scaling, so they decided to rebuild the solution and host it in the cloud. The interesting part is that the solution in my head differed a lot from what they created, so I would like to take the opportunity to compare the two here.

There are two ways of running web applications in Azure. First of all, we have the traditional way, where we let Azure host our own Virtual Machine (from now on referred to as a VM). This means we can create a VM that runs Windows or Linux, configure it as we want, install whatever software we want, tweak the IIS/Apache servers and so on. Then we have the “cloud native” way, where we use Web Roles in Azure and just upload the code, and Azure fires it up. Did you know that you can deploy source code directly from Git? With a simple remote push command, Azure will compile the code for us. How cool is that! When we use Web Roles we don’t need to install or configure anything in the OS or IIS environment; it just works!
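In practice that push looks something like this (the remote URL below is just a placeholder; Azure shows you the real one when you enable Git publishing for your site):

```shell
# Add Azure as a Git remote and push to deploy.
# The URL is made up for illustration - copy the real one
# from the Azure portal after enabling Git publishing.
git remote add azure https://myapp.scm.azurewebsites.net/myapp.git
git push azure master
```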

So what does this mean?

- A VM gives us the flexibility to tweak and configure it to match our needs 100%. This is the IaaS (Infrastructure as a Service) part of Azure, and it frees us from all hardware, network and similar concerns, since Azure and Microsoft handle those for us.

- Web Roles, the “cloud native” option, use PaaS (Platform as a Service), which is the next level of service. It frees us from all of the above plus all OS handling: no installation, no configuration, no updates - nothing! The promise is “take care of your application, we do the rest”, yet we are still able to configure IIS or Windows, install services and so on through Startup Tasks. It lets us focus on what we developers do best: developing applications. Hopefully it gives us more time to improve and maintain the applications instead of handling boring patching, OS updates and the like!
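As a small sketch of what such a Startup Task looks like: in a Cloud Service’s ServiceDefinition.csdef we can declare a script that runs before the role starts (the role and script names here are made up for illustration):

```xml
<!-- Sketch: a Startup Task in ServiceDefinition.csdef.
     "MyWebRole" and "install.cmd" are placeholder names. -->
<ServiceDefinition name="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="MyWebRole">
    <Startup>
      <!-- Runs install.cmd with admin rights before the role starts -->
      <Task commandLine="install.cmd" executionContext="elevated" taskType="simple" />
    </Startup>
  </WebRole>
</ServiceDefinition>
```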

When choosing what to run and how, there is a lot more to think about than the areas mentioned above, since we sometimes need to install third-party components in the OS, or we need special configurations and setups in our IIS/Apache installation and so on. Every time we need to use or configure things outside of the application itself, a Virtual Machine might be the better choice.

So now that we have a little background, let’s set up the context. The application is a plugin that can be used anywhere on a web page. The idea is to create dynamic video content that lets the user start streaming a short video by clicking a “start” button. For fun, let’s say the application is used on a big Swedish newspaper site and is plugged in on one of the top news stories. When this has been done before, the load has looked something like the chart below. (The chart shows average values per hour over periods of the day. The values are completely made up, but I want to show you how big the difference is between the biggest peak and the smallest, and give us something to plan around.)

Now that we have a scenario, let’s solve it in two different ways: one using Virtual Machines and one using only Azure’s cloud-native parts. Basically, the solution would look like this.

To start off we need a web application that handles this:

  • provides the site with the content
  • gives access to the movie
  • presents the movie

In the background we would need something like this:

  • Place to store movies
  • Database to store information/logs/configurations etc.

Let’s set this up with Virtual Machines: we run Windows Server with IIS on our web application machines. As a backend we set up VMs with clustered NoSQL databases. We use the load balancer in Azure to distribute the traffic between our web servers, and Azure Media Services to provide the movies. It could look something like this (the orange boxes are the parts we are responsible for):

With this setup we could easily add new VMs wherever we hit a threshold first, whether at the database or the web application. This gives us a good setup and a lot of scaling potential, flexibility and possibilities. But we need to keep the OS, IIS, NoSQL server and so on updated, and upgrade to new versions with all the problems that come along. Sure, some of these things happen automatically, but the responsibility is ours. We are also responsible for making sure that the machines are healthy and run without errors; Azure only assures us that the VM runs, not that the image and the software on it work correctly.

In this setup we would have our movies stored in Azure Media Services, and that way we can provide live streaming with very little effort, since Azure does all the work! Logging each view and request could be heavy, so it should be done asynchronously; by using, for example, Redis we can even out the peaks and protect our databases from being overloaded. With Redis and a Windows service that frequently reads from the created list/queue, we write our logs at our own pace, and since it’s not critical data, that’s not a problem. But we have another problem: what if the host that runs this VM crashes because of a corrupt hard drive? Then the local data is gone, and with it all the logs stored on that node. This is not ideal.
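The buffering idea can be sketched in a few lines of JavaScript, with a plain array standing in for the Redis list (this illustrates the pattern, not the Redis API):

```javascript
// Sketch of buffered logging: requests push log entries onto a queue
// immediately, and a separate background consumer drains them at its
// own pace. A plain array stands in for the Redis list here.
function createLogBuffer() {
  const queue = [];
  return {
    // Called in the request path: cheap, never touches the database.
    push(entry) { queue.push(entry); },
    // Called by the background service: drain up to `batchSize` entries
    // and hand them to the (slow) database writer. Returns how many
    // entries were drained.
    drain(batchSize, writeBatch) {
      const batch = queue.splice(0, batchSize);
      if (batch.length > 0) writeBatch(batch);
      return batch.length;
    }
  };
}
```

The web tier only ever calls `push`, so request latency stays flat even when the database writer falls behind during a traffic peak.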

On the frontend we install and set up the OS environment, install and configure IIS, and install our application. On the backend we install the OS, install Redis and the NoSQL server, create the cluster and configure Azure Media Services. After this we can just publish new machines that start working right away.

So let’s do this the “cloud native” way! Here we use Web Roles for the web application, Azure Table Storage as our NoSQL database, and a Worker Role for logging, with a Service Bus queue to even out the peaks. In this setup, too, our movies are stored in Azure Media Services, and in the same way we can provide live streaming to clients. The solution could look something like this (the orange boxes are the parts we are responsible for):

So what’s going on here? Well, we start off the same way as before with the Azure load balancer, but then we skip everything that comes with hardware, OS, IIS and so on, and use Web Roles for the web applications. We use Azure Table Storage, and for logging we push entries onto a Service Bus queue to the Worker Role, which logs at its own pace, just as with the Redis setup above, but this time with a durable queue that guarantees delivery. So we develop the software; configure the Service Bus queue, Table Storage and Media Services; and then we are up and running!
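The guaranteed-delivery part works roughly like this peek-lock sketch (an in-memory stand-in for the pattern, not the real Service Bus API): a received message stays on the queue, hidden, until the worker completes it; if processing fails, it becomes visible again and is redelivered.

```javascript
// Sketch of at-least-once delivery, peek-lock style.
function createReliableQueue() {
  const messages = [];
  let nextId = 0;
  return {
    send(body) { messages.push({ id: nextId++, body, locked: false }); },
    // Receive hides the message instead of deleting it.
    receive() {
      const msg = messages.find(m => !m.locked);
      if (msg) msg.locked = true;
      return msg || null;
    },
    // Complete after successful processing: now it is really gone.
    complete(msg) {
      const i = messages.findIndex(m => m.id === msg.id);
      if (i >= 0) messages.splice(i, 1);
    },
    // Abandon on failure: the message is unlocked and redelivered.
    abandon(msg) {
      const found = messages.find(m => m.id === msg.id);
      if (found) found.locked = false;
    },
    get length() { return messages.length; }
  };
}
```

If the worker crashes mid-processing, the entry is never completed, so nothing is lost - unlike the local Redis node in the VM setup above.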

I haven’t covered the scaling part, since it isn’t the point of this entry, but like everything else it’s easy, and there are complete solutions that are ready to use and only need configuration. Take a look at WASABi for more information.

To wrap this up: take a good look at the features that exist in these cloud platforms, because they are really useful and will save you a lot of time. Then make sure that you develop for the cloud. A lot of the applications we have built in the past ran on highly reliable hardware, so it can be a challenge to start thinking “it will crash at some point” and to build applications that are crash friendly. Sometimes we know it’s going down (updates, machine restarts and so on), but other times a hardware failure causes the crash, and then it’s instant. Take smaller steps, use queues to shorten request calls, and make sure that data doesn’t vanish.

With HTML5 and Azure Media Services we managed to create video distribution in just a few minutes. This is all the HTML that’s needed:
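A minimal sketch of such a snippet could look like this (the source URL is a made-up placeholder for the path Azure Media Services gives you for a published asset):

```html
<!-- Sketch of an HTML5 video element; the src is a placeholder URL. -->
<video width="640" height="360" controls>
  <source src="http://mymedia.origin.mediaservices.windows.net/assets/clip.mp4"
          type="video/mp4" />
  Your browser does not support the video tag.
</video>
```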

The source path is the actual path to our file as distributed by Azure Media Services, which means we don’t need to handle anything except providing the HTML and a link. Split this up and add some JavaScript that calls our WCF service with logging info, and we have a really light and fast video service.
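That logging call could look something like this (the `/LogService.svc/LogView` endpoint and the payload shape are made up for illustration; any WCF service exposing a JSON endpoint works the same way):

```javascript
// Build the JSON payload for one view event. The field names are
// illustrative, not a real service contract.
function buildLogPayload(videoId) {
  return JSON.stringify({
    videoId: videoId,
    viewedAt: new Date().toISOString(),
    userAgent: typeof navigator !== 'undefined' ? navigator.userAgent : 'unknown'
  });
}

// Fire-and-forget POST to the (hypothetical) WCF logging endpoint.
function logView(videoId) {
  const xhr = new XMLHttpRequest();
  xhr.open('POST', '/LogService.svc/LogView', true); // async
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.send(buildLogPayload(videoId));
}
```

Because the call is asynchronous and carries non-critical data, a lost request now and then costs us nothing in the user experience.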

I think this is awesome, and it really speeds up the entire process of creating robust, dynamically scaling applications. Even though it looks like we are bound to communicate over HTTP with our cloud-native application, that’s not entirely true, because with Worker Roles we can communicate over any protocol we want. We just need to declare the endpoints we want to expose.
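As a sketch, exposing a raw TCP endpoint on a Worker Role is a one-liner in ServiceDefinition.csdef (the role name and port here are made up):

```xml
<!-- Sketch: a non-HTTP endpoint on a Worker Role. -->
<WorkerRole name="MyWorkerRole">
  <Endpoints>
    <InputEndpoint name="TcpIn" protocol="tcp" port="10100" />
  </Endpoints>
</WorkerRole>
```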

I hope this fires up some ideas, because we can achieve a lot more, a lot faster, with Azure.

Posted in: •Integration  | Tagged: •Architecture  •Azure  •Cloud  •Development  •HTML5