

This post is not about food… but Cloud Computing, though I would say that since Cloud Computing is the next best thing after sliced bread, there is definitely some food-related stuff/angle in here :)

Ok, my apologies coz THAT was one poor attempt at an already bad joke, so let’s move on…

… to Cloud: There are more reasons cited by experts (from the usual suspects like Gartner and Forrester, to so-called evangelists and their cousins, and then folks like THE one in this video; it's a must-watch, worth every minute of your time if you haven't seen it already) than I can keep track of for adopting a Public Cloud platform. Some of these are:

  • Infinite capacity

  • On-demand scale

  • Availability

  • Reliability

  • Economics (Capex vs. Opex and Pay for only what you use)

  • Agility/speed

  • Flexibility

  • Global reach

  • … and so on

However, amidst all of these, the most fundamental and important reason is either not cited at all, or cited only in passing without deeper discussion and analysis. And that's the one I'll spend time on today, sharing my thoughts.

The most important reason that will drive adoption of the Public Cloud is that the "Free Lunch Is Over"… (now, there weren't many restaurants serving any to start with, so this is just a rhetorical flourish I wanted to display as I try to establish my point :))

What I mean by "Free Lunch Is Over" is that Moore's law is nowadays not what it used to be: the yearly performance gains in software applications driven by hardware improvements, specifically CPU clock speed (historically the biggest driver of software performance improvement), are now long gone… it's over, folks!!

Single-threaded software application performance is increasing only modestly and, in some cases, actually decreasing. Virtualization improves hardware utilization and datacenter management costs, but application performance gains will come only if applications are written to scale across multiple, heterogeneous cores and across different machines (in the case of on-premises deployment) or multiple nodes (in the case of elastic Public Cloud platforms).
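
To make the scale-out idea concrete, here is a minimal sketch (my own illustration, not from any particular platform or vendor API) of a CPU-bound task decomposed so that the same work function can be fanned out across multiple cores; on an elastic cloud platform the same decomposition would be fanned out across nodes instead:

```python
# Hypothetical sketch: split a CPU-bound job into chunks and fan the
# chunks out to worker processes. Function names are my own.
from concurrent.futures import ProcessPoolExecutor


def count_primes(bounds):
    """Count primes in [lo, hi) -- a stand-in for any CPU-bound task."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count


def count_primes_parallel(limit, workers=4):
    """Split the range into chunks and distribute them across processes."""
    step = limit // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], limit)  # cover any remainder
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_primes, chunks))
```

The key property is that the chunks are independent, so adding workers (cores locally, nodes in the cloud) directly adds throughput; a single-threaded version of the same code gets no benefit from extra hardware.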

The computing hardware world is transitioning fast from Moore's law to Amdahl's, which necessitates innovation and adjustments throughout the software development stack.

So welcome to Amdahl's world!! We still love Moore's world though… he was a great guy to have a beer with (Kingfisher was the favored brand).

In this new world the scale-out paradigm rules the roost over the traditional scale-up model that developers are used to as the programming paradigm for their applications. And this new world is creating a seismic shift in software development as computing moves from predominantly single-threaded applications to multi-threaded, massively parallel and scalable ones.

The most remarkable changes are happening in the evolution of silicon technologies and, consequently, in CPUs, where multi/many core designs are gaining unstoppable momentum. This change is so fundamental and groundbreaking that the resulting performance increase in software applications that properly utilize the power of these heterogeneous multiple cores is breathtaking. Of course, the performance increase depends on the application type (how data- and compute-intensive it is), the programming model used, and the form factor of the device where the application will be deployed. Immense performance increases, ranging anywhere from 8X to 80X over a single-threaded application of similar functionality, are achievable in many cases.

And then there is the already apparent data deluge, which is going to get exponentially worse as storage and bandwidth prices continue to plunge. Coupled with the need to process and mine this humongous amount of data for scarce intelligence, it is going to place stringent performance demands on software applications. Parallel processing and scale-out capabilities that harness commodity hardware to the maximum, be it in private or public datacenters, will be the competitive advantage that differentiates the winners from the losers in every industry, every vertical and every segment (small, medium or enterprise) in every geography.
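
As a toy illustration of that kind of data mining at scale, here is a hedged sketch of the classic map-reduce pattern: shards of a corpus are processed in parallel (the map step), then the partial results are merged (the reduce step). The function names and the choice of ProcessPoolExecutor are my own, not something the post prescribes:

```python
# Hypothetical map-reduce sketch: count word frequencies over shards of
# a corpus in parallel, then merge the partial counts.
from collections import Counter
from concurrent.futures import ProcessPoolExecutor
from functools import reduce


def map_count(shard):
    """Map step: count words within one shard of the corpus."""
    return Counter(shard.lower().split())


def parallel_word_count(shards, workers=4):
    """Fan the map step out to worker processes, then reduce by merging."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(map_count, shards)
    return reduce(lambda a, b: a + b, partials, Counter())
```

At real scale the "shards" would live on many machines and the merge would itself be distributed, but the shape of the computation, independent map work followed by an associative reduce, is exactly what makes it a natural fit for elastic cloud platforms.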

Ok, all good so far… you buy (or maybe not) all that is written above, but you are scratching your head wondering how all this gobbledygook connects to the Cloud Platform and its adoption by customers across the spectrum, from SMBs to large enterprises.

Here is why: by its very design, a Public Cloud platform is inherently best suited for applications that scale out/in based on the demand/load they need to cater to. Achieving this with on-premises infrastructure, to harness the full performance potential of applications, is extremely difficult (think of the clusters upon clusters one would need to keep adding… this is undoable at multiple levels, from technical to economic). So the most natural place for customers to go and achieve these phenomenal performance gains is a Public Cloud platform… there are many out there, but I think this market will eventually be dominated by three players: Microsoft (with its Azure platform), Amazon (with its AWS platform) and Google (with its GCE/GAE platform). We'll talk about these 3 players in some detail in one of the next posts.

Ok, I am done with my stuff…

Let me know what you think… dialogue is always better than monologue, so please do leave your thoughts by adding comments below. And please also suggest topics that you find interesting and think I could take a stab at writing about.

Here is some info on key terms used above, included here for easy reference:

Moore’s Law (source here): Moore’s law describes a long-term trend in computing hardware, in which the number of transistors that can be placed inexpensively on an integrated circuit has doubled approximately every two years. The capabilities of many digital electronic devices are strongly linked to Moore’s law: processing speed, memory capacity, sensors, and even the number and size of pixels in digital cameras. The trend has continued for more than half a century and is not expected to stop until 2015 or later.

Amdahl’s Law (source and more info here): Used to find the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup from using multiple processors or cores. The speedup of a program using multiple processors in parallel computing is limited by the time needed for the sequential fraction of the program. For example, if a program needs 20 hours on a single processor core, and a particular 1-hour portion cannot be parallelized while the remaining 19 hours (95%) can be, then regardless of how many processors we devote to a parallelized execution of this program, the minimum execution time cannot be less than that critical 1 hour. Hence the speedup is limited to 20X.
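
The worked example above is easy to check with a small calculator for Amdahl’s law; the function name is mine, but the formula (speedup = 1 / (serial fraction + parallel fraction / N)) is the standard one:

```python
# Amdahl's law: maximum speedup when a fraction of the work is parallel.
def amdahl_speedup(parallel_fraction, processors):
    """Speedup with `processors` cores when `parallel_fraction` of the
    work can be parallelized and the rest stays sequential."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / processors)


# With 95% parallelizable work (the 20-hour example above), the speedup
# approaches but never exceeds 20X no matter how many cores are used:
# amdahl_speedup(0.95, 4096) is roughly 19.9.
```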

Multi/Many Core:

  • Multi core: >1 General Purpose CPU (Large x86 Core)

  • Multi/many core: >=1 General Purpose CPU (Large x86 Core) + Many Small Cores (GPU, GPGPU, LPIA, Specialized Cores)


