What Does $100K Buy in Terms of Compute Time? GCE and EC2 square off against big iron.


Let’s say you have $100,000 to spend this year to crunch numbers, a lot of numbers. How much compute time does that buy?

In this article, I will try to answer that question, comparing the outright purchase of big hardware with cloud-based alternatives, including Amazon’s EC2 and Google Compute Engine. Note that for my particular computational task, each core or compute node requires 3.5GB of RAM, which helps constrain the options.

Buying Hardware

Everyone knows that buying computational muscle much larger than your typical laptop or desktop gets expensive fast. Let’s ballpark that this $100,000 will buy you a 512-core machine with 4GB of RAM per core and some storage (say 10 TB). These cores come from AMD’s Opteron 6200 “Interlagos” series, whose processors claim up to 16 cores per chip (there is some dispute over this number, as each pair of cores shares a floating-point unit).

I am intentionally ignoring the siting, maintenance, setup, and operational costs, plus the staff time spent getting such a system ordered and installed. For the sake of argument, let’s say this $100K lets us run all 512 cores 24 hours a day, every day of the year. Put another way, this hardware offers 512 × 24 × 365 = 4,485,120 core-hours of compute time over the year.
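The back-of-the-envelope arithmetic is easy to check in a few lines of Python (a sketch; the 512-core machine is this article’s ballpark, not a quoted vendor configuration):

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours in a non-leap year

cores = 512
core_hours = cores * HOURS_PER_YEAR  # total compute available in one year

print(core_hours)  # 4485120
```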

Google Compute Engine (GCE)

GCE is the new cloud computing kid on the block, and Google has come out swinging. The company’s n1-standard-8 offers 8 virtualized cores and 30GB of RAM, without or with ephemeral storage ($0.96/hour or $1.104/hour, respectively).

Assuming the n1-standard-8, each instance costs $23.04 per day, or $8,409.60 per year. Bulk pricing may exist, but none is listed on the current website. Thus, $100,000 buys 11.89 n1-standard-8 instances running 24 hours a day, 365 days a year, or just over 95 cores running continuously. Put another way, $100K buys 833,333 core-hours of compute time.
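These GCE figures follow directly from the hourly rate (a sketch using the $0.96/hour price quoted above):

```python
rate = 0.96                    # n1-standard-8, $/hour
per_day = rate * 24            # $23.04
per_year = per_day * 365       # $8,409.60

budget = 100_000.00
instances = budget / per_year  # ~11.89 instances running all year
cores = instances * 8          # ~95 continuous cores
```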

Amazon Web Services

Amazon EC2 is the default cloud services provider. Because the service has been around longer, Amazon offers the most options when it comes to buying CPU time. Here we will look at two pricing options: On Demand and Reserved Instances. For an almost apples-to-apples comparison with GCE, we will use the second-generation m3.2xlarge instance, which is roughly comparable to the n1-standard-8 (although my personal benchmarks have shown the n1-standard-8 offering a 10-20% performance advantage).

On Demand

EC2 allows users to rent virtual instances by the hour, making this the most directly comparable pricing option to GCE. As the m3.2xlarge costs $1.00 per hour, or $24 per day, $100,000 buys 11.42 m3.2xlarge instances running all year, or just over 91 cores. Put another way, $100K buys 800,000 core-hours.
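Since both clouds sell 8-core instances by the hour, the comparison reduces to one small helper (a sketch; the default of 8 cores per instance matches the two instance types discussed here):

```python
def core_hours(budget, hourly_rate, cores_per_instance=8):
    """Total core-hours a budget buys at a given per-instance hourly rate."""
    return budget / hourly_rate * cores_per_instance

print(core_hours(100_000, 1.00))  # EC2 m3.2xlarge On Demand: 800000.0
print(core_hours(100_000, 0.96))  # GCE n1-standard-8: ~833333
```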

Reserved Instances

EC2 also allows consumers to buy down the hourly instance charge with a Reserved Instance or, in Amazon’s own words:

Reserved Instances give you the option to make a low, one-time payment for each instance you want to reserve and in turn receive a significant discount on the hourly charge for that instance. There are three Reserved Instance types (Light, Medium, and Heavy Utilization Reserved Instances) that enable you to balance the amount you pay upfront with your effective hourly price.

As we would be running instances night and day, we will look at “Heavy Utilization” Reserved Instance pricing. Each reserved m3.2xlarge requires an initial payment of $2,978 for a 1-year term (down from $3,432 before the recent price drop), but the instance is then available at a much lower rate: $0.246 per hour (down from $0.282).

Thus, running one such instance all year costs:

$2,978 + $0.246/hour × 24 hours/day × 365 days/year = $5,132.96

(Under the old pricing: $3,432 + $0.282 × 24 × 365 = $5,902.)

Our $100,000 thus buys 19.48 m3.2xlarge instances (up from 16.94 under the old pricing) running 24 × 7 for the year. In terms of cores, this option offers 155.9 cores running continuously for a year, a considerable jump from On Demand pricing. Put another way, $100K buys 1,365,294 core-hours (up from 1,186,980).

Please note that the older, higher prices shown alongside the current ones reflect the Amazon Reserved Instance price drop that occurred on 3/5/2013.
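The Reserved Instance math combines the upfront payment with the discounted hourly rate (a sketch using the post-price-drop numbers above):

```python
HOURS_PER_YEAR = 24 * 365
upfront = 2978.00  # 1-year Heavy Utilization term, m3.2xlarge
hourly = 0.246     # discounted rate after the upfront payment

annual_cost = upfront + hourly * HOURS_PER_YEAR  # $5,132.96 per instance-year
instances = 100_000 / annual_cost                # ~19.48 instances all year
core_hours = instances * HOURS_PER_YEAR * 8      # ~1,365,294 core-hours
```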


In short, for continuous, consistent use over a year, purchasing hardware offers more than 3x the raw processing power (512 cores vs. 155.9 cores) of the nearest cloud option. Even if we assume that our hardware pricing is wildly optimistic and cut the machine in half to 256 cores, the advantage is still better than 1.5x. Again, I realize that I am making some broad approximations here, such as the equivalence between a virtualized Intel Sandy Bridge core and an actual hardware core such as the Opteron 6200 series.


However, what the cloud offers that purchased hardware (and, to some extent, Amazon’s Heavy Utilization Reserved Instances) cannot is radical flexibility. For example, the cloud can accelerate tasks by briefly scaling up the number of cores. If we assume an embarrassingly parallel task that scales perfectly and takes our 512-core machine 4 weeks, there is no reason we couldn’t task 2048 cores to finish the work in a week. Of course, we would spend down our budget much faster, but having those results three weeks earlier could create massive value for the project in other ways. Put another way, for $100,000 GCE offers the best flexible deal: a bucket of 833,333 core-hours that can be poured out at whatever rate the user wants. If a steady flow of computation is desired, either the hardware purchase or the Amazon Reserved Instances offer the better deal.
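The tradeoff can be sketched as a toy model, assuming an embarrassingly parallel job with perfect scaling: the work is a fixed number of core-hours, and wall-clock time shrinks in proportion to the cores you rent.

```python
# 4 weeks on the 512-core machine, measured in core-hours
job_core_hours = 512 * 24 * 7 * 4  # 344,064 core-hours of work

def weeks_to_finish(cores):
    """Wall-clock weeks for the job at a given core count (perfect scaling)."""
    return job_core_hours / (cores * 24 * 7)

print(weeks_to_finish(512))   # 4.0
print(weeks_to_finish(2048))  # 1.0
```

The budget burns four times faster at 2048 cores, but the total core-hours consumed (and hence the dollars spent on the job) stay the same.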



Sean Murphy

Senior Scientist and Data Science Consultant at JHU
Sean Patrick Murphy, with degrees in math, electrical engineering, and biomedical engineering and an MBA from Oxford, has served as a senior scientist at Johns Hopkins University for over a decade, advises several startups, and provides learning analytics consulting for EverFi. Previously, he served as the Chief Data Scientist at a series A funded health care analytics firm, and the Director of Research at a boutique graduate educational company. He has also cofounded a big data startup and Data Community DC, a 2,000 member organization of data professionals. Find him on LinkedIn and Twitter.
  • Ross

    Love it! A short, reasoned article that clearly delineates where the cloud makes sense (or cents) — and where (or when) it does not. Thanks.