Short answer: it depends on your use case.

We migrated to i2.xlarge nodes and saw an immediate increase in performance.
If you just need plain old raw disk space and don't have a performance
requirement to meet, then the m1 machines would work, or hell, even SSD EBS
volumes may work for you. The problem we were having was that we couldn't fill
the m1 machines, because we had to keep adding nodes for throughput rather than
capacity. Now we have much more power and just the right amount of disk space.

Basically, these are not apples-to-apples comparisons.



On August 19, 2014 at 11:57:10 AM, Jeremy Jongsma (jer...@barchart.com) wrote:

The latest consensus around the web for running Cassandra on EC2 seems to be
"use new SSD instances." I've not seen any mention of the elephant in the room:
using the new SSD instances significantly raises the cluster cost per TB.
With Cassandra's strength being linear scalability to many terabytes of data, 
it strikes me as odd that everyone is recommending such a large storage cost 
hike almost without reservation.

Monthly cost comparison for a 100TB cluster (non-reserved instances):

m1.xlarge (2x420 GB non-SSD): $30,000 (120 nodes)
m3.xlarge (2x40 GB SSD): $250,000 (1,250 nodes! Clearly not an option)
i2.xlarge (1x800 GB SSD): $76,000 (125 nodes)
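
For the curious, here's a rough sketch of the arithmetic behind those figures.
The hourly rates are approximate 2014 on-demand prices (about $0.35/hr for
m1.xlarge, $0.28/hr for m3.xlarge, $0.85/hr for i2.xlarge), and the node counts
assume raw disk only, with no replication factor or headroom:

import math

# Back-of-the-envelope sizing for a 100 TB (raw) Cassandra cluster on EC2.
# Hourly prices are assumed, approximate 2014 on-demand rates; they vary by region.
TARGET_GB = 100 * 1000
HOURS_PER_MONTH = 730

instance_types = {
    # name: (raw disk GB per node, assumed $/hour)
    "m1.xlarge (2x420 GB non-SSD)": (2 * 420, 0.35),
    "m3.xlarge (2x40 GB SSD)":      (2 * 40,  0.28),
    "i2.xlarge (1x800 GB SSD)":     (800,     0.853),
}

for name, (disk_gb, hourly) in instance_types.items():
    nodes = math.ceil(TARGET_GB / disk_gb)        # nodes needed for raw capacity
    monthly = nodes * hourly * HOURS_PER_MONTH    # on-demand monthly cost
    print(f"{name}: {nodes} nodes, ~${monthly:,.0f}/month")

A replication factor of 3 would triple the node counts across the board, but the
relative cost gap between instance types stays the same.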

Best case, the cost goes up 150%. How are others approaching these new
instances? Have you migrated and eaten the costs, or are you staying on the
previous generation until prices come down?
