RE: Current data density limits with Open Source Cassandra

2017-02-15 Thread SEAN_R_DURITY
For the actual application, I have not seen a great impact based on the size of disk available.

Sean Durity

Re: Current data density limits with Open Source Cassandra

2017-02-08 Thread daemeon reiydelle
Your mileage may vary. Think of that storage limit as fairly reasonable for active data that is likely to tombstone. Add more for older/historic data. Then think about the time to recover a node.

Daemeon C.M. Reiydelle
USA (+1) 415.501.0198 / London (+44) (0) 20 8144 9872

Re: Current data density limits with Open Source Cassandra

2017-02-08 Thread Ben Slater
The major issue we’ve seen with very high density (we generally say under 2 TB per node is best) is manageability: if you need to replace a node or add a node, restreaming the data takes a *long* time, and there is a fairly high chance of a glitch in the universe meaning you have to start again before it’s finished.
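To put rough numbers on that, here is a minimal back-of-envelope sketch in Python. It assumes streaming is the bottleneck and runs at the stock cassandra.yaml default of 200 Mbit/s (stream_throughput_outbound_megabits_per_sec); real rates depend on hardware, load, and vnode count, so treat the output as an order-of-magnitude guide only.

def restream_hours(density_tb, throughput_mbps=200.0):
    """Hours to stream density_tb terabytes at throughput_mbps Mbit/s."""
    bits = density_tb * 1e12 * 8            # decimal TB -> bits
    return bits / (throughput_mbps * 1e6) / 3600.0

for tb in (1, 2, 5):
    print("%d TB at 200 Mbit/s: ~%.1f hours" % (tb, restream_hours(tb)))

# ~11.1, ~22.2, and ~55.6 hours respectively, before any failed
# stream forces a restart.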

Current data density limits with Open Source Cassandra

2017-02-08 Thread Hannu Kröger
Hello,

Back in the day, the recommended maximum disk density per node for Cassandra 1.2 was around 3-5 TB of uncompressed data. IIRC this was mostly because of heap memory limitations? Now that off-heap support exists for certain data and 3.x has a different data storage format, is that still the case?
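For context on the off-heap point: in the 3.x cassandra.yaml these are the main knobs that move memtable and cache memory off the JVM heap. A minimal illustrative excerpt follows; option names are from 3.x, the values are examples rather than recommendations, and note that offheap_objects was unavailable in some early 3.0.x releases before being reintroduced.

memtable_allocation_type: offheap_objects   # keep memtable cell data off-heap
memtable_heap_space_in_mb: 2048             # on-heap memtable budget (example)
memtable_offheap_space_in_mb: 4096          # off-heap memtable budget (example)
file_cache_size_in_mb: 512                  # off-heap buffer/chunk cache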