Hi Michael, thanks for the reply,
I would RAID0 all those data drives, personally, and give up managing them
separately. They are on multiple PCIe controllers, one drive per channel,
right?
RAID 0 is a simple way to go, but a single disk failure takes the whole
volume down, so I am afraid of RAID 0.
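To put that concern in rough numbers: a RAID 0 stripe is lost if any member disk fails. A minimal sketch, assuming a hypothetical 3% annual per-disk failure rate and 12 data disks per node (both numbers are illustrative assumptions, not from this thread):

```python
# A RAID 0 stripe survives only if EVERY member disk survives,
# so the failure probability compounds with stripe width.
p_disk = 0.03          # assumed annual failure rate per disk (hypothetical)
n_disks = 12           # assumed data disks per node (hypothetical)

p_stripe = 1 - (1 - p_disk) ** n_disks
print(f"annual chance of losing the whole volume: {p_stripe:.1%}")
```

With these assumptions the whole-volume failure chance is around 30% per year, which is why some operators prefer JBOD-style per-disk data directories over one wide stripe.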
You can watch this: https://www.youtube.com/watch?v=uoggWahmWYI
Aaron discusses support for big nodes there.
On Wed, May 14, 2014 at 3:13 AM, Yatong Zhang bluefl...@gmail.com wrote:
Thank you Aaron, but we're planning about 20T per node, is that feasible?
On Mon, May 12, 2014 at 4:33 PM, Aaron Morton aa...@thelastpickle.com wrote:
Hi,
We're going to deploy a large, petabyte-scale Cassandra cluster. Our
scenario would be:
1. Lots of writes: about 150 writes/second on average, at about 300 KB
per write.
2. Relatively few reads.
3. Our data will never be updated.
4. But we will delete old data periodically to free space.
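For scale, the write numbers above work out to roughly the following ingest rate (a back-of-envelope sketch; 300 KB is treated as 300,000 bytes):

```python
# Back-of-envelope ingest estimate from the workload described above:
# 150 writes/second at ~300 KB each (figures taken from the thread).
WRITES_PER_SEC = 150
WRITE_SIZE_BYTES = 300 * 1000   # ~300 KB per write

bytes_per_sec = WRITES_PER_SEC * WRITE_SIZE_BYTES
tb_per_day = bytes_per_sec * 86_400 / 1e12

print(f"{bytes_per_sec / 1e6:.0f} MB/s, {tb_per_day:.1f} TB/day")
# -> 45 MB/s, 3.9 TB/day, i.e. over a petabyte of raw data per year
#    before replication, which matches the "PB level" planning.
```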
We've learned that the compaction strategy is an important consideration,
because we've run into 'no space' trouble with the size-tiered compaction
strategy.
If you want to get the most out of the raw disk space, LCS is the way to
go; just remember it uses approximately twice the disk IO.
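As a rough illustration of why size-tiered compaction runs out of space sooner, here is a back-of-envelope headroom comparison. The 50% and 10% headroom figures are common rules of thumb (worst-case STCS major compaction can need free space comparable to the data being compacted, while LCS works in small fixed-size SSTables), not exact guarantees, and the 20 TB per-node figure is from earlier in the thread:

```python
# Rough free-space headroom needed per node under each compaction
# strategy. Both percentages are rules of thumb, not guarantees.
DATA_TB = 20.0                       # planned data per node (from the thread)

stcs_headroom_tb = DATA_TB * 0.50    # worst-case STCS estimate (assumption)
lcs_headroom_tb = DATA_TB * 0.10     # rule-of-thumb LCS estimate (assumption)

print(f"STCS: keep ~{stcs_headroom_tb:.0f} TB free; "
      f"LCS: keep ~{lcs_headroom_tb:.0f} TB free")
```

On a 20 TB node that is the difference between reserving ~10 TB versus ~2 TB of disk, which is why LCS makes better use of raw capacity at the cost of extra compaction IO.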