, but I resist.
Sean Durity
From: Amit Agrawal [mailto:amit.ku.agra...@gmail.com]
Sent: Friday, December 15, 2017 9:38 AM
To: user@cassandra.apache.org
Subject: [EXTERNAL] Re: Data Node Density
I'm typing this on a phone during my commute, so please excuse the inevitable typos in what I expect will be a long email, since there's nothing else for me to do right now.
There are a few reasons people don't typically recommend huge nodes, the biggest being expansion and replacement.
Thanks Nicolas. I am aware of the official recommendations. However, in the last project we tried 5 TB per node and it worked fine.
So I am asking around for real-world experiences.
Does anybody know someone who provides consultancy on open source Cassandra? DataStax only does it for the enterprise version!
On Fri, Dec 15, 2017, Nicolas wrote:
Hi Amit,
This is way too much data per node. The official recommendation is to try to stay below 2 TB per node. I have seen nodes up to 4 TB, but then maintenance gets really complicated (backup, bootstrap, streaming for repair, etc.).
Nicolas
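To put the maintenance concern above into numbers, here is a back-of-the-envelope sketch of how long it takes to stream a full node's worth of data (during bootstrap or node replacement). The 100 MB/s effective streaming rate is an assumption, not a measured figure; single-node streaming throughput is often lower in practice.

```python
# Rough estimate: hours needed to restream an entire node's data,
# e.g. when bootstrapping a replacement node.
# Assumption: ~100 MB/s effective streaming throughput (adjust for
# your network and disks).

def streaming_hours(data_tb: float, throughput_mb_s: float = 100.0) -> float:
    """Hours to stream `data_tb` terabytes at `throughput_mb_s` MB/s."""
    megabytes = data_tb * 1_000_000  # 1 TB = 1,000,000 MB (decimal units)
    return megabytes / throughput_mb_s / 3600

for tb in (2, 4, 20):
    print(f"{tb:>2} TB node: ~{streaming_hours(tb):.1f} hours to restream")
```

At these assumptions, a 2 TB node restreams in under 6 hours, while a 20 TB node takes more than two days, during which the cluster is running degraded.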
On 15 December 2017 at 15:01, Amit Agrawal wrote:
Hi,
We are trying to set up a 3-node cluster with 20 TB of disk on each node.
It's a bare-metal setup with 44 cores per node.
So in total: 60 TB and 132 cores across the 3-node cluster.
The data velocity is very low, with low access rates.
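For context on what those raw numbers translate to, a minimal capacity sketch. The replication factor of 3 is an assumption (the usual choice for a 3-node cluster; it is not stated in this thread), and the 50% free-disk headroom for compaction is a common rule of thumb rather than a hard requirement:

```python
# Back-of-the-envelope capacity for the cluster described above.
# Assumptions (not from the thread): RF=3, and keep ~50% of disk free
# as headroom for compaction and repair overhead.

nodes = 3
disk_per_node_tb = 20
replication_factor = 3     # assumed
compaction_headroom = 0.5  # rule of thumb: ~50% disk kept free

raw_tb = nodes * disk_per_node_tb
usable_tb = raw_tb * compaction_headroom / replication_factor
print(f"raw capacity: {raw_tb} TB")
print(f"unique data that fits comfortably: ~{usable_tb:.0f} TB")
```

Under these assumptions, 60 TB of raw disk holds only about 10 TB of unique data comfortably, since every row is stored three times and compaction needs scratch space.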
Has anyone tried this configuration? It's a bit urgent.
Regards,
-A