Hi there,
Please don't worry about underutilization of CPU/RAM, as it will
increase with active usage of the data in due course.
However, as other members have already pointed out, you may want to relook
at the hardware itself: fewer nodes with 5TB-per-node storage and higher
CPU+RAM may be the better fit.
Another option to consider is changing your SSTable compression. The default is
LZ4, which is fast for reads and writes, but the compressed files have a larger
on-disk footprint. A better alternative might be Zstd, which optimizes for disk
footprint. Here’s the full documentation:
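As a sketch of what the switch looks like (assuming Cassandra 4.0+, where ZstdCompressor is available, and a hypothetical keyspace/table name):

```sql
-- Switch a table's SSTable compression from the default LZ4 to Zstd.
-- 'compression_level' defaults to 3; higher levels trade CPU for smaller files.
ALTER TABLE my_ks.my_table
  WITH compression = {'class': 'ZstdCompressor', 'compression_level': 3};
```

Note that this only affects newly written SSTables; existing ones keep their old compression until they are rewritten (e.g. via `nodetool upgradesstables -a`).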
Thank You
Sent using https://www.zoho.com/mail/
On Tue, 16 Nov 2021 10:00:19 +0330 wrote
> I can, but i thought with 5TB per node already violated best practices (1-2
>TB per node) and won't be a good idea to 2X or 3X that?
The main downside of larger disks is that it takes longer to replace a host
that goes down, since there’s less network capacity to move data from the
surviving nodes.
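To make that tradeoff concrete, here is a rough back-of-the-envelope estimate; the node sizes and the streaming throughput are illustrative assumptions, not measurements from this cluster:

```python
# Rough estimate of how long re-streaming a replacement node's data takes.
# The 2 Gbit/s effective streaming throughput is an assumed figure.

def rebuild_hours(node_tb: float, stream_gbps: float) -> float:
    """Hours to stream `node_tb` terabytes at `stream_gbps` gigabits/second."""
    bits = node_tb * 1e12 * 8              # terabytes -> bits
    seconds = bits / (stream_gbps * 1e9)   # gigabits/s -> bits/s
    return seconds / 3600

# A 5 TB node vs a 15 TB node at 2 Gbit/s effective throughput:
print(f"5 TB:  {rebuild_hours(5, 2):.1f} h")   # -> 5.6 h
print(f"15 TB: {rebuild_hours(15, 2):.1f} h")  # -> 16.7 h
```

Tripling the per-node density triples the replacement window, which is the practical cost of consolidating onto fewer, larger nodes.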
I can, but i thought with 5TB per node already violated best practices (1-2 TB
per node) and won't be a good idea to 2X or 3X that?
Sent using https://www.zoho.com/mail/
On Mon, 15 Nov 2021 20:55:53 +0330 wrote
It sounds like you can downsize your cluster but increase your drive capacity.
Depending on how your cluster is deployed, it’s very possible that disks larger
than 5TB per node are available. Could you reduce the number of nodes and
increase your disk sizes?
—
Abe
We have apps like this, also. For straight Cassandra, I think it is just the
nature of how it works. DataStax provides some interesting solutions in
different directions: BigNode (for handling 10-20 TB nodes) or Astra (a
cloud-based/container-driven solution that DOES separate read, write, and
storage).