500 nodes with 20 TB of active data per node in HDFS: no brainer, no problem.
But remember that cross-DC traffic will get substantial.
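The sizing question below mostly comes down to arithmetic. A back-of-the-envelope sketch, assuming the numbers from the thread (14 nodes at ~2 TB each) and a target density of ~1 TB/node, which is a commonly cited rule of thumb for Cassandra 2.x rather than a hard limit:

```python
import math

def nodes_needed(total_tb, target_tb_per_node):
    """Node count required to hold total_tb at the target per-node density."""
    return math.ceil(total_tb / target_tb_per_node)

# Assumed figures from the thread: 14 nodes, ~2 TB consumed on each.
current_nodes = 14
tb_per_node = 2.0
total_on_disk = current_nodes * tb_per_node  # ~28 TB across the cluster

# Bringing density down to ~1 TB/node means roughly doubling the cluster.
print(nodes_needed(total_on_disk, 1.0))  # -> 28
```

The same sketch also shows why adding disk alone doesn't help much: total on-disk data stays the same, but per-node density (and thus compaction, repair, and bootstrap times) keeps growing.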

“All men dream, but not equally. Those who dream by night in the dusty
recesses of their minds wake up in the day to find it was vanity, but the
dreamers of the day are dangerous men, for they may act their dreams with
open eyes, to make it possible.” — T.E. Lawrence

sent from my mobile
Daemeon Reiydelle
skype daemeon.c.m.reiydelle
USA 415.501.0198

On May 19, 2017 9:05 AM, "ZAIDI, ASAD A" <az1...@att.com> wrote:

> Hello Folks -
>
> I'm using open-source Apache Cassandra 2.2. My cluster is spread over 14
> nodes in two data centers.
>
>
>
> My DC1 nodes are reaching 2 TB of consumed volume; we don't have much
> disk space left.
>
> I am wondering if there are guidelines or best practices that describe
> when we should add more nodes to the cluster. Should we add more storage
> or more nodes? I assume we should scale Cassandra horizontally, so adding
> nodes may be the better option. I am looking for criteria that describe
> node density thresholds, if there are any.
>
> Can you please share your thoughts and experience? I'd much appreciate
> your reply. Thanks/Asad
>
