Those are rough guidelines; the actual effective node size will depend on
your read/write workload and the compaction strategy you choose.  The
biggest reason data density per node usually needs to be limited is the
data grooming overhead introduced by compaction.  Data at rest essentially
becomes I/O debt, and if you're using Leveled compaction, the interest
rate on that debt is higher.
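
For context, the strategy (and thus the shape of that debt) is set per
table in CQL.  A minimal sketch, where the keyspace/table name and the
sstable size are placeholders, not recommendations:

    -- Leveled compaction: lower read amplification, but data gets
    -- rewritten across levels, so the compaction I/O cost per byte
    -- at rest is higher.
    ALTER TABLE my_ks.my_table
    WITH compaction = {
        'class': 'LeveledCompactionStrategy',
        'sstable_size_in_mb': 160
    };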

If you're writing aggressively, you'll find that you run out of I/O
capacity at a lower amount of data at rest.  If you use a compaction
strategy that allows data to eventually stop compacting (Date Tiered, Time
Windowed), you may be able to run a higher data density per node, assuming
some of your data is going into the no-longer-compacting stages.
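
For example, moving a time-series table to Time Windowed compaction looks
roughly like this (a sketch; the table name and window settings are
placeholders for whatever fits your data model):

    -- Group SSTables into 1-day windows; once a window has fully
    -- passed, its data is compacted a final time and then left alone,
    -- so it stops costing compaction I/O.
    ALTER TABLE my_ks.events
    WITH compaction = {
        'class': 'TimeWindowCompactionStrategy',
        'compaction_window_unit': 'DAYS',
        'compaction_window_size': 1
    };

That no-longer-compacting data is what frees up the I/O headroom for
higher density.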

Beyond that, it's hard to say what the right size for you is.  Target the
recommended numbers, and if you find you're not running out of I/O as you
approach them, you can probably go bigger.  Just remember to keep ~50% of
your disk capacity free so compaction has room to run; on a 2 TB volume,
for example, plan on no more than ~1 TB of data at rest.

On Fri, May 27, 2016 at 1:52 PM Anshu Vajpayee <anshu.vajpa...@gmail.com>
wrote:

> Hi All,
> I have a question regarding the max disk space limit on a node.
>
> As per DataStax, we can have 1 TB max disk space for rotational disks and
> up to 5 TB for SSDs on a node.
>
> Could you please suggest, based on your experience, what the space limit
> on a single node would be without causing too much stress on the node?
>
> Thanks,
