If you have a read-heavy cluster, even with LCS (LeveledCompactionStrategy) you 
can still optimize by allocating more memory to the key / row caches.
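
For illustration, both caches are sized in cassandra.yaml; the numbers below are 
made up, not recommendations, and should be sized against your actual hit rates:

    # cassandra.yaml -- illustrative sizes only
    key_cache_size_in_mb: 512     # default is min(5% of heap, 100 MB)
    row_cache_size_in_mb: 1024    # 0 (disabled) by default; only pays off
                                  # for hot, mostly-static rows

You can watch the hit rates with nodetool info (the Key Cache / Row Cache lines) 
before deciding to grow either one.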

If you have a mixed write-heavy / read-heavy workload, then you need more memory 
so that more data stays available in the memtables as it is written.
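
As a sketch of the relevant knobs (again in cassandra.yaml, with illustrative 
values; the defaults for the two space settings are 1/4 of the heap each):

    # cassandra.yaml -- illustrative values only
    memtable_heap_space_in_mb: 4096
    memtable_offheap_space_in_mb: 4096
    memtable_allocation_type: offheap_objects   # moves cell data off-heap,
                                                # easing GC pressure

Larger memtables mean fewer, larger flushes and fewer SSTables to compact later, 
at the cost of keeping more data resident in memory.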

Not having enough memory means not enough heap space, and therefore unnecessary 
GC pressure, even with G1GC… which still has stop-the-world (STW) pauses, 
eventually.
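
For reference, a typical G1 setup in jvm.options looks something like the 
following (illustrative, not a recommendation; the right heap size depends on 
the node's RAM):

    # jvm.options -- illustrative G1 settings
    -Xms16G
    -Xmx16G                      # Xms == Xmx avoids heap-resizing pauses
    -XX:+UseG1GC
    -XX:MaxGCPauseMillis=500     # a target, not a guarantee -- G1 still
                                 # stops the world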

Non-responsiveness was generally due to GC pauses… (given that the data model 
was good all around).
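
One quick way to confirm that: Cassandra's GCInspector logs long collections to 
system.log, so something like

    grep GCInspector /var/log/cassandra/system.log

(path assumes a package install) will show whether the unresponsive windows 
line up with long GC pauses.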
On Jul 17, 2018, 10:39 AM -0400, Vsevolod Filaretov <vsfilare...@gmail.com>, 
wrote:
> @Rahul Singh thank you for the answer!
>
> What is your logic behind such RAM-per-node values? What symptoms usually 
> suggest that you need more RAM?
>
> Did you ever get C* node soft lockups / unresponsiveness due to a node being 
> loaded to 100% of RAM, CPU, or IO? If yes, under which conditions?
>
> Thank you!
>
> Best regards,
> Vsevolod.
>
> > On Tue, 17 Jul 2018 at 17:22, Rahul Singh <rahul.xavier.si...@gmail.com> wrote:
> > > I usually don’t want to put more than 1.0-1.5 TB (at the most) per node. 
> > > Going beyond that makes streaming slow beyond my patience; staying under 
> > > it keeps the repair / compaction processes lean. Memory depends on how 
> > > much you plan to keep in memory in terms of key / row cache. For my 
> > > uses, no less than 64 GB, if not more (~128 GB). The lowest I’ve gone is 
> > > 16 GB, but that’s for dev purposes only.
> > >
> > > --
> > > Rahul Singh
> > > rahul.si...@anant.us
> > > https://www.anant.us/datastax
> > >
> > > Anant Corporation
> > > On Jul 17, 2018, 8:26 AM -0400, Vsevolod Filaretov 
> > > <vsfilare...@gmail.com>, wrote:
> > > > What are the general community's and/or your personal viewpoints on 
> > > > the question of Cassandra node RAM amount vs. data stored per node?
> > > >
> > > > Thank you very much.
> > > >
> > > > Best regards,
> > > > Vsevolod.
