In most cases, Cassandra is pretty efficient about memory usage.
However, if your use case does require more memory for your 
workload, I would not hesitate to use a heap larger than 32 GB.
FYI, we have configured our heap at 84 GB.
However, there's more tuning we have done beyond just the heap, so make 
sure you are aware of what else needs to be done.
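For reference, the heap is normally set in Cassandra's own config rather than on the command line. A minimal sketch, assuming a packaged install with conf/cassandra-env.sh (newer versions use conf/jvm.options with -Xms/-Xmx lines instead; the exact file depends on your version):

```shell
# conf/cassandra-env.sh -- pin the heap explicitly instead of letting the
# script auto-size it from system memory
MAX_HEAP_SIZE="31G"
# young generation size; commonly set alongside MAX_HEAP_SIZE when using CMS
HEAP_NEWSIZE="800M"
```

When both variables are left unset, cassandra-env.sh calculates defaults from the machine's RAM, so setting one without the other is generally discouraged.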

From: "Steinmaurer, Thomas" <>
Date: Tuesday, February 13, 2018 at 1:49 AM
To: "" <>
Subject: RE: if the heap size exceeds 32GB..

Stick with 31 GB in your case. Another article on compressed oops:


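The 32 GB boundary comes from simple arithmetic: a compressed oop stores a reference as a 32-bit offset that is scaled by HotSpot's default 8-byte object alignment, so the largest heap it can address is 2^32 × 8 bytes. A quick sketch of that calculation (plain Java, nothing Cassandra-specific):

```java
public class OopsLimit {
    public static void main(String[] args) {
        long slots = 1L << 32;   // a 32-bit compressed reference can index 2^32 slots
        long alignment = 8;      // default object alignment in bytes (-XX:ObjectAlignmentInBytes)
        long limitBytes = slots * alignment;
        System.out.println(limitBytes / (1L << 30) + " GB addressable"); // 32 GB addressable
    }
}
```

Hence the common advice to cap the heap a little below that limit, e.g. 31 GB, so the JVM keeps compressed oops enabled.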
From: Eunsu Kim []
Sent: Tuesday, 13 February 2018 08:09
Subject: if the heap size exceeds 32GB..

According to the article above, if the JVM heap size is around 32 GB or more, 
memory is wasted because the JVM can no longer use compressed object pointers 
(compressed oops). (Of course, the article is talking about Elasticsearch.)
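Rather than relying on the rule of thumb, you can ask a running HotSpot JVM directly whether compressed oops are in effect via its diagnostic MXBean. A small sketch (standard JDK API on HotSpot; the class name `OopsCheck` is just for illustration):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class OopsCheck {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean hotspot =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // "true" means the JVM is using 32-bit compressed references at this heap size
        System.out.println("UseCompressedOops = "
                + hotspot.getVMOption("UseCompressedOops").getValue());
    }
}
```

The same information is available without code via `java -Xmx31g -XX:+PrintFlagsFinal -version`, filtering for `UseCompressedOops`.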

But if this is a general property of the JVM, does that apply to Cassandra as 
well?
I am using a server with 64 GB of physical memory and I am concerned about heap 
size.
Thank you.
