Hi Team,

I have 64GB of total system memory. 5 node cluster.

xxxxxxxxxxxxx ~# free -m
              total        used        free      shared  buff/cache   available
Mem:          64266       17549       41592          66        5124       46151
Swap:             0           0           0
xxxxxxxxxxxxx ~#

and "egrep -c 'processor([[:space:]]+):.*' /proc/cpuinfo" reports 12 CPU cores.

Currently cassandra-env.sh calculates MAX_HEAP_SIZE as 8 GB and HEAP_NEWSIZE 
as 1200 MB.

I am facing a Java insufficient-memory issue and the Cassandra service is 
going down.
I am going to hard-code the heap values in cassandra-env.sh as below.

MAX_HEAP_SIZE="16G"  (1/4 of total RAM)
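A minimal sketch of the override in cassandra-env.sh, assuming the 64 GB / 12-core setup above. Note that cassandra-env.sh expects MAX_HEAP_SIZE and HEAP_NEWSIZE to be set together (the script exits with an error if only one is set), so I plan to pin both:

```shell
# Override the automatic heap calculation in cassandra-env.sh.
# Values below assume 64 GB total RAM and 12 cores (my setup).
MAX_HEAP_SIZE="16G"    # 1/4 of total RAM
HEAP_NEWSIZE="1200M"   # ~100 MB per core, matching the script's CMS guideline
```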

Are these values correct for my setup in production? Are there any 
disadvantages to doing this?

Please let me know if any of you have faced the same issue.

Thanks in advance!

Best regards,
Bhargav M
