On Thu, 21 Jan 2010 11:04:58 -0600 Jonathan Ellis <jbel...@gmail.com> wrote: 

JE> 2010/1/21 Ted Zlatanov <t...@lifelogs.com>:
>> Based on that, it seems like a good idea to enable the parallel or
>> concurrent garbage collectors with large heaps.  We're looking at this
>> at our site as well so I'm curious about people's experiences.

JE> Cassandra's default jvm options (bin/cassandra.in.sh) enable the
JE> concurrent GC.

On Thu, 21 Jan 2010 11:04:35 -0600 Brandon Williams <dri...@gmail.com> wrote: 

BW> Cassandra already uses the ParNew and CMS GCs by default (in
BW> cassandra.in.sh)

Are those the best GC choices for Cassandra on a machine like what the
OP mentioned?  There are many more tuning options:

http://java.sun.com/performance/reference/whitepapers/6_performance.html
http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html
http://blogs.sun.com/watt/resource/jvm-options-list.html

plus the specific settings Cassandra uses:

        -XX:SurvivorRatio=8 \
        -XX:TargetSurvivorRatio=90 \
        -XX:+AggressiveOpts \
        -XX:+UseParNewGC \
        -XX:+UseConcMarkSweepGC \
        -XX:+CMSParallelRemarkEnabled \
        -XX:SurvivorRatio=128 \
        -XX:MaxTenuringThreshold=0 \

may not be right for a heap 16-64 times larger than the 1 GB heap
specified in cassandra.in.sh.
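For a much larger heap, something along the lines of the Sun tuning
guides above might be a better starting point.  The numbers here are
purely illustrative (a hypothetical 16 GB heap), not tested
recommendations; every flag would need measuring against a real
workload:

```shell
# Illustrative CMS settings for a hypothetical 16 GB heap -- starting
# points for experimentation, not tested recommendations.
JVM_OPTS="$JVM_OPTS \
        -Xms16G -Xmx16G \
        -Xmn1G \
        -XX:+UseParNewGC \
        -XX:+UseConcMarkSweepGC \
        -XX:+CMSParallelRemarkEnabled \
        -XX:CMSInitiatingOccupancyFraction=75 \
        -XX:+UseCMSInitiatingOccupancyOnly"
```

Sizing the young generation explicitly (-Xmn) and starting CMS earlier
(CMSInitiatingOccupancyFraction) are the usual knobs the GC tuning
guide suggests revisiting once the heap grows this large.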

Also, maybe these options:

        -ea \
        -Xdebug \
        -XX:+HeapDumpOnOutOfMemoryError \
        -Xrunjdwp:transport=dt_socket,server=y,address=8888,suspend=n \

should go in a "debugging" configuration, triggered by setting
$CASSANDRA_DEBUG?  With a 60+ GB heap, dumping it to a file could be
very painful.  It's pretty bad with a smaller heap too.
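The gate could be as simple as the sketch below (the debug_opts helper
and the exact option list are hypothetical, but the variable names
follow the existing cassandra.in.sh conventions):

```shell
# Sketch: only add debugging options when $CASSANDRA_DEBUG is set.
# debug_opts is a hypothetical helper, not part of cassandra.in.sh.
debug_opts() {
    if [ -n "$CASSANDRA_DEBUG" ]; then
        echo "-ea -Xdebug -XX:+HeapDumpOnOutOfMemoryError" \
             "-Xrunjdwp:transport=dt_socket,server=y,address=8888,suspend=n"
    fi
}

JVM_OPTS="-Xms1G -Xmx1G $(debug_opts)"
```

With CASSANDRA_DEBUG unset, production nodes would skip -Xdebug, the
JDWP listener, and the potentially enormous heap dump on OOM.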

Finally, is there a reason the -server option is not used?

Thanks
Ted
