> So, are you saying this is normal and expected from Cassandra?  So,
> under load, we can expect java garbage collection to stop the Cassandra
> process on that server from time to time, essentially taking out the
> node for short periods of time while it does garbage collection?

This thread is getting out of hand and off into la-la-land. Original
poster: if you want to skip some rantings of mine, skip to the end,
do (1), and post the results to the list.

First of all, re-checking the history, it seems the only concrete
information is:

INFO [ScheduledTasks:1] 2011-03-05 15:21:23,524 GCInspector.java (line
128) GC for ConcurrentMarkSweep: 18052 ms, -997761672 reclaimed
leaving 5796586088

This indicates that a concurrent mark/sweep GC cycle took 18 seconds.
That may or may not be a bit high for the heap size, but regardless, a
CMS cycle is not one long stop-the-world pause. It does involve some
stop-the-world pauses, but those 18 seconds are not time during which
the application was stopped.

I still don't see anything that tells us what's actually going on in
the OP's case. But the fact that the heap grew rather than shrank as a
result of the GC cycle suggests that something is indeed wrong.
Probably the heap is too full, as has already been suggested, and the
question is just why. Probably *something* is tweaked incorrectly for
the heap size, be that the row cache, memtable flush thresholds, etc.
Or there's a bug. But there seems to be a distinct lack of information
and a distinct non-lack of random speculation and GC blaming.

CASSANDRA-2252, which was linked to earlier, is *not* a magic fix for this.

A lot can be said about garbage collection techniques, but the whole
point of the CMS collector is to avoid the need for long
stop-the-world pauses. Some are still required, but they are normally
supposed to be short. For some workloads, you eventually reach a point
where fragmentation in the old generation forces a full stop-the-world
collection while the entire heap is compacted. This *does* result in a
long uninterrupted pause, if and when it happens.

Usually, it happens because you actually have too much live data on
the heap. That is entirely different from having a reasonable workload
that is still not handled by the GC in a sensible fashion.

Is it possible that everything is configured correctly and the OP is
triggering a bug, or simply hitting sufficiently poor CMS behavior
that the freezes are due to unavoidable periodic compactions? Yes. Do
we have nearly enough information to know? No. Should we assume it is
the GC's/JVM's fault before having such information, given that lots
of people run Cassandra without triggering this to the extent implied
here? No.

I would suggest to the OP:

(1) I cannot stress this one enough: run with -XX:+PrintGC
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps and collect the output
(see the snippet after this list).
(2) Attach to your process with jconsole or some similar tool.
(3) Observe the behavior of the heap over time. Preferably post
screenshots so others can look at them.
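
For (1), assuming you start Cassandra via the stock bin/cassandra
script, conf/cassandra-env.sh is the usual place to add the flags
(adjust to however you actually launch the JVM; the gc.log path below
is just an example):

    JVM_OPTS="$JVM_OPTS -XX:+PrintGC"
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCTimeStamps"
    JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"

The -Xloggc option just keeps the GC output out of the regular log;
leave it off if you'd rather have it on stdout.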

(1) in particular is very important. It's completely useless to
speculate about details and make sweeping statements when the only
indication so far is that there is too much live data on the heap, and
we don't even have the results of (1) to go by.

(1) will give you output showing when the different GC stages trigger,
along with heap sizes etc. It will also print the reason for a
fallback to full GC, such as a promotion failure. One can usually
observe fairly well what led up to such a fallback and draw
conclusions. It will also show how long each stage took (and not all
of them are stop-the-world).
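
For (2) and (3), if attaching jconsole to the node is awkward, the
same counters jconsole graphs are exposed through the standard
MXBeans. A minimal sketch of the idea (as written it only watches its
own JVM, so treat it as an illustration of which counters to poll
rather than a drop-in tool; to watch Cassandra itself you would go
through remote JMX, which is what jconsole does):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class HeapWatcher {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                // Current heap occupancy, as shown in jconsole's memory tab.
                MemoryUsage u =
                    ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
                StringBuilder line = new StringBuilder(String.format(
                    "heap used=%dMB committed=%dMB max=%dMB",
                    u.getUsed() >> 20, u.getCommitted() >> 20, u.getMax() >> 20));
                // With CMS these beans are typically named "ParNew" and
                // "ConcurrentMarkSweep"; count/time are cumulative totals.
                for (GarbageCollectorMXBean gc :
                         ManagementFactory.getGarbageCollectorMXBeans()) {
                    line.append(String.format(" | %s count=%d time=%dms",
                        gc.getName(), gc.getCollectionCount(),
                        gc.getCollectionTime()));
                }
                System.out.println(line);
                Thread.sleep(1000); // arbitrary polling interval
            }
        }
    }

Plotting "used" over time against the timestamps in the GC log is
usually enough to see whether the live set keeps creeping up toward
the maximum heap size.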

-- 
/ Peter Schuller
