I am using 1.0.5. The logs suggest it was a single instance of failure,
and I'm unable to reproduce it.
From the logs, in a span of 30 seconds heap usage went from 4.8 GB to
8.8 GB, with stop-the-world GC running 20 times. I believe ParNew was
unable to clean up memory due to some problem. I will report back if I
am able to reproduce this failure.
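
In case it recurs, here is a minimal sketch of the GC logging one could
enable in conf/cassandra-env.sh to capture more detail next time
(standard HotSpot flags; the log path is an assumption):

    JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
    JVM_OPTS="$JVM_OPTS -XX:+PrintPromotionFailure"  # requires a recent JDK 6
    JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"

With that in place, a ParNew promotion failure would show up explicitly
in gc.log instead of having to be inferred from GCInspector lines.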

On Mon, Jun 18, 2012 at 6:14 AM, aaron morton <aa...@thelastpickle.com> wrote:
> Not commenting on the GC advice, but Cassandra memory usage has improved a
> lot since that was written. I would take a look at what was happening and
> see if tweaking the Cassandra config helped before modifying GC settings.
>
> "GCInspector.java(line 88): Heap is .9934 full." Is this expected? or
> should I adjust my flush_largest_memtable_at variable.
>
> flush_largest_memtables_at is a safety valve only. Reducing it may help
> avoid an OOM, but it will not treat the cause.
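>
> For reference, a minimal sketch of the corresponding cassandra.yaml
> setting (0.75 is the 1.0.x default; treat any lower value as
> illustrative rather than a recommendation):
>
>     # Safety valve: flush the largest memtables when heap usage crosses
>     # this fraction of max. Buys headroom; does not treat the cause.
>     flush_largest_memtables_at: 0.75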
>
> What version are you using ?
>
> 1.0.0 had an issue where deletes were not taken into consideration
> (https://github.com/apache/cassandra/blob/trunk/CHANGES.txt#L33) but this
> does not sound like the same problem.
>
> Take a look in the logs on the machine and see if it was associated with a
> compaction or repair operation.
>
> I would also consider experimenting on one node with an 8GB heap / 800MB
> new-gen size. More is not always better.
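>
> As a sketch, that experiment maps onto two variables that exist in
> conf/cassandra-env.sh (the values here simply mirror the suggestion
> above):
>
>     MAX_HEAP_SIZE="8G"    # total JVM heap (-Xms/-Xmx)
>     HEAP_NEWSIZE="800M"   # young generation size (-Xmn)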
>
>
> -----------------
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 14/06/2012, at 8:05 PM, rohit bhatia wrote:
>
> Looking at http://blog.mikiobraun.de/2010/08/cassandra-gc-tuning.html
> and server logs, I think my situation is this
>
> "The default cassandra settings has the highest peak heap usage. The
> problem with this is that it raises the possibility that during the
> CMS cycle, a collection of the young generation runs out of memory to
> migrate objects to the old generation (a so-called concurrent mode
> failure), leading to stop-the-world full garbage collection. However,
> with a slightly lower setting of the CMS threshold, we get a bit more
> headroom, and more stable overall performance."
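>
> As a hedged sketch, lowering the CMS threshold is done with two
> standard HotSpot flags in cassandra-env.sh (75 is an illustrative
> value; I believe the stock env file already sets one here):
>
>     JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=75"
>     JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"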
>
> I see ConcurrentMarkSweep entries in system.log trying to GC 2-4 collections.
>
> Any suggestions for preemptive measures against this would be welcome.
>
>
