Re: Cassandra out of Heap memory

2012-06-17 Thread aaron morton
Not commenting on the GC advice, but Cassandra memory usage has improved a lot 
since that was written. I would take a look at what was happening and see if 
tweaking the Cassandra config helps before modifying GC settings.

 GCInspector.java (line 88): Heap is .9934 full. Is this expected, or
 should I adjust my flush_largest_memtable_at variable?
flush_largest_memtable_at is a safety valve only. Reducing it may help avoid 
OOM, but it will not treat the cause. 
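For reference, the setting being discussed lives in cassandra.yaml. A sketch of the relevant line follows; the 0.75 default is from memory of the 1.0 series and should be checked against your own config:

```yaml
# cassandra.yaml -- emergency safety valve, not a tuning knob.
# After a full GC, if the heap is still more than this fraction full,
# Cassandra flushes the largest memtable to buy some headroom.
flush_largest_memtable_at: 0.75
```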

What version are you using ? 

1.0.0 had an issue where deletes were not taken into consideration 
(https://github.com/apache/cassandra/blob/trunk/CHANGES.txt#L33), but this does 
not sound like the same problem. 

Take a look in the logs on the machine and see if it was associated with a 
compaction or repair operation. 

I would also consider experimenting on one node with 8GB / 800MB heap sizes. 
More is not always better. 
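As a sketch, a heap of that size would be set in conf/cassandra-env.sh (variable names as used in the 1.0 series; apply it to a single node first, as suggested above):

```shell
# conf/cassandra-env.sh -- override the auto-calculated JVM heap sizes.
# Experimental smaller heap for one node; the values are illustrative.
MAX_HEAP_SIZE="8G"
HEAP_NEWSIZE="800M"
```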


-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 14/06/2012, at 8:05 PM, rohit bhatia wrote:

 Looking at http://blog.mikiobraun.de/2010/08/cassandra-gc-tuning.html
 and server logs, I think my situation is this
 
 The default cassandra settings have the highest peak heap usage. The
 problem with this is that it raises the possibility that during the
 CMS cycle, a collection of the young generation runs out of memory to
 migrate objects to the old generation (a so-called concurrent mode
 failure), leading to stop-the-world full garbage collection. However,
 with a slightly lower setting of the CMS threshold, we get a bit more
 headroom, and more stable overall performance.
 
 I see ConcurrentMarkSweep entries in system.log trying to GC 2-4 collections.
 
 Any suggestions for preemptive measures would be welcome.



Re: Cassandra out of Heap memory

2012-06-17 Thread rohit bhatia
I am using 1.0.5. The logs suggest that it was a single instance of
failure, and I'm unable to reproduce it.
From the logs, in a span of 30 seconds heap usage went from 4.8 GB to
8.8 GB, with stop-the-world GC running 20 times. I believe that ParNew
was unable to clean up memory due to some problem. I will report if I
am able to reproduce this failure.
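A quick way to quantify this kind of growth is to pull the usage figures out of the GCInspector lines themselves. The sketch below assumes a 1.0-era log format ("GC for <collector>: ... <used> used; max is <max>"); the exact wording should be verified against your own system.log, and the sample line is constructed, not taken from a real log.

```python
import re

# Assumed GCInspector line format (Cassandra 1.0-era); verify against your logs.
GC_LINE = re.compile(
    r"GC for (?P<collector>\w+): (?P<ms>\d+) ms for (?P<n>\d+) collections, "
    r"(?P<used>\d+) used; max is (?P<max>\d+)"
)

def heap_fraction(line):
    """Return (collector name, fraction of heap used) or None if no match."""
    m = GC_LINE.search(line)
    if m is None:
        return None
    return m.group("collector"), int(m.group("used")) / int(m.group("max"))

# Constructed example modelled on the 8.8 GB used / 12 GB max situation above.
sample = ("INFO [ScheduledTasks:1] 2012-06-17 10:00:00,000 GCInspector.java "
          "(line 122) GC for ConcurrentMarkSweep: 2000 ms for 2 collections, "
          "9448928051 used; max is 12884901888")
collector, frac = heap_fraction(sample)
```

Running this over consecutive lines would show the heap fraction climbing toward 1.0 during the 30-second window described above.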

On Mon, Jun 18, 2012 at 6:14 AM, aaron morton aa...@thelastpickle.com wrote:
 Not commenting on the GC advice, but Cassandra memory usage has improved a
 lot since that was written. I would take a look at what was happening and
 see if tweaking the Cassandra config helps before modifying GC settings.

 GCInspector.java (line 88): Heap is .9934 full. Is this expected, or
 should I adjust my flush_largest_memtable_at variable?

 flush_largest_memtable_at is a safety valve only. Reducing it may help avoid
 OOM, but it will not treat the cause.

 What version are you using ?

 1.0.0 had an issue where deletes were not taken into consideration
 (https://github.com/apache/cassandra/blob/trunk/CHANGES.txt#L33), but this
 does not sound like the same problem.

 Take a look in the logs on the machine and see if it was associated with a
 compaction or repair operation.

 I would also consider experimenting on one node with 8GB / 800MB heap sizes.
 More is not always better.


 -
 Aaron Morton
 Freelance Developer
 @aaronmorton
 http://www.thelastpickle.com

 On 14/06/2012, at 8:05 PM, rohit bhatia wrote:

 Looking at http://blog.mikiobraun.de/2010/08/cassandra-gc-tuning.html
 and server logs, I think my situation is this

 The default cassandra settings have the highest peak heap usage. The
 problem with this is that it raises the possibility that during the
 CMS cycle, a collection of the young generation runs out of memory to
 migrate objects to the old generation (a so-called concurrent mode
 failure), leading to stop-the-world full garbage collection. However,
 with a slightly lower setting of the CMS threshold, we get a bit more
 headroom, and more stable overall performance.

 I see ConcurrentMarkSweep entries in system.log trying to GC 2-4 collections.

 Any suggestions for preemptive measures would be welcome.




Re: Cassandra out of Heap memory

2012-06-14 Thread rohit bhatia
Looking at http://blog.mikiobraun.de/2010/08/cassandra-gc-tuning.html
and server logs, I think my situation is this

The default cassandra settings have the highest peak heap usage. The
problem with this is that it raises the possibility that during the
CMS cycle, a collection of the young generation runs out of memory to
migrate objects to the old generation (a so-called concurrent mode
failure), leading to stop-the-world full garbage collection. However,
with a slightly lower setting of the CMS threshold, we get a bit more
headroom, and more stable overall performance.
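For what it's worth, "lowering the CMS threshold" translates into JVM flags like the following, typically added in cassandra-env.sh. The 65% value is purely illustrative, not a recommendation:

```shell
# Start the concurrent mark-sweep cycle when the old generation is ~65%
# full, instead of letting the JVM pick its own (usually higher) adaptive
# trigger point.
JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=65"
JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"
```

Starting CMS earlier trades some extra background GC work for headroom against concurrent mode failures.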

I see ConcurrentMarkSweep entries in system.log trying to GC 2-4 collections.

Any suggestions for preemptive measures would be welcome.


Re: Cassandra out of Heap memory

2012-06-13 Thread rohit bhatia
To clarify things

Our setup consists of 8 nodes with 32 GB RAM each,
with a max heap size of 12 GB
and a heap new size of 1.6 GB.

The load on our nodes is a write/read ratio of 10, with 6 main Column Families.
The column families flush every hour with SSTable sizes of around
50-100 MB, while the in-heap memtable size for those seems to be
around 500 MB. (Is 10-20 times overhead expected?)
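As a rough arithmetic check (a sketch using only the figures quoted above), the in-heap memtable size divided by the flushed SSTable size gives the overhead factor in question:

```python
# Rough check of memtable-vs-SSTable overhead using the figures above.
memtable_mb = 500            # reported in-heap memtable size
sstable_mb = [50, 100]       # observed sizes of flushed SSTables

# In-memory overhead: JVM object headers, skip-list entries and per-column
# bookkeeping inflate the serialized on-disk size severalfold.
ratios = [memtable_mb / s for s in sstable_mb]
```

These figures work out to roughly 5-10x, somewhat below the 10-20x range asked about.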

Also, this is the first time I'm seeing max-heap-size-reached
exceptions. Could there be a significant reason for this other than
that the cassandra servers have been running without a restart for 2
months?


On Wed, Jun 13, 2012 at 6:30 PM, rohit bhatia rohit2...@gmail.com wrote:
 Hi

 My cassandra node went out of heap memory with this message:
 GCInspector.java (line 88): Heap is .9934 full. Is this expected, or
 should I adjust my flush_largest_memtable_at variable?

 Also, one change I did in my cluster was to add 5 Column Families which are empty.
 Should empty Column Families cause a significant increase in cassandra heap 
 usage?

 Thanks
 Rohit