I do not see the out-of-heap errors any more, but I am taking a bit of a performance hit.
Take a look at nodetool cfhistograms to see how many SSTables are being touched
per read, and the local read latency.
In general, if reads are hitting more than 4 SSTables it's not great.
BloomFilterFalseRatio is
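To make that suggestion concrete, here is a minimal sketch of how you might summarise the SSTables-per-read histogram that nodetool cfhistograms prints. The counts below are made up for illustration, not taken from a real cluster:

```python
# Illustrative only: these counts are invented, not real cfhistograms output.
# cfhistograms reports, per offset (number of SSTables touched), how many
# reads fell into that bucket.
sample = {1: 9000, 2: 600, 3: 250, 4: 100, 5: 40, 8: 10}

total = sum(sample.values())
over_four = sum(n for sstables, n in sample.items() if sstables > 4)
print(f"{100.0 * over_four / total:.2f}% of reads touched more than 4 SSTables")
# → 0.50% of reads touched more than 4 SSTables
```

If that percentage is large, it usually means compaction is falling behind or rows are spread over many SSTables.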
I also encountered a similar problem. I dumped the JVM heap and analysed it with
Eclipse MAT. The plugin told me there are 10334 instances of
SSTableReader, consuming 6.6G of memory. I found the CompactionExecutor thread
held 8000+ SSTableReader objects. I wonder why there are so many
SSTableReader instances.
On Wed, Jun 26, 2013 at 12:16 AM, aaron morton aa...@thelastpickle.com wrote:
The bloom_filter_fp_chance value was changed from the default to 0.1; I looked
at the filters and they are about 2.5G on disk, and I have around 8G of heap.
I will try increasing the value to 0.7 and report my results.
You need to re-write the sstables on disk using nodetool upgradesstables.
nodetool -h localhost flush didn't do much good.
Do you have 100's of millions of rows ?
If so, see recent discussions about tuning bloom_filter_fp_chance and
index_sampling to reduce memory use.
If this is an old schema you may be using the very old setting of 0.000744,
which creates very large bloom filters.
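For context on why fp chance matters for heap: an optimally sized bloom filter needs roughly ln(1/p)/(ln 2)^2 bits per key. Assuming Cassandra's filter sizing approximately follows this textbook formula (a simplification of the real implementation), a quick sketch shows the difference between the old 0.000744 setting and 0.1:

```python
import math

def bloom_bits_per_key(fp_chance):
    # Standard optimal bloom filter sizing: m/n = ln(1/p) / (ln 2)^2
    return math.log(1.0 / fp_chance) / (math.log(2) ** 2)

for p in (0.000744, 0.01, 0.1):
    print(f"fp_chance={p}: ~{bloom_bits_per_key(p):.1f} bits per key")
# 0.000744 needs ~15.0 bits/key, 0.1 only ~4.8 bits/key
```

At hundreds of millions of keys per node, roughly 15 vs. 4.8 bits per key is about a 3x difference in filter memory, which is why raising fp_chance shrinks the filters so much.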
On Fri, Jun 21, 2013 at 2:53 AM, aaron morton aa...@thelastpickle.com wrote:
Yes, I have 100's of millions of rows.
bloom_filter_fp_chance = 0.7 is probably way too large to be effective and
you'll probably have issues compacting deleted rows and get poor read
performance with a value that high. I'd guess that anything larger than
0.1 might as well be 1.0.
-Bryan
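A rough way to see Bryan's point: if each SSTable's bloom filter false-positives independently with probability p (an independence assumption, so this is only a back-of-the-envelope sketch), the expected number of SSTables read unnecessarily per lookup is about p times the number of SSTables that don't actually hold the row:

```python
def expected_wasted_reads(fp_chance, other_sstables):
    # Assuming each bloom filter false-positives independently with
    # probability fp_chance (a simplification), the expected number of
    # SSTables read unnecessarily per lookup is:
    return fp_chance * other_sstables

for p in (0.1, 0.7, 1.0):
    print(f"fp_chance={p}: ~{expected_wasted_reads(p, 10):.1f} wasted SSTable reads")
```

With 10 non-matching SSTables, 0.7 gives ~7 wasted reads per lookup versus ~1 at 0.1, i.e. not far from having no filter at all (which would be 10).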
On Fri, Jun 21, 2013 at 5:58 AM, srmore wrote:
I will take a heap dump and see what's in there rather than guessing.
On Fri, Jun 21, 2013 at 4:12 PM, Bryan Talbot btal...@aeriagames.com wrote:
If you want, you can try to force a GC through JConsole (Memory tab, Perform GC).
In theory it triggers a full GC, but when it actually happens depends on the JVM.
-Wei
- Original Message -
From: Robert Coli rc...@eventbrite.com
To: user@cassandra.apache.org
Sent: Tuesday, June 18, 2013
I see an issue when I run high traffic to the Cassandra nodes: the heap
gets full to about 94% (which is expected), but the thing that confuses me
is that the heap usage never goes down after the traffic is stopped
(at least, it appears to be so). I kept the nodes up for a day after
stopping the traffic.
On Tue, Jun 18, 2013 at 8:25 AM, srmore comom...@gmail.com wrote:
I see an issue when I run high traffic to the Cassandra nodes, the heap
gets full to about 94% (which is expected)
Which is expected to cause GC failure? ;)
But seriously, the reason your node is unable to GC is that you have
Thanks Rob,
But then shouldn't the JVM GC it eventually? I can still see Cassandra alive
and kicking, but it looks like the heap is locked up even after the traffic
has long stopped.
nodetool -h localhost flush didn't do much good.
the version I am running is 1.0.12 (I know it's due for an upgrade but
On Tue, Jun 18, 2013 at 10:33 AM, srmore comom...@gmail.com wrote:
No, when the GC system fails this hard it is often a permanent