bloom_filter_fp_chance = 0.7 is almost certainly too large to be effective: with a value that high you'll likely have trouble compacting deleted rows and will get poor read performance. I'd guess that anything larger than 0.1 might as well be 1.0.

-Bryan
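To see why high fp_chance values stop paying off (and why the very old 0.000744 default produced such large filters for hundreds of millions of rows), here is a back-of-the-envelope sketch using the textbook bloom filter sizing formula, bits per key = -ln(p) / (ln 2)^2. This is not Cassandra's exact implementation, and the row count below is a hypothetical stand-in for the numbers in the thread:

```java
// Back-of-the-envelope bloom filter sizing. Uses the textbook formula
// bits/key = -ln(p) / (ln 2)^2, not Cassandra's exact implementation.
public class BloomFilterSizing {

    // Optimal bits per element for a target false-positive chance p.
    static double bitsPerKey(double fpChance) {
        return -Math.log(fpChance) / (Math.log(2) * Math.log(2));
    }

    public static void main(String[] args) {
        long keys = 500_000_000L; // hypothetical: "100's of millions of rows"
        for (double p : new double[] {0.000744, 0.01, 0.1, 0.7}) {
            double bits = bitsPerKey(p);
            double gib = bits * keys / 8 / (1L << 30);
            System.out.printf("fp_chance=%.6f -> %4.1f bits/key, ~%.2f GiB for %,d keys%n",
                              p, bits, gib, keys);
        }
    }
}
```

The old 0.000744 setting works out to ~15 bits per key, versus ~4.8 at 0.1 and under 1 bit at 0.7, where the filter rejects almost nothing — which is the sense in which anything much above 0.1 might as well be 1.0. Since each SSTable carries its own filter, the on-disk total scales with how many SSTables each key appears in. Note also that a changed bloom_filter_fp_chance only applies to newly written SSTables, so existing filters persist until the data is rewritten (e.g. by compaction or nodetool upgradesstables / scrub).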
On Fri, Jun 21, 2013 at 5:58 AM, srmore <comom...@gmail.com> wrote:
>
> On Fri, Jun 21, 2013 at 2:53 AM, aaron morton <aa...@thelastpickle.com> wrote:
>
>> > nodetool -h localhost flush didn't do much good.
>>
>> Do you have 100's of millions of rows?
>> If so, see recent discussions about reducing the bloom_filter_fp_chance
>> and index_sampling.
>
> Yes, I have 100's of millions of rows.
>
>> If this is an old schema you may be using the very old setting of
>> 0.000744, which creates a lot of bloom filters.
>
> The bloom_filter_fp_chance value was changed from the default to 0.1. I
> looked at the filters: they are about 2.5G on disk, and I have around 8G
> of heap. I will try increasing the value to 0.7 and report my results.
>
> It also appears to be a case of hard GC failure (as Rob mentioned), as
> the heap is never released; even after 24+ hours of idle time, the JVM
> needs to be restarted to reclaim the heap.
>
>> Cheers
>>
>> -----------------
>> Aaron Morton
>> Freelance Cassandra Consultant
>> New Zealand
>>
>> @aaronmorton
>> http://www.thelastpickle.com
>>
>> On 20/06/2013, at 6:36 AM, Wei Zhu <wz1...@yahoo.com> wrote:
>>
>> If you want, you can try to force the GC through JConsole:
>> Memory -> Perform GC.
>>
>> It theoretically triggers a full GC; when it actually happens depends on
>> the JVM.
>>
>> -Wei
>>
>> ------------------------------
>> *From: *"Robert Coli" <rc...@eventbrite.com>
>> *To: *user@cassandra.apache.org
>> *Sent: *Tuesday, June 18, 2013 10:43:13 AM
>> *Subject: *Re: Heap is not released and streaming hangs at 0%
>>
>> On Tue, Jun 18, 2013 at 10:33 AM, srmore <comom...@gmail.com> wrote:
>> > But then shouldn't the JVM GC it eventually? I can still see Cassandra
>> > alive and kicking, but it looks like the heap is locked up even after
>> > the traffic is long stopped.
>>
>> No, when the GC system fails this hard it is often a permanent failure
>> which requires a restart of the JVM.
>>
>> > nodetool -h localhost flush didn't do much good.
>>
>> This adds support to the idea that your heap is too full, and not full
>> of memtables.
>>
>> You could try nodetool -h localhost invalidatekeycache, but that
>> probably will not free enough memory to help you.
>>
>> =Rob
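Wei's JConsole suggestion (Memory -> Perform GC) can also be scripted over JMX instead of clicking through the GUI. A minimal sketch, assuming Cassandra's default JMX port (7199) on localhost with JMX authentication disabled; it invokes the gc() operation of the standard java.lang:type=Memory MBean, which is what JConsole's button calls:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Requests the same full GC as JConsole's Memory -> Perform GC button.
public class ForceGc {
    public static void main(String[] args) throws Exception {
        // Assumes Cassandra's default JMX port (7199) and no JMX auth.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
        try (JMXConnector jmx = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = jmx.getMBeanServerConnection();
            // No-arg gc() operation on the standard Memory MBean.
            mbs.invoke(new ObjectName("java.lang:type=Memory"), "gc", null, null);
            System.out.println("Requested a full GC on the remote JVM.");
        }
    }
}
```

As Rob notes, when GC has failed this hard a forced collection usually reclaims little; it is mainly a quick way to confirm whether the heap is genuinely full or merely not yet collected.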