I really don't think I have more than 500 million rows ... any smart way to
count the number of rows inside the ks?
Use the output from nodetool cfstats; it has a row count and bloom filter size
for each CF.
You may also want to upgrade to 1.1 to get global cache management, that can
make things …
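Totaling those cfstats numbers across column families can be scripted; a minimal sketch in Python, run here against sample output (the field labels "Number of Keys (estimate)" and "Bloom Filter Space Used" are my assumption from the 1.0.x-era format and vary between Cassandra versions):

```python
# Sketch: total up row-count estimates and bloom filter sizes from
# `nodetool cfstats` output. The field labels are assumptions from the
# 1.0.x-era format and differ in later versions.
import re

SAMPLE = """\
Keyspace: ks1
    Column Family: users
        Number of Keys (estimate): 1200000
        Bloom Filter Space Used: 2500000
    Column Family: events
        Number of Keys (estimate): 3400000
        Bloom Filter Space Used: 7100000
"""

def summarize(cfstats_text):
    """Return (total_estimated_keys, total_bloom_filter_bytes)."""
    keys = sum(int(n) for n in
               re.findall(r"Number of Keys \(estimate\): (\d+)", cfstats_text))
    bloom = sum(int(n) for n in
                re.findall(r"Bloom Filter Space Used: (\d+)", cfstats_text))
    return keys, bloom

print(summarize(SAMPLE))  # (4600000, 9600000)
```

Keep in mind cfstats is per node, so the key count includes replicas; run it on each node rather than reading it as a cluster-wide total.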
Hi Aaron, thanks for your help.
If you have more than 500 million rows you may want to check the
bloom_filter_fp_chance; the old default was 0.000744 and the new (post 1.0)
number is 0.01 for size-tiered.
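To put numbers on that, the standard bloom filter sizing formula (bits per key = -ln p / (ln 2)^2) gives a rough feel for the memory at stake; this is textbook math, not Cassandra's exact implementation:

```python
# Back-of-envelope bloom filter sizing for a target false-positive
# probability p. Textbook formula, not Cassandra's exact implementation.
import math

def bits_per_key(fp_chance):
    # Standard bloom filter result: m/n = -ln(p) / (ln 2)^2
    return -math.log(fp_chance) / (math.log(2) ** 2)

def filter_mb(n_rows, fp_chance):
    # Filter size in MB for n_rows keys at the given false-positive chance.
    return n_rows * bits_per_key(fp_chance) / 8 / 1024 / 1024

rows = 500_000_000
old = filter_mb(rows, 0.000744)  # old default: ~15 bits/key, roughly 890 MB
new = filter_mb(rows, 0.01)      # new default: ~9.6 bits/key, roughly 570 MB
```

So at 500 million rows per node, moving from the old default to 0.01 frees on the order of 300 MB of filter space per node, which is significant on a tight heap.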
Do you have a copy of the specific stack trace? Given the version and
CL behavior, one thing you may be experiencing is:
https://issues.apache.org/jira/browse/CASSANDRA-4578
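When comparing a crash against a known ticket like CASSANDRA-4578, the full stack trace matters, not just the heap warnings; a hypothetical helper that pulls OutOfMemoryError traces out of system.log, assuming the usual layout of an error line followed by indented "at ..." frames:

```python
def extract_oom_traces(log_lines):
    """Collect each OutOfMemoryError line together with its stack frames."""
    traces, current = [], None
    for line in log_lines:
        if "OutOfMemoryError" in line:
            current = [line]                   # start of a new trace
        elif current is not None and line.lstrip().startswith("at "):
            current.append(line)               # stack frame, keep collecting
        elif current is not None:
            traces.append("\n".join(current))  # trace ended
            current = None
    if current is not None:
        traces.append("\n".join(current))
    return traces

# Hypothetical log excerpt (class names are illustrative only):
sample = [
    "INFO [FlushWriter] flushing memtable",
    "ERROR [ReadStage] java.lang.OutOfMemoryError: Java heap space",
    "    at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(...)",
    "    at org.apache.cassandra.service.StorageProxy.read(...)",
    "INFO [CompactionExecutor] compaction finished",
]
print(extract_oom_traces(sample)[0])
```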
On Mon, Jul 22, 2013 at 7:15 AM, cbert...@libero.it <cbert...@libero.it> wrote:
Hi Aaron, thanks for your help.
Take a look at how many rows you have and the size of the bloom filters. You
may have grown :)
Hi all,
I'm experiencing some problems after 3 years of Cassandra in production (from
0.6 to 1.0.6): twice in 3 weeks, 2 nodes crashed with an OutOfMemory
exception.
In the log I can read the warning about low available heap ... now I'm
increasing my RAM a little, and my Java heap (1/4 of
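On the heap sizing: the "1/4" is the fraction cassandra-env.sh uses by default. As far as I recall, it takes the larger of half the RAM capped at 1 GB and a quarter of the RAM capped at 8 GB; treat the exact caps as an assumption. A sketch of that rule:

```python
def default_max_heap_mb(system_ram_mb):
    # Assumed cassandra-env.sh rule: max(min(ram/2, 1 GB), min(ram/4, 8 GB)).
    half = min(system_ram_mb // 2, 1024)
    quarter = min(system_ram_mb // 4, 8192)
    return max(half, quarter)

print(default_max_heap_mb(16384))  # 4096 -> a 16 GB box gets a 4 GB heap
```

Under that rule, adding RAM beyond 32 GB no longer grows the default heap; past that point you have to raise MAX_HEAP_SIZE yourself.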