Do you have the backtrace from the heap dump, so we can see what the array
was and what was using it?
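(If you don't have a dump handy, one can usually be captured with something
like "jmap -dump:format=b,file=/tmp/cassandra.hprof <pid>" and opened in a
tool such as Eclipse MAT to follow the retaining path back from the big
array - the exact steps will depend on your setup.)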
Cheers
-
Aaron Morton
New Zealand
@aaronmorton
Co-Founder & Principal Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com
On 10/12/2013, at 4:41 am, Klaus Brunner wrote:
We're running largely default settings, with the exception of shard
(1) and replica (0-n) counts and EC2-related snitch etc. No row
caching at all. The logs never showed the same kind of entries
pre-OOM; it basically occurred out of the blue.
However, it seems that the problem has now subsided.
Do you have any secondary indexes defined in the schema? That could lead to
a 'mega row' pretty easily depending on the cardinality of the value.
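To illustrate (hypothetical table and column names, not from your schema):
every row that shares an indexed value ends up under a single entry in the
internal index table, so a low-cardinality column can produce one enormous
index row, e.g.

  CREATE TABLE users (
      id uuid PRIMARY KEY,
      country text,   -- only a handful of distinct values
      name text
  );

  -- Each distinct 'country' value becomes a single row in the hidden
  -- index table, collecting the keys of every matching user.
  CREATE INDEX users_country_idx ON users (country);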
On Mon, Dec 9, 2013 at 3:02 AM, Klaus Brunner <klaus.brun...@gmail.com> wrote:
We're running largely default settings, with the exception of shard
(1) and replica (0-n) counts and EC2-related snitch etc.
2013/12/9 Nate McCall <n...@thelastpickle.com>:
Do you have any secondary indexes defined in the schema? That could lead to
a 'mega row' pretty easily depending on the cardinality of the value.
That's an interesting point - but no, we don't have any secondary
indexes anywhere. From the heap dump,
We're getting fairly reproducible OOMs on a 2-node cluster using
Cassandra 1.2.11, typically in situations with a heavy read load. A
sample of some stack traces is at
https://gist.github.com/KlausBrunner/7820902 - they're all failing
somewhere down from table.getRow(), though I don't know if
Hi,
Just taking a wild shot here, sorry if it does not help. Could it be thrown
while reading the SSTables? If so, try to find the configuration parameters
that affect read operations and tune them down a little. Also check the
chunk_length_kb setting.
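For example (hypothetical keyspace and table names), chunk_length_kb is a
per-table compression sub-option; a smaller chunk means less data has to be
decompressed for each read, something like:

  ALTER TABLE my_keyspace.my_table
    WITH compression = { 'sstable_compression' : 'SnappyCompressor',
                         'chunk_length_kb' : 16 };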
I am not sure if you have had a chance to take a look at these:
http://www.datastax.com/docs/1.1/troubleshooting/index#oom
http://www.datastax.com/docs/1.1/install/recommended_settings
Can you attach the Cassandra logs and the cassandra.yaml? They should
give us more details about the