Re: Seed Node OOM

2015-06-16 Thread Alain RODRIGUEZ
Hi,

Is your OOM on the heap or in native memory? Since 2.1 puts a lot of things
in native memory, I would say it is almost always bad to give 6 GB out of 8
to the heap (unless you have a very small data set), since the remaining
2 GB has to hold bloom filters, indexes, and more, plus the page cache if
there is any free space left.
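For an 8 GB machine like the ones in this thread, a sketch of what that
sizing advice might look like in cassandra-env.sh (the exact numbers are
illustrative, not a recommendation):

```shell
# cassandra-env.sh -- illustrative sizing for an 8 GB machine, leaving
# most of the RAM for off-heap structures and the page cache.
MAX_HEAP_SIZE="2G"   # on-heap: memtables (partially), key cache, GC headroom
HEAP_NEWSIZE="400M"  # young generation; commonly sized per CPU core
```

Tune from there based on what the GC logs and heap dumps actually show.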

If the OOM is on the heap, is there any sign of pressure in the logs
(ParNew / CMS)? Also, did you activate GC logs to troubleshoot this
manually or through a third-party application? +1 with Sebastian on the
hprof analysis.
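If GC logging is not on yet, these are the kinds of flags involved; recent
cassandra-env.sh files ship commented-out equivalents, so check yours
rather than copying this verbatim (the log path is illustrative):

```shell
# cassandra-env.sh -- enable GC logging (HotSpot JVM 7/8 flags,
# as used with Cassandra 2.1).
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
JVM_OPTS="$JVM_OPTS -XX:+PrintHeapAtGC"
JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"
```

Frequent ParNew pauses or back-to-back CMS cycles in that log are the
"pressure" to look for.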

Rob might also be right in pointing to the memory leak.

Hope this will help.

C*heers,

Alain

2015-06-15 19:39 GMT+02:00 Robert Coli rc...@eventbrite.com:

 On Sat, Jun 13, 2015 at 4:39 AM, Oleksandr Petrov 
 oleksandr.pet...@gmail.com wrote:

 We're using Cassandra, recently migrated to 2.1.6, and we're experiencing
 constant OOMs in one of our clusters.


 Maybe this memory leak?

 https://issues.apache.org/jira/browse/CASSANDRA-9549

 =Rob



Re: Seed Node OOM

2015-06-15 Thread Robert Coli
On Sat, Jun 13, 2015 at 4:39 AM, Oleksandr Petrov 
oleksandr.pet...@gmail.com wrote:

 We're using Cassandra, recently migrated to 2.1.6, and we're experiencing
 constant OOMs in one of our clusters.


Maybe this memory leak?

https://issues.apache.org/jira/browse/CASSANDRA-9549

=Rob


Re: Seed Node OOM

2015-06-13 Thread Sebastian Estevez
The commitlog size is likely a red herring. In 2.0 we had 1 GB of
commitlogs by default; in 2.1 we have 8 GB by default. This is configurable
in the yaml.
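For reference, the knob in question is `commitlog_total_space_in_mb` in
cassandra.yaml; the value below is illustrative (the 2.1 default
corresponds to the 8 GB mentioned above):

```yaml
# cassandra.yaml -- cap on total commitlog size; hitting it triggers
# memtable flushes so the oldest segments can be recycled.
commitlog_total_space_in_mb: 1024   # e.g. back to a 2.0-era 1 GB cap
```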

Not sure what's causing the OOM. Did it generate an hprof file you can
analyze?
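If no .hprof was produced, a sketch of how to make sure one appears next
time (recent cassandra-env.sh versions may set this already; the dump path
is illustrative):

```shell
# cassandra-env.sh -- have the JVM write an .hprof when it OOMs.
JVM_OPTS="$JVM_OPTS -XX:+HeapDumpOnOutOfMemoryError"
JVM_OPTS="$JVM_OPTS -XX:HeapDumpPath=/var/lib/cassandra"
# For a still-running node, `jmap -dump:live,format=b,file=/tmp/c.hprof <pid>`
# captures a dump on demand; open it in Eclipse MAT (or jhat) and check
# the dominator tree for what is retaining the heap.
```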
On Jun 13, 2015 7:42 AM, Oleksandr Petrov oleksandr.pet...@gmail.com
wrote:

 Sorry, I completely forgot to mention it in the original message: we have
 a rather large commitlog directory (it is usually rather small), 8 GB of
 commitlogs. Draining and flushing didn't help.

 On Sat, Jun 13, 2015 at 1:39 PM, Oleksandr Petrov 
 oleksandr.pet...@gmail.com wrote:

 Hi,

 We're using Cassandra, recently migrated to 2.1.6, and we're experiencing
 constant OOMs in one of our clusters.

 It's a rather small cluster: 3 nodes, EC2 xlarge: 2CPUs, 8GB RAM, set up
 with datastax AMI.

 Configs (yaml and env.sh) are mostly default: we've only changed
 concurrent compactions to 2 (although we tried 1, too), and tried setting
 HEAP and NEW to different values, ranging from 4G/200M to 6G/200M.

 Write load is rather small: 200-300 small payloads (4 varchar fields as a
 primary key, 2 varchar fields and a couple of long/double fields), plus
 some larger (1-2kb) payloads with a rate of 10-20 messages per second.

 We do a lot of range scans, but they are rather quick.

 It kind of started overnight. Compaction is taking a long time. The other
 two nodes in the cluster behave absolutely normally: no hinted handoffs,
 normal heap sizes. There were no write bursts, no tables added, no indexes
 changed.

 Anyone experienced something similar? Maybe any pointers?

 --
 alex p




 --
 alex p



Re: Seed Node OOM

2015-06-13 Thread Oleksandr Petrov
Sorry, I completely forgot to mention it in the original message: we have a
rather large commitlog directory (it is usually rather small), 8 GB of
commitlogs. Draining and flushing didn't help.
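For the record, the drain/flush sequence tried was roughly the following
(illustrative; run against the affected node):

```shell
nodetool flush   # flush memtables to SSTables so old commitlog
                 # segments become eligible for recycling
nodetool drain   # flush everything and stop accepting writes;
                 # normally leaves the commitlog near-empty
```

A commitlog that stays at 8 GB after this suggests segments are being
pinned, or simply that the 2.1 default cap has been reached.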

On Sat, Jun 13, 2015 at 1:39 PM, Oleksandr Petrov 
oleksandr.pet...@gmail.com wrote:

 Hi,

 We're using Cassandra, recently migrated to 2.1.6, and we're experiencing
 constant OOMs in one of our clusters.

 It's a rather small cluster: 3 nodes, EC2 xlarge: 2CPUs, 8GB RAM, set up
 with datastax AMI.

 Configs (yaml and env.sh) are mostly default: we've only changed
 concurrent compactions to 2 (although we tried 1, too), and tried setting
 HEAP and NEW to different values, ranging from 4G/200M to 6G/200M.

 Write load is rather small: 200-300 small payloads (4 varchar fields as a
 primary key, 2 varchar fields and a couple of long/double fields), plus
 some larger (1-2kb) payloads with a rate of 10-20 messages per second.

 We do a lot of range scans, but they are rather quick.

 It kind of started overnight. Compaction is taking a long time. The other
 two nodes in the cluster behave absolutely normally: no hinted handoffs,
 normal heap sizes. There were no write bursts, no tables added, no indexes
 changed.

 Anyone experienced something similar? Maybe any pointers?

 --
 alex p




-- 
alex p


Seed Node OOM

2015-06-13 Thread Oleksandr Petrov
Hi,

We're using Cassandra, recently migrated to 2.1.6, and we're experiencing
constant OOMs in one of our clusters.

It's a rather small cluster: 3 nodes, EC2 xlarge (2 CPUs, 8 GB RAM), set up
with the DataStax AMI.

Configs (yaml and env.sh) are mostly default: we've only changed concurrent
compactions to 2 (although we tried 1, too), and tried setting HEAP and NEW
to different values, ranging from 4G/200M to 6G/200M.

Write load is rather small: 200-300 small payloads (4 varchar fields as a
primary key, 2 varchar fields, and a couple of long/double fields), plus
some larger (1-2 KB) payloads at a rate of 10-20 messages per second.

We do a lot of range scans, but they are rather quick.

It kind of started overnight. Compaction is taking a long time. The other
two nodes in the cluster behave absolutely normally: no hinted handoffs,
normal heap sizes. There were no write bursts, no tables added, no indexes
changed.
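A sketch of how to compare the sick node against a healthy one with
standard nodetool commands (illustrative; run on both and diff the output):

```shell
nodetool info              # heap used vs. max, key cache size, uptime
nodetool compactionstats   # pending compactions and bytes remaining
nodetool cfstats           # per-table SSTable counts, bloom filter sizes
nodetool tpstats           # dropped messages / backed-up thread pools
```

A seed node with far more SSTables or pending compactions than its peers
would point at compaction falling behind rather than at the write path.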

Anyone experienced something similar? Maybe any pointers?

-- 
alex p