I am getting an out of memory error when I try to start Cassandra on one of
my nodes. Cassandra runs for about a minute and then exits without writing
any error to the log file. It happens while SSTableReader is opening a
couple hundred thousand SSTables.

I am running a 6-node cluster using Apache Cassandra 2.1.2 with DataStax
OpsCenter 5.0.2 from the AWS EC2 AMI "DataStax Auto-Clustering AMI
2.5.1-hvm" (DataStax Community AMI). I'm using m3.xlarge instances, which
have 15 GiB of memory.

By default, the formula in /etc/cassandra/cassandra-env.sh sets Xms and Xmx
to 3.6 GiB. I tried overriding with 8 GiB and 2 GiB; both result in the
same problem.
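
For reference, the override was done with the standard variables near the top
of cassandra-env.sh (the file's comments say to set both together); this is
roughly what the 8 GiB attempt looked like, with the HEAP_NEWSIZE value being
illustrative rather than exact:

    MAX_HEAP_SIZE="8G"
    HEAP_NEWSIZE="800M"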

This shows up in system.log on startup, and there is nothing after the last
line below. I'm not sure why there are over 200,000 SSTables to open (a quick
way to confirm the on-disk count is sketched after the log excerpt).

    INFO  [main] 2015-02-10 18:31:44,766 ColumnFamilyStore.java:268 - Initializing OpsCenter.settings
    INFO  [SSTableBatchOpen:1] 2015-02-10 18:31:44,767 SSTableReader.java:392 - Opening /raid0/cassandra/data/OpsCenter/settings-4455ec427ca411e4bd3f1927a2a71193/OpsCenter-settings-ka-1755 (290 bytes)
    ...
    INFO  [SSTableBatchOpen:4] 2015-02-10 18:31:44,775 SSTableReader.java:392 - Opening /raid0/cassandra/data/OpsCenter/settings-4455ec427ca411e4bd3f1927a2a71193/OpsCenter-settings-ka-1753 (288 bytes)
    INFO  [main] 2015-02-10 18:31:44,797 AutoSavingCache.java:146 - reading saved cache /raid0/cassandra/saved_caches/OpsCenter-settings-4455ec427ca411e4bd3f1927a2a71193-KeyCache-b.db
    INFO  [main] 2015-02-10 18:31:56,504 ColumnFamilyStore.java:268 - Initializing OpsCenter.rollups60
    INFO  [SSTableBatchOpen:2] 2015-02-10 18:32:08,353 SSTableReader.java:392 - Opening /raid0/cassandra/data/OpsCenter/rollups60-445613507ca411e4bd3f1927a2a71193/OpsCenter-rollups60-ka-359458 (195 bytes)
    ... (201,260 more lines like this)
    INFO  [SSTableBatchOpen:1] 2015-02-10 18:32:47,804 SSTableReader.java:392 - Opening /raid0/cassandra/data/OpsCenter/rollups60-445613507ca411e4bd3f1927a2a71193/OpsCenter-rollups60-ka-332976 (291 bytes)
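
In case it helps, a rough way to confirm the on-disk SSTable count for that
table, using the data directory path from the log above, is something like
this (each SSTable has one Data.db component, so the count should roughly
match the number of "Opening" lines):

    ls /raid0/cassandra/data/OpsCenter/rollups60-445613507ca411e4bd3f1927a2a71193/ | grep -c 'Data.db'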

When I run Cassandra directly from the command line (as opposed to starting
the service), I get error information in the output.
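
Roughly the invocation, from memory, so the exact commands are an
approximation (-f keeps Cassandra in the foreground):

    sudo service cassandra stop    # stop the packaged service first
    cassandra -f                   # run in the foreground instead

That produces: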

    Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000007fd2ec000, 1241088, 0) failed; error='Cannot allocate memory' (errno=12)
    #
    # There is insufficient memory for the Java Runtime Environment to continue.
    # Native memory allocation (malloc) failed to allocate 1241088 bytes for committing reserved memory.
    # An error report file with more information is saved as:
    # /raid0/cassandra/hs_err_pid22970.log

That log file is large; the part near the top reads:

    #  Out of Memory Error (os_linux.cpp:2726), pid=22970, tid=140587792205568
    #
    # JRE version: Java(TM) SE Runtime Environment (7.0_51-b13) (build 1.7.0_51-b13)
    # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.51-b03 mixed mode linux-amd64 compressed oops)

Does anyone know how I might get Cassandra running again on this node? I'm
not very familiar with tuning Java memory parameters, and I'm not sure heap
sizing is the right knob here anyway, since the failure looks like a native
malloc (errno=12) rather than a Java heap OutOfMemoryError.

BTW, I didn't know what an SSTable was. I found the definition here:
http://www.datastax.com/documentation/cassandra/2.1/share/glossary/gloss_sstable.html

Thank you,
 ~ Paul Nickerson
