Re: java.lang.OutOfMemoryError: unable to create new native thread

2014-09-18 Thread J. Ryan Earl
What's the 'ulimit -a' output of the user Cassandra runs as?  From this and
your previous OOM thread, it sounds like you skipped the requisite OS
configuration.
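
For example, a minimal check would be something like this (assuming the
service user is named 'cassandra'; adjust to your install):

sudo -u cassandra bash -c 'ulimit -a'
# or inspect the limits of the live process directly:
cat /proc/$(pgrep -f CassandraDaemon)/limits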

On Wed, Sep 17, 2014 at 9:43 AM, Yatong Zhang bluefl...@gmail.com wrote:






java.lang.OutOfMemoryError: unable to create new native thread

2014-09-17 Thread Yatong Zhang
Hi there,

I am using the leveled compaction strategy and have many sstable files. The
error occurred during startup; any ideas about this?


 ERROR [FlushWriter:4] 2014-09-17 22:36:59,383 CassandraDaemon.java (line
 199) Exception in thread Thread[FlushWriter:4,5,main]
 java.lang.OutOfMemoryError: unable to create new native thread
 at java.lang.Thread.start0(Native Method)
 at java.lang.Thread.start(Thread.java:693)
 at
 java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:949)
 at
 java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:1017)
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1163)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:724)
 ERROR [FlushWriter:2] 2014-09-17 22:36:59,472 CassandraDaemon.java (line
 199) Exception in thread Thread[FlushWriter:2,5,main]
 FSReadError in
 /data5/cass/system/compactions_in_progress/system-compactions_in_progress-jb-23-Index.db
 at
 org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.createSegments(MmappedSegmentedFile.java:200)
 at
 org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.complete(MmappedSegmentedFile.java:168)
 at
 org.apache.cassandra.io.sstable.SSTableWriter.closeAndOpenReader(SSTableWriter.java:334)
 at
 org.apache.cassandra.io.sstable.SSTableWriter.closeAndOpenReader(SSTableWriter.java:324)
 at
 org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:394)
 at
 org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:342)
 at
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
 at
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:724)
 Caused by: java.io.IOException: Map failed
 at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:849)
 at
 org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.createSegments(MmappedSegmentedFile.java:192)
 ... 10 more
 Caused by: java.lang.OutOfMemoryError: Map failed
 at sun.nio.ch.FileChannelImpl.map0(Native Method)
 at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:846)
 ... 11 more



Re: java.lang.OutOfMemoryError: unable to create new native thread

2014-09-17 Thread graham sanderson
Are you running on a 32-bit JVM?
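
A quick way to check, in case you're not sure (the exact output wording
varies by JVM build):

java -version
# a 64-bit HotSpot build prints something like "64-Bit Server VM"
java -XshowSettings:properties -version 2>&1 | grep sun.arch.data.model
# prints sun.arch.data.model = 64 on a 64-bit JVM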

On Sep 17, 2014, at 9:43 AM, Yatong Zhang bluefl...@gmail.com wrote:

 





Re: java.lang.OutOfMemoryError: unable to create new native thread

2014-09-17 Thread Yatong Zhang
No, I am running a 64-bit JVM. But I have many sstable files, about 30k+
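
For reference, a rough way to count the data files, assuming the
/dataN/cass directory layout from the log above (each sstable has exactly
one Data.db component):

find /data*/cass -name '*-Data.db' | wc -l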

On Wed, Sep 17, 2014 at 10:50 PM, graham sanderson gra...@vast.com wrote:

 Are you running on a 32-bit JVM?







Re: java.lang.OutOfMemoryError: unable to create new native thread

2014-09-17 Thread Yatong Zhang
Sorry, about 300k+.

On Wed, Sep 17, 2014 at 10:56 PM, Yatong Zhang bluefl...@gmail.com wrote:

 No, I am running a 64-bit JVM. But I have many sstable files, about 30k+








Re: java.lang.OutOfMemoryError: unable to create new native thread

2014-09-17 Thread Rahul Neelakantan
What is your sstable size set to for LCS? Are you at the default of 5 MB?
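
If it needs checking or changing, the size is a per-table compaction
option; a rough sketch via cqlsh (keyspace/table names here are
placeholders):

cqlsh <<'CQL'
-- show the current compaction settings for the table
DESCRIBE TABLE my_keyspace.my_table;
-- change the target sstable size (192 MB used only as an example)
ALTER TABLE my_keyspace.my_table
  WITH compaction = {'class': 'LeveledCompactionStrategy',
                     'sstable_size_in_mb': 192};
CQL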

Rahul Neelakantan

 On Sep 17, 2014, at 10:58 AM, Yatong Zhang bluefl...@gmail.com wrote:
 
 Sorry, about 300k+.
 
 


Re: java.lang.OutOfMemoryError: unable to create new native thread

2014-09-17 Thread Yatong Zhang
My sstable size is 192MB. I removed some data directories to reduce the
data that needed to be loaded, and this time it worked, so I am sure this
was because the data set was too large.
I tried tuning the JVM parameters, like heap size and stack size, but that
didn't help. I finally resolved it by adding some options to
'/etc/sysctl.conf':

# Controls the maximum number of PIDs
kernel.pid_max = 999
# Controls the maximum number of threads
kernel.threads-max = 999
# Controls the maximum number of virtual memory areas
vm.max_map_count = 999
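
To apply the new values without a reboot and double-check that they took
effect, something like:

sudo sysctl -p /etc/sysctl.conf
sysctl kernel.pid_max kernel.threads-max vm.max_map_count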


Hope this is helpful to others, but any other advice is also welcome.

On Wed, Sep 17, 2014 at 11:54 PM, Rahul Neelakantan ra...@rahul.be wrote:

 What is your sstable size set to for LCS? Are you at the default of 5 MB?

 Rahul Neelakantan









Re: java.lang.OutOfMemoryError: unable to create new native thread

2014-09-17 Thread Chris Lohfink
Check that the limits here are set correctly:
http://www.datastax.com/documentation/cassandra/2.0/cassandra/install/installRecommendSettings.html

particularly:
 * the mmap limit, which is really what this looks like (a sketch of the recommended settings follows below)...
 * the nproc limit, which on some distros defaults to 1024 and can be your issue (maximum number of threads).. I don't think this is it, but maybe.
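
As a sketch, the settings that page recommends look roughly like the
following (user name and exact values per the linked docs; adjust for
your install):

# /etc/security/limits.d/cassandra.conf
cassandra - memlock unlimited
cassandra - nofile  100000
cassandra - nproc   32768
cassandra - as      unlimited

# and for the mmap side, in /etc/sysctl.conf:
vm.max_map_count = 131072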

If you think you have it all set correctly but are still hitting the limit,
verify that the process is actually picking the settings up:

cat /proc/`cat /var/run/cassandra/cassandra.pid`/limits

or

cat /proc/whateverCassandraPIDIs/limits

should look (something) like:

Limit                     Soft Limit   Hard Limit   Units
Max cpu time              unlimited    unlimited    seconds
Max file size             unlimited    unlimited    bytes
Max data size             unlimited    unlimited    bytes
Max stack size            8388608      unlimited    bytes
Max core file size        unlimited    unlimited    bytes
Max resident set          unlimited    unlimited    bytes
Max processes             unlimited    unlimited    processes
Max open files            10           10           files
Max locked memory         unlimited    unlimited    bytes
Max address space         unlimited    unlimited    bytes
Max file locks            unlimited    unlimited    locks
Max pending signals       16382        16382        signals
Max msgqueue size         819200       819200       bytes
Max nice priority         20           20
Max realtime priority     0            0
Max realtime timeout      unlimited    unlimited    us

---
Chris Lohfink

On Sep 17, 2014, at 6:09 PM, Yatong Zhang bluefl...@gmail.com wrote:

 My sstable size is 192MB. I removed some data directories to reduce the
 data that needed to be loaded, and this time it worked, so I am sure this
 was because the data set was too large.
 I tried tuning the JVM parameters, like heap size and stack size, but that
 didn't help. I finally resolved it by adding some options to
 '/etc/sysctl.conf'.