My sstable size is 192MB. I removed some data directories to reduce the
amount of data that needed to be loaded, and this time it worked, so I was
sure this was because the data set was too large.
I tried to tune the JVM parameters, like heap size and stack size, but
that didn't help. I finally got it resolved by adding some options to
'/etc/sysctl.conf':

> # Controls the maximum number of process IDs (PIDs)
> kernel.pid_max = 9999999
> # Controls the maximum number of threads
> kernel.threads-max = 9999999
> # Controls the maximum number of memory map areas a process may have
> vm.max_map_count = 9999999
>

Hope this is helpful to others, but any other advice is also welcome.
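
For reference, the settings can be applied and checked without a reboot
along these lines; this is just a sketch, and the 'CassandraDaemon' pgrep
pattern is only an example, so adjust it for your own setup:

    # Reload /etc/sysctl.conf without rebooting
    sudo sysctl -p
    # Confirm the new limits are in effect
    sysctl kernel.pid_max kernel.threads-max vm.max_map_count
    # Rough count of memory map areas used by the Cassandra process,
    # to compare against vm.max_map_count
    wc -l /proc/$(pgrep -f CassandraDaemon)/maps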

On Wed, Sep 17, 2014 at 11:54 PM, Rahul Neelakantan <ra...@rahul.be> wrote:

> What is your sstable size set to for each of the sstables, using LCS? Are
> you at the default of 5 MB?
>
> Rahul Neelakantan
>
> On Sep 17, 2014, at 10:58 AM, Yatong Zhang <bluefl...@gmail.com> wrote:
>
> sorry, about 300k+
>
> On Wed, Sep 17, 2014 at 10:56 PM, Yatong Zhang <bluefl...@gmail.com>
> wrote:
>
>> no, I am running a 64 bit JVM. But I have many sstable files, about 30k+
>>
>> On Wed, Sep 17, 2014 at 10:50 PM, graham sanderson <gra...@vast.com>
>> wrote:
>>
>>> Are you running on a 32 bit JVM?
>>>
>>> On Sep 17, 2014, at 9:43 AM, Yatong Zhang <bluefl...@gmail.com> wrote:
>>>
>>> Hi there,
>>>
>>> I am using leveled compaction strategy and have many sstable files. The
>>> error was during the startup, so any idea about this?
>>>
>>>
>>>> ERROR [FlushWriter:4] 2014-09-17 22:36:59,383 CassandraDaemon.java
>>>> (line 199) Exception in thread Thread[FlushWriter:4,5,main]
>>>> java.lang.OutOfMemoryError: unable to create new native thread
>>>>         at java.lang.Thread.start0(Native Method)
>>>>         at java.lang.Thread.start(Thread.java:693)
>>>>         at
>>>> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:949)
>>>>         at
>>>> java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:1017)
>>>>         at
>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1163)
>>>>         at
>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>         at java.lang.Thread.run(Thread.java:724)
>>>> ERROR [FlushWriter:2] 2014-09-17 22:36:59,472 CassandraDaemon.java
>>>> (line 199) Exception in thread Thread[FlushWriter:2,5,main]
>>>> FSReadError in
>>>> /data5/cass/system/compactions_in_progress/system-compactions_in_progress-jb-23-Index.db
>>>>         at
>>>> org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.createSegments(MmappedSegmentedFile.java:200)
>>>>         at
>>>> org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.complete(MmappedSegmentedFile.java:168)
>>>>         at
>>>> org.apache.cassandra.io.sstable.SSTableWriter.closeAndOpenReader(SSTableWriter.java:334)
>>>>         at
>>>> org.apache.cassandra.io.sstable.SSTableWriter.closeAndOpenReader(SSTableWriter.java:324)
>>>>         at
>>>> org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:394)
>>>>         at
>>>> org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:342)
>>>>         at
>>>> org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
>>>>         at
>>>> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>>>>         at
>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>>         at
>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>         at java.lang.Thread.run(Thread.java:724)
>>>> Caused by: java.io.IOException: Map failed
>>>>         at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:849)
>>>>         at
>>>> org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.createSegments(MmappedSegmentedFile.java:192)
>>>>         ... 10 more
>>>> Caused by: java.lang.OutOfMemoryError: Map failed
>>>>         at sun.nio.ch.FileChannelImpl.map0(Native Method)
>>>>         at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:846)
>>>>         ... 11 more
>>>>
>>>
>>>
>>>
>>
>
