Re: Bootstrap OOM issues with Cassandra 3.11.1

2018-08-07 Thread Laszlo Szabo
The last run I attempted used 135GB of RAM allocated to the JVM (arguments
below), and while there are OOM errors, there is no stack trace in either
the system or debug log.  On direct-memory runs, there is a stack trace; the
last direct-memory run used a 60GB heap and 60GB off-heap (that was the
stack trace attached).  The HPROF file is 135GB and I'm trying to generate
the heap information from it now, but it's been running for 2 hours.
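
For a dump this size, the headless batch mode of Eclipse MAT is usually
faster than an interactive parse.  A hedged sketch, assuming the
ParseHeapDump.sh script that ships with MAT, a machine with enough RAM to
parse the file, and the HeapDumpPath from the JVM arguments below:

    # Headless Eclipse MAT parse of the dump, emitting the leak-suspects report:
    ./ParseHeapDump.sh /var/lib/cassandra/java_1533559747.hprof org.eclipse.mat.api:suspects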

The closest I can get to the stack trace for the 135GB heap run is below.

ERROR [PERIODIC-COMMIT-LOG-SYNCER] 2018-08-07 00:34:13,980
> JVMStabilityInspector.java:82 - Exiting due to error while processing
> commit log during initialization.
> java.lang.OutOfMemoryError: Java heap space
> ERROR [MessagingService-Incoming-/10.1.1.11] 2018-08-07 00:34:13,980
> CassandraDaemon.java:228 - Exception in thread
> Thread[MessagingService-Incoming-/10.1.1.11,5,main]
> java.lang.OutOfMemoryError: Java heap space
> ERROR [HintsWriteExecutor:1] 2018-08-07 00:34:13,981
> CassandraDaemon.java:228 - Exception in thread
> Thread[HintsWriteExecutor:1,5,main]
> java.lang.OutOfMemoryError: Java heap space
> ERROR [MessagingService-Incoming-/10.1.1.13] 2018-08-07 00:34:13,980
> CassandraDaemon.java:228 - Exception in thread
> Thread[MessagingService-Incoming-/10.1.1.13,5,main]
> java.lang.OutOfMemoryError: Java heap space
> INFO  [Service Thread] 2018-08-07 00:34:13,982 StatusLogger.java:101 -
> system.schema_triggers  0,0



JVM Arguments: [-Xloggc:/var/log/cassandra/gc.log, -ea,
> -XX:+UseThreadPriorities, -XX:ThreadPriorityPolicy=42,
> -XX:+HeapDumpOnOutOfMemoryError, -Xss256k, -XX:StringTableSize=103,
> -XX:+AlwaysPreTouch, -XX:-UseBiasedLocking, -XX:+UseTLAB, -XX:+ResizeTLAB,
> -XX:+UseNUMA, -XX:+PerfDisableSharedMem, -Djava.net.preferIPv4Stack=true,
> -Xms135G, -Xmx135G, -XX:+UseParNewGC, -XX:+UseConcMarkSweepGC,
> -XX:+CMSParallelRemarkEnabled, -XX:SurvivorRatio=8,
> -XX:MaxTenuringThreshold=1, -XX:CMSInitiatingOccupancyFraction=75,
> -XX:+UseCMSInitiatingOccupancyOnly, -XX:CMSWaitDuration=1,
> -XX:+CMSParallelInitialMarkEnabled, -XX:+CMSEdenChunksRecordAlways,
> -XX:+CMSClassUnloadingEnabled, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps,
> -XX:+PrintHeapAtGC, -XX:+PrintTenuringDistribution,
> -XX:+PrintGCApplicationStoppedTime, -XX:+PrintPromotionFailure,
> -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=10,
> -XX:GCLogFileSize=10M, -Xmn2048M, -XX:+UseCondCardMark,
> -XX:CompileCommandFile=/etc/cassandra/hotspot_compiler,
> -javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar,
> -Dcassandra.jmx.local.port=7199,
> -Dcom.sun.management.jmxremote.authenticate=false,
> -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password,
> -Djava.library.path=/usr/share/cassandra/lib/sigar-bin,
> -Dlogback.configurationFile=logback.xml,
> -Dcassandra.logdir=/var/log/cassandra,
> -Dcassandra.storagedir=/var/lib/cassandra,
> -Dcassandra-pidfile=/var/run/cassandra/cassandra.pid,
> -XX:HeapDumpPath=/var/lib/cassandra/java_1533559747.hprof,
> -XX:ErrorFile=/var/lib/cassandra/hs_err_1533559747.log]
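
One thing notably absent from that list is -XX:MaxDirectMemorySize: when it
is unset, HotSpot caps NIO direct buffers at roughly the -Xmx value, so a
135G heap also implies an enormous direct-memory ceiling.  A minimal
jvm.options sketch (the 16G figure is purely illustrative, not a
recommendation from this thread):

    # Hypothetical addition: bound direct ByteBuffer allocations explicitly
    # instead of inheriting a cap equal to the configured heap size.
    -XX:MaxDirectMemorySize=16G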



On Tue, Aug 7, 2018 at 11:12 AM, Jeff Jirsa wrote:

> That's a direct memory OOM - it's not the heap, it's the offheap.
>
> You can see that gpsmessages.addressreceivedtime_idx
> is holding about 2GB of offheap memory (most of it for the bloom filter),
> but none of the others look like they're holding a ton offheap (either in
> bloom filter, memtable, etc).  With what JVM args are you starting
> cassandra (how much direct memory are you allocating)? Are all of your OOMs
> in direct memory?

Re: Bootstrap OOM issues with Cassandra 3.11.1

2018-08-07 Thread Jeff Jirsa
That's a direct memory OOM - it's not the heap, it's the offheap.

You can see that gpsmessages.addressreceivedtime_idx
is holding about 2GB of offheap memory (most of it for the bloom filter),
but none of the others look like they're holding a ton offheap (either in
bloom filter, memtable, etc).  With what JVM args are you starting
cassandra (how much direct memory are you allocating)? Are all of your OOMs
in direct memory?
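
For what it's worth, a hedged way to read that offheap breakdown straight
from a node; cfstats in 3.11 reports per-table off-heap usage (the table
name is taken from this thread, the grep pattern is illustrative):

    # Bloom filter / memtable / index summary / compression off-heap lines only:
    nodetool cfstats userinfo.gpsmessages | grep -i 'off heap'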




Re: Bootstrap OOM issues with Cassandra 3.11.1

2018-08-07 Thread Jonathan Haddad
By default Cassandra is set to generate a heap dump on OOM. It can be a bit
tricky to figure out exactly what's going on, but it's the best evidence you
can work with.
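
The dump lands wherever -XX:HeapDumpPath points (visible in the JVM
arguments earlier in the thread).  A hedged sketch using only the JDK 8
tooling; jhat is slow on very large dumps but needs nothing beyond the JDK:

    # Serve a browsable object view of the OOM heap dump on port 7000:
    jhat -port 7000 /var/lib/cassandra/java_1533559747.hprof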


Re: Bootstrap OOM issues with Cassandra 3.11.1

2018-08-07 Thread Laszlo Szabo
Hi,

Thanks for the fast response!

We are not using any materialized views, but there are several indexes.  I
don't have a recent heap dump, and it will be about 24 hours before I can
generate an interesting one, but most of the memory was allocated to byte
buffers, so that is not entirely helpful.

nodetool cfstats is also below.

I also see a lot of flushing happening, but it seems like there are too
many small allocations for it to be effective.  Here are the messages I see
(a sketch for sampling the same pressure on demand follows them):

DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,459 ColumnFamilyStore.java:1305
> - Flushing largest CFS(Keyspace='userinfo', ColumnFamily='gpsmessages') to
> free up room. Used total: 0.54/0.05, live: 0.00/0.00, flushing: 0.40/0.04,
> this: 0.00/0.00

DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,459 ColumnFamilyStore.java:915
> - Enqueuing flush of gpsmessages: 0.000KiB (0%) on-heap, 0.014KiB (0%)
> off-heap

DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,460 ColumnFamilyStore.java:1305
> - Flushing largest CFS(Keyspace='userinfo', ColumnFamily='user_history') to
> free up room. Used total: 0.54/0.05, live: 0.00/0.00, flushing: 0.40/0.04,
> this: 0.00/0.00

DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,461 ColumnFamilyStore.java:915
> - Enqueuing flush of user_history: 0.000KiB (0%) on-heap, 0.011KiB (0%)
> off-heap

DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,465 ColumnFamilyStore.java:1305
> - Flushing largest CFS(Keyspace='userinfo', ColumnFamily='tweets') to free
> up room. Used total: 0.54/0.05, live: 0.00/0.00, flushing: 0.40/0.04, this:
> 0.00/0.00

DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,465 ColumnFamilyStore.java:915
> - Enqueuing flush of tweets: 0.000KiB (0%) on-heap, 0.188KiB (0%) off-heap

DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,470 ColumnFamilyStore.java:1305
> - Flushing largest CFS(Keyspace='userinfo', ColumnFamily='user_history') to
> free up room. Used total: 0.54/0.05, live: 0.00/0.00, flushing: 0.40/0.04,
> this: 0.00/0.00

DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,470 ColumnFamilyStore.java:915
> - Enqueuing flush of user_history: 0.000KiB (0%) on-heap, 0.024KiB (0%)
> off-heap

DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,470 ColumnFamilyStore.java:1305
> - Flushing largest CFS(Keyspace='userinfo', ColumnFamily='tweets') to free
> up room. Used total: 0.54/0.05, live: 0.00/0.00, flushing: 0.40/0.04, this:
> 0.00/0.00

DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,470 ColumnFamilyStore.java:915
> - Enqueuing flush of tweets: 0.000KiB (0%) on-heap, 0.188KiB (0%) off-heap

DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,472 ColumnFamilyStore.java:1305
> - Flushing largest CFS(Keyspace='userinfo', ColumnFamily='gpsmessages') to
> free up room. Used total: 0.54/0.05, live: 0.00/0.00, flushing: 0.40/0.04,
> this: 0.00/0.00

DEBUG [SlabPoolCleaner] 2018-08-06 07:16:08,472 ColumnFamilyStore.java:915
> - Enqueuing flush of gpsmessages: 0.000KiB (0%) on-heap, 0.013KiB (0%)
> off-heap
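
A hedged way to sample that memtable/flush pressure on demand rather than
waiting for log lines like the above (the egrep pattern is illustrative):

    # Snapshot the thread-pool counters (Active/Pending/Completed) directly:
    nodetool tpstats | egrep 'Pool Name|Memtable|Mutation'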



Stack traces from errors are below.


> java.io.IOException: Broken pipe

at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
> ~[na:1.8.0_181]

at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
> ~[na:1.8.0_181]

at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
> ~[na:1.8.0_181]

at sun.nio.ch.IOUtil.write(IOUtil.java:51) ~[na:1.8.0_181]

at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
> ~[na:1.8.0_181]

at
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.doFlush(BufferedDataOutputStreamPlus.java:323)
> ~[apache-cassandra-3.11.1.jar:3.11.1]

at
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.flush(BufferedDataOutputStreamPlus.java:331)
> ~[apache-cassandra-3.11.1.jar:3.11.1]

at
> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:409)
> [apache-cassandra-3.11.1.jar:3.11.1]

at
> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:380)
> [apache-cassandra-3.11.1.jar:3.11.1]

at java.lang.Thread.run(Thread.java:748) [na:1.8.0_181]

ERROR [MutationStage-226] 2018-08-06 07:16:08,236
> JVMStabilityInspector.java:142 - JVM state determined to be unstable.
> Exiting forcefully due to:

java.lang.OutOfMemoryError: Direct buffer memory

at java.nio.Bits.reserveMemory(Bits.java:694) ~[na:1.8.0_181]

at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
> ~[na:1.8.0_181]

at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> ~[na:1.8.0_181]

at
> org.apache.cassandra.utils.memory.SlabAllocator.getRegion(SlabAllocator.java:139)
> ~[apache-cassandra-3.11.1.jar:3.11.1]

at
> org.apache.cassandra.utils.memory.SlabAllocator.allocate(SlabAllocator.java:104)
> ~[apache-cassandra-3.11.1.jar:3.11.1]

at
> org.apache.cassandra.utils.memory.ContextAllocator.allocate(ContextAllocator.java:57)
> ~[apache-cassandra-3.11.1.jar:3.11.1]

at
> 

Re: Bootstrap OOM issues with Cassandra 3.11.1

2018-08-06 Thread Jeff Jirsa


Upgrading to 3.11.3 may fix it (there were some memory recycling bugs fixed
recently), but analyzing the heap will be the best option.

If you can print out the heap histogram and stack trace, or open a heap dump
in YourKit, VisualVM, or MAT and show us what's at the top of the reclaimed
objects, we may be able to figure out what's going on.
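
A hedged one-liner for the histogram half of that suggestion, run against
the live process (JDK 8 jmap; -histo:live forces a full GC first, so what
remains is the unreclaimed set; the pgrep pattern is illustrative):

    # Top of the live-object histogram for the Cassandra JVM:
    jmap -histo:live $(pgrep -f CassandraDaemon) | head -n 30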

-- 
Jeff Jirsa



Re: Bootstrap OOM issues with Cassandra 3.11.1

2018-08-06 Thread Jeff Jirsa
Are you using materialized views or secondary indices? 

-- 
Jeff Jirsa


> On Aug 6, 2018, at 3:49 PM, Laszlo Szabo wrote:
> 
> Hello All,
> 
> I'm having JVM unstable / OOM errors when attempting to auto bootstrap a 9th 
> node to an existing 8 node cluster (256 tokens).  Each machine has 24 cores, 
> 148GB RAM, and 10TB of disk (2TB used).  Under normal operation the 8 nodes 
> have JVM memory configured with Xms35G and Xmx35G, and handle 2-4 billion 
> inserts per day.  There are never updates, deletes, or sparsely populated rows.  
> 
> For the bootstrap node, I've tried memory values from 35GB to 135GB in 10GB 
> increments. I've tried using both memtable_allocation_types (heap_buffers and 
> offheap_buffers).  I've not tried modifying the memtable_cleanup_threshold 
> but instead have tried memtable_flush_writers from 2 to 8.  I've tried 
> memtable_(off)heap_space_in_mb from 2 to 6.  I've tried both CMS and 
> G1 garbage collection with various settings.  
> 
> Typically, after streaming ~2TB of data, CPU load will hit a maximum, 
> and the "nodetool info" heap memory will, over the course of an hour, 
> approach the maximum.  At that point, CPU load will drop to a single thread 
> with minimal activity until the system becomes unstable and eventually the 
> OOM error occurs.
> 
> Excerpt of the system log is below, and what I consistently see is that the 
> MemtableFlushWriter and MemtableReclaimMemory pending queues grow as the 
> memory becomes depleted, but the number of completed seems to stop changing a 
> few minutes after the CPU load spikes.
> 
> One other data point is that there seems to be a huge number of mutations that 
> occur after most of the stream has occurred.  concurrent_writes is set at 256 
> with the queue getting as high as 200K before dropping down.  
> 
> Any suggestions for yaml changes or jvm changes?  jvm.options is currently 
> the default with the memory set to the max; the current YAML file is below.
> 
> Thanks!
> 
> 
>> INFO  [ScheduledTasks:1] 2018-08-06 17:49:26,329 StatusLogger.java:51 - MutationStage                 1      2  191498052      0      0
>> INFO  [ScheduledTasks:1] 2018-08-06 17:49:26,331 StatusLogger.java:51 - ViewMutationStage             0      0          0      0      0
>> INFO  [Service Thread] 2018-08-06 17:49:26,338 StatusLogger.java:51 - PerDiskMemtableFlushWriter_0    0      0       5865      0      0
>> INFO  [ScheduledTasks:1] 2018-08-06 17:49:26,343 StatusLogger.java:51 - ReadStage                     0      0          0      0      0
>> INFO  [Service Thread] 2018-08-06 17:49:26,347 StatusLogger.java:51 - ValidationExecutor              0      0          0      0      0
>> INFO  [ScheduledTasks:1] 2018-08-06 17:49:26,360 StatusLogger.java:51 - RequestResponseStage          0      0          8      0      0
>> INFO  [Service Thread] 2018-08-06 17:49:26,380 StatusLogger.java:51 - Sampler                         0      0          0      0      0
>> INFO  [Service Thread] 2018-08-06 17:49:26,382 StatusLogger.java:51 - MemtableFlushWriter             8  74293       4716      0      0
>> INFO  [ScheduledTasks:1] 2018-08-06 17:49:26,388 StatusLogger.java:51 - ReadRepairStage               0      0          0      0      0
>> INFO  [ScheduledTasks:1] 2018-08-06 17:49:26,389 StatusLogger.java:51 - CounterMutationStage          0      0          0      0      0
>> INFO  [ScheduledTasks:1] 2018-08-06 17:49:26,404 StatusLogger.java:51 - MiscStage                     0      0          0      0      0
>> INFO  [ScheduledTasks:1] 2018-08-06 17:49:26,407 StatusLogger.java:51 - CompactionExecutor813493      0      0
>> INFO  [Service Thread] 2018-08-06 17:49:26,410 StatusLogger.java:51 - InternalResponseStage           0      0         16      0      0
>> INFO  [ScheduledTasks:1] 2018-08-06 17:49:26,413 StatusLogger.java:51 - MemtableReclaimMemory         1  6066356      0      0
>> INFO  [Service Thread] 2018-08-06 17:49:26,421 StatusLogger.java:51 - AntiEntropyStage                0      0          0      0      0
>> INFO  [Service Thread] 2018-08-06 17:49:26,430 StatusLogger.java:51 - CacheCleanupExecutor            0      0          0      0      0
>> INFO  [ScheduledTasks:1] 2018-08-06 17:49:26,431 StatusLogger.java:51 - PendingRangeCalculator        0      0          9      0      0
>> INFO  [Service Thread] 2018-08-06 17:49:26,436 StatusLogger.java:61 -