Babar Tareen created CASSANDRA-8152:
---------------------------------------
Summary: Cassandra crashes with Native memory allocation failure
Key: CASSANDRA-8152
URL: https://issues.apache.org/jira/browse/CASSANDRA-8152
Project: Cassandra
Issue Type: Bug
Environment: EC2 (i2.xlarge)
Reporter: Babar Tareen
Priority: Critical
Attachments: db06_hs_err_pid26159.log.zip,
db_05_hs_err_pid25411.log.zip
On a 6-node Cassandra (datastax-community-2.1) cluster running on EC2
(i2.xlarge) instances, the JVM hosting the Cassandra service randomly crashes
with the following error.
{code}
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 12288 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (os_linux.cpp:2747), pid=26159, tid=140305605682944
#
# JRE version: Java(TM) SE Runtime Environment (7.0_60-b19) (build 1.7.0_60-b19)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.60-b09 mixed mode linux-amd64 compressed oops)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
--------------- T H R E A D ---------------
Current thread (0x0000000008341000): JavaThread "MemtableFlushWriter:2055" daemon [_thread_new, id=23336, stack(0x00007f9b71c56000,0x00007f9b71c97000)]
Stack: [0x00007f9b71c56000,0x00007f9b71c97000], sp=0x00007f9b71c95820, free space=254k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V [libjvm.so+0x99e7ca] VMError::report_and_die()+0x2ea
V [libjvm.so+0x496fbb] report_vm_out_of_memory(char const*, int, unsigned long, char const*)+0x9b
V [libjvm.so+0x81d81e] os::Linux::commit_memory_impl(char*, unsigned long, bool)+0xfe
V [libjvm.so+0x81d8dc] os::pd_commit_memory(char*, unsigned long, bool)+0xc
V [libjvm.so+0x81565a] os::commit_memory(char*, unsigned long, bool)+0x2a
V [libjvm.so+0x81bdcd] os::pd_create_stack_guard_pages(char*, unsigned long)+0x6d
V [libjvm.so+0x9522de] JavaThread::create_stack_guard_pages()+0x5e
V [libjvm.so+0x958c24] JavaThread::run()+0x34
V [libjvm.so+0x81f7f8] java_start(Thread*)+0x108
{code}
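The failing frame is os::pd_create_stack_guard_pages, i.e. the JVM appears to be unable to commit a few pages of native memory while starting a new MemtableFlushWriter thread, even if heap is still available. As a starting point, a few host-level checks like the ones below may help narrow down whether RAM/swap, the overcommit policy, or a per-process limit is being hit; the commands are only a diagnostic sketch, not measurements taken from these machines, and the pid lookup is illustrative.
{code}
# Sketch of host-level checks while a node is running (pid lookup is illustrative)
PID=$(pgrep -f CassandraDaemon)

free -m                                        # physical RAM and swap actually available
sysctl vm.overcommit_memory vm.max_map_count   # overcommit policy and mmap count limit
wc -l /proc/$PID/maps                          # memory mappings currently held by the JVM
ulimit -v -u -s                                # address-space, process/thread, and stack limits
{code}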
Changes made to the cassandra-env.sh settings:
{code}
MAX_HEAP_SIZE="8G"
HEAP_NEWSIZE="800M"
JVM_OPTS="$JVM_OPTS -XX:TargetSurvivorRatio=50"
JVM_OPTS="$JVM_OPTS -XX:+AggressiveOpts"
JVM_OPTS="$JVM_OPTS -XX:+UseLargePages"
{code}
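For what it's worth, the hs_err file's own "Possible solutions" list points at reducing native memory pressure (smaller heap, smaller per-thread stacks). If that direction were explored, the change would sit in cassandra-env.sh next to the options above; the values below are purely illustrative and were not tested as part of this report.
{code}
# Illustrative only -- applying two of the crash log's "Possible solutions"
MAX_HEAP_SIZE="6G"               # smaller heap leaves more native memory for thread stacks / off-heap
JVM_OPTS="$JVM_OPTS -Xss256k"    # cap per-thread stack size (example value, not a verified fix)
{code}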
Writes are about 10K-15K/sec and there are very few reads. Cassandra 2.0.9 with
the same settings never crashed. JVM crash logs from two machines are attached.