Sean Chow created HADOOP-17209:
----------------------------------
Summary: ErasureCode native library memory leak
Key: HADOOP-17209
URL: https://issues.apache.org/jira/browse/HADOOP-17209
Project: Hadoop Common
Issue Type: Bug
Components: native
Affects Versions: 3.1.3, 3.2.1, 3.3.0
Reporter: Sean Chow
Assignee: Sean Chow
We run both {{apache-hadoop-3.1.3}} and {{CDH-6.1.1-1.cdh6.1.1.p0.875250}} HDFS
in production, and on both clusters the DataNode memory keeps growing beyond the
{{-Xmx}} value. These are the JVM options:
{code:java}
-Dproc_datanode -Dhdfs.audit.logger=INFO,RFAAUDIT
-Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true
-Xms8589934592 -Xmx8589934592 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled
-XX:+HeapDumpOnOutOfMemoryError ...{code}
The maximum JVM heap size is 8 GB, but the DataNode's resident memory (RSS) has grown to 48 GB:
{code:java}
   PID USER  PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
226044 hdfs  20  0 50.6g  48g 4780 S 90.5 77.0 14728:27 /usr/java/jdk1.8.0_162/bin/java -Dproc_datanode{code}
!image-2020-08-15-17-45-27-363.png!
!image-2020-08-15-17-50-48-598.png!
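As a side note, here is a minimal sketch (not part of the original report; a Linux {{/proc}} filesystem is assumed) of how the heap/RSS gap can be watched from inside a JVM: if the reported heap usage stays within {{-Xmx}} while {{VmRSS}} keeps climbing, the extra memory is being allocated natively, outside the Java heap.
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class RssVsHeap {
  public static void main(String[] args) throws IOException, InterruptedException {
    while (true) {
      Runtime rt = Runtime.getRuntime();
      long heapUsedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
      // VmRSS in /proc/self/status is the resident set size of the whole process,
      // including native allocations made by JNI libraries outside the Java heap.
      long rssMb = Files.readAllLines(Paths.get("/proc/self/status")).stream()
          .filter(l -> l.startsWith("VmRSS:"))
          .mapToLong(l -> Long.parseLong(l.replaceAll("\\D+", "")) / 1024)
          .findFirst().orElse(-1);
      System.out.println("heap used: " + heapUsedMb + " MB, process RSS: " + rssMb + " MB");
      Thread.sleep(10_000);
    }
  }
}{code}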
This excessive memory usage makes the machine unresponsive (when swap is enabled)
or triggers the oom-killer.
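Since the summary points at the erasure-coding native library, below is a hypothetical reproduction sketch (not from the original report) that drives the raw Reed-Solomon encoder in a loop so the process RSS can be watched in {{top}}. The RS(6,3) schema, the 1 MB cell size, and the assumption that {{libhadoop}} built with ISA-L support is on {{java.library.path}} (so the native raw coder is selected) are illustrative choices, not details taken from this issue.
{code:java}
import java.nio.ByteBuffer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.erasurecode.CodecUtil;
import org.apache.hadoop.io.erasurecode.ErasureCoderOptions;
import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder;

public class EcNativeLeakRepro {
  public static void main(String[] args) throws Exception {
    final int dataUnits = 6, parityUnits = 3, cellSize = 1024 * 1024; // assumed RS(6,3), 1 MB cells
    Configuration conf = new Configuration();
    ErasureCoderOptions options = new ErasureCoderOptions(dataUnits, parityUnits);

    // Direct buffers, as used by the native coder path.
    ByteBuffer[] inputs = new ByteBuffer[dataUnits];
    ByteBuffer[] outputs = new ByteBuffer[parityUnits];
    for (int i = 0; i < dataUnits; i++) {
      inputs[i] = ByteBuffer.allocateDirect(cellSize);
      for (int b = 0; b < cellSize; b++) {
        inputs[i].put((byte) (i + b));
      }
      inputs[i].flip();
    }
    for (int i = 0; i < parityUnits; i++) {
      outputs[i] = ByteBuffer.allocateDirect(cellSize);
    }

    // Create, use and release an encoder per iteration ("rs" is the Reed-Solomon codec name).
    // Watch the process RSS (e.g. in top) while this runs.
    for (long iter = 0; ; iter++) {
      RawErasureEncoder encoder = CodecUtil.createRawEncoder(conf, "rs", options);
      for (ByteBuffer in : inputs) {
        in.rewind();
      }
      for (ByteBuffer out : outputs) {
        out.clear();
      }
      encoder.encode(inputs, outputs);
      encoder.release();
      if (iter % 1000 == 0) {
        System.out.println("iterations: " + iter);
      }
    }
  }
}{code}
If RSS keeps growing across iterations while the Java heap stays flat, that would be consistent with a leak in the native coder rather than in the Java side.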