[
https://issues.apache.org/jira/browse/HDFS-12936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16296000#comment-16296000
]
Jepson edited comment on HDFS-12936 at 1/8/18 1:31 AM:
-------------------------------------------------------
[~anu] [~cheersyang] [~alicezhangchen] Thank you very much.
I turned up these parameters:
{code:java}
1.
echo "kernel.threads-max=196605" >> /etc/sysctl.conf
echo "kernel.pid_max=196605" >> /etc/sysctl.conf
echo "vm.max_map_count=393210" >> /etc/sysctl.conf
sysctl -p
2.
/etc/security/limits.conf
* soft nofile 196605
* hard nofile 196605
* soft nproc 196605
* hard nproc 196605
{code}
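For reference, a quick way to confirm the new limits are in effect — a minimal check sketch, run as the user that launches the DataNode after `sysctl -p` and a fresh login (the expected values are the ones set above):
{code:bash}
# Kernel-level limits from /etc/sysctl.conf
sysctl kernel.threads-max   # expect 196605
sysctl kernel.pid_max       # expect 196605
sysctl vm.max_map_count     # expect 393210

# Per-user limits from /etc/security/limits.conf
# (only picked up by a new login session via pam_limits)
ulimit -n    # open files          -> 196605
ulimit -u    # max user processes  -> 196605
{code}
Note that limits.conf changes do not apply to already-running processes; the DataNode must be restarted from a session that has the new limits.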
was (Author: [email protected]):
[~anu] [~cheersyang] [~alicezhangchen] Thank you very much.
I turned up these parameters:
{code:java}
1.
echo "sys.kernel.threads-max=196605" >> /etc/sysctl.conf
echo "sys.kernel.pid_max=196605" >> /etc/sysctl.conf
echo "sys.vm.max_map_count=393210" >> /etc/sysctl.conf
sysctl -p
2.
/etc/security/limits.conf
* soft nofile 196605
* hard nofile 196605
* soft nproc 196605
* hard nproc 196605
{code}
> java.lang.OutOfMemoryError: unable to create new native thread
> --------------------------------------------------------------
>
> Key: HDFS-12936
> URL: https://issues.apache.org/jira/browse/HDFS-12936
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Affects Versions: 2.6.0
> Environment: CDH5.12
> hadoop2.6
> Reporter: Jepson
> Original Estimate: 96h
> Remaining Estimate: 96h
>
> I configured the max user processes to 65535 for every user, and the datanode
> memory is 8G.
> When a lot of data was being written, the datanode was shut down.
> But I can see the memory usage is only < 1000M.
> Please see https://pan.baidu.com/s/1o7BE0cy
> *DataNode shutdown error log:*
> {code:java}
> 2017-12-17 23:58:14,422 INFO org.apache.hadoop.hdfs.server.datanode.DataNode:
> PacketResponder:
> BP-1437036909-192.168.17.36-1509097205664:blk_1074725940_987917,
> type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2017-12-17 23:58:31,425 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory.
> Will retry in 30 seconds.
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at
> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
> at java.lang.Thread.run(Thread.java:745)
> 2017-12-17 23:59:01,426 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory.
> Will retry in 30 seconds.
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at
> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
> at java.lang.Thread.run(Thread.java:745)
> 2017-12-17 23:59:05,520 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory.
> Will retry in 30 seconds.
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at
> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
> at java.lang.Thread.run(Thread.java:745)
> 2017-12-17 23:59:31,429 INFO org.apache.hadoop.hdfs.server.datanode.DataNode:
> Receiving BP-1437036909-192.168.17.36-1509097205664:blk_1074725951_987928
> src: /192.168.17.54:40478 dest: /192.168.17.48:50010
> {code}
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]