TestAppend2#testComplexAppend failed on "Too many open files"
-------------------------------------------------------------
Key: HDFS-690
URL: https://issues.apache.org/jira/browse/HDFS-690
Project: Hadoop HDFS
Issue Type: Bug
Components: test
Affects Versions: 0.21.0
Reporter: Hairong Kuang
Priority: Blocker
Fix For: 0.21.0
The append write failed with "Too many open files": some bytes failed to be appended to a file due to the following error:
java.io.IOException: Cannot run program "stat": java.io.IOException: error=24, Too many open files
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
        at java.lang.Runtime.exec(Runtime.java:593)
        at java.lang.Runtime.exec(Runtime.java:466)
        at org.apache.hadoop.fs.FileUtil$HardLink.getLinkCount(FileUtil.java:644)
        at org.apache.hadoop.hdfs.server.datanode.ReplicaInfo.unlinkBlock(ReplicaInfo.java:205)
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.append(FSDataset.java:1075)
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.append(FSDataset.java:1058)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:110)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:258)
        at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:382)
        at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:323)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:111)
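
The trace shows that FileUtil$HardLink.getLinkCount obtains a replica's hard-link count by forking an external "stat" process. Forking allocates new file descriptors for the child's stdin/stdout/stderr pipes, so once the datanode process exhausts its descriptor limit (error=24 is EMFILE on Linux; the per-process limit is reported by "ulimit -n"), ProcessBuilder.start() fails before "stat" ever runs. A minimal sketch of that pattern follows; the exact "stat -c%h" invocation is an assumption, the real code is at FileUtil.java:644.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class HardLinkCountSketch {
        // Sketch of the getLinkCount pattern visible in the stack trace:
        // fork "stat" and parse the hard-link count from its output.
        // The "-c%h" flag (print link count) is an assumption about the
        // exact command line used on Linux.
        public static int getLinkCount(String path) throws IOException {
            // start() must allocate pipe file descriptors for the child;
            // with the fd table full it throws
            // IOException: error=24, Too many open files.
            Process p = new ProcessBuilder("stat", "-c%h", path).start();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line = in.readLine();
                if (line == null) {
                    throw new IOException("stat produced no output for " + path);
                }
                return Integer.parseInt(line.trim());
            } finally {
                p.destroy();
            }
        }
    }

Since every append goes through ReplicaInfo.unlinkBlock and hence through this fork, a descriptor leak elsewhere in the test or datanode is likely to surface here first.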