[ https://issues.apache.org/jira/browse/HDFS-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14100983#comment-14100983 ]
Arpit Agarwal commented on HDFS-6694:
-------------------------------------
I think it's the same failure.
{code}
java.io.IOException: Too many open files
	at sun.nio.ch.IOUtil.initPipe(Native Method)
	at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:49)
	at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:18)
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.get(SocketIOWithTimeout.java:409)
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:325)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
	at java.io.DataInputStream.read(DataInputStream.java:132)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:467)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:766)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:710)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:126)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:72)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:218)
	at java.lang.Thread.run(Thread.java:662)
{code}
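For context on the exception itself: SocketIOWithTimeout$SelectorPool.get() opens a new java.nio Selector when the pool has none cached, and on Linux each selector costs an epoll descriptor plus a wakeup pipe, on top of the socket descriptors the DataXceiver threads already hold. Under the stress test it appears enough of these pile up to exceed the JVM's open-file limit. A minimal, hypothetical sketch (standalone, not Hadoop code) that reproduces the same IOException by leaking selectors:
{code}
import java.io.IOException;
import java.nio.channels.Selector;
import java.util.ArrayList;
import java.util.List;

// Hypothetical standalone demo, not Hadoop code: each Selector.open() on Linux
// consumes an epoll descriptor plus a wakeup pipe, so opening selectors without
// closing them eventually fails with "java.io.IOException: Too many open files",
// the same error seen in the stack trace above.
public class SelectorLeakDemo {
    public static void main(String[] args) {
        List<Selector> leaked = new ArrayList<>();
        try {
            while (true) {
                leaked.add(Selector.open()); // never closed -> file descriptor leak
            }
        } catch (IOException e) {
            System.err.println("Failed after " + leaked.size() + " selectors: " + e);
        } finally {
            // Release the descriptors so the JVM can exit cleanly.
            for (Selector s : leaked) {
                try { s.close(); } catch (IOException ignored) { }
            }
        }
    }
}
{code}
Locally, raising the open-file limit before launching the test JVM (e.g. ulimit -n 4096) or watching descriptor counts with lsof -p <pid> | wc -l can help confirm whether the test run is hitting the default limit.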
> TestPipelinesFailover.testPipelineRecoveryStress tests fail intermittently with various symptoms
> ------------------------------------------------------------------------------------------------
>
> Key: HDFS-6694
> URL: https://issues.apache.org/jira/browse/HDFS-6694
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 3.0.0
> Reporter: Yongjun Zhang
> Assignee: Yongjun Zhang
> Priority: Blocker
> Fix For: 2.6.0
>
> Attachments: HDFS-6694.001.dbg.patch, HDFS-6694.001.dbg.patch, org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover-output.txt, org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.txt
>
>
> TestPipelinesFailover.testPipelineRecoveryStress tests fail intermittently with various symptoms. Typical failures are described in the first comment.