[ https://issues.apache.org/jira/browse/HDFS-1401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12984509#action_12984509 ]
Konstantin Shvachko commented on HDFS-1401:
-------------------------------------------
Got the same failure, but with a different trace just now. Looks like it is
failing on DataNode startup this time.
{code}
Testcase: testUnfinishedBlockRead took 3.074 sec
Testcase: testUnfinishedBlockPacketBufferOverrun took 1.737 sec
Testcase: testImmediateReadOfNewFile took 2.189 sec
Testcase: testUnfinishedBlockCRCErrorTransferTo took 3.2 sec
Testcase: testUnfinishedBlockCRCErrorTransferToVerySmallWrite took 10.543 sec
Testcase: testUnfinishedBlockCRCErrorNormalTransfer took 3.658 sec
Testcase: testUnfinishedBlockCRCErrorNormalTransferVerySmallWrite took 0.435 sec
Caused an ERROR
Too many open files
java.io.IOException: Too many open files
at sun.nio.ch.IOUtil.initPipe(Native Method)
at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:49)
at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:18)
at java.nio.channels.Selector.open(Selector.java:209)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:318)
at org.apache.hadoop.ipc.Server.<init>(Server.java:1501)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:408)
at org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:332)
at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:292)
at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:47)
at org.apache.hadoop.ipc.RPC.getServer(RPC.java:382)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:421)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:512)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:282)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:264)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1575)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1518)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1485)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:674)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:479)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:199)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:74)
at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:191)
at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)
{code}
Let me know if the entire log is needed.
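For what it's worth, the EMFILE at Selector.open() points at file descriptors leaking across the test methods, so that by the time setUp() builds yet another MiniDFSCluster the process is out of descriptors. Below is a minimal sketch of the kind of cleanup and diagnostics I would try. It assumes a JUnit 3 style matching the setUp() in the trace; the cluster field name and the ClusterTestBase class are made up for illustration, not the test's actual code.
{code}
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

import junit.framework.TestCase;
import org.apache.hadoop.hdfs.MiniDFSCluster;

import com.sun.management.UnixOperatingSystemMXBean;

// Hypothetical base class sketch: tear the cluster down between test
// methods so its sockets and selectors are released before the next
// setUp() builds a fresh MiniDFSCluster.
public abstract class ClusterTestBase extends TestCase {
  protected MiniDFSCluster cluster;

  @Override
  protected void tearDown() throws Exception {
    if (cluster != null) {
      cluster.shutdown(); // closes the NN/DN IPC servers and their selectors
      cluster = null;
    }
    System.err.println("open fds after tearDown: " + openFdCount());
    super.tearDown();
  }

  // Diagnostic only: process-wide open descriptor count on Unix JVMs,
  // which is the resource "Too many open files" exhausts.
  static long openFdCount() {
    OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
    return (os instanceof UnixOperatingSystemMXBean)
        ? ((UnixOperatingSystemMXBean) os).getOpenFileDescriptorCount()
        : -1; // not available on non-Unix JVMs
  }
}
{code}
If the printed count keeps climbing from one test case to the next, the leak is in the tests (or in MiniDFSCluster itself) rather than in the Hudson slave's ulimit setting.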
> TestFileConcurrentReader test case is still timing out / failing
> ----------------------------------------------------------------
>
> Key: HDFS-1401
> URL: https://issues.apache.org/jira/browse/HDFS-1401
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs client
> Affects Versions: 0.22.0
> Reporter: Tanping Wang
> Priority: Critical
> Attachments: HDFS-1401.patch
>
>
> The unit test case TestFileConcurrentReader, after its most recent fix in
> HDFS-1310, still times out when using Java 1.6.0_07: the test case simply
> hangs. On the Apache Hudson build (which is possibly using a later minor
> version of Java) this test case has produced inconsistent results,
> sometimes passing and sometimes failing. For example, there was no
> effective change between the recent builds 423, 424, and 425, yet the test
> case failed on build 424 and passed on build 425.
> Build 424, test failed:
> https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/424/testReport/org.apache.hadoop.hdfs/TestFileConcurrentReader/
> Build 425, test passed:
> https://hudson.apache.org/hudson/job/Hadoop-Hdfs-trunk/425/testReport/org.apache.hadoop.hdfs/TestFileConcurrentReader/