[
https://issues.apache.org/jira/browse/HDFS-6532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15413411#comment-15413411
]
Yiqun Lin commented on HDFS-6532:
---------------------------------
I looked into the log output from the failed runs; both showed the same stack
traces:
{code}
BP-1186421078-172.17.0.2-1470312073795:blk_1073741826_1006] WARN hdfs.DataStreamer (DataStreamer.java:closeResponder(873)) - Caught exception
java.lang.InterruptedException
    at java.lang.Object.wait(Native Method)
    at java.lang.Thread.join(Thread.java:1245)
    at java.lang.Thread.join(Thread.java:1319)
    at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:871)
    at org.apache.hadoop.hdfs.DataStreamer.closeInternal(DataStreamer.java:733)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:729)
2016-08-04 12:02:02,523 [Thread-0] INFO hdfs.DFSClient (TestCrcCorruption.java:testCorruptionDuringWrt(140)) - Got expected exception
java.io.InterruptedIOException: Interrupted while waiting for data to be acknowledged by pipeline
    at org.apache.hadoop.hdfs.DataStreamer.waitForAckedSeqno(DataStreamer.java:775)
    at org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:697)
    at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:778)
    at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:755)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
    at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
    at org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt(TestCrcCorruption.java:136)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}
But I am still not sure why the InterruptedException happens intermittently. In
addition, I found that if an InterruptedException is thrown while the program is
in {{dataQueue.wait()}}, the files will not be completely closed in
{{DFSClient#closeAllFilesBeingWritten}}. That issue is tracked by HDFS-10549; I
think the two issues are related.
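To make the failure mode concrete, here is a minimal sketch of the ack-wait
pattern, assuming a simplified structure (this is not the actual DataStreamer
source; the class name {{AckWaiterSketch}} and its fields are hypothetical):
an interrupt delivered during {{dataQueue.wait()}} surfaces as the
InterruptedIOException seen in the trace above, so the close path is abandoned
before the file is fully closed.
{code}
import java.io.InterruptedIOException;
import java.util.LinkedList;
import java.util.Queue;

// Hypothetical simplification of the DataStreamer ack-wait loop; not Hadoop source.
class AckWaiterSketch {
    private final Queue<Long> dataQueue = new LinkedList<>(); // packets awaiting ack
    private volatile long lastAckedSeqno = -1;

    // Blocks until the given seqno has been acked by the pipeline.
    // If the waiting thread is interrupted (e.g. by a test timeout rule),
    // the wait is abandoned and an InterruptedIOException propagates up
    // through flushInternal()/close(), leaving the file not completely
    // closed -- the behavior described above.
    void waitForAckedSeqno(long seqno) throws InterruptedIOException {
        synchronized (dataQueue) {
            while (lastAckedSeqno < seqno) {
                try {
                    dataQueue.wait(); // responder thread notifies on each ack
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt(); // preserve interrupt status
                    throw new InterruptedIOException(
                        "Interrupted while waiting for data to be acknowledged by pipeline");
                }
            }
        }
    }
}
{code}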
> Intermittent test failure
> org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt
> ------------------------------------------------------------------------------------------
>
> Key: HDFS-6532
> URL: https://issues.apache.org/jira/browse/HDFS-6532
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode, hdfs-client
> Affects Versions: 2.4.0
> Reporter: Yongjun Zhang
>
> Per https://builds.apache.org/job/Hadoop-Hdfs-trunk/1774/testReport, we had
> the following failure. A local rerun was successful.
> {code}
> Regression
> org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt
> Failing for the past 1 build (Since Failed#1774 )
> Took 50 sec.
> Error Message
> test timed out after 50000 milliseconds
> Stacktrace
> java.lang.Exception: test timed out after 50000 milliseconds
>     at java.lang.Object.wait(Native Method)
>     at org.apache.hadoop.hdfs.DFSOutputStream.waitForAckedSeqno(DFSOutputStream.java:2024)
>     at org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:2008)
>     at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2107)
>     at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:70)
>     at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:98)
>     at org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt(TestCrcCorruption.java:133)
> {code}
> See the relevant exceptions in the log:
> {code}
> 2014-06-14 11:56:15,283 WARN datanode.DataNode (BlockReceiver.java:verifyChunks(404)) - Checksum error in block BP-1675558312-67.195.138.30-1402746971712:blk_1073741825_1001 from /127.0.0.1:41708
> org.apache.hadoop.fs.ChecksumException: Checksum error: DFSClient_NONMAPREDUCE_-1139495951_8 at 64512 exp: 1379611785 got: -12163112
>     at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:353)
>     at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:284)
>     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.verifyChunks(BlockReceiver.java:402)
>     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:537)
>     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:734)
>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:741)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:234)
>     at java.lang.Thread.run(Thread.java:662)
> 2014-06-14 11:56:15,285 WARN datanode.DataNode (BlockReceiver.java:run(1207)) - IOException in BlockReceiver.run():
> java.io.IOException: Shutting down writer and responder due to a checksum error in received data. The error response has been sent upstream.
>     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1352)
>     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1278)
>     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1199)
>     at java.lang.Thread.run(Thread.java:662)
> ...
> {code}