[ https://issues.apache.org/jira/browse/HDFS-3041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Todd Lipcon updated HDFS-3041:
------------------------------

    Attachment: test.txt

Here's a test modification which shows the problem. It's not trivial to fix... will work on this in the coming weeks.

> DFSOutputStream.close doesn't properly handle interruption
> -----------------------------------------------------------
>
>                 Key: HDFS-3041
>                 URL: https://issues.apache.org/jira/browse/HDFS-3041
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client
>    Affects Versions: 0.23.0, 0.24.0
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>         Attachments: test.txt
>
>
> TestHFlush.testHFlushInterrupted can fail occasionally due to a race: if a thread is interrupted while calling close(), then the {{finally}} clause of the {{close}} function sets {{closed = true}}. At this point it has enqueued the "end of block" packet to the DNs, but hasn't called {{completeFile}}. Then, if {{close}} is called again (as in the test case), it will be short-circuited since {{closed}} is already true, so {{completeFile}} never ends up getting called. This also means that the test can fail if the pipeline is running slowly, since the assertion that the file is the correct length won't see the last packet or two.
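For illustration, here is a minimal sketch of the race described above. This is not the actual DFSOutputStream code; enqueueLastPacket, waitForAckedSeqno and completeFile are simplified stand-ins for the real client internals.

{code:java}
import java.io.IOException;

// Sketch of the close() pattern from the report: the finally clause marks the
// stream closed even when completeFile() was never reached, so a second
// close() call short-circuits instead of finishing the file.
class SketchOutputStream {
    private boolean closed = false;

    public synchronized void close() throws IOException {
        if (closed) {
            return;                  // a second close() short-circuits here
        }
        try {
            enqueueLastPacket();     // "end of block" packet is queued for the DNs
            waitForAckedSeqno();     // throws if the waiting thread is interrupted
            completeFile();          // never reached when the wait above throws
        } finally {
            closed = true;           // set unconditionally, so a retry is a no-op
        }
    }

    private void enqueueLastPacket() {
        // sketch: hand the final packet to the streamer thread
    }

    private void waitForAckedSeqno() throws IOException {
        // sketch: block until the pipeline acks; an interrupt surfaces as IOException
        if (Thread.currentThread().isInterrupted()) {
            throw new IOException("interrupted while waiting for acks");
        }
    }

    private void completeFile() throws IOException {
        // sketch: ask the NameNode to finalize the file
    }
}
{code}

In this shape, an interrupt between waiting for the last acks and completeFile() leaves the file un-finalized, which matches the failure TestHFlush.testHFlushInterrupted occasionally sees.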