[
https://issues.apache.org/jira/browse/HADOOP-3760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12613666#action_12613666
]
Lohit Vijayarenu commented on HADOOP-3760:
------------------------------------------
I was able to reproduce this by commenting out the call to complete the file, a request
to the namenode. The change in HADOOP-3681 waited for a while, retrying up to 10
times for the file to complete, and only then checked isClosed(). By that time closed was
already set to true, so it would throw a Stream closed exception. Calling isClosed()
right after flushInternal() should do the job needed for HADOOP-3681. I tested
both cases and it seems to fix it.
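To illustrate the ordering being described, here is a minimal, self-contained sketch. It is not the actual DFSClient code; the names (closed, isClosed(), flushInternal(), the retried file-complete call from HADOOP-3681) mirror the discussion above, but the class and its bodies are illustrative assumptions.
{code:java}
import java.io.IOException;

public class CloseOrderingSketch {
  private boolean closed = false;

  // Throws if the stream has already been marked closed.
  private void isClosed() throws IOException {
    if (closed) {
      throw new IOException("Stream closed.");
    }
  }

  private void flushInternal() throws IOException {
    // flush buffered packets to the datanodes; a failure here is the
    // real error the caller should see
  }

  private boolean completeFile() {
    // stand-in for the namenode request that HADOOP-3681 retries
    return true;
  }

  public synchronized void close() throws IOException {
    flushInternal();
    isClosed();            // check here, before 'closed' is flipped below,
                           // so a genuine flush problem is what gets reported
    closed = true;
    int retries = 10;
    while (!completeFile() && retries-- > 0) {
      // wait and retry until the file is reported complete
    }
    // checking isClosed() down here instead would always see closed == true
    // and surface the misleading "Stream closed." error from this report
  }

  public static void main(String[] args) throws IOException {
    new CloseOrderingSketch().close();
  }
}
{code}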
> DFS operations fail because of Stream closed error
> --------------------------------------------------
>
> Key: HADOOP-3760
> URL: https://issues.apache.org/jira/browse/HADOOP-3760
> Project: Hadoop Core
> Issue Type: Bug
> Components: fs
> Affects Versions: 0.17.1, 0.18.0
> Reporter: Amar Kamat
> Priority: Blocker
> Fix For: 0.18.0
>
>
> DFS operations fail because of {{java.io.IOException: Stream closed.}}.