[
https://issues.apache.org/jira/browse/HDFS-3701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Uma Maheswara Rao G updated HDFS-3701:
--------------------------------------
Attachment: HDFS-3701.branch-1.v4.patch
Oops, that was my mistake; I prepared it in a hurry. Really sorry about that.
I probably removed it thinking that updateBlkInfo would throw an exception on
failure, and I did not look back once updateBlkInfo was changed.
The comment is removed in this patch.
In trunk, waitFor throws IOException. Do you think I can file a small
trivial bug to change this for consistency in the code?
> HDFS may miss the final block when reading a file opened for writing if one
> of the datanodes is dead
> ---------------------------------------------------------------------------------------------------
>
> Key: HDFS-3701
> URL: https://issues.apache.org/jira/browse/HDFS-3701
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs client
> Affects Versions: 1.0.3
> Reporter: nkeywal
> Assignee: nkeywal
> Priority: Critical
> Attachments: HDFS-3701.branch-1.v2.merged.patch,
> HDFS-3701.branch-1.v3.patch, HDFS-3701.branch-1.v4.patch,
> HDFS-3701.ontopof.v1.patch, HDFS-3701.patch
>
>
> When a file is opened for writing, the DFSClient calls one of the datanodes
> owning the last block to get its size. If this datanode is dead, the socket
> exception is swallowed and the size of this last block is taken to be zero.
> This seems to be fixed on trunk, but I didn't find a related Jira. On 1.0.3,
> it's not fixed. It's in the same area as HDFS-1950 or HDFS-3222.
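The failure mode described above can be sketched as follows. This is a hypothetical, heavily simplified illustration of the swallowed-exception pattern, not the actual DFSClient code; the class and method names are invented for the example.

```java
import java.io.IOException;

public class LastBlockLengthSketch {

    // Simulates asking a datanode for the length of the last block.
    static long fetchLengthFromDatanode(boolean datanodeDead) throws IOException {
        if (datanodeDead) {
            throw new IOException("Connection refused");
        }
        return 1024L; // pretend the last block holds 1024 bytes
    }

    // Buggy pattern: the socket exception is caught and ignored, so the
    // caller sees a zero-length last block and the read misses its data.
    static long buggyGetLastBlockLength(boolean datanodeDead) {
        long length = 0;
        try {
            length = fetchLengthFromDatanode(datanodeDead);
        } catch (IOException ignored) {
            // swallowed: length silently stays 0
        }
        return length;
    }

    // Fixed pattern: propagate the failure so the client can try
    // another replica instead of treating the block as empty.
    static long fixedGetLastBlockLength(boolean datanodeDead) throws IOException {
        return fetchLengthFromDatanode(datanodeDead);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(buggyGetLastBlockLength(true));   // prints 0
        System.out.println(fixedGetLastBlockLength(false));  // prints 1024
    }
}
```

With the fixed pattern, a dead datanode surfaces as an IOException the client can act on, rather than a silent zero length.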
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira