[
https://issues.apache.org/jira/browse/HDFS-826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jitendra Nath Pandey updated HDFS-826:
--------------------------------------
Attachment: HDFS-826.20-security.1.patch
> Allow a mechanism for an application to detect that datanode(s) have died in
> the write pipeline
> ------------------------------------------------------------------------------------------------
>
> Key: HDFS-826
> URL: https://issues.apache.org/jira/browse/HDFS-826
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs client
> Affects Versions: 0.20-append
> Reporter: dhruba borthakur
> Assignee: dhruba borthakur
> Fix For: 0.20-append, 0.21.0
>
> Attachments: HDFS-826-0.20-v2.patch, HDFS-826-0.20.patch,
> HDFS-826.20-security.1.patch, Replicable4.txt, ReplicableHdfs.txt,
> ReplicableHdfs2.txt, ReplicableHdfs3.txt
>
>
> HDFS does not replicate the last block of a file that is currently being
> written to by an application. Every datanode death in the write pipeline
> decreases the reliability of the last block of the currently-being-written
> file. This situation can be improved if the application can be notified of a
> datanode death in the write pipeline; the application can then decide on the
> right course of action to take on this event.
>
> In our use case, the application can close the file on the first datanode
> death and start writing to a newly created file. This ensures that the
> reliability guarantee of a block stays close to 3 at all times.
>
> One idea is to make DFSOutputStream.write() throw an exception if the number
> of datanodes in the write pipeline falls below the minimum replication factor
> set on the client (this is backward compatible).
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira