This two-part blog post series from Yongjun should help you understand the
HDFS file write recovery process better:
http://blog.cloudera.com/blog/2015/02/understanding-hdfs-recovery-processes-part-1/
and
http://blog.cloudera.com/blog/2015/03/understanding-hdfs-recovery-processes-part-2/
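
The short answer: the client driving the write is responsible for pipeline
recovery, not the NameNode. When a DataNode in the pipeline fails, the
client drops the failed node from the pipeline, may ask the NameNode for a
replacement DataNode depending on configuration, bumps the block's
generation stamp, and resumes writing from the last acknowledged byte; the
write only fails if no healthy DataNode remains. Whether a replacement node
is added is governed by the client-side
dfs.client.block.write.replace-datanode-on-failure.* settings.

As a minimal sketch (the property names are from stock Apache Hadoop; the
namenode URI and file path below are made up for illustration), this is how
a client could set the replacement policy explicitly:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PipelinePolicyDemo {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Allow the client to replace a failed DataNode in the pipeline.
        conf.setBoolean(
            "dfs.client.block.write.replace-datanode-on-failure.enable", true);
        // DEFAULT asks for a replacement only when it matters (e.g. when the
        // number of live replicas falls too low, or during append/hflush);
        // ALWAYS and NEVER are the other policies.
        conf.set(
            "dfs.client.block.write.replace-datanode-on-failure.policy",
            "DEFAULT");
        // Hypothetical cluster address and path, for illustration only.
        FileSystem fs =
            FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
        try (FSDataOutputStream out = fs.create(new Path("/tmp/demo.txt"))) {
          out.writeUTF("pipeline write demo");
          out.hflush(); // push the data through the DataNode pipeline
        }
      }
    }

With NEVER, a mid-write DataNode failure simply shrinks the pipeline and the
write continues on the surviving nodes; the blog posts above walk through the
generation-stamp and recovery details step by step.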

On Mon, Sep 7, 2015 at 10:39 AM miriyala srinivas <[email protected]>
wrote:

> Hi All,
>
> I have just started learning the fundamentals of HDFS and its internal
> mechanisms. The concepts used here are very impressive and look simple,
> but they confuse me. My question is: *who is responsible for handling a
> DFS write failure in the pipeline (assume the replication factor is 3 and
> the 2nd DN fails in the pipeline)*? If a data node fails during a pipeline
> write, does the entire pipeline stop, or is a new data node added to the
> existing pipeline? How does this mechanism work? I would really appreciate
> it if someone with good knowledge of HDFS could explain it to me.
>
> Note: I have read a bunch of documents, but none of them seems to explain
> what I am looking for.
>
> thanks
> srinivas
>
