@Harsh, thanks for sharing the links.

On Tue, Sep 8, 2015 at 6:56 AM, Harsh J <[email protected]> wrote:
> This two-part series of blog posts from Yongjun should help you understand
> the HDFS file write recovery process better:
> http://blog.cloudera.com/blog/2015/02/understanding-hdfs-recovery-processes-part-1/
> and
> http://blog.cloudera.com/blog/2015/03/understanding-hdfs-recovery-processes-part-2/
>
> On Mon, Sep 7, 2015 at 10:39 AM miriyala srinivas <[email protected]> wrote:
>
>> Hi All,
>>
>> I have just started learning the fundamentals of HDFS and its internal
>> mechanisms. The concepts are very impressive and look simple, but they
>> still confuse me. My question is: *who is responsible for handling a DFS
>> write failure in the pipeline (assume the replication factor is 3 and the
>> 2nd DataNode in the pipeline fails)*? If a DataNode fails during a
>> pipeline write, does the entire pipeline stop, or is a new DataNode added
>> to the existing pipeline? How does this mechanism work? I would really
>> appreciate it if someone with good knowledge of HDFS could explain this
>> to me.
>>
>> Note: I have read a bunch of documents, but none of them explains what I
>> am looking for.
>>
>> thanks
>> srinivas
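To give a rough intuition for what the blog posts describe: when a DataNode in the write pipeline fails, the client does not abort the write. It rebuilds the pipeline from the surviving nodes and, depending on the `dfs.client.block.write.replace-datanode-on-failure` settings, may ask the NameNode for a replacement DataNode before resuming. The sketch below is a simplified illustration of that replacement step, not actual HDFS client code; the function name and the node labels are made up for the example.

```python
def recover_pipeline(pipeline, failed_dn, available_dns, replication=3):
    """Hypothetical sketch of HDFS pipeline recovery: drop the failed
    DataNode and, if replacements are available, refill the pipeline
    back up to the replication factor before the write resumes."""
    # Keep only the DataNodes that are still healthy.
    survivors = [dn for dn in pipeline if dn != failed_dn]
    # Candidate replacements are nodes not already in the pipeline.
    replacements = [dn for dn in available_dns if dn not in pipeline]
    # Refill the pipeline up to the target replication factor.
    while len(survivors) < replication and replacements:
        survivors.append(replacements.pop(0))
    return survivors

# With replication factor 3, dn2 fails and dn4 takes its place;
# the write then continues on the rebuilt pipeline.
pipeline = ["dn1", "dn2", "dn3"]
recovered = recover_pipeline(pipeline, "dn2", ["dn4", "dn5"])
print(recovered)  # ['dn1', 'dn3', 'dn4']
```

If no replacement node is available, the real client's behavior depends on the configured policy: it may continue with the shortened pipeline, and the NameNode later re-replicates the under-replicated block in the background.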
