Hi,

I couldn't find any code that would relay this failure to the NN. The relevant 
code is in DFSOutputStream.DataStreamer.processDatanodeError().

For trunk: 
https://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
For 0.20: 
http://javasourcecode.org/html/open-source/hadoop/hadoop-0.20.203.0/org/apache/hadoop/hdfs/DFSClient.DFSOutputStream.java.html
 

I believe the assumption here is that the NN should independently discover the 
failed node (e.g., through missed heartbeats). Also, some failures might not be 
worth reporting because the DN is expected to recover from them.
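
For what it's worth, the recovery path is entirely client-local. Here is a 
minimal sketch of the idea (field names loosely follow DataStreamer, but the 
types and bodies are condensed for illustration -- this is not the actual 
Hadoop source):

// Illustrative sketch only -- not the actual Hadoop source.
class PipelineRecoverySketch {
    static class DatanodeInfo {}

    private DatanodeInfo[] nodes;  // current write pipeline
    private int errorIndex = -1;   // set when a DN's ack fails or times out

    boolean processDatanodeError() {
        // Drop the failed datanode from the *client-local* pipeline array.
        DatanodeInfo[] newNodes = new DatanodeInfo[nodes.length - 1];
        System.arraycopy(nodes, 0, newNodes, 0, errorIndex);
        System.arraycopy(nodes, errorIndex + 1, newNodes, errorIndex,
                         newNodes.length - errorIndex);
        nodes = newNodes;
        errorIndex = -1;

        // Rebuild the pipeline with the survivors and resume streaming.
        // Note that no NameNode RPC reports the failure anywhere on this
        // path; the NN only learns of a dead DN through missed heartbeats
        // and block reports.
        return nodes.length > 0;
    }
}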

Ravi.




________________________________
 From: Rahul Bhattacharjee <[email protected]>
To: "[email protected]" <[email protected]> 
Sent: Friday, May 17, 2013 12:10 PM
Subject: HDFS write failures!
 


Hi,


I was going through some documents about the HDFS write pattern. It looks like 
the write pipeline is closed when an error is encountered, the faulty node is 
taken out of the pipeline, and the write continues. Another intermediate step 
is to move the un-acked packets from the ack queue back to the data queue.
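
In code, I understand that requeue step to be roughly the following (a sketch 
with made-up types and names, not the actual client code):

import java.util.Deque;

// A minimal sketch of the requeue step described above.
class RequeueSketch {
    static void requeueUnacked(Deque<byte[]> dataQueue, Deque<byte[]> ackQueue) {
        while (!ackQueue.isEmpty()) {
            // Newest un-acked packet goes in first so the original
            // oldest-first order is preserved at the head of dataQueue.
            dataQueue.addFirst(ackQueue.removeLast());
        }
    }
}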


My question is: is this faulty data node reported to the NN, and would the NN 
continue to use it as a valid DN while serving other write requests in the 
future, or will it mark it as faulty?


Thanks,
Rahul
