[ https://issues.apache.org/jira/browse/HDFS-1848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13023527#comment-13023527 ]

dhruba borthakur commented on HDFS-1848:
----------------------------------------

Thanks Eli for the explanation. It seems fine to make the datanode check the 
directories where it stores data blocks, but making it check other directories 
(basically using the datanode code to implement a poor man's general-purpose 
disk check, especially for disks it does not use to store data blocks) seems 
kind of disturbing to me.

Suppose I have a machine that is running the tasktracker but not the datanode. 
Who is going to check the root disk in this case? What if I am running neither 
the tasktracker nor the datanode, but instead the backup node? Then who checks 
the health of the root disk on that machine?

I would rather have the datanode check only the validity of the directories 
where it is configured to store data, as in the sketch below.
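
A minimal sketch of that per-directory check, assuming Hadoop's 
org.apache.hadoop.util.DiskChecker utility; the class name, the method, and the 
way the configured directories are obtained are placeholders for illustration, 
not actual datanode code:

{code}
import java.io.File;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.util.DiskChecker;
import org.apache.hadoop.util.DiskChecker.DiskErrorException;

// Illustrative only: checks the directories the datanode is configured to
// store blocks in (dfs.data.dir / dfs.datanode.data.dir) and reports which
// of them fail the standard disk check.
public class DataDirValidator {
  public static List<File> findFailedDirs(List<File> configuredDataDirs) {
    List<File> failed = new ArrayList<File>();
    for (File dir : configuredDataDirs) {
      try {
        // Verifies the directory exists (creating it if necessary) and is
        // readable, writable, and executable.
        DiskChecker.checkDir(dir);
      } catch (DiskErrorException e) {
        failed.add(dir);
      }
    }
    return failed;
  }
}
{code}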



> Datanodes should shutdown when a critical volume fails
> ------------------------------------------------------
>
>                 Key: HDFS-1848
>                 URL: https://issues.apache.org/jira/browse/HDFS-1848
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: data-node
>            Reporter: Eli Collins
>             Fix For: 0.23.0
>
>
> A DN should shut down when a critical volume (e.g. the volume that hosts the OS, 
> logs, pid, tmp dir, etc.) fails. The admin should be able to specify which 
> volumes are critical, e.g. they might specify the volume that lives on the boot 
> disk. A failure in one of these volumes would not be subject to the threshold 
> (HDFS-1161) or result in host decommissioning (HDFS-1847), as the 
> decommissioning process would likely fail.
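
A hedged sketch of the behavior this issue proposes, to clarify how a 
critical-volume failure would bypass the HDFS-1161 threshold; all names 
(VolumeFailureHandler, criticalVolumes, shutdownDatanode) are hypothetical and 
not taken from the datanode code:

{code}
import java.io.File;
import java.util.Set;

// Illustrative only: not actual DataNode code.
public class VolumeFailureHandler {
  private final Set<File> criticalVolumes;   // admin-specified, e.g. the boot-disk volume
  private final int failedVolumesTolerated;  // data-volume threshold in the spirit of HDFS-1161
  private int failedDataVolumes = 0;

  public VolumeFailureHandler(Set<File> criticalVolumes, int failedVolumesTolerated) {
    this.criticalVolumes = criticalVolumes;
    this.failedVolumesTolerated = failedVolumesTolerated;
  }

  // Called when a volume fails its disk check.
  public void onVolumeFailure(File volume) {
    if (criticalVolumes.contains(volume)) {
      // A critical volume (OS, logs, pid, tmp) failed: shut down immediately,
      // without counting it against the threshold or decommissioning the host.
      shutdownDatanode("critical volume failed: " + volume);
    } else if (++failedDataVolumes > failedVolumesTolerated) {
      shutdownDatanode("too many failed data volumes: " + failedDataVolumes);
    }
    // Otherwise the failed data volume is simply taken out of service.
  }

  private void shutdownDatanode(String reason) {
    // Placeholder for an orderly shutdown.
    throw new RuntimeException("Shutting down datanode: " + reason);
  }
}
{code}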
