[ https://issues.apache.org/jira/browse/HADOOP-4480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12641662#action_12641662 ]

Konstantin Shvachko commented on HADOOP-4480:
---------------------------------------------

Does it make more sense for a node with a bad drive to go to decommissioning 
mode?
The web UI can be modified to have 3 sections (live, dead and decommissioning 
nodes) rather than just 2.
And the node can still be used for replication and read-only purposes.
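
For illustration, a minimal sketch of how those three states might be
modeled on the namenode side. The enum, method, and classification rule
here are hypothetical, not actual Hadoop code:

// Hypothetical sketch: a node that reported a failed volume is moved to
// DECOMMISSIONING so it can still serve reads and act as a replication
// source, but receives no new writes.
public class NodeStateSketch {
    enum NodeState { LIVE, DECOMMISSIONING, DEAD }

    static NodeState classify(boolean heartbeatRecent, boolean hasFailedVolume) {
        if (!heartbeatRecent) {
            return NodeState.DEAD;
        }
        return hasFailedVolume ? NodeState.DECOMMISSIONING : NodeState.LIVE;
    }

    public static void main(String[] args) {
        System.out.println(classify(true, false));  // LIVE
        System.out.println(classify(true, true));   // DECOMMISSIONING
        System.out.println(classify(false, true));  // DEAD
    }
}

The web UI could then group nodes by this state instead of the current
live/dead split.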

> data node process should not die if one dir goes bad
> ----------------------------------------------------
>
>                 Key: HADOOP-4480
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4480
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.1
>            Reporter: Allen Wittenauer
>
> When multiple directories are configured for the data node process to use to
> store blocks, it currently exits when one of them is not writable. Instead,
> it should either ignore that directory completely or attempt to continue
> reading from it, marking it unusable if reads fail.
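
For illustration, a minimal sketch of the tolerant startup check the
description asks for, assuming the configured directories come from
dfs.data.dir split on commas. The class and method names are hypothetical
and the real datanode logic is more involved:

import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class DataDirCheckSketch {

    // Keep every configured dir that exists and is writable; only give up
    // if no usable directory remains, instead of exiting on the first bad one.
    static List<File> usableDirs(String[] configuredDirs) {
        List<File> usable = new ArrayList<>();
        for (String path : configuredDirs) {
            File dir = new File(path);
            if (dir.isDirectory() && dir.canWrite()) {
                usable.add(dir);
            } else {
                System.err.println("Skipping unusable data dir: " + path);
            }
        }
        if (usable.isEmpty()) {
            throw new IllegalStateException("No usable data directories left");
        }
        return usable;
    }

    public static void main(String[] args) {
        // e.g. the parsed value of dfs.data.dir
        System.out.println(usableDirs(new String[] { "/tmp", "/no/such/dir" }));
    }
}

The same check could run again whenever a read or write against a
directory fails, dropping it from the usable set at runtime.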

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
