[ https://issues.apache.org/jira/browse/HADOOP-4480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12641629#action_12641629 ]

dhruba borthakur commented on HADOOP-4480:
------------------------------------------

If you really want to do this, then we will need better reporting to the 
administrator so that he/she knows that this datanode needs tending to.

In the existing scheme of things, if a partition becomes read-only, the 
datanode shuts down and the administrator can see it listed as 'dead' in 
the HDFS UI.
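
For illustration only (a hypothetical sketch, not the actual DataNode code): one way to keep the process alive while still surfacing the problem would be to probe each configured data directory and report the read-only ones, so the admin sees a failed volume rather than a dead node. The class and method names here (VolumeHealthCheck, checkDataDirs) are made up for the example.

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class VolumeHealthCheck {

    /** Returns the subset of data dirs that are still writable; warns about the rest. */
    public static List<File> checkDataDirs(List<File> dataDirs) {
        List<File> healthy = new ArrayList<File>();
        for (File dir : dataDirs) {
            if (isWritable(dir)) {
                healthy.add(dir);
            } else {
                // Report the bad volume instead of shutting the process down,
                // so the admin sees "volume failed" rather than "node dead".
                System.err.println("WARN: data dir " + dir
                    + " is not writable; marking it as failed");
            }
        }
        return healthy;
    }

    /** Probe writability by creating and deleting a small temp file in the dir. */
    private static boolean isWritable(File dir) {
        try {
            File probe = File.createTempFile("probe", ".tmp", dir);
            return probe.delete();
        } catch (IOException e) {
            return false;
        }
    }
}

Whatever form the check takes, the key point is that the failed-volume count would have to show up somewhere the administrator actually looks (logs, metrics, or the UI), otherwise the node silently limps along.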


> data node process should not die if one dir goes bad
> ----------------------------------------------------
>
>                 Key: HADOOP-4480
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4480
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.1
>            Reporter: Allen Wittenauer
>
> When multiple directories are configured for the data node process to use to 
> store blocks, it currently exits when one of them is not writable. Instead, 
> it should either completely ignore that directory or attempt to continue 
> reading and then mark it unusable if reads fail.
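
As a rough sketch of the second option the reporter describes (keep reading from the suspect directory and mark it unusable only once a read actually fails), something along these lines could work; the FailOnReadVolume class and readBlock method are illustrative names under assumed semantics, not HDFS code.

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class FailOnReadVolume {
    private final File dir;
    private volatile boolean usable = true;

    public FailOnReadVolume(File dir) {
        this.dir = dir;
    }

    public boolean isUsable() {
        return usable;
    }

    /** Read a block file; on I/O error, mark the whole volume unusable but keep the process alive. */
    public byte[] readBlock(String blockName) throws IOException {
        File blockFile = new File(dir, blockName);
        try (FileInputStream in = new FileInputStream(blockFile)) {
            byte[] data = new byte[(int) blockFile.length()];
            int off = 0;
            while (off < data.length) {
                int n = in.read(data, off, data.length - off);
                if (n < 0) {
                    throw new IOException("unexpected EOF in " + blockFile);
                }
                off += n;
            }
            return data;
        } catch (IOException e) {
            usable = false;   // stop using this directory for future reads and writes
            throw e;
        }
    }
}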

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.