[ https://issues.apache.org/jira/browse/HADOOP-4480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12641721#action_12641721 ]

dhruba borthakur commented on HADOOP-4480:
------------------------------------------

@Konstantin: At the end of a decommission process, a datanode actually shuts 
down, doesn't it? If so, then using decommissioning might not work in the 
current scenario.

@Allen: HDFS currently assumes that the entire disk space on the cluster is 
readable/writable. This keeps the accounting of used space, free space, etc. 
pretty simple: if there is available disk space, it can be used to store new 
files in HDFS, and all of the space available in the data directories is free 
disk space. If we allow the datanode to remember read-only disks, then there 
will be some changes in accounting. Similarly, there will be some changes 
needed for reporting and for alerting administrators. This means that the 
"administration" of the cluster becomes slightly more complex. The advantage 
is that the last bit of disk space gets to be used.
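
Purely as a hypothetical sketch of that accounting change (plain Java, not 
actual DataNode code, and the paths are made-up examples): a read-only 
directory would keep its blocks readable but contribute nothing to the 
free-space figure the datanode reports.

import java.io.File;

/**
 * Hypothetical sketch only: count remaining space over writable data
 * directories, so a read-only disk is not advertised as room for new blocks.
 */
public class WritableSpaceSketch {

    /** Sum usable bytes, skipping directories that are no longer writable. */
    static long writableRemaining(File[] dataDirs) {
        long remaining = 0;
        for (File dir : dataDirs) {
            // Blocks on a read-only disk can still be served, but the disk
            // should not be counted toward space available for new writes.
            if (dir.canWrite()) {
                remaining += dir.getUsableSpace();
            }
        }
        return remaining;
    }

    public static void main(String[] args) {
        File[] dataDirs = {
            new File("/data/1/dfs/data"),   // example paths, not from the issue
            new File("/data/2/dfs/data")
        };
        System.out.println("writable remaining bytes: "
                + writableRemaining(dataDirs));
    }
}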

So, my question is: do you see this as a real problem on production 
clusters?

> data node process should not die if one dir goes bad
> ----------------------------------------------------
>
>                 Key: HADOOP-4480
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4480
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.1
>            Reporter: Allen Wittenauer
>
> When multiple directories are configured for the data node process to use to 
> store blocks, it currently exits when one of them is not writable. Instead, 
> it should either completely ignore that directory or attempt to continue 
> reading from it and then mark it unusable if reads fail.
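
A hypothetical sketch of the requested behaviour (again plain Java, not the 
actual DataNode startup code; paths and names are invented): filter the 
configured data directories, log and skip the bad ones, and only give up when 
none of them are usable.

import java.io.File;
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical sketch only: keep the usable data directories and continue,
 * instead of exiting as soon as one configured directory is bad.
 */
public class DataDirFilterSketch {

    static List<File> usableDirs(String[] configuredDirs) {
        List<File> usable = new ArrayList<File>();
        for (String path : configuredDirs) {
            File dir = new File(path);
            if (dir.isDirectory() && dir.canRead() && dir.canWrite()) {
                usable.add(dir);
            } else {
                // Today the datanode process dies here; the proposal is to
                // log the failure and carry on with the remaining dirs.
                System.err.println("Skipping bad data directory: " + path);
            }
        }
        if (usable.isEmpty()) {
            throw new IllegalStateException("no usable data directories");
        }
        return usable;
    }

    public static void main(String[] args) {
        String[] configured = { "/data/1/dfs/data", "/data/2/dfs/data" };
        System.out.println("usable dirs: " + usableDirs(configured));
    }
}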

