[ http://issues.apache.org/jira/browse/HADOOP-163?page=comments#action_12376374 ]

Yoram Arnon commented on HADOOP-163:
------------------------------------

What I'm suggesting is close to your suggestion:
if the node is read-only, behave as though it were 100% full and serve only 
read requests, but don't mislead: report that you're read-only, not that 
you're 100% full. The namenode will avoid allocating new blocks to the node, 
and its log will contain an error that could trigger external corrective 
action.
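The proposal above can be sketched as follows. This is a hypothetical illustration, not actual Hadoop code: the class and method names (`DataNodeStatus`, `NameNodeAllocator`, `markReadOnly`, `chooseTargets`) are invented for this sketch. The idea is that a datanode reports a distinct read-only state instead of faking a full disk, and the namenode skips such nodes when allocating new blocks while still allowing reads, logging an error for external monitoring.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed behavior; names are illustrative,
// not real Hadoop APIs.
class DataNodeStatus {
    enum State { NORMAL, READ_ONLY }

    final String name;
    State state = State.NORMAL;

    DataNodeStatus(String name) { this.name = name; }

    // Datanode detects a write failure (e.g. an unmounted volume) and
    // reports itself read-only instead of pretending the disk is full.
    void markReadOnly() { state = State.READ_ONLY; }

    boolean canAcceptNewBlocks() { return state == State.NORMAL; }

    // Read requests are still served in either state.
    boolean canServeReads() { return true; }
}

class NameNodeAllocator {
    // Choose targets for a new block: skip read-only nodes and log an
    // error so external corrective action can be triggered.
    static List<DataNodeStatus> chooseTargets(List<DataNodeStatus> nodes) {
        List<DataNodeStatus> targets = new ArrayList<>();
        for (DataNodeStatus n : nodes) {
            if (n.canAcceptNewBlocks()) {
                targets.add(n);
            } else {
                System.err.println("ERROR: datanode " + n.name
                        + " is read-only; excluding from block allocation");
            }
        }
        return targets;
    }
}
```

The key design point, per the comment, is honest reporting: the web UI and namenode log see "read-only" rather than a misleading "100% full".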


> If a DFS datanode cannot write to its file system, it should tell the name 
> node not to assign new blocks to it.
> -----------------------------------------------------------------------------------------------------------------
>
>          Key: HADOOP-163
>          URL: http://issues.apache.org/jira/browse/HADOOP-163
>      Project: Hadoop
>         Type: Bug

>     Reporter: Runping Qi

>
> I observed that sometimes, if a data node's file system is not mounted 
> properly, it may not be writable. In that case, any data writes will fail. 
> The name node should stop assigning new blocks to that data node, and the 
> webpage should show that the node is in an abnormal state.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira
