[ 
http://issues.apache.org/jira/browse/HADOOP-163?page=comments#action_12376120 ] 

Runping Qi commented on HADOOP-163:
-----------------------------------


Exiting is an option. However, the datanode may still be able to read, and 
thus continue to serve its existing blocks.
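
For illustration, a rough sketch of that read-only fallback (hypothetical 
code, not the actual datanode implementation; the DiskChecker class and 
isWritable method are made-up names):

import java.io.File;
import java.io.IOException;

// Rough sketch only -- all names below are hypothetical, not Hadoop APIs.
public class DiskChecker {

    // Probe whether new files can still be created under the data directory.
    public static boolean isWritable(File dataDir) {
        try {
            File probe = File.createTempFile("probe", ".tmp", dataDir);
            probe.delete();
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        File dataDir = new File(args[0]);
        if (!isWritable(dataDir)) {
            // Rather than exiting, the datanode could set a read-only flag,
            // report it in its next heartbeat so the name node stops
            // assigning new blocks, and keep serving reads for the blocks
            // it already has.
            System.out.println(dataDir + " not writable; switching to read-only service");
        }
    }
}

The name node side would then simply skip any data node that reports itself 
read-only when choosing targets for new blocks.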


> If a DFS datanode cannot write onto its file system, it should tell the name 
> node not to assign new blocks to it.
> -----------------------------------------------------------------------------------------------------------------
>
>          Key: HADOOP-163
>          URL: http://issues.apache.org/jira/browse/HADOOP-163
>      Project: Hadoop
>         Type: Bug
>     Reporter: Runping Qi
>
> I observed that sometimes, if the file system of a data node is not mounted 
> properly, it may not be writable. In this case, any data writes will fail. 
> The name node should stop assigning new blocks to that data node. The web 
> page should show that the node is in an abnormal state.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira
