[ 
http://issues.apache.org/jira/browse/HADOOP-163?page=comments#action_12376292 ] 

Yoram Arnon commented on HADOOP-163:
------------------------------------

I envision 24x7 systems where the datanode is automatically restarted upon 
failure by init or another HA component. When a partition/FS fails, it will 
likely remain in a failed state after the restart, so reporting the failure 
upward and ceasing to serve would be better than simply exiting, which would 
lead to thrashing. At the end of the day, outside intervention will be 
required, so the most important part is diagnosing the error and reporting it 
as such. Reporting 100% full would not draw the same kind of attention from a 
correcting system or person.
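
For illustration only, here is a rough sketch of what the datanode side could 
look like; the class and member names (DataDirMonitor, checkWritable, 
isDiskFailed) are hypothetical and not existing Hadoop APIs:

    import java.io.File;
    import java.io.IOException;

    // Hypothetical sketch: periodically probe the data directory for
    // writability. On failure, set a flag that the next heartbeat would
    // carry to the namenode, instead of calling System.exit(), which would
    // just make init restart the process into the same failed state.
    public class DataDirMonitor {

        private final File dataDir;
        private volatile boolean diskFailed = false;

        public DataDirMonitor(File dataDir) {
            this.dataDir = dataDir;
        }

        /** Try to create and delete a probe file; an IOException means the
         *  partition is effectively read-only or not mounted. */
        public void checkWritable() {
            File probe = new File(dataDir, ".writability_probe");
            try {
                if (!probe.createNewFile() && !probe.exists()) {
                    throw new IOException("could not create probe file");
                }
                probe.delete();
                diskFailed = false;
            } catch (IOException e) {
                diskFailed = true;   // remember the failure; keep the process up
            }
        }

        /** Would be reported in each heartbeat so the namenode can stop
         *  assigning blocks and the web UI can flag the node as abnormal. */
        public boolean isDiskFailed() {
            return diskFailed;
        }
    }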

> If a DFS datanode cannot write onto its file system, it should tell the name 
> node not to assign new blocks to it.
> -----------------------------------------------------------------------------------------------------------------
>
>          Key: HADOOP-163
>          URL: http://issues.apache.org/jira/browse/HADOOP-163
>      Project: Hadoop
>         Type: Bug

>     Reporter: Runping Qi

>
> I observed that sometimes, if a file system on a data node is not mounted 
> properly, it may not be writable. In this case, any data writes will fail. 
> The name node should stop assigning new blocks to that data node. The 
> webpage should show that the node is in an abnormal state.

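On the namenode side, block placement could simply skip nodes that have 
reported a failed disk. Again, just a sketch with made-up names (DatanodeInfo 
and its diskFailed flag are assumptions, not the actual namenode data 
structures):

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch: filter out datanodes that reported a failed disk
    // before choosing targets for a new block.
    public class TargetChooser {

        /** Minimal stand-in for the namenode's per-datanode record. */
        public static class DatanodeInfo {
            final String name;
            final boolean diskFailed;   // flag carried in the heartbeat
            public DatanodeInfo(String name, boolean diskFailed) {
                this.name = name;
                this.diskFailed = diskFailed;
            }
        }

        /** Return only the nodes that are safe to receive new blocks. */
        public static List<DatanodeInfo> healthyNodes(List<DatanodeInfo> all) {
            List<DatanodeInfo> healthy = new ArrayList<DatanodeInfo>();
            for (DatanodeInfo node : all) {
                if (!node.diskFailed) {
                    healthy.add(node);
                }
            }
            return healthy;
        }
    }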
-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira