[ http://issues.apache.org/jira/browse/HADOOP-163?page=all ]
     
Doug Cutting closed HADOOP-163:
-------------------------------


> If a DFS datanode cannot write onto its file system, it should tell the name 
> node not to assign new blocks to it.
> -----------------------------------------------------------------------------------------------------------------
>
>          Key: HADOOP-163
>          URL: http://issues.apache.org/jira/browse/HADOOP-163
>      Project: Hadoop
>         Type: Bug

>   Components: dfs
>     Versions: 0.2
>     Reporter: Runping Qi
>     Assignee: Hairong Kuang
>      Fix For: 0.3.0
>  Attachments: disk.patch
>
> I observed that sometimes, if a data node's file system is not mounted 
> properly, it may not be writable. In that case, all data writes will fail. 
> The name node should stop assigning new blocks to that data node, and the 
> web page should show that the node is in an abnormal state.
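
A minimal sketch of the kind of check the description calls for: before accepting new blocks, a datanode could probe each storage directory for writability and report a bad state to the name node. The class and method names below are illustrative, not the actual Hadoop API; the attached disk.patch is the real fix.

```java
import java.io.File;
import java.io.IOException;

// Hypothetical sketch of a datanode-side disk check. If the probe fails,
// the datanode would report itself unhealthy so the name node stops
// assigning new blocks to it (names here are illustrative, not Hadoop's).
public class DiskCheck {

    /** Returns true if a probe file can be created and deleted in dir. */
    public static boolean isWritable(File dir) {
        if (!dir.isDirectory()) {
            return false;
        }
        File probe = new File(dir, ".disk-check-probe");
        try {
            // createNewFile returns false if the file already exists,
            // and throws IOException on a read-only or failed mount.
            if (!probe.createNewFile()) {
                return false;
            }
            return probe.delete();
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        File dataDir = new File(args.length > 0 ? args[0] : ".");
        if (isWritable(dataDir)) {
            System.out.println("WRITABLE");
        } else {
            // In the proposed behavior, this state would be sent to the
            // name node, which would stop assigning blocks and flag the
            // node as abnormal on the web page.
            System.out.println("READ_ONLY");
        }
    }
}
```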

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira
