[ http://issues.apache.org/jira/browse/HADOOP-163?page=all ]
Hairong Kuang updated HADOOP-163:
---------------------------------
Attachment: disk.patch
In this patch, if a data node finds that its data directory has become
unreadable or unwritable, it logs the error, reports the problem to its name
node, and shuts itself down. When the name node receives the error report, it
logs the error and removes the data node's info.
A data node checks for disk problems at startup, when it receives a read/write
request, after it receives a command from its name node, and before it sends
out a block report. A data node will not start up if its data directory is not
readable or writable.
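
For illustration, here is a minimal sketch of the check-then-report flow
described above. The names used here (DiskErrorException, checkDataDir,
NameNodeStub, errorReport) are hypothetical placeholders for this sketch, not
the identifiers used in the actual patch:

import java.io.File;
import java.io.IOException;

// Sketch of a data node checking its data directory and, on failure,
// logging, reporting to the name node, and shutting itself down.
public class DataNodeDiskCheckSketch {

    // Hypothetical exception type for disk problems.
    static class DiskErrorException extends IOException {
        DiskErrorException(String msg) { super(msg); }
    }

    // Hypothetical stand-in for the RPC interface to the name node.
    interface NameNodeStub {
        void errorReport(String dataNodeName, String message);
    }

    // Verify the data directory exists and is readable and writable.
    static void checkDataDir(File dir) throws DiskErrorException {
        if (!dir.isDirectory() || !dir.canRead() || !dir.canWrite()) {
            throw new DiskErrorException("Data directory " + dir
                    + " is not readable/writable");
        }
    }

    // Called at startup, around r/w requests, after name-node commands,
    // and before sending a block report, per the description above.
    static void checkDiskOrShutdown(File dataDir, NameNodeStub nameNode,
                                    String dataNodeName) {
        try {
            checkDataDir(dataDir);
        } catch (DiskErrorException e) {
            System.err.println("Disk error: " + e.getMessage()); // log locally
            nameNode.errorReport(dataNodeName, e.getMessage());  // tell name node
            System.exit(1);                                      // shut down
        }
    }

    public static void main(String[] args) {
        // The name node side logs the report and drops the data node's info.
        NameNodeStub nameNode = (name, msg) ->
                System.out.println("Name node removing " + name + ": " + msg);
        checkDiskOrShutdown(new File("/tmp"), nameNode, "datanode-1");
        System.out.println("Data directory OK; continuing service.");
    }
}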
> If a DFS datanode cannot write onto its file system, it should tell the name
> node not to assign new blocks to it.
> -----------------------------------------------------------------------------------------------------------------
>
> Key: HADOOP-163
> URL: http://issues.apache.org/jira/browse/HADOOP-163
> Project: Hadoop
> Type: Bug
> Components: dfs
> Versions: 0.2
> Reporter: Runping Qi
> Assignee: Hairong Kuang
> Fix For: 0.3
> Attachments: disk.patch
>
> I observed that sometimes, if the file system of a data node is not mounted
> properly, it may not be writable. In this case, any data writes will fail. The
> name node should stop assigning new blocks to that data node. The web page
> should show that the node is in an abnormal state.