[ https://issues.apache.org/jira/browse/HADOOP-1200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12486465 ]

Koji Noguchi commented on HADOOP-1200:
--------------------------------------

I've seen many occasions where one of the disks becomes read-only while the
TaskTracker stays up and keeps heartbeating, but makes no progress, which
hangs the job.

On the other hand, the datanode cleverly stops itself, leaving a log entry
on the namenode:

2007-04-03 13:13:48,997 WARN org.apache.hadoop.dfs.NameNode: Report from __.__.__.__:____: can not create directory: /___/dfs/data/data/subdir0
2007-04-03 13:13:48,998 WARN org.apache.hadoop.dfs.NameNode: Report from __.__.__.__:____: directory is not writable: /___/dfs/data/data
2007-04-03 13:13:49,024 INFO org.apache.hadoop.net.NetworkTopology: Removing a node: /__.__.__.__/__.__.__.__:____
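
For reference, a minimal sketch of the kind of writability check those two
WARN reports imply; the class and method names here are illustrative only,
not the actual datanode code:

    import java.io.File;
    import java.io.IOException;

    /**
     * Hypothetical sketch of the check behind the two WARN reports above.
     * Throws on the first problem so the caller can react.
     */
    class DataDirCheck {
      static void checkDir(File dir) throws IOException {
        // "can not create directory"
        if (!dir.exists() && !dir.mkdirs()) {
          throw new IOException("can not create directory: " + dir);
        }
        // "directory is not writable"
        if (!dir.canWrite()) {
          throw new IOException("directory is not writable: " + dir);
        }
      }
    }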

> Datanode should periodically do a disk check
> --------------------------------------------
>
>                 Key: HADOOP-1200
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1200
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.12.2
>            Reporter: Hairong Kuang
>             Fix For: 0.13.0
>
>
> HADOOP-1170 removed the disk-checking feature, but this feature is needed
> for maintaining a large cluster. I agree that checking the disk on every
> I/O is too costly. A nicer approach is to have a thread that periodically
> does a disk check; the datanode then automatically decommissions itself
> when any error occurs (sketched below).
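
As an illustration of that proposal, here is a minimal, hypothetical sketch
of such a periodic disk-check thread. The shutdown hook and all names are
assumptions for illustration, not the actual Hadoop 0.13 implementation:

    import java.io.File;
    import java.io.IOException;

    /**
     * Hypothetical sketch: check each data directory on a fixed interval
     * and stop the datanode on the first disk error, instead of checking
     * on every I/O.
     */
    class PeriodicDiskChecker extends Thread {
      private final File[] dataDirs;
      private final long intervalMillis;

      PeriodicDiskChecker(File[] dataDirs, long intervalMillis) {
        this.dataDirs = dataDirs;
        this.intervalMillis = intervalMillis;
        setDaemon(true);  // do not keep the JVM alive on its own
      }

      /** Same writability check as sketched earlier in this comment. */
      private static void checkDir(File dir) throws IOException {
        if (!dir.exists() && !dir.mkdirs()) {
          throw new IOException("can not create directory: " + dir);
        }
        if (!dir.canWrite()) {
          throw new IOException("directory is not writable: " + dir);
        }
      }

      public void run() {
        while (true) {
          for (int i = 0; i < dataDirs.length; i++) {
            try {
              checkDir(dataDirs[i]);
            } catch (IOException e) {
              // Decommission ourselves on the first disk error rather than
              // continuing to heartbeat while making no progress.
              System.err.println("Disk check failed: " + e.getMessage());
              // datanode.shutdown();  // hypothetical hook to stop this node
              return;
            }
          }
          try {
            Thread.sleep(intervalMillis);
          } catch (InterruptedException ie) {
            return;  // stop checking if the datanode is shutting down
          }
        }
      }
    }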

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
