[ https://issues.apache.org/jira/browse/HDFS-1282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12885734#action_12885734 ]

Allen Wittenauer commented on HDFS-1282:
----------------------------------------

This could break on operating systems such as Solaris, where the df output is 
parsed incorrectly to begin with...

> namenode should reject datanodes which send impossible block reports
> --------------------------------------------------------------------
>
>                 Key: HDFS-1282
>                 URL: https://issues.apache.org/jira/browse/HDFS-1282
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node, name-node
>    Affects Versions: 0.20.1
>            Reporter: Andrew Ryan
>
> Over the past few weeks we've had several datanodes with bad disks that 
> suffer ext3 corruption, and consequently start reporting impossible values 
> for how full they are. This particular node, for example, has a configured 
> capacity of 10.86TB but reports 1733.95TB used, for a total of 15973.57% 
> utilization.
> Node        Last Contact  Admin State  Configured Capacity (TB)  Used (TB)  Non DFS Used (TB)  Remaining (TB)  Used (%)  Remaining (%)  Blocks
> hadoop2254  44            In Service   10.86                     1733.95    0                  5.24            15973.57  48.25          65602
> If we can avoid generating such bogus data on the datanode, that would be 
> great. But if the namenode receives such an impossible block report, it 
> should treat that datanode as untrustworthy and, in my opinion, mark it 
> dead.
> The "fix" in our case was either to fsck or to replace the bad disk.
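The sanity check the reporter asks for amounts to comparing the reported figures against the node's configured capacity: used plus remaining space can never exceed capacity, and no figure can be negative. A minimal sketch in Java follows; the class and method names are hypothetical and not part of the actual HDFS code base.

```java
/**
 * Hypothetical sanity check for datanode usage reports; the class and
 * method names are illustrative, not actual HDFS API.
 */
public class DatanodeReportValidator {

    /** Returns true iff the reported usage figures are physically possible. */
    static boolean isPlausible(long capacityBytes, long dfsUsedBytes, long remainingBytes) {
        // Negative values can only come from corruption or overflow.
        if (capacityBytes < 0 || dfsUsedBytes < 0 || remainingBytes < 0) {
            return false;
        }
        // DFS-used plus remaining space can never exceed configured capacity.
        return dfsUsedBytes + remainingBytes <= capacityBytes;
    }
}
```

The hadoop2254 report above (10.86 TB capacity, 1733.95 TB used) fails such a check, so the namenode could refuse the report or mark the node dead rather than publish 15973.57% utilization.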

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
