[
https://issues.apache.org/jira/browse/HDFS-12630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Wei-Chiu Chuang resolved HDFS-12630.
------------------------------------
Resolution: Duplicate
> Rolling restart can create inconsistency between blockMap and corrupt
> replicas map
> ----------------------------------------------------------------------------------
>
> Key: HDFS-12630
> URL: https://issues.apache.org/jira/browse/HDFS-12630
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 2.6.0
> Reporter: Andre Araujo
>
> After a NN rolling restart, several HDFS files started showing block problems.
> Running FSCK for one of the files, or for the directory that contained it,
> would complete with a FAILED message but without any details of the failure.
> The NameNode log showed the following:
> {code}
> 2017-10-10 16:58:32,147 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: FSCK started by hdfs (auth:KERBEROS_SSL) from /10.92.128.4 for path /user/prod/data/file_20171010092201.csv at Tue Oct 10 16:58:32 PDT 2017
> 2017-10-10 16:58:32,147 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Inconsistent number of corrupt replicas for blk_1941920008_1133195379 blockMap has 1 but corrupt replicas map has 2
> 2017-10-10 16:58:32,147 WARN org.apache.hadoop.hdfs.server.namenode.NameNode: Fsck on path '/user/prod/data/file_20171010092201.csv' FAILED
> java.lang.ArrayIndexOutOfBoundsException
> {code}
> After triggering a full block report for all the DNs, the problem went away.
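>
> The warning above points at the likely mechanism: the replica array handed back
> to fsck is sized from one structure's idea of how many replicas are corrupt,
> but filled by consulting the other, so when the blockMap and the corrupt
> replicas map disagree the fill index can run past the end of the array. The
> following is a minimal, self-contained sketch of that failure mode, not the
> actual BlockManager code; the class, the structure names, and the counts are
> hypothetical and simply mirror the numbers in the warning.
> {code}
> import java.util.Arrays;
> import java.util.HashSet;
> import java.util.List;
> import java.util.Set;
>
> // Hypothetical stand-ins for the two NameNode views that disagree in the log.
> public class CorruptReplicaMismatch {
>     public static void main(String[] args) {
>         // Replica locations the blockMap knows about for one block.
>         List<String> blockMapReplicas = Arrays.asList("dn1", "dn2", "dn3");
>
>         // The corrupt replicas map lists two corrupt replicas; one of them
>         // (dn4) is no longer present in the blockMap. This mirrors
>         // "corrupt replicas map has 2" in the warning.
>         Set<String> corruptReplicasMap = new HashSet<>(Arrays.asList("dn2", "dn4"));
>
>         // Only dn2 is corrupt according to both structures, mirroring
>         // "blockMap has 1". If the result array is sized using the corrupt
>         // replicas map count, it gets 3 - 2 = 1 slot ...
>         String[] machines = new String[blockMapReplicas.size() - corruptReplicasMap.size()];
>
>         // ... but the fill loop only skips replicas that both structures
>         // consider corrupt (dn2), so it tries to store two entries (dn1, dn3)
>         // and overflows on the second one.
>         int j = 0;
>         for (String dn : blockMapReplicas) {
>             if (!corruptReplicasMap.contains(dn)) {
>                 machines[j++] = dn; // ArrayIndexOutOfBoundsException at dn3
>             }
>         }
>     }
> }
> {code}
> Triggering full block reports forces the NameNode to reconcile its replica
> state with each DataNode, which is consistent with the problem clearing once
> all DNs reported. On releases whose dfsadmin supports it, such a report can
> also be requested for a single DataNode with
> "hdfs dfsadmin -triggerBlockReport <datanode_host:ipc_port>".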