[ https://issues.apache.org/jira/browse/HDFS-7208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14173769#comment-14173769 ]
Hudson commented on HDFS-7208:
------------------------------
FAILURE: Integrated in Hadoop-Mapreduce-trunk #1928 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1928/])
HDFS-7208. NN doesn't schedule replication when a DN storage fails.
Contributed by Ming Ma (szetszwo: rev 41980c56d3c01d7a0ddc7deea2d89b7f28026722)
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/DatanodeStorage.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
> NN doesn't schedule replication when a DN storage fails
> -------------------------------------------------------
>
> Key: HDFS-7208
> URL: https://issues.apache.org/jira/browse/HDFS-7208
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Reporter: Ming Ma
> Assignee: Ming Ma
> Fix For: 2.6.0
>
> Attachments: HDFS-7208-2.patch, HDFS-7208-3.patch, HDFS-7208.patch
>
>
> We found the following problem: when a storage device on a DN fails, the
> NN continues to believe the replicas on that storage are valid and doesn't
> schedule replication.
> In our setup, a DN has 12 storage disks, so the DN sends one block report
> per storage. When a disk fails, the number of block reports from that DN
> drops from 12 to 11. Because dfs.datanode.failed.volumes.tolerated is
> configured to be > 0, the NN still considers that DN healthy.
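> For context, a minimal sketch of this tolerance setting (the conf key is
> real; the surrounding setup is illustrative, not taken from the patch):
> {noformat}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hdfs.DFSConfigKeys;
>
> // Illustrative only: allow the DN to keep running after one volume fails.
> // With a tolerated count > 0, the DN stays registered after a single disk
> // failure, so the NN keeps treating the node as healthy.
> Configuration conf = new Configuration();
> conf.setInt(DFSConfigKeys.DFS_DATANODE_FAILED_VOLUMES_TOLERATED_KEY, 1);
> {noformat}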
> 1. A disk failed. All blocks on that disk are removed from the DN's dataset.
>
> {noformat}
> 2014-10-04 02:11:12,626 WARN
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Removing
> replica BP-1748500278-xx.xx.xx.xxx-1377803467793:1121568886 on failed volume
> /data/disk6/dfs/current
> {noformat}
> 2. The NN receives DatanodeProtocol.DISK_ERROR, but that alone isn't enough
> to make the NN remove the DN or its replicas from the BlocksMap. In
> addition, block reports don't surface the loss: each report covers a single
> storage, so the reports from the surviving storages say nothing about the
> failed one.
> {noformat}
> 2014-10-04 02:11:12,681 WARN org.apache.hadoop.hdfs.server.namenode.NameNode:
> Disk error on DatanodeRegistration(xx.xx.xx.xxx,
> datanodeUuid=f3b8a30b-e715-40d6-8348-3c766f9ba9ab, infoPort=50075,
> ipcPort=50020,
> storageInfo=lv=-55;cid=CID-e3c38355-fde5-4e3a-b7ce-edacebdfa7a1;nsid=420527250;c=1410283484939):
> DataNode failed volumes:/data/disk6/dfs/current
> {noformat}
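> To show the general shape of a fix, a hedged sketch (method and helper
> names here are illustrative, not necessarily the exact patch): once the NN
> learns a storage has failed, prune every replica on that storage from the
> BlocksMap so the affected blocks become under-replicated and replication
> gets scheduled.
> {noformat}
> import org.apache.hadoop.hdfs.server.blockmanagement.BlockManager;
> import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor;
> import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo;
> import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage;
>
> // Illustrative sketch, run during heartbeat processing on the NN.
> void checkFailedStorages(DatanodeDescriptor node, BlockManager blockManager) {
>   for (DatanodeStorageInfo storage : node.getStorageInfos()) {
>     if (storage.getState() == DatanodeStorage.State.FAILED
>         && storage.numBlocks() > 0) {
>       // Dropping the replicas lowers the live-replica count below the
>       // replication factor, which queues the blocks for re-replication.
>       blockManager.removeBlocksAssociatedTo(storage);
>     }
>   }
> }
> {noformat}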
> 3. Run fsck on the file and confirm the NN's BlocksMap still has that replica.
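> For reference, the replica locations the NN still reports can be inspected
> with fsck (the path below is a placeholder):
> {noformat}
> hdfs fsck /path/to/file -files -blocks -locations
> {noformat}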