[
https://issues.apache.org/jira/browse/HDFS-7208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14173216#comment-14173216
]
Hadoop QA commented on HDFS-7208:
---------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12675133/HDFS-7208-3.patch
against trunk revision 0af1a2b.
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 2 new
or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:green}+1 javadoc{color}. There were no new javadoc warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with
eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new
Findbugs (version 2.0.3) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:red}-1 core tests{color}. The patch failed these unit tests in
hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing
The following test timeouts occurred in
hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.hdfs.server.mover.TestStorageMover
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results:
https://builds.apache.org/job/PreCommit-HDFS-Build/8435//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8435//console
This message is automatically generated.
> NN doesn't schedule replication when a DN storage fails
> -------------------------------------------------------
>
> Key: HDFS-7208
> URL: https://issues.apache.org/jira/browse/HDFS-7208
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Reporter: Ming Ma
> Assignee: Ming Ma
> Attachments: HDFS-7208-2.patch, HDFS-7208-3.patch, HDFS-7208.patch
>
>
> We found the following problem: when a storage device on a DN fails, the NN
> continues to consider the replicas on that storage valid and doesn't schedule
> replication.
> In our setup, a DN has 12 storage disks, so there is one blockReport per
> storage. When a disk fails, the number of blockReports from that DN drops from
> 12 to 11. Because dfs.datanode.failed.volumes.tolerated is configured to be > 0,
> the NN still considers that DN healthy.
> 1. A disk failed. All blocks on that disk are removed from the DN's dataset.
>
> {noformat}
> 2014-10-04 02:11:12,626 WARN
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Removing
> replica BP-1748500278-xx.xx.xx.xxx-1377803467793:1121568886 on failed volume
> /data/disk6/dfs/current
> {noformat}
> 2. The NN receives DatanodeProtocol.DISK_ERROR, but that alone isn't enough for
> the NN to remove the DN and its replicas from the BlocksMap. In addition, the
> blockReports don't surface the removal, since each report covers a single
> storage and the failed storage simply stops reporting.
> {noformat}
> 2014-10-04 02:11:12,681 WARN org.apache.hadoop.hdfs.server.namenode.NameNode:
> Disk error on DatanodeRegistration(xx.xx.xx.xxx,
> datanodeUuid=f3b8a30b-e715-40d6-8348-3c766f9ba9ab, infoPort=50075,
> ipcPort=50020,
> storageInfo=lv=-55;cid=CID-e3c38355-fde5-4e3a-b7ce-edacebdfa7a1;nsid=420527250;c=1410283484939):
> DataNode failed volumes:/data/disk6/dfs/current
> {noformat}
> 3. Run fsck on the file and confirm the NN's BlocksMap still has that replica.
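The failure mode described above can be illustrated with a toy model. This is a hypothetical sketch, not actual HDFS code: the class and method names below are invented for illustration, but they mirror the reported behavior — the NN tracks replicas per storage, each block report fully replaces only one storage's replica set, and a failed storage simply stops reporting, so its stale entries are never removed.

```java
import java.util.*;

// Hypothetical, simplified stand-in for the NN's BlocksMap (not real HDFS code).
// Replicas are tracked per storage; a block report replaces only that
// storage's set, so a storage that stops reporting leaves stale entries behind.
class MiniBlocksMap {
    private final Map<String, Set<String>> replicasByStorage = new HashMap<>();

    // Per-storage block report: a full replacement for that one storage only.
    void blockReport(String storageId, Set<String> blocks) {
        replicasByStorage.put(storageId, new HashSet<>(blocks));
    }

    boolean hasReplica(String block) {
        for (Set<String> s : replicasByStorage.values()) {
            if (s.contains(block)) {
                return true;
            }
        }
        return false;
    }
}

public class StaleReplicaDemo {
    public static void main(String[] args) {
        MiniBlocksMap nn = new MiniBlocksMap();
        nn.blockReport("disk5", Set.of("blk_A"));
        nn.blockReport("disk6", Set.of("blk_B"));

        // disk6 fails: the DN drops its replicas locally and simply stops
        // reporting that storage; only the healthy storages keep reporting.
        nn.blockReport("disk5", Set.of("blk_A"));

        // No report ever removes blk_B, so the stale replica survives in the
        // map and (in the real NN) no re-replication would be scheduled.
        System.out.println(nn.hasReplica("blk_B")); // still true
    }
}
```

In this model, only removing the failed storage's entry on DISK_ERROR (or having the DN report the failed storage as empty) would clear the stale replicas — which is the gap the patch addresses.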
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)