[
https://issues.apache.org/jira/browse/HDFS-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14099052#comment-14099052
]
Hadoop QA commented on HDFS-6833:
---------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12662084/HDFS-6833-6.patch
against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 2 new
or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:green}+1 javadoc{color}. There were no new javadoc warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with
eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new
Findbugs (version 2.0.3) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:red}-1 core tests{color}. The patch failed these unit tests in
hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.hdfs.server.datanode.TestBPOfferService
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
The following test timeouts occurred in
hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints
org.apache.hadoop.hdfs.server.namenode.ha.TestHAMetrics
org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
org.apache.hadoop.hdfs.server.namenode.ha.TestHAStateTransitions
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
org.apache.hadoop.hdfs.server.namenode.TestValidateConfigurationSettings
org.apache.hadoop.hdfs.TestHDFSServerPorts
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results:
https://builds.apache.org/job/PreCommit-HDFS-Build/7646//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7646//console
This message is automatically generated.
> DirectoryScanner should not register a deleting block with memory of DataNode
> -----------------------------------------------------------------------------
>
> Key: HDFS-6833
> URL: https://issues.apache.org/jira/browse/HDFS-6833
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Affects Versions: 3.0.0
> Reporter: Shinichi Yamashita
> Assignee: Shinichi Yamashita
> Attachments: HDFS-6833-6.patch, HDFS-6833.patch, HDFS-6833.patch,
> HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch
>
>
> When a block is deleted on a DataNode, the following messages are usually
> output:
> {code}
> 2014-08-07 17:53:11,606 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
> Scheduling blk_1073741825_1001 file
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
> for deletion
> 2014-08-07 17:53:11,617 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
> Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
> {code}
> However, in the current implementation the DirectoryScanner may run while the
> DataNode is still deleting the block, in which case the following messages
> are output:
> {code}
> 2014-08-07 17:53:30,519 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
> Scheduling blk_1073741825_1001 file
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
> for deletion
> 2014-08-07 17:53:31,426 INFO
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool
> BP-1887080305-172.28.0.101-1407398838872 Total blocks: 1, missing metadata
> files:0, missing block files:0, missing blocks in memory:1, mismatched
> blocks:0
> 2014-08-07 17:53:31,426 WARN
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added
> missing block to memory FinalizedReplica, blk_1073741825_1001, FINALIZED
> getNumBytes() = 21230663
> getBytesOnDisk() = 21230663
> getVisibleLength()= 21230663
> getVolume() = /hadoop/data1/dfs/data/current
> getBlockFile() =
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
> unlinked =false
> 2014-08-07 17:53:31,531 INFO
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
> Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file
> /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
> {code}
> The information for the block that is being deleted is registered back into
> the DataNode's memory. When the DataNode then sends a block report, the
> NameNode receives wrong block information.
> For example, when we recommission a DataNode or change the replication
> factor, the NameNode may delete a valid block as an excess replica
> ("ExcessReplicate") because of this problem, and "Under-Replicated Blocks"
> and "Missing Blocks" occur.
> When the DataNode runs the DirectoryScanner, it should not register a block
> that is being deleted.
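> A minimal sketch of such a guard, outside the actual HDFS code base: the idea
> is that the dataset keeps track of block IDs whose files have been handed to
> the asynchronous deletion service, and the scanner skips those IDs instead of
> re-adding them to memory. The class and method names below
> (PendingDeletionTracker, DiskScanReconciler, onBlockFoundOnDiskOnly) are
> hypothetical and do not correspond to the attached patch.
> {code}
> import java.util.Set;
> import java.util.concurrent.ConcurrentHashMap;
>
> /** Tracks block IDs whose files are scheduled for asynchronous deletion. */
> class PendingDeletionTracker {
>   private final Set<Long> deleting = ConcurrentHashMap.newKeySet();
>
>   void markDeleting(long blockId)  { deleting.add(blockId); }    // before scheduling the file delete
>   void clearDeleted(long blockId)  { deleting.remove(blockId); } // after the file is actually gone
>   boolean isDeleting(long blockId) { return deleting.contains(blockId); }
> }
>
> /** Reconciles an on-disk scan result against the in-memory replica map. */
> class DiskScanReconciler {
>   private final PendingDeletionTracker tracker;
>
>   DiskScanReconciler(PendingDeletionTracker tracker) {
>     this.tracker = tracker;
>   }
>
>   /** Called for a block file found on disk with no replica in memory. */
>   void onBlockFoundOnDiskOnly(long blockId) {
>     if (tracker.isDeleting(blockId)) {
>       // Deletion is in flight; do not re-register the stale replica,
>       // so the next block report will not advertise it to the NameNode.
>       return;
>     }
>     System.out.println("Added missing block to memory: blk_" + blockId);
>   }
>
>   public static void main(String[] args) {
>     PendingDeletionTracker tracker = new PendingDeletionTracker();
>     DiskScanReconciler scanner = new DiskScanReconciler(tracker);
>
>     tracker.markDeleting(1073741825L);           // async deletion scheduled
>     scanner.onBlockFoundOnDiskOnly(1073741825L); // skipped: deletion pending
>     tracker.clearDeleted(1073741825L);           // file removed from disk
>
>     scanner.onBlockFoundOnDiskOnly(1073741826L); // genuinely missing: re-added
>   }
> }
> {code}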
--
This message was sent by Atlassian JIRA
(v6.2#6252)