[ https://issues.apache.org/jira/browse/HDFS-4270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13528552#comment-13528552 ]
Hadoop QA commented on HDFS-4270:
---------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12560296/HDFS-4270.patch
against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 1 new
or modified test file.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any
warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with
eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new
Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:red}-1 core tests{color}. The patch failed these unit tests in
hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results:
https://builds.apache.org/job/PreCommit-HDFS-Build/3632//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3632//console
This message is automatically generated.
> Replications of the highest priority should be allowed to choose a source
> datanode that has reached its max replication limit
> -----------------------------------------------------------------------------------------------------------------------------
>
> Key: HDFS-4270
> URL: https://issues.apache.org/jira/browse/HDFS-4270
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 3.0.0, 0.23.5
> Reporter: Derek Dagit
> Assignee: Derek Dagit
> Priority: Minor
> Attachments: HDFS-4270-branch-0.23.patch, HDFS-4270.patch,
> HDFS-4270.patch
>
>
> Blocks that have been identified as under-replicated are placed on one of
> several priority queues. The highest-priority queue is essentially reserved
> for situations in which only one replica of the block exists, meaning it
> should be replicated ASAP.
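> For context, a minimal sketch of those priority levels (assuming the
> constants in UnderReplicatedBlocks at the time; names and values are
> illustrative and may differ across versions):
> {code:java}
> // Illustrative priority levels for under-replicated blocks; the
> // highest priority (0) is the single-remaining-replica case.
> static final int QUEUE_HIGHEST_PRIORITY = 0;            // only one replica left
> static final int QUEUE_VERY_UNDER_REPLICATED = 1;       // far below target
> static final int QUEUE_UNDER_REPLICATED = 2;            // somewhat below target
> static final int QUEUE_REPLICAS_BADLY_DISTRIBUTED = 3;  // enough copies, poorly spread
> static final int QUEUE_WITH_CORRUPT_BLOCKS = 4;         // no usable replica
> static final int LEVEL = 5;                             // number of queues
> {code}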
> The ReplicationMonitor periodically computes replication work, and a call to
> BlockManager#chooseUnderReplicatedBlocks selects a given number of
> under-replicated blocks, choosing blocks from the highest-priority queue
> first and working down to the lowest-priority queue.
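> Roughly, the selection works like the following sketch (a hypothetical
> simplification, not the actual method body; the queue field and type
> names are stand-ins):
> {code:java}
> import java.util.ArrayList;
> import java.util.List;
>
> // Hypothetical simplification of BlockManager#chooseUnderReplicatedBlocks:
> // drain the queues highest-priority-first until the per-round quota is met.
> class SelectionSketch {
>   static final int LEVEL = 5;                     // number of priority queues
>   List<List<String>> queues = new ArrayList<>();  // block IDs; assumed to hold LEVEL lists
>
>   List<String> chooseUnderReplicatedBlocks(int blocksToProcess) {
>     List<String> chosen = new ArrayList<>();
>     for (int prio = 0; prio < LEVEL && chosen.size() < blocksToProcess; prio++) {
>       for (String block : queues.get(prio)) {
>         if (chosen.size() >= blocksToProcess) {
>           break;
>         }
>         chosen.add(block);
>       }
>     }
>     return chosen;
>   }
> }
> {code}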
> In the subsequent call to BlockManager#computeReplicationWorkForBlocks, a
> source for the replication is chosen from among datanodes that have an
> available copy of the block needed. This is done in
> BlockManager#chooseSourceDatanode.
> chooseSourceDatanode picks the source datanode for the replication: a
> random datanode, from among those with an available copy of the block,
> that has not yet reached its replication limit (datanodes that are
> currently decommissioning are preferred).
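> As a sketch of that behavior (again a hypothetical simplification; the
> real method works on DatanodeDescriptor objects and more state):
> {code:java}
> import java.util.List;
> import java.util.Random;
>
> // Hypothetical simplification of BlockManager#chooseSourceDatanode:
> // prefer a decommissioning node, otherwise reservoir-sample a random
> // candidate, but skip any node already at its replication limit.
> class SourceChoiceSketch {
>   static final int MAX_REPLICATION_STREAMS = 2;   // illustrative limit
>   final Random rand = new Random();
>
>   static class Node {
>     int activeStreams;
>     boolean decommissioning;
>   }
>
>   Node chooseSourceDatanode(List<Node> nodesWithReplica) {
>     Node chosen = null;
>     int seen = 0;
>     for (Node n : nodesWithReplica) {
>       if (n.activeStreams >= MAX_REPLICATION_STREAMS) {
>         continue;                                 // at its limit: dismissed outright
>       }
>       if (n.decommissioning) {
>         return n;                                 // decommissioning nodes preferred
>       }
>       seen++;
>       if (chosen == null || rand.nextInt(seen) == 0) {
>         chosen = n;                               // uniform random among candidates
>       }
>     }
>     return chosen;                                // null => no replication scheduled
>   }
> }
> {code}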
> However, the block's priority does not inform this logic. If a datanode
> holds the last remaining replica of a block and has already reached its
> replication limit, the node is dismissed outright and the replication is
> not scheduled.
> In some situations this could lead to data loss, as the last remaining
> replica could disappear if the opportunity to schedule a replication is
> missed. It would be better to waive the max replication limit for
> highest-priority block replications.
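> A sketch of that change, reusing the names from the sketches above
> (illustrative only, not the attached patch): pass the block's priority
> into the choice and waive the limit at the highest priority.
> {code:java}
> // Hypothetical variant of chooseSourceDatanode that threads the
> // priority through and waives the limit for last-replica blocks.
> Node chooseSourceDatanode(List<Node> nodesWithReplica, int priority) {
>   boolean waiveLimit = (priority == QUEUE_HIGHEST_PRIORITY);
>   Node chosen = null;
>   int seen = 0;
>   for (Node n : nodesWithReplica) {
>     if (!waiveLimit && n.activeStreams >= MAX_REPLICATION_STREAMS) {
>       continue;               // the limit still applies to lower priorities
>     }
>     if (n.decommissioning) {
>       return n;
>     }
>     seen++;
>     if (chosen == null || rand.nextInt(seen) == 0) {
>       chosen = n;
>     }
>   }
>   return chosen;              // a last-replica source is no longer dismissed
> }
> {code}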
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira