[ https://issues.apache.org/jira/browse/HDFS-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14956657#comment-14956657 ]
Hudson commented on HDFS-1172:
------------------------------
FAILURE: Integrated in Hadoop-Hdfs-trunk #2431 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2431/])
HDFS-1172. Blocks in newly completed files are considered under-replicated too quickly (jing9: rev 2a987243423eb5c7e191de2ba969b7591a441c70)
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplication.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
> Blocks in newly completed files are considered under-replicated too quickly
> ---------------------------------------------------------------------------
>
> Key: HDFS-1172
> URL: https://issues.apache.org/jira/browse/HDFS-1172
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 0.21.0
> Reporter: Todd Lipcon
> Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: HDFS-1172-150907.patch, HDFS-1172.008.patch,
> HDFS-1172.009.patch, HDFS-1172.010.patch, HDFS-1172.011.patch,
> HDFS-1172.012.patch, HDFS-1172.013.patch, HDFS-1172.014.patch,
> HDFS-1172.014.patch, HDFS-1172.patch, hdfs-1172.txt, hdfs-1172.txt,
> replicateBlocksFUC.patch, replicateBlocksFUC1.patch, replicateBlocksFUC1.patch
>
>
> I've seen this for a long time and imagine it's a known issue, but I couldn't
> find an existing JIRA. The NN often schedules replication of the last block of
> a file very soon after the file is completed, before the other DNs in the
> pipeline have had a chance to report the new block. This causes a lot of
> extra replication work on the cluster: we replicate the block and then end up
> with multiple excess replicas, which are quickly deleted.
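
The committed change touches BlockManager.java and TestReplication.java; the sketch below is not that patch, only a minimal, self-contained Java illustration of the general mitigation idea the report points toward: replicas that the write pipeline is still expected to report are tracked as "pending", so a freshly completed block is not treated as under-replicated before those reports arrive. The PendingReplicaTracker class and all method names here are hypothetical, not HDFS APIs.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/**
 * Hypothetical, simplified tracker illustrating the idea: replicas that the
 * pipeline DataNodes are expected to report soon are counted as "pending",
 * so the block is not considered under-replicated immediately after the
 * file is completed. Names are illustrative only, not HDFS code.
 */
public class PendingReplicaTracker {

    // Assumed grace period before giving up on an unreported pipeline node.
    private static final long PENDING_TIMEOUT_MS = 5 * 60 * 1000L;

    private static final class Pending {
        final Set<String> expectedNodes = new HashSet<>();
        final long createdMs;
        Pending(Set<String> nodes, long nowMs) {
            expectedNodes.addAll(nodes);
            createdMs = nowMs;
        }
    }

    private final Map<Long, Pending> pendingByBlockId = new HashMap<>();

    /** Called when the last block of a file is completed: pipeline DataNodes
     *  that have not yet reported the block are recorded as expected replicas. */
    public synchronized void addExpectedReplicas(long blockId,
                                                 Set<String> unreportedPipelineNodes,
                                                 long nowMs) {
        if (!unreportedPipelineNodes.isEmpty()) {
            pendingByBlockId.put(blockId, new Pending(unreportedPipelineNodes, nowMs));
        }
    }

    /** Called when a DataNode reports the block (incremental or full report). */
    public synchronized void replicaReported(long blockId, String nodeId) {
        Pending p = pendingByBlockId.get(blockId);
        if (p != null) {
            p.expectedNodes.remove(nodeId);
            if (p.expectedNodes.isEmpty()) {
                pendingByBlockId.remove(blockId);
            }
        }
    }

    /** Replicas expected shortly, counted before declaring under-replication.
     *  Entries older than the timeout stop counting, so a dead pipeline node
     *  still leads to re-replication eventually. */
    public synchronized int pendingCount(long blockId, long nowMs) {
        Pending p = pendingByBlockId.get(blockId);
        if (p == null || nowMs - p.createdMs > PENDING_TIMEOUT_MS) {
            return 0;
        }
        return p.expectedNodes.size();
    }

    /** Replication-monitor check: schedule new copies only if live plus
     *  pending replicas still fall short of the replication factor. */
    public boolean needsReplication(int liveReplicas, int replicationFactor,
                                    long blockId, long nowMs) {
        return liveReplicas + pendingCount(blockId, nowMs) < replicationFactor;
    }

    public static void main(String[] args) {
        PendingReplicaTracker tracker = new PendingReplicaTracker();
        long blockId = 1001L;
        long now = System.currentTimeMillis();

        // File just completed: only the first pipeline node has reported so far.
        tracker.addExpectedReplicas(blockId, Set.of("dn2", "dn3"), now);

        // Without the pending count, 1 live replica < 3 would trigger replication.
        System.out.println("needs replication right after close: "
            + tracker.needsReplication(1, 3, blockId, now));          // false

        // dn2 and dn3 report shortly afterwards; no extra copies were scheduled.
        tracker.replicaReported(blockId, "dn2");
        tracker.replicaReported(blockId, "dn3");
        System.out.println("needs replication after reports: "
            + tracker.needsReplication(3, 3, blockId, now + 1000));   // false
    }
}

The design choice illustrated here is to count expected-but-unreported pipeline replicas toward the replication target for a bounded grace period, which avoids the report's over-replicate-then-delete churn while still re-replicating if a pipeline node never reports.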