[ https://issues.apache.org/jira/browse/HDFS-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13013538#comment-13013538 ]

Matt Foley commented on HDFS-1172:
----------------------------------

Fixing this issue will not only eliminate a performance problem, it will also
help with memory management.  Every over-replicated block gets its "triplets"
array re-allocated.  That array holds (3 x replication) object references used
to link the block into each datanode's blockList.  Whenever the replica count
exceeds the replication factor, the array is re-allocated to a larger size, and
it is never shrunk when the replica count drops back down.  If this is
happening to essentially every new block, an awful lot of memory is being
wasted on unused triplets.  In a 200M-block namenode, one excess triplet per
block is 4.8GB!
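
For a rough sense of where that 4.8GB comes from, here is a back-of-the-envelope
sketch (plain Java, illustrative names only, not the real BlockInfo/BlocksMap
code).  It assumes a 64-bit JVM with uncompressed 8-byte object references and
simply multiplies out one wasted triplet per block:

// Back-of-the-envelope sketch only; the class and variable names are
// illustrative, not the actual namenode data structures.  Each block's
// "triplets" array holds 3 object references per expected replica
// (the datanode, plus the previous and next block on that datanode's list).
public class TripletWasteSketch {
    public static void main(String[] args) {
        long blocks = 200_000_000L;   // 200M blocks in the namenode
        int refsPerTriplet = 3;       // one triplet = 3 object references
        int bytesPerRef = 8;          // assumed 64-bit references, no compressed oops
        long wastedBytes = blocks * refsPerTriplet * bytesPerRef;
        // 200M x 3 x 8 bytes = 4.8e9 bytes, i.e. the ~4.8GB mentioned above
        System.out.printf("One excess triplet per block: ~%.1f GB%n", wastedBytes / 1e9);
    }
}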

> Blocks in newly completed files are considered under-replicated too quickly
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-1172
>                 URL: https://issues.apache.org/jira/browse/HDFS-1172
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 0.21.0
>            Reporter: Todd Lipcon
>            Assignee: Hairong Kuang
>         Attachments: HDFS-1172.patch, replicateBlocksFUC.patch
>
>
> I've seen this for a long time, and imagine it's a known issue, but couldn't 
> find an existing JIRA. It often happens that we see the NN schedule 
> replication on the last block of files very quickly after they're completed, 
> before the other DNs in the pipeline have a chance to report the new block. 
> This results in a lot of extra replication work on the cluster, as we 
> replicate the block and then end up with multiple excess replicas which are 
> very quickly deleted.
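
Purely for illustration (this is not the attached patch, and the names below
are hypothetical), the kind of guard implied by the description is to hold off
on treating a just-completed block as under-replicated until the pipeline
datanodes have had a chance to report it:

// Hypothetical sketch of the idea only -- not the attached HDFS-1172 patch.
// A block that looks under-replicated only because its file was just
// completed is skipped until the pipeline datanodes have had a chance to
// send their block reports (or a grace period expires).
class FreshBlockReplicationGuard {
    private static final long REPORT_GRACE_MS = 30_000; // illustrative value

    boolean shouldScheduleReplication(int liveReplicas,
                                      int expectedReplication,
                                      long blockCompletedAtMs,
                                      long nowMs) {
        if (liveReplicas >= expectedReplication) {
            return false;                        // nothing to do
        }
        if (nowMs - blockCompletedAtMs < REPORT_GRACE_MS) {
            return false;                        // give the pipeline DNs time to report
        }
        return true;                             // genuinely under-replicated
    }
}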

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira