[ https://issues.apache.org/jira/browse/HDFS-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14635410#comment-14635410 ]

Hudson commented on HDFS-6945:
------------------------------

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2209 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2209/])
Move HDFS-6945 to 2.7.2 section in CHANGES.txt. (aajisaka: rev 
a628f675900d2533ddf86fb3d3e601238ecd68c3)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> BlockManager should remove a block from excessReplicateMap and decrement 
> ExcessBlocks metric when the block is removed
> ----------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-6945
>                 URL: https://issues.apache.org/jira/browse/HDFS-6945
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.5.0
>            Reporter: Akira AJISAKA
>            Assignee: Akira AJISAKA
>            Priority: Critical
>              Labels: metrics
>             Fix For: 2.8.0, 2.7.2
>
>         Attachments: HDFS-6945-003.patch, HDFS-6945-004.patch, 
> HDFS-6945-005.patch, HDFS-6945.2.patch, HDFS-6945.patch
>
>
> I'm seeing the ExcessBlocks metric increase to more than 300K in some 
> clusters, even though there are no over-replicated blocks (confirmed by fsck).
> After further investigation, I noticed that when a block is deleted, 
> BlockManager does not remove it from excessReplicateMap or decrement 
> excessBlocksCount.
> Normally the metric is decremented while processing a block report; however, 
> if the block has already been deleted, BlockManager never removes it from 
> excessReplicateMap and never decrements the metric.
> As a result, the metric and excessReplicateMap can grow without bound (i.e. a 
> memory leak can occur).
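
To make the failure mode concrete, below is a minimal, self-contained sketch of
the bookkeeping the description refers to. The names (ExcessReplicaTracker,
addExcess, removeBlock) are hypothetical simplifications and not the actual
BlockManager API; the point is only that the block-deletion path must also
clean up the excess-replica map and decrement the counter, which is the step
this issue reports as missing.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.atomic.AtomicLong;

    /**
     * Simplified model of the excess-replica bookkeeping (hypothetical names,
     * not the real BlockManager code).
     */
    public class ExcessReplicaTracker {
      // datanode id -> block ids with an excess replica on that node
      private final Map<String, Set<Long>> excessReplicateMap = new HashMap<>();
      // mirrors the ExcessBlocks metric
      private final AtomicLong excessBlocksCount = new AtomicLong();

      /** Record that the replica of blockId on datanode dn is excess. */
      public void addExcess(String dn, long blockId) {
        Set<Long> excess =
            excessReplicateMap.computeIfAbsent(dn, k -> new HashSet<>());
        if (excess.add(blockId)) {
          excessBlocksCount.incrementAndGet();
        }
      }

      /**
       * Called when a block is deleted. Without the cleanup below, entries for
       * the deleted block stay in excessReplicateMap forever and the counter is
       * never decremented -- the unbounded growth described in this issue.
       */
      public void removeBlock(long blockId) {
        excessReplicateMap.entrySet().removeIf(e -> {
          if (e.getValue().remove(blockId)) {
            excessBlocksCount.decrementAndGet();
          }
          return e.getValue().isEmpty(); // drop empty per-node sets
        });
      }

      public long getExcessBlocksCount() {
        return excessBlocksCount.get();
      }
    }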



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
