[ https://issues.apache.org/jira/browse/HDFS-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14387399#comment-14387399 ]

Tsz Wo Nicholas Sze commented on HDFS-6945:
-------------------------------------------

The new patch looks good.  However, it no longer applies cleanly, so the patch 
needs to be updated.  When you update it, could you also change removeBlock(..) 
to call removeBlockFromMap(..) instead of calling the individual methods?
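
For anyone following along, here is a minimal sketch of the shape that 
refactoring could take.  It is not the actual BlockManager code: only 
removeBlock(..), removeBlockFromMap(..), excessReplicateMap and 
excessBlocksCount are names from this issue; the types and the 
addExcessReplica/getExcessBlocksCount helpers are simplified placeholders.

{code:java}
// Sketch only: simplified stand-ins, not the real
// org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.atomic.AtomicLong;

class BlockManagerSketch {
  // Placeholder for blocksMap: blockId -> datanodes holding a replica.
  private final Map<Long, Set<String>> blocksMap = new HashMap<>();
  // Placeholder for excessReplicateMap: datanode -> blockIds with excess replicas.
  private final Map<String, Set<Long>> excessReplicateMap = new HashMap<>();
  // Placeholder for the ExcessBlocks metric.
  private final AtomicLong excessBlocksCount = new AtomicLong();

  /** Mark a replica of blockId on the given datanode as excess. */
  void addExcessReplica(String datanode, long blockId) {
    if (excessReplicateMap.computeIfAbsent(datanode, d -> new HashSet<>()).add(blockId)) {
      excessBlocksCount.incrementAndGet();
    }
  }

  /** Delete a block; all map and metric cleanup is delegated to removeBlockFromMap. */
  void removeBlock(long blockId) {
    // ... invalidate replicas, clear replication queues, etc. (omitted) ...
    removeBlockFromMap(blockId);  // single cleanup path instead of individual calls
  }

  /** Remove the block from blocksMap and excessReplicateMap, keeping the metric in sync. */
  void removeBlockFromMap(long blockId) {
    blocksMap.remove(blockId);
    for (Set<Long> excess : excessReplicateMap.values()) {
      if (excess.remove(blockId)) {
        excessBlocksCount.decrementAndGet();
      }
    }
  }

  long getExcessBlocksCount() {
    return excessBlocksCount.get();
  }
}
{code}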

> BlockManager should remove a block from excessReplicateMap and decrement 
> ExcessBlocks metric when the block is removed
> ----------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-6945
>                 URL: https://issues.apache.org/jira/browse/HDFS-6945
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.5.0
>            Reporter: Akira AJISAKA
>            Assignee: Akira AJISAKA
>            Priority: Critical
>              Labels: metrics
>         Attachments: HDFS-6945-003.patch, HDFS-6945-004.patch, 
> HDFS-6945.2.patch, HDFS-6945.patch
>
>
> I'm seeing the ExcessBlocks metric increase to more than 300K in some clusters; 
> however, there are no over-replicated blocks (confirmed by fsck).
> After further research, I noticed that when deleting a block, BlockManager does 
> not remove the block from excessReplicateMap or decrement excessBlocksCount.
> Usually the metric is decremented when a block report is processed; however, if 
> the block has already been deleted, BlockManager does not remove it from 
> excessReplicateMap or decrement the metric.
> As a result, the metric and excessReplicateMap can grow without bound (i.e. a 
> memory leak can occur).
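
To make the scenario in the quoted description concrete, here is a tiny driver 
against the BlockManagerSketch placeholder class from the sketch earlier in this 
message (again, placeholder names, not the real metrics plumbing); it shows the 
metric dropping back to zero once block removal also cleans up excessReplicateMap.

{code:java}
// Uses the BlockManagerSketch placeholder class from the earlier sketch.
class ExcessBlocksLeakDemo {
  public static void main(String[] args) {
    BlockManagerSketch bm = new BlockManagerSketch();

    // Simulate an over-replicated block: the replica on "dn1" is marked excess.
    long blockId = 42L;
    bm.addExcessReplica("dn1", blockId);
    System.out.println("excessBlocksCount = " + bm.getExcessBlocksCount()); // 1

    // Delete the block.  If removeBlock(..) skipped the excessReplicateMap cleanup,
    // the map entry and the metric would outlive the block; with the cleanup in
    // removeBlockFromMap(..) both are released.
    bm.removeBlock(blockId);
    System.out.println("excessBlocksCount = " + bm.getExcessBlocksCount()); // 0
  }
}
{code}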


