[ https://issues.apache.org/jira/browse/HDFS-14702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900585#comment-16900585 ]

He Xiaoqiao commented on HDFS-14702:
------------------------------------

[~crh], Thanks for your feedback. I just noticed that HDFS-8859 tries to improve the 
memory footprint, but it has not been backported. The `ReplicaMap` information is as follows:
||Type||Name||Value||
|ref|entrySet|null|
|int|hashSeed|0|
|int|modCount|1261173816|
|float|loadFactor|0.75|
|int|threshold|6291456|
|int|size|4093978|
|ref|table|java.util.HashMap$Entry[8388608] @ 0x78cb249f0|
|ref|values|java.util.HashMap$Values @ 0x778155350|
|ref|keySet|null|
!datanode.dump.png!
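
For reference, the numbers above are self-consistent for a `java.util.HashMap` grown to a 2^23-slot table: threshold 6291456 = 8388608 × 0.75, and size 4093978 is roughly 7x the ~600K replicas actually on this DataNode. Note too that `HashMap` never shrinks its bucket array on `remove()`. Below is a minimal standalone sketch of that behavior (plain JDK `HashMap`, not the actual DataNode code; on JDK 9+ it needs `--add-opens java.base/java.util=ALL-UNNAMED`, and JDK 8+ names the entry class `Node` where the JDK 7 dump above shows `Entry`):
{code:java}
import java.lang.reflect.Array;
import java.lang.reflect.Field;
import java.util.HashMap;

// Sketch only: shows that HashMap's bucket array, once grown, is never
// released by remove(). The sizes match the dump above (size 4093978
// forces a resize to 2^23 = 8388608 slots; threshold = 8388608 * 0.75
// = 6291456). On JDK 9+ run with:
//   --add-opens java.base/java.util=ALL-UNNAMED
public class HashMapNoShrinkDemo {
  static int tableLength(HashMap<?, ?> m) throws Exception {
    Field f = HashMap.class.getDeclaredField("table");
    f.setAccessible(true);
    Object t = f.get(m);
    return t == null ? 0 : Array.getLength(t);
  }

  public static void main(String[] args) throws Exception {
    HashMap<Long, Object> map = new HashMap<>();
    for (long i = 0; i < 4_093_978L; i++) {
      map.put(i, Boolean.TRUE);
    }
    System.out.println(tableLength(map)); // 8388608
    for (long i = 0; i < 4_000_000L; i++) {
      map.remove(i);                      // entries go away...
    }
    System.out.println(tableLength(map)); // ...but still 8388608
  }
}
{code}
So even if the stale entries were eventually removed, the 8388608-reference bucket array (tens of MB on its own) would stick around until the map itself is replaced.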

> Datanode.ReplicaMap memory leak
> -------------------------------
>
>                 Key: HDFS-14702
>                 URL: https://issues.apache.org/jira/browse/HDFS-14702
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.7.1
>            Reporter: He Xiaoqiao
>            Priority: Major
>         Attachments: datanode.dump.png
>
>
> DataNode memory is occupied by the ReplicaMap, causing high-frequency GC and 
> degraded write performance.
> There are about 600K block replicas on this DataNode, but a heap dump shows 
> over 8M items in the ReplicaMap with a footprint of over 500 MB. This looks 
> like a memory leak. One more data point: the block read/write ops on this node 
> are very high.
> We have not tested HDFS-8859 and have no idea whether it can solve this issue.
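
A rough back-of-the-envelope check of the 500 MB figure, assuming the JDK 7 `HashMap` layout from the dump above and typical HotSpot 64-bit sizes with compressed oops (the per-object byte counts are estimates, not measurements):
{code:java}
// Rough footprint estimate from the dump numbers; per-object sizes are
// typical HotSpot 64-bit values with compressed oops (assumed, not measured).
public class ReplicaMapFootprint {
  public static void main(String[] args) {
    long slots   = 8_388_608L;  // HashMap$Entry[] length from the dump
    long entries = 4_093_978L;  // HashMap.size from the dump
    long table   = slots * 4;   // 4 B per compressed reference  ~  32 MB
    long nodes   = entries * 32; // Entry: 12 B header + hash + 3 refs, padded ~ 125 MB
    long payload = entries * 80; // guess: key + ReplicaInfo objects ~ 312 MB
    System.out.printf("~%d MB total%n", (table + nodes + payload) >> 20);
  }
}
{code}
At the expected ~600K entries the same arithmetic gives well under 100 MB, so the footprint itself points at stale entries rather than per-replica overhead.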


