[https://issues.apache.org/jira/browse/HDFS-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14952741#comment-14952741]
Hudson commented on HDFS-8988:
------------------------------
FAILURE: Integrated in Hadoop-trunk-Commit #8610 (See
[https://builds.apache.org/job/Hadoop-trunk-Commit/8610/])
HDFS-8988. Use LightWeightHashSet instead of LightWeightLinkedSet in
BlockManager#excessReplicateMap. (yliu: rev
73b86a5046fe3262dde7b05be46b18575e35fd5f)
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
> Use LightWeightHashSet instead of LightWeightLinkedSet in
> BlockManager#excessReplicateMap
> -----------------------------------------------------------------------------------------
>
> Key: HDFS-8988
> URL: https://issues.apache.org/jira/browse/HDFS-8988
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Yi Liu
> Assignee: Yi Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-8988.001.patch, HDFS-8988.002.patch
>
>
> {code}
> public final Map<String, LightWeightLinkedSet<Block>> excessReplicateMap =
> new HashMap<>();
> {code}
> {{LightWeightLinkedSet}} extends {{LightWeightHashSet}} and additionally
> stores its elements in a doubly linked list to ensure ordered traversal. It
> therefore needs more memory per entry (2 references = 8 + 8 = 16 bytes,
> assuming a 64-bit system/JVM).
> I have gone through the source code, and we don't need ordered traversal for
> excess replicated blocks, so we can use {{LightWeightHashSet}} to save
> memory.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)