[
https://issues.apache.org/jira/browse/HDFS-8859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14934697#comment-14934697
]
Yi Liu commented on HDFS-8859:
------------------------------
Thanks Uma.
There is an unused import; I will remove it in the new version of the patch.
{quote}
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LightWeightGSet.java:69:29:
Variable 'entries' must be private and have accessor methods.
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LightWeightGSet.java:71:17:
Variable 'hash_mask' must be private and have accessor methods.
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LightWeightGSet.java:73:17:
Variable 'size' must be private and have accessor methods.
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LightWeightGSet.java:77:17:
Variable 'modification' must be private and have accessor methods.
{quote}
Making the variables of the superclass 'protected' and modifying them in subclasses
is natural behavior; I don't know why checkstyle reports that we should make them
private and access them through accessor methods. We already access protected
variables of a superclass directly elsewhere in the Hadoop code.
So I will leave these checkstyle items as they are.
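For illustration, the pattern these warnings point at looks like the sketch below
(the class names are hypothetical; the field names match the checkstyle output): a
superclass exposes its internal state as protected fields, and a subclass mutates
them directly, for example when resizing.
{code}
// Hypothetical sketch of the superclass/subclass pattern discussed above.
// The superclass keeps its state in protected fields instead of private
// fields with accessor methods.
class BaseSet {
  protected Object[] entries;   // backing array for the elements
  protected int hash_mask;      // mask used to index into entries
  protected int size;           // current number of elements
  protected int modification;   // modification counter for fail-fast iterators
}

// The subclass modifies those fields directly, e.g. when it resizes the array.
class ResizableSet extends BaseSet {
  protected void resize(int newCapacity) {
    entries = new Object[newCapacity];  // direct access, no setter
    hash_mask = newCapacity - 1;
    modification++;
  }
}
{code}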
> Improve DataNode ReplicaMap memory footprint to save about 45%
> --------------------------------------------------------------
>
> Key: HDFS-8859
> URL: https://issues.apache.org/jira/browse/HDFS-8859
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: datanode
> Reporter: Yi Liu
> Assignee: Yi Liu
> Attachments: HDFS-8859.001.patch, HDFS-8859.002.patch,
> HDFS-8859.003.patch, HDFS-8859.004.patch, HDFS-8859.005.patch
>
>
> By using the following approach, we can save about *45%* of the memory footprint for
> each block replica in DataNode memory (this JIRA only talks about the *ReplicaMap* in
> the DataNode). The details are:
> In ReplicaMap,
> {code}
> private final Map<String, Map<Long, ReplicaInfo>> map =
>     new HashMap<String, Map<Long, ReplicaInfo>>();
> {code}
> Currently we use a HashMap {{Map<Long, ReplicaInfo>}} to store the replicas
> in memory. The key is the block id of the block replica, which is already
> included in {{ReplicaInfo}}, so that memory can be saved. Also, each HashMap Entry
> has object overhead. We can implement a lightweight set which is similar to
> {{LightWeightGSet}}, but not of a fixed size ({{LightWeightGSet}} uses a fixed
> size for the entries array, usually a big value; an example is {{BlocksMap}};
> this avoids full GC since there is no need to resize). We should also be able
> to get an element through its key.
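> The sketch below is only an illustration of this idea; the class, interface and
> method names are hypothetical and not the actual patch. Each element exposes its
> block id as the key and carries an intrusive next reference, so no wrapper Entry
> object or boxed Long key is needed, and the bucket array can grow on demand
> instead of being allocated at a fixed size.
> {code}
> // Illustrative sketch (hypothetical names): a resizable, GSet-like structure
> // where the elements themselves form the hash chains.
> interface Element {
>   long getBlockId();           // the key is derived from the element itself
>   Element getNext();           // intrusive link to the next element in the bucket
>   void setNext(Element next);
> }
>
> class LightWeightResizableSet {
>   private Element[] buckets = new Element[16];  // small initial capacity, grows on demand
>   private int size;
>
>   private int indexOf(long blockId) {
>     return (int) (blockId & (buckets.length - 1));
>   }
>
>   Element get(long blockId) {
>     for (Element e = buckets[indexOf(blockId)]; e != null; e = e.getNext()) {
>       if (e.getBlockId() == blockId) {
>         return e;
>       }
>     }
>     return null;
>   }
>
>   void put(Element elem) {
>     int i = indexOf(elem.getBlockId());
>     elem.setNext(buckets[i]);        // chain through the element's own next field
>     buckets[i] = elem;
>     if (++size > buckets.length) {
>       resize(buckets.length << 1);   // grow instead of using a big fixed-size array
>     }
>   }
>
>   private void resize(int newCapacity) {
>     Element[] old = buckets;
>     buckets = new Element[newCapacity];
>     for (Element head : old) {       // rehash every element into the new array
>       for (Element e = head; e != null; ) {
>         Element next = e.getNext();
>         int i = indexOf(e.getBlockId());
>         e.setNext(buckets[i]);
>         buckets[i] = e;
>         e = next;
>       }
>     }
>   }
> }
> {code}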
> Following is a comparison of the memory footprint if we implement a lightweight set
> as described:
> We can save:
> {noformat}
> SIZE (bytes)   ITEM
> 20             the key: Long (12 bytes object overhead + 8 bytes long value)
> 12             HashMap Entry object overhead
> 4              reference to the key in Entry
> 4              reference to the value in Entry
> 4              hash in Entry
> {noformat}
> Total: -44 bytes
> We need to add:
> {noformat}
> SIZE (bytes)   ITEM
> 4              a reference to the next element in ReplicaInfo
> {noformat}
> Total: +4 bytes
> So in total we can save 44 - 4 = 40 bytes for each block replica.
> And currently one finalized replica needs around 46 bytes (note: we ignore
> memory alignment here).
> We can save 1 - (4 + 46) / (44 + 46) = 1 - 50/90 ≈ *45%* of the memory for each
> block replica in the DataNode.
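> As a quick check of the arithmetic, the tiny snippet below recomputes the ratio from
> the estimated byte counts in this description (these are estimates, not measured values):
> {code}
> // Recheck of the estimate above using the byte counts from this description.
> public class ReplicaMapSavings {
>   public static void main(String[] args) {
>     int saved = 20 + 12 + 4 + 4 + 4;  // 44 bytes of HashMap key/Entry overhead removed
>     int added = 4;                    // one next reference added to ReplicaInfo
>     int replica = 46;                 // approximate size of a finalized replica
>     double ratio = 1.0 - (double) (added + replica) / (saved + replica);
>     // prints "saved per replica: 40 bytes, ratio: 44.4%", i.e. about 45%
>     System.out.printf("saved per replica: %d bytes, ratio: %.1f%%%n",
>         saved - added, ratio * 100);
>   }
> }
> {code}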
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)