[
https://issues.apache.org/jira/browse/HDFS-12042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16065184#comment-16065184
]
Misha Dmitriev commented on HDFS-12042:
---------------------------------------
Yes [~jojochuang], you are correct. The essence of the patch is to (1)
instantiate AbstractINodeDiffList.diffs lazily, to avoid empty ArrayLists, and
(2) instantiate that list and INodeDirectory.children with a very small
initial capacity, to reduce the waste from unused array slots in ArrayLists.
I'll change DEFAULT_FILES_PER_DIRECTORY as you suggest.
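To make (1) and (2) concrete, here is a minimal sketch of the idea; the class
and member names below are simplified placeholders for illustration, not the
exact code in the patch:
{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative holder; in the patch the same idea applies to
// AbstractINodeDiffList.diffs and INodeDirectory.children.
public class LazyListHolder<T> {
  private List<T> elements; // (1) stays null until the first element arrives

  void add(T element) {
    if (elements == null) {
      // (2) small initial capacity instead of ArrayList's default of 10
      elements = new ArrayList<>(2);
    }
    elements.add(element);
  }

  List<T> getElements() {
    // Callers see an empty list, never null
    return elements == null ? Collections.<T>emptyList() : elements;
  }
}
{code}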
Note that the jxray analysis shows that a considerable amount of memory (5.3%)
could be saved further if we got rid of 1-element ArrayLists. They are
wasteful because such an ArrayList still consumes at least ~60 bytes (the
ArrayList object itself plus its separate backing Object[] array, each with
its own object header), whereas it could be replaced with a simple pointer to
the single object that it contains. That is:
{code}
// Old code
ArrayList<X> foo;
X get(int i) { return foo.get(i); }
...

// New code
Object foo; // holds either an ArrayList<X> or a single X object directly

@SuppressWarnings("unchecked")
X get(int i) {
  if (foo instanceof X) {
    // Single-element case: only index 0 is valid
    if (i != 0) throw new IndexOutOfBoundsException("Wrong index: " + i);
    return (X) foo;
  } else {
    return ((ArrayList<X>) foo).get(i);
  }
}
{code}
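For completeness, the write path would follow the same shape, promoting the
field from a single object to a list on the second insertion. Again an
illustrative sketch, with X standing in for some concrete element class:
{code}
// New code, continued: adding an element to the Object-typed field
@SuppressWarnings("unchecked")
void add(X element) {
  if (foo == null) {
    foo = element; // first element: store it directly, no ArrayList at all
  } else if (foo instanceof X) {
    // second element: promote the single object to a small ArrayList
    ArrayList<X> list = new ArrayList<>(2);
    list.add((X) foo);
    list.add(element);
    foo = list;
  } else {
    ((ArrayList<X>) foo).add(element);
  }
}
{code}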
I've done such optimizations in the past, and the same can be done here. But
as you can see, it results in somewhat tricky code. Let me know what you think
about this.
> Reduce memory used by snapshot diff data structures
> ---------------------------------------------------
>
> Key: HDFS-12042
> URL: https://issues.apache.org/jira/browse/HDFS-12042
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Misha Dmitriev
> Assignee: Misha Dmitriev
> Attachments: HDFS-12042.01.patch, HDFS-12042.02.patch
>
>
> When a snapshot diff operation is performed in a NameNode that manages
> several million HDFS files/directories, the NN needs a lot of memory. Some of
> that memory is wasted due to suboptimal data structures, such as empty or
> under-populated ArrayLists. Analyzing one heap dump with jxray
> (www.jxray.com), we observed the following problems with data structures:
> {code}
> 9. BAD COLLECTIONS
> Total collections: 99,707,902  Bad collections: 88,799,760  Overhead: 9,063,898K (18.2%)
>
> Top bad collections:
>        Ovhd            Problem   Num objs   Type
> -------------------------------------------------
>   3,056,014K (6.1%)    small     29435572   j.u.ArrayList
>   2,641,373K (5.3%)    1-elem    21837906   j.u.ArrayList
>     864,215K (1.7%)    1-elem     5291813   j.u.TreeSet
>     808,456K (1.6%)    1-elem     3045847   j.u.HashMap
>     602,470K (1.2%)    empty     18549109   j.u.ArrayList
>     441,563K (0.9%)    empty      4356975   j.u.TreeSet
>     373,088K (0.7%)    empty      5297007   j.u.HashMap
>     270,324K (0.5%)    small       931394   j.u.HashMap
> {code}
> The data structures created by HDFS code that suffer from the above problems
> are, in particular:
> {code}
> 4,228,182K (8.5%): j.u.ArrayList: 19412263 of small 2,111,087K (4.2%),
> 12932408 of 1-elem 1,717,585K (3.4%), 12784310 of empty 399,509K (0.8%)
>  <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs
>  <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs
>  <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[]
>  <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.features
>  <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc
>  <-- org.apache.hadoop.util.LightWeightGSet$LinkedElement[]
>  <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries
>  <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks
>  <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap
>  <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group
>  <-- Java Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
> {code}
> and
> {code}
> 575,557K (1.2%): j.u.ArrayList: 4363271 of 1-elem 409,056K (0.8%),
> 2439001 of small 166,482K (0.3%)
>  <-- org.apache.hadoop.hdfs.server.namenode.INodeDirectory.children
>  <-- org.apache.hadoop.util.LightWeightGSet$LinkedElement[]
>  <-- org.apache.hadoop.util.LightWeightGSet.entries
>  <-- org.apache.hadoop.hdfs.server.namenode.INodeMap.map
>  <-- org.apache.hadoop.hdfs.server.namenode.FSDirectory.inodeMap
>  <-- org.apache.hadoop.hdfs.server.namenode.FSNamesystem.dir
>  <-- org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.this$0
>  <-- org.apache.hadoop.util.Daemon.target
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group
>  <-- Java Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
> {code}
> There are several different reference chains that all lead to
> FileDiffList.diffs or INodeDirectory.children. The total percentage of memory
> wasted by these data structures in the analyzed dump is about 12%. By
> creating these lists lazily and/or with a capacity that better matches their
> actual size, we should be able to reclaim a significant part of this 12%.