[ https://issues.apache.org/jira/browse/HDFS-12051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16338473#comment-16338473 ]
Manoj Govindassamy commented on HDFS-12051:
-------------------------------------------

Thanks for working on this, [~mi...@cloudera.com]. A few comments on HDFS-12051.07.patch:

{{NameCache.java}}
* line 97: {{cache = new byte[cacheSize][];}} Since this array is allocated as one contiguous block, we need to restrict the cache size to something much smaller than your current MAX size of 1 << 30. Your thoughts?
* {{#cache}} now follows the {{open addressing}} model. Any reason you moved to this model from your initial design? (A sketch of this model follows these comments.)
* {{#put()}}
** line 118: on the first fill of a cache slot, shouldn't the stored name be a new byte array constructed from the passed-in name? Why store the caller's array directly?
** With the {{open addressing}} model, when a cache slot is overwritten with a new name, there can be INodes that still refer to the old name and are now cut off from the cache.
* I don't see any cache invalidation, even when INodes are removed. This holds on to memory. Though not huge, design-wise it's not clean to leave stale values in the cache and incur the lookup penalty on future {{put()}} calls.
* {{#getSize()}}: since there is no cache invalidation, and given the open addressing model, the size returned is not accurate.
* line 149: {{cacheSizeFor}}: does this round up or round down to the nearest power of 2? Please add a link to {{HashMap#tableSizeFor()}} in the comment to show where the code was inspired from. (A sketch of that rounding also follows below.)
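To make the overwrite concern concrete, here is a minimal sketch, not the patch's actual code, of the model being discussed: a fixed-size, power-of-two table of byte[] slots where a colliding name simply overwrites the previous occupant (strictly, a direct-mapped overwrite-on-collision table rather than probing open addressing). Class and method names here are illustrative only.

{code:java}
import java.util.Arrays;

// Illustrative sketch of an overwrite-on-collision byte[] interner.
public class ByteArrayInterner {
  private final byte[][] cache; // one contiguous slot array (the line 97 concern)
  private final int mask;

  public ByteArrayInterner(int cacheSize) { // cacheSize must be a power of 2
    this.cache = new byte[cacheSize][];
    this.mask = cacheSize - 1;
  }

  /** Returns a canonical byte[] equal in content to {@code name}. */
  public byte[] put(byte[] name) {
    int slot = Arrays.hashCode(name) & mask;
    byte[] cached = cache[slot];
    if (cached != null && Arrays.equals(cached, name)) {
      return cached; // hit: all callers share one copy
    }
    // Miss: store a defensive copy instead of the caller's array, so the
    // cache never holds a buffer the caller might later mutate (the line
    // 118 concern). Note the overwrite: INodes still pointing at the
    // previous occupant keep their arrays but are cut off from the cache,
    // and a stale entry is otherwise never invalidated.
    byte[] copy = Arrays.copyOf(name, name.length);
    cache[slot] = copy;
    return copy;
  }
}
{code}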
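For the {{cacheSizeFor}} question, {{HashMap#tableSizeFor()}} (JDK 8) rounds up, never down, to the next power of two. A sketch of that technique is below; the {{MAX_CACHE_SIZE}} constant name is illustrative, echoing the 1 << 30 cap questioned above.

{code:java}
// Power-of-two rounding in the style of java.util.HashMap#tableSizeFor().
static final int MAX_CACHE_SIZE = 1 << 30;

static int cacheSizeFor(int cap) {
  int n = cap - 1;  // subtract first so an exact power of two maps to itself
  n |= n >>> 1;     // propagate the highest set bit into every lower bit...
  n |= n >>> 2;
  n |= n >>> 4;
  n |= n >>> 8;
  n |= n >>> 16;    // ...so n becomes 2^k - 1
  return (n < 0) ? 1 : (n >= MAX_CACHE_SIZE) ? MAX_CACHE_SIZE : n + 1;
}
{code}

For example, {{cacheSizeFor(17)}} returns 32 while {{cacheSizeFor(16)}} returns 16, confirming it is a round-up.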
> Reimplement NameCache in NameNode: Intern duplicate byte[] arrays (mainly
> those denoting file/directory names) to save memory
> -----------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-12051
>                 URL: https://issues.apache.org/jira/browse/HDFS-12051
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Misha Dmitriev
>            Assignee: Misha Dmitriev
>            Priority: Major
>         Attachments: HDFS-12051.01.patch, HDFS-12051.02.patch,
> HDFS-12051.03.patch, HDFS-12051.04.patch, HDFS-12051.05.patch,
> HDFS-12051.06.patch, HDFS-12051.07.patch
>
>
> When a snapshot diff operation is performed in a NameNode that manages
> several million HDFS files/directories, the NN needs a lot of memory.
> Analyzing one heap dump with jxray (www.jxray.com), we observed that
> duplicate byte[] arrays result in 6.5% memory overhead, and most of these
> arrays are referenced by
> {{org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name}}
> and {{org.apache.hadoop.hdfs.server.namenode.INodeFile.name}}:
> {code:java}
> 19. DUPLICATE PRIMITIVE ARRAYS
> Types of duplicate objects:
>      Ovhd          Num objs   Num unique objs  Class name
> 3,220,272K (6.5%)  104749528  25760871         byte[]
> ....
> 1,841,485K (3.7%), 53194037 dup arrays (13158094 unique)
> 3510556 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...),
> 2228255 of byte[8](48, 48, 48, 48, 48, 48, 95, 48),
> 357439 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...),
> 237395 of byte[8](48, 48, 48, 48, 48, 49, 95, 48),
> 227853 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...),
> 179193 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...),
> 169487 of byte[8](48, 48, 48, 48, 48, 50, 95, 48),
> 145055 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...),
> 128134 of byte[8](48, 48, 48, 48, 48, 51, 95, 48),
> 108265 of byte[17](112, 97, 114, 116, 45, 109, 45, 48, 48, 48, ...)
> ... and 45902395 more arrays, of which 13158084 are unique
>  <-- org.apache.hadoop.hdfs.server.namenode.INodeFileAttributes$SnapshotCopy.name
>  <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiff.snapshotINode
>  <-- {j.u.ArrayList}
>  <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs
>  <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs
>  <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[]
>  <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.features
>  <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc
>  <-- ... (1 elements) ...
>  <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries
>  <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks
>  <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap
>  <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group
>  <-- Java Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
> 409,830K (0.8%), 13482787 dup arrays (13260241 unique)
> 430 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...),
> 353 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...),
> 352 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...),
> 350 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...),
> 342 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...),
> 341 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...),
> 341 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...),
> 340 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...),
> 337 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...),
> 334 of byte[32](116, 97, 115, 107, 95, 49, 52, 57, 55, 48, ...)
> ... and 13479257 more arrays, of which 13260231 are unique
>  <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.name
>  <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc
>  <-- org.apache.hadoop.util.LightWeightGSet$LinkedElement[]
>  <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries
>  <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks
>  <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap
>  <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>  <-- j.l.Thread[] <-- j.l.ThreadGroup.threads <-- j.l.Thread.group
>  <-- Java Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
> ....
> {code}
> There are several other places in the NameNode code that also produce
> duplicate {{byte[]}} arrays.
> To eliminate this duplication and reclaim memory, we will need to write a
> small class similar to StringInterner, but designed specifically for
> byte[] arrays.
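As a footnote to the dump quoted above: the byte values are ASCII codes, so the visible prefixes of the most frequent duplicates decode to ordinary file and task names, which is why file/directory names dominate the duplication. A quick check (class name hypothetical; only the prefixes shown in the dump can be reconstructed, since the trailing "..." elements are elided):

{code:java}
import java.nio.charset.StandardCharsets;

// Decode the visible prefixes of the duplicated arrays from the dump.
public class DecodeDumpNames {
  public static void main(String[] args) {
    byte[] a = {112, 97, 114, 116, 45, 109, 45, 48, 48, 48}; // byte[17] prefix
    byte[] b = {48, 48, 48, 48, 48, 48, 95, 48};             // complete byte[8]
    byte[] c = {116, 97, 115, 107, 95, 49, 52, 57, 55, 48};  // byte[32] prefix
    System.out.println(new String(a, StandardCharsets.US_ASCII)); // part-m-000
    System.out.println(new String(b, StandardCharsets.US_ASCII)); // 000000_0
    System.out.println(new String(c, StandardCharsets.US_ASCII)); // task_14970
  }
}
{code}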