[ https://issues.apache.org/jira/browse/HDFS-12042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16064036#comment-16064036 ]

Hadoop QA commented on HDFS-12042:
----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 33s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 6 new + 17 unchanged - 1 fixed = 23 total (was 18) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 32s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 53s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.server.namenode.TestTruncateQuotaUpdate |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshotWithRandomECPolicy |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport |
|   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshot |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
|   | hadoop.hdfs.TestEncryptionZonesWithKMS |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12042 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12874579/HDFS-12042.01.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 394d0c52e9c1 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 144753e |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/20052/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/20052/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/20052/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/20052/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |


This message was automatically generated.



> Reduce memory used by snapshot diff data structures
> ---------------------------------------------------
>
>                 Key: HDFS-12042
>                 URL: https://issues.apache.org/jira/browse/HDFS-12042
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Misha Dmitriev
>            Assignee: Misha Dmitriev
>         Attachments: HDFS-12042.01.patch
>
>
> When a snapshot diff operation is performed on a NameNode that manages several
> million HDFS files/directories, the NN needs a lot of memory. Some of that
> memory is wasted on suboptimal data structures, such as empty or
> under-populated ArrayLists. Analyzing one heap dump with jxray
> (www.jxray.com), we observed the following problems with data structures:
> {code}
> 9. BAD COLLECTIONS
> Total collections: 99,707,902  Bad collections: 88,799,760  Overhead: 9,063,898K (18.2%)
> Top bad collections:
>        Ovhd          Problem    Num objs    Type
> -------------------------------------------------
> 3,056,014K (6.1%)     small     29435572    j.u.ArrayList
> 2,641,373K (5.3%)    1-elem     21837906    j.u.ArrayList
>   864,215K (1.7%)    1-elem      5291813    j.u.TreeSet
>   808,456K (1.6%)    1-elem      3045847    j.u.HashMap
>   602,470K (1.2%)     empty     18549109    j.u.ArrayList
>   441,563K (0.9%)     empty      4356975    j.u.TreeSet
>   373,088K (0.7%)     empty      5297007    j.u.HashMap
>   270,324K (0.5%)     small       931394    j.u.HashMap
> {code}
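>
> (Editorial illustration, not part of the original report.) The "1-elem" and "empty" overhead above comes from how java.util.ArrayList allocates its backing array. A minimal sketch of the effect, assuming JDK 8 (where the no-arg constructor grows the backing Object[] to DEFAULT_CAPACITY = 10 on the first add) and uncompressed 8-byte references:
> {code:java}
> import java.util.ArrayList;
> import java.util.List;
>
> // Sketch only: shows why a one-element ArrayList wastes memory and how a
> // right-sized initial capacity avoids it. Not code from the HDFS-12042 patch.
> public class ArrayListOverhead {
>     public static void main(String[] args) {
>         List<String> oneElem = new ArrayList<>();      // grows to Object[10] on first add
>         oneElem.add("only-entry");                     // 9 of 10 slots stay unused
>
>         List<String> rightSized = new ArrayList<>(1);  // backing array is Object[1]
>         rightSized.add("only-entry");                  // no slack
>
>         // Rough scale, using the 21,837,906 one-element ArrayLists reported above
>         // and 8-byte references (halve this if compressed oops are enabled):
>         long wastedSlotBytes = 9L * 8;                 // unused slots per list
>         System.out.println("~" + (21_837_906L * wastedSlotBytes) / (1024 * 1024)
>                 + " MB wasted in backing-array slack alone");
>     }
> }
> {code}
>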
> In particular, the data structures created by HDFS code that suffer from the
> above problems are:
> {code}
>   4,228,182K (8.5%): j.u.ArrayList: 19412263 of small 2,111,087K (4.2%), 12932408 of 1-elem 1,717,585K (3.4%), 12784310 of empty 399,509K (0.8%)
>      <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileDiffList.diffs
>      <-- org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature.diffs
>      <-- org.apache.hadoop.hdfs.server.namenode.INode$Feature[]
>      <-- org.apache.hadoop.hdfs.server.namenode.INodeFile.features
>      <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo.bc
>      <-- org.apache.hadoop.util.LightWeightGSet$LinkedElement[]
>      <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries
>      <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks
>      <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap$1.entries
>      <-- org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.blocks
>      <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.blocksMap
>      <-- org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.this$0
>      <-- j.l.Thread[]
>      <-- j.l.ThreadGroup.threads
>      <-- j.l.Thread.group
>      <-- Java Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
> {code}
> and
> {code}
>   575,557K (1.2%): j.u.ArrayList: 4363271 of 1-elem 409,056K (0.8%), 2439001 of small 166,482K (0.3%)
>      <-- org.apache.hadoop.hdfs.server.namenode.INodeDirectory.children
>      <-- org.apache.hadoop.util.LightWeightGSet$LinkedElement[]
>      <-- org.apache.hadoop.util.LightWeightGSet.entries
>      <-- org.apache.hadoop.hdfs.server.namenode.INodeMap.map
>      <-- org.apache.hadoop.hdfs.server.namenode.FSDirectory.inodeMap
>      <-- org.apache.hadoop.hdfs.server.namenode.FSNamesystem.dir
>      <-- org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.this$0
>      <-- org.apache.hadoop.util.Daemon.target
>      <-- org.apache.hadoop.hdfs.server.namenode.FSDirectory.inodeMap
>      <-- org.apache.hadoop.hdfs.server.namenode.FSNamesystem.dir
>      <-- org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.this$0
>      <-- org.apache.hadoop.util.Daemon.target
>      <-- j.l.Thread[]
>      <-- j.l.ThreadGroup.threads
>      <-- j.l.Thread.group
>      <-- Java Static: org.apache.hadoop.fs.FileSystem$Statistics.STATS_DATA_CLEANER
> {code}
> There are several different reference chains that all lead to
> FileDiffList.diffs or INodeDirectory.children. In the analyzed dump, these
> data structures waste about 12% of the heap in total. By creating these lists
> lazily and/or with a capacity that better matches their actual size, we should
> be able to reclaim a significant part of that 12%.
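>
> (Editorial illustration.) A minimal sketch of the lazy-creation / right-sizing idiom the description proposes; the holder class and method names below are hypothetical, not the actual HDFS-12042 patch:
> {code:java}
> import java.util.ArrayList;
> import java.util.Collections;
> import java.util.List;
>
> // Hypothetical owner of a diff list, in the spirit of FileDiffList /
> // INodeDirectory.children. Shows the idiom only, not the real patch.
> class DiffHolder<T> {
>     private List<T> diffs;  // stays null until the first diff: the empty case costs one field
>
>     void addDiff(T diff) {
>         if (diffs == null) {
>             diffs = new ArrayList<>(1);  // most such lists hold one element, so start at 1
>         }
>         diffs.add(diff);
>     }
>
>     List<T> getDiffs() {
>         // Reads never allocate; all empty holders share one immutable list.
>         return diffs == null ? Collections.emptyList() : diffs;
>     }
>
>     void compact() {
>         // Once the list is known to be stable, drop the slack in its backing array.
>         if (diffs instanceof ArrayList) {
>             ((ArrayList<T>) diffs).trimToSize();
>         }
>     }
> }
> {code}
> Lazy allocation eliminates the empty lists entirely, while an initial capacity of 1 plus trimToSize() removes the slack in one-element lists.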



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
