[
https://issues.apache.org/jira/browse/HDFS-8859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14695178#comment-14695178
]
Hadoop QA commented on HDFS-8859:
---------------------------------
\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 19m 21s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to include 3 new or modified test files. |
| {color:green}+1{color} | javac | 7m 52s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 9m 51s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit | 0m 23s | The applied patch does not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle | 1m 50s | The applied patch generated 6 new checkstyle issues (total was 12, now 16). |
| {color:red}-1{color} | whitespace | 0m 1s | The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install | 1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 33s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 4m 29s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests | 22m 33s | Tests failed in hadoop-common. |
| {color:red}-1{color} | hdfs tests | 76m 49s | Tests failed in hadoop-hdfs. |
| | | 145m 35s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.ha.TestZKFailoverController |
| | hadoop.net.TestNetUtils |
| | hadoop.hdfs.TestReplication |
| | hadoop.hdfs.TestSafeMode |
| | hadoop.hdfs.TestDatanodeRegistration |
| | hadoop.hdfs.tools.TestDebugAdmin |
| | hadoop.hdfs.TestSetrepIncreasing |
| | hadoop.hdfs.TestDatanodeReport |
| | hadoop.hdfs.TestDFSShellGenericOptions |
| | hadoop.hdfs.TestParallelRead |
| | hadoop.hdfs.tools.TestStoragePolicyCommands |
| | hadoop.hdfs.TestDFSRemove |
| | hadoop.hdfs.qjournal.TestSecureNNWithQJM |
| | hadoop.hdfs.web.TestWebHdfsTokens |
| | hadoop.hdfs.TestHFlush |
| | hadoop.hdfs.TestPersistBlocks |
| | hadoop.hdfs.TestParallelShortCircuitReadNoChecksum |
| | hadoop.hdfs.TestEncryptedTransfer |
| | hadoop.hdfs.TestQuota |
| | hadoop.hdfs.TestDFSClientFailover |
| | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
| | hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForAcl |
| | hadoop.hdfs.tools.TestDFSAdmin |
| | hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead |
| | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
| | hadoop.hdfs.web.TestWebHDFS |
| | hadoop.hdfs.TestFileAppend |
| | hadoop.hdfs.TestFileLengthOnClusterRestart |
| | hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForContentSummary |
| | hadoop.hdfs.TestFSOutputSummer |
| | hadoop.hdfs.TestEncryptionZonesWithHA |
| | hadoop.hdfs.TestBlockReaderFactory |
| | hadoop.hdfs.TestDFSFinalize |
| | hadoop.hdfs.TestDisableConnCache |
| | hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes |
| | hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForXAttr |
| | hadoop.hdfs.web.TestHttpsFileSystem |
| | hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter |
| | hadoop.hdfs.web.TestWebHDFSAcl |
| | hadoop.hdfs.TestHDFSTrash |
| | hadoop.hdfs.TestDistributedFileSystem |
| | hadoop.hdfs.TestDataTransferKeepalive |
| | hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer |
| | hadoop.hdfs.web.TestWebHDFSForHA |
| | hadoop.hdfs.TestBlockMissingException |
| | hadoop.hdfs.TestPipelines |
| | hadoop.hdfs.TestRenameWhileOpen |
| | hadoop.hdfs.TestFileCreationClient |
| | hadoop.hdfs.TestEncryptionZones |
| | hadoop.hdfs.TestFileAppend3 |
| | hadoop.hdfs.TestBalancerBandwidth |
| | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
| | hadoop.hdfs.TestSeekBug |
| | hadoop.hdfs.TestParallelShortCircuitReadUnCached |
| | hadoop.hdfs.TestBlockReaderLocal |
| | hadoop.hdfs.TestListFilesInFileContext |
| | hadoop.hdfs.web.TestWebHDFSXAttr |
| | hadoop.hdfs.TestFileStatus |
| | hadoop.hdfs.web.TestFSMainOperationsWebHdfs |
| Timed out tests | org.apache.hadoop.hdfs.TestFileCreation |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12750254/HDFS-8859.004.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 53bef9c |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/11987/artifact/patchprocess/diffcheckstylehadoop-common.txt |
| whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/11987/artifact/patchprocess/whitespace.txt |
| hadoop-common test log | https://builds.apache.org/job/PreCommit-HDFS-Build/11987/artifact/patchprocess/testrun_hadoop-common.txt |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/11987/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/11987/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/11987/console |
This message was automatically generated.
> Improve DataNode ReplicaMap memory footprint to save about 45%
> --------------------------------------------------------------
>
> Key: HDFS-8859
> URL: https://issues.apache.org/jira/browse/HDFS-8859
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: datanode
> Reporter: Yi Liu
> Assignee: Yi Liu
> Priority: Critical
> Attachments: HDFS-8859.001.patch, HDFS-8859.002.patch,
> HDFS-8859.003.patch, HDFS-8859.004.patch
>
>
> By using the following approach we can save about *45%* of the memory footprint for each block replica in DataNode memory (this JIRA only covers the *ReplicaMap* in the DataNode). The details are:
> In ReplicaMap,
> {code}
> private final Map<String, Map<Long, ReplicaInfo>> map =
>     new HashMap<String, Map<Long, ReplicaInfo>>();
> {code}
> Currently we use a HashMap {{Map<Long, ReplicaInfo>}} to store the replicas in memory. The key is the block id of the replica, which is already included in {{ReplicaInfo}}, so that memory can be saved. In addition, each HashMap Entry carries an object overhead. We can implement a lightweight set similar to {{LightWeightGSet}}, but without a fixed size ({{LightWeightGSet}} uses a fixed-size entries array, usually a large one; {{BlocksMap}} is an example, and the fixed size avoids full GC since the array never needs resizing), and we should still be able to look up an element by its key.
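The idea above can be sketched as a small intrusive hash set. This is a minimal illustration under stated assumptions, not the actual HDFS-8859 patch: the {{LightSet}} and {{Element}} names are made up, and in real code the key and next reference would live on {{ReplicaInfo}} itself. Because each element embeds its own key and chain pointer, there is no per-entry wrapper object and no boxed {{Long}} key, which is exactly the saving described, while the bucket array can still grow (unlike {{LightWeightGSet}}):

```java
/**
 * A minimal sketch of a lightweight, resizable intrusive hash set
 * keyed by block id. Hypothetical names; not the HDFS-8859 patch.
 */
public class LightSet {

  /** Elements embed the key and the chain pointer, like ReplicaInfo would. */
  public static class Element {
    final long blockId;  // the key, already part of the element
    Element next;        // intrusive chain reference (the +4 bytes)

    Element(long blockId) {
      this.blockId = blockId;
    }
  }

  private Element[] buckets = new Element[16]; // capacity is a power of two
  private int size;

  private static int index(long key, int cap) {
    int h = (int) (key ^ (key >>> 32));
    return h & (cap - 1);
  }

  /** Inserts e; assumes the caller never inserts duplicate block ids. */
  public void put(Element e) {
    if (size >= buckets.length * 3 / 4) {
      resize(buckets.length * 2); // unlike LightWeightGSet, we can grow
    }
    int i = index(e.blockId, buckets.length);
    e.next = buckets[i];
    buckets[i] = e;
    size++;
  }

  /** Looks up an element by its key, as required above. */
  public Element get(long blockId) {
    for (Element e = buckets[index(blockId, buckets.length)];
         e != null; e = e.next) {
      if (e.blockId == blockId) {
        return e;
      }
    }
    return null;
  }

  public int size() {
    return size;
  }

  /** Rehashes all chained elements into a larger bucket array. */
  private void resize(int newCap) {
    Element[] old = buckets;
    buckets = new Element[newCap];
    for (Element head : old) {
      while (head != null) {
        Element next = head.next;
        int i = index(head.blockId, newCap);
        head.next = buckets[i];
        buckets[i] = head;
        head = next;
      }
    }
  }
}
```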
> Following is a comparison of the memory footprint if we implement a lightweight set as described.
> We can save:
> {noformat}
> SIZE (bytes)   ITEM
> 20             the key: a boxed Long (12 bytes object overhead + 8 bytes long)
> 12             HashMap Entry object overhead
> 4              reference to the key in the Entry
> 4              reference to the value in the Entry
> 4              hash in the Entry
> {noformat}
> Total: -44 bytes
> We need to add:
> {noformat}
> SIZE (bytes)   ITEM
> 4              a reference to the next element in ReplicaInfo
> {noformat}
> Total: +4 bytes
> So in total we can save 40 bytes for each block replica.
> Currently one finalized replica needs around 46 bytes (note: we ignore memory alignment here).
> We can therefore save 1 - (4 + 46) / (44 + 46) = *45%* of the memory for each block replica in the DataNode.
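The arithmetic above can be checked mechanically. This is a back-of-envelope sketch: the byte counts are copied from the tables above (they assume 4-byte references), not measured:

```java
/** Back-of-envelope check of the per-replica savings claimed above. */
public class SavingEstimate {
  public static void main(String[] args) {
    // Bytes removed per replica: boxed Long key (20) + HashMap.Entry
    // object overhead (12) + key ref (4) + value ref (4) + hash (4).
    int removed = 20 + 12 + 4 + 4 + 4;   // 44 bytes
    int added = 4;                        // next reference in ReplicaInfo
    int finalizedReplica = 46;            // current per-replica cost

    double saving = 1.0 - (double) (added + finalizedReplica)
                        / (removed + finalizedReplica);
    // Net 40 bytes per replica; saving is 4/9 ~ 44.4%, quoted as ~45%.
    System.out.printf("net saving: %d bytes, about %.1f%%%n",
        removed - added, saving * 100);
  }
}
```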
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)