[
https://issues.apache.org/jira/browse/HDFS-9053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14939361#comment-14939361
]
Hadoop QA commented on HDFS-9053:
---------------------------------
\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 19m 49s | Pre-patch trunk compilation is
healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any
@author tags. |
| {color:green}+1{color} | tests included | 0m 0s | The patch appears to
include 8 new or modified test files. |
| {color:green}+1{color} | javac | 8m 5s | There were no new javac warning
messages. |
| {color:green}+1{color} | javadoc | 10m 16s | There were no new javadoc
warning messages. |
| {color:red}-1{color} | release audit | 0m 16s | The applied patch generated
1 release audit warnings. |
| {color:green}+1{color} | checkstyle | 2m 5s | There were no new checkstyle
issues. |
| {color:green}+1{color} | whitespace | 0m 10s | The patch has no lines that
end in whitespace. |
| {color:green}+1{color} | install | 1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 34s | The patch built with
eclipse:eclipse. |
| {color:green}+1{color} | findbugs | 4m 23s | The patch does not introduce
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests | 7m 46s | Tests failed in
hadoop-common. |
| {color:red}-1{color} | hdfs tests | 200m 8s | Tests failed in hadoop-hdfs. |
| | | 255m 18s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.metrics2.impl.TestGangliaMetrics |
| | hadoop.hdfs.TestRecoverStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL |
http://issues.apache.org/jira/secure/attachment/12764524/HDFS-9053.004.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 5db371f |
| Release Audit |
https://builds.apache.org/job/PreCommit-HDFS-Build/12758/artifact/patchprocess/patchReleaseAuditProblems.txt
|
| hadoop-common test log |
https://builds.apache.org/job/PreCommit-HDFS-Build/12758/artifact/patchprocess/testrun_hadoop-common.txt
|
| hadoop-hdfs test log |
https://builds.apache.org/job/PreCommit-HDFS-Build/12758/artifact/patchprocess/testrun_hadoop-hdfs.txt
|
| Test Results |
https://builds.apache.org/job/PreCommit-HDFS-Build/12758/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output |
https://builds.apache.org/job/PreCommit-HDFS-Build/12758/console |
This message was automatically generated.
> Support large directories efficiently using B-Tree
> --------------------------------------------------
>
> Key: HDFS-9053
> URL: https://issues.apache.org/jira/browse/HDFS-9053
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: namenode
> Reporter: Yi Liu
> Assignee: Yi Liu
> Priority: Critical
> Attachments: HDFS-9053 (BTree with simple benchmark).patch, HDFS-9053
> (BTree).patch, HDFS-9053.001.patch, HDFS-9053.002.patch, HDFS-9053.003.patch,
> HDFS-9053.004.patch
>
>
> This is a long-standing issue that we have tried to improve in the past.
> Currently we use an ArrayList for the children under a directory, kept in
> sorted order. Search is O(log n), but insertion/deletion is O\(n) because it
> triggers re-allocations and copies of the backing array, so these operations
> are expensive for large directories. If a directory grows to 1M children, the
> ArrayList resizes to > 1M capacity and therefore needs > 1M * 8 bytes = 8 MB
> of contiguous heap memory (a reference is 8 bytes on a 64-bit system/JVM);
> this easily causes full GC in an HDFS cluster where NameNode heap memory is
> already heavily used. To recap, the 3 main issues are:
> # Insertion/deletion in large directories is expensive because of
> re-allocations and copies of big arrays.
> # Dynamically allocating several MB of long-lived contiguous heap memory can
> easily cause full GC problems.
> # Even if most children are later removed, the directory INode still occupies
> the same amount of heap memory, since the ArrayList never shrinks.
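The O\(n) insertion cost described above can be illustrated with a minimal sketch (the class and method names below are illustrative, not HDFS code): binary search finds the slot quickly, but the `add` at that index shifts every trailing reference in one large contiguous array.

```java
import java.util.ArrayList;
import java.util.Collections;

// Illustrative sketch (not HDFS code): inserting into a sorted ArrayList.
// The binary search is O(log n), but the insert at that index shifts all
// trailing references, making each insertion O(n) and periodically forcing
// re-allocation of one large contiguous backing array.
public class SortedArrayListDemo {
    static void insertSorted(ArrayList<String> children, String name) {
        int idx = Collections.binarySearch(children, name);
        if (idx < 0) {
            idx = -idx - 1;      // binarySearch returns -(insertionPoint) - 1
        }
        children.add(idx, name); // O(n): shifts every element after idx
    }

    public static void main(String[] args) {
        ArrayList<String> children = new ArrayList<>();
        insertSorted(children, "b");
        insertSorted(children, "a");
        insertSorted(children, "c");
        System.out.println(children); // prints [a, b, c]
    }
}
```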
> This JIRA is similar to HDFS-7174 created by [~kihwal], but uses a B-Tree to
> solve the problem, as suggested by [~shv].
> So the target of this JIRA is to implement a low-memory-footprint B-Tree and
> use it to replace the ArrayList.
> If the number of elements is small (less than the maximum degree of a B-Tree
> node), the B-Tree has only a root node containing an array of the elements.
> If it grows large enough, the node splits automatically, and when elements
> are removed, B-Tree nodes can merge automatically (see more:
> https://en.wikipedia.org/wiki/B-tree). This solves the above 3 issues.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)