[
https://issues.apache.org/jira/browse/HDFS-9053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14740814#comment-14740814
]
Yi Liu commented on HDFS-9053:
------------------------------
*1. Performance*
I chose 1024 as the B-Tree degree. For the ArrayList in the tests, the elements
are kept in order, so every insertion/deletion/get first finds the index with a
binary search and then performs the operation; this is the same behavior as in
INodeDirectory.
From the performance data below, the B-Tree is much better for insertion and
deletion and almost the same for get. Iteration is a bit slower (the elements
are not in contiguous memory), but iteration is still fast enough that we can
ignore the difference.
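Before the numbers, here is a minimal sketch of that ArrayList baseline (the class name, key type, and method names are illustrative only, not the actual benchmark code):
{code:java}
import java.util.ArrayList;
import java.util.Collections;

/**
 * Illustrative sketch of the ArrayList baseline used in the tests: keep the
 * elements sorted and locate the index with binary search before each
 * operation, mirroring how INodeDirectory treats its children list.
 */
public class SortedArrayListBaseline {
  private final ArrayList<Long> list = new ArrayList<>();

  public void insert(Long key) {
    int i = Collections.binarySearch(list, key);  // O(log n) position lookup
    if (i < 0) {
      list.add(-i - 1, key);                      // O(n) shift/copy on insert
    }
  }

  public void delete(Long key) {
    int i = Collections.binarySearch(list, key);
    if (i >= 0) {
      list.remove(i);                             // O(n) shift/copy on delete
    }
  }

  public Long get(Long key) {
    int i = Collections.binarySearch(list, key);
    return i >= 0 ? list.get(i) : null;           // O(log n), no copying
  }
}
{code}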
-- *Insert (random)* (in milliseconds)
||Data Size||B-Tree||ArrayList||
|1K|4|4|
|64K|71|613|
|512K|494|38022|
|1M|1365|151855|
|2M|3584|688158|
-- *Delete (random)* (in milliseconds)
||Data Size||B-Tree||ArrayList||
|1K|4|4|
|64K|75|643|
|512K|658|40463|
|1M|1427|161492|
|2M|3724|718581|
-- *Get (random)* (in milliseconds)
||Data Size||B-Tree||ArrayList||
|1K|1|1|
|64K|16|17|
|512K|279|295|
|1M|759|755|
|2M|2055|2047|
-- *Iteration* (in milliseconds)
||Data Size||B-Tree||ArrayList||
|1K|n/a|n/a|
|64K|9|3|
|512K|30|16|
|1M|61|23|
|2M|143|40|
*2. Memory*
As stated in the description, the B-Tree is very good from the memory aspect
because it solves issues #1 and #3.
The B-Tree does add a small per-node object overhead and an additional array
holding references to sub-trees. For large directories this overhead is
relatively tiny, and for small directories the absolute overhead is small
anyway. Furthermore, this is for directories, so a little overhead is
acceptable; besides, I have already made a best effort to minimize the overhead
in the implementation.
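For illustration, a hypothetical node layout (the field names are an assumption for illustration, not the actual patch) shows where that overhead lives:
{code:java}
/**
 * Hypothetical B-Tree node layout (not the actual HDFS-9053 patch). With
 * degree 1024, a directory with fewer than ~1K children is a single root node
 * holding one small elements array; a 1M-child directory becomes roughly
 * 1K-2K such nodes whose arrays are each only a few KB, instead of one
 * > 4 MB contiguous array that must be reallocated and copied on resize.
 */
class BTreeNode {
  Object[] elements;    // sorted elements in this node, bounded by the degree
  BTreeNode[] children; // extra array of sub-tree references; null for leaves
  int elementCount;     // number of slots actually used in the elements array
}
{code}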
> Support large directories efficiently using B-Tree
> --------------------------------------------------
>
> Key: HDFS-9053
> URL: https://issues.apache.org/jira/browse/HDFS-9053
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: namenode
> Reporter: Yi Liu
> Assignee: Yi Liu
> Priority: Critical
> Attachments: HDFS-9053 (BTree).patch
>
>
> This is a long-standing issue; we have tried to improve it in the past.
> Currently we use an ArrayList for the children under a directory, and the
> children are kept ordered in the list. Finding the position for
> insert/delete/search is O(log n), but insertion/deletion then causes
> re-allocations and copies of big arrays, so those operations are costly. For
> example, if a directory grows to 1M children, the ArrayList resizes to > 1M
> capacity and therefore needs > 1M * 4 bytes = 4 MB of contiguous heap memory;
> this easily causes full GC in an HDFS cluster where NameNode heap memory is
> already heavily used. To recap the 3 main issues:
> # Insertion/deletion operations in large directories are expensive because of
> re-allocations and copies of big arrays.
> # Dynamically allocating several MB of contiguous, long-lived heap memory can
> easily cause a full GC problem.
> # Even if most children are removed later, the directory INode still occupies
> the same amount of heap memory, since the ArrayList never shrinks.
> This JIRA is similar to HDFS-7174 created by [~kihwal], but uses a B-Tree, as
> suggested by [~shv], to solve the problem.
> So the target of this JIRA is to implement a low-memory-footprint B-Tree and
> use it to replace the ArrayList.
> If the number of elements is small (less than the maximum degree of a B-Tree
> node), the B-Tree has only one root node which contains an array for the
> elements. If the size grows large enough, the node splits automatically, and
> if elements are removed, B-Tree nodes can merge automatically (see more:
> https://en.wikipedia.org/wiki/B-tree). This solves the above 3 issues.
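As an illustration of the split step described above, here is a standalone sketch (the MAX_DEGREE constant and class name are assumptions, not the HDFS-9053 patch): when a node's sorted element array exceeds the maximum degree, it is split into two half-full arrays and the median element is promoted to the parent.
{code:java}
import java.util.Arrays;

/** Standalone sketch of a B-Tree node split; not the actual patch. */
public class SplitSketch {
  static final int MAX_DEGREE = 1024;  // assumed maximum elements per node

  /** Returns {leftHalf, median, rightHalf} for a full, sorted element array. */
  static Object[] split(Object[] full) {
    int mid = full.length / 2;
    Object[] left = Arrays.copyOfRange(full, 0, mid);       // stays in this node
    Object median = full[mid];                               // promoted to parent
    Object[] right = Arrays.copyOfRange(full, mid + 1, full.length); // new sibling
    return new Object[] { left, median, right };
  }

  public static void main(String[] args) {
    Integer[] full = new Integer[MAX_DEGREE];
    for (int i = 0; i < full.length; i++) {
      full[i] = i;
    }
    Object[] parts = split(full);
    System.out.println("left=" + ((Object[]) parts[0]).length
        + " median=" + parts[1]
        + " right=" + ((Object[]) parts[2]).length);
  }
}
{code}
Merging on removal is the inverse: when sibling nodes become under-filled, their element arrays and the separating parent element are concatenated back into a single node, so memory shrinks as children are deleted.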
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)