[ https://issues.apache.org/jira/browse/HDFS-8865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16497816#comment-16497816 ]

Gang Xie commented on HDFS-8865:
--------------------------------

We want to port this patch to our 2.4 release, but we have a small concern 
about memory pressure once the initialization is multi-threaded, especially when 
the namenode starts up and transitions to the active state immediately, i.e. 
right after the full block report. Do we have any memory pressure test results 
for this?

One idea to optimize this is not to hold on to the children list, since the 
namespace does not change during this update.
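
For context, a minimal sketch of the kind of fork-join traversal being discussed, assuming hypothetical Node/QuotaTask stand-ins rather than the actual HDFS inode classes. The children of each directory are walked in place instead of being copied into a separate list, which is the kind of saving suggested above:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.RecursiveTask;

class QuotaInitSketch {

    // Stand-in for an inode: a file carries a size, a directory carries children.
    static class Node {
        final long fileSize;        // 0 for directories
        final List<Node> children;  // null for plain files
        Node(long fileSize, List<Node> children) {
            this.fileSize = fileSize;
            this.children = children;
        }
    }

    // Returns the total space used under one directory subtree.
    static class QuotaTask extends RecursiveTask<Long> {
        private final Node dir;
        QuotaTask(Node dir) { this.dir = dir; }

        @Override
        protected Long compute() {
            long total = 0;
            List<QuotaTask> forked = new ArrayList<>();
            // Walk the live children directly; the namespace is read-only
            // during initialization, so no copy of the list is taken.
            for (Node child : dir.children) {
                if (child.children == null) {
                    total += child.fileSize;              // plain file
                } else {
                    QuotaTask sub = new QuotaTask(child); // subdirectory
                    sub.fork();                           // scan it in parallel
                    forked.add(sub);
                }
            }
            for (QuotaTask sub : forked) {
                total += sub.join();
            }
            return total;
        }
    }
}
{code}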

> Improve quota initialization performance
> ----------------------------------------
>
>                 Key: HDFS-8865
>                 URL: https://issues.apache.org/jira/browse/HDFS-8865
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Kihwal Lee
>            Assignee: Kihwal Lee
>            Priority: Major
>             Fix For: 2.8.0, 3.0.0-alpha1, 2.6.6, 2.7.5
>
>         Attachments: HDFS-8865.branch-2.6.01.patch, 
> HDFS-8865.branch-2.6.patch, HDFS-8865.branch-2.7.patch, HDFS-8865.patch, 
> HDFS-8865.v2.checkstyle.patch, HDFS-8865.v2.patch, HDFS-8865.v3.patch, 
> HDFS-8865_branch-2.6.patch, HDFS-8865_branch-2.7.patch
>
>
> After replaying edits, the whole file system tree is recursively scanned in 
> order to initialize the quota. For a big namespace, this can take a very long 
> time.  Since this is done during namenode failover, it also affects failover 
> latency.
> By using the Fork-Join framework, I was able to greatly reduce the 
> initialization time.  The following is the test result using the fsimage from 
> one of the big namenodes we have.
> || threads || seconds||
> | 1 (existing) | 55|
> | 1 (fork-join) | 68 |
> | 4 | 16 |
> | 8 | 8 |
> | 12 | 6 |
> | 16 | 5 |
> | 20 | 4 |
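
As a rough usage sketch, again with hypothetical names and reusing the QuotaTask stand-in from the sketch above (not the actual patch), the thread count in the table corresponds to the parallelism of the ForkJoinPool that drives the scan:

{code:java}
import java.util.Collections;
import java.util.concurrent.ForkJoinPool;

public class QuotaInitDemo {
    public static void main(String[] args) {
        // Hypothetical empty root directory, just to show the call shape.
        QuotaInitSketch.Node root =
                new QuotaInitSketch.Node(0, Collections.emptyList());

        // The pool's parallelism plays the role of the "threads" column above.
        ForkJoinPool pool = new ForkJoinPool(16);
        long total = pool.invoke(new QuotaInitSketch.QuotaTask(root));
        pool.shutdown();
        System.out.println("space used under root: " + total);
    }
}
{code}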


