[ https://issues.apache.org/jira/browse/HDFS-8865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16188666#comment-16188666 ]

Yongjun Zhang commented on HDFS-8865:
-------------------------------------

Thanks [~xiaochen] for working on this.

+1 on the new revs for 2.7.

Question on 2.6: it's good to add the test to the new 2.6 patch, but there is 
only one new test for this jira in 2.7 and beyond. While including the 
additional tests helps test coverage, shouldn't the other tests come in when 
the corresponding jiras (HDFS-10843 etc.) are backported?

Minor: when uploading new patches for the same branch in the future, I suggest 
including a version string.


> Improve quota initialization performance
> ----------------------------------------
>
>                 Key: HDFS-8865
>                 URL: https://issues.apache.org/jira/browse/HDFS-8865
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Kihwal Lee
>            Assignee: Kihwal Lee
>             Fix For: 2.8.0, 3.0.0-alpha1
>
>         Attachments: HDFS-8865_branch-2.6.patch, HDFS-8865.branch-2.6.patch, 
> HDFS-8865_branch-2.7.patch, HDFS-8865.branch-2.7.patch, HDFS-8865.patch, 
> HDFS-8865.v2.checkstyle.patch, HDFS-8865.v2.patch, HDFS-8865.v3.patch
>
>
> After replaying edits, the whole file system tree is recursively scanned in 
> order to initialize the quota. For a big namespace, this can take a very long 
> time.  Since this is done during namenode failover, it also affects failover 
> latency.
> By using the Fork-Join framework, I was able to greatly reduce the 
> initialization time.  The following are the test results using the fsimage 
> from one of the big namenodes we have.
> || threads || seconds||
> | 1 (existing) | 55|
> | 1 (fork-join) | 68 |
> | 4 | 16 |
> | 8 | 8 |
> | 12 | 6 |
> | 16 | 5 |
> | 20 | 4 |



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
