[ https://issues.apache.org/jira/browse/HDFS-15196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063007#comment-17063007 ]

Fengnan Li commented on HDFS-15196:
-----------------------------------

[~elgoiri] Somehow there is no further comment from Hadoop QA about this build. 
[~csun] helped me trigger it: 
https://builds.apache.org/job/PreCommit-HDFS-Build/28985/console, with the 
output below:

{quote}
+1 overall

| Vote |        Subsystem |  Runtime   | Comment
============================================================================
|   0  |          reexec  |   0m 20s   | Docker mode activated. 
|      |                  |            | Prechecks 
|  +1  |         @author  |   0m  0s   | The patch does not contain any @author 
|      |                  |            | tags.
|  +1  |      test4tests  |   0m  0s   | The patch appears to include 2 new or 
|      |                  |            | modified test files.
|      |                  |            | trunk Compile Tests 
|  +1  |      mvninstall  |  19m 48s   | trunk passed 
|  +1  |         compile  |   0m 28s   | trunk passed 
|  +1  |      checkstyle  |   0m 20s   | trunk passed 
|  +1  |         mvnsite  |   0m 32s   | trunk passed 
|  +1  |    shadedclient  |  15m 22s   | branch has no errors when building and 
|      |                  |            | testing our client artifacts.
|  +1  |        findbugs  |   1m 10s   | trunk passed 
|  +1  |         javadoc  |   0m 30s   | trunk passed 
|      |                  |            | Patch Compile Tests 
|  +1  |      mvninstall  |   0m 27s   | the patch passed 
|  +1  |         compile  |   0m 23s   | the patch passed 
|  +1  |           javac  |   0m 23s   | the patch passed 
|  +1  |      checkstyle  |   0m 14s   | the patch passed 
|  +1  |         mvnsite  |   0m 27s   | the patch passed 
|  +1  |      whitespace  |   0m  0s   | The patch has no whitespace issues. 
|  +1  |    shadedclient  |  14m  4s   | patch has no errors when building and 
|      |                  |            | testing our client artifacts.
|  +1  |        findbugs  |   1m  7s   | the patch passed 
|  +1  |         javadoc  |   0m 26s   | the patch passed 
|      |                  |            | Other Tests 
|  +1  |            unit  |   7m  0s   | hadoop-hdfs-rbf in the patch passed. 
|  +1  |      asflicense  |   0m 25s   | The patch does not generate ASF 
|      |                  |            | License warnings.
|      |                  |  64m  6s   | 
{quote}

> RBF: RouterRpcServer getListing cannot list large dirs correctly
> ----------------------------------------------------------------
>
>                 Key: HDFS-15196
>                 URL: https://issues.apache.org/jira/browse/HDFS-15196
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Fengnan Li
>            Assignee: Fengnan Li
>            Priority: Critical
>         Attachments: HDFS-15196.001.patch, HDFS-15196.002.patch, 
> HDFS-15196.003.patch, HDFS-15196.003.patch, HDFS-15196.004.patch, 
> HDFS-15196.005.patch, HDFS-15196.006.patch, HDFS-15196.007.patch, 
> HDFS-15196.008.patch, HDFS-15196.009.patch
>
>
> In RouterRpcServer, the getListing function is handled in two parts:
>  # Union all partial listings from the destination ns + paths
>  # Append the mount points for the dir being listed
> For a large dir bigger than DFSConfigKeys.DFS_LIST_LIMIT (default 1000), 
> batch listing is used and startAfter defines the boundary of each batch. 
> However, step 2 appends the existing mount points, which corrupts the batch 
> boundary and makes the next batch's startAfter wrong.
> The fix is to append the mount points only when no further batch query is 
> necessary.
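The batching logic described above can be sketched as follows. This is a minimal, self-contained illustration with hypothetical names (ListingSketch, partialListing, getListingBatch), not the actual RouterRpcServer code; LIST_LIMIT stands in for DFSConfigKeys.DFS_LIST_LIMIT, shrunk to 3 so the batching is visible. The key point is that mount points are merged in only once the remote listing is exhausted, so the startAfter cursor for the next batch is never polluted by mount entries:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch of the fix: append mount points only on the final
// batch, so startAfter for the next batch stays a real remote entry.
public class ListingSketch {
    // Stand-in for DFSConfigKeys.DFS_LIST_LIMIT (default 1000); small for demo.
    static final int LIST_LIMIT = 3;

    // Pretend remote namespace listing: up to LIST_LIMIT entries > startAfter.
    static List<String> partialListing(List<String> remote, String startAfter) {
        List<String> batch = new ArrayList<>();
        for (String e : remote) {
            if (e.compareTo(startAfter) > 0 && batch.size() < LIST_LIMIT) {
                batch.add(e);
            }
        }
        return batch;
    }

    // One batch of the combined listing. Mount points are merged in only
    // when no further remote batch is needed (the described fix).
    static List<String> getListingBatch(List<String> remote,
                                        List<String> mounts,
                                        String startAfter) {
        List<String> batch = partialListing(remote, startAfter);
        boolean hasMore = batch.size() == LIST_LIMIT
            && !partialListing(remote, batch.get(batch.size() - 1)).isEmpty();
        if (!hasMore) {
            for (String m : mounts) {
                if (m.compareTo(startAfter) > 0 && !batch.contains(m)) {
                    batch.add(m);
                }
            }
            Collections.sort(batch);
        }
        return batch;
    }

    public static void main(String[] args) {
        List<String> remote = Arrays.asList("a", "b", "c", "d", "e");
        List<String> mounts = Arrays.asList("m1");
        List<String> all = new ArrayList<>();
        String cursor = "";
        List<String> batch;
        do {
            batch = getListingBatch(remote, mounts, cursor);
            all.addAll(batch);
            cursor = batch.isEmpty() ? cursor : batch.get(batch.size() - 1);
        } while (batch.size() == LIST_LIMIT);
        // Every remote entry appears exactly once; the mount point is merged
        // only after the last remote batch.
        System.out.println(all);
    }
}
```

If mount points were appended to every batch instead, the last element of an intermediate batch could be a mount entry, and using it as startAfter would skip or repeat remote entries on the next round; that is the boundary corruption the patch avoids.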



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
