[
https://issues.apache.org/jira/browse/HDFS-15196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063473#comment-17063473
]
Íñigo Goiri edited comment on HDFS-15196 at 3/20/20, 3:56 PM:
--------------------------------------------------------------
Not sure what's wrong with Yetus...
[^HDFS-15196.009.patch] LGTM.
A minor comment would be to do this in MockResolver:
{code}
// a simplified version of MountTableResolver implementation
for (String key : this.locations.keySet()) {
  if (key.startsWith(path)) {
    String child = key.substring(path.length());
    if (child.length() > 0) {
      // only take children so remove parent path and /
      mounts.add(key.substring(path.length() + 1));
    }
  }
}
if (mounts.isEmpty()) {
  mounts = null;
}
{code}
This preserves part of the original code.
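For reference, a standalone sketch of how that prefix matching behaves on a small sample map (the class name and the sample mount table below are only for illustration, not actual MockResolver members):
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

/** Standalone illustration of the prefix-based mount lookup suggested above. */
public class MountLookupExample {
  public static void main(String[] args) {
    // Hypothetical mount table: mount path -> destination (value unused here).
    TreeMap<String, String> locations = new TreeMap<>();
    locations.put("/user", "ns0 -> /user");
    locations.put("/user/alice", "ns0 -> /user/alice");
    locations.put("/user/bob", "ns1 -> /user/bob");
    locations.put("/tmp", "ns1 -> /tmp");

    String path = "/user";
    List<String> mounts = new ArrayList<>();
    for (String key : locations.keySet()) {
      if (key.startsWith(path)) {
        String child = key.substring(path.length());
        if (child.length() > 0) {
          // Only take children, so strip the parent path and the "/".
          mounts.add(key.substring(path.length() + 1));
        }
      }
    }
    if (mounts.isEmpty()) {
      mounts = null;
    }
    System.out.println(mounts); // prints [alice, bob]; "/user" itself and "/tmp" are skipped
  }
}
{code}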
[~ayushtkn], anything else on your side?
> RBF: RouterRpcServer getListing cannot list large dirs correctly
> ----------------------------------------------------------------
>
> Key: HDFS-15196
> URL: https://issues.apache.org/jira/browse/HDFS-15196
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Fengnan Li
> Assignee: Fengnan Li
> Priority: Critical
> Attachments: HDFS-15196.001.patch, HDFS-15196.002.patch,
> HDFS-15196.003.patch, HDFS-15196.003.patch, HDFS-15196.004.patch,
> HDFS-15196.005.patch, HDFS-15196.006.patch, HDFS-15196.007.patch,
> HDFS-15196.008.patch, HDFS-15196.009.patch
>
>
> In RouterRpcServer, the getListing function is handled in two parts:
> # Union all partial listings from destination ns + paths
> # Append mount points for the dir to be listed
> For a large dir with more entries than DFSConfigKeys.DFS_LIST_LIMIT (default
> value 1k), batched listing is used and startAfter defines the boundary of
> each batch. However, step 2 appends the existing mount points to every batch,
> which corrupts the batch boundary and makes the next batch's startAfter wrong.
> The fix is to append the mount points only when no further batched query is
> necessary.
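> A toy, self-contained sketch of that idea (the class name, limit, and batching
> helper below are illustrative only, not the actual RouterRpcServer code): the
> mount points are merged into the result only for the batch that has no
> remaining entries, so they never shift the startAfter of an intermediate batch.
> {code}
> import java.util.ArrayList;
> import java.util.List;
> import java.util.NavigableSet;
> import java.util.TreeSet;
>
> /** Toy simulation of batched getListing; all names here are illustrative. */
> public class BatchedListingSketch {
>
>   private static final int LIMIT = 3; // stand-in for DFS_LIST_LIMIT
>
>   public static void main(String[] args) {
>     // Entries of the directory on the destination namespace, sorted.
>     TreeSet<String> files = new TreeSet<>(List.of("a", "b", "c", "d", "e"));
>     // Mount points that live under the same directory on the Router.
>     List<String> mounts = List.of("m1", "m2");
>
>     String startAfter = "";
>     while (true) {
>       NavigableSet<String> rest = files.tailSet(startAfter, false);
>       List<String> batch = new ArrayList<>();
>       for (String f : rest) {
>         if (batch.size() == LIMIT) {
>           break;
>         }
>         batch.add(f);
>       }
>       int remaining = rest.size() - batch.size();
>
>       // The fix in a nutshell: only merge the mount points once nothing
>       // remains, so they cannot shift the startAfter of a middle batch.
>       if (remaining == 0) {
>         batch.addAll(mounts);
>       }
>       System.out.println("batch: " + batch + ", remaining: " + remaining);
>
>       if (remaining == 0) {
>         break;
>       }
>       // The last real entry of this batch drives the next query.
>       startAfter = batch.get(batch.size() - 1);
>     }
>   }
> }
> {code}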