[
https://issues.apache.org/jira/browse/HDFS-15196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17070392#comment-17070392
]
Íñigo Goiri commented on HDFS-15196:
------------------------------------
I was referring to:
{code}
// Append the router mount point only in either of two cases:
// 1) the current mount point is between startAfter and the cutoff lastName.
// 2) there are no remaining entries from the subclusters and this mount
//    point is bigger than all files from the subclusters.
// This makes sure that the following getListing call will use the
// correct startAfter, which is lastName.
if ((child.compareTo(DFSUtil.bytes2String(startAfter)) > 0 &&
child.compareTo(lastName) <= 0) ||
(remainingEntries == 0 && child.compareTo(lastName) > 0)) {
// This may overwrite existing listing entries with the mount point
// TODO don't add if already there?
nnListing.put(child, dirStatus);
}
{code}
We could do something like:
{code}
/**
 * Check if we should append the mount point at the end. This should be done
 * in either of two cases:
 * 1) the current mount point is between startAfter and the cutoff lastName.
 * 2) there are no remaining entries from the subclusters and this mount
 *    point is bigger than all files from the subclusters.
 * This makes sure that the following getListing call will use the correct
 * startAfter, which is lastName.
 * @param child Mount point name to evaluate.
 * @param lastName Name of the last entry in the current batch.
 * @param startAfter Name after which this batch of the listing started.
 * @param remainingEntries Number of entries still pending in the subclusters.
 * @return True if the mount point should be appended.
 */
private static boolean shouldAppendMountPoint(
String child, String lastName, byte[] startAfter,
int remainingEntries) {
if (child.compareTo(DFSUtil.bytes2String(startAfter)) > 0 &&
child.compareTo(lastName) <= 0) {
return true;
}
if (remainingEntries == 0 && child.compareTo(lastName) > 0) {
return true;
}
return false;
}
...
if (shouldAppendMountPoint(child, lastName, startAfter, remainingEntries)) {
// This may overwrite existing listing entries with the mount point
// TODO don't add if already there?
nnListing.put(child, dirStatus);
}
{code}
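To make the two cases concrete, here is a minimal, self-contained sketch of the proposed helper with a tiny driver. It uses plain Strings in place of the byte[] startAfter and DFSUtil.bytes2String (an assumption for illustration only; this is not the actual Router code):

```java
// Standalone sketch of the proposed shouldAppendMountPoint logic.
// Uses String for startAfter instead of byte[] (illustration assumption).
public class MountPointAppendSketch {

  // Returns true if the mount point 'child' should be appended to the
  // current listing batch.
  public static boolean shouldAppendMountPoint(
      String child, String lastName, String startAfter,
      int remainingEntries) {
    // Case 1: the mount point falls between startAfter (exclusive) and
    // lastName (inclusive), so it belongs in this batch.
    if (child.compareTo(startAfter) > 0 && child.compareTo(lastName) <= 0) {
      return true;
    }
    // Case 2: no entries remain in any subcluster and the mount point sorts
    // after everything returned so far, so it is safe to append it now.
    if (remainingEntries == 0 && child.compareTo(lastName) > 0) {
      return true;
    }
    return false;
  }

  public static void main(String[] args) {
    // Mount point inside the current batch window: append.
    System.out.println(shouldAppendMountPoint("m", "z", "a", 5));  // true
    // Mount point beyond lastName while batches remain: defer to a later
    // batch so the next startAfter stays correct.
    System.out.println(shouldAppendMountPoint("zz", "z", "a", 5)); // false
    // Mount point beyond lastName but nothing remains: append now.
    System.out.println(shouldAppendMountPoint("zz", "z", "a", 0)); // true
  }
}
```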
> RBF: RouterRpcServer getListing cannot list large dirs correctly
> ----------------------------------------------------------------
>
> Key: HDFS-15196
> URL: https://issues.apache.org/jira/browse/HDFS-15196
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Fengnan Li
> Assignee: Fengnan Li
> Priority: Critical
> Attachments: HDFS-15196.001.patch, HDFS-15196.002.patch,
> HDFS-15196.003.patch, HDFS-15196.003.patch, HDFS-15196.004.patch,
> HDFS-15196.005.patch, HDFS-15196.006.patch, HDFS-15196.007.patch,
> HDFS-15196.008.patch, HDFS-15196.009.patch, HDFS-15196.010.patch,
> HDFS-15196.011.patch, HDFS-15196.012.patch
>
>
> In RouterRpcServer, the getListing call is handled in two parts:
> # Union all partial listings from the destination ns + paths
> # Append the mount points for the dir being listed
> In the case of a large dir, one bigger than DFSConfigKeys.DFS_LIST_LIMIT
> (default value 1k), batch listing is used and startAfter defines the
> boundary of each batch. However, step 2 here adds existing mount points,
> which messes up the boundary of the batch, making the next batch's
> startAfter wrong.
> The initial fix was to append the mount points only when no further batch
> queries are necessary, but this breaks the order of the returned entries.
> Therefore, more complex logic is added to make sure the order is kept. At
> the same time, the remainingEntries variable inside DirectoryListing is
> updated to include the remaining mount points.