[ 
https://issues.apache.org/jira/browse/HDFS-15196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17050523#comment-17050523
 ] 

Fengnan Li commented on HDFS-15196:
-----------------------------------

Agreed, the order should be preserved. Just to be clear, I list the cases below.

*Expected*: the listing is in lexicographical order of child names, across all 
namenode listings and mount points, even with batched listings.

*Before this patch*: the order was lexicographical over child names plus mount 
points, but some namenode listings were skipped or the listing fell into an 
infinite loop.

*After this patch*: before the last batch, the order is lexicographical. For 
the last batch, the mount points are added into the structure, which is a 
TreeMap indexed by child names, so the order is kept within that batch but not 
across batches. I think there are two ways this can be solved:
 # Clients call the router once and the whole batched listing is handled 
between the router and the namenodes, so with a global TreeMap the order is 
kept before sending results back to clients.
 # The client side is changed, e.g. DistributedFileSystem.java uses a TreeMap 
instead of a list to maintain the order.
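Option 1 could be sketched roughly as below. This is a hypothetical standalone helper, not the actual RouterRpcServer code; the class and method names are made up for illustration. The idea is simply that collecting every partial listing and every mount point into one TreeMap keyed by child name yields the global lexicographical order on iteration:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.TreeMap;

// Hypothetical sketch of option 1: merge per-namespace partial listings
// and mount point names into a single TreeMap so the result comes out
// in global lexicographical order of child names.
public class MergedListing {

    static List<String> mergeListing(List<List<String>> nsListings,
                                     List<String> mountPoints) {
        // TreeMap keeps its keys sorted, so iteration order is
        // lexicographical regardless of insertion order.
        TreeMap<String, String> merged = new TreeMap<>();
        for (List<String> listing : nsListings) {
            for (String child : listing) {
                merged.put(child, child);
            }
        }
        for (String mp : mountPoints) {
            // A mount point may coincide with a real child name;
            // keep the existing entry in that case.
            merged.putIfAbsent(mp, mp);
        }
        return new ArrayList<>(merged.keySet());
    }

    public static void main(String[] args) {
        List<String> out = mergeListing(
            Arrays.asList(Arrays.asList("a", "c"), Arrays.asList("b", "d")),
            Arrays.asList("c", "mnt"));
        System.out.println(out); // [a, b, c, d, mnt]
    }
}
```

With this shape, the router can page through the merged map itself, and a batch boundary (startAfter) always refers to a position in the one globally sorted view.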

I think option 1 is the way to go, since we don't want to change the client logic.

What are your thoughts? [~elgoiri] [~ayushtkn]

 
{quote}We may want to also test the limit in the Router side.
{quote}
What is the limit to test? 

 

> RBF: RouterRpcServer getListing cannot list large dirs correctly
> ----------------------------------------------------------------
>
>                 Key: HDFS-15196
>                 URL: https://issues.apache.org/jira/browse/HDFS-15196
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Fengnan Li
>            Assignee: Fengnan Li
>            Priority: Critical
>         Attachments: HDFS-15196.001.patch, HDFS-15196.002.patch, 
> HDFS-15196.003.patch, HDFS-15196.003.patch, HDFS-15196.004.patch, 
> HDFS-15196.005.patch
>
>
> In RouterRpcServer, the getListing call is handled in two parts:
>  # Union all partial listings from the destination namespaces + paths
>  # Append the mount points under the dir being listed
> For a large dir with more entries than DFSConfigKeys.DFS_LIST_LIMIT 
> (default 1k), batched listing is used and startAfter defines the boundary 
> of each batch. However, step 2 adds the existing mount points, which 
> messes up the batch boundary and makes the next batch's startAfter wrong.
> The fix is to append the mount points only when no further batch query 
> is necessary.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
