[ 
https://issues.apache.org/jira/browse/HDDS-898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-898:
------------------------------
    Attachment: HDDS-898.003.patch

> Continue token should contain the previous dir in Ozone s3g object list
> -----------------------------------------------------------------------
>
>                 Key: HDDS-898
>                 URL: https://issues.apache.org/jira/browse/HDDS-898
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>            Reporter: Elek, Marton
>            Assignee: Elek, Marton
>            Priority: Major
>         Attachments: HDDS-898.001.patch, HDDS-898.002.patch, 
> HDDS-898.003.patch
>
>
> Let's imagine we have the following keys:
> test/dir1/file1
> test/dir2/file1
> test/dir2/file2
> test/dir3/file1
> With the object list endpoint (delimiter=/) we return a list where the 
> directory entries are also included (they are generated during the iteration; 
> see the sketch below):
> * test/dir1/ (directory/prefix entry)
> * test/dir2/ (directory/prefix entry)
> * test/dir3/ (directory/prefix entry)
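> A minimal sketch (my illustration, not the actual s3g code) of how such 
> directory/prefix entries can be generated on the fly while iterating the flat 
> key list with delimiter=/:
> {code:java}
> import java.util.ArrayList;
> import java.util.LinkedHashSet;
> import java.util.List;
> import java.util.Set;
>
> public class DelimiterListingSketch {
>
>   // Collapse keys under a common prefix into a single directory entry,
>   // emitted exactly once, while iterating the flat key list.
>   static List<String> list(List<String> flatKeys, String prefix,
>       String delimiter) {
>     Set<String> seenDirs = new LinkedHashSet<>();
>     List<String> result = new ArrayList<>();
>     for (String key : flatKeys) {
>       if (!key.startsWith(prefix)) {
>         continue;
>       }
>       String rest = key.substring(prefix.length());
>       int idx = rest.indexOf(delimiter);
>       if (idx >= 0) {
>         // The key lives "inside a directory": emit the prefix entry once.
>         String dir = prefix + rest.substring(0, idx + 1);
>         if (seenDirs.add(dir)) {
>           result.add(dir);
>         }
>       } else {
>         result.add(key);
>       }
>     }
>     return result;
>   }
> }
> {code}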
> Now limit the results to 2 entries per call:
> First call:
> test/dir1/file1 --> added to the results as dir1/ (first result)
> test/dir2/file1 --> added to the results as dir2/ (second result)
> The iteration can be continued from key test/dir2/file2.
> Second call, with the continue token, continues the iteration from 
> test/dir2/file2:
> test/dir2/file2 --> this would be added as dir2/ again (!!! duplicate here !!!), 
> as we have no information about whether it has already been added
> test/dir3/file1 --> added as dir3/
> Summary: we don't know whether the dynamically generated dir entry has already 
> been added in a previous call.
> Solution: we can add this information to the encoded continue token and decode 
> it at the start of the next iteration (see the sketch below).
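> A minimal sketch of such a token, assuming a simple Base64 encoding of the 
> key to resume from plus the last directory already returned (field and method 
> names are illustrative, not the ones used in the patch):
> {code:java}
> import java.nio.charset.StandardCharsets;
> import java.util.Base64;
>
> public class ContinueTokenSketch {
>
>   private final String lastKey;  // key to resume the iteration from
>   private final String lastDir;  // dir entry already returned on a previous page
>
>   ContinueTokenSketch(String lastKey, String lastDir) {
>     this.lastKey = lastKey;
>     this.lastDir = lastDir;
>   }
>
>   String encode() {
>     String raw = lastKey + "\n" + (lastDir == null ? "" : lastDir);
>     return Base64.getUrlEncoder()
>         .encodeToString(raw.getBytes(StandardCharsets.UTF_8));
>   }
>
>   static ContinueTokenSketch decode(String token) {
>     String raw = new String(Base64.getUrlDecoder().decode(token),
>         StandardCharsets.UTF_8);
>     int sep = raw.indexOf('\n');
>     String key = raw.substring(0, sep);
>     String dir = raw.substring(sep + 1);
>     return new ContinueTokenSketch(key, dir.isEmpty() ? null : dir);
>   }
> }
> {code}
> On the next call the lister would decode the token, resume the iteration from 
> lastKey, and skip re-adding a directory entry that equals lastDir.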


