[ https://issues.apache.org/jira/browse/HADOOP-10195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13860103#comment-13860103 ]

Steve Loughran commented on HADOOP-10195:
-----------------------------------------

Looks reasonable. Have you tested this against throttled endpoints like 
Rackspace UK? The many-small-file operations used to hit problems at delete 
time, and we may want to increase the test timeouts there.

> swiftfs object list stops at 10000 objects
> ------------------------------------------
>
>                 Key: HADOOP-10195
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10195
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 2.3.0
>            Reporter: David Dobbins
>            Assignee: David Dobbins
>         Attachments: hadoop-10195.patch, hadoop-10195.patch
>
>
> Listing objects in a Swift container is limited to 10000 objects per 
> request. swiftfs makes only one request and is therefore limited to the 
> first 10000 objects in the container, ignoring any remaining objects.
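The fix the description calls for is ordinary pagination: re-issue the container list request with Swift's `marker` query parameter set to the last object name returned, until a short (or empty) page comes back. A minimal sketch of that loop follows; the `listPage` callback and the class name are hypothetical stand-ins for a single GET on the container, not the actual patch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;

// Sketch of paginated Swift container listing. Swift caps each list
// request at 10000 entries; to enumerate everything, the client must
// repeat the request with "marker" set to the last name it received,
// stopping when a page comes back shorter than the requested limit.
public class SwiftListSketch {
    // listPage stands in for one GET on the container with
    // ?marker=<marker>&limit=<pageSize>; a null marker means "from the start".
    public static List<String> listAll(
            BiFunction<String, Integer, List<String>> listPage, int pageSize) {
        List<String> all = new ArrayList<>();
        String marker = null;
        while (true) {
            List<String> page = listPage.apply(marker, pageSize);
            all.addAll(page);
            if (page.size() < pageSize) {
                break; // short page: nothing left after this marker
            }
            marker = page.get(page.size() - 1);
        }
        return all;
    }
}
```

With a container of 25 objects and a page size of 10, the loop issues three requests (10, 10, and 5 entries) and returns all 25 names in order.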



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
