[ https://issues.apache.org/jira/browse/HADOOP-15489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16484119#comment-16484119 ]

Steve Loughran commented on HADOOP-15489:
-----------------------------------------

No benchmark data here; just realised the issue also arises when doing things 
like treewalks of a filesystem not yet imported into S3Guard. Although the S3 
LIST calls return the data of the descendants, the DDB table isn't updated, so 
those treewalks underperform.
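To make the cost concrete, here is a toy model of the treewalk case, not Hadoop code: the names (s3, s3guard, listStatus, getFileStatus) are hypothetical stand-ins, and remote calls are just counted. The LIST already returns each child's status, but because the store is never updated, every later per-file lookup is another remote call.

```java
import java.util.*;

// Toy model of the treewalk cost: one LIST plus one remote lookup
// per file, even though the LIST response already held that data.
class TreewalkCostModel {
    static Map<String, String> s3 = new HashMap<>();       // path -> status
    static Map<String, String> s3guard = new HashMap<>();  // metadata store (never filled)
    static int remoteCalls = 0;

    // Simulates an S3 LIST: returns all child paths in one remote call.
    static List<String> listStatus(String dir) {
        remoteCalls++;
        List<String> children = new ArrayList<>();
        for (String path : s3.keySet()) {
            if (path.startsWith(dir + "/")) children.add(path);
        }
        return children;
    }

    // Simulates getFileStatus: hits S3 unless the store has the entry.
    static String getFileStatus(String path) {
        if (s3guard.containsKey(path)) return s3guard.get(path);
        remoteCalls++;  // extra remote lookup per file
        return s3.get(path);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) s3.put("dir/file" + i, "status");
        for (String child : listStatus("dir")) {
            getFileStatus(child);  // what a per-file open() would repeat
        }
        // 1 LIST + 100 lookups = 101 remote calls
        System.out.println(remoteCalls);
    }
}
```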

> S3Guard to self update on directory listings of S3
> --------------------------------------------------
>
>                 Key: HADOOP-15489
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15489
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.1.0
>         Environment: s3guard
>            Reporter: Steve Loughran
>            Priority: Major
>
> S3Guard updates its table on a getFileStatus call, but not on a directory 
> listing.
> While this makes directory listings faster (no need to push out an update), 
> it slows down subsequent queries of the files, such as a sequence of:
> {code}
> RemoteIterator<LocatedFileStatus> statuses = s3a.listFiles(dir, false);
> while (statuses.hasNext()) {
>   LocatedFileStatus status = statuses.next();
>   if (status.isFile()) {
>     try (FSDataInputStream is = s3a.open(status.getPath())) {
>       // ... do something
>     }
>   }
> }
> {code}
> This is because open() performs its own getFileStatus check, even though the 
> listing has just retrieved that information.
> Updating the DDB tables after a listing would give those reads a speedup, 
> albeit at the expense of initiating a (bulk) update in the list call. Of 
> course, we could consider making that async, though that design (essentially 
> a write-buffer) would require the buffer to be checked in the reads too. 
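The listing-time update the description proposes can be sketched with the same kind of toy model, again not Hadoop code; the names are hypothetical and the "bulk update" is just map puts. The listing pushes each child's status into the metadata store, so subsequent per-file status checks are served locally. An async variant would buffer those writes, but then reads would have to consult the buffer too, as noted above.

```java
import java.util.*;

// Toy sketch of "self update on directory listings": the LIST populates
// the metadata store, so later getFileStatus calls need no remote I/O.
class ListingUpdateModel {
    static Map<String, String> s3 = new HashMap<>();            // path -> status
    static Map<String, String> metadataStore = new HashMap<>(); // updated on list
    static int remoteCalls = 0;

    // Simulates an S3 LIST that also performs the (bulk) store update.
    static List<String> listStatus(String dir) {
        remoteCalls++;  // the one LIST call
        List<String> children = new ArrayList<>();
        for (Map.Entry<String, String> e : s3.entrySet()) {
            if (e.getKey().startsWith(dir + "/")) {
                children.add(e.getKey());
                metadataStore.put(e.getKey(), e.getValue());  // self update
            }
        }
        return children;
    }

    // Simulates getFileStatus: served from the store when present.
    static String getFileStatus(String path) {
        if (metadataStore.containsKey(path)) return metadataStore.get(path);
        remoteCalls++;
        return s3.get(path);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) s3.put("dir/file" + i, "status");
        for (String child : listStatus("dir")) {
            getFileStatus(child);  // all hits; no extra remote calls
        }
        // only the LIST itself went remote
        System.out.println(remoteCalls);
    }
}
```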



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
