[ https://issues.apache.org/jira/browse/HADOOP-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-16458.
-------------------------------------
    Fix Version/s: 3.3.0
       Resolution: Fixed

Resolved in trunk. As noted in the commit message:

Includes:
- S3A glob scans no longer try to resolve symlinks.
- Stack traces no longer get lost in getFileStatuses() when exceptions are wrapped.
- Debug-level logging of what is going on in the Globber (see the log4j line after this list).
- A test of LocatedFileStatus in S3A, though I've got some better ideas there (i.e. make it a scale test).
- Contains HADOOP-13373: Add S3A implementation of FSMainOperationsBaseTest.
- ITestRestrictedReadAccess, which tests incomplete read access to files.
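
On the debug logging point: assuming the logger follows the class name
(org.apache.hadoop.fs.Globber), the new output can be switched on in
log4j.properties with a line like:

{code}
# Assumption: the logger name matches the Globber class in hadoop-common.
log4j.logger.org.apache.hadoop.fs.Globber=DEBUG
{code}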

This adds a builder API for constructing globbers, which other stores can use
so that they, too, can skip symlink resolution when it isn't needed.
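
As a rough sketch of what using that builder could look like from a store's
point of view (the names below, createGlobber / withPathPattern /
withResolveSymlinks / build / glob, are illustrative assumptions rather than
the committed signatures, and Globber itself is internal to
org.apache.hadoop.fs):

{code}
// Hypothetical usage sketch only; method names are assumptions for
// illustration, not the committed API.
Path pattern = new Path("s3a://example-bucket/logs/2019/*/*.gz"); // placeholder path
FileStatus[] matches = Globber.createGlobber(fs)   // fs: an S3AFileSystem instance
    .withPathPattern(pattern)
    .withResolveSymlinks(false)   // S3A has no symlinks, so nothing to resolve
    .build()
    .glob();
{code}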

> LocatedFileStatusFetcher scans failing intermittently against S3 store
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-16458
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16458
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.3.0
>         Environment: S3 + S3Guard
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>             Fix For: 3.3.0
>
>
> Intermittent failure of LocatedFileStatusFetcher.getFileStatuses(), which 
> uses globStatus to find files.
> I'd say "turn S3Guard on", except it appears to be on already, and the 
> dataset being read is over 1h old. That makes it harder than I'd like to 
> blame S3 for what sounds like an inconsistency.
> We're hampered by the number of debug-level statements in the globber code 
> being approximately none; there's no debugging to turn on. All we know is 
> that globFiles returns null without any explanation.
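
For context, the scans boil down to the call pattern below (bucket and path
are placeholders); the null return from globStatus, with nothing logged, is
what made this so hard to diagnose:

{code}
// Placeholder bucket/path; the real calls come from LocatedFileStatusFetcher.
Configuration conf = new Configuration();
Path pattern = new Path("s3a://example-bucket/data/year=2019/*/*.parquet");
FileSystem fs = FileSystem.get(pattern.toUri(), conf);
FileStatus[] matches = fs.globStatus(pattern);   // intermittently came back null
if (matches == null) {
  // This is the failure mode described above: null, no stack trace,
  // and (before this fix) no debug logging in the Globber to consult.
}
{code}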


