Aaron Fabbri commented on HADOOP-15469:

And even before this change, files could arrive in the job output directory during the 
commit process, so the window is just larger, right?  My interpretation: the 
job driver / app master is enforcing "at most once commit" anyway, so 
this is more of a sanity check.  I think the docs already spell out that 
conflict resolution happens in job setup.


I will apply the patch and run tests while I'm in meetings today, but I'm OK 
with you committing this now, given that it is a fairly small change.  Will 
shout if I see any issues.

> S3A directory committer commit job fails if _temporary directory created 
> under dest
> -----------------------------------------------------------------------------------
>                 Key: HADOOP-15469
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15469
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.1.0
>         Environment: spark test runs
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>         Attachments: HADOOP-15469-001.patch
> The directory staging committer fails in commit job if any temporary 
> files/dirs have been created. Spark work can create such a dir for placement 
> of absolute files.
> This is because commitJob() checks whether the dest dir exists at all, rather 
> than whether it contains non-hidden files.
> As the comment says, "its kind of superfluous". More specifically, it means 
> jobs which would commit with the classic committer & overwrite=false will fail.
> Proposed fix: remove the check.
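The distinction the report draws, "dest dir exists" versus "dest dir contains non-hidden files", can be sketched as follows. This is a hypothetical illustration (`has_visible_entries` is not the committer's actual code), using Hadoop's convention that names starting with `_` or `.` (e.g. `_temporary`, `_SUCCESS`) are hidden:

```python
def has_visible_entries(names):
    """Return True only if some entry would count as real job output.

    Hadoop convention: names starting with '_' or '.' are hidden,
    so a lone _temporary dir should not make commitJob() fail.
    """
    return any(not n.startswith(("_", ".")) for n in names)

# A _temporary dir alone is not a conflict:
print(has_visible_entries(["_temporary"]))            # False
# Actual output files are:
print(has_visible_entries(["_temporary", "part-0"]))  # True
```

A check of this shape would tolerate the `_temporary` dir Spark creates under the destination, whereas a bare existence check fails the job regardless of the dir's contents.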

This message was sent by Atlassian JIRA
