[ https://issues.apache.org/jira/browse/HADOOP-15469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Steve Loughran updated HADOOP-15469:
------------------------------------
       Resolution: Fixed
    Fix Version/s: 3.1.1
           Status: Resolved  (was: Patch Available)

thanks, committed

> S3A directory committer commit job fails if _temporary directory created
> under dest
> -----------------------------------------------------------------------------------
>
>                 Key: HADOOP-15469
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15469
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.1.0
>         Environment: spark test runs
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>             Fix For: 3.1.1
>
>      Attachments: HADOOP-15469-001.patch
>
>
> The directory staging committer fails in commit job if any temporary
> files/dirs have been created under the destination. Spark work can create
> such a dir for placement of absolute files.
> This is because commitJob() checks whether the dest dir exists at all,
> rather than whether it contains non-hidden files.
> As the comment says, "its kind of superfluous". More specifically, it means
> jobs which would commit with the classic committer & overwrite=false will
> fail.
> Proposed fix: remove the check

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
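To illustrate the distinction the report draws, here is a small standalone Java sketch, not the actual S3A committer code: the method names and the use of java.nio.file are illustrative assumptions. The "exists" check rejects a destination as soon as Spark has dropped a `_temporary` dir into it, while a check for non-hidden entries tolerates that and only rejects real prior output.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class CommitCheckSketch {

    // Hypothetical model of the reported behaviour: reject the destination
    // if it exists at all, even when it holds only hidden/temporary entries.
    static boolean destAcceptableExistsCheck(Path dest) {
        return !Files.exists(dest);
    }

    // Hypothetical model of the behaviour the report argues for: tolerate
    // hidden entries such as _temporary, and only reject a destination that
    // already contains visible (non-hidden) files.
    static boolean destAcceptableVisibleFilesCheck(Path dest) throws IOException {
        if (!Files.exists(dest)) {
            return true;
        }
        try (Stream<Path> entries = Files.list(dest)) {
            return entries.allMatch(p -> {
                String name = p.getFileName().toString();
                // Hadoop convention: names starting with "_" or "." are hidden.
                return name.startsWith("_") || name.startsWith(".");
            });
        }
    }

    public static void main(String[] args) throws IOException {
        Path dest = Files.createTempDirectory("dest");
        Files.createDirectory(dest.resolve("_temporary"));

        // With only _temporary present, the exists-check fails the job,
        // while the visible-files check would let the commit proceed.
        System.out.println("exists-check passes: "
            + destAcceptableExistsCheck(dest));
        System.out.println("visible-files check passes: "
            + destAcceptableVisibleFilesCheck(dest));
    }
}
```

Removing the check entirely, as the patch proposes, matches the classic committer with overwrite=false, whose commit would also have succeeded here.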