[ https://issues.apache.org/jira/browse/MAPREDUCE-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17764799#comment-17764799 ]

ASF GitHub Bot commented on MAPREDUCE-7448:
-------------------------------------------

steveloughran commented on code in PR #6038:
URL: https://github.com/apache/hadoop/pull/6038#discussion_r1324843146


##########
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java:
##########
@@ -158,6 +158,11 @@ public FileOutputCommitter(Path outputPath,
         "output directory:" + skipCleanup + ", ignore cleanup failures: " +
         ignoreCleanupFailures);
 
+    if (algorithmVersion == 1 && skipCleanup) {
+        LOG.warn("Skip cleaning up when using FileOutputCommitter V1 can lead to unexpected behaviors. " +
+                "For example, committing several times may be allowed falsely.");

Review Comment:
   "Skip cleaning up when using FileOutputCommitter V1 may corrupt the output".
   
   there's another option here: we just ignore the setting on v1 jobs?
   it's only there because directory deletion is so O(files) on GCS, and it targets v2 because that same file-by-file operation means that directory rename is never atomic; you may as well use the already unsafe v2 algorithm. 
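The "ignore the setting on v1 jobs" option the reviewer floats could be sketched as below. This is a minimal illustration, not the actual Hadoop patch; `CleanupPolicy` and `effectiveSkipCleanup` are hypothetical names that do not exist in the Hadoop codebase.

```java
// Sketch of the reviewer's alternative: honor the skip-cleanup flag only for
// the v2 algorithm, and silently ignore it for v1. Hypothetical helper, not
// the real FileOutputCommitter code.
public class CleanupPolicy {

    /** Decide whether job cleanup may actually be skipped. */
    static boolean effectiveSkipCleanup(int algorithmVersion,
                                        boolean skipCleanupRequested) {
        // v1 relies on deleting _temporary so that a second commitJob fails;
        // only the (already non-atomic) v2 algorithm can safely skip cleanup.
        return skipCleanupRequested && algorithmVersion == 2;
    }

    public static void main(String[] args) {
        System.out.println(effectiveSkipCleanup(1, true)); // v1: flag ignored
        System.out.println(effectiveSkipCleanup(2, true)); // v2: flag honored
    }
}
```

The design choice here is to degrade gracefully (perhaps with a warning log) rather than fail the job when the flag is set on a v1 commit.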





> Inconsistent Behavior for FileOutputCommitter V1 to commit successfully many times
> ----------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-7448
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7448
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>            Reporter: ConfX
>            Priority: Critical
>              Labels: pull-request-available
>         Attachments: reproduce.sh
>
>
> h2. What happened
> I turned on {{mapreduce.fileoutputcommitter.cleanup.skipped=true}}, after which
> version 1 of {{FileOutputCommitter}} can commit several times, which is
> unexpected.
> h2. Where's the problem
> In {{FileOutputCommitter.commitJobInternal}},
> {noformat}
> if (algorithmVersion == 1) {
>   for (FileStatus stat : getAllCommittedTaskPaths(context)) {
>     mergePaths(fs, stat, finalOutput, context);
>   }
> }
> if (skipCleanup) {
>   LOG.info("Skip cleanup the _temporary folders under job's output " +
>       "directory in commitJob.");
> ...{noformat}
> Here, if we skip cleanup, the _temporary folder is not deleted and the
> _SUCCESS file is not created, which allows the next {{mergePaths}} (and thus a
> duplicate {{commitJob}}) to succeed instead of failing.
> h2. How to reproduce
>  # set {{mapreduce.fileoutputcommitter.cleanup.skipped}}={{true}}
>  # run 
> {{org.apache.hadoop.mapred.TestFileOutputCommitter#testCommitterWithDuplicatedCommitV1}}
> you should observe
> {noformat}
> java.lang.AssertionError: Duplicate commit successful: wrong behavior for version 1.
>     at org.junit.Assert.fail(Assert.java:89)
>     at org.apache.hadoop.mapred.TestFileOutputCommitter.testCommitterWithDuplicatedCommitInternal(TestFileOutputCommitter.java:295)
>     at org.apache.hadoop.mapred.TestFileOutputCommitter.testCommitterWithDuplicatedCommitV1(TestFileOutputCommitter.java:269){noformat}
> For an easy reproduction, run the reproduce.sh in the attachment.
> We are happy to provide a patch if this issue is confirmed.
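The failure mode described in the report can be modeled outside Hadoop. A toy sketch using plain `java.nio` (not the real committer; this `commit` only mimics how the presence or absence of `_temporary` gates a second commit):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Toy model of the bug: v1 commitJob enumerates _temporary, and its cleanup
// step deletes that directory so a duplicate commit fails fast. Skipping
// cleanup leaves _temporary in place, so a second commit also "succeeds".
public class DuplicateCommitDemo {

    /** Returns true if the (simulated) commit succeeds. */
    static boolean commit(Path jobDir, boolean skipCleanup) throws IOException {
        Path temporary = jobDir.resolve("_temporary");
        if (!Files.exists(temporary)) {
            return false; // the real committer fails here on a duplicate commit
        }
        // ... merging of committed task output elided ...
        if (!skipCleanup) {
            Files.delete(temporary); // v1 cleanup removes the marker directory
        }
        return true;
    }

    public static void main(String[] args) throws IOException {
        Path job = Files.createTempDirectory("job");
        Files.createDirectory(job.resolve("_temporary"));
        System.out.println(commit(job, true)); // first commit succeeds
        System.out.println(commit(job, true)); // duplicate also succeeds: the bug
    }
}
```

With `skipCleanup=false`, the second call returns false, matching the behavior the test asserts for version 1.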



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
