danny0405 commented on a change in pull request #4753:
URL: https://github.com/apache/hudi/pull/4753#discussion_r801310288



##########
File path: hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/SparkRDDWriteClient.java
##########
@@ -299,7 +299,8 @@ protected void completeCompaction(HoodieCommitMetadata metadata, JavaRDD<WriteSt
                                     HoodieTable<T, JavaRDD<HoodieRecord<T>>, JavaRDD<HoodieKey>, JavaRDD<WriteStatus>> table,
                                     String compactionCommitTime) {
     this.context.setJobStatus(this.getClass().getSimpleName(), "Collect compaction write status and commit compaction");
-    List<HoodieWriteStat> writeStats = writeStatuses.map(WriteStatus::getStat).collect();
+    List<HoodieWriteStat> writeStats = metadata.getPartitionToWriteStats().entrySet().stream().flatMap(e ->
+        e.getValue().stream()).collect(Collectors.toList());

Review comment:
       > @zhangyue19921010 good catch! @danny0405 this is a double-writing problem -- de-referencing the RDD a second time here (after we've already done it in the commit) creates new files, which causes divergence.
   
   @alexeykudinkin @zhangyue19921010 Thanks for the explanation, I forgot that the RDD computation is lazy.
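   
   For future readers, here is a minimal, self-contained sketch of the underlying behavior (plain Spark, not Hudi code; the class name, app name, and local master are made up for illustration): because RDDs are lazy and this one is not cached, every action replays the whole lineage, so a second collect() re-runs any side effects in the pipeline, i.e. a double write.
   
       import java.util.Arrays;
   
       import org.apache.spark.SparkConf;
       import org.apache.spark.api.java.JavaRDD;
       import org.apache.spark.api.java.JavaSparkContext;
       import org.apache.spark.util.LongAccumulator;
   
       public class LazyRddDemo {
         public static void main(String[] args) {
           SparkConf conf = new SparkConf().setAppName("lazy-rdd-demo").setMaster("local[2]");
           try (JavaSparkContext jsc = new JavaSparkContext(conf)) {
             // Counts how many times the "write" side effect actually runs.
             LongAccumulator writes = jsc.sc().longAccumulator("writes");
   
             // Stand-in for the compaction write path: each map call pretends to create a file.
             JavaRDD<String> writeStatuses = jsc.parallelize(Arrays.asList(1, 2, 3))
                 .map(i -> {
                   writes.add(1); // side effect, like writing a data file
                   return "file-" + i;
                 });
   
             writeStatuses.collect(); // first action, e.g. the commit path
             writeStatuses.collect(); // second action: the whole lineage runs again
   
             // Prints 6, not 3 -- the un-cached RDD was recomputed, i.e. a double write.
             System.out.println("write side effect ran " + writes.value() + " times");
           }
         }
       }
   
   Reading the stats out of the already-materialized HoodieCommitMetadata, as the patch does, avoids triggering the lineage a second time.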



