aokolnychyi commented on a change in pull request #2466:
URL: https://github.com/apache/iceberg/pull/2466#discussion_r612048915



##########
File path: spark3/src/main/java/org/apache/iceberg/spark/source/SparkWrite.java
##########
@@ -368,6 +374,23 @@ private void commitWithSnapshotIsolation(OverwriteFiles overwriteFiles,
     }
   }
 
+  private class CompactionWrite extends BaseBatchWrite {
+    private final String jobID;
+
+    private CompactionWrite(String jobID) {
+      this.jobID = jobID;
+    }
+
+    @Override
+    public void commit(WriterCommitMessage[] messages) {
+      List<DataFile> newDataFiles = Lists.newArrayListWithExpectedSize(messages.length);
+      for (DataFile file : files(messages)) {
+        newDataFiles.add(file);
+      }
+      DataCompactionJobCoordinator.commitJob(table, jobID, newDataFiles);

Review comment:
       Internally, we do the actual commits during writes. However, we want to process different partitions using separate jobs, so for now a single round of compaction simply generates a number of commits, one per partition job. How should we handle this place if the user asks to commit everything as one operation?
   
   In our implementation, we synchronize on the driver to make sure only one job is committing at a time, even though we submit and run a number of partition jobs concurrently. That way there is no need to generate extra commit conflicts.
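The driver-side synchronization described above can be sketched roughly as follows. This is an illustrative standalone example, not the actual Iceberg API: the `CompactionCoordinator` class, the `commitPartition` method, and the string-based file lists are all hypothetical stand-ins for the real coordinator and `DataFile` types.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: partition compaction jobs run concurrently, but
// commits are serialized on the driver so only one job commits at a time
// and concurrent jobs do not generate spurious commit conflicts.
class CompactionCoordinator {
  private final Object commitLock = new Object();
  private final List<String> committed = new ArrayList<>();

  // Each partition job calls this when its rewrite finishes; the shared
  // lock ensures the table-level commits never race with each other.
  void commitPartition(String partition, List<String> newFiles) {
    synchronized (commitLock) {
      // Stand-in for the real per-partition table commit.
      committed.add(partition + ":" + newFiles.size());
    }
  }

  // Returns a snapshot of all commits recorded so far.
  List<String> committed() {
    synchronized (commitLock) {
      return new ArrayList<>(committed);
    }
  }
}
```

Committing everything as one operation would instead accumulate the per-partition results and perform a single commit at the end, at the cost of holding all rewritten file metadata until the whole round finishes.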




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
