lw309637554 commented on a change in pull request #2275:
URL: https://github.com/apache/hudi/pull/2275#discussion_r548046978



##########
File path: 
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/table/action/commit/UpsertPartitioner.java
##########
@@ -140,11 +151,15 @@ private void assignInserts(WorkloadProfile profile, HoodieEngineContext context)
     Map<String, List<SmallFile>> partitionSmallFilesMap =
        getSmallFilesForPartitions(new ArrayList<String>(partitionPaths), context);
 
+    Map<String, Set<String>> partitionPathToPendingClusteringFileGroupsId = getPartitionPathToPendingClusteringFileGroupsId();
+
     for (String partitionPath : partitionPaths) {
       WorkloadStat pStat = profile.getWorkloadStat(partitionPath);
       if (pStat.getNumInserts() > 0) {
+        // Exclude small files that are in pending clustering, because file groups in pending clustering do not support updates yet.

Review comment:
       Routing inserts to small files will conflict with clustering. This change resolves the following case:
   f1, f2, f3 are file groups in a partition.
   Assume there is pending clustering on all three file groups f1, f2, f3.
   f3 is a small file, so buildProfile would assign inserts to f3.
   Applying the update strategy would then throw an error because f3 is included.
   Instead, we may want to change buildProfile to exclude file groups that are in pending clustering. A new file group f4 would then be created in step #3 and ingestion could continue. This way inserts proceed without errors.
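The filtering step described above can be sketched as follows. This is a hedged, self-contained illustration, not the actual Hudi implementation: `SmallFile` here is a simplified stand-in for Hudi's class (tracking only a file group id), and `filterOutPendingClustering` is a hypothetical helper showing how small files whose file groups are under pending clustering could be dropped from the insert-assignment candidates, so that inserts fall through to a new file group like f4.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ExcludePendingClustering {

  // Simplified stand-in for Hudi's SmallFile: only the file group id matters here.
  static final class SmallFile {
    final String fileGroupId;

    SmallFile(String fileGroupId) {
      this.fileGroupId = fileGroupId;
    }
  }

  // Hypothetical helper: keep only small files whose file group is NOT in
  // pending clustering, so inserts are never assigned to a conflicting group.
  static List<SmallFile> filterOutPendingClustering(
      List<SmallFile> smallFiles, Set<String> pendingClusteringFileGroupIds) {
    List<SmallFile> eligible = new ArrayList<>();
    for (SmallFile sf : smallFiles) {
      if (!pendingClusteringFileGroupIds.contains(sf.fileGroupId)) {
        eligible.add(sf);
      }
    }
    return eligible;
  }

  public static void main(String[] args) {
    // Scenario from the review comment: f1, f2, f3 are all in pending
    // clustering, and f3 happens to be a small file.
    List<SmallFile> smallFiles = Arrays.asList(new SmallFile("f3"));
    Set<String> pending = new HashSet<>(Arrays.asList("f1", "f2", "f3"));

    List<SmallFile> eligible = filterOutPendingClustering(smallFiles, pending);

    // No eligible small file remains, so the partitioner would create a new
    // file group (e.g. f4) for the inserts instead of conflicting with f3.
    System.out.println(eligible.size());
  }
}
```

With this filter in place, the update strategy never sees inserts bound for f3, which is the error case the comment describes.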



