satishkotha commented on a change in pull request #2263:
URL: https://github.com/apache/hudi/pull/2263#discussion_r545555370



##########
File path: hudi-client/hudi-client-common/src/main/java/org/apache/hudi/table/action/compact/strategy/LogFileSizeBasedCompactionStrategy.java
##########
@@ -40,21 +36,6 @@
 public class LogFileSizeBasedCompactionStrategy extends BoundedIOCompactionStrategy
     implements Comparator<HoodieCompactionOperation> {
 
-  private static final String TOTAL_LOG_FILE_SIZE = "TOTAL_LOG_FILE_SIZE";

Review comment:
       I refactored it based on other review feedback. There are unit tests covering this, so I am reasonably confident there are no errors. I'll keep such cleanup changes for a separate PR next time. Thanks.

##########
File path: hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/table/action/commit/SparkBulkInsertHelper.java
##########
@@ -59,25 +58,39 @@ public static SparkBulkInsertHelper newInstance() {
   }
 
   @Override
-  public HoodieWriteMetadata<JavaRDD<WriteStatus>> bulkInsert(JavaRDD<HoodieRecord<T>> inputRecords,
-                                                              String instantTime,
-                                                              HoodieTable<T, JavaRDD<HoodieRecord<T>>, JavaRDD<HoodieKey>, JavaRDD<WriteStatus>> table,
-                                                              HoodieWriteConfig config,
-                                                              BaseCommitActionExecutor<T, JavaRDD<HoodieRecord<T>>, JavaRDD<HoodieKey>, JavaRDD<WriteStatus>, R> executor,
-                                                              boolean performDedupe,
-                                                              Option<BulkInsertPartitioner<T>> userDefinedBulkInsertPartitioner) {
+  public HoodieWriteMetadata<JavaRDD<WriteStatus>> bulkInsert(final JavaRDD<HoodieRecord<T>> inputRecords, final String instantTime, final HoodieTable<T, JavaRDD<HoodieRecord<T>>, JavaRDD<HoodieKey>, JavaRDD<WriteStatus>> table, final HoodieWriteConfig config, final BaseCommitActionExecutor<T, JavaRDD<HoodieRecord<T>>, JavaRDD<HoodieKey>, JavaRDD<WriteStatus>, R> executor, final boolean performDedupe, final Option<BulkInsertPartitioner<T>> userDefinedBulkInsertPartitioner) {

Review comment:
       Done

##########
File path: hudi-common/src/main/avro/HoodieClusteringGroup.avsc
##########
@@ -40,6 +40,11 @@
          }],
          "default": null
       },
+      {
+         "name":"numOutputGroups",

Review comment:
       Changed it to numOutputFileGroups
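
For reference, a minimal sketch of how the renamed field might look in HoodieClusteringGroup.avsc. The nullable-int union type and null default are assumptions, chosen to match the `"default": null` style of the surrounding fields in the quoted hunk:

```json
{
   "name": "numOutputFileGroups",
   "type": ["null", "int"],
   "default": null
}
```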




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

