the-other-tim-brown commented on code in PR #9379:
URL: https://github.com/apache/hudi/pull/9379#discussion_r1286196041


##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/metadata/SparkHoodieBackedTableMetadataWriter.java:
##########
@@ -119,68 +115,20 @@ protected void initRegistry() {
 
   @Override
   protected void commit(String instantTime, Map<MetadataPartitionType, HoodieData<HoodieRecord>> partitionRecordsMap) {
-    commitInternal(instantTime, partitionRecordsMap, Option.empty());
+    commitInternal(instantTime, partitionRecordsMap, false, Option.empty());
   }
 
+  @Override
+  protected JavaRDD<HoodieRecord> convertRecordsToWriteClientInput(HoodieData<HoodieRecord> records) {
+    return HoodieJavaRDD.getJavaRDD(records);
+  }
+
+  @Override
   protected void bulkCommit(
       String instantTime, MetadataPartitionType partitionType, HoodieData<HoodieRecord> records,
       int fileGroupCount) {
     SparkHoodieMetadataBulkInsertPartitioner partitioner = new SparkHoodieMetadataBulkInsertPartitioner(fileGroupCount);
-    commitInternal(instantTime, Collections.singletonMap(partitionType, records), Option.of(partitioner));
-  }
-
-  private void commitInternal(String instantTime, Map<MetadataPartitionType, HoodieData<HoodieRecord>> partitionRecordsMap,
Review Comment:
   Yes, it turns out the implementation is almost identical for all three of our clients, so I moved it to a common method.
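   The refactor described above follows the template-method pattern: the shared `commitInternal` logic moves into the common base class, and each engine client overrides only the input-conversion hook (`convertRecordsToWriteClientInput`). A minimal, hypothetical sketch of that shape, using plain `List<String>` as a stand-in for Hudi's `HoodieData`/`JavaRDD` types and simplified method signatures:

   ```java
   import java.util.ArrayList;
   import java.util.List;
   import java.util.Map;

   // Sketch only: real Hudi code uses HoodieData<HoodieRecord> and engine-specific
   // input types; here List<String> stands in for both.
   abstract class BaseMetadataWriter<I> {
     // The shared commit path lives once in the common base class...
     protected void commitInternal(String instantTime, Map<String, List<String>> partitionRecordsMap) {
       for (List<String> records : partitionRecordsMap.values()) {
         // ...and delegates only the engine-specific conversion to the subclass.
         I input = convertRecordsToWriteClientInput(records);
         write(instantTime, input);
       }
     }

     protected abstract I convertRecordsToWriteClientInput(List<String> records);

     protected abstract void write(String instantTime, I input);
   }

   // Stand-in for an engine client like SparkHoodieBackedTableMetadataWriter,
   // which would convert to a JavaRDD; here the "input" is just a copied list.
   class ListBackedWriter extends BaseMetadataWriter<List<String>> {
     final List<String> written = new ArrayList<>();

     @Override
     protected List<String> convertRecordsToWriteClientInput(List<String> records) {
       return new ArrayList<>(records);
     }

     @Override
     protected void write(String instantTime, List<String> input) {
       written.addAll(input);
     }
   }
   ```

   The payoff is the one visible in the diff: each client shrinks to a one-line conversion override, while commit ordering and bookkeeping stay in one place.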



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
