the-other-tim-brown commented on code in PR #14340:
URL: https://github.com/apache/hudi/pull/14340#discussion_r2590137268


##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/io/HoodieWriteMergeHandle.java:
##########
@@ -371,15 +371,16 @@ public void write(HoodieRecord<T> oldRecord) {
       // writing the first record. So make a copy of the record to be merged
       HoodieRecord<T> newRecord = keyToNewRecords.get(key).newInstance();
       try {
-        BufferedRecord<T> oldBufferedRecord = BufferedRecords.fromHoodieRecord(oldRecord, oldSchema, readerContext.getRecordContext(), props, orderingFields, false);
-        BufferedRecord<T> newBufferedRecord = BufferedRecords.fromHoodieRecord(newRecord, newSchema, readerContext.getRecordContext(), props, orderingFields, deleteContext);
+        BufferedRecord<T> oldBufferedRecord = BufferedRecords.fromHoodieRecord(oldRecord, HoodieSchema.fromAvroSchema(oldSchema), readerContext.getRecordContext(), props, orderingFields, false);

Review Comment:
   Can we update `oldSchema` and `newSchema` to `HoodieSchema` instead of doing the conversion inline here?
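
   A sketch of what this could look like, assuming the handle keeps the schemas as fields (field and constructor shapes below are illustrative, not the actual `HoodieWriteMergeHandle` code):

   ```java
   // Illustrative sketch only: store HoodieSchema fields on the handle so the
   // Avro-to-HoodieSchema conversion happens once, not on every write(oldRecord).
   private final HoodieSchema oldSchema;  // was: org.apache.avro.Schema
   private final HoodieSchema newSchema;

   // In the constructor (assumed shape):
   //   this.oldSchema = HoodieSchema.fromAvroSchema(avroOldSchema);
   //   this.newSchema = HoodieSchema.fromAvroSchema(avroNewSchema);

   // write(HoodieRecord<T> oldRecord) can then pass the fields straight through:
   BufferedRecord<T> oldBufferedRecord =
       BufferedRecords.fromHoodieRecord(oldRecord, oldSchema, readerContext.getRecordContext(), props, orderingFields, false);
   ```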



##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/index/HoodieIndexUtils.java:
##########
@@ -571,10 +572,10 @@ public static <R> HoodieData<HoodieRecord<R>> mergeForPartitionUpdatesAndDeletio
             return Collections.singletonList(incoming).iterator();
           }
           HoodieRecord<R> existing = existingOpt.get();
-          Schema writeSchema = writerSchema.get();
+          HoodieSchema writeSchema = HoodieSchema.fromAvroSchema(writerSchema.get());

Review Comment:
   Can we update `writeSchema` to be a `HoodieSchema`? I see that it is `Serializable`, so we won't need to wrap it in `SerializableSchema`.
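
   Roughly, since `HoodieSchema` is `Serializable`, the conversion can be hoisted out of the per-record path and the schema captured by the closure directly (sketch under that assumption; `writerSchemaAvro` is a stand-in name, surrounding code elided):

   ```java
   // Illustrative sketch only: convert once, before the per-record merge logic,
   // and let the closure capture the Serializable HoodieSchema without a
   // SerializableSchema wrapper.
   HoodieSchema writeSchema = HoodieSchema.fromAvroSchema(writerSchemaAvro);
   ```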



##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/table/action/commit/BaseWriteHelper.java:
##########
@@ -150,9 +151,9 @@ protected static <T> HoodieRecord<T> reduceRecords(TypedProperties props, Buffer
     try {
       // NOTE: The order of previous and next is uncertain within a batch in 
"reduceByKey".
       // If the return value is empty, it means the previous should be chosen.
-      BufferedRecord<T> newBufferedRecord = BufferedRecords.fromHoodieRecord(next, schema, recordContext, props, orderingFieldNames, deleteContext);
+      BufferedRecord<T> newBufferedRecord = BufferedRecords.fromHoodieRecord(next, HoodieSchema.fromAvroSchema(schema), recordContext, props, orderingFieldNames, deleteContext);

Review Comment:
   Can we just pass the `HoodieSchema` in to this method?
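
   For example, the method could accept a `HoodieSchema` parameter so each caller converts once (or already holds one), rather than converting per reduced record (illustrative sketch; the other parameters of `reduceRecords` are elided, not shown in the hunk):

   ```java
   // Illustrative sketch only: accept HoodieSchema in the signature instead of
   // converting from Avro inside the hot reduce path.
   protected static <T> HoodieRecord<T> reduceRecords(TypedProperties props, /* ... */ HoodieSchema schema /* ... */) {
     // ...
     BufferedRecord<T> newBufferedRecord =
         BufferedRecords.fromHoodieRecord(next, schema, recordContext, props, orderingFieldNames, deleteContext);
     // ...
   }
   ```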



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
