nsivabalan commented on code in PR #8490:
URL: https://github.com/apache/hudi/pull/8490#discussion_r1185086041


##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/common/model/HoodieSparkRecord.java:
##########
@@ -384,28 +387,35 @@ private static boolean hasMetaFields(StructType structType) {
   private static HoodieRecord<InternalRow> convertToHoodieSparkRecord(StructType structType, HoodieSparkRecord record, boolean withOperationField) {
     return convertToHoodieSparkRecord(structType, record,
         Pair.of(HoodieRecord.RECORD_KEY_METADATA_FIELD, HoodieRecord.PARTITION_PATH_METADATA_FIELD),
-        withOperationField, Option.empty());
+        withOperationField, Option.empty(), Option.empty());
   }
 
   private static HoodieRecord<InternalRow> convertToHoodieSparkRecord(StructType structType, HoodieSparkRecord record, boolean withOperationField,
-      Option<String> partitionName) {
+      Option<String> partitionName, Option<StructType> structTypeWithoutMetaFields) {
     return convertToHoodieSparkRecord(structType, record,
         Pair.of(HoodieRecord.RECORD_KEY_METADATA_FIELD, HoodieRecord.PARTITION_PATH_METADATA_FIELD),
-        withOperationField, partitionName);
+        withOperationField, partitionName, structTypeWithoutMetaFields);
   }
 
   /**
    * Utility method to convert bytes to HoodieRecord using schema and payload class.
    */
   private static HoodieRecord<InternalRow> convertToHoodieSparkRecord(StructType structType, HoodieSparkRecord record, Pair<String, String> recordKeyPartitionPathFieldPair,
-      boolean withOperationField, Option<String> partitionName) {
+      boolean withOperationField, Option<String> partitionName, Option<StructType> structTypeWithoutMetaFields) {

Review Comment:
   Can you enhance the Javadoc to explain when and how to use this method? For example, when should the last arg be set?
   Or should we introduce an overloaded method instead?
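
   A minimal sketch of what the overload suggestion could look like, using simplified placeholder types rather than the real Hudi signatures (the method name and Javadoc wording below are illustrative only):

   ```java
   import java.util.Optional;

   // Sketch: keep the original signature as a thin overload that delegates
   // with Optional.empty(), and put the "when to set the last arg" guidance
   // in the Javadoc of the extended variant.
   public class ConvertOverloadSketch {

     /**
      * Converts using the full schema; equivalent to calling the extended
      * overload with {@code Optional.empty()} for the stripped schema.
      */
     static String convert(String schema, String record) {
       return convert(schema, record, Optional.empty());
     }

     /**
      * Converts using {@code schema}. Pass {@code schemaWithoutMetaFields}
      * only when the record should be rewritten without meta fields; leave
      * it empty otherwise.
      */
     static String convert(String schema, String record,
                           Optional<String> schemaWithoutMetaFields) {
       // Use the stripped schema when present, the full schema otherwise.
       return record + "@" + schemaWithoutMetaFields.orElse(schema);
     }

     public static void main(String[] args) {
       System.out.println(convert("full", "r1"));
       System.out.println(convert("full", "r1", Optional.of("noMeta")));
     }
   }
   ```

   With this shape, existing call sites keep the short overload and only the callers that actually strip meta fields touch the new parameter.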



##########
hudi-common/src/main/java/org/apache/hudi/common/model/HoodieAvroIndexedRecord.java:
##########
@@ -153,29 +152,11 @@ public HoodieRecord wrapIntoHoodieRecordPayloadWithParams(
       Option<Pair<String, String>> simpleKeyGenFieldsOpt,
       Boolean withOperation,
       Option<String> partitionNameOp,
-      Boolean populateMetaFields) {
+      Boolean populateMetaFields,
+      Option<Schema> schemaWithoutMetaFields) {
     String payloadClass = ConfigUtils.getPayloadClass(props);
     String preCombineField = ConfigUtils.getOrderingField(props);
-    return HoodieAvroUtils.createHoodieRecordFromAvro(data, payloadClass, preCombineField, simpleKeyGenFieldsOpt, withOperation, partitionNameOp, populateMetaFields);
-  }
-
-  public HoodieRecord wrapIntoHoodieRecordPayloadWithoutMetaFields(
-      Schema recordSchema,
-      Schema schemaWithoutMetaFields,
-      Properties props,
-      Option<Pair<String, String>> simpleKeyGenFieldsOpt,
-      Boolean withOperation,
-      Option<String> partitionNameOp,
-      Boolean populateMetaFields) {
-    String payloadClass = ConfigUtils.getPayloadClass(props);
-    String preCombineField = ConfigUtils.getOrderingField(props);
-    return SpillableMapUtils.convertToHoodieRecordPayload2((GenericRecord) data,
-        payloadClass,
-        preCombineField,
-        simpleKeyGenFieldsOpt.orElse(Pair.of(HoodieRecord.RECORD_KEY_METADATA_FIELD, HoodieRecord.PARTITION_PATH_METADATA_FIELD)),
-        withOperation,
-        partitionNameOp,
-        schemaWithoutMetaFields);
+    return HoodieAvroUtils.createHoodieRecordFromAvro(data, payloadClass, preCombineField, simpleKeyGenFieldsOpt, withOperation, partitionNameOp, populateMetaFields, schemaWithoutMetaFields);

Review Comment:
   So, do we have any follow-ups for this to work with SparkRecord? I see the changes here are only to the Avro-based record.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to