prashantwason commented on a change in pull request #2424:
URL: https://github.com/apache/hudi/pull/2424#discussion_r555221595



##########
File path: hudi-common/src/main/java/org/apache/hudi/avro/HoodieAvroUtils.java
##########
@@ -292,53 +284,57 @@ public static GenericRecord stitchRecords(GenericRecord left, GenericRecord righ
     return result;
   }
 
-  /**
-   * Given a avro record with a given schema, rewrites it into the new schema while setting fields only from the old
-   * schema.
-   */
-  public static GenericRecord rewriteRecord(GenericRecord record, Schema newSchema) {
-    return rewrite(record, getCombinedFieldsToWrite(record.getSchema(), newSchema), newSchema);
-  }
-
   /**
    * Given a avro record with a given schema, rewrites it into the new schema while setting fields only from the new
    * schema.
+   * NOTE: Here, the assumption is that you cannot go from an evolved schema (schema with (N) fields)
+   * to an older schema (schema with (N-1) fields). All fields present in the older record schema MUST be present in the
+   * new schema and the default/existing values are carried over.
+   * This particular method does the following things :
+   * a) Create a new empty GenericRecord with the new schema.
+   * b) Set default values for all fields of this transformed schema in the new GenericRecord or copy over the data
+   * from the old schema to the new schema
+   * c) hoodie_metadata_fields have a special treatment. This is done because for code generated AVRO classes
+   * (only HoodieMetadataRecord), the avro record is a SpecificBaseRecord type instead of a GenericRecord.
+   * SpecificBaseRecord throws null pointer exception for record.get(name) if name is not present in the schema of the
+   * record (which happens when converting a SpecificBaseRecord without hoodie_metadata_fields to a new record with.
    */
-  public static GenericRecord rewriteRecordWithOnlyNewSchemaFields(GenericRecord record, Schema newSchema) {
-    return rewrite(record, new LinkedHashSet<>(newSchema.getFields()), newSchema);
-  }
-
-  private static GenericRecord rewrite(GenericRecord record, LinkedHashSet<Field> fieldsToWrite, Schema newSchema) {
+  public static GenericRecord rewriteRecord(GenericRecord oldRecord, Schema newSchema) {
     GenericRecord newRecord = new GenericData.Record(newSchema);
-    for (Schema.Field f : fieldsToWrite) {
-      if (record.get(f.name()) == null) {
+    boolean isSpecificRecord = oldRecord instanceof SpecificRecordBase;
+    for (Schema.Field f : newSchema.getFields()) {
+      if (!isMetadataField(f.name()) && oldRecord.get(f.name()) == null) {

Review comment:
       Maybe create a local variable with the value of oldRecord.get(f.name()), as this is a hash lookup within GenericRecord. This value is used in many places in this function.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

