yihua commented on code in PR #9593:
URL: https://github.com/apache/hudi/pull/9593#discussion_r1325219598


##########
hudi-common/src/test/java/org/apache/hudi/model/TestHoodieAvroRecordMerger.java:
##########
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hudi.model;
+
+import org.apache.hudi.common.HoodieJsonPayload;
+import org.apache.hudi.common.config.TypedProperties;
+import org.apache.hudi.common.model.HoodieAvroRecord;
+import org.apache.hudi.common.model.HoodieAvroRecordMerger;
+import org.apache.hudi.common.model.HoodieRecord;
+import org.apache.hudi.common.util.Option;
+import org.apache.hudi.common.util.collection.Pair;
+
+import org.apache.avro.Schema;
+import org.junit.jupiter.api.Assertions;
+import org.junit.jupiter.api.Test;
+
+import java.io.IOException;
+import java.util.List;
+
+import static org.apache.hudi.model.TestUtil.SCHEMA;
+import static org.apache.hudi.model.TestUtil.generateData;
+
+public class TestHoodieAvroRecordMerger {
+  private static final HoodieAvroRecordMerger MERGER = HoodieAvroRecordMerger.INSTANCE;
+
+  @Test
+  public void testMergeWhenBothSidesAreGood() throws IOException {
+    List<HoodieAvroRecord> olderRecords = generateData(10);
+    List<HoodieAvroRecord> newerRecords = generateData(10);
+
+    for (int i = 0; i < olderRecords.size(); ++i) {
+      Option<Pair<HoodieRecord, Schema>> r = MERGER.merge(
+          Option.of(olderRecords.get(i)),
+          SCHEMA,
+          Option.of(newerRecords.get(i)),
+          SCHEMA,
+          new TypedProperties());
+      Assertions.assertEquals(r.get().getRight(), SCHEMA);
+      Assertions.assertEquals(
+          r.get().getLeft().getData(),
+          ((HoodieJsonPayload) newerRecords.get(i).getData()).getInsertValue(SCHEMA).get());
+    }
+  }
+
+  @Test
+  public void testMergeWhenOneSideIsEmpty() throws IOException {
+    List<HoodieAvroRecord> records = generateData(1);
+
+    for (HoodieAvroRecord record : records) {
+      Option<Pair<HoodieRecord, Schema>> r = MERGER.merge(
+          Option.empty(),

Review Comment:
   Could you also add test cases where the invalid record is represented as a 
`HoodieEmptyRecord` or as the `HoodieRecord.SENTINEL` used by `HoodieAvroRecord`?
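
   For illustration only (not part of the PR itself): a minimal, self-contained sketch in plain Java, with no Hudi dependencies, of the test matrix this comment is asking for. `Rec` and `SketchMerger` are hypothetical stand-ins; an `empty` flag models a `HoodieEmptyRecord`/sentinel input, and the dispatch mirrors the four-way valid/invalid split in the `HoodieSparkRecordMerger` diff below.

```java
import java.util.Optional;

// Hypothetical stand-in for HoodieRecord: payload, ordering value, and an
// "empty" flag modeling HoodieEmptyRecord or the SENTINEL record.
record Rec(String payload, long orderingValue, boolean empty) {}

class SketchMerger {
  // Mirrors the PR's dispatch: both sides invalid -> no result (trivial case),
  // only the newer side invalid -> delete, only the older side invalid ->
  // insert, otherwise keep the record with the larger ordering value
  // (newer wins ties, matching the `> 0` comparison in the PR).
  static Optional<Rec> merge(Optional<Rec> older, Optional<Rec> newer) {
    boolean validOld = older.isPresent() && !older.get().empty();
    boolean validNew = newer.isPresent() && !newer.get().empty();
    if (!validOld && !validNew) {
      return Optional.empty();            // trivial case
    } else if (validOld && !validNew) {
      return Optional.empty();            // delete case
    } else if (!validOld) {
      return newer;                       // insert case
    }
    return older.get().orderingValue() > newer.get().orderingValue()
        ? older : newer;                  // update case
  }
}
```

   A real test would exercise the same four cells with `Option.empty()`, `HoodieEmptyRecord`, and `HoodieRecord.SENTINEL` on each side of `HoodieAvroRecordMerger.merge`.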



##########
hudi-common/src/test/java/org/apache/hudi/model/TestHoodieAvroRecordMerger.java:
##########
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.hudi.model;
+
+import org.apache.hudi.common.HoodieJsonPayload;
+import org.apache.hudi.common.config.TypedProperties;
+import org.apache.hudi.common.model.HoodieAvroRecord;
+import org.apache.hudi.common.model.HoodieAvroRecordMerger;
+import org.apache.hudi.common.model.HoodieRecord;
+import org.apache.hudi.common.util.Option;
+import org.apache.hudi.common.util.collection.Pair;
+
+import org.apache.avro.Schema;
+import org.junit.jupiter.api.Assertions;
+import org.junit.jupiter.api.Test;
+
+import java.io.IOException;
+import java.util.List;
+
+import static org.apache.hudi.model.TestUtil.SCHEMA;
+import static org.apache.hudi.model.TestUtil.generateData;
+
+public class TestHoodieAvroRecordMerger {

Review Comment:
   We should also add a functional/integration test that uses the Avro/Spark 
record merger to implement the same payload/merging logic @beyond1920 
mentioned, and verifies that the upsert operation generates the same set of 
records for inserts, updates, and deletes (which is the criterion for approving 
this PR), covering all new functionality of the new merge API.



##########
hudi-spark-datasource/hudi-spark-common/src/main/java/org/apache/hudi/HoodieSparkRecordMerger.java:
##########
@@ -38,44 +37,87 @@ public String getMergingStrategy() {
   }
 
   @Override
-  public Option<Pair<HoodieRecord, Schema>> merge(HoodieRecord older, Schema oldSchema, HoodieRecord newer, Schema newSchema, TypedProperties props) throws IOException {
-    ValidationUtils.checkArgument(older.getRecordType() == HoodieRecordType.SPARK);
-    ValidationUtils.checkArgument(newer.getRecordType() == HoodieRecordType.SPARK);
+  public Option<Pair<HoodieRecord, Schema>> merge(Option<HoodieRecord> older,
+                                                  Schema oldSchema,
+                                                  Option<HoodieRecord> newer,
+                                                  Schema newSchema,
+                                                  TypedProperties props) throws IOException {
+    boolean isValidNew = isValid(newer, newSchema, props);
+    boolean isValidOld = isValid(older, oldSchema, props);
 
-    if (newer instanceof HoodieSparkRecord) {
-      HoodieSparkRecord newSparkRecord = (HoodieSparkRecord) newer;
-      if (newSparkRecord.isDeleted()) {
-        // Delete record
-        return Option.empty();
-      }
+    if (!isValidOld && !isValidNew) {
+      return handleTrivialCase(older, oldSchema, newer, newSchema, props);
+    } else if (isValidOld && !isValidNew) {
+      return handleDeleteCase(older, oldSchema, newer, newSchema, props);
+    } else if (!isValidOld) {
+      return handleInsertCase(older, oldSchema, newer, newSchema, props);
     } else {
-      if (newer.getData() == null) {
-        // Delete record
-        return Option.empty();
-      }
-    }
-
-    if (older instanceof HoodieSparkRecord) {
-      HoodieSparkRecord oldSparkRecord = (HoodieSparkRecord) older;
-      if (oldSparkRecord.isDeleted()) {
-        // use natural order for delete record
-        return Option.of(Pair.of(newer, newSchema));
-      }
-    } else {
-      if (older.getData() == null) {
-        // use natural order for delete record
-        return Option.of(Pair.of(newer, newSchema));
-      }
-    }
-    if (older.getOrderingValue(oldSchema, props).compareTo(newer.getOrderingValue(newSchema, props)) > 0) {
-      return Option.of(Pair.of(older, oldSchema));
-    } else {
-      return Option.of(Pair.of(newer, newSchema));
+      return handleUpdateCase(older, oldSchema, newer, newSchema, props);
     }
   }
 
   @Override
   public HoodieRecordType getRecordType() {
     return HoodieRecordType.SPARK;
   }
+
+  private boolean isValid(Option<HoodieRecord> record, Schema schema, TypedProperties props) throws IOException {
+    return record.isPresent()
+        && !(record.get() instanceof HoodieEmptyRecord)
+        && !(record.get().isDelete(schema, props))
+        && record.get().getRecordType() == HoodieRecordType.SPARK;
+  }
+
+  /**
+   * Handle trivial case.
+   */
+  Option<Pair<HoodieRecord, Schema>> handleTrivialCase(
+      Option<HoodieRecord> olderIgnored,
+      Schema oldSchemaIgnored,
+      Option<HoodieRecord> newerIgnored,
+      Schema newSchemaIgnored,
+      TypedProperties propsIgnored) throws IOException {
+    return Option.empty();
+  }
+
+  /**
+   * Handle delete case.
+   */
+  Option<Pair<HoodieRecord, Schema>> handleDeleteCase(
+      Option<HoodieRecord> olderIgnored,
+      Schema oldSchemaIgnored,
+      Option<HoodieRecord> newerIgnored,
+      Schema newSchemaIgnored,
+      TypedProperties propsIgnored) throws IOException {
+    return Option.empty();

Review Comment:
   For the delete case, should the `HoodieEmptyRecord` be returned?
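
   For illustration only (not the reviewer's proposal): a hedged, self-contained sketch in plain Java, with hypothetical names and no Hudi dependencies, contrasting the two options the question raises for the delete case: returning no result at all versus returning an explicit empty-record marker (analogous to returning a `HoodieEmptyRecord`, so downstream code can still distinguish "deleted" from "nothing to merge").

```java
import java.util.Optional;

class DeleteCaseSketch {
  // Hypothetical tombstone payload standing in for a HoodieEmptyRecord.
  static final String TOMBSTONE = "<empty>";

  // Variant A: the delete case yields no result, as the PR currently does;
  // the deleted key simply disappears from the merge output.
  static Optional<String> deleteAsAbsent() {
    return Optional.empty();
  }

  // Variant B: the delete case yields an explicit empty record, so callers
  // can observe that a delete happened for this key.
  static Optional<String> deleteAsTombstone() {
    return Optional.of(TOMBSTONE);
  }
}
```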



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
