yihua commented on code in PR #10677:
URL: https://github.com/apache/hudi/pull/10677#discussion_r1508117749


##########
hudi-common/src/main/java/org/apache/hudi/avro/AvroSchemaUtils.java:
##########
@@ -428,25 +408,95 @@ public static void checkSchemaCompatible(
       boolean allowProjection,
       Set<String> dropPartitionColNames) throws SchemaCompatibilityException {
 
-    String errorMessage = null;
-
-    if (!allowProjection && !canProject(tableSchema, writerSchema, dropPartitionColNames)) {
-      errorMessage = "Column dropping is not allowed";
+    if (!allowProjection) {
+      List<Schema.Field> missingFields = findMissingFields(tableSchema, writerSchema, dropPartitionColNames);
+      if (!missingFields.isEmpty()) {
+        throw new MissingSchemaFieldException(missingFields.stream().map(Schema.Field::name).collect(Collectors.toList()));
+      }
     }
 
     // TODO(HUDI-4772) re-enable validations in case partition columns
     //                 being dropped from the data-file after fixing the write schema
-    if (dropPartitionColNames.isEmpty() && shouldValidate && !isSchemaCompatible(tableSchema, writerSchema)) {
-      errorMessage = "Failed schema compatibility check";
+    if (dropPartitionColNames.isEmpty() && shouldValidate) {
+      AvroSchemaCompatibility.SchemaPairCompatibility result =
+          AvroSchemaCompatibility.checkReaderWriterCompatibility(writerSchema, tableSchema, true);
+      if (result.getType() != AvroSchemaCompatibility.SchemaCompatibilityType.COMPATIBLE) {
+        throw new SchemaBackwardsCompatibilityException(result);
+      }
     }
+  }
 
-    if (errorMessage != null) {
-      String errorDetails = String.format(
-          "%s\nwriterSchema: %s\ntableSchema: %s",
-          errorMessage,
-          writerSchema,
-          tableSchema);
-      throw new SchemaCompatibilityException(errorDetails);
+  /**
+   * Validate whether the {@code incomingSchema} is a valid evolution of {@code tableSchema}.
+   *
+   * @param incomingSchema schema of the incoming dataset
+   * @param tableSchema latest table schema
+   */
+  public static void checkValidEvolution(Schema incomingSchema, Schema tableSchema) {
+    if (incomingSchema.getType() == Schema.Type.NULL) {
+      return;
+    }
+
+    // not really needed for `hoodie.write.set.null.for.missing.columns` but good to check anyway

Review Comment:
   Given that the schema validation is becoming more complex, and while we're at it, could you update the docs/website on how schemas are validated and what evolution is supported by default (including any caveats) in Hudi?
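As a rough illustration of the kind of rule the `AvroSchemaCompatibility` check enforces (a hedged sketch, not Hudi's or Avro's actual implementation): a field the reader-side schema expects but the writer never produced is only acceptable if it carries a default value. The class `CompatSketch` and the `Field` record below are hypothetical:

```java
import java.util.*;

public class CompatSketch {
    // Hypothetical simplified field: just a name plus whether it has a default.
    record Field(String name, boolean hasDefault) {}

    // Loose analogue of Avro's reader/writer compatibility rule: every field
    // the reader schema expects must either exist in the writer schema or
    // have a default value to fall back on.
    static boolean isBackwardsCompatible(List<Field> writerFields, List<Field> readerFields) {
        Set<String> writerNames = new HashSet<>();
        for (Field f : writerFields) {
            writerNames.add(f.name());
        }
        for (Field f : readerFields) {
            if (!writerNames.contains(f.name()) && !f.hasDefault()) {
                return false; // reader needs a value the writer cannot supply
            }
        }
        return true;
    }

    public static void main(String[] args) {
        List<Field> writer = List.of(new Field("id", false));
        List<Field> withDefault = List.of(new Field("id", false), new Field("ts", true));
        List<Field> noDefault = List.of(new Field("id", false), new Field("ts", false));
        System.out.println(isBackwardsCompatible(writer, withDefault)); // true
        System.out.println(isBackwardsCompatible(writer, noDefault));   // false
    }
}
```

The real check also handles type promotions, unions, and name aliases, which is exactly the nuance the requested docs update could spell out.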



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
