nsivabalan commented on a change in pull request #2350:
URL: https://github.com/apache/hudi/pull/2350#discussion_r548930146



##########
File path: hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/client/TestTableSchemaEvolution.java
##########
@@ -107,10 +107,22 @@ public void testSchemaCompatibilityBasic() throws Exception {
     assertFalse(TableSchemaResolver.isSchemaCompatible(TRIP_EXAMPLE_SCHEMA, swappedFieldSchema),
         "Swapped fields are not compatible");
 
-    String typeChangeSchema = TRIP_SCHEMA_PREFIX + MAP_TYPE_SCHEMA + FARE_NESTED_SCHEMA
+    String typeChangeSchemaDisallowed = TRIP_SCHEMA_PREFIX + MAP_TYPE_SCHEMA + FARE_NESTED_SCHEMA
         + TIP_NESTED_SCHEMA.replace("string", "boolean") + TRIP_SCHEMA_SUFFIX;
-    assertFalse(TableSchemaResolver.isSchemaCompatible(TRIP_EXAMPLE_SCHEMA, typeChangeSchema),
-        "Field type change is not compatible");
+    assertFalse(TableSchemaResolver.isSchemaCompatible(TRIP_EXAMPLE_SCHEMA, typeChangeSchemaDisallowed),
+        "Incompatible field type change is not allowed");
+
+    // Array of allowed schema field type transitions
+    String[][] allowedFieldChanges = {
+        {"string", "bytes"}, {"bytes", "string"},
+        {"int", "long"}, {"int", "float"}, {"long", "float"},
+        {"int", "double"}, {"float", "double"}, {"long", "double"}};
+    for (String[] fieldChange : allowedFieldChanges) {

Review comment:
       Can we also add the reverse transitions, i.e. long to int, and similar evolutions, and ensure that the schema compatibility check returns false?
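
       The directionality this comment asks to verify can be sketched with a stdlib-only snippet (the `ALLOWED` map and `isPromotionAllowed` helper below are hypothetical illustrations mirroring the test's transition table, not Hudi code): allowed promotions go from a narrower writer type to a wider reader type, so every reversed pair should be rejected.

```java
import java.util.Map;
import java.util.Set;

public class PromotionCheck {
    // Directional promotion table modeled after the allowedFieldChanges
    // array in the test above (hypothetical sketch, not Hudi's implementation).
    static final Map<String, Set<String>> ALLOWED = Map.of(
        "string", Set.of("bytes"),
        "bytes", Set.of("string"),
        "int", Set.of("long", "float", "double"),
        "long", Set.of("float", "double"),
        "float", Set.of("double"));

    // A promotion is allowed only if "to" appears in the set for "from";
    // the relation is deliberately not symmetric.
    static boolean isPromotionAllowed(String from, String to) {
        return ALLOWED.getOrDefault(from, Set.of()).contains(to);
    }

    public static void main(String[] args) {
        System.out.println(isPromotionAllowed("int", "long"));     // true
        // Reverse transitions (the case the comment asks to test) must fail:
        System.out.println(isPromotionAllowed("long", "int"));     // false
        System.out.println(isPromotionAllowed("double", "float")); // false
    }
}
```

       Note that only the string/bytes pair is bidirectional; all numeric widenings are one-way, which is why a `disallowedFieldChanges` table could simply be the numeric pairs reversed.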

##########
File path: hudi-common/src/main/java/org/apache/hudi/common/table/TableSchemaResolver.java
##########
@@ -329,9 +329,11 @@ public static boolean isSchemaCompatible(Schema oldSchema, Schema newSchema) {
       // All fields in the newSchema record can be populated from the oldSchema record
       return true;
     } else {
-      // Use the checks implemented by
+      // Use the checks implemented by Avro

Review comment:
       And if you agree on this, can we have a test to cover that scenario (one that would fail if not for this patch)?

##########
File path: hudi-common/src/main/java/org/apache/hudi/common/table/TableSchemaResolver.java
##########
@@ -329,9 +329,11 @@ public static boolean isSchemaCompatible(Schema oldSchema, Schema newSchema) {
       // All fields in the newSchema record can be populated from the oldSchema record
       return true;
     } else {
-      // Use the checks implemented by
+      // Use the checks implemented by Avro

Review comment:
       @prashantwason: doesn't line 299 need fixing too? Basically, any call to SchemaCompatibility.* would need the same fix w.r.t. the reader and writer schema arguments, right?
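
       A minimal sketch of why the argument order matters, assuming Avro is on the classpath (the `INT_REC`/`LONG_REC` record schemas are made up for illustration): Avro's `SchemaCompatibility.checkReaderWriterCompatibility` takes the reader schema first and the writer schema second, so swapping the two silently inverts the result for one-way promotions like int to long.

```java
import org.apache.avro.Schema;
import org.apache.avro.SchemaCompatibility;
import org.apache.avro.SchemaCompatibility.SchemaCompatibilityType;

public class ReaderWriterOrder {
    // Two records that differ only in a field's numeric type (illustrative).
    static final Schema INT_REC = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"R\",\"fields\":[{\"name\":\"f\",\"type\":\"int\"}]}");
    static final Schema LONG_REC = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"R\",\"fields\":[{\"name\":\"f\",\"type\":\"long\"}]}");

    public static void main(String[] args) {
        // Correct order: reader first, writer second.
        // A long reader can read data written with an int schema.
        SchemaCompatibilityType ok = SchemaCompatibility
            .checkReaderWriterCompatibility(LONG_REC, INT_REC).getType();

        // Swapped order: an int reader cannot read long data, so passing
        // the arguments the wrong way round flips the answer.
        SchemaCompatibilityType swapped = SchemaCompatibility
            .checkReaderWriterCompatibility(INT_REC, LONG_REC).getType();

        System.out.println(ok);      // COMPATIBLE
        System.out.println(swapped); // INCOMPATIBLE
    }
}
```

       This is why every call site, not just the one in this hunk, needs to be audited for which schema is passed as the reader and which as the writer.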




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


Reply via email to