the-other-tim-brown commented on code in PR #13097:
URL: https://github.com/apache/hudi/pull/13097#discussion_r2031957652


##########
hudi-common/src/main/java/org/apache/hudi/metadata/HoodieTableMetadataUtil.java:
##########
@@ -1907,6 +1908,19 @@ private static Double castToDouble(Object val) {
 
   public static boolean isColumnTypeSupported(Schema schema, Option<HoodieRecordType> recordType) {
     Schema schemaToCheck = resolveNullableSchema(schema);
+    // Check for precision and scale if the schema has a logical decimal type.
+    LogicalType logicalType = schemaToCheck.getLogicalType();
+    if (logicalType != null && logicalType instanceof LogicalTypes.Decimal) {
+      LogicalTypes.Decimal decimalType = (LogicalTypes.Decimal) logicalType;
+      // The maximum allowed precision and scale as per the payload schema. See DecimalWrapper in HoodieMetadata.avsc:
+      // https://github.com/apache/hudi/blob/45dedd819e56e521148bde51a3dfa4e472ea70cd/hudi-common/src/main/avro/HoodieMetadata.avsc#L247
+      final int maxPrecision = 30;
+      final int maxScale = 15;
+      if (decimalType.getPrecision() > maxPrecision || decimalType.getScale() > maxScale) {

Review Comment:
   The logic here is not correct. You need `decimalType.getPrecision() + (maxScale - decimalType.getScale()) > maxPrecision || decimalType.getScale() > maxScale` so you account for the change in precision when the scale is rescaled up to the maximum.
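To illustrate the reviewer's point: storing a decimal in the fixed `decimal(30, 15)` wrapper schema means the value is first rescaled to scale 15, which consumes `maxScale - scale` extra digits of precision. A minimal sketch of the corrected check (the class and method names here are hypothetical, not from the PR):

```java
// Hedged sketch of the corrected supportability check, assuming the
// DecimalWrapper limits of precision 30 / scale 15 from HoodieMetadata.avsc.
public class DecimalSupportCheck {
  static final int MAX_PRECISION = 30;
  static final int MAX_SCALE = 15;

  // A decimal(precision, scale) is storable only if, after rescaling to
  // MAX_SCALE, the total digit count still fits within MAX_PRECISION.
  // Rescaling up to MAX_SCALE adds (MAX_SCALE - scale) fractional digits,
  // so the effective precision becomes precision + (MAX_SCALE - scale).
  static boolean isDecimalSupported(int precision, int scale) {
    return scale <= MAX_SCALE
        && precision + (MAX_SCALE - scale) <= MAX_PRECISION;
  }
}
```

For example, `decimal(25, 5)` passes the original check (25 <= 30 and 5 <= 15) but fails the corrected one, since rescaling to scale 15 needs 25 + 10 = 35 digits, exceeding the 30-digit cap.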


