cloud-fan commented on a change in pull request #31319:
URL: https://github.com/apache/spark/pull/31319#discussion_r564226244
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala
##########
@@ -268,18 +268,27 @@ private[parquet] class ParquetRowConverter(
}
// For INT32 backed decimals
-    case t: DecimalType if parquetType.asPrimitiveType().getPrimitiveTypeName == INT32 =>
-      new ParquetIntDictionaryAwareDecimalConverter(t.precision, t.scale, updater)
+    case _: DecimalType if parquetType.asPrimitiveType().getPrimitiveTypeName == INT32 =>
+      val metadata = parquetType.asPrimitiveType().getDecimalMetadata
+      val precision = metadata.getPrecision()
+      val scale = metadata.getScale()
+      new ParquetIntDictionaryAwareDecimalConverter(precision, scale, updater)
Review comment:
Same question as the Avro PR: how do we handle the precision/scale inconsistency between the decimal value and the catalyst decimal type?
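
For context, a minimal sketch (not part of this PR) of one way such a mismatch could be reconciled, assuming Spark's Decimal(unscaled, precision, scale) factory and Decimal.changePrecision; the helper name toCatalystDecimal is hypothetical:

    import org.apache.spark.sql.types.{Decimal, DecimalType}

    // Hypothetical helper (not from this PR): build the decimal with the
    // precision/scale declared in the Parquet file, then try to fit it into
    // the catalyst DecimalType. changePrecision rescales the value in place
    // and returns false if it no longer fits the target precision.
    def toCatalystDecimal(
        unscaled: Long,
        filePrecision: Int,
        fileScale: Int,
        catalystType: DecimalType): Option[Decimal] = {
      val d = Decimal(unscaled, filePrecision, fileScale)
      if (d.changePrecision(catalystType.precision, catalystType.scale)) Some(d) else None
    }

Whether a None here should surface as a null value or an error is exactly the policy question raised above.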