johanl-db commented on code in PR #44803:
URL: https://github.com/apache/spark/pull/44803#discussion_r1462941494
##########
sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/VectorizedColumnReader.java:
##########
@@ -151,18 +151,17 @@ private boolean isLazyDecodingSupported(
// rebasing.
switch (typeName) {
case INT32: {
-        boolean isDate = logicalTypeAnnotation instanceof DateLogicalTypeAnnotation;
-        boolean isDecimal = logicalTypeAnnotation instanceof DecimalLogicalTypeAnnotation;
+        boolean isDecimal = sparkType instanceof DecimalType;
         boolean needsUpcast = sparkType == LongType || sparkType == DoubleType ||
-          (isDate && sparkType == TimestampNTZType) ||
Review Comment:
This was redundant, since reading an INT32 as TimestampNTZType necessarily
requires converting the value. The fact that this only happens for Parquet
dates isn't really relevant here, and with the current change this would be the
only case where we look at the Parquet type annotation, which is a bit confusing.
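To illustrate the point above, here is a hedged, self-contained sketch (not the actual Spark code): after the change, whether an INT32 column needs an upcast is decided from the Spark-side type alone, without consulting the Parquet logical type annotation. The `SparkType` enum and `needsUpcast` method below are hypothetical stand-ins for Spark's `DataType` hierarchy and the condition in `VectorizedColumnReader`.

```java
// Hypothetical sketch of the simplified INT32 upcast check.
// SparkType is a stand-in enum, not Spark's real DataType classes.
public class UpcastCheckSketch {
  public enum SparkType { INT, LONG, DOUBLE, DECIMAL, TIMESTAMP_NTZ }

  // Decided purely from the Spark-side type: INT32 read as a wider or
  // differently-represented type always needs conversion, so there is no
  // need to also inspect the Parquet logical type annotation.
  public static boolean needsUpcast(SparkType sparkType) {
    return sparkType == SparkType.LONG
        || sparkType == SparkType.DOUBLE
        || sparkType == SparkType.TIMESTAMP_NTZ
        // stand-in for `sparkType instanceof DecimalType` in the diff
        || sparkType == SparkType.DECIMAL;
  }

  public static void main(String[] args) {
    System.out.println(needsUpcast(SparkType.INT));           // false
    System.out.println(needsUpcast(SparkType.TIMESTAMP_NTZ)); // true
  }
}
```

This mirrors the reviewer's reasoning: since INT32 read as TimestampNTZType always converts, the old `isDate` guard on the Parquet annotation added no information.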
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]