cloud-fan commented on a change in pull request #28572:
URL: https://github.com/apache/spark/pull/28572#discussion_r427276916
##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
##########
@@ -3071,15 +3071,31 @@ class Analyzer(
       case p => p transformExpressions {
         case u @ UpCast(child, _, _) if !child.resolved => u
-        case UpCast(child, dt: AtomicType, _)
+        case UpCast(_, target, _) if target != DecimalType && !target.isInstanceOf[DataType] =>
+          throw new AnalysisException(
+            s"UpCast only support DecimalType as AbstractDataType yet, but got: $target")
+
+        case UpCast(child, target, walkedTypePath) if target == DecimalType
+          && child.dataType.isInstanceOf[DecimalType] =>
+          assert(walkedTypePath.nonEmpty,
+            "object DecimalType should only be used inside ExpressionEncoder")
+          // SPARK-31750: for the case where data type is explicitly known, e.g, spark.read
+          // .parquet("/tmp/file").as[BigDecimal], we will have UpCast(child, Decimal(38, 18)),
+          // where child's data type can be, e.g. Decimal(38, 0). In this kind of case, we
+          // actually should not do cast otherwise it will cause precision lost. Thus, we should
+          // eliminate the UpCast here to avoid precision lost.
+          child
+
+        case u @ UpCast(child, _, _)
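
The SPARK-31750 comment in the hunk above argues that up-casting a child that is already a DecimalType (e.g. Decimal(38, 0)) to the generic Decimal(38, 18) target can lose precision, so the UpCast is simply dropped. Below is a minimal sketch of the underlying arithmetic, using plain Scala `BigDecimal` rather than Spark's internal `Decimal` class, with a made-up 29-digit value chosen only for illustration:

```scala
object PrecisionLossSketch extends App {
  // A hypothetical value that fits Decimal(38, 0): 29 integral digits, no fraction.
  val v = BigDecimal("12345678901234567890123456789")
  println(v.precision)         // 29, within the 38-digit budget at scale 0

  // Representing the same value at scale 18 needs 29 + 18 = 47 significant digits,
  // which exceeds the maximum precision of 38, hence the potential precision loss
  // that eliminating the UpCast avoids.
  val atScale18 = v.setScale(18)
  println(atScale18.precision) // 47
}
```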
Review comment:
nit: `case UpCast(child, target: AtomicType, _) if ...`
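
To make the suggested shape concrete, here is a small, self-contained sketch of the same pattern-matching style: binding the second field with a type pattern so the body gets the narrowed type. The classes below are hypothetical stand-ins, not Spark's real `UpCast`/`AtomicType`, and the guard is only illustrative of the `if ...` left elided above:

```scala
object TypePatternSketch extends App {
  // Hypothetical stand-ins for the real catalyst types.
  sealed trait AbstractTypeLike
  case object GenericDecimalLike extends AbstractTypeLike
  final case class AtomicLike(name: String) extends AbstractTypeLike

  final case class UpCastLike(child: String, target: AbstractTypeLike)

  def describe(e: UpCastLike): String = e match {
    // Same shape as `case UpCast(child, target: AtomicType, _) if ...`:
    // the type pattern both tests and binds `target` at the narrower type,
    // so the body needs no isInstanceOf/asInstanceOf.
    case UpCastLike(child, target: AtomicLike) if child.nonEmpty =>
      s"up-cast $child to atomic target ${target.name}"
    case UpCastLike(child, GenericDecimalLike) =>
      s"drop the up-cast of $child to the generic decimal target"
    case other =>
      s"unhandled: $other"
  }

  println(describe(UpCastLike("col", AtomicLike("int"))))  // atomic target branch
  println(describe(UpCastLike("col", GenericDecimalLike))) // generic decimal branch
}
```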