harshmotw-db commented on code in PR #47808:
URL: https://github.com/apache/spark/pull/47808#discussion_r1730248080
##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/IntervalUtils.scala:
##########
@@ -902,7 +902,7 @@ object IntervalUtils extends SparkIntervalUtils {
case DAY => Decimal(v / MICROS_PER_DAY)
case HOUR => Decimal(v / MICROS_PER_HOUR)
case MINUTE => Decimal(v / MICROS_PER_MINUTE)
- case SECOND => Decimal(v, Decimal.MAX_LONG_DIGITS, 6)
+ case SECOND => Decimal(v, Decimal.MAX_LONG_DIGITS + 1, 6)
Review Comment:
This function is called [when casting from a DT interval to
decimal](https://github.com/apache/spark/blob/0c18fc072b05671bc9c74a43de49b563a1ef7907/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala#L993-L998).
The cast first converts the interval to a decimal with an intermediate
precision and scale, and then adjusts those parameters to the target precision
and scale. However, in the specific case where the end field is SECOND and the
interval is large (up to 19 digits of microseconds, the worst case being
`Long.MaxValue`), the value does not fit into a decimal of precision
`Decimal.MAX_LONG_DIGITS`, which is only 18; hence the bump to
`Decimal.MAX_LONG_DIGITS + 1`.
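Below is a minimal, self-contained sketch of why the extra digit is needed (plain `java.math.BigDecimal`, no Spark dependency; the object name `WorstCaseDemo` is just for illustration): the worst-case microsecond count, `Long.MaxValue`, has 19 significant digits, and attaching a scale of 6 does not reduce that count.

```scala
import java.math.{BigDecimal => JBigDecimal}

object WorstCaseDemo extends App {
  val MAX_LONG_DIGITS = 18                // value of Decimal.MAX_LONG_DIGITS in Spark
  val v = Long.MaxValue                   // worst-case microsecond count of a DT interval
  val seconds = JBigDecimal.valueOf(v, 6) // unscaled value v with scale 6 (micros -> seconds)
  println(seconds)                        // 9223372036854.775807
  println(seconds.precision)              // 19: one digit more than MAX_LONG_DIGITS
  assert(seconds.precision == MAX_LONG_DIGITS + 1)
}
```

So a precision of `Decimal.MAX_LONG_DIGITS + 1 = 19` with scale 6 is the tightest decimal type that can represent every `Long` microsecond value as seconds.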