MaxGekk commented on a change in pull request #25871: [SPARK-29190][SQL]
Optimize `extract`/`date_part` for the milliseconds `field`
URL: https://github.com/apache/spark/pull/25871#discussion_r326866973
##########
File path:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
##########
@@ -465,8 +465,7 @@ object DateTimeUtils {
* is expressed in microseconds since the epoch.
*/
def getMilliseconds(timestamp: SQLTimestamp, timeZone: TimeZone): Decimal = {
- val micros = Decimal(getMicroseconds(timestamp, timeZone))
- (micros / Decimal(MICROS_PER_MILLIS)).toPrecision(8, 3)
+ Decimal(getMicroseconds(timestamp, timeZone), 8, 3)
Review comment:
I think so: `getMicroseconds` returns an int in the range [0, 60000000), and `Decimal(..., 8, 3)` is always valid for such values, since even the exclusive upper bound fits in 8 digits of precision with 3 digits of scale. For example:
```scala
scala> Decimal(60000000, 8, 3)
res1: org.apache.spark.sql.types.Decimal = 60000.000
```
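To make the equivalence concrete, here is a minimal, self-contained sketch (not part of the PR) that checks the new single-constructor path against the old divide-then-round path over a few values in the range. It assumes Spark's `Decimal` API (`org.apache.spark.sql.types.Decimal`) from this era is on the classpath; the object name `MillisFieldCheck` is hypothetical, and the local `MICROS_PER_MILLIS` mirrors the constant in `DateTimeUtils`:
```scala
import org.apache.spark.sql.types.Decimal

object MillisFieldCheck {
  // Local copy mirroring DateTimeUtils.MICROS_PER_MILLIS, to keep the sketch self-contained.
  private val MICROS_PER_MILLIS = 1000L

  def main(args: Array[String]): Unit = {
    // Representative values from [0, 60000000), plus the exclusive upper bound itself.
    for (micros <- Seq(0, 1, 999, 1000, 123456, 59999999, 60000000)) {
      // Old path: divide microseconds by 1000, then narrow the result to DECIMAL(8, 3).
      val oldPath = Decimal(micros) / Decimal(MICROS_PER_MILLIS)
      assert(oldPath.changePrecision(8, 3), s"overflow narrowing $micros to (8, 3)")
      // New path: reinterpret the int directly as the unscaled value of a DECIMAL(8, 3).
      val newPath = Decimal(micros, 8, 3)
      assert(oldPath == newPath, s"mismatch for $micros: $oldPath vs $newPath")
    }
    println("old and new paths agree on all sampled values")
  }
}
```
Beyond correctness, the new form skips the per-row `Decimal` division and rounding entirely, which appears to be the point of the optimization.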