MaxGekk commented on a change in pull request #25336: [SPARK-28017][SQL] Support additional levels of truncations by DATE_TRUNC/TRUNC
URL: https://github.com/apache/spark/pull/25336#discussion_r310473373
 
 

 ##########
 File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
 ##########
 @@ -1545,14 +1558,14 @@ case class TruncTimestamp(
   def this(format: Expression, timestamp: Expression) = this(format, timestamp, None)
 
   override def eval(input: InternalRow): Any = {
 -    evalHelper(input, maxLevel = DateTimeUtils.TRUNC_TO_SECOND) { (t: Any, level: Int) =>
 +    evalHelper(input, maxLevel = DateTimeUtils.TRUNC_TO_MICROSECOND) { (t: Any, level: Int) =>
 
 Review comment:
   The `evalHelper()` and `codeGenHelper()` methods check that the id of the truncation level is in the valid range **[id <= 17 and id != -1]**: 
https://github.com/apache/spark/blob/caa23e3efd3f4422e8f599b30bec3ef1fb33c03c/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala#L1406
   
   They don't check the limits semantically, though. We could introduce named constants for the limits so that future readers aren't confused by the raw literals.
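   As a minimal sketch of the suggestion (all names and bound values here are hypothetical, not the actual `DateTimeUtils` ids), the range check could read semantically instead of against magic numbers:

```scala
// Hypothetical sketch: name the truncation-level bounds instead of
// comparing against raw literals like 17 and -1 in the helpers.
object TruncLevels {
  // Assumed ids; the real values live in DateTimeUtils.
  val TRUNC_INVALID: Int = -1
  val MIN_LEVEL: Int = 0   // assumed lowest valid truncation level id
  val MAX_LEVEL: Int = 17  // assumed highest valid truncation level id

  // A guard like this makes the intent of the bounds explicit.
  def isValidLevel(level: Int): Boolean =
    level != TRUNC_INVALID && level >= MIN_LEVEL && level <= MAX_LEVEL
}
```

   With such constants, `evalHelper()`/`codeGenHelper()` could compare against `MAX_LEVEL` rather than a bare `17`.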

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]