snuyanzin commented on code in PR #17677:
URL: https://github.com/apache/flink/pull/17677#discussion_r864566895


##########
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/calls/FloorCeilCallGen.scala:
##########
@@ -105,13 +105,27 @@ class FloorCeilCallGen(
             case _ =>
               operand.resultType.getTypeRoot match {
                 case LogicalTypeRoot.TIMESTAMP_WITHOUT_TIME_ZONE =>
-                  val longTerm = s"${terms.head}.getMillisecond()"
-                  s"""
-                     |$TIMESTAMP_DATA.fromEpochMillis(
-                     |  ${qualifyMethod(arithmeticIntegralMethod.get)}(
-                     |    $longTerm,
-                     |    (long) ${unit.startUnit.multiplier.intValue()}))
+                  val millis = s"${terms.head}.getMillisecond()"
+
+                  unit match {
+                    case MILLISECOND =>
+                      val nanos =
+                        s"${qualifyMethod(arithmeticIntegralMethod.get)}(${terms.head}.getNanoOfMillisecond(), " +

Review Comment:
   The problem is that `org.apache.flink.table.data.TimestampData` stores `millis` and `nanoOfMillis` in two separate fields: https://github.com/apache/flink/blob/90e98ba7c858b740365fc9ffdf0afc7d791373f2/flink-table/flink-table-common/src/main/java/org/apache/flink/table/data/TimestampData.java#L46-L50
   This means that to get `nanos of second` you have to read both `nano of millisecond` and `millis` and then combine them, since there are no corresponding methods. A sketch of that conversion follows below.
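   For concreteness, a minimal sketch of that conversion in plain Java. The two getters are the existing `TimestampData` accessors (they appear in the diff above); the local variable names are illustrative only:
   ```java
   // TimestampData stores epoch millis and the sub-millisecond nanos in two
   // separate fields, so the nano-of-second value has to be assembled manually.
   long millis = timestampData.getMillisecond();            // epoch milliseconds
   int nanoOfMilli = timestampData.getNanoOfMillisecond();  // 0 .. 999_999

   // floorMod keeps the millis-within-second in 0 .. 999 even for
   // pre-epoch (negative) timestamps.
   long milliOfSecond = Math.floorMod(millis, 1000L);
   long nanoOfSecond = milliOfSecond * 1_000_000L + nanoOfMilli;
   ```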
   
   Another option is to extend `org.apache.flink.table.data.TimestampData` to provide a getter for `nanos of second`, and possibly the ability to be initialized from `nanos of second`; a sketch follows below.
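   A rough sketch of what such an extension could look like. The method names below are hypothetical, not existing Flink API; only `fromEpochMillis(long, int)` is an existing factory:
   ```java
   // Hypothetical additions to org.apache.flink.table.data.TimestampData
   // (names are illustrative, not existing Flink API).

   /** Nano-of-second component, assembled from the two stored fields. */
   public int getNanoOfSecond() {
       return (int) (Math.floorMod(millisecond, 1000L) * 1_000_000L + nanoOfMillisecond);
   }

   /** Creates an instance from an epoch second plus a nano-of-second part. */
   public static TimestampData fromEpochSecond(long epochSecond, int nanoOfSecond) {
       long millis = epochSecond * 1000L + nanoOfSecond / 1_000_000;
       int nanoOfMilli = nanoOfSecond % 1_000_000;
       return fromEpochMillis(millis, nanoOfMilli);
   }
   ```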


