lzqdename commented on issue #7180: [FLINK-11010] [TABLE] Flink SQL timestamp 
is inconsistent with currentProcessingTime()
URL: https://github.com/apache/flink/pull/7180#issuecomment-445667532
 
 
   When Flink's `currentProcessingTime()` is called directly, the result is correct.
   In Flink SQL, however, the value is transformed several times,
   for example Long -> Timestamp -> Long -> Timestamp.
   
   When converting Long -> Timestamp, the following code is used **[in SqlFunctions]**:

       public static java.sql.Time internalToTime(int v) {
           return new java.sql.Time(v - LOCAL_TZ.getOffset(v));
       }
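   The effect of that shift can be sketched as follows (a minimal standalone example; `internalToMillis`, the class name, and the `GMT+8` zone are my own illustrative choices, not Flink or Calcite code):

   ```java
   import java.util.TimeZone;

   public class InternalShiftSketch {
       // Illustrative mirror of the quoted conversion: subtracting the zone
       // offset makes the resulting java.sql.Time/Timestamp render the local
       // wall-clock time as if it were UTC.
       static long internalToMillis(long v, TimeZone tz) {
           return v - tz.getOffset(v);
       }

       public static void main(String[] args) {
           TimeZone tz = TimeZone.getTimeZone("GMT+8"); // fixed offset, no DST
           long v = 1_544_000_000_000L;                 // some epoch millis
           long shifted = internalToMillis(v, tz);
           System.out.println(v - shifted);             // 28800000 = 8h in millis
       }
   }
   ```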
   
   When converting Timestamp -> Long, the following code is used **[in SqlFunctions]**:

       // mainly intended for java.sql.Timestamp but works for other dates also
       public static long toLong(java.util.Date v, TimeZone timeZone) {
           final long time = v.getTime();
           return time + timeZone.getOffset(time);
       }
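   Note that `toLong` is the inverse shift: it adds the offset back, so for a fixed-offset zone the two conversions cancel exactly and the round trip is lossless. A sketch (helper and class names are my own, not the real SqlFunctions signatures):

   ```java
   import java.util.TimeZone;

   public class RoundTripSketch {
       // Illustrative mirrors of the two quoted SqlFunctions conversions.
       static long internalToMillis(long v, TimeZone tz) {
           return v - tz.getOffset(v);       // Long -> Timestamp direction
       }

       static long toLong(long time, TimeZone tz) {
           return time + tz.getOffset(time); // Timestamp -> Long direction
       }

       public static void main(String[] args) {
           TimeZone tz = TimeZone.getTimeZone("GMT+8");
           long v = 1_544_000_000_000L;
           // Both sides apply the same offset, so the round trip restores v
           // (for a fixed-offset zone; near DST changes getOffset can differ).
           System.out.println(toLong(internalToMillis(v, tz), tz) == v); // true
       }
   }
   ```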
   
   
   In the final write step, the following code is used:

       protected long _timestamp(Date value) {
           return value == null ? 0L : **value.getTime()**;
       }

   The position is:
   org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ser.std.DateSerializer._timestamp (DateSerializer.java:41)
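   Chaining the quoted pieces shows where the values diverge: the offset is subtracted on the way into the Timestamp, but the final write uses the raw `getTime()`, so the serialized value ends up shifted by the whole zone offset. A simplified stand-in for the chain (my own method and class names, fixed `GMT+8` zone for illustration):

   ```java
   import java.sql.Timestamp;
   import java.util.TimeZone;

   public class InconsistencySketch {
       // Simplified stand-in for the full chain: SqlFunctions subtracts the
       // zone offset on the way in, but DateSerializer._timestamp writes the
       // raw getTime() value without adding it back.
       static long writtenValue(long processingTime, TimeZone tz) {
           Timestamp ts = new Timestamp(processingTime - tz.getOffset(processingTime));
           return ts.getTime(); // final write step: no offset correction
       }

       public static void main(String[] args) {
           TimeZone tz = TimeZone.getTimeZone("GMT+8");
           long processingTime = 1_544_000_000_000L;
           long written = writtenValue(processingTime, tz);
           // The serialized value is off by the zone offset (8 hours here).
           System.out.println(processingTime - written); // 28800000
       }
   }
   ```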
   
   Now, let me think: we use two different classes to handle the transformation
   between Long and Timestamp. The first class, SqlFunctions, takes the TimeZone
   offset into account, but the second class, DateSerializer, does not.

   The handling of the time transformation is not consistent!
   
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services