JingsongLi commented on a change in pull request #10426: [FLINK-15062][orc] Orc reader should use java.sql.Timestamp to read for respecting time zone
URL: https://github.com/apache/flink/pull/10426#discussion_r354134181
 
 

 ##########
 File path: flink-formats/flink-orc/src/main/java/org/apache/flink/orc/vector/OrcTimestampColumnVector.java
 ##########
 @@ -39,8 +41,8 @@ public OrcTimestampColumnVector(TimestampColumnVector vector) {
        @Override
        public SqlTimestamp getTimestamp(int i, int precision) {
                int index = vector.isRepeating ? 0 : i;
-               return SqlTimestamp.fromEpochMillis(
-                               vector.time[index],
-                               SqlTimestamp.isCompact(precision) ? 0 : vector.nanos[index] % 1_000_000);
+               Timestamp timestamp = new Timestamp(vector.time[index]);
 
 Review comment:
   The original version directly used the underlying `long` and `int` values to construct the `SqlTimestamp`.
   But Hive ORC uses `java.sql.Timestamp` to construct its underlying data, so the conversion must go through that type. You can think of the two approaches like this:
   ```java
   java.sql.Timestamp orcTimestamp;
   // Option 1: build from the zone-independent epoch fields
   SqlTimestamp.fromEpochMillis(orcTimestamp.getTime(), orcTimestamp.getNano());
   // vs.
   // Option 2: build from the string form, which is rendered in the local time zone
   SqlTimestamp.fromString(orcTimestamp.toString());
   ```
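   To make the time-zone dependence concrete, here is a small standalone sketch (not Flink code; the class name is made up for illustration). It shows that `Timestamp.getTime()` returns zone-independent epoch millis, while `Timestamp.toString()` renders the same instant in the JVM's default time zone:
   ```java
   import java.sql.Timestamp;
   import java.util.TimeZone;

   public class TimestampZoneDemo {
       public static void main(String[] args) {
           long epochMillis = 0L; // the instant 1970-01-01T00:00:00Z

           // Same epoch value, rendered under two different default zones.
           TimeZone.setDefault(TimeZone.getTimeZone("UTC"));
           String inUtc = new Timestamp(epochMillis).toString();

           TimeZone.setDefault(TimeZone.getTimeZone("GMT+8"));
           String inGmt8 = new Timestamp(epochMillis).toString();

           System.out.println(inUtc);   // 1970-01-01 00:00:00.0
           System.out.println(inGmt8);  // 1970-01-01 08:00:00.0

           // getTime() is unaffected by the default zone.
           System.out.println(new Timestamp(epochMillis).getTime()); // 0
       }
   }
   ```
   So a reader built on `getTime()`/`getNano()` and one built on `toString()` can produce different wall-clock results depending on the session time zone, which is exactly the discrepancy this change is about.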
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
