jonvex commented on code in PR #14076:
URL: https://github.com/apache/hudi/pull/14076#discussion_r2440743500
##########
hudi-utilities/src/main/java/org/apache/hudi/utilities/sources/helpers/MercifulJsonToRowConverter.java:
##########
@@ -275,8 +278,7 @@ public Pair<Boolean, Object> handleStringValue(String value) {
         },
         value, schema);
if (result.getLeft()) {
- // timestamp in spark sql doesn't support precision to the micro.
- return Pair.of(true, new Timestamp(((Long) result.getRight()) / 1000));
+ return Pair.of(true, DateTimeUtils.microsToInstant((Long) result.getRight()));
Review Comment:
That comment is wrong: Spark uses microseconds as its in-memory representation for timestamps. Maybe the confusion is that `Timestamp` here is `java.sql.Timestamp`? `java.sql.Timestamp` has no constructor that takes microseconds, but it does store precision down to nanoseconds, so this util method constructs the value without losing precision.
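To illustrate the precision difference, here is a minimal sketch (hypothetical demo class, not part of the PR). Dividing epoch micros by 1000 truncates to milliseconds before `java.sql.Timestamp` ever sees the value, while splitting micros into seconds plus nanos, as `DateTimeUtils.microsToInstant` effectively does, keeps the full microsecond component:

```java
import java.sql.Timestamp;
import java.time.Instant;

public class MicrosPrecisionDemo {
  public static void main(String[] args) {
    // Epoch micros with a sub-millisecond component (the trailing 456 micros).
    long micros = 1_700_000_000_123_456L;

    // Old approach: integer division to millis drops the 456 micros.
    Timestamp lossy = new Timestamp(micros / 1000);

    // Seconds/nanos split, equivalent in effect to DateTimeUtils.microsToInstant.
    Instant exact = Instant.ofEpochSecond(
        Math.floorDiv(micros, 1_000_000L),
        Math.floorMod(micros, 1_000_000L) * 1_000L);

    // java.sql.Timestamp stores nanoseconds internally, so no precision is lost here.
    Timestamp precise = Timestamp.from(exact);

    System.out.println(lossy.getNanos());   // 123000000 — micros truncated
    System.out.println(precise.getNanos()); // 123456000 — micros preserved
  }
}
```

The `floorDiv`/`floorMod` pair also handles negative (pre-1970) micros correctly, which plain `/` and `%` would not.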
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]