saikocat commented on a change in pull request #30902:
URL: https://github.com/apache/spark/pull/30902#discussion_r569201154



##########
File path: 
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
##########
@@ -408,6 +421,23 @@ object JdbcUtils extends Logging {
       (rs: ResultSet, row: InternalRow, pos: Int) =>
         row.setFloat(pos, rs.getFloat(pos + 1))
 
+
+    // SPARK-33888 - sql TIME type represents as physical int in millis
+    // Represents a time of day, with no reference to a particular calendar,
+    // time zone or date, with a precision of one millisecond.

Review comment:
       Hmm, I agree with you on the user-experience part (the `hour` function and 
so on). I think it would be hard (near impossible) to introduce a new data type 
(`Time`, displayed as `HH:MM:SS.sss`) plus another function to convert an int 
with no date portion into a Timestamp (most existing conversions parse strings), 
since that would have to go through multiple levels of approval and testing.
   
   Another reason I went with Int is that using TimestampType breaks 
compatibility with the Avro logical type / Spark type converter 
(https://github.com/apache/spark/blob/master/external/avro/src/main/scala/org/apache/spark/sql/avro/SchemaConverters.scala).
   
   I'm not sure whether a `time_to_timestamp()` helper function would be a 
better compromise, or whether we should just revert the two MRs?
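
   To make the idea concrete: since SPARK-33888 stores a SQL TIME value as an 
Int of milliseconds since midnight, a helper along the lines of the proposed 
`time_to_timestamp()` could anchor that time-of-day on the epoch date. This is 
only a sketch of the conversion logic using plain `java.time`, not an existing 
Spark API; the function name comes from the discussion above and is hypothetical.

```scala
import java.sql.Timestamp
import java.time.{LocalDate, LocalTime}

object TimeConversionSketch {
  // Hypothetical helper (name from the review discussion, not a Spark API):
  // interpret an Int of milliseconds since midnight as a time-of-day and
  // anchor it on the epoch date (1970-01-01) to produce a java.sql.Timestamp.
  def timeToTimestamp(millisOfDay: Int): Timestamp =
    Timestamp.valueOf(
      LocalDate.EPOCH.atTime(LocalTime.ofNanoOfDay(millisOfDay * 1000000L)))
}
```

   For example, `45045500` (i.e. 12:30:45.500) would map to the timestamp 
`1970-01-01 12:30:45.5`. Whether anchoring on the epoch date is the right 
semantics is exactly the open question here.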




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


