singhpk234 commented on code in PR #37025:
URL: https://github.com/apache/spark/pull/37025#discussion_r910579873
##########
sql/core/src/test/scala/org/apache/spark/sql/connector/SupportsCatalogOptionsSuite.scala:
##########
@@ -322,6 +323,12 @@ class SupportsCatalogOptionsSuite extends QueryTest with
SharedSparkSession with
timestamp = Some("2019-01-29 00:37:58")), df3.toDF())
checkAnswer(load("t", Some(catalogName), version = None,
timestamp = Some("2021-01-29 00:37:58")), df4.toDF())
+
+ // load with timestamp in number format
+ checkAnswer(load("t", Some(catalogName), version = None,
+ timestamp = Some(MICROSECONDS.toSeconds(ts1).toString)), df3.toDF())
Review Comment:
My thought was that since passing seconds as the timestamp works in Spark SQL, it should work in the DataFrame path as well.
> I don't think it was designed to support microsecond as the timestamp
string
I now see that the spark-sql AST builder unintentionally creates a literal expression of LongType in this scenario, which fits into the code flow path and works when we create a TimeTravelSpec and trigger the casting. Whereas the DataFrame code flow by design always forced us to return it in string format (implying that only a date format should be given).
Should I update the JIRA / PR description?
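For reference, the conversion used in the added test (`MICROSECONDS.toSeconds(ts1)`) can be sketched in plain Java; the `ts1` value below is a hypothetical microsecond-precision epoch value, not the one from the suite:

```java
import java.util.concurrent.TimeUnit;

public class TimestampFormats {
    public static void main(String[] args) {
        // Hypothetical microsecond-precision epoch value (not the suite's ts1).
        long ts1 = 1548722278000000L;
        // The added test converts microseconds to whole seconds before
        // passing the value as the `timestamp` load option string.
        long seconds = TimeUnit.MICROSECONDS.toSeconds(ts1);
        System.out.println(seconds); // prints 1548722278
    }
}
```

The seconds value is what the SQL path happens to accept as a LongType literal, while the DataFrame path only ever receives it as a string.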
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]