HyukjinKwon commented on a change in pull request #28593:
URL: https://github.com/apache/spark/pull/28593#discussion_r428543986
##########
File path:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
##########
@@ -2586,6 +2586,22 @@ object SQLConf {
.checkValue(_ > 0, "The timeout value must be positive")
.createWithDefault(10L)
+ val LEGACY_NUMERIC_CONVERT_TO_TIMESTAMP_ENABLE =
+ buildConf("spark.sql.legacy.numericConvertToTimestampEnable")
+ .doc("When true, allows the legacy behavior of converting numeric values to timestamps.")
+ .version("3.0.0")
+ .booleanConf
+ .createWithDefault(false)
+
+ val LEGACY_NUMERIC_CONVERT_TO_TIMESTAMP_IN_SECONDS =
+ buildConf("spark.sql.legacy.numericConvertToTimestampInSeconds")
+ .internal()
+ .doc("This config only takes effect when " +
+ "LEGACY_NUMERIC_CONVERT_TO_TIMESTAMP_ENABLE is true. " +
+ "When true, the numeric value is interpreted as seconds (Spark style); " +
+ "when false, it is interpreted as milliseconds (Hive style).")
Review comment:
In practice, there is no workload that can be migrated untouched from system A to system B when A doesn't guarantee full compatibility with B, so I don't have a good suggestion for your workload.
This is not the only case that needs fixes when migrating from Hive to Spark; Spark doesn't target full Hive compatibility by design. We could consider non-invasive fixes where practical, but this one doesn't seem to qualify.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]