cloud-fan commented on a change in pull request #28593:
URL: https://github.com/apache/spark/pull/28593#discussion_r429223302
##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
##########
@@ -2586,6 +2586,22 @@ object SQLConf {
.checkValue(_ > 0, "The timeout value must be positive")
.createWithDefault(10L)
+  val LEGACY_NUMERIC_CONVERT_TO_TIMESTAMP_ENABLE =
+    buildConf("spark.sql.legacy.numericConvertToTimestampEnable")
+      .doc("When true, enables the legacy behavior of allowing numeric values to be " +
+        "cast to timestamp.")
+      .version("3.0.0")
+      .booleanConf
+      .createWithDefault(false)
+
+  val LEGACY_NUMERIC_CONVERT_TO_TIMESTAMP_IN_SECONDS =
+    buildConf("spark.sql.legacy.numericConvertToTimestampInSeconds")
+      .internal()
+      .doc("Only takes effect when LEGACY_NUMERIC_CONVERT_TO_TIMESTAMP_ENABLE is true. " +
+        "When true, the numeric value is interpreted as seconds, following the Spark " +
+        "style; when false, it is interpreted as milliseconds, following the Hive style.")
Review comment:
+1 to simply forbidding the cast from long to timestamp in Spark. Hive
compatibility is not a strong enough reason to justify the change: other people
may keep adding new behaviors for compatibility with other systems, and this
can be endless. Instead, I think it's better to forbid this non-standard cast.
With an explicit error from Spark, you can find all the places that need to
change. And you can add a Hive UDF, as @bart-samwel suggested, if you need to
fall back to Hive behavior.
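For users who would need the conversion after the cast is forbidden, a minimal
sketch of the explicit alternative, using only the long-standing
`from_unixtime` and `to_timestamp` functions. The column names and sample
values are made up for illustration:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{from_unixtime, to_timestamp}

object ExplicitNumericToTimestamp {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("explicit-numeric-to-timestamp")
      .getOrCreate()
    import spark.implicits._

    // Spark-style semantics: the numeric value is in seconds.
    val secs = Seq(1590000000L).toDF("epoch_sec")
    secs.select(to_timestamp(from_unixtime($"epoch_sec")).as("ts")).show(false)

    // Hive-style semantics: the value is in milliseconds, so scale it first.
    val millis = Seq(1590000000123L).toDF("epoch_ms")
    millis
      .select(to_timestamp(from_unixtime(($"epoch_ms" / 1000).cast("long"))).as("ts"))
      .show(false)

    spark.stop()
  }
}
```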