HyukjinKwon commented on a change in pull request #28593:
URL: https://github.com/apache/spark/pull/28593#discussion_r428428811



##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
##########
@@ -2586,6 +2586,22 @@ object SQLConf {
       .checkValue(_ > 0, "The timeout value must be positive")
       .createWithDefault(10L)
 
+  val LEGACY_NUMERIC_CONVERT_TO_TIMESTAMP_ENABLE =
+    buildConf("spark.sql.legacy.numericConvertToTimestampEnable")
+      .doc("When true, enables the legacy behavior of allowing numeric types " +
+        "to be converted to timestamp.")
+      .version("3.0.0")
+      .booleanConf
+      .createWithDefault(false)
+
+  val LEGACY_NUMERIC_CONVERT_TO_TIMESTAMP_IN_SECONDS =
+    buildConf("spark.sql.legacy.numericConvertToTimestampInSeconds")
+      .internal()
+      .doc("This config only takes effect when " +
+        "LEGACY_NUMERIC_CONVERT_TO_TIMESTAMP_ENABLE is true. " +
+        "When true, the value is interpreted as seconds, following Spark style; " +
+        "when false, the value is interpreted as milliseconds, following Hive style.")

Review comment:
       Sorry, but I still can't follow why Spark should follow the Hive style, 
even by default. Most likely the legacy users are already depending on this 
behaviour, and a few users might have had to work around it themselves. I don't 
think even `cast(ts as long)` is a standard or a widely accepted behaviour. 
-1 from me.
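To illustrate the difference under discussion: the same numeric value yields very different timestamps depending on whether it is interpreted as epoch seconds ("Spark style") or epoch milliseconds ("Hive style"). This is a minimal standalone sketch using `java.time.Instant`, not the actual Spark cast implementation; the object and method names are hypothetical.

```scala
import java.time.Instant

// Hypothetical illustration of the two interpretations debated in this PR.
// Spark style: a numeric cast to timestamp is treated as epoch SECONDS.
// Hive style:  the same numeric is treated as epoch MILLISECONDS.
object NumericToTimestampSketch {
  def asSeconds(n: Long): Instant = Instant.ofEpochSecond(n) // Spark style
  def asMillis(n: Long): Instant  = Instant.ofEpochMilli(n)  // Hive style

  def main(args: Array[String]): Unit = {
    val n = 1590000000L
    // The same input lands roughly 50 years apart under the two readings.
    println(asSeconds(n)) // 2020-05-20T18:40:00Z
    println(asMillis(n))  // 1970-01-19T09:40:00Z
  }
}
```

The gap between the two results is why silently switching the default interpretation would break users who already depend on the current seconds-based behavior.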




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
