GuoPhilipse commented on a change in pull request #28593:
URL: https://github.com/apache/spark/pull/28593#discussion_r428481765



##########
File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
##########
@@ -2586,6 +2586,22 @@ object SQLConf {
       .checkValue(_ > 0, "The timeout value must be positive")
       .createWithDefault(10L)
 
+  val LEGACY_NUMERIC_CONVERT_TO_TIMESTAMP_ENABLE =
+    buildConf("spark.sql.legacy.numericConvertToTimestampEnable")
+      .doc("When true, legacy conversion of numeric values to timestamp is allowed.")
+      .version("3.0.0")
+      .booleanConf
+      .createWithDefault(false)
+
+  val LEGACY_NUMERIC_CONVERT_TO_TIMESTAMP_IN_SECONDS =
+    buildConf("spark.sql.legacy.numericConvertToTimestampInSeconds")
+      .internal()
+      .doc("This config only takes effect when " +
+        "LEGACY_NUMERIC_CONVERT_TO_TIMESTAMP_ENABLE is true. " +
+        "When true, the numeric value is interpreted as seconds, following Spark's style; " +
+        "when false, it is interpreted as milliseconds, following Hive's style.")

Review comment:
       Hi HyukjinKwon,
   thanks for reviewing. We discussed the pain point of moving to Spark in #28568. I mean we can adopt both the compatibility flag and the added functions. To use the functions alone, users would need to modify their tasks one by one with the casting compatibility flag turned off, and unfortunately we have almost a hundred thousand tasks migrating from Hive to Spark. With the flag, we would first fail any task that contains a CAST_NUMERIC_TO_TIMESTAMP; if users really need the cast, we would suggest the three newly added functions to them. This seems like a good way to avoid the case where a task succeeds but the casting result is wrong, which is more serious. Others may have hit the same headache, so I hope this patch helps us embrace Spark better.
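   To make the incompatibility concrete, the difference between the two interpretations the flag switches between can be sketched as follows. This is a hypothetical illustration (the helper name `numericToInstant` is made up, not Spark's actual `Cast` implementation):

```scala
import java.time.Instant

object NumericTimestampSketch {
  // Spark's CAST(1500000000 AS TIMESTAMP) treats the number as *seconds*
  // since the epoch, while Hive treats the same number as *milliseconds*.
  def numericToInstant(value: Long, inSeconds: Boolean): Instant =
    if (inSeconds) Instant.ofEpochSecond(value) // Spark-style
    else Instant.ofEpochMilli(value)            // Hive-style
}

// NumericTimestampSketch.numericToInstant(1500000000L, inSeconds = true)
//   -> 2017-07-14T02:40:00Z
// NumericTimestampSketch.numericToInstant(1500000000L, inSeconds = false)
//   -> 1970-01-18T08:40:00Z
```

   The same input silently yields timestamps decades apart, which is why a job can succeed while producing wrong results.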




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
