[GitHub] spark issue #19380: [SPARK-22157] [SQL] The unix_timestamp method handles t...
Github user HyukjinKwon commented on the issue: https://github.com/apache/spark/pull/19380

I'd close this for now. Optionally, we could raise this case and discuss it on the mailing list if it is important.

---
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
Github user srowen commented on the issue: https://github.com/apache/spark/pull/19380

This itself is certainly not a bug. The type is intentional, and the answer is certainly correct given the type. You are arguing for a new function called something else, but you can also do this with a UDF.
Github user gatorsmile commented on the issue: https://github.com/apache/spark/pull/19380

Currently, we are following Hive for these built-in functions. See https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF Maybe we can wait and see whether more users have the same requests? Then, we can see whether we should introduce new functions or introduce a SQLConf.
Github user httfighter commented on the issue: https://github.com/apache/spark/pull/19380

I understand everyone's worries, but I have a few thoughts. First, the native unix_timestamp itself accepts the "yyyy-MM-dd HH:mm:ss.SSS" form of the date, but the milliseconds are lost from the result when I use it. That looks like a bug to me: it gives users the wrong results, and I think it should be fixed. Second, unix_timestamp, from_unixtime and to_unix_timestamp all have the same problem, and only these three methods are affected. I think the unix-time data type of these three methods should be defined as DoubleType, not LongType. The alternative of returning milliseconds as a long would bring more problems.
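To illustrate the difference being discussed (a minimal, Spark-free sketch in plain Python; the helper names are hypothetical, not Spark APIs): truncating to a long drops the `.SSS` part of the input, while returning a double preserves it.

```python
from datetime import datetime, timezone

FMT = "%Y-%m-%d %H:%M:%S.%f"  # Python's equivalent of "yyyy-MM-dd HH:mm:ss.SSS"

def to_unix_long(ts: str) -> int:
    """Parse and truncate to whole seconds, mirroring a LongType
    result: the millisecond part is silently lost."""
    dt = datetime.strptime(ts, FMT).replace(tzinfo=timezone.utc)
    return int(dt.timestamp())

def to_unix_double(ts: str) -> float:
    """Same parse, but keep fractional seconds, mirroring the
    DoubleType result proposed above."""
    dt = datetime.strptime(ts, FMT).replace(tzinfo=timezone.utc)
    return dt.timestamp()

print(to_unix_long("2017-10-10 10:10:20.111"))    # milliseconds dropped
print(to_unix_double("2017-10-10 10:10:20.111"))  # milliseconds kept
```

Both helpers parse the same string; only the return type differs, which is exactly the LongType-vs-DoubleType question raised in this thread.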
Github user gatorsmile commented on the issue: https://github.com/apache/spark/pull/19380

The workaround is to let users write a UDF to handle these cases.
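A sketch of that workaround (the function name `unix_timestamp_ms` is hypothetical; the parsing logic itself is plain Python, with the Spark registration shown only as a comment, assuming a PySpark session is available):

```python
from datetime import datetime, timezone

def unix_timestamp_ms(ts):
    """Hypothetical UDF body: parse a 'yyyy-MM-dd HH:mm:ss.SSS' string
    into epoch seconds as a float, keeping the millisecond part.
    Returns None on malformed or missing input, which Spark would
    surface as a SQL NULL."""
    try:
        dt = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S.%f")
        return dt.replace(tzinfo=timezone.utc).timestamp()
    except (TypeError, ValueError):
        return None

# In a Spark session this could be registered and used from SQL, e.g.:
#   from pyspark.sql.types import DoubleType
#   spark.udf.register("unix_timestamp_ms", unix_timestamp_ms, DoubleType())
#   spark.sql("SELECT unix_timestamp_ms('2017-10-10 10:10:20.111')")
```

This leaves the built-in unix_timestamp (and its LongType contract) untouched, which addresses the compatibility concerns raised elsewhere in the thread.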
Github user ouyangxiaochen commented on the issue: https://github.com/apache/spark/pull/19380

In fact, there are many scenarios that need to be accurate to milliseconds. Should we try to solve this problem together?
Github user srowen commented on the issue: https://github.com/apache/spark/pull/19380

This would break compatibility with existing Spark behavior and with other engines like Hive. This should be closed.
Github user viirya commented on the issue: https://github.com/apache/spark/pull/19380

We also have `FromUnixTime`, and it seems the unix-time data type is defined as `LongType` across all of those unix-time expressions. We shouldn't change just one expression and introduce inconsistency. As @HyukjinKwon said, we also need to avoid breaking backward compatibility. Besides, as for RDBMS support, I only found that MySQL has direct unix_timestamp support like this.
Github user ouyangxiaochen commented on the issue: https://github.com/apache/spark/pull/19380

Since RDBMSs keep the milliseconds, we should follow them. This proposal LGTM. cc @gatorsmile
Github user httfighter commented on the issue: https://github.com/apache/spark/pull/19380

In an RDBMS, the unix_timestamp method can keep the milliseconds. For example, if you execute a command like: select unix_timestamp("2017-10-10 10:10:20.111") from test; you get the result 1490667020.111. But Spark's native unix_timestamp method loses the milliseconds; we want to keep them.
Github user AmplabJenkins commented on the issue: https://github.com/apache/spark/pull/19380

Can one of the admins verify this patch?