gengliangwang edited a comment on issue #25137: [SPARK-28348][SQL] Decimal precision promotion for binary arithmetic with casted decimal type
URL: https://github.com/apache/spark/pull/25137#issuecomment-511822926

I am actually slightly -1 on this proposal. If this change is OK, then for the following case

```
scala> Seq(2147483647).toDF("c").createOrReplaceTempView("foobar")
// 2147483647 is the max Int value, so the data type of column c is Int

scala> spark.sql("select cast(c*c as long) from foobar").show()
+-----------------------+
|CAST((c * c) AS BIGINT)|
+-----------------------+
|                      1|
+-----------------------+
```

we would also need a similar new rule that rewrites the SQL statement as below (a rough sketch of such a rule follows at the end of this comment):

```
scala> spark.sql("select cast(c as long)*cast(c as long) from foobar").show()
+---------------------------------------+
|(CAST(c AS BIGINT) * CAST(c AS BIGINT))|
+---------------------------------------+
|                    4611686014132420609|
+---------------------------------------+
```

I also tried PostgreSQL:

```
create table t(i int);
insert into t values(2147483647);
select cast(i*i as bigint) from t;
```

It fails with the error `integer out of range`, so as far as I can tell there is no such optimization in PostgreSQL either.
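For illustration, here is a minimal sketch of what such a rewrite rule could look like as a Catalyst rule. The name `PushCastIntoMultiply` is hypothetical and not part of Spark; the pattern assumes the Catalyst API of the Spark 2.4/3.0 era, where `Cast` carries an optional time-zone id and `Multiply` takes just two operands:

```scala
import org.apache.spark.sql.catalyst.expressions.{Cast, Multiply}
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
import org.apache.spark.sql.catalyst.rules.Rule
import org.apache.spark.sql.types.LongType

// Hypothetical rule: push an outer CAST(... AS BIGINT) into the operands
// of a multiplication so the arithmetic itself runs in 64-bit space.
object PushCastIntoMultiply extends Rule[LogicalPlan] {
  override def apply(plan: LogicalPlan): LogicalPlan = plan transformAllExpressions {
    // CAST(a * b AS BIGINT)  =>  CAST(a AS BIGINT) * CAST(b AS BIGINT)
    case Cast(Multiply(left, right), LongType, tz) =>
      Multiply(Cast(left, LongType, tz), Cast(right, LongType, tz))
  }
}
```

Note that such a rewrite changes the observable result of the query (a correct 64-bit product instead of an overflowed Int), which is part of why adding rewrites like this case by case seems debatable to me.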
