cloud-fan commented on code in PR #36857:
URL: https://github.com/apache/spark/pull/36857#discussion_r898735214
##########
sql/core/src/test/resources/sql-tests/results/ansi/cast.sql.out:
##########
@@ -838,3 +838,71 @@ struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
 [CAST_OVERFLOW] The value INTERVAL '1000000' SECOND of the type "INTERVAL SECOND" cannot be cast to "SMALLINT" due to an overflow. Use `try_cast` to tolerate overflow and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
+
+
+-- !query
+select cast(interval '-1' year as decimal(10, 0))
+-- !query schema
+struct<CAST(INTERVAL '-1' YEAR AS DECIMAL(10,0)):decimal(10,0)>
+-- !query output
+-1
+
+
+-- !query
+select cast(interval '1.000001' second as decimal(10, 6))
+-- !query schema
+struct<CAST(INTERVAL '01.000001' SECOND AS DECIMAL(10,6)):decimal(10,6)>
+-- !query output
+1.000001
+
+
+-- !query
+select cast(interval '08:11:10.001' hour to second as decimal(10, 4))
+-- !query schema
+struct<CAST(INTERVAL '08:11:10.001' HOUR TO SECOND AS DECIMAL(10,4)):decimal(10,4)>
+-- !query output
+29470.0010
+
+
+-- !query
+select cast(interval '1 01:02:03.1' day to second as decimal(8, 1))
+-- !query schema
+struct<CAST(INTERVAL '1 01:02:03.1' DAY TO SECOND AS DECIMAL(8,1)):decimal(8,1)>
+-- !query output
+90123.1
+
+
+-- !query
+select cast(interval '10.123' second as decimal(4, 2))
+-- !query schema
+struct<CAST(INTERVAL '10.123' SECOND AS DECIMAL(4,2)):decimal(4,2)>
+-- !query output
+10.12
+
+
+-- !query
+select cast(interval '10.005' second as decimal(4, 2))
+-- !query schema
+struct<CAST(INTERVAL '10.005' SECOND AS DECIMAL(4,2)):decimal(4,2)>
+-- !query output
+10.01

Review Comment:
   @srielau I think the rounding behavior is correct.

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
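To see why `10.005` → `10.01` is the expected result, the rounding in the test outputs above can be reproduced with half-up rounding on plain `java.math.BigDecimal`. This is a minimal sketch, not Spark's actual cast code path; that Spark's interval-to-decimal cast behaves like `RoundingMode.HALF_UP` here is an inference from the outputs shown in the diff, not something this sketch proves.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class IntervalCastRounding {
    public static void main(String[] args) {
        // INTERVAL '10.005' SECOND stores microsecond precision: 10.005000 s.
        // Casting to DECIMAL(4, 2) reduces the scale to 2 digits; with
        // half-up rounding the trailing 5 rounds away from zero.
        BigDecimal tenPointOhOhFive = new BigDecimal("10.005000");
        System.out.println(tenPointOhOhFive.setScale(2, RoundingMode.HALF_UP)); // 10.01

        // INTERVAL '10.123' SECOND to DECIMAL(4, 2): 3 is below half, rounds down.
        BigDecimal tenPointOneTwoThree = new BigDecimal("10.123000");
        System.out.println(tenPointOneTwoThree.setScale(2, RoundingMode.HALF_UP)); // 10.12

        // INTERVAL '08:11:10.001' HOUR TO SECOND as total seconds:
        // 8*3600 + 11*60 + 10.001 = 29470.001; DECIMAL(10, 4) pads to scale 4.
        BigDecimal hourToSecond =
            new BigDecimal(8 * 3600 + 11 * 60).add(new BigDecimal("10.001"));
        System.out.println(hourToSecond.setScale(4, RoundingMode.HALF_UP)); // 29470.0010
    }
}
```

Under that assumption, each printed value matches the corresponding `-- !query output` line in the diff, which supports the view that the rounding behavior is correct rather than a truncation bug.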
