[ https://issues.apache.org/jira/browse/SPARK-19754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15890249#comment-15890249 ]

Hyukjin Kwon commented on SPARK-19754:
--------------------------------------

Thank you for cc'ing me. It returns the result below on the current master:

{code}
scala> sql("SELECT CAST(1.6 AS INT)").show()
+----------------+
|CAST(1.6 AS INT)|
+----------------+
|               1|
+----------------+
{code}

{code}
scala> sql("""SELECT CAST(get_json_object('{"a": 1.6}', '$.a') AS INT)""").show()
+---------------------------------------------+
|CAST(get_json_object({"a": 1.6}, $.a) AS INT)|
+---------------------------------------------+
|                                            1|
+---------------------------------------------+
{code}

The behaviour seems consistent, and the result is {{1}} in both cases. [~jipumarino], could you check and confirm whether I missed something?
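For reference, a minimal plain-Scala sketch (no Spark required) of the expected semantics: JVM double-to-int conversion truncates toward zero, which matches the {{1}} shown above. The string parsing here merely stands in for the value {{get_json_object}} extracts; it is an illustration, not Spark's actual code path.

```scala
object TruncationDemo {
  def main(args: Array[String]): Unit = {
    // get_json_object returns the extracted value as a string;
    // parse it to a Double to mimic the numeric cast.
    val parsed = "1.6".toDouble

    // Double-to-Int conversion on the JVM truncates toward zero.
    println(parsed.toInt)    // prints 1, not 2
    println((-1.6).toInt)    // prints -1 (toward zero, not floor)
  }
}
```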

> Casting to int from a JSON-parsed float rounds instead of truncating
> --------------------------------------------------------------------
>
>                 Key: SPARK-19754
>                 URL: https://issues.apache.org/jira/browse/SPARK-19754
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.6.3, 2.1.0
>            Reporter: Juan Pumarino
>            Priority: Minor
>
> When retrieving a float value from a JSON document and then casting it to an 
> integer, Hive simply truncates it, while Spark rounds to the nearest integer 
> when the decimal part is >= 0.5.
> In Hive, the following query returns {{1}}, whereas in a Spark shell the 
> result is {{2}}.
> {code}
> SELECT CAST(get_json_object('{"a": 1.6}', '$.a') AS INT)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
