[ 
https://issues.apache.org/jira/browse/SPARK-16646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15388897#comment-15388897
 ] 

Wenchen Fan commented on SPARK-16646:
-------------------------------------

Some other thoughts about the decimal type in Spark SQL:

1. In MySQL, the max scale is half of the max precision, so any two decimal 
types always have a legal wider type. Postgres has a special decimal type, a 
decimal type without precision, which has nearly unlimited precision, so any 
two decimal types always have a legal wider type there too. However, in Spark 
SQL, the max precision and scale are both 38, which means two decimal types 
cannot always have a legal wider type; we have to truncate.
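A minimal sketch of the arithmetic behind this (helper names are hypothetical, not Spark's actual code): the exact wider type of two decimals needs the larger scale plus the larger number of integral digits, which can exceed Spark's cap of 38 for both precision and scale.

```scala
// Sketch only, assuming Spark SQL's cap of 38 on precision and scale.
object DecimalWidening {
  val MaxPrecision = 38

  // Hypothetical helper: the exact (lossless) wider type for two decimals
  // decimal(p1, s1) and decimal(p2, s2).
  def widerType(p1: Int, s1: Int, p2: Int, s2: Int): (Int, Int) = {
    val scale     = math.max(s1, s2)           // keep the finer scale
    val intDigits = math.max(p1 - s1, p2 - s2) // keep the wider integral part
    (intDigits + scale, scale)
  }

  def main(args: Array[String]): Unit = {
    // decimal(38, 0) vs decimal(38, 38): the exact wider type needs
    // 38 integral + 38 fractional digits = precision 76 > 38, so Spark
    // cannot widen losslessly and has to truncate.
    val (p, s) = widerType(38, 0, 38, 38)
    println(s"exact wider type: decimal($p, $s) (max is $MaxPrecision)")
  }
}
```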

2. The decimal type truncation logic in Spark SQL is kind of weird, e.g. 
decimal(76, 38) is truncated to decimal(38, 38), which means we drop the 
entire integral part.
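The truncation described above can be sketched as follows (an assumed simplification, not Spark's source): precision and scale are each capped at 38 independently, so decimal(76, 38) keeps all 38 fractional digits while losing all 38 integral digits.

```scala
// Sketch only: assumed model of the truncation behavior described above.
object DecimalTruncation {
  val MaxPrecision = 38

  // Cap precision and scale at 38 independently. Because the scale is
  // preserved, the integral digits absorb the entire loss.
  def truncate(precision: Int, scale: Int): (Int, Int) =
    (math.min(precision, MaxPrecision), math.min(scale, MaxPrecision))

  def main(args: Array[String]): Unit = {
    // decimal(76, 38) -> decimal(38, 38): 0 integral digits remain.
    println(truncate(76, 38))
  }
}
```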

> LEAST doesn't accept numeric arguments with different data types
> ----------------------------------------------------------------
>
>                 Key: SPARK-16646
>                 URL: https://issues.apache.org/jira/browse/SPARK-16646
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.0
>            Reporter: Cheng Lian
>            Assignee: Hyukjin Kwon
>
> {code:sql}
> SELECT LEAST(1, 2.1);
> {code}
> {noformat}
> Error: org.apache.spark.sql.AnalysisException: cannot resolve 'least(1, 
> CAST(2.1 AS DECIMAL(2,1)))' due to data type mismatch: The expressions should 
> all have the same type, got LEAST (ArrayBuffer(IntegerType, 
> DecimalType(2,1))).; line 1 pos 7 (state=,code=0)
> {noformat}
> This query works in 1.6.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
