[ https://issues.apache.org/jira/browse/SPARK-34212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17271069#comment-17271069 ]

Dongjoon Hyun commented on SPARK-34212:
---------------------------------------

I confirmed that this exists in Apache Spark 3.1.1 RC1, too.
{code:java}
spark-sql> select * from test_decimal;
10.000
Time taken: 3.436 seconds, Fetched 1 row(s)

spark-sql> select version();
3.1.1 53fe365edb948d0e05a5ccb62f349cd9fcb4bb5d
Time taken: 0.158 seconds, Fetched 1 row(s)
{code}

> For a Parquet table, after changing the precision and scale of a decimal
> column in Hive, Spark reads an incorrect value
> --------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-34212
>                 URL: https://issues.apache.org/jira/browse/SPARK-34212
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.4.5
>            Reporter: Yahui Liu
>            Priority: Major
>
> In Hive, 
> {code}
> create table test_decimal(amt decimal(18,2)) stored as parquet; 
> insert into test_decimal select 100;
> alter table test_decimal change amt amt decimal(19,3);
> {code}
> In Spark,
> {code}
> select * from test_decimal;
> {code}
> {code}
> +--------+
> |    amt |
> +--------+
> | 10.000 |
> +--------+
> {code}
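A plausible explanation, based only on the numbers in the reproduction above (this is an assumption, not a reading of Spark's Parquet reader code): Parquet stores a decimal as its unscaled integer, so the value 100.00 written as decimal(18,2) is stored as the unscaled value 10000. If the reader applies the new metastore scale of 3 to that raw unscaled value instead of rescaling it, the result is 10.000, which is exactly what Spark returns, while the correct value is 100.000. A minimal arithmetic sketch in Scala:

{code}
// Sketch of the suspected mis-read (assumption, not taken from Spark's reader code).
// Parquet stores the DECIMAL(18,2) value 100.00 as the unscaled integer 10000.
import java.math.{BigDecimal, BigInteger}

val unscaled  = BigInteger.valueOf(10000L)         // unscaled value physically written for 100.00 (scale 2)
val asWritten = new BigDecimal(unscaled, 2)        // 100.00 -- what the file actually holds
val asMisread = new BigDecimal(unscaled, 3)        // 10.000 -- same unscaled value reinterpreted with scale 3
println(s"written=$asWritten misread=$asMisread")  // written=100.00 misread=10.000
{code}

If this is the mechanism, the file footer still declares decimal(18,2) while the metastore now declares decimal(19,3), and the mismatch is resolved by reinterpreting the unscaled value rather than rescaling it.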


