[
https://issues.apache.org/jira/browse/SPARK-34212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17271098#comment-17271098
]
Dongjoon Hyun commented on SPARK-34212:
---------------------------------------
It seems that I found the root cause. I'll make a PR soon.
> For a Parquet table, after changing the precision and scale of a decimal
> column in Hive, Spark reads an incorrect value
> --------------------------------------------------------------------------------------------------------------
>
> Key: SPARK-34212
> URL: https://issues.apache.org/jira/browse/SPARK-34212
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.4.5, 3.0.1, 3.1.1
> Reporter: Yahui Liu
> Priority: Major
> Labels: correctness
>
> In Hive,
> {code}
> create table test_decimal(amt decimal(18,2)) stored as parquet;
> insert into test_decimal select 100;
> alter table test_decimal change amt amt decimal(19,3);
> {code}
> In Spark,
> {code}
> select * from test_decimal;
> {code}
> {code}
> +--------+
> | amt |
> +--------+
> | 10.000 |
> +--------+
> {code}
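For context, a plausible mechanism behind the wrong output (an assumption based on the reported values, not confirmed in this ticket): Parquet stores a DECIMAL as an unscaled integer plus a scale, so `100` written as `decimal(18,2)` is stored as the unscaled value `10000`. If the reader applies the altered table scale (3) to the stored bytes without rescaling, that same unscaled value is reinterpreted as `10.000`, matching the output above:

```python
from decimal import Decimal

# Parquet stores DECIMAL values as an unscaled integer plus a scale.
# 100 written as decimal(18,2) is stored as the unscaled value 10000.
unscaled = 10000

# Correct read: apply the writer's scale of 2 -> 100.00
written = Decimal(unscaled).scaleb(-2)

# Misread: apply the altered table scale of 3 to the same raw
# unscaled value without rescaling -> 10.000, as reported above.
misread = Decimal(unscaled).scaleb(-3)

print(written, misread)  # 100.00 10.000
```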
--
This message was sent by Atlassian Jira
(v8.3.4#803005)