[
https://issues.apache.org/jira/browse/SPARK-34212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Yahui Liu updated SPARK-34212:
------------------------------
Summary: For parquet table, after changing the precision and scale of
decimal type in hive, spark reads incorrect value (was: For parquet table,
after changing the precision and scale of decimal type in hive, spark read
incorrect value)
> For parquet table, after changing the precision and scale of decimal type in
> hive, spark reads incorrect value
> --------------------------------------------------------------------------------------------------------------
>
> Key: SPARK-34212
> URL: https://issues.apache.org/jira/browse/SPARK-34212
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.4.5
> Reporter: Yahui Liu
> Priority: Major
>
> In Hive:
> 1) create table test_decimal(amt decimal(18,2)) stored as parquet;
> 2) insert into test_decimal select 100;
> 3) alter table test_decimal change amt amt decimal(19,3);
> In Spark:
> 1) select * from test_decimal;
> +---------+--+
> | amt |
> +---------+--+
> | 10.000 |
> +---------+--+
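> Expected result: 100.000 (the stored value 100 shown with the new scale); Spark instead returns 10.000.
>
> A minimal sketch of the likely mechanism (an assumption about the cause, not confirmed in this report): the Parquet file keeps the value written under decimal(18,2) as the unscaled integer 10000, and reinterpreting that unscaled integer with the altered scale of 3 yields 10.000 instead of rescaling it:
>
>     import java.math.{BigDecimal, BigInteger}
>
>     object DecimalRescaleSketch {
>       def main(args: Array[String]): Unit = {
>         // 100 written as decimal(18,2) is stored in Parquet as the unscaled integer 10000
>         val unscaled = new BigInteger("10000")
>         val asWritten = new BigDecimal(unscaled, 2)  // read with the original scale: 100.00
>         val asMisread = new BigDecimal(unscaled, 3)  // read with the altered scale: 10.000
>         println(s"decimal(18,2): $asWritten")
>         println(s"decimal(19,3): $asMisread")
>       }
>     }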
--
This message was sent by Atlassian Jira
(v8.3.4#803005)