[
https://issues.apache.org/jira/browse/HUDI-4321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Kate Huber updated HUDI-4321:
-----------------------------
Fix Version/s: 1.1.0
(was: 1.0.0)
> Fix Hudi to not write in Parquet legacy format
> ----------------------------------------------
>
> Key: HUDI-4321
> URL: https://issues.apache.org/jira/browse/HUDI-4321
> Project: Apache Hudi
> Issue Type: Bug
> Reporter: Alexey Kudinkin
> Priority: Major
> Fix For: 1.1.0
>
>
> Currently, Hudi has to write in the Parquet legacy format
> ("spark.sql.parquet.writeLegacyFormat") whenever the schema contains Decimals,
> because it relies on AvroParquetReader, which is unable to read Decimals in
> the non-legacy format (i.e. it can only read Decimals encoded as
> FIXED_LEN_BYTE_ARRAY, not as INT32/INT64).
> This leads to a suboptimal storage footprint: on some datasets, for example,
> it can bloat storage by 10% or more.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)