[
https://issues.apache.org/jira/browse/HIVE-20523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771127#comment-16771127
]
Antal Sinkovits commented on HIVE-20523:
----------------------------------------
[~george.pachitariu] [~ashutoshc]
I've been working on the same issue, and I don't think we should go with this
approach. The reason is that there would be a discrepancy between calculating
the raw data size with this logic and deriving it from the footer scan that
Parquet provides.
I think HIVE-20079 and HIVE-21284 would give a more consistent value on insert
and on analyze.
What do you think?
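For illustration only, here is a minimal sketch of the kind of footer scan the
comment refers to, assuming the parquet-hadoop API (ParquetFileReader,
ParquetMetadata, BlockMetaData). It is not the code from HIVE-20079 or
HIVE-21284; it just shows how a footer-based raw data size could be read.
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.metadata.BlockMetaData;
import org.apache.parquet.hadoop.metadata.ParquetMetadata;

public class FooterRawSize {

  // Sum the uncompressed byte size and row count recorded in the row-group
  // metadata of a single Parquet file's footer.
  public static long[] rawSizeAndRows(Configuration conf, Path file) throws Exception {
    ParquetMetadata footer = ParquetFileReader.readFooter(conf, file);
    long rawDataSize = 0L;
    long numRows = 0L;
    for (BlockMetaData block : footer.getBlocks()) {
      rawDataSize += block.getTotalByteSize(); // uncompressed size of the row group
      numRows += block.getRowCount();
    }
    return new long[] { rawDataSize, numRows };
  }
}
{code}
Summing these per-file values across a table or partition would give the
footer-based statistics that the computed-on-write logic would need to agree
with.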
> Improve table statistics for Parquet format
> -------------------------------------------
>
> Key: HIVE-20523
> URL: https://issues.apache.org/jira/browse/HIVE-20523
> Project: Hive
> Issue Type: Improvement
> Components: Physical Optimizer
> Reporter: George Pachitariu
> Assignee: George Pachitariu
> Priority: Minor
> Attachments: HIVE-20523.1.patch, HIVE-20523.10.patch,
> HIVE-20523.11.patch, HIVE-20523.12.patch, HIVE-20523.2.patch,
> HIVE-20523.3.patch, HIVE-20523.4.patch, HIVE-20523.5.patch,
> HIVE-20523.6.patch, HIVE-20523.7.patch, HIVE-20523.8.patch,
> HIVE-20523.9.patch, HIVE-20523.patch
>
>
> Right now, in the table basic statistics, the *raw data size* for a row with
> any data type in the Parquet format is 1. This is an underestimate when
> columns are complex data structures, like arrays.
> Having tables with an underestimated raw data size makes Hive assign fewer
> containers (mappers/reducers) to them, making the overall query slower.
> Heavy underestimation also makes Hive choose a MapJoin instead of a
> ShuffleJoin, and the MapJoin can then fail with OOM errors.
> In this patch, I compute the column data sizes more accurately, taking
> complex structures into account. I followed the Writer implementation for the
> ORC format.
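As a rough illustration of the idea in the description above, here is a
minimal, hypothetical sketch of per-column raw data size estimation that sums
element sizes for complex types instead of counting 1 per row. The type
handling and byte sizes are assumptions for illustration; this is not the code
in the attached patches or in the ORC writer.
{code:java}
import java.util.List;
import java.util.Map;

public class RawDataSizeEstimator {

  // Estimate the raw data size contributed by one column value.
  static long estimate(Object value) {
    if (value == null) {
      return 0L;
    } else if (value instanceof Integer || value instanceof Float) {
      return 4L;
    } else if (value instanceof Long || value instanceof Double) {
      return 8L;
    } else if (value instanceof String) {
      return ((String) value).length();
    } else if (value instanceof List) {
      // Arrays: sum the sizes of the elements instead of counting 1 per row.
      long size = 0L;
      for (Object element : (List<?>) value) {
        size += estimate(element);
      }
      return size;
    } else if (value instanceof Map) {
      // Maps: account for both keys and values.
      long size = 0L;
      for (Map.Entry<?, ?> e : ((Map<?, ?>) value).entrySet()) {
        size += estimate(e.getKey()) + estimate(e.getValue());
      }
      return size;
    }
    return 1L; // fallback for types not modelled in this sketch
  }
}
{code}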