[
https://issues.apache.org/jira/browse/HUDI-3891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17523216#comment-17523216
]
Alexey Kudinkin commented on HUDI-3891:
---------------------------------------
So the root cause of this discrepancy in the amount of data read comes down to 2 issues:
HUDI-3895
Missing sorting of the file-splits, resulting in suboptimal bin-packing of the splits.
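The effect of packing unsorted splits can be sketched in a few lines of Python (a conceptual model, not Hudi's or Spark's actual code; the split sizes and the 100-unit budget are made up):

```python
def pack_splits(sizes, max_bytes):
    """Greedily pack split sizes into partitions of at most max_bytes,
    walking the splits in the given order and closing a partition once
    the next split would overflow the budget."""
    partitions, current, current_size = [], [], 0
    for size in sizes:
        if current and current_size + size > max_bytes:
            partitions.append(current)
            current, current_size = [], 0
        current.append(size)
        current_size += size
    if current:
        partitions.append(current)
    return partitions

splits = [50, 1, 50, 1, 50, 1]  # hypothetical split sizes
print(len(pack_splits(splits, 100)))                        # unsorted order: 3 partitions
print(len(pack_splits(sorted(splits, reverse=True), 100)))  # sorted first:   2 partitions
```

With the same total volume, the unsorted order yields more (and more lopsided) read partitions than packing the splits sorted by size, which is the skew observed here.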
HUDI-3896
Inability to apply the `SchemaPruning` optimization rule, since it relies on
`HadoopFsRelation` being used. As a result, when reading the table as raw
Parquet, Spark is able to effectively prune all but the single field requested
from the nested struct:
!image-2022-04-16-13-50-43-916.png|width=1446,height=155!!image-2022-04-16-13-50-43-956.png|width=1823,height=165!
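What the missing `SchemaPruning` rule would buy can be sketched in Python (a conceptual model of nested-field pruning, not Spark's implementation; the dict-based schema and field names are made up):

```python
def prune_schema(schema, requested_paths):
    """Keep only the dotted leaf paths in requested_paths, so a scan
    would fetch single nested fields instead of whole structs."""
    pruned = {}
    for path in requested_paths:
        parts = path.split(".")
        src, dst = schema, pruned
        for part in parts[:-1]:
            src = src[part]
            dst = dst.setdefault(part, {})
        dst[parts[-1]] = src[parts[-1]]
    return pruned

full = {"id": "long",
        "person": {"name": "string", "age": "int",
                   "address": {"city": "string"}}}
print(prune_schema(full, ["person.name"]))  # {'person': {'name': 'string'}}
```

When the rule cannot fire, the read schema stays at `full` and the scan pays for every column of the nested struct, which matches the inflated read volume on the Hudi path.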
> Investigate Hudi vs Raw Parquet table discrepancy
> -------------------------------------------------
>
> Key: HUDI-3891
> URL: https://issues.apache.org/jira/browse/HUDI-3891
> Project: Apache Hudi
> Issue Type: Task
> Reporter: Alexey Kudinkin
> Assignee: Alexey Kudinkin
> Priority: Blocker
> Labels: pull-request-available
> Fix For: 0.11.0
>
> Attachments: image-2022-04-16-13-50-43-916.png,
> image-2022-04-16-13-50-43-956.png
>
>
> While benchmarking querying raw Parquet tables against Hudi tables, I've run
> the test against the same (Hudi) table:
> # In one query path, I'm reading it as just a raw Parquet table
> # In another, I'm reading it as a Hudi RO (read_optimized) table
> Surprisingly enough, these 2 query paths diverge in the # of files being read:
>
> _Raw Parquet_
> !https://t18029943.p.clickup-attachments.com/t18029943/f700a129-35bc-4aaa-948c-9495392653f2/Screen%20Shot%202022-04-15%20at%205.20.41%20PM.png|width=1691,height=149!
>
> _Hudi_
> !https://t18029943.p.clickup-attachments.com/t18029943/d063c689-a254-45cf-8ba5-07fc88b354b6/Screen%20Shot%202022-04-15%20at%205.21.33%20PM.png|width=1673,height=142!
--
This message was sent by Atlassian Jira
(v8.20.1#820001)