[ 
https://issues.apache.org/jira/browse/IMPALA-2400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe McDonnell reassigned IMPALA-2400:
-------------------------------------

    Assignee:     (was: Joe McDonnell)

> Unpredictable locality behavior for reading Parquet files
> ---------------------------------------------------------
>
>                 Key: IMPALA-2400
>                 URL: https://issues.apache.org/jira/browse/IMPALA-2400
>             Project: IMPALA
>          Issue Type: Bug
>          Components: Perf Investigation
>    Affects Versions: Impala 2.3.0
>            Reporter: Mostafa Mokhtar
>            Priority: Minor
>              Labels: ramp-up
>         Attachments: LocalRead.txt, RemoteRead.txt
>
>
> When running the query below repeatedly, I noticed exceptionally high 
> run-to-run variance in fetch time, even after running "invalidate metadata":
> select * from tpch_bin_flat_parquet_30.lineitem limit 10;
> * Fetched 10 row(s) in 1.08s
> WARNINGS: Read 139.48 MB of data across network that was expected to be 
> local. Block locality metadata for table 'tpch_bin_flat_parquet_30.lineitem' 
> may be stale. Consider running "INVALIDATE METADATA 
> `tpch_bin_flat_parquet_30`.`lineitem`".
> * Fetched 10 row(s) in 1.32s
> * Fetched 10 row(s) in 0.09s
> * Fetched 10 row(s) in 1.08s
> * "invalidate metadata"
> * Fetched 10 row(s) in 0.89s
> * Fetched 10 row(s) in 0.07s
> WARNINGS: Read 76.15 MB of data across network that was expected to be local. 
> Block locality metadata for table 'tpch_bin_flat_parquet_30.lineitem' may be 
> stale. Consider running "INVALIDATE METADATA 
> `tpch_bin_flat_parquet_30`.`lineitem`".
> * Fetched 10 row(s) in 1.11s
> * Fetched 10 row(s) in 0.73s
> * Fetched 10 row(s) in 0.09s
> The behavior above is specific to Parquet tables and doesn't repro against 
> text data.
> Query profiles for a local and a remote read are attached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
