[ 
https://issues.apache.org/jira/browse/ARROW-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Shadrach updated ARROW-13369:
-------------------------------------
    Description: 
Reading a single partition of a parquet file via filters is significantly 
slower than reading the partition directly.
{code:python}
import pandas as pd
import pyarrow.parquet
size = 100_000
df = pd.DataFrame({'a': [1, 2, 3] * size, 'b': [4, 5, 6] * size})
df.to_parquet('test.parquet', partition_cols=['a'])
%timeit pyarrow.parquet.read_table('test.parquet/a=1')
%timeit pyarrow.parquet.read_table('test.parquet', filters=[('a', '=', 1)])
{code}
gives the timings
{code:python}
2.57 ms ± 41.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
5.18 ms ± 148 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) {code}
Likewise, changing size to 1_000_000 in the above code gives
{code:python}
16.3 ms ± 269 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
32.7 ms ± 1.02 ms per loop (mean ± std. dev. of 7 runs, 10 loops each){code}
Part of the docs for 
[read_table|https://arrow.apache.org/docs/python/generated/pyarrow.parquet.read_table.html]
 states:

> Partition keys embedded in a nested directory structure will be exploited to 
> avoid loading files at all if they contain no matching rows.

From this, I expected the performance to be roughly the same. 
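To make the comparison reproducible end to end, the same two read paths can be sketched through the `pyarrow.dataset` API (the machinery behind the non-legacy `read_table` path), which shows where the filter is evaluated. This is a minimal sketch, assuming the `test.parquet` hive-partitioned layout produced by the snippet above; `ds.dataset`, `to_table`, and `ds.field` are real pyarrow APIs.
{code:python}
import pandas as pd
import pyarrow.dataset as ds

size = 100_000
df = pd.DataFrame({'a': [1, 2, 3] * size, 'b': [4, 5, 6] * size})
df.to_parquet('test.parquet', partition_cols=['a'])

# Direct read of one partition directory: no dataset discovery or
# filter evaluation is involved.
direct = ds.dataset('test.parquet/a=1').to_table()

# Filtered read over the whole dataset: partition discovery plus
# expression evaluation, which is where the extra time is spent.
filtered = ds.dataset('test.parquet', partitioning='hive').to_table(
    filter=ds.field('a') == 1)

assert direct.num_rows == filtered.num_rows == size
{code}
Both tables contain the same rows; only the filtered path pays the discovery and filtering overhead, so timing these two calls isolates the cost the report describes.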

  was:
Reading a single partition of a parquet file via filters is significantly 
slower than reading the partition directly.
{code:python}
import pandas as pd
size = 100_000
df = pd.DataFrame({'a': [1, 2, 3] * size, 'b': [4, 5, 6] * size})
df.to_parquet('test.parquet', partition_cols=['a'])
%timeit pd.read_parquet('test.parquet/a=1')
%timeit pd.read_parquet('test.parquet', filters=[('a', '=', 1)])
{code}
gives the timings
{code:python}
1.37 ms ± 46.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
2.41 ms ± 90.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each){code}
Likewise, changing size to 1_000_000 in the above code gives
{code:python}
4.94 ms ± 585 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
9.5 ms ± 140 µs per loop (mean ± std. dev. of 7 runs, 100 loops each){code}
Part of the docs for 
[read_table|https://arrow.apache.org/docs/python/generated/pyarrow.parquet.read_table.html]
 states:

> Partition keys embedded in a nested directory structure will be exploited to 
> avoid loading files at all if they contain no matching rows.

From this, I expected the performance to be roughly the same. 


> performance of read_table using filters on a partitioned parquet file
> ---------------------------------------------------------------------
>
>                 Key: ARROW-13369
>                 URL: https://issues.apache.org/jira/browse/ARROW-13369
>             Project: Apache Arrow
>          Issue Type: Improvement
>          Components: Python
>    Affects Versions: 4.0.0
>            Reporter: Richard Shadrach
>            Priority: Minor
>
> Reading a single partition of a parquet file via filters is significantly 
> slower than reading the partition directly.
> {code:python}
> import pandas as pd
> import pyarrow.parquet
> size = 100_000
> df = pd.DataFrame({'a': [1, 2, 3] * size, 'b': [4, 5, 6] * size})
> df.to_parquet('test.parquet', partition_cols=['a'])
> %timeit pyarrow.parquet.read_table('test.parquet/a=1')
> %timeit pyarrow.parquet.read_table('test.parquet', filters=[('a', '=', 1)])
> {code}
> gives the timings
> {code:python}
> 2.57 ms ± 41.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
> 5.18 ms ± 148 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) {code}
> Likewise, changing size to 1_000_000 in the above code gives
> {code:python}
> 16.3 ms ± 269 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
> 32.7 ms ± 1.02 ms per loop (mean ± std. dev. of 7 runs, 10 loops each){code}
> Part of the docs for 
> [read_table|https://arrow.apache.org/docs/python/generated/pyarrow.parquet.read_table.html]
>  states:
> > Partition keys embedded in a nested directory structure will be exploited 
> > to avoid loading files at all if they contain no matching rows.
> From this, I expected the performance to be roughly the same. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)