[ https://issues.apache.org/jira/browse/ARROW-11469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17277261#comment-17277261 ]

Joris Van den Bossche edited comment on ARROW-11469 at 2/2/21, 4:27 PM:
------------------------------------------------------------------------

[~Axelg1] Thanks for the report.

We have had similar issues in the past (e.g. ARROW-9924, ARROW-9827), but it
seems some things have deteriorated again.

As a temporary workaround, you can specify {{use_legacy_dataset=True}} to use
the old code path. Another alternative is the single-file {{pq.ParquetFile}}
interface, which never incurs the overhead of dealing with potentially more
complicated datasets.

cc [~bkietz] A lot of overhead seems to be spent in the projection
({{RecordBatchProjector}}, specifically {{SetInputSchema}},
{{CheckProjectable}}, and {{FieldRef}} lookup; see the attached profile
[^profile_wide300.svg]), even though no actual projection is happening in
this case.





> Performance degradation with wide dataframes
> ---------------------------------------------
>
>                 Key: ARROW-11469
>                 URL: https://issues.apache.org/jira/browse/ARROW-11469
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Python
>    Affects Versions: 1.0.0, 1.0.1, 2.0.0, 3.0.0
>            Reporter: Axel G
>            Priority: Minor
>         Attachments: profile_wide300.svg
>
>
> I noticed a relatively big performance degradation in version 1.0.0+ when 
> trying to load wide dataframes.
> For example, you should be able to reproduce it with:
> {code:python}
> import numpy as np
> import pandas as pd
> import pyarrow as pa
> import pyarrow.parquet as pq
> df = pd.DataFrame(np.random.rand(100, 10000))
> table = pa.Table.from_pandas(df)
> pq.write_table(table, "temp.parquet")
> %timeit pd.read_parquet("temp.parquet")
> {code}
> In version 0.17.0 this takes about 300-400 ms, while for 1.0.0 and above it
> suddenly takes around 2 seconds.
> Thanks for looking into this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
