[ 
https://issues.apache.org/jira/browse/ARROW-5993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890312#comment-16890312
 ] 

Wes McKinney commented on ARROW-5993:
-------------------------------------

This isn't actually a bug, but rather a symptom of fully materializing a densely 
dictionary-encoded column.

I'm addressing this in my patch for ARROW-3772 and follow-up patches, which will 
allow this data to be read directly as dictionary-encoded without blowing up 
memory. I'll leave this issue open so we can validate the patch once I get 
it submitted.

> [Python] Reading a dictionary column from Parquet results in disproportionate 
> memory usage
> ------------------------------------------------------------------------------------------
>
>                 Key: ARROW-5993
>                 URL: https://issues.apache.org/jira/browse/ARROW-5993
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Python
>    Affects Versions: 0.14.0
>            Reporter: Daniel Haviv
>            Priority: Major
>              Labels: memory, parquet
>             Fix For: 1.0.0
>
>
> I'm using pyarrow to read a 40MB parquet file.
> When reading all of the columns besides the "body" column, the process peaks 
> at 170MB.
> Reading only the "body" column results in over 6GB of memory used.
> I made the file publicly accessible: 
> s3://dhavivresearch/pyarrow/demofile.parquet



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
