[
https://issues.apache.org/jira/browse/ARROW-6976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17088344#comment-17088344
]
Joris Van den Bossche commented on ARROW-6976:
----------------------------------------------
[~Athlete_369] that can be possible, depending on your file. Parquet can be
highly compressed, so there can be a big difference between the file size on
disk and the in-memory size of the resulting pandas DataFrame. You can check
the memory usage of your DataFrame with {{df.info(memory_usage="deep")}}. How
much does that indicate?
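A minimal sketch of that check (the column names and sizes below are made up for illustration): for object-dtype string columns, plain {{memory_usage()}} counts only the 8-byte object pointers, while {{deep=True}} also counts the string payloads themselves, which is where the gap between Parquet file size and pandas memory usually comes from.

```python
import pandas as pd

# Stand-in for a DataFrame read back from Parquet, e.g.
# df = pd.read_parquet("data.parquet")  -- hypothetical path.
# Force object dtype so the string payloads live as Python objects.
df = pd.DataFrame({
    "s": pd.Series(["a fairly long repeated string value"] * 100_000, dtype=object),
    "x": range(100_000),
})

# Shallow accounting: only the per-cell object pointers for "s".
shallow = df.memory_usage().sum()
# Deep accounting: also the Python string objects behind those pointers.
deep = df.memory_usage(deep=True).sum()
print(f"shallow: {shallow} bytes, deep: {deep} bytes")

# df.info(memory_usage="deep") prints the same deep total in its footer.
df.info(memory_usage="deep")
```

Because Parquet dictionary-encodes and compresses repeated strings, the on-disk size of such data can be a small fraction of the deep in-memory size, so a large gap on its own is not evidence of a leak.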
> Possible memory leak in pyarrow read_parquet
> --------------------------------------------
>
> Key: ARROW-6976
> URL: https://issues.apache.org/jira/browse/ARROW-6976
> Project: Apache Arrow
> Issue Type: Bug
> Components: Python
> Affects Versions: 0.15.0
> Environment: linux ubuntu 18.04
> Reporter: david cottrell
> Priority: Critical
> Attachments: image-2019-10-23-16-17-20-739.png, pyarrow-master.png,
> pyarrow_0150.png
>
>
>
> Version and repro info in the gist below.
> Not sure if I'm misunderstanding something from this post
> [https://arrow.apache.org/blog/2019/02/05/python-string-memory-0.12/],
> but there seems to be memory accumulation, which is exacerbated with
> higher-arity objects like strings and dates (not datetimes).
>
> I was not able to reproduce the issue on MacOS. Downgrading to 0.14.1 seemed
> to "fix" or lessen the problem.
>
> [https://gist.github.com/cottrell/a3f95aa59408d87f925ec606d8783e62]
>
> Let me know if this post should go elsewhere.
> !image-2019-10-23-16-17-20-739.png!
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)