[
https://issues.apache.org/jira/browse/ARROW-6910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Wes McKinney reassigned ARROW-6910:
-----------------------------------
Assignee: Wes McKinney
> [Python] pyarrow.parquet.read_table(...) takes up lots of memory which is not
> released until program exits
> ----------------------------------------------------------------------------------------------------------
>
> Key: ARROW-6910
> URL: https://issues.apache.org/jira/browse/ARROW-6910
> Project: Apache Arrow
> Issue Type: Bug
> Components: C++, Python
> Affects Versions: 0.15.0
> Reporter: V Luong
> Assignee: Wes McKinney
> Priority: Critical
> Fix For: 1.0.0, 0.15.1
>
> Attachments: arrow6910.png
>
>
> I have noticed that when I read a large number of Parquet files using
> pyarrow.parquet.read_table(...), my program's memory usage becomes very
> bloated, even though I don't keep the table objects after converting them
> to Pandas DataFrames.
> You can try this in an interactive Python shell to reproduce this problem:
> ```{python}
> from tqdm import tqdm
> from pyarrow.parquet import read_table
>
> PATH = '/tmp/big.snappy.parquet'
>
> for _ in tqdm(range(100)):
>     # Note: the result is deliberately not assigned to anything, so no
>     # new Python objects are kept alive across iterations.
>     read_table(PATH, use_threads=False, memory_map=False)
> ```
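> To check whether the retained memory is still tracked by Arrow's memory
> pool or is being held after Arrow has freed it, you can compare Arrow's
> own accounting with the process's resident set size. A minimal sketch
> (pyarrow.total_allocated_bytes() reports the default pool's live bytes;
> the RSS reading uses the standard-library resource module, whose
> ru_maxrss is the peak RSS, in kilobytes on Linux and bytes on macOS):
> ```{python}
> import resource
>
> import pyarrow as pa
> from pyarrow.parquet import read_table
>
> PATH = '/tmp/big.snappy.parquet'
>
> for i in range(100):
>     read_table(PATH, use_threads=False, memory_map=False)
>     # Bytes currently held by Arrow's default memory pool.
>     arrow_bytes = pa.total_allocated_bytes()
>     # Peak resident set size of the whole process.
>     rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
>     print(f'iter {i}: arrow pool = {arrow_bytes} B, peak RSS = {rss}')
> ```
> If the pool figure drops back to roughly zero after each iteration while
> RSS keeps growing, the tables are being freed by Arrow and the memory is
> being retained at the allocator level rather than leaked.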
> During the loop above, if you watch the memory usage (e.g. with htop),
> you'll see it creep up steadily. Either the program crashes before the
> 100 iterations finish, or, if they do complete, the process still
> occupies a huge amount of memory even though no objects are kept. That
> memory is only released when you exit() from Python.
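> One experiment that can isolate allocator behavior (a sketch, assuming
> your build's default pool is jemalloc, as it is on Linux and macOS wheels
> of this era): route Arrow's allocations through the plain system
> allocator via pyarrow.system_memory_pool() and re-run the loop. If the
> growth disappears, the retention is the default allocator caching freed
> pages rather than Arrow holding references.
> ```{python}
> import pyarrow as pa
> from pyarrow.parquet import read_table
>
> # Replace the default memory pool (jemalloc, where available) with the
> # system allocator before any reads happen.
> pa.set_memory_pool(pa.system_memory_pool())
>
> PATH = '/tmp/big.snappy.parquet'
>
> for _ in range(100):
>     read_table(PATH, use_threads=False, memory_map=False)
> ```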
> This problem means that my compute jobs using PyArrow currently need
> bigger server instances than should be necessary, which translates to
> significant extra cost.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)