[
https://issues.apache.org/jira/browse/ARROW-6910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
V Luong updated ARROW-6910:
---------------------------
Description:
I have noticed that when I read a lot of Parquet files using
pyarrow.parquet.read_table(...), my program's memory usage becomes very
bloated, even though I don't keep the table objects after converting them to
Pandas DataFrames.
You can try this in an interactive Python shell to reproduce this problem:
```{python}
from pyarrow.parquet import read_table

for path in paths_of_a_bunch_of_big_parquet_files:
    # note: the read_table(...) result is deliberately not assigned to
    # anything, so no new objects are kept alive at all
    read_table(path, use_threads=True, memory_map=False)
```
After the for loop above finishes, if you inspect memory usage (e.g. with
htop), you'll see that the Python process has taken up a lot of memory. That
memory is only released when you exit() from Python.
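For diagnosis, it may help to compare the bytes pyarrow's allocator still
tracks against the process-level usage that htop reports. The sketch below is
illustrative only and reuses the placeholder path list from above;
pa.default_memory_pool().bytes_allocated() and pa.total_allocated_bytes() are
existing pyarrow calls.
```{python}
import gc

import pyarrow as pa
from pyarrow.parquet import read_table

# paths_of_a_bunch_of_big_parquet_files is the same placeholder list as above
for path in paths_of_a_bunch_of_big_parquet_files:
    read_table(path, use_threads=True, memory_map=False)

# Drop unreachable objects first, to rule out ordinary Python garbage.
gc.collect()

# If these numbers are near zero while htop still shows a bloated resident
# set, the memory is being retained by the allocator rather than by live
# Arrow objects.
print("default pool bytes_allocated:", pa.default_memory_pool().bytes_allocated())
print("total_allocated_bytes:", pa.total_allocated_bytes())
```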
This problem means that my compute jobs using PyArrow currently need to run on
bigger server instances than should be necessary, which translates to
significant extra cost.
> pyarrow.parquet.read_table(...) takes up lots of memory which is not released
> until program exits
> -------------------------------------------------------------------------------------------------
>
> Key: ARROW-6910
> URL: https://issues.apache.org/jira/browse/ARROW-6910
> Project: Apache Arrow
> Issue Type: Bug
> Affects Versions: 0.15.0
> Reporter: V Luong
> Priority: Critical
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)