[ https://issues.apache.org/jira/browse/ARROW-6910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16954021#comment-16954021 ]

V Luong edited comment on ARROW-6910 at 10/17/19 6:51 PM:
----------------------------------------------------------

[~wesm] [~jorisvandenbossche] [~apitrou]  I've made a Parquet data set 
available at s3://public-parquet-test-data/big.parquet for testing. It's only 
moderately big. I repeatedly load various files thousands of times during 
iterative model training jobs that last for days. In 0.14.1 my long-running 
jobs succeeded, but in 0.15.0 the same jobs crashed after 30 minutes to an 
hour. My inspection, as shared above, indicates that memory usage grows with 
the number of times read_table(...) is called and is never released, so 
long-running jobs inevitably die.
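
To make the loop concrete, here is a minimal sketch of the pattern my jobs 
follow, with per-iteration memory tracking added (psutil and the placeholder 
paths list are assumptions for illustration, not part of my actual jobs):

```{python}
import os

import psutil  # assumed available; used only to read the process RSS
import pyarrow
from pyarrow.parquet import read_table

# Placeholder: fill in with paths to big local Parquet files.
paths_of_a_bunch_of_big_parquet_files = []

proc = psutil.Process(os.getpid())

for i, path in enumerate(paths_of_a_bunch_of_big_parquet_files):
    # The returned table is discarded immediately; nothing is kept.
    read_table(path, use_threads=True, memory_map=False)
    rss_mb = proc.memory_info().rss // (1024 * 1024)
    arrow_mb = pyarrow.total_allocated_bytes() // (1024 * 1024)
    print(f"iteration {i}: process RSS {rss_mb} MB, Arrow allocated {arrow_mb} MB")
```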



> [Python] pyarrow.parquet.read_table(...) takes up lots of memory which is not 
> released until program exits
> ----------------------------------------------------------------------------------------------------------
>
>                 Key: ARROW-6910
>                 URL: https://issues.apache.org/jira/browse/ARROW-6910
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: C++, Python
>    Affects Versions: 0.15.0
>            Reporter: V Luong
>            Priority: Critical
>             Fix For: 1.0.0, 0.15.1
>
>         Attachments: arrow6910.png
>
>
> I realize that when I read a lot of Parquet files using 
> pyarrow.parquet.read_table(...), my program's memory usage becomes very 
> bloated, even though I don't keep the table objects after converting them to 
> Pandas DataFrames.
> You can try this in an interactive Python shell to reproduce the problem:
> ```{python}
> from pyarrow.parquet import read_table
>
> for path in paths_of_a_bunch_of_big_parquet_files:
>     # Note that I'm not assigning the read_table(...) result to anything,
>     # so I'm not keeping any new objects around at all.
>     read_table(path, use_threads=True, memory_map=False)
> ```
> After the for loop above, if you check memory usage (e.g. with the htop 
> program), you'll see that the Python process has taken up a lot of memory, 
> and that memory is only released when you exit() from Python.
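> One way to narrow this down (a hedged diagnostic sketch, not a confirmed 
> explanation) is to check Arrow's default memory pool counters after the 
> loop: if the pool reports little or no allocated memory while the process 
> RSS stays high, the memory is being retained by the underlying allocator 
> rather than by live Arrow objects.
> ```{python}
> import pyarrow
>
> # Inspect the default memory pool after running the loop above.
> pool = pyarrow.default_memory_pool()
> # Bytes currently allocated through this pool by live Arrow objects.
> print("pool bytes_allocated:", pool.bytes_allocated())
> # High-water mark of pool allocations seen so far.
> print("pool max_memory:", pool.max_memory())
> ```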
> This problem means that my compute jobs using PyArrow currently need to use 
> bigger server instances than I think is necessary, which translates to 
> significant extra cost.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
