[ https://issues.apache.org/jira/browse/ARROW-6910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968654#comment-16968654 ]

V Luong edited comment on ARROW-6910 at 11/6/19 8:08 PM:
---------------------------------------------------------

[~apitrou] [~wesm] [~jorisvandenbossche] I'm re-testing this issue with the 
newly released 0.15.1, using the following code in an interactive Python 3.7 
shell:

-----
from pyarrow.parquet import read_table
import os
from tqdm import tqdm


PARQUET_S3_PATH = 's3://public-parquet-test-data/big.snappy.parquet'  # unused here, kept for reference
PARQUET_HTTP_PATH = 'http://public-parquet-test-data.s3.amazonaws.com/big.snappy.parquet'
PARQUET_TMP_PATH = '/tmp/big.snappy.parquet'


# download the test file once so the loop below only measures reading
os.system('wget --output-document={} {}'.format(PARQUET_TMP_PATH, PARQUET_HTTP_PATH))


# read the same file 10 times; the result is deliberately not assigned,
# so no table objects are kept alive between iterations
for _ in tqdm(range(10)):
    read_table(
        source=PARQUET_TMP_PATH,
        columns=None,
        use_threads=False,
        metadata=None,
        use_pandas_metadata=False,
        memory_map=False,
        filesystem=None,
        filters=None)
-----

I observe the following mysterious behavior:
- If I do nothing after the above loop, the process still occupies 8-10 GB of 
memory and does not release it. I left it idle in that state for a good 10-15 
minutes and confirmed that the memory was still occupied (the sketch below 
shows one way to check whether Arrow's pool is what still holds it).
- Then, as soon as I do something trivial in the interactive shell, like 
"import pyarrow; print(pyarrow.__version__)", the memory is immediately 
released.
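
To check whether the retained memory is still tracked by Arrow's memory pool 
or is merely being cached by the allocator underneath, one could log the 
process RSS next to PyArrow's own allocation counter on each iteration. This 
is only a measurement sketch (it assumes psutil is installed; the file path is 
just my test file):

-----
import psutil  # assumed available, only used to read the process RSS
import pyarrow as pa
from pyarrow.parquet import read_table

PARQUET_TMP_PATH = '/tmp/big.snappy.parquet'
proc = psutil.Process()

for i in range(10):
    read_table(PARQUET_TMP_PATH, use_threads=False, memory_map=False)
    rss_mb = proc.memory_info().rss / 2**20        # what the OS charges the process
    pool_mb = pa.total_allocated_bytes() / 2**20   # what Arrow's pool still tracks
    print('iteration {}: RSS = {:.0f} MB, Arrow pool = {:.0f} MB'.format(i, rss_mb, pool_mb))
-----

If RSS stays high while the Arrow pool reports roughly zero after each 
iteration, the memory is being held by the allocator rather than leaked by 
live Arrow objects.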

This behavior remains unintuitive to me, and it suggests that users still 
don't have firm control over the memory used by PyArrow. As of 0.15.1, each 
read_table(...) call is still not memory-neutral by default. This means 
long-running iterative programs, especially ML training jobs that repeatedly 
load such files, will inevitably run out of memory.
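
If the numbers above point at allocator caching rather than a genuine leak, 
two knobs I plan to try are shortening jemalloc's decay time and switching to 
the system allocator. I haven't confirmed that either resolves this particular 
case, so treat this purely as a sketch:

-----
import pyarrow as pa
from pyarrow.parquet import read_table

# Ask jemalloc (the default allocator in the binary wheels) to return dirty
# pages to the OS without delay instead of caching them. May raise an error
# if PyArrow was built without jemalloc.
pa.jemalloc_set_decay_ms(0)

# Alternative: route Arrow allocations through the plain system allocator.
# pa.set_memory_pool(pa.system_memory_pool())

for _ in range(10):
    read_table('/tmp/big.snappy.parquet', use_threads=False, memory_map=False)
-----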



> [Python] pyarrow.parquet.read_table(...) takes up lots of memory which is not 
> released until program exits
> ----------------------------------------------------------------------------------------------------------
>
>                 Key: ARROW-6910
>                 URL: https://issues.apache.org/jira/browse/ARROW-6910
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: C++, Python
>    Affects Versions: 0.15.0
>            Reporter: V Luong
>            Assignee: Wes McKinney
>            Priority: Critical
>              Labels: pull-request-available
>             Fix For: 1.0.0, 0.15.1
>
>         Attachments: arrow6910.png
>
>          Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> I realize that when I read a lot of Parquet files using 
> pyarrow.parquet.read_table(...), my program's memory usage becomes very 
> bloated, even though I don't keep the table objects after converting them to 
> pandas DataFrames.
> You can try this in an interactive Python shell to reproduce the problem:
> ```{python}
> from tqdm import tqdm
> from pyarrow.parquet import read_table
> PATH = '/tmp/big.snappy.parquet'
> for _ in tqdm(range(10)):
>     # the result is deliberately not assigned, so no new objects are kept at all
>     read_table(PATH, use_threads=False, memory_map=False)
> ```
> During the for loop above, if you watch the memory usage (e.g. with htop), 
> you'll see that it keeps creeping up. Either the program crashes during the 
> 10 iterations, or, if they do complete, the program still occupies a huge 
> amount of memory even though no objects are kept. That memory is only 
> released when you exit() from Python.
> This problem means that my compute jobs using PyArrow currently need to use 
> bigger server instances than I think is necessary, which translates to 
> significant extra cost.


