[ https://issues.apache.org/jira/browse/ARROW-17441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17580475#comment-17580475 ]

Will Jones commented on ARROW-17441:
------------------------------------

Going back to my original test with Parquet, it does seem like there is some 
long-standing issue with Parquet reads and mimalloc. And a regression with the 
system allocator on macOS?

Here is the original Parquet read test (so all buffers are allocated within 
Arrow, no numpy):
{code:python}
import os
import psutil
import time
import gc
process = psutil.Process(os.getpid())


import pyarrow.parquet as pq
import pyarrow as pa

def print_rss():
    print(f"RSS: {process.memory_info().rss:,} bytes")

pq_path = "tall.parquet"

print(f"memory_pool={pa.default_memory_pool().backend_name}")
print_rss()
print("reading table")
tab = pq.read_table(pq_path)
print_rss()
print("deleting table")
del tab
gc.collect()
print_rss()
print("releasing unused memory")
pa.default_memory_pool().release_unused()
print_rss()
print("waiting 10 seconds")
time.sleep(10)
print_rss()
{code}
Result in PyArrow 7.0.0:
{code:none}
ARROW_DEFAULT_MEMORY_POOL=mimalloc python test_pool2.py && \
    ARROW_DEFAULT_MEMORY_POOL=jemalloc python test_pool2.py && \
    ARROW_DEFAULT_MEMORY_POOL=system python test_pool2.py
memory_pool=mimalloc
RSS: 47,906,816 bytes
reading table
RSS: 2,077,507,584 bytes
deleting table
RSS: 2,071,887,872 bytes
releasing unused memory
RSS: 2,064,875,520 bytes
waiting 10 seconds
RSS: 1,862,352,896 bytes
memory_pool=jemalloc
RSS: 47,415,296 bytes
reading table
RSS: 2,704,965,632 bytes
deleting table
RSS: 70,746,112 bytes
releasing unused memory
RSS: 71,663,616 bytes
waiting 10 seconds
RSS: 71,663,616 bytes
memory_pool=system
RSS: 47,857,664 bytes
reading table
RSS: 2,705,408,000 bytes
deleting table
RSS: 71,106,560 bytes
releasing unused memory
RSS: 71,106,560 bytes
waiting 10 seconds
RSS: 71,106,560 bytes
{code}
Result in PyArrow 9.0.0:
{code:none}
ARROW_DEFAULT_MEMORY_POOL=mimalloc python test_pool2.py && \
    ARROW_DEFAULT_MEMORY_POOL=jemalloc python test_pool2.py && \
    ARROW_DEFAULT_MEMORY_POOL=system python test_pool2.py
memory_pool=mimalloc
RSS: 48,037,888 bytes
reading table
RSS: 2,140,487,680 bytes
deleting table
RSS: 2,149,711,872 bytes
releasing unused memory
RSS: 2,142,273,536 bytes
waiting 10 seconds
RSS: 1,710,981,120 bytes
memory_pool=jemalloc
RSS: 48,136,192 bytes
reading table
RSS: 2,681,274,368 bytes
deleting table
RSS: 71,942,144 bytes
releasing unused memory
RSS: 72,908,800 bytes
waiting 10 seconds
RSS: 72,908,800 bytes
memory_pool=system
RSS: 48,005,120 bytes
reading table
RSS: 2,847,965,184 bytes
deleting table
RSS: 1,440,071,680 bytes
releasing unused memory
RSS: 1,440,071,680 bytes
waiting 10 seconds
RSS: 1,440,071,680 bytes
{code}
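For what it's worth, the pool's own accounting helps separate the two effects seen above: {{bytes_allocated()}} reports what the pool still tracks, while RSS also includes pages the allocator keeps cached after buffers are freed. A minimal sketch (not part of the original report; assumes pyarrow and psutil are installed):
{code:python}
import os

import psutil
import pyarrow as pa

process = psutil.Process(os.getpid())
pool = pa.default_memory_pool()

# Allocate a 64 MiB buffer directly from the pool.
buf = pa.allocate_buffer(64 * 1024 * 1024, memory_pool=pool)
print(f"backend={pool.backend_name}")
print(f"pool bytes_allocated: {pool.bytes_allocated():,}")
print(f"process RSS:          {process.memory_info().rss:,}")

del buf
pool.release_unused()
# The pool's accounting drops as soon as the buffer is freed; RSS may stay
# higher because the allocator can keep the freed pages cached.
print(f"pool bytes_allocated: {pool.bytes_allocated():,}")
print(f"process RSS:          {process.memory_info().rss:,}")
{code}
So a large RSS after {{release_unused()}} with {{bytes_allocated()}} near zero points at allocator caching rather than at buffers the pool is still holding.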

> [Python] Memory kept after del and pool.released_unused()
> ---------------------------------------------------------
>
>                 Key: ARROW-17441
>                 URL: https://issues.apache.org/jira/browse/ARROW-17441
>             Project: Apache Arrow
>          Issue Type: Improvement
>          Components: Python
>    Affects Versions: 9.0.0
>            Reporter: Will Jones
>            Priority: Major
>
> I was trying to reproduce another issue involving memory pools not releasing 
> memory, but encountered this confusing behavior: if I create a table, then 
> call {{del table}}, and then {{pool.release_unused()}}, I still see 
> significant memory usage. With mimalloc in particular, I see no meaningful drop 
> in memory usage after either call.
> Am I missing something? My prior understanding was that memory may be 
> held onto by a memory pool but is forcibly freed by {{release_unused()}}, and 
> that the system memory pool should release memory immediately. But neither of 
> those seems true.
> {code:python}
> import os
> import psutil
> import time
> import gc
> process = psutil.Process(os.getpid())
> import numpy as np
> from uuid import uuid4
> import pyarrow as pa
> def gen_batches(n_groups=200, rows_per_group=200_000):
>     for _ in range(n_groups):
>         id_val = uuid4().bytes
>         yield pa.table({
>             "x": np.random.random(rows_per_group), # This will compress poorly
>             "y": np.random.random(rows_per_group),
>             "a": pa.array(list(range(rows_per_group)), type=pa.int32()), # This compresses with delta encoding
>             "id": pa.array([id_val] * rows_per_group), # This compresses with RLE
>         })
> def print_rss():
>     print(f"RSS: {process.memory_info().rss:,} bytes")
> print(f"memory_pool={pa.default_memory_pool().backend_name}")
> print_rss()
> print("reading table")
> tab = pa.concat_tables(list(gen_batches()))
> print_rss()
> print("deleting table")
> del tab
> gc.collect()
> print_rss()
> print("releasing unused memory")
> pa.default_memory_pool().release_unused()
> print_rss()
> print("waiting 10 seconds")
> time.sleep(10)
> print_rss()
> {code}
> {code:none}
> > ARROW_DEFAULT_MEMORY_POOL=mimalloc python test_pool.py && \
>     ARROW_DEFAULT_MEMORY_POOL=jemalloc python test_pool.py && \
>     ARROW_DEFAULT_MEMORY_POOL=system python test_pool.py
> memory_pool=mimalloc
> RSS: 44,449,792 bytes
> reading table
> RSS: 1,819,557,888 bytes
> deleting table
> RSS: 1,819,590,656 bytes
> releasing unused memory
> RSS: 1,819,852,800 bytes
> waiting 10 seconds
> RSS: 1,819,852,800 bytes
> memory_pool=jemalloc
> RSS: 45,629,440 bytes
> reading table
> RSS: 1,668,677,632 bytes
> deleting table
> RSS: 698,400,768 bytes
> releasing unused memory
> RSS: 699,023,360 bytes
> waiting 10 seconds
> RSS: 699,023,360 bytes
> memory_pool=system
> RSS: 44,875,776 bytes
> reading table
> RSS: 1,713,569,792 bytes
> deleting table
> RSS: 540,311,552 bytes
> releasing unused memory
> RSS: 540,311,552 bytes
> waiting 10 seconds
> RSS: 540,311,552 bytes
> {code}
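
As a side note, the backend doesn't have to be chosen via {{ARROW_DEFAULT_MEMORY_POOL}}: the public pool constructors allow the same comparison from within a single process. A sketch using documented PyArrow calls (the jemalloc/mimalloc constructors raise if the build lacks that allocator, so only the always-available system pool is used here):
{code:python}
import pyarrow as pa

# Make the system allocator the process-wide default...
pa.set_memory_pool(pa.system_memory_pool())
assert pa.default_memory_pool().backend_name == "system"

# ...or override the pool per allocation, leaving the default untouched.
buf = pa.allocate_buffer(1024, memory_pool=pa.system_memory_pool())
print(f"default backend: {pa.default_memory_pool().backend_name}")
{code}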



--
This message was sent by Atlassian Jira
(v8.20.10#820010)