[
https://issues.apache.org/jira/browse/ARROW-5086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16932883#comment-16932883
]
Antoine Pitrou commented on ARROW-5086:
---------------------------------------
I would be surprised if the kernel copied the memory into RSS. I think the
paged-in memory is simply accounted for in RSS.
As long as there is no memory pressure, the kernel probably doesn't feel any
need to page those pages out.
> [Python] Space leak in ParquetFile.read_row_group()
> ----------------------------------------------------
>
> Key: ARROW-5086
> URL: https://issues.apache.org/jira/browse/ARROW-5086
> Project: Apache Arrow
> Issue Type: Bug
> Components: Python
> Affects Versions: 0.12.1
> Reporter: Jakub Okoński
> Assignee: Wes McKinney
> Priority: Major
> Labels: parquet, pull-request-available
> Fix For: 0.15.0
>
> Attachments: all.png
>
> Time Spent: 10m
> Remaining Estimate: 0h
>
> I have a code pattern like this:
>
>     import pyarrow.parquet as pq
>
>     reader = pq.ParquetFile(path)
>     for ix in range(0, reader.num_row_groups):
>         table = reader.read_row_group(ix, columns=self._columns)
>         # operate on table
>
> But it leaks memory over time, releasing it only when the reader object is
> collected. Here's a workaround:
>
>     num_row_groups = pq.ParquetFile(path).num_row_groups
>     for ix in range(0, num_row_groups):
>         table = pq.ParquetFile(path).read_row_group(ix, columns=self._columns)
>         # operate on table
>
> This puts an upper bound on memory usage and is what I'd expect from the
> code. I also added a gc.collect() call at the end of every loop iteration.
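> Putting the workaround and the gc.collect() call together, the loop looks
> roughly like this (a sketch; path and self._columns come from the
> surrounding class):
>
>     import gc
>     import pyarrow.parquet as pq
>
>     num_row_groups = pq.ParquetFile(path).num_row_groups
>     for ix in range(num_row_groups):
>         # Open a fresh reader each iteration so its buffers can be freed
>         table = pq.ParquetFile(path).read_row_group(ix, columns=self._columns)
>         # operate on table
>         del table
>         gc.collect()  # encourage prompt reclamation each iteration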
>
> I charted memory usage for a small benchmark that just copies a file, one
> row group at a time, converting to pandas and back to Arrow on the write
> path. The black line is the first approach, using a single reader object;
> the blue line instantiates a fresh reader in every iteration.
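> For reference, the benchmark was shaped roughly like this (a sketch, not
> the exact script; the input and output paths are placeholders):
>
>     import pyarrow as pa
>     import pyarrow.parquet as pq
>
>     reader = pq.ParquetFile("in.parquet")  # placeholder input
>     writer = None
>     for ix in range(reader.num_row_groups):
>         table = reader.read_row_group(ix)
>         # Round-trip through pandas on the write path
>         table = pa.Table.from_pandas(table.to_pandas())
>         if writer is None:
>             writer = pq.ParquetWriter("out.parquet", table.schema)  # placeholder output
>         writer.write_table(table)
>     writer.close()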
--
This message was sent by Atlassian Jira
(v8.3.4#803005)