[ https://issues.apache.org/jira/browse/ARROW-7305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17002975 ]
Wes McKinney commented on ARROW-7305:
-------------------------------------
See the following script:
https://gist.github.com/wesm/193f644d10b5aee8c258b8f4f81c5161
Here is the output for me (from the master branch; I assume 0.15.1 behaves the same):
{code}
$ python arrow7305.py
Starting RSS: 102367232
Read CSV RSS: 154279936
Wrote Parquet RSS: 522485760
Waited 1 second RSS: 161763328
Read CSV RSS: 164732928
Wrote Parquet RSS: 528371712
Waited 1 second RSS: 226361344
Read CSV RSS: 167698432
Wrote Parquet RSS: 528502784
Waited 1 second RSS: 226492416
Read CSV RSS: 172175360
Wrote Parquet RSS: 532971520
Waited 1 second RSS: 230961152
Read CSV RSS: 172093440
Wrote Parquet RSS: 532889600
Waited 1 second RSS: 230879232
Read CSV RSS: 230940672
Wrote Parquet RSS: 532992000
Waited 1 second RSS: 230981632
Read CSV RSS: 232812544
Wrote Parquet RSS: 534822912
Waited 1 second RSS: 232812544
Read CSV RSS: 235274240
Wrote Parquet RSS: 537608192
Waited 1 second RSS: 235577344
Read CSV RSS: 236883968
Wrote Parquet RSS: 531349504
Waited 1 second RSS: 229318656
Read CSV RSS: 231157760
Wrote Parquet RSS: 533168128
Waited 1 second RSS: 231157760
Waited 1 second RSS: 172433408
Waited 1 second RSS: 172433408
Waited 1 second RSS: 172433408
Waited 1 second RSS: 172433408
Waited 1 second RSS: 172433408
Waited 1 second RSS: 172433408
Waited 1 second RSS: 172433408
Waited 1 second RSS: 172433408
Waited 1 second RSS: 172433408
Waited 1 second RSS: 172433408
{code}
Here is the output from 0.14.1:
{code}
$ python arrow7305.py
Starting RSS: 74477568
Read CSV RSS: 126550016
Wrote Parquet RSS: 129470464
Waited 1 second RSS: 129470464
Read CSV RSS: 132321280
Wrote Parquet RSS: 135151616
Waited 1 second RSS: 135151616
Read CSV RSS: 135155712
Wrote Parquet RSS: 133169152
Waited 1 second RSS: 133169152
Read CSV RSS: 135159808
Wrote Parquet RSS: 133230592
Waited 1 second RSS: 133230592
Read CSV RSS: 135217152
Wrote Parquet RSS: 135217152
Waited 1 second RSS: 135217152
Read CSV RSS: 139567104
Wrote Parquet RSS: 139567104
Waited 1 second RSS: 139567104
Read CSV RSS: 141398016
Wrote Parquet RSS: 133378048
Waited 1 second RSS: 133378048
Read CSV RSS: 137068544
Wrote Parquet RSS: 133234688
Waited 1 second RSS: 133234688
Read CSV RSS: 135221248
Wrote Parquet RSS: 135221248
Waited 1 second RSS: 135221248
Read CSV RSS: 139567104
Wrote Parquet RSS: 133234688
Waited 1 second RSS: 133234688
Waited 1 second RSS: 133234688
Waited 1 second RSS: 133234688
Waited 1 second RSS: 133234688
Waited 1 second RSS: 133234688
Waited 1 second RSS: 133234688
Waited 1 second RSS: 133234688
Waited 1 second RSS: 133234688
Waited 1 second RSS: 133234688
Waited 1 second RSS: 133234688
Waited 1 second RSS: 133234688
{code}
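For reference, the gist is essentially a loop that reads the CSV, writes it back out to Parquet, and prints the process RSS at each step. A minimal sketch of that loop follows (the file name, iteration counts, and the psutil-based RSS helper are assumptions, not the exact gist):
{code}
import time

import psutil
import pyarrow.csv as csv
import pyarrow.parquet as pq


def rss():
    # Resident set size of the current process, in bytes
    return psutil.Process().memory_info().rss


print("Starting RSS:", rss())

for _ in range(10):
    # '50mb.csv' stands in for the attachment from the issue; the path is an assumption
    table = csv.read_csv('50mb.csv')
    print("Read CSV RSS:", rss())
    pq.write_table(table, 'out.parquet')
    print("Wrote Parquet RSS:", rss())
    del table
    time.sleep(1)
    print("Waited 1 second RSS:", rss())

# Keep observing RSS after the work is done to see memory handed back to the OS
for _ in range(10):
    time.sleep(1)
    print("Waited 1 second RSS:", rss())
{code}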
I've only begun to investigate, but these changes are related to the jemalloc
version upgrade and the configuration options we changed. I don't know what is
causing the ~30-40MB difference in baseline memory usage, though (it could be
differences in aggregate shared library sizes). We moved memory page management
to a background thread, which means that memory is no longer released to the OS
immediately as it was before, but on a short time delay, as the output above
shows.
The basic idea is that requesting memory from the operating system is
expensive, so jemalloc holds on to freed memory for a short period of time.
Applications that use a lot of memory tend to keep using a lot of memory, so
reusing those pages generally improves performance.
An alternative to our current configuration would be to disable the
background_thread option and set decay_ms to 0, so that pages are returned to
the OS right away. This would likely yield worse performance in some
applications.
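For applications that prefer eager release over throughput, a minimal sketch of what adjusting the decay looks like from Python. This assumes a pyarrow build that bundles jemalloc and that exposes the pyarrow.jemalloc_set_decay_ms helper (present in recent versions); it only changes the decay time, not the background_thread option:
{code}
import pyarrow as pa

# Ask jemalloc to return unused pages to the OS immediately rather than on a
# time delay. Only effective when pyarrow is built with the bundled jemalloc.
pa.jemalloc_set_decay_ms(0)

# In newer pyarrow versions the active allocator can be inspected; expect
# 'jemalloc' for the default build.
print(pa.default_memory_pool().backend_name)
{code}
The trade-off is the one described above: pages are handed back promptly, but workloads that repeatedly allocate and free may pay for the extra round trips to the OS.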
We have to strike a delicate balance between software that performs well in
real-world scenarios and software that offers predictable resource
utilization. It is hard to satisfy everyone.
> [Python] High memory usage writing pyarrow.Table with large strings to parquet
> ------------------------------------------------------------------------------
>
> Key: ARROW-7305
> URL: https://issues.apache.org/jira/browse/ARROW-7305
> Project: Apache Arrow
> Issue Type: Task
> Components: Python
> Affects Versions: 0.15.1
> Environment: Mac OSX
> Reporter: Bogdan Klichuk
> Priority: Major
> Labels: parquet
> Attachments: 50mb.csv.gz
>
>
> My use case is specific: the stored datasets contain large strings (1-100 MB each).
> Let's take a single row as an example.
> 43mb.csv is a 1-row CSV with 10 columns; one column is a 43 MB string.
> When I read this CSV with pandas and then dump it to Parquet, my script consumes
> 10x the 43 MB.
> As the number of such rows grows, the relative memory overhead diminishes, but
> I want to focus on this specific case.
> Here's the footprint after running using memory profiler:
> {code:java}
> Line #    Mem usage    Increment   Line Contents
> ================================================
>      4     48.9 MiB     48.9 MiB   @profile
>      5                             def test():
>      6    143.7 MiB     94.7 MiB       data = pd.read_csv('43mb.csv')
>      7    498.6 MiB    354.9 MiB       data.to_parquet('out.parquet')
> {code}
> Is this typical for parquet in case of big strings?