[
https://issues.apache.org/jira/browse/PARQUET-353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15053574#comment-15053574
]
Ryan Blue commented on PARQUET-353:
-----------------------------------
Merged #295, which has a small change to Nitin's original fix. Thanks for
contributing this fix, [~nitin2goyal]!
> Compressors not getting recycled while writing parquet files, causing memory
> leak
> ---------------------------------------------------------------------------------
>
> Key: PARQUET-353
> URL: https://issues.apache.org/jira/browse/PARQUET-353
> Project: Parquet
> Issue Type: Bug
> Components: parquet-mr
> Affects Versions: 1.6.0, 1.7.0, 1.8.0
> Reporter: Nitin Goyal
> Fix For: 1.9.0
>
>
> Compressors are not getting recycled while writing parquet files. This
> causes a native/physical memory leak in my Spark app, which is parquet-write
> intensive, since it creates new compressors every time I write parquet files.
> The actual code issue is that we create a 'codecFactory' in the
> 'getRecordWriter' method of ParquetOutputFormat.java but never call
> codecFactory.release(), which is responsible for recycling compressors.
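The recycling pattern the description refers to can be illustrated with a minimal, self-contained sketch (hypothetical class and field names; not parquet-mr's actual CodecFactory): compressors are kept in a pool, and release() returns them for reuse. If release() is never called, every writer allocates a fresh compressor, which leaks native memory when real codec buffers are involved.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of a compressor pool: getCompressor() reuses a pooled
// instance when one is available; release() returns an instance to the pool.
public class CompressorPoolSketch {
    static class Compressor {}

    private final Deque<Compressor> pool = new ArrayDeque<>();
    int allocations = 0; // counts fresh allocations, to expose the leak

    Compressor getCompressor() {
        Compressor c = pool.poll();
        if (c == null) {
            allocations++;       // no pooled instance: allocate a new one
            c = new Compressor();
        }
        return c;
    }

    void release(Compressor c) {
        pool.push(c);            // recycle for the next writer
    }

    public static void main(String[] args) {
        CompressorPoolSketch factory = new CompressorPoolSketch();

        // With release(): the same instance is recycled across writes.
        Compressor a = factory.getCompressor();
        factory.release(a);
        Compressor b = factory.getCompressor();
        System.out.println(a == b);              // same instance reused
        System.out.println(factory.allocations); // still one allocation

        // Without release(): each write allocates a new compressor.
        factory.getCompressor();
        System.out.println(factory.allocations); // allocation count grows
    }
}
```

The merged fix follows the second half of this sketch in reverse: the record writer holds on to the factory and calls release() on close, so compressors go back to the pool instead of piling up.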
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)