[ 
https://issues.apache.org/jira/browse/PARQUET-108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14934654#comment-14934654
 ] 

Abhilash L L commented on PARQUET-108:
--------------------------------------

Hello,

    We are creating Parquet files from within our MR code as one of the 
MultipleOutputs. We observe that it starts throwing OOM errors once the number 
of partitions grows too large. We are using 
http://mvnrepository.com/artifact/com.twitter/parquet-avro/1.6.0

    Will this fix work in our flow as well, even though we are not writing via Hive?


PS: Sorry for adding a comment on a resolved ticket 
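For context on why this runs out of memory: each open ParquetWriter buffers roughly one row group (parquet.block.size, 128 MB by default in 1.6.0) in the heap before flushing, so with MultipleOutputs every simultaneously open partition adds another buffer. A rough back-of-envelope sketch (the 50-writer figure is an illustrative assumption, not from the ticket):

```java
// Rough heap estimate for many simultaneously open Parquet writers.
// Assumption: each open writer buffers up to one full row group in memory;
// 128 MB is the parquet-avro 1.6.0 default block size.
public class ParquetHeapEstimate {
    static final long DEFAULT_BLOCK_SIZE = 128L * 1024 * 1024; // 128 MB

    // Worst-case buffered bytes for `openWriters` concurrent writers.
    static long worstCaseBufferedBytes(int openWriters, long blockSize) {
        return openWriters * blockSize;
    }

    public static void main(String[] args) {
        // e.g. 50 partitions open at once -> ~6.4 GB of write buffers alone
        long bytes = worstCaseBufferedBytes(50, DEFAULT_BLOCK_SIZE);
        System.out.println(bytes / (1024 * 1024) + " MB");
    }
}
```

This is why the memory manager added for this ticket throttles row-group sizes as writer count grows; without it (or on older versions), lowering the block size or limiting concurrently open partitions are the usual workarounds.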

> Parquet Memory Management in Java
> ---------------------------------
>
>                 Key: PARQUET-108
>                 URL: https://issues.apache.org/jira/browse/PARQUET-108
>             Project: Parquet
>          Issue Type: Improvement
>            Reporter: Brock Noland
>            Assignee: Dong Chen
>             Fix For: 1.6.0
>
>
> As discussed in HIVE-7685, Hive + Parquet often runs out of memory when writing 
> to many Hive partitions. This is quite problematic for our users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
