[ https://issues.apache.org/jira/browse/PARQUET-344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712050#comment-14712050 ]

Quentin Francois commented on PARQUET-344:
------------------------------------------

Thanks for your comments. Up to now we have been using a limit on the number of 
rows (see the attached GitHub PR), but I agree it would probably make more sense 
to limit the raw data size per row group, as you suggested.

For now we'll probably stick with what we've built so far, the limit on the 
number of rows, since it works well for us, but we could add a task to our 
roadmap to implement a limit on the raw data size instead.
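The two strategies being weighed (cap the row count per row group vs. cap the raw, pre-compression data size) can be sketched as a writer that flushes the current row group as soon as either threshold is reached. This is purely an illustrative sketch; the class and method names (RowGroupBuffer, write, flush) are hypothetical and are not part of the parquet-mr API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: buffer rows and flush a "row group" when EITHER
// a row-count limit OR a raw (uncompressed) data-size limit is hit.
// Checking the raw size guards against highly compressible (e.g. constant)
// columns producing row groups with hundreds of millions of rows.
public class RowGroupBuffer {
    private final long maxRows;      // limit on rows per row group
    private final long maxRawBytes;  // limit on raw data size per row group
    private final List<byte[]> rows = new ArrayList<>();
    private long rawBytes = 0;
    private int flushes = 0;

    public RowGroupBuffer(long maxRows, long maxRawBytes) {
        this.maxRows = maxRows;
        this.maxRawBytes = maxRawBytes;
    }

    public void write(byte[] row) {
        rows.add(row);
        rawBytes += row.length;  // size before compression is applied
        if (rows.size() >= maxRows || rawBytes >= maxRawBytes) {
            flush();
        }
    }

    // In a real writer this would close out the row group on disk;
    // here we just reset the buffer and count the flush.
    private void flush() {
        flushes++;
        rows.clear();
        rawBytes = 0;
    }

    public int flushCount() { return flushes; }
}
```

With a row limit of 2, four tiny rows produce two flushes even though their raw size is negligible; with a small byte limit, a handful of large rows would flush just as early.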

> Limit the number of rows per block and per split
> ------------------------------------------------
>
>                 Key: PARQUET-344
>                 URL: https://issues.apache.org/jira/browse/PARQUET-344
>             Project: Parquet
>          Issue Type: Improvement
>          Components: parquet-mr
>            Reporter: Quentin Francois
>
> We use Parquet to store raw metrics data and then query this data with 
> Hadoop-Pig. 
> The issue is that we sometimes end up with small Parquet files (~80 MB) that 
> contain more than 300,000,000 rows, usually because a constant metric 
> compresses extremely well. Too well. As a result, a small number of maps 
> process up to 10x more rows than the other maps and we lose the benefits 
> of parallelization. 
> The fix for this has two components, I believe:
> 1. Be able to limit the number of rows per Parquet block (in addition to the 
> size limit).
> 2. Be able to limit the number of rows per split.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
