[
https://issues.apache.org/jira/browse/PARQUET-344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644711#comment-14644711
]
Ryan Blue commented on PARQUET-344:
-----------------------------------
[~QuentinFra], you can currently set the row group size and HDFS block size.
That allows you to make smaller row groups and control the parallelism.
* {{parquet.block.size}} - the target row group size in bytes; the writer tries to stay slightly under this
* {{dfs.blocksize}} - the HDFS block size; make this a whole-number multiple of the row group size
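For reference, a minimal sketch of setting those two properties on a Hadoop {{Configuration}} before writing with parquet-mr (the 64 MB / 128 MB values are only illustrative, not recommendations from this issue):
{code:java}
import org.apache.hadoop.conf.Configuration;

public class ParquetSizingExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Target row group size in bytes; the writer tries to stay slightly
    // under this. 64 MB here is an assumed example value.
    long rowGroupSize = 64L * 1024 * 1024;
    conf.setLong("parquet.block.size", rowGroupSize);

    // HDFS block size, chosen as a whole-number multiple of the row group
    // size so row groups do not straddle HDFS block boundaries.
    conf.setLong("dfs.blocksize", 2 * rowGroupSize);

    // Smaller row groups mean more row groups (and therefore more splits
    // and map tasks) over the same amount of data.
    System.out.println("parquet.block.size = " + conf.get("parquet.block.size"));
    System.out.println("dfs.blocksize      = " + conf.get("dfs.blocksize"));
  }
}
{code}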
Is that sufficient for your use case, or do you think that a limit in terms of
number of rows would be better? We can certainly add that, but I'm not sure
it's a good idea. When you set the row group size in bytes, you don't have
to know what compression ratio you're going to get.
> Limit the number of rows per block and per split
> ------------------------------------------------
>
> Key: PARQUET-344
> URL: https://issues.apache.org/jira/browse/PARQUET-344
> Project: Parquet
> Issue Type: Improvement
> Components: parquet-mr
> Reporter: Quentin Francois
> Original Estimate: 504h
> Remaining Estimate: 504h
>
> We use Parquet to store raw metrics data and then query this data with
> Hadoop-Pig.
> The issue is that sometimes we end up with small Parquet files (~80 MB) that
> contain more than 300,000,000 rows, usually because of a constant metric
> which results in a very good compression. Too good. As a result we have
> very few maps that each process up to 10x more rows than the other maps,
> and we lose the benefits of parallelization.
> I believe the fix for that has two components:
> 1. Be able to limit the number of rows per Parquet block (in addition to the
> size limit).
> 2. Be able to limit the number of rows per split.