[
https://issues.apache.org/jira/browse/PARQUET-344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644738#comment-14644738
]
Quentin Francois commented on PARQUET-344:
------------------------------------------
Thanks [~rdblue]. In our case, solving the problem that way would require
decreasing parquet.block.size to something like a few MB. I believe that a
block size that small would significantly reduce the compression we are
looking for from Parquet. We found that 128 MB is a good block size for most
of our data, except for the small fraction of it that produces the small
files with hundreds of millions of rows I mentioned.
So limiting the number of rows would just be a "safety" parameter that would
let us keep a decent block size while not ending up with a few small files
containing hundreds of millions of rows.
I am not sure I am being very clear...
> Limit the number of rows per block and per split
> ------------------------------------------------
>
> Key: PARQUET-344
> URL: https://issues.apache.org/jira/browse/PARQUET-344
> Project: Parquet
> Issue Type: Improvement
> Components: parquet-mr
> Reporter: Quentin Francois
> Original Estimate: 504h
> Remaining Estimate: 504h
>
> We use Parquet to store raw metrics data and then query this data with
> Hadoop-Pig.
> The issue is that sometimes we end up with small Parquet files (~80 MB)
> that contain more than 300,000,000 rows, usually because of a constant
> metric that compresses extremely well. Too well. As a result, a very small
> number of map tasks each process up to 10x more rows than the other maps,
> and we lose the benefits of parallelization.
> I believe the fix for this has two components:
> 1. Be able to limit the number of rows per Parquet block (in addition to the
> size limit).
> 2. Be able to limit the number of rows per split.
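The row-limit behavior proposed in point 1 can be sketched as a dual-threshold
flush: a row group is closed when either the accumulated byte size or the row
count reaches its limit. This is only an illustration of the proposed
behavior, not actual parquet-mr code; the class and member names
(RowGroupBuffer, maxRows, etc.) are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: buffer rows and flush a "row group" when EITHER the
// byte limit (analogous to parquet.block.size) OR the proposed row-count
// "safety" limit is reached. Highly compressible data (e.g. a constant
// metric) hits maxRows long before it hits maxBytes.
class RowGroupBuffer {
    private final long maxBytes;  // analogous to parquet.block.size
    private final long maxRows;   // the proposed per-block row limit
    private long bufferedBytes = 0;
    private long bufferedRows = 0;
    // One {rows, bytes} entry per flushed group, for inspection.
    private final List<long[]> flushedGroups = new ArrayList<>();

    RowGroupBuffer(long maxBytes, long maxRows) {
        this.maxBytes = maxBytes;
        this.maxRows = maxRows;
    }

    void write(long encodedRowBytes) {
        bufferedBytes += encodedRowBytes;
        bufferedRows += 1;
        // Flush when either threshold is reached.
        if (bufferedBytes >= maxBytes || bufferedRows >= maxRows) {
            flush();
        }
    }

    void flush() {
        if (bufferedRows == 0) return;
        flushedGroups.add(new long[] { bufferedRows, bufferedBytes });
        bufferedRows = 0;
        bufferedBytes = 0;
    }

    List<long[]> groups() { return flushedGroups; }
}
```

With maxBytes = 128 MB and maxRows = 1,000,000, a stream of rows that encode
to a single byte each would flush every million rows instead of accumulating
hundreds of millions of rows in one group, which is exactly the failure mode
described above.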
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)