[ 
https://issues.apache.org/jira/browse/PARQUET-409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15060774#comment-15060774
 ] 

Daniel Weeks commented on PARQUET-409:
--------------------------------------

I definitely think it's worth exposing as a configurable property.  However, I 
haven't seen an issue where these checks are producing bad row group sizes.

I have seen some outlier datasets with records in excess of 10MB, but only a 
few records per file. At that record size, you could get disproportionately 
sized row groups given enough records between checks.
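The scheduling logic under discussion estimates, from the average record size so far, how many more records can be written before the next memory check, and clamps that interval between a minimum and maximum record count. A minimal Java sketch (names and constants here are illustrative, not parquet-mr's actual code) shows how a hardcoded 100-record minimum interval overshoots badly when individual records are around 10MB:

```java
// Illustrative sketch of the adaptive size-check scheduling discussed in
// this issue; names and constants are hypothetical, not parquet-mr's code.
public class SizeCheckSketch {
    // Hardcoded bounds on how many records may pass between size checks.
    static final long MIN_RECORDS_BETWEEN_CHECKS = 100;
    static final long MAX_RECORDS_BETWEEN_CHECKS = 10_000;

    /**
     * Given the records written so far, the buffered memory size, and the
     * target row group size, pick the record count at which to check again.
     */
    static long nextCheck(long recordCount, long memSize, long rowGroupSize) {
        float avgRecordSize = (float) memSize / recordCount;
        // Records that would fill a row group at the current average size.
        long recordsPerRowGroup = (long) (rowGroupSize / avgRecordSize);
        // Check roughly halfway to the projected flush point...
        long next = (recordCount + recordsPerRowGroup) / 2;
        // ...but no sooner than the minimum and no later than the maximum.
        next = Math.max(next, recordCount + MIN_RECORDS_BETWEEN_CHECKS);
        next = Math.min(next, recordCount + MAX_RECORDS_BETWEEN_CHECKS);
        return next;
    }

    public static void main(String[] args) {
        // 10 records buffered at ~10MB each against a 128MB row group:
        // only ~12 more records fit, yet the minimum bound defers the
        // next check until record 110 (~1GB buffered).
        System.out.println(nextCheck(10, 100L << 20, 128L << 20)); // prints 110
    }
}
```

Making the minimum/maximum bounds configurable, as PARQUET-99 did for the page-level checks, would let such outlier datasets lower the minimum interval and keep row group sizes closer to the target.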

> InternalParquetRecordWriter doesn't use min/max row counts
> ----------------------------------------------------------
>
>                 Key: PARQUET-409
>                 URL: https://issues.apache.org/jira/browse/PARQUET-409
>             Project: Parquet
>          Issue Type: Bug
>          Components: parquet-mr
>    Affects Versions: 1.8.1
>            Reporter: Ryan Blue
>             Fix For: 1.9.0
>
>
> PARQUET-99 added settings to control the min and max number of rows between 
> size checks when flushing pages, and a setting to control whether to always 
> use a static size (the min). The [InternalParquetRecordWriter has similar 
> checks|https://github.com/apache/parquet-mr/blob/master/parquet-hadoop/src/main/java/org/apache/parquet/hadoop/InternalParquetRecordWriter.java#L143]
>  that don't use those settings. We should determine whether it should be 
> updated to use those settings or similar ones.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
