[
https://issues.apache.org/jira/browse/PARQUET-2342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17758861#comment-17758861
]
ASF GitHub Bot commented on PARQUET-2342:
-----------------------------------------
wgtmac commented on code in PR #1135:
URL: https://github.com/apache/parquet-mr/pull/1135#discussion_r1305294522
##########
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetOutputFormat.java:
##########
@@ -146,6 +146,7 @@ public static enum JobSummaryLevel {
public static final String MAX_PADDING_BYTES =
"parquet.writer.max-padding";
public static final String MIN_ROW_COUNT_FOR_PAGE_SIZE_CHECK =
"parquet.page.size.row.check.min";
public static final String MAX_ROW_COUNT_FOR_PAGE_SIZE_CHECK =
"parquet.page.size.row.check.max";
+ public static final String VALUE_COUNT_FOR_PAGE_SIZE_CHECK =
"parquet.page.size.count.check";
Review Comment:
```suggestion
public static final String VALUE_COUNT_FOR_PAGE_SIZE_CHECK =
"parquet.page.size.value.count.check";
```
##########
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetOutputFormat.java:
##########
@@ -146,6 +146,7 @@ public static enum JobSummaryLevel {
public static final String MAX_PADDING_BYTES =
"parquet.writer.max-padding";
public static final String MIN_ROW_COUNT_FOR_PAGE_SIZE_CHECK =
"parquet.page.size.row.check.min";
public static final String MAX_ROW_COUNT_FOR_PAGE_SIZE_CHECK =
"parquet.page.size.row.check.max";
+ public static final String VALUE_COUNT_FOR_PAGE_SIZE_CHECK =
"parquet.page.size.count.check";
Review Comment:
Please also update the doc here:
https://github.com/apache/parquet-mr/blob/master/parquet-hadoop/README.md
> Parquet writer produced a corrupted file due to page value count overflow
> -------------------------------------------------------------------------
>
> Key: PARQUET-2342
> URL: https://issues.apache.org/jira/browse/PARQUET-2342
> Project: Parquet
> Issue Type: Bug
> Components: parquet-mr
> Reporter: Zamil Majdy
> Priority: Major
>
> The Parquet writer only checks the number of rows and the page size to decide
> whether the content it is buffering still fits in a single page.
> For a composite column (e.g. array/map) with a lot of nulls, it is
> possible to produce more than 2 billion values while staying under the default
> page-size and row-count thresholds (1 MB, 20,000 rows).
>
> Repro using Spark:
> {code:scala}
> val dir = "/tmp/anyrandomDirectory"
> spark.range(0, 20000, 1, 1)
>   .selectExpr("array_repeat(cast(null as binary), 110000) as n")
>   .write
>   .mode("overwrite")
>   .save(dir)
> val result = spark
>   .sql(s"select * from parquet.`$dir` limit 1000")
>   .collect() // This will break
> {code}
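The overflow in the repro can be confirmed with a few lines of arithmetic. This is a sketch, assuming the page value count is carried as a 32-bit `int` (as in the Parquet format's data page header), so a total past `Integer.MAX_VALUE` wraps negative:

```java
public class ValueCountOverflow {
    public static void main(String[] args) {
        // Numbers from the Spark repro: 20,000 rows, each an array
        // of 110,000 null elements.
        long rows = 20_000L;
        long valuesPerRow = 110_000L;
        long totalValues = rows * valuesPerRow;

        System.out.println(totalValues);                     // 2200000000
        System.out.println(totalValues > Integer.MAX_VALUE); // true
        // Narrowing to a 32-bit value count wraps around:
        System.out.println((int) totalValues);               // -2094967296
    }
}
```

So the default 20,000-row check alone never fires, while the per-page value count silently exceeds what a 32-bit field can represent, which is why the proposed `VALUE_COUNT_FOR_PAGE_SIZE_CHECK` threshold is needed.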
--
This message was sent by Atlassian Jira
(v8.20.10#820010)