[ https://issues.apache.org/jira/browse/PARQUET-2342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17758638#comment-17758638 ]

ASF GitHub Bot commented on PARQUET-2342:
-----------------------------------------

majdyz opened a new pull request, #1135:
URL: https://github.com/apache/parquet-mr/pull/1135

   Make sure you have checked _all_ steps below.
   
   ### Jira
   
   - [x] My PR addresses the following [Parquet 
Jira](https://issues.apache.org/jira/browse/PARQUET-2342) issues and references 
them in the PR title. For example, "PARQUET-1234: My Parquet PR"
     - https://issues.apache.org/jira/browse/PARQUET-2342
     - In case you are adding a dependency, check if the license complies with 
the [ASF 3rd Party License 
Policy](https://www.apache.org/legal/resolved.html#category-x).
   
   ### Tests
   
   - [x] My PR adds the following unit tests __OR__ does not need testing for 
this extremely good reason:
     - `testMemColumnBinaryExceedIntMaxValue` in 
parquet-column/src/test/java/org/apache/parquet/column/mem/TestMemColumn.java
   
   ### Commits
   
   - [ ] My commits all reference Jira issues in their subject lines. In 
addition, my commits follow the guidelines from "[How to write a good git 
commit message](http://chris.beams.io/posts/git-commit/)":
     1. Subject is separated from body by a blank line
     1. Subject is limited to 50 characters (not including Jira issue reference)
     1. Subject does not end with a period
     1. Subject uses the imperative mood ("add", not "adding")
     1. Body wraps at 72 characters
     1. Body explains "what" and "why", not "how"
   
   ### Documentation
   
   - [ ] In case of new functionality, my PR adds documentation that describes 
how to use it.
     - All the public functions and classes in the PR contain Javadoc that 
explains what they do
   




> Parquet writer produced a corrupted file due to page value count overflow
> -------------------------------------------------------------------------
>
>                 Key: PARQUET-2342
>                 URL: https://issues.apache.org/jira/browse/PARQUET-2342
>             Project: Parquet
>          Issue Type: Bug
>          Components: parquet-mr
>            Reporter: Zamil Majdy
>            Priority: Major
>
> The Parquet writer only checks the number of buffered rows and the page size 
> to decide whether the content being written still fits in a single page.
> For a nested column (e.g. array/map) with a lot of nulls, it is possible to 
> buffer more than 2 billion values in one page while staying under the default 
> page-size and row-count thresholds (1 MB, 20000 rows).
>  
> Repro using Spark:
> {code:scala}
> val dir = "/tmp/anyrandomDirectory"
> spark.range(0, 20000, 1, 1)
>   .selectExpr("array_repeat(cast(null as binary), 110000) as n")
>   .write
>   .mode("overwrite")
>   .save(dir)
> val result = spark
>   .sql(s"select * from parquet.`$dir` limit 1000")
>   .collect() // This will break
> {code}
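
With the defaults above, the repro buffers 20000 rows x 110000 array elements =
2.2 billion leaf values into a single page, which exceeds Integer.MAX_VALUE
(2,147,483,647), so the page's 32-bit value count overflows. The sketch below is
only a hypothetical illustration of the kind of guard that is missing (class and
method names are made up, not the actual parquet-mr or PR code): it forces a
page flush once the buffered value count approaches the int32 limit, in addition
to the existing row-count and size checks.

{code:java}
// Hypothetical sketch only -- PageValueCountGuard, valueWritten, endRecord and
// writePage are illustrative names, not the parquet-mr API or this PR's code.
public class PageValueCountGuard {

  // The Parquet page header stores the value count in a signed 32-bit field,
  // so a single page must never hold more than Integer.MAX_VALUE values.
  private static final long MAX_VALUES_PER_PAGE = Integer.MAX_VALUE;

  private long bufferedValues = 0; // leaf values (including nulls) in the current page
  private long bufferedRows = 0;   // records in the current page

  /** Called once per leaf value (null or not) written to the current page. */
  public void valueWritten() {
    bufferedValues++;
  }

  /**
   * Called at the end of each record to decide whether the current page
   * should be flushed.
   *
   * @param bufferedBytes     current estimated page size in bytes
   * @param pageSizeThreshold configured page size limit (default 1 MB)
   * @param pageRowCountLimit configured row count limit (default 20000)
   */
  public void endRecord(long bufferedBytes, long pageSizeThreshold, long pageRowCountLimit) {
    bufferedRows++;

    // The existing checks: row count and estimated size. A null-heavy nested
    // column encodes to almost no bytes, so neither check fires even though
    // billions of values have been buffered.
    boolean shouldFlush =
        bufferedRows >= pageRowCountLimit || bufferedBytes >= pageSizeThreshold;

    // The missing check: never let the buffered value count exceed what the
    // page header's int32 value-count field can represent.
    shouldFlush |= bufferedValues >= MAX_VALUES_PER_PAGE;

    if (shouldFlush) {
      writePage();
    }
  }

  private void writePage() {
    // ... serialize the page; the value count now always fits in an int
    bufferedValues = 0;
    bufferedRows = 0;
  }
}
{code}

With a guard like this, the repro above would break the page after roughly
19,500 rows (about Integer.MAX_VALUE / 110,000 values per row) instead of
letting the page value count wrap around.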



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
