[ https://issues.apache.org/jira/browse/SPARK-36696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17412595#comment-17412595 ]
Gidon Gershinsky edited comment on SPARK-36696 at 9/9/21, 2:29 PM:
-------------------------------------------------------------------
The [fix|https://github.com/apache/parquet-mr/pull/925] for PARQUET-2078 solves
this problem. But the Arrow folks need to fix the `RowGroup.offset`
computation, since it might affect some of the encrypted files (if they are
read by Spark).
was (Author: gershinsky):
The [fix|https://github.com/apache/parquet-mr/pull/925] for PARQUET-2078 solves
this problem. But the Arrow folks need to fix the `RowGroup.offset`
computation, since it might affect some of the encrypted files.
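For anyone checking whether a particular file carries the problematic offsets, the footer metadata can be dumped from Python. A minimal sketch, assuming pyarrow is installed and using the attachment path from the report below (`RowGroup.offset` itself is a Thrift-level field; pyarrow exposes the related per-column-chunk offsets):
{code:python}
# Minimal sketch (assumes pyarrow): print the per-row-group offsets that
# parquet-mr 1.12.0 started validating when reading a file.
import pyarrow.parquet as pq

meta = pq.ParquetFile('/path/to/example.parquet').metadata
for i in range(meta.num_row_groups):
    rg = meta.row_group(i)
    col = rg.column(0)  # first column chunk of this row group
    print(i, rg.num_rows, col.file_offset, col.data_page_offset)
{code}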
> spark.read.parquet loads empty dataset
> --------------------------------------
>
> Key: SPARK-36696
> URL: https://issues.apache.org/jira/browse/SPARK-36696
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 3.2.0
> Reporter: Takuya Ueshin
> Priority: Blocker
> Attachments: example.parquet
>
>
> Here's a parquet file Spark 3.2/master can't read properly.
> The file was written by pandas and contains 3650 rows, but Spark
> 3.2/master returns an empty dataset.
> {code:python}
> >>> import pandas as pd
> >>> len(pd.read_parquet('/path/to/example.parquet'))
> 3650
> >>> spark.read.parquet('/path/to/example.parquet').count()
> 0
> {code}
> I suspect it's caused by the upgrade to Parquet 1.12.0.
> When I reverted the two commits related to Parquet 1.12.0 from branch-3.2:
> - [https://github.com/apache/spark/commit/e40fce919ab77f5faeb0bbd34dc86c56c04adbaa]
> - [https://github.com/apache/spark/commit/cbffc12f90e45d33e651e38cf886d7ab4bcf96da]
> Spark reads the data successfully.
> We need to add a workaround or revert the commits.
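> Until that happens, one possible stopgap (a sketch only, not the project's fix) is to read the affected file with pandas, which parses it correctly, and hand the rows to Spark:
> {code:python}
> # Stopgap sketch: pandas/pyarrow read the file correctly, so build the
> # Spark DataFrame from the pandas result instead of spark.read.parquet.
> import pandas as pd
> pdf = pd.read_parquet('/path/to/example.parquet')
> df = spark.createDataFrame(pdf)
> assert df.count() == 3650
> {code}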