[ https://issues.apache.org/jira/browse/SPARK-15719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814061#comment-16814061 ]

Ruslan Dautkhanov commented on SPARK-15719:
-------------------------------------------

[~lian cheng] quick question on this part from the description -

{quote}
when schema merging is enabled, we need to read footers of all files anyway to 
do the merge
{quote}
Is that still accurate in current Spark 2.3 / 2.4?
I was looking at ParquetFileFormat.inferSchema, and it does look at 
`_common_metadata` and `_metadata` files here - 

https://github.com/apache/spark/blob/v2.4.1/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala#L231

Or would Spark still need to look at all files in all partitions, not just the 
summary files? 

Thank you.
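For context, here is a minimal sketch of the schema-merging read path being asked about. It assumes a running SparkSession bound to `spark` and a hypothetical dataset path `/data/events`; `mergeSchema` and `spark.sql.parquet.mergeSchema` are the documented Spark options.

```scala
// Hypothetical path; assumes a live SparkSession named `spark`.
// With mergeSchema enabled, Spark reconciles the schemas of the Parquet
// part-files (consulting _common_metadata / _metadata summary files when
// they are present, per ParquetFileFormat.inferSchema).
val merged = spark.read
  .option("mergeSchema", "true") // per-read override of spark.sql.parquet.mergeSchema
  .parquet("/data/events")

merged.printSchema()
```

This is a usage sketch, not a claim about which footers get read; the question above is exactly about that.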

> Disable writing Parquet summary files by default
> ------------------------------------------------
>
>                 Key: SPARK-15719
>                 URL: https://issues.apache.org/jira/browse/SPARK-15719
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.0
>            Reporter: Cheng Lian
>            Assignee: Cheng Lian
>            Priority: Major
>              Labels: release_notes, releasenotes
>             Fix For: 2.0.0
>
>
> Parquet summary files are not particularly useful nowadays, since
> # when schema merging is disabled, we assume the schemas of all Parquet 
> part-files are identical, thus we can read the footer from any part-file.
> # when schema merging is enabled, we need to read footers of all files anyway 
> to do the merge.
> On the other hand, writing summary files can be expensive, because the 
> footers of all part-files must be read and merged. This is particularly 
> costly when appending a small dataset to a large existing Parquet dataset.
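The write-side cost described in the quoted text can be sketched via the Hadoop configuration key that older parquet-mr versions read for summary-file generation. This is a hedged sketch: it assumes a running SparkSession `spark` and an existing DataFrame `df`; `parquet.enable.summary-metadata` is the legacy parquet-mr key (newer releases use `parquet.summary.metadata.level`).

```scala
// Disable generation of _metadata / _common_metadata summary files on write.
// This issue makes that the default; shown here as an explicit setting.
spark.sparkContext.hadoopConfiguration
  .set("parquet.enable.summary-metadata", "false")

// With summary files disabled, appends no longer pay the cost of re-reading
// and merging the footers of all existing part-files.
df.write.mode("append").parquet("/data/events")
```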



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
