Github user liancheng commented on a diff in the pull request:

    https://github.com/apache/spark/pull/7238#discussion_r35087470
  
    --- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/parquet/newParquet.scala ---
    @@ -407,7 +410,17 @@ private[sql] class ParquetRelation2(
           val filesToTouch =
             if (shouldMergeSchemas) {
               // Also includes summary files, 'cause there might be empty 
partition directories.
    -          (metadataStatuses ++ commonMetadataStatuses ++ 
dataStatuses).toSeq
    +
    +          // If skipMergePartFiles config is true, we assume that all 
part-files are the same for
    +          // their schema with summary files, so we ignore them when 
merging schema.
    +          // If the config is false, which is the default setting, we 
merge all part-files.
    +          val needMerged: Seq[FileStatus] =
    --- End diff --
    
    After thinking about this again, I feel that this configuration can be pretty dangerous. Actually, it's not unusual for various tools/systems to fail to write Parquet summary files after writing the actual Parquet data. There are at least two common cases:
    
    - Hive uses `NullOutputCommitter`, which bypasses 
`ParquetOutputCommitter.commitJob()`, where summary files are written. So 
Parquet tables written by Hive never have summary files.
    - When appending Parquet data to a directory containing existing data, if the newly written files carry user-defined metadata that differs from the existing files' (i.e., different values assigned to the same key), Parquet simply gives up writing summary files (see [1][1], [2][2], and [PARQUET-194][3]).
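
    The second failure mode can be sketched as follows. This is a simplified stand-in, not the actual parquet-mr code: `mergeKeyValues` is a hypothetical helper, and the `Option`-based failure signal is my own modeling of parquet-mr catching the merge failure and skipping the summary file:

    ```scala
    // Simplified sketch of why appends can lose the summary file. Loosely
    // mirrors the strict merge of user-defined key/value metadata: the same
    // key mapped to two different values is a conflict.
    object MetadataMergeSketch {
      // Returns None on conflict, modeling parquet-mr "giving up" on the
      // summary file rather than writing one with ambiguous metadata.
      def mergeKeyValues(
          a: Map[String, String],
          b: Map[String, String]): Option[Map[String, String]] = {
        val conflict = a.keySet.intersect(b.keySet).exists(k => a(k) != b(k))
        if (conflict) None else Some(a ++ b)
      }

      def main(args: Array[String]): Unit = {
        val oldMeta = Map("writer" -> "spark-1.3")
        val newMeta = Map("writer" -> "spark-1.4")
        // Same key, different values: no summary file gets written.
        println(mergeKeyValues(oldMeta, newMeta)) // None
        // Disjoint keys merge fine.
        println(mergeKeyValues(oldMeta, Map("extra" -> "x")))
      }
    }
    ```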
    
    So it's quite possible to have multiple partition directories, some of which don't have summary files. With this flag enabled, part-files in those directories are completely ignored. This is exactly the situation illustrated in the test case you added.
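
    To make the danger concrete, here is a minimal sketch of the proposed file selection. The `FileStatus` case class is a stand-in for `org.apache.hadoop.fs.FileStatus`, and the file layout below is hypothetical, not taken from the actual `ParquetRelation2` code or the test case:

    ```scala
    // Stand-in for org.apache.hadoop.fs.FileStatus.
    case class FileStatus(path: String)

    object SkipPartFilesSketch {
      def isSummary(f: FileStatus): Boolean =
        f.path.endsWith("_metadata") || f.path.endsWith("_common_metadata")

      // With skipMergePartFiles on, only summary files feed schema merging;
      // a partition directory without one contributes nothing at all.
      def filesToTouch(
          all: Seq[FileStatus],
          skipMergePartFiles: Boolean): Seq[FileStatus] =
        if (skipMergePartFiles) all.filter(isSummary) else all

      def main(args: Array[String]): Unit = {
        val files = Seq(
          FileStatus("p=1/_metadata"),          // partition with a summary file
          FileStatus("p=1/part-00000.parquet"),
          FileStatus("p=2/part-00000.parquet")  // e.g. written by Hive: no summary
        )
        // Default (flag off): all three files are touched.
        println(filesToTouch(files, skipMergePartFiles = false).size) // 3
        // Flag on: partition p=2 disappears from schema merging entirely.
        println(filesToTouch(files, skipMergePartFiles = true).map(_.path))
      }
    }
    ```

    With the flag on, only `p=1/_metadata` survives the filter, so whatever schema `p=2` carries is silently dropped from the merged result.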
    
    [1]: 
https://github.com/apache/parquet-mr/blob/apache-parquet-1.7.0/parquet-hadoop/src/main/java/org/apache/parquet/hadoop/metadata/GlobalMetaData.java
    [2]: 
https://github.com/apache/parquet-mr/blob/apache-parquet-1.7.0/parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetOutputCommitter.java#L57-L65
    [3]: https://issues.apache.org/jira/browse/PARQUET-194

