Github user liancheng commented on a diff in the pull request:

    https://github.com/apache/spark/pull/7154#discussion_r35293980
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/parquet/newParquet.scala ---
    @@ -345,14 +345,23 @@ private[sql] class ParquetRelation2(
         // Schema of the whole table, including partition columns.
         var schema: StructType = _
     
    +    // Cached leaves
    +    val cachedLeaves: mutable.Map[Path, FileStatus] = new mutable.HashMap[Path, FileStatus]()
    +
         /**
          * Refreshes `FileStatus`es, footers, partition spec, and table schema.
          */
         def refresh(): Unit = {
           // Lists `FileStatus`es of all leaf nodes (files) under all base directories.
    +      // We only care about new FileStatus and updated FileStatus.
           val leaves = cachedLeafStatuses().filter { f =>
    -        isSummaryFile(f.getPath) ||
    -          !(f.getPath.getName.startsWith("_") || f.getPath.getName.startsWith("."))
    +        (isSummaryFile(f.getPath) ||
    +          !(f.getPath.getName.startsWith("_") || f.getPath.getName.startsWith("."))) &&
    +          (!cachedLeaves.contains(f.getPath) ||
    +            cachedLeaves(f.getPath).getModificationTime < f.getModificationTime)
    +      }.map { f =>
    +        cachedLeaves += (f.getPath -> f)
    +        f
    --- End diff --
    
    This is a good point. After reading the footers and merging the schemas of new and updated files, we also need to merge the resulting schema with the old schema, because some columns may be missing from the new and/or updated files.
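
    For illustration, here is a minimal sketch of that extra merge step (not the actual code path in this PR; `mergeWithOld` is a hypothetical helper), folding the previously cached schema back into the schema freshly merged from new and updated footers:

    ```scala
    import org.apache.spark.sql.types.StructType

    // Keep every field of the freshly merged schema, then append the old fields
    // that the new/updated files no longer carry, so columns that only exist in
    // old files survive the refresh.
    def mergeWithOld(oldSchema: StructType, newSchema: StructType): StructType = {
      val newNames = newSchema.fieldNames.toSet
      StructType(newSchema.fields ++ oldSchema.fields.filterNot(f => newNames(f.name)))
    }
    ```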
    
    Actually, I found it might be difficult to define the "correctness" of the merged schema. Take the following scenario as an example:
    
    1. Initially there is file `f0`, which comes with a single column `c0`.
    
       Merged schema: `c0`
    
    2. File `f1` is added, which contains a single column `c1`.
    
       Merged schema: `c0`, `c1`
    
    3. File `f0` is removed.
    
       Which is the "correct" merged schema now?
    
       a. `c0`, `c1`
       b. `c1`
    
       I tend to use (a), because removing existing columns can be dangerous and may confuse downstream systems, but Spark SQL currently uses (b); see the sketch after this list. Also, we need to take the metastore schema into account for Parquet relations converted from metastore Parquet tables.
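
    As a concrete sketch of this scenario (hypothetical paths; `sqlContext` is assumed to be an existing `SQLContext`, and the files are laid out as partition directories purely for illustration):

    ```scala
    // 1. f0 carries a single column c0.
    sqlContext.range(0, 3).selectExpr("id AS c0").write.parquet("/tmp/merge-demo/part=0")

    // 2. f1 carries a single column c1.
    sqlContext.range(0, 3).selectExpr("id AS c1").write.parquet("/tmp/merge-demo/part=1")

    // The merged schema now contains c0 and c1 (plus the partition column).
    sqlContext.read.option("mergeSchema", "true").parquet("/tmp/merge-demo").printSchema()

    // 3. After part=0 is deleted out of band, re-reading yields only c1 (and the
    // partition column): the merged schema is derived solely from the footers of
    // the remaining files, i.e. option (b).
    ```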
    
    I think this issue is too complicated to be fixed in this PR. I agree with you that we should keep this PR simple and just re-read all the footers for now. That is already strictly better than the current implementation, not to mention that schema merging has been significantly accelerated by #7396.

