Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18849#discussion_r134038745
  
    --- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
    @@ -1175,6 +1205,27 @@ private[spark] class HiveExternalCatalog(conf: SparkConf, hadoopConf: Configurat
         client.listFunctions(db, pattern)
       }
     
    +  /** Detect whether a table is stored with Hive-compatible metadata. */
    +  private def isHiveCompatible(table: CatalogTable): Boolean = {
    --- End diff --
    
    > The value of this new compatibility conf/flag becomes invalid.
    
    Also, that's not accurate. While old Spark versions can still corrupt these tables, this property is meant to be a reliable way to detect compatibility going forward: if more cases like this one come up, they can be handled without guessing whether the table is compatible.
    
    So, in my view, as long as old Spark versions don't drop the property when altering tables (and it seems they don't), it's beneficial to have this explicit compatibility flag.
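    
    To make the idea concrete, here is a minimal sketch of what such a check could look like. The property key and the fallback helper are hypothetical (the actual diff is truncated above); the point is just that an explicit flag written at table-creation time is authoritative, and guessing is only needed when it's absent:

```scala
import org.apache.spark.sql.catalyst.catalog.CatalogTable

// Hypothetical property key; the real key used by the PR is not
// visible in the truncated diff above.
private val HIVE_COMPATIBLE_PROP = "spark.sql.hive.compatible"

/** Detect whether a table is stored with Hive-compatible metadata. */
private def isHiveCompatible(table: CatalogTable): Boolean = {
  table.properties.get(HIVE_COMPATIBLE_PROP) match {
    // Newer Spark versions write the flag explicitly, so trust it.
    case Some(value) => value.toBoolean
    // Tables created by older versions never wrote the flag, so fall
    // back to inspecting the metadata (hypothetical helper, shown here
    // only for illustration).
    case None => guessCompatibilityFromMetadata(table)
  }
}
```

    Keeping the explicit flag authoritative means the heuristic only has to cover tables written before the flag existed, which is exactly the forward-compatibility argument above.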

