Github user gatorsmile commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18849#discussion_r134027875
  
    --- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
    @@ -1175,6 +1205,27 @@ private[spark] class HiveExternalCatalog(conf: SparkConf, hadoopConf: Configurat
         client.listFunctions(db, pattern)
       }
     
    +  /** Detect whether a table is stored with Hive-compatible metadata. */
    +  private def isHiveCompatible(table: CatalogTable): Boolean = {
    --- End diff --
    
    > why is it bad to have the compatibility bit be a table property?
    
    This compatibility bit would only be understood by the Spark SQL versions that know about it (2.3+, if we added it). If another Spark SQL engine (e.g., 2.2) shares the same metastore, it can make a schema change by altering the table properties (i.e., the behavior before this PR). That would break the assumption we make here: the value of the new compatibility flag would become stale and invalid.
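    For concreteness, here is the kind of cross-version sequence being described (a hedged sketch: the `hive.compatible` property key and the `spark23`/`spark22` session handles are invented for illustration, not actual Spark names):
    
    ```scala
    import org.apache.spark.sql.SparkSession
    
    // Hypothetical handles for two engines sharing one metastore; the
    // 'hive.compatible' property key mentioned below is also made up.
    val spark23: SparkSession = ???  // a Spark 2.3+ session (flag-aware)
    val spark22: SparkSession = ???  // a Spark 2.2 session (flag-unaware)
    
    // Spark 2.3+ creates the table and records the bit, e.g.
    // TBLPROPERTIES ('hive.compatible' = 'true').
    spark23.sql("CREATE TABLE t (a INT) STORED AS PARQUET")
    
    // The Spark 2.2 engine changes the schema by rewriting the Spark
    // schema in the table properties (the pre-PR behavior). It knows
    // nothing about the flag and leaves it untouched.
    spark22.sql("ALTER TABLE t ADD COLUMNS (b STRING)")
    
    // The flag still reads 'true', but the Hive-visible schema and the
    // Spark-recorded schema have diverged, so the bit is now meaningless.
    ```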
    
    So far, the safest way to check compatibility is to compare the schemas. If you think that is not enough, we can add the same check we do for CREATE TABLE.
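    A minimal sketch of what that comparison could amount to (hedged: `hiveVisibleSchema` and `schemaFromTableProperties` are hypothetical helpers standing in for the real decoding logic, which the quoted diff does not show):
    
    ```scala
    import org.apache.spark.sql.catalyst.catalog.CatalogTable
    import org.apache.spark.sql.types.StructType
    
    // Hypothetical helpers: the schema Hive stores for the table vs. the
    // schema Spark recorded in the table properties.
    def hiveVisibleSchema(table: CatalogTable): StructType = ???
    def schemaFromTableProperties(table: CatalogTable): StructType = ???
    
    // Treat a table as Hive-compatible only while the two schemas still
    // agree: a property flag can go stale, but the schemas cannot lie.
    def isHiveCompatible(table: CatalogTable): Boolean = {
      def normalize(s: StructType) =
        s.fields.map(f => (f.name.toLowerCase, f.dataType))
      normalize(hiveVisibleSchema(table))
        .sameElements(normalize(schemaFromTableProperties(table)))
    }
    ```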

