Github user cloud-fan commented on a diff in the pull request:
    --- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
    @@ -447,17 +461,10 @@ private[spark] class HiveExternalCatalog(conf: SparkConf, hadoopConf: Configurat
         } else {
           getProviderFromTableProperties(table).map { provider =>
             assert(provider != "hive", "Hive serde table should not save provider in table properties.")
    -        // SPARK-15269: Persisted data source tables always store the location URI as a storage
    -        // property named "path" instead of standard Hive `dataLocation`, because Hive only
    -        // allows directory paths as location URIs while Spark SQL data source tables also
    -        // allows file paths. So the standard Hive `dataLocation` is meaningless for Spark SQL
    -        // data source tables.
    -        // Spark SQL may also save external data source in Hive compatible format when
    -        // possible, so that these tables can be directly accessed by Hive. For these tables,
    -        // `dataLocation` is still necessary. Here we also check for input format because only
    -        // these Hive compatible tables set this field.
    -        val storage = if (table.tableType == EXTERNAL && {
    - = None)
    +        // Data source table always put its location URI(if it has) in table properties, to work
    +        // around a hive metastore issue. We should read it back before return the table metadata.
    +        val storage = if (table.tableType == EXTERNAL) {
    --- End diff --
    what do you mean? `HiveExternalCatalog.alterTable` always adds this location
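
The diff's new comment describes the pattern under discussion: data source tables persist their location URI as a table property (rather than as the Hive `dataLocation`), and the catalog must read it back when restoring the table metadata. A minimal sketch of that round-trip, using simplified stand-in types (`CatalogTable`, `CatalogStorageFormat`, and the `"path"` property key are assumptions, not the exact Spark SQL classes):

```scala
// Simplified stand-ins for Spark SQL's catalog metadata types.
case class CatalogStorageFormat(locationUri: Option[String])
case class CatalogTable(
    properties: Map[String, String],
    storage: CatalogStorageFormat)

object RestoreLocation {
  // Data source tables store their location under a table property
  // (here assumed to be "path") to work around Hive metastore
  // restrictions on location URIs. Read it back into the storage
  // format before returning the table metadata to the caller.
  def restore(table: CatalogTable): CatalogTable =
    table.properties.get("path") match {
      case Some(p) => table.copy(storage = table.storage.copy(locationUri = Some(p)))
      case None    => table
    }
}
```

Under this sketch, a table whose metastore entry carries `"path" -> "/tmp/data"` comes back with `storage.locationUri == Some("/tmp/data")`, while a table without the property is returned unchanged.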
