GitHub user budde opened a pull request:

    https://github.com/apache/spark/pull/17229

    [SPARK-19611][SQL] Introduce configurable table schema inference

    Add a new configuration option that allows Spark SQL to infer a
    case-sensitive schema from a Hive Metastore table's data files when a
    case-sensitive schema can't be read from the table properties.
    
    - Add spark.sql.hive.caseSensitiveInferenceMode param to SQLConf
    - Add schemaPreservesCase field to CatalogTable (set to false when the schema
      can't successfully be read from the Hive table properties)
    - Perform schema inference in HiveMetastoreCatalog if schemaPreservesCase is
      false, depending on spark.sql.hive.caseSensitiveInferenceMode
    - Add alterTableSchema() method to the ExternalCatalog interface
    - Add HiveSchemaInferenceSuite tests
    - Refactor and move ParquetFileFormat.mergeMetastoreParquetSchema() to
      HiveMetastoreCatalog.mergeWithMetastoreSchema (see the sketch after this
      list)
    - Move schema merging tests from ParquetSchemaSuite to
      HiveSchemaInferenceSuite
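    
    The merge step moved into HiveMetastoreCatalog pairs fields by name
    case-insensitively, preferring the case-sensitive names inferred from the
    data files. A minimal illustrative sketch of that idea (sketch only, not the
    actual mergeWithMetastoreSchema implementation; the object and method names
    here are made up for illustration):
    
    ```scala
    import org.apache.spark.sql.types.{StructField, StructType}
    
    // Sketch only: substitute each metastore field with the case-sensitive
    // field inferred from the data files, matching on the lower-cased name.
    object SchemaMergeSketch {
      def mergeWithInferred(
          metastoreSchema: StructType,
          inferredSchema: StructType): StructType = {
        // Index the inferred (case-sensitive) fields by their lower-cased names.
        val inferredByLowerName: Map[String, StructField] =
          inferredSchema.fields.map(f => f.name.toLowerCase -> f).toMap
    
        // Prefer the inferred field when one exists; otherwise keep the
        // metastore field as-is.
        StructType(metastoreSchema.fields.map { metastoreField =>
          inferredByLowerName.getOrElse(metastoreField.name.toLowerCase, metastoreField)
        })
      }
    }
    ```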
    
    [JIRA for this change](https://issues.apache.org/jira/browse/SPARK-19611)
    
    The tests in ```HiveSchemaInferenceSuite``` verify that schema inference
    works as expected. ```ExternalCatalogSuite``` has also been extended to
    cover the new ```alterTableSchema()``` API.
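    
    For anyone trying the change out, a hedged usage sketch: the config key
    comes from this PR, but the mode value shown ("INFER_AND_SAVE") and the
    table name are assumptions for illustration and may not match the final
    option names.
    
    ```scala
    import org.apache.spark.sql.SparkSession
    
    object CaseSensitiveInferenceExample {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("case-sensitive-inference-example")
          .enableHiveSupport()
          // Let Spark SQL infer a case-sensitive schema from the data files
          // when the table properties don't carry one. The value is assumed.
          .config("spark.sql.hive.caseSensitiveInferenceMode", "INFER_AND_SAVE")
          .getOrCreate()
    
        // Mixed-case column names in the underlying data files should now
        // resolve with their original casing. Placeholder table name.
        spark.sql("SELECT * FROM some_metastore_table").printSchema()
        spark.stop()
      }
    }
    ```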

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/budde/spark SPARK-19611-2.1

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/17229.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #17229
    
----
commit 27e9f68b3a704038911c0c93220728d97cddb299
Author: Budde <[email protected]>
Date:   2017-03-09T20:55:33Z

    [SPARK-19611][SQL] Introduce configurable table schema inference
    
    Add a new configuration option that allows Spark SQL to infer a
    case-sensitive schema from a Hive Metastore table's data files when a
    case-sensitive schema can't be read from the table properties.
    
    - Add spark.sql.hive.caseSensitiveInferenceMode param to SQLConf
    - Add schemaPreservesCase field to CatalogTable (set to false when the schema
      can't successfully be read from the Hive table properties)
    - Perform schema inference in HiveMetastoreCatalog if schemaPreservesCase is
      false, depending on spark.sql.hive.caseSensitiveInferenceMode
    - Add alterTableSchema() method to the ExternalCatalog interface
    - Add HiveSchemaInferenceSuite tests
    - Refactor and move ParquetFileFormat.mergeMetastoreParquetSchema() to
      HiveMetastoreCatalog.mergeWithMetastoreSchema
    - Move schema merging tests from ParquetSchemaSuite to
      HiveSchemaInferenceSuite
    
    [JIRA for this change](https://issues.apache.org/jira/browse/SPARK-19611)
    
    The tests in ```HiveSchemaInferenceSuite``` verify that schema inference
    works as expected. ```ExternalCatalogSuite``` has also been extended to
    cover the new ```alterTableSchema()``` API.
    
    Author: Budde <[email protected]>
    
    Closes #16944 from budde/SPARK-19611.

----

