Github user chenghao-intel commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4289#discussion_r24010127
  
    --- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/InsertIntoHiveTableSuite.scala ---
    @@ -172,4 +172,19 @@ class InsertIntoHiveTableSuite extends QueryTest {
     
         sql("DROP TABLE hiveTableWithStructValue")
       }
    +  
    +  test("SPARK-5498:partition schema does not match table schema"){
    +    val testData = TestHive.sparkContext.parallelize(
    +      (1 to 10).map(i => TestData(i, i.toString)))
    +    testData.registerTempTable("testData")
    +    val tmpDir = Files.createTempDir()
    +    sql(s"CREATE TABLE table_with_partition(key int,value string) PARTITIONED by (ds string) location '${tmpDir.toURI.toString}' ")
    +    sql("INSERT OVERWRITE TABLE table_with_partition  partition (ds='1') SELECT key,value FROM testData")
    +    sql("ALTER TABLE table_with_partition CHANGE COLUMN key key BIGINT")
    --- End diff --
    
    I just checked the [Hive Document](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-AlterTable).
    It says:
    `The CASCADE|RESTRICT clause is available in Hive 0.15.0. ALTER TABLE CHANGE COLUMN with CASCADE command changes the columns of a table's metadata, and cascades the same change to all the partition metadata. RESTRICT is the default, limiting column change only to table metadata.`
    I guess that in Hive 0.13.1, when the table schema is changed via `alter table`, only the table metadata is updated. Can you double-check whether the above query works for Hive 0.13.1?
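    To make the version dependency concrete, the two variants would read roughly as follows on a Hive version that has the clause (a sketch based only on the document quoted above; on Hive 0.13.1 the `CASCADE` keyword presumably would not even parse):

        -- RESTRICT (the default): only the table-level metadata changes;
        -- existing partitions keep the old column type.
        ALTER TABLE table_with_partition CHANGE COLUMN key key BIGINT RESTRICT;

        -- CASCADE: also propagates the type change to the metadata of
        -- every existing partition.
        ALTER TABLE table_with_partition CHANGE COLUMN key key BIGINT CASCADE;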

