lidinghao commented on a change in pull request #25390: [SPARK-28662] [SQL] Create Hive Partitioned Table without specifying data type for partition columns will success in Spark 3.0
URL: https://github.com/apache/spark/pull/25390#discussion_r312701862
 
 

 ##########
 File path: 
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala
 ##########
 @@ -985,7 +985,15 @@ class SparkSqlAstBuilder(conf: SQLConf) extends 
AstBuilder(conf) {
         } else {
           CreateTable(tableDescWithPartitionColNames, mode, Some(q))
         }
-      case None => CreateTable(tableDesc, mode, None)
+      case None =>
 
 Review comment:
   Hi, Yuanjian, thanks for the reasoning.
   I agree with you: Spark 2.4 and earlier versions throw an exception when a partition column's data type is missing. [#23376](https://github.com/apache/spark/pull/23376) introduced the current behavior, and this PR intends to detect this case and throw an exception.
   
   I don't have a Hive 3 environment on hand, so I added a unit test case on the Hive 3.1 branch and ran it; an exception is thrown, the same as in Hive 2:
   ```
   java.lang.RuntimeException: CREATE TABLE tbl(a int) PARTITIONED BY (b) STORED AS parquet;
   failed: (responseCode = 40000, errorMessage = FAILED: ParseException line 1:41 cannot recognize input near ')' 'STORED' 'AS' in column type, SQLState = 42000, exception = line 1:41 cannot recognize input near ')' 'STORED' 'AS' in column type)
   ```
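
   For illustration, the validation this PR describes could be sketched roughly as below. This is a hypothetical, simplified snippet, not the actual Spark parser code: `ColumnDef`, `PartitionColumnCheck`, and the error message are made-up names. The idea is simply to reject partition columns declared without a data type instead of silently creating the table.

   ```scala
   // Simplified stand-in for a parsed column definition: a partition
   // column like "b" in PARTITIONED BY (b) has dataType = None.
   case class ColumnDef(name: String, dataType: Option[String])

   object PartitionColumnCheck {
     // Collect the names of partition columns declared without a type.
     def missingTypes(partitionCols: Seq[ColumnDef]): Seq[String] =
       partitionCols.collect { case ColumnDef(name, None) => name }

     // Mirror the Hive behavior discussed above: fail the statement
     // rather than creating the table with untyped partition columns.
     def validate(partitionCols: Seq[ColumnDef]): Unit = {
       val bad = missingTypes(partitionCols)
       if (bad.nonEmpty) {
         throw new IllegalArgumentException(
           s"Partition column(s) ${bad.mkString(", ")} must specify a data type")
       }
     }
   }
   ```

   With this, `CREATE TABLE tbl(a int) PARTITIONED BY (b) STORED AS parquet` would fail at analysis time, matching the Hive 2/Hive 3 behavior shown in the exception above.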

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
