HeartSaVioR commented on issue #28026: [SPARK-31257][SQL] Unify create table 
syntax (WIP)
URL: https://github.com/apache/spark/pull/28026#issuecomment-607003471
 
 
   > The create test in SparkSqlParserSuite highlighted an existing problem 
with spark.sql.legacy.createHiveTableByDefault.enabled. With that table 
property disabled (default to USING), Hive was still used to create a table 
because Hive's PARTITION BY syntax was used. I think this is very confusing 
behavior! (@HeartSaVioR probably agrees.)
   
   Sure, I totally agree. That's one of the issues I raised in the mail thread; see the link below:
   
   
https://lists.apache.org/thread.html/ra2aa31ed7cbd4e34d3504adc97cae1301cc249bfb8f95565b808b0cb%40%3Cdev.spark.apache.org%3E
   
   It's wrong to simply assume that the Hive CREATE TABLE path is used whenever certain keywords (`ROW FORMAT`, `STORED AS`) appear. There are more differences in the details, including the point @rdblue made here. The issue didn't exist before, because the `USING` clause acted as a marker distinguishing the two syntaxes.
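   To make the ambiguity concrete, here is a rough sketch (the table name and columns are made up, and the exact routing depends on the parser rules under discussion, so treat this as illustrative rather than definitive):
   
   ```sql
   -- Hive-style syntax: the partition column is declared with a type inside
   -- PARTITIONED BY and there is no USING clause. Even with
   -- spark.sql.legacy.createHiveTableByDefault.enabled=false, this form can
   -- still end up on the Hive CREATE TABLE path, which is the surprising
   -- behavior discussed above.
   CREATE TABLE sales (id INT, amount DOUBLE)
   PARTITIONED BY (ds STRING);
   
   -- Native syntax: USING unambiguously marks the data source path, and the
   -- partition column refers to a column already defined in the schema.
   CREATE TABLE sales (id INT, amount DOUBLE, ds STRING)
   USING parquet
   PARTITIONED BY (ds);
   ```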
   
   > Because of this problem, I think we should either roll back the patch to 
make USING the default in 3.0 since it is not applied everywhere. That, or get 
this syntax unification into 3.0.
   
   That's in line with my proposal as well. The ideal approach would be to revert SPARK-30098 in Spark 3.0 and get this right via syntax unification in a later version. I've also proposed some alternatives (adding a marker, turning the legacy config on by default), but I haven't heard any feedback.
   
