[ https://issues.apache.org/jira/browse/SPARK-26215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16703042#comment-16703042 ]

Marco Gaido commented on SPARK-26215:
-------------------------------------

[~cloud_fan] thanks for pinging me. I agree on adding such a rule, and since it 
is a breaking change, 3.0 is the right version to do it. I am wondering if we 
should create an umbrella JIRA for SQL standard compliance in 3.0: I also have 
some PRs which we can now revisit (e.g. failing on overflow) in order to achieve 
full (or at least better) SQL standard compliance. What do you think? Moreover, 
I think we should also decide which SQL standard we want to follow: maybe 
SQL:2011?

> define reserved keywords after SQL standard
> -------------------------------------------
>
>                 Key: SPARK-26215
>                 URL: https://issues.apache.org/jira/browse/SPARK-26215
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.4.0
>            Reporter: Wenchen Fan
>            Priority: Major
>
> There are 2 kinds of SQL keywords: reserved and non-reserved. Reserved 
> keywords can't be used as identifiers.
> In Spark SQL, we are too tolerant about non-reserved keywords. A lot of 
> keywords are non-reserved, and sometimes this causes ambiguity (IIRC we hit a 
> problem when improving the INTERVAL syntax).
> I think it would be better to just follow other databases or the SQL standard 
> to define reserved keywords, so that we don't need to think very hard about 
> how to avoid ambiguity.
> For reference: https://www.postgresql.org/docs/8.1/sql-keywords-appendix.html
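The reserved vs. non-reserved distinction described above can be sketched as 
follows. This is a minimal illustration with hypothetical keyword lists, not 
Spark's actual grammar: a reserved keyword can never be used as an identifier, 
while a non-reserved keyword can double as one, which is exactly what opens the 
door to parsing ambiguity.

```python
# Hypothetical keyword sets for illustration only (not Spark SQL's real lists).
RESERVED = {"SELECT", "FROM", "WHERE", "INTERVAL"}
NON_RESERVED = {"YEAR", "MONTH", "DAY"}

def is_valid_identifier(name: str) -> bool:
    """Reject reserved keywords; allow everything else,
    including non-reserved keywords (the ambiguity source)."""
    return name.upper() not in RESERVED

print(is_valid_identifier("sales"))   # True: an ordinary name
print(is_valid_identifier("year"))    # True: non-reserved keyword, also usable as a column name
print(is_valid_identifier("select"))  # False: reserved keyword
```

Because "year" above is both a keyword and a legal identifier, a parser seeing 
it must decide from context which one was meant; growing the reserved set (as 
the SQL standard does) removes that decision at the cost of breaking queries 
that used those words as names.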



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
