Github user frreiss commented on the pull request:
https://github.com/apache/spark/pull/6208#issuecomment-103209039
Hi Mike,
Thanks for getting back to me on this, and thanks for pointing out the new
multi-dialect support added under SPARK-5213. I'm fine with putting ANSI
SQL under a separate dialect, but it looks like no dialect currently bundled
with Spark is compliant with any version of the SQL standard. Are there plans
to include such a dialect? SQL compliance (particularly for basics like
identifier and string-literal syntax) is a fundamental requirement: Spark
won't be able to import DDL or
queries from any major database without a compliant parser. Users who are
accustomed to ANSI SQL will be frustrated if queries that work against
Oracle or DB2 won't parse on Spark.
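To make the incompatibility concrete, here is a small illustration (the
table and column names are hypothetical):

    -- ANSI SQL (Oracle, DB2, PostgreSQL, ...): double quotes delimit
    -- identifiers; single quotes delimit string literals.
    SELECT "order_id" FROM orders WHERE status = 'OPEN';

    -- Spark's current parser follows HiveQL: identifiers take backticks,
    -- and "order_id" above would parse as a string literal rather than a
    -- column reference.
    SELECT `order_id` FROM orders WHERE status = 'OPEN';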
Fred
From: Michael Armbrust <[email protected]>
To: apache/spark <[email protected]>
Cc: Frederick R Reiss/Almaden/IBM@IBMUS
Date: 05/18/2015 12:20 PM
Subject: Re: [spark] [SPARK-6649] [SQL] Made double quotes denote
identifiers (#6208)
Thanks for working on this, but we can't change the semantics of the SQL
parser or we'll break existing users' queries. I suggest we close this issue
and you look at our new pluggable dialect support if this is something you
need for compatibility.
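For reference, the pluggable dialect support mentioned above is SPARK-5213;
as I understand it, the active dialect in Spark 1.x can be switched at
runtime via the spark.sql.dialect setting (the class-name form below is an
assumption based on the JIRA, and the class name itself is hypothetical):

    -- choose one of the built-in dialects...
    SET spark.sql.dialect=hiveql;
    -- ...or, with SPARK-5213, the fully-qualified name of a custom
    -- parser dialect implementation (hypothetical class name)
    SET spark.sql.dialect=com.example.AnsiSQLDialect;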