GitHub user jiangxb1987 opened a pull request:
https://github.com/apache/spark/pull/15346
[SPARK-17741][SQL] Grammar to parse top level and nested data fields separately
## What changes were proposed in this pull request?
Currently we use the same grammar rule to parse top-level and nested data fields.
For example:
```
create table tbl_x(
  id bigint,
  nested struct<col1:string,col2:string>
)
```
shows both syntaxes: `id bigint` is a top-level field, while `col1:string` inside the struct is a nested field. In this PR we split this rule into a separate top-level rule and nested rule.
Before this PR,
```
sql("CREATE TABLE my_tab(column1: INT)")
```
parses successfully, even though the colon syntax is only intended for nested fields.
After this PR, the same statement throws a `ParseException`:
```
scala> sql("CREATE TABLE my_tab(column1: INT)")
org.apache.spark.sql.catalyst.parser.ParseException:
no viable alternative at input 'CREATE TABLE my_tab(column1:'(line 1, pos 27)
```
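To make the behavior change concrete, here is a minimal sketch you could run in `spark-shell`, assuming a `SparkSession` named `spark`; the `USING parquet` clause is an addition of this sketch (so the statement runs without Hive support), not part of the PR:
```scala
import org.apache.spark.sql.catalyst.parser.ParseException

// Nested fields inside STRUCT<...> keep the colon syntax and still parse;
// USING parquet is only here so the statement executes without Hive support.
spark.sql(
  "CREATE TABLE tbl_x(id BIGINT, nested STRUCT<col1:STRING, col2:STRING>) USING parquet")

// A colon after a top-level column name is now a parse error.
try {
  spark.sql("CREATE TABLE my_tab(column1: INT)")
} catch {
  case e: ParseException => println(s"rejected as expected: ${e.getMessage}")
}
```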
## How was this patch tested?
Added new test cases in `SparkSqlParserSuite`.
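For reference, a sketch of the kind of check such a test might perform, written here as a standalone ScalaTest suite driving `SparkSqlParser` directly; the suite and test names are illustrative and not the actual tests added in this PR:
```scala
import org.scalatest.FunSuite

import org.apache.spark.sql.catalyst.parser.ParseException
import org.apache.spark.sql.execution.SparkSqlParser
import org.apache.spark.sql.internal.SQLConf

class TopLevelColumnSyntaxSuite extends FunSuite {
  // SparkSqlParser took a SQLConf constructor argument at the time of this PR.
  private val parser = new SparkSqlParser(new SQLConf)

  test("colon after a top-level column name is rejected") {
    intercept[ParseException] {
      parser.parsePlan("CREATE TABLE my_tab(column1: INT)")
    }
  }

  test("colon inside struct<...> still parses") {
    parser.parsePlan(
      "CREATE TABLE tbl_x(id BIGINT, nested STRUCT<col1:STRING, col2:STRING>)")
  }
}
```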
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/jiangxb1987/spark cdt
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/15346.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #15346
----
commit af837823dc1d8cba57824e274e0a60d8a4f4e061
Author: jiangxingbo <[email protected]>
Date: 2016-10-04T12:25:14Z
seprate nested data type from columns.
commit d131c4e5014c0951c16dc97d76fe87f8cb04b3cc
Author: jiangxingbo <[email protected]>
Date: 2016-10-04T17:00:20Z
add more testcases.
----