[ https://issues.apache.org/jira/browse/SPARK-17164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15429326#comment-15429326 ]
Herman van Hovell edited comment on SPARK-17164 at 8/20/16 11:13 AM:
---------------------------------------------------------------------
I tried this in Hive-enabled Spark 1.6:
{noformat}
sqlContext.sql("select * from a:b")
{noformat}
This results in the following exception:
{noformat}
NoViableAltException(9@[150:5: ( ( Identifier LPAREN )=> partitionedTableFunction | tableSource | subQuerySource | virtualTableSource )])
	at org.antlr.runtime.DFA.noViableAlt(DFA.java:158)
	at org.antlr.runtime.DFA.predict(DFA.java:144)
	at org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.fromSource(HiveParser_FromClauseParser.java:3711)
	at org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.joinSource(HiveParser_FromClauseParser.java:1873)
	at org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.fromClause(HiveParser_FromClauseParser.java:1518)
	at org.apache.hadoop.hive.ql.parse.HiveParser.fromClause(HiveParser.java:45861)
	at org.apache.hadoop.hive.ql.parse.HiveParser.selectStatement(HiveParser.java:41516)
	at org.apache.hadoop.hive.ql.parse.HiveParser.regularBody(HiveParser.java:41402)
	at org.apache.hadoop.hive.ql.parse.HiveParser.queryStatementExpressionBody(HiveParser.java:40413)
	...
{noformat}
I have also tried this using {{org.apache.spark.sql.catalyst.SqlParser}}; that also fails.
You have to use backticks if you want to use such a table name, e.g. {{`a:b`}}.
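For illustration, a minimal sketch of the backtick workaround (assuming a Spark 2.0 session bound to {{spark}} and that a table registered under the name {{a:b}} actually exists):

{code}
// Backticks make the parser treat a:b as one quoted identifier,
// so the colon is no longer a parse error.
spark.sql("SELECT * FROM `a:b`").show()

// Unquoted, the same statement fails as in the stack traces above:
// spark.sql("SELECT * FROM a:b")   // ParseException
{code}

The same quoting applies in 1.6 with {{sqlContext.sql}}; the backtick form works in both versions.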
> Query with colon in the table name fails to parse in 2.0
> --------------------------------------------------------
>
>                 Key: SPARK-17164
>                 URL: https://issues.apache.org/jira/browse/SPARK-17164
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.0
>            Reporter: Sital Kedia
>
> Running a simple query with a colon in the table name fails to parse in 2.0:
> {code}
> == SQL ==
> SELECT * FROM a:b
> ---------------^^^
>
> at org.apache.spark.sql.catalyst.parser.ParseException.withCommand(ParseDriver.scala:197)
> at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:99)
> at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:46)
> at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(ParseDriver.scala:53)
> at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:582)
> at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:682)
> ... 48 elided
> {code}
> Please note that this is a regression from Spark 1.6, as the query runs fine in 1.6.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)