Thanks Herman,
I didn't realise "user" is a reserved word. It works now.
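For anyone who finds this thread later, here is a sketch of the two fixes Herman suggests, applied to my original query (table and column names are from my example):

```sql
-- Option 1: escape the reserved word with backticks so the parser
-- treats it as a plain identifier
SELECT `user`.uid, dept.name
FROM userdb.user `user`, deptdb.dept
WHERE `user`.dept_id = dept.id;

-- Option 2: simply pick a non-reserved alias
SELECT u.uid, dept.name
FROM userdb.user u, deptdb.dept
WHERE u.dept_id = dept.id;
```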

On 19 May 2016 at 08:02, Herman van Hövell tot Westerflier <hvanhov...@questtec.nl> wrote:

> 'User' is a SQL2003 keyword. This is normally not a problem, except when
> you use it as a table alias (which you are doing). Change the alias or
> place it between backticks and you should be fine.
>
>
> 2016-05-18 23:51 GMT+02:00 JaeSung Jun <jaes...@gmail.com>:
>
>> It's Spark 1.6.1 and Hive 1.2.1 (spark-sql reports "SET
>> spark.sql.hive.version=1.2.1").
>>
>> Thanks
>>
>> On 18 May 2016 at 23:31, Ted Yu <yuzhih...@gmail.com> wrote:
>>
>>> Which release of Spark / Hive are you using ?
>>>
>>> Cheers
>>>
>>> On May 18, 2016, at 6:12 AM, JaeSung Jun <jaes...@gmail.com> wrote:
>>>
>>> Hi,
>>>
>>> I'm working on a custom data source provider, and I'm using a fully
>>> qualified table name in the FROM clause, like the following:
>>>
>>> SELECT user.uid, dept.name
>>> FROM userdb.user user, deptdb.dept
>>> WHERE user.dept_id = dept.id
>>>
>>> and I've got the following error:
>>>
>>> MismatchedTokenException(279!=26)
>>> at org.antlr.runtime.BaseRecognizer.recoverFromMismatchedToken(BaseRecognizer.java:617)
>>> at org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:115)
>>> at org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.tableSource(HiveParser_FromClauseParser.java:4608)
>>> at org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.fromSource(HiveParser_FromClauseParser.java:3729)
>>> at org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.joinSource(HiveParser_FromClauseParser.java:1873)
>>> at org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.fromClause(HiveParser_FromClauseParser.java:1518)
>>> at org.apache.hadoop.hive.ql.parse.HiveParser.fromClause(HiveParser.java:45861)
>>> at org.apache.hadoop.hive.ql.parse.HiveParser.selectStatement(HiveParser.java:41516)
>>> at org.apache.hadoop.hive.ql.parse.HiveParser.regularBody(HiveParser.java:41402)
>>> at org.apache.hadoop.hive.ql.parse.HiveParser.queryStatementExpressionBody(HiveParser.java:40413)
>>> at org.apache.hadoop.hive.ql.parse.HiveParser.queryStatementExpression(HiveParser.java:40283)
>>> at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1590)
>>> at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1109)
>>> at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:202)
>>> at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
>>> at org.apache.spark.sql.hive.HiveQl$.getAst(HiveQl.scala:276)
>>> at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:303)
>>> at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:41)
>>> at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:40)
>>> at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
>>> at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
>>> at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>>> at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
>>> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>>> at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>>> at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
>>> at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
>>> at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>>> at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
>>> at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
>>> at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>>> at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
>>> at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
>>> at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
>>> at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
>>> at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:34)
>>> at org.apache.spark.sql.hive.HiveQl$.parseSql(HiveQl.scala:295)
>>>
>>> Any idea?
>>>
>>> Thanks
>>> Jason
>>>
>>>
>>
>