Ray Hou created HIVE-21608:
------------------------------

             Summary: SQL parsing error
                 Key: HIVE-21608
                 URL: https://issues.apache.org/jira/browse/HIVE-21608
             Project: Hive
          Issue Type: Bug
    Affects Versions: 1.1.0
         Environment: Hive 1.1.0-cdh5.15.0
Subversion file:///data/jenkins/workspace/generic-package-centos64-7-0/topdir/BUILD/hive-1.1.0-cdh5.15.0 -r Unknown
Compiled by jenkins on Thu May 24 04:17:02 PDT 2018
From source with checksum 493255612021cd90286fcf5a3712d24e
            Reporter: Ray Hou
This is my first time posting here; apologies for any mistakes. When I write a query using a subquery with the FROM clause placed first, it runs successfully, like this:

{code:java}
FROM (
  SELECT FCARPLATE, MAX(FCARCLASS) AS maxn, MIN(FCARCLASS) AS minn
  FROM receipt2018h2
  WHERE ((FCARCLASS = 9 AND FCARTYPE = 6) OR (FCARCLASS = 8 AND FCARTYPE = 6))
  GROUP BY FCARPLATE
) e
SELECT e.FCARPLATE
WHERE e.maxn != e.minn;
{code}

But when I add an output instruction in front of it, parsing breaks down:

{code:java}
INSERT OVERWRITE DIRECTORY '/sfsj/output'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
FROM (
  SELECT FCARPLATE, MAX(FCARCLASS) AS maxn, MIN(FCARCLASS) AS minn
  FROM receipt2018h2
  WHERE ((FCARCLASS = 9 AND FCARTYPE = 6) OR (FCARCLASS = 8 AND FCARTYPE = 6))
  GROUP BY FCARPLATE
) e
SELECT e.FCARPLATE
WHERE e.maxn != e.minn;
{code}

{code:java}
NoViableAltException(118@[])
	at org.apache.hadoop.hive.ql.parse.HiveParser.regularBody(HiveParser.java:41622)
	at org.apache.hadoop.hive.ql.parse.HiveParser.queryStatementExpressionBody(HiveParser.java:40848)
	at org.apache.hadoop.hive.ql.parse.HiveParser.queryStatementExpression(HiveParser.java:40724)
	at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1530)
	at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1066)
	at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:201)
	at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:524)
	at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1358)
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1475)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1287)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1277)
	at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:226)
	at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:175)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:389)
	at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:781)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:699)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:634)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
FAILED: ParseException line 5:0 cannot recognize input near 'FROM' '(' 'SELECT' in statement
{code}

However, once I reorder the same query so that SELECT comes before FROM, it works:

{code:java}
INSERT OVERWRITE DIRECTORY '/sfsj/output'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
SELECT e.FCARPLATE
FROM (
  SELECT FCARPLATE, MAX(FCARCLASS) AS maxn, MIN(FCARCLASS) AS minn
  FROM receipt2018h2
  WHERE ((FCARCLASS = 9 AND FCARTYPE = 6) OR (FCARCLASS = 8 AND FCARTYPE = 6))
  GROUP BY FCARPLATE
) e
WHERE e.maxn != e.minn;
{code}

{code:java}
Query ID = root_20190412173939_55fe9030-8860-4193-bc85-e015def5b75e
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified.
Estimated from input data size: 1099
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1554965284978_0367, Tracking URL = http://hbltmp01:8088/proxy/application_1554965284978_0367/
Kill Command = /opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hadoop/bin/hadoop job -kill job_1554965284978_0367
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 1099
2019-04-12 17:39:26,646 Stage-1 map = 0%, reduce = 0%
2019-04-12 17:39:37,312 Stage-1 map = 0%, reduce = 1%, Cumulative CPU 23.02 sec
...
{code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
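Possibly relevant: Hive's FROM-first form is the multi-insert grammar from the LanguageManual DML, and in that grammar the INSERT clause is expected to come after the FROM clause, not before it. The following reordering of the failing query is a sketch of that documented shape; I have not verified it against this exact 1.1.0-cdh5.15.0 build:

{code:java}
-- Hypothetical workaround (untested here): in the multi-insert form,
-- the FROM clause comes first and each INSERT target follows it.
FROM (
  SELECT FCARPLATE, MAX(FCARCLASS) AS maxn, MIN(FCARCLASS) AS minn
  FROM receipt2018h2
  WHERE ((FCARCLASS = 9 AND FCARTYPE = 6) OR (FCARCLASS = 8 AND FCARTYPE = 6))
  GROUP BY FCARPLATE
) e
INSERT OVERWRITE DIRECTORY '/sfsj/output'
SELECT e.FCARPLATE
WHERE e.maxn != e.minn;
{code}

If the parser accepts this but rejects the INSERT-first variant in the report, the issue may be a grammar limitation rather than a bug in the subquery handling.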