Seeing the same results on the current 1.6.2 release ... just wanted to
confirm.
Are there any workarounds? Do I need to wait for 2.0 for support?
https://issues.apache.org/jira/browse/SPARK-12543
Thank you
Hello,
I got the following query:
SELECT id,
       Count(*) AS amount
FROM   table1
GROUP  BY id
HAVING amount = (SELECT Max(mamount)
                 FROM   (SELECT id,
                                Count(*) AS mamount
                         FROM   table1
                         GROUP  BY id) m)
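For versions before Spark 2.0, where subquery expressions like this are rejected, one common workaround is to flatten the scalar subquery into derived tables and join on them instead. A minimal sketch of that rewrite, using an in-memory SQLite table purely to illustrate the equivalent SQL (the table name `table1` comes from the query above; in Spark you would submit the same rewritten SQL through `sqlContext.sql`):

```python
import sqlite3

# In-memory stand-in for table1; in Spark this would be a registered temp table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id INTEGER)")
conn.executemany("INSERT INTO table1 VALUES (?)",
                 [(1,), (1,), (1,), (2,), (2,), (3,)])

# Rewrite: compute the per-id counts once as a derived table, compute the
# max count as a second derived table, and join on equality instead of
# using a scalar subquery in HAVING (which the pre-2.0 parser rejects).
rows = conn.execute("""
    SELECT t.id, t.amount
    FROM   (SELECT id, COUNT(*) AS amount
            FROM table1 GROUP BY id) t
    JOIN   (SELECT MAX(amount) AS m
            FROM (SELECT COUNT(*) AS amount
                  FROM table1 GROUP BY id)) mx
      ON   t.amount = mx.m
""").fetchall()

print(rows)  # id 1 occurs 3 times, the maximum count
```

The same join-based shape works in Spark SQL because every subquery now appears only in the FROM clause, which the 1.x parsers do accept.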
So it doesn't matter which dialect I'm using? Because I set spark.sql.dialect to
sql.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Subquery-in-having-clause-Spark-1-1-0-tp17401p17408.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Yeah, sorry for being unclear. Subquery expressions are not supported.
That particular error was coming from the Hive parser.
On Mon, Oct 27, 2014 at 4:03 PM, Daniel Klinger d...@web-computing.de wrote:
So it doesn't matter which dialect I'm using? Because I set spark.sql.dialect
to
sql.