Re: Subquery in having-clause (Spark 1.1.0)

2016-07-20 Thread rickn
Seeing the same results on the current 1.6.2 release; just wanted to confirm. Are there any workarounds, or do I need to wait for 2.0 for support? https://issues.apache.org/jira/browse/SPARK-12543 Thank you

Subquery in having-clause (Spark 1.1.0)

2014-10-27 Thread Daniel Klinger
Hello, I got the following query:

SELECT id, Count(*) AS amount
FROM table1
GROUP BY id
HAVING amount = (SELECT Max(mamount) FROM
                 (SELECT id, Count(*) AS mamount FROM table1
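The tail of the inner subquery is cut off in the archived post. A minimal sketch of what the query appears to compute (the groups whose row count equals the maximum row count), run against SQLite purely to show the semantics; the closing GROUP BY and parentheses are an assumed completion, not the poster's original text:

```python
import sqlite3

# In-memory toy table standing in for table1 from the post.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id INTEGER)")
conn.executemany("INSERT INTO table1 (id) VALUES (?)",
                 [(1,), (1,), (1,), (2,), (2,), (3,)])

# Scalar subquery in HAVING: keep only the id(s) whose count equals the
# maximum per-id count. The inner GROUP BY/closing parens are assumed.
rows = conn.execute("""
    SELECT id, COUNT(*) AS amount
    FROM table1
    GROUP BY id
    HAVING amount = (SELECT MAX(mamount) FROM
                     (SELECT id, COUNT(*) AS mamount
                      FROM table1
                      GROUP BY id))
""").fetchall()
print(rows)  # id 1 appears 3 times, the maximum -> [(1, 3)]
```

SQLite accepts this form; the thread below discusses why Spark 1.1 did not.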

Re: Subquery in having-clause (Spark 1.1.0)

2014-10-27 Thread Daniel Klinger
So it doesn't matter which dialect I'm using? Because I set spark.sql.dialect to sql. -- View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Subquery-in-having-clause-Spark-1-1-0-tp17401p17408.html

Re: Subquery in having-clause (Spark 1.1.0)

2014-10-27 Thread Michael Armbrust
Yeah, sorry for being unclear. Subquery expressions are not supported; that particular error was coming from the Hive parser. On Mon, Oct 27, 2014 at 4:03 PM, Daniel Klinger d...@web-computing.de wrote: So it doesn't matter which dialect I'm using? Because I set spark.sql.dialect to sql.
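Since scalar subquery expressions (tracked as SPARK-12543, resolved for 2.0) were not supported but derived tables in FROM were, one possible workaround is to move the subquery out of HAVING and express it as a join against the derived maximum. A hedged sketch, again demonstrated on SQLite only to show the rewrite is equivalent; table and column names follow the original post:

```python
import sqlite3

# Same toy table1 as above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id INTEGER)")
conn.executemany("INSERT INTO table1 (id) VALUES (?)",
                 [(1,), (1,), (1,), (2,), (2,), (3,)])

# Rewrite: compute per-id counts as a derived table t, compute the maximum
# count as a second derived table m, and join on equality. No subquery
# expression appears in HAVING, only derived tables in FROM.
rows = conn.execute("""
    SELECT t.id, t.amount
    FROM (SELECT id, COUNT(*) AS amount FROM table1 GROUP BY id) t
    JOIN (SELECT MAX(amount) AS mamount
          FROM (SELECT id, COUNT(*) AS amount FROM table1 GROUP BY id)) m
      ON t.amount = m.mamount
""").fetchall()
print(rows)  # same result as the HAVING-subquery form -> [(1, 3)]
```

In Spark 1.x one could equally register the per-id aggregate as a temporary table and run the max/join as separate sqlContext.sql steps.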