Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/22696#discussion_r224590852
--- Diff: docs/sql-programming-guide.md ---
@@ -1894,6 +1894,8 @@ working with timestamps in `pandas_udf`s to get the
best performance, see
- In PySpark, when creating a `SparkSession` with
`SparkSession.builder.getOrCreate()`, if there is an existing `SparkContext`,
the builder tried to update the `SparkConf` of the existing `SparkContext`
with the configurations specified in the builder. However, the `SparkContext`
is shared by all `SparkSession`s, so those configurations should not be
changed. Since Spark 3.0, the builder no longer updates them. This is the same
behavior as the Java/Scala API in 2.3 and above. If you want to update them,
you need to do so prior to creating a `SparkSession`, as sketched below.
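
For illustration, here is a minimal PySpark sketch of the behavior described
above. The config keys `spark.some.config` and `spark.other.config` are
placeholders, not settings referenced by this PR.

```python
from pyspark import SparkConf, SparkContext
from pyspark.sql import SparkSession

# Configurations intended for the SparkContext must be applied before any
# SparkSession is created, because the context is shared by all sessions.
conf = SparkConf().set("spark.some.config", "value")  # placeholder key
sc = SparkContext(conf=conf)

# Since Spark 3.0, config() calls on the builder no longer modify the
# SparkConf of the already-running SparkContext.
spark = SparkSession.builder.config("spark.other.config", "value").getOrCreate()
```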
+ - In Spark version 2.4 and earlier, HAVING without GROUP BY is treated as
WHERE. This means `SELECT 1 FROM range(10) HAVING true` is executed as
`SELECT 1 FROM range(10) WHERE true` and returns 10 rows. This violates the
SQL standard and has been fixed in Spark 3.0. Since Spark 3.0, HAVING without
GROUP BY is treated as a global aggregate, so `SELECT 1 FROM range(10)
HAVING true` returns only one row, as sketched below.
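
The following small PySpark sketch makes the difference concrete; it simply
runs the query from the text above and counts the result.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.sql("SELECT 1 FROM range(10) HAVING true")

# Spark 2.4 and earlier: HAVING acts as WHERE, so this prints 10.
# Spark 3.0: HAVING is a global aggregate over the whole input, so it prints 1.
print(df.count())
```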
--- End diff ---
Yes. We should add a legacy SQLConf.
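
If such a legacy flag were added, it could be used roughly as below. The
config name is only an illustrative placeholder; this PR has not defined it.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical legacy flag restoring the pre-3.0 parsing of HAVING without
# GROUP BY as WHERE; the actual name would be decided in a follow-up.
spark.conf.set("spark.sql.legacy.havingWithoutGroupByAsWhere", "true")
```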
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]