Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/22696
---
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22696#discussion_r224592912
--- Diff: docs/sql-programming-guide.md ---
@@ -1894,6 +1894,8 @@ working with timestamps in `pandas_udf`s to get the best performance, see
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/22696#discussion_r224590852
--- Diff: docs/sql-programming-guide.md ---
@@ -1894,6 +1894,8 @@ working with timestamps in `pandas_udf`s to get the best performance, see
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/22696#discussion_r224590474
--- Diff: docs/sql-programming-guide.md ---
@@ -1894,6 +1894,8 @@ working with timestamps in `pandas_udf`s to get the best performance, see
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/22696#discussion_r224588988
--- Diff: docs/sql-programming-guide.md ---
@@ -1894,6 +1894,8 @@ working with timestamps in `pandas_udf`s to get the best performance, see
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/22696#discussion_r224544112
--- Diff: docs/sql-programming-guide.md ---
@@ -1894,6 +1894,8 @@ working with timestamps in `pandas_udf`s to get the best performance, see
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/22696#discussion_r224491849
--- Diff: sql/core/src/test/resources/sql-tests/inputs/group-by.sql ---
@@ -73,3 +73,10 @@ where b.z != b.z;
-- SPARK-24369 multiple distinct aggregat
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/22696#discussion_r224485161
--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/PlanParserSuite.scala ---
@@ -108,7 +108,7 @@ class PlanParserSuite extends An
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/22696#discussion_r224457317
--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/PlanParserSuite.scala ---
@@ -108,7 +108,7 @@ class PlanParserSuite extends Analy
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/22696#discussion_r224422094
--- Diff: sql/core/src/test/resources/sql-tests/inputs/group-by.sql ---
@@ -73,3 +73,9 @@ where b.z != b.z;
-- SPARK-24369 multiple distinct aggregati
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/22696#discussion_r224421873
--- Diff: sql/core/src/test/resources/sql-tests/inputs/group-by.sql ---
@@ -73,3 +73,9 @@ where b.z != b.z;
-- SPARK-24369 multiple distinct aggregati
GitHub user cloud-fan opened a pull request:
https://github.com/apache/spark/pull/22696
[SPARK-25708][SQL] HAVING without GROUP BY means global aggregate
## What changes were proposed in this pull request?
According to the SQL standard, when a query contains `HAVING`, it indicates an aggregate operator.
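The proposed semantics can be illustrated with a small query (a sketch only; the table and column names are hypothetical, and the actual test queries added to `group-by.sql` in this PR are truncated above):

```sql
-- Under SPARK-25708, HAVING without an explicit GROUP BY is treated as a
-- global aggregate: the whole table forms a single group, the aggregate is
-- computed once, and HAVING filters that single-row result. The query
-- therefore returns either one row or no rows -- it is NOT parsed as a
-- plain WHERE over the unaggregated rows.
SELECT SUM(amount) AS total
FROM sales            -- hypothetical table for illustration
HAVING SUM(amount) > 100;
```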