[ https://issues.apache.org/jira/browse/SPARK-22983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiao Li resolved SPARK-22983.
-----------------------------
       Resolution: Fixed
    Fix Version/s: 2.3.0
                   2.2.2

> Don't push filters beneath aggregates with empty grouping expressions
> ---------------------------------------------------------------------
>
>                 Key: SPARK-22983
>                 URL: https://issues.apache.org/jira/browse/SPARK-22983
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.1.0, 2.2.0, 2.3.0
>            Reporter: Josh Rosen
>            Assignee: Josh Rosen
>            Priority: Critical
>              Labels: correctness
>             Fix For: 2.2.2, 2.3.0
>
>
> The following SQL query should return zero rows, but in Spark it actually 
> returns one row:
> {code}
> SELECT 1 FROM (
>   SELECT 1 AS z,
>   MIN(a.x)
>   FROM (SELECT 1 AS x) a
>   WHERE false
> ) b
> WHERE b.z != b.z
> {code}
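>
> For a quick check, a minimal standalone reproduction might look like the 
> sketch below (the `Spark22983Repro` object name and the local-mode settings 
> are illustrative assumptions, not part of the report):
> {code}
> import org.apache.spark.sql.SparkSession
>
> // Hypothetical standalone reproduction; object and app names are illustrative.
> object Spark22983Repro {
>   def main(args: Array[String]): Unit = {
>     val spark = SparkSession.builder()
>       .master("local[1]")
>       .appName("SPARK-22983-repro")
>       .getOrCreate()
>     val result = spark.sql(
>       """SELECT 1 FROM (
>         |  SELECT 1 AS z, MIN(a.x)
>         |  FROM (SELECT 1 AS x) a
>         |  WHERE false
>         |) b
>         |WHERE b.z != b.z""".stripMargin)
>     // Expected: 0 rows. On affected versions (2.1.x through 2.2.1),
>     // this incorrectly shows 1 row.
>     result.show()
>     spark.stop()
>   }
> }
> {code}
>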
> The problem stems from the `PushDownPredicate` rule: when this rule 
> encounters a filter on top of an Aggregate operator, e.g. `Filter(Agg(...))`, 
> it removes the original filter and adds a new filter onto the Aggregate's 
> child, e.g. `Agg(Filter(...))`. This is often safe, but the case above is a 
> counterexample: because there is no explicit `GROUP BY`, we are implicitly 
> computing a global aggregate over the entire table, and a global aggregate 
> always produces exactly one output row, even when its input is empty. The 
> original filter is therefore not merely reducing the number of groups, the 
> way a `HAVING` clause over grouped data would; it must be able to remove that 
> single mandatory row. Once pushed beneath the Aggregate, the filter can no 
> longer reduce the cardinality of the Aggregate's output, so the query 
> returns the wrong answer.
>
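> To see the cardinality issue in isolation, here is a small Spark-free 
> sketch; `globalMin` and the plan shapes are hypothetical stand-ins for a 
> global aggregate and the two plan orderings:
> {code}
> // Spark-free illustration; globalMin and all names here are hypothetical.
> object GlobalAggSketch {
>   // A global aggregate (no grouping keys) emits exactly one row,
>   // even when its input is empty -- MIN over no rows is NULL/None.
>   def globalMin(rows: Seq[Int]): Seq[Option[Int]] =
>     Seq(rows.reduceOption(_ min _))
>
>   def main(args: Array[String]): Unit = {
>     val input = Seq(1)             // (SELECT 1 AS x) a
>     val where = (_: Any) => false  // WHERE false / b.z != b.z
>
>     // Original plan, Filter(Agg(...)): the outer filter can drop the
>     // aggregate's single output row.
>     val correct = globalMin(input.filter(where)).filter(where)
>
>     // After pushdown, Agg(Filter(...)): the aggregate re-creates a row
>     // no matter what was filtered out beneath it.
>     val broken = globalMin(input.filter(where).filter(where))
>
>     println(s"correct: ${correct.size} rows; broken: ${broken.size} row(s)")
>     // prints: correct: 0 rows; broken: 1 row(s)
>   }
> }
> {code}
>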
> A simple fix is to never push filters beneath an Aggregate that has no 
> grouping expressions; a sketch of such a guard follows.
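> The sketch below uses toy plan classes rather than Catalyst's real ones, 
> and it is only a hedged illustration of the guard (the real rule also 
> splits conjuncts and checks which attributes a condition references, so 
> the actual patch will differ):
> {code}
> // Self-contained toy model of the rule and the proposed guard;
> // these case classes only mimic Catalyst's, they are not Spark code.
> sealed trait Plan
> case class Relation(name: String) extends Plan
> case class Filter(condition: String, child: Plan) extends Plan
> case class Aggregate(groupingExprs: Seq[String],
>                      aggregateExprs: Seq[String],
>                      child: Plan) extends Plan
>
> object PushDownPredicateSketch {
>   def apply(plan: Plan): Plan = plan match {
>     // Only push a filter beneath an aggregate when grouping expressions
>     // exist; a global aggregate (empty grouping list) must keep the
>     // filter above it.
>     case Filter(cond, agg @ Aggregate(grouping, _, child))
>         if grouping.nonEmpty =>
>       agg.copy(child = Filter(cond, child))
>     case other => other
>   }
>
>   def main(args: Array[String]): Unit = {
>     val global = Filter("z != z",
>       Aggregate(Nil, Seq("MIN(x)"), Relation("a")))
>     println(apply(global))  // unchanged: the filter stays above the aggregate
>   }
> }
> {code}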


