GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/11854
[SPARK-14032] [SQL] Eliminate Unnecessary Distinct/Aggregate
#### What changes were proposed in this pull request?
`Distinct` is an expensive operation, so we should avoid it whenever possible. This
PR eliminates `Distinct` (i.e., the `Aggregate` that implements `Distinct`) when the child
operator already guarantees that its output rows are unique.
For example, in TPC-DS query 38 below, the left child of each
`Intersect` is already a `Distinct`; thus, after converting `Intersect` to a
left-semi join plus `Distinct`, we can remove the top `Distinct`.
```SQL
select count(*) from (
select distinct c_last_name, c_first_name, d_date
from store_sales, date_dim, customer
where store_sales.ss_sold_date_sk = date_dim.d_date_sk
and store_sales.ss_customer_sk = customer.c_customer_sk
and d_month_seq between [DMS] and [DMS] + 11
intersect
select distinct c_last_name, c_first_name, d_date
from catalog_sales, date_dim, customer
where catalog_sales.cs_sold_date_sk = date_dim.d_date_sk
and catalog_sales.cs_bill_customer_sk = customer.c_customer_sk
and d_month_seq between [DMS] and [DMS] + 11
intersect
select distinct c_last_name, c_first_name, d_date
from web_sales, date_dim, customer
where web_sales.ws_sold_date_sk = date_dim.d_date_sk
and web_sales.ws_bill_customer_sk = customer.c_customer_sk
and d_month_seq between [DMS] and [DMS] + 11
) hot_cyst
```
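The core idea can be sketched with a toy rule over a miniature plan tree. This is a hypothetical, self-contained model (the `Plan`, `Relation`, `Distinct`, and `LeftSemiJoin` classes below are illustrative and are not Catalyst's actual API): a `Distinct` node is a no-op when its child already guarantees unique rows, and a left-semi join preserves the left child's uniqueness guarantee.

```scala
// Toy logical-plan nodes (hypothetical; not Spark's Catalyst classes).
sealed trait Plan {
  // Does this operator guarantee its output rows are unique?
  def distinctOutput: Boolean
}
case class Relation(rows: Seq[(String, Int)]) extends Plan {
  def distinctOutput = false
}
case class Distinct(child: Plan) extends Plan {
  def distinctOutput = true
}
case class LeftSemiJoin(left: Plan, right: Plan) extends Plan {
  // A left-semi join only filters left rows, never duplicates them,
  // so it preserves the left child's uniqueness guarantee.
  def distinctOutput = left.distinctOutput
}

// The rule: drop any Distinct whose child already guarantees unique rows.
def eliminateDistinct(plan: Plan): Plan = plan match {
  case Distinct(child) if child.distinctOutput => eliminateDistinct(child)
  case Distinct(child)      => Distinct(eliminateDistinct(child))
  case LeftSemiJoin(l, r)   => LeftSemiJoin(eliminateDistinct(l), eliminateDistinct(r))
  case other                => other
}
```

Applied to a plan shaped like the rewritten intersect, `Distinct(LeftSemiJoin(Distinct(rel), rel))`, the rule strips only the redundant outer `Distinct`.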
A simplified query illustrates the plan change:
```scala
df.distinct().intersect(df).intersect(df)
```
Before the fix, the optimized plan looks like this:
```SQL
Aggregate [id#37,value#38], [id#37,value#38]
+- Join LeftSemi, Some(((id#37 <=> id#64) && (value#38 <=> value#65)))
:- Aggregate [id#37,value#38], [id#37,value#38]
: +- Join LeftSemi, Some(((id#37 <=> id#57) && (value#38 <=> value#58)))
: :- Aggregate [id#37,value#38], [id#37,value#38]
: :  +- LocalRelation [id#37,value#38], [[id1,1],[id1,1],[id,1],[id1,2]]
: +- LocalRelation [id#57,value#58], [[id1,1],[id1,1],[id,1],[id1,2]]
+- LocalRelation [id#64,value#65], [[id1,1],[id1,1],[id,1],[id1,2]]
```
After the fix, the optimized plan looks like this:
```SQL
Join LeftSemi, Some(((id#37 <=> id#64) && (value#38 <=> value#65)))
:- Join LeftSemi, Some(((id#37 <=> id#57) && (value#38 <=> value#58)))
: :- Aggregate [id#37,value#38], [id#37,value#38]
: : +- LocalRelation [id#37,value#38], [[id1,1],[id1,1],[id,1],[id1,2]]
: +- LocalRelation [id#57,value#58], [[id1,1],[id1,1],[id,1],[id1,2]]
+- LocalRelation [id#64,value#65], [[id1,1],[id1,1],[id,1],[id1,2]]
```
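The equivalence behind the rewrite, namely that `intersect` behaves like a left-semi join followed by `distinct`, and that the outer `distinct` is redundant once the left side is already distinct, can be checked on plain Scala collections (a hypothetical sketch using local `Seq`s, not DataFrames):

```scala
// Model a left-semi join on collections: keep left rows that have a match
// on the right, without duplicating them.
def leftSemi[A](left: Seq[A], right: Seq[A]): Seq[A] =
  left.filter(right.contains)

// Intersect as Spark rewrites it: left-semi join + distinct.
def intersectAsRewrite[A](left: Seq[A], right: Seq[A]): Seq[A] =
  leftSemi(left, right).distinct

// The same sample rows as the LocalRelation in the plans above.
val df = Seq(("id1", 1), ("id1", 1), ("id", 1), ("id1", 2))

// df.distinct().intersect(df): the left side is already distinct,
// so the trailing .distinct adds nothing.
val withOuterDistinct    = intersectAsRewrite(df.distinct, df)
val withoutOuterDistinct = leftSemi(df.distinct, df)
```

Both expressions produce the same rows, which is why the optimizer can safely drop the top `Aggregate` in the plan above.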
#### How was this patch tested?
Added a few test cases.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/gatorsmile/spark eliminateIntersectDistinct
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/11854.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #11854
----
commit 37352f45a06a70ede6eaf10e8b57d97c0bcbf731
Author: gatorsmile <[email protected]>
Date: 2016-03-20T22:01:34Z
eliminate Distinct
commit 577bfe74ef9eba7ec9b73cad6be105f2ba90a846
Author: gatorsmile <[email protected]>
Date: 2016-03-20T22:22:23Z
Merge remote-tracking branch 'upstream/master' into eliminateIntersectDistinct
commit 96d9d4e0310f2edde0463e458db75962253f2771
Author: gatorsmile <[email protected]>
Date: 2016-03-20T22:22:37Z
code clean.
----