[ 
https://issues.apache.org/jira/browse/DRILL-4771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431180#comment-15431180
 ] 

Khurram Faraaz commented on DRILL-4771:
---------------------------------------

Another scenario related to this issue: note that the same JSON data file is 
scanned twice, which is not optimal.

{noformat}
0: jdbc:drill:schema=dfs.tmp> select count(distinct key2), count(case when 
t.key2 = 'm' then key1 end) as cnt from `twoKeyJsn.json` t;
+---------+----------+
| EXPR$0  |   cnt    |
+---------+----------+
| 16      | 1874177  |
+---------+----------+
1 row selected (61.35 seconds)

0: jdbc:drill:schema=dfs.tmp> explain plan for select count(distinct key2), 
count(case when t.key2 = 'm' then key1 end) as cnt from `twoKeyJsn.json` t;
+------+------+
| text | json |
+------+------+
| 00-00    Screen
00-01      Project(EXPR$0=[$0], cnt=[$1])
00-02        Project(EXPR$0=[$1], cnt=[$0])
00-03          NestedLoopJoin(condition=[true], joinType=[inner])
00-05            StreamAgg(group=[{}], cnt=[$SUM0($0)])
00-07              StreamAgg(group=[{}], cnt=[COUNT($1)])
00-09                Project(key2=[$0], $f1=[CASE(=($0, 'm'), $1, null)])
00-11                  Scan(groupscan=[EasyGroupScan 
[selectionRoot=maprfs:/tmp/twoKeyJsn.json, numFiles=1, columns=[`key2`, 
`key1`], files=[maprfs:///tmp/twoKeyJsn.json]]])
00-04            StreamAgg(group=[{}], EXPR$0=[COUNT($0)])
00-06              HashAgg(group=[{0}])
00-08                HashAgg(group=[{0}])
00-10                  Scan(groupscan=[EasyGroupScan 
[selectionRoot=maprfs:/tmp/twoKeyJsn.json, numFiles=1, columns=[`key2`], 
files=[maprfs:///tmp/twoKeyJsn.json]]])
{noformat}
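To illustrate why the second scan is avoidable, here is a minimal sketch (SQLite via Python's sqlite3, not Drill; the toy table contents are made up) showing that a query mixing one COUNT(DISTINCT) with a non-distinct aggregate can be answered from a single pass by grouping once on the distinct key and then aggregating the groups:

```python
# Hedged sketch (SQLite, not Drill): a COUNT(DISTINCT) plus a non-distinct
# aggregate can be computed from one scan via a two-phase aggregate,
# instead of two scans joined back together.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (key1 INTEGER, key2 TEXT)")
# Hypothetical sample rows (no NULL key2; with NULLs the outer COUNT(*)
# would need a filter, since COUNT(DISTINCT) ignores NULLs but GROUP BY
# produces a NULL group).
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, 'a'), (2, 'm'), (3, 'm'), (4, 'b'), (None, 'm')])

# Original shape: two aggregates over the same table.
original = conn.execute(
    "SELECT COUNT(DISTINCT key2), "
    "       COUNT(CASE WHEN key2 = 'm' THEN key1 END) FROM t").fetchone()

# Multi-phase rewrite: group once by key2, then aggregate the groups.
rewritten = conn.execute(
    "SELECT COUNT(*), SUM(cnt) FROM "
    "  (SELECT key2, COUNT(CASE WHEN key2 = 'm' THEN key1 END) AS cnt "
    "   FROM t GROUP BY key2)").fetchone()

print(original, rewritten)  # both (3, 2)
```

The inner GROUP BY performs the deduplication that COUNT(DISTINCT) needs, while the per-group counts carry the non-distinct aggregate through to the outer phase, so a single scan suffices.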

> Drill should avoid doing the same join twice if count(distinct) exists
> ----------------------------------------------------------------------
>
>                 Key: DRILL-4771
>                 URL: https://issues.apache.org/jira/browse/DRILL-4771
>             Project: Apache Drill
>          Issue Type: Improvement
>    Affects Versions: 1.2.0
>            Reporter: Gautam Kumar Parai
>            Assignee: Gautam Kumar Parai
>
> When the query has one distinct aggregate and one or more non-distinct 
> aggregates, the planner need not produce a join-based plan; it can 
> generate multi-phase aggregates instead. Another approach would be to use 
> grouping sets. However, Drill does not support grouping sets and instead 
> relies on the join-based plan (see the plan below).
> {code}
> select emp.empno, count(*), avg(distinct dept.deptno) 
> from sales.emp emp inner join sales.dept dept 
> on emp.deptno = dept.deptno 
> group by emp.empno
> LogicalProject(EMPNO=[$0], EXPR$1=[$1], EXPR$2=[$3])
>   LogicalJoin(condition=[IS NOT DISTINCT FROM($0, $2)], joinType=[inner])
>     LogicalAggregate(group=[{0}], EXPR$1=[COUNT()])
>       LogicalProject(EMPNO=[$0], DEPTNO0=[$9])
>         LogicalJoin(condition=[=($7, $9)], joinType=[inner])
>           LogicalTableScan(table=[[CATALOG, SALES, EMP]])
>           LogicalTableScan(table=[[CATALOG, SALES, DEPT]])
>     LogicalAggregate(group=[{0}], EXPR$2=[AVG($1)])
>       LogicalAggregate(group=[{0, 1}])
>         LogicalProject(EMPNO=[$0], DEPTNO0=[$9])
>           LogicalJoin(condition=[=($7, $9)], joinType=[inner])
>             LogicalTableScan(table=[[CATALOG, SALES, EMP]])
>             LogicalTableScan(table=[[CATALOG, SALES, DEPT]])
> {code}
> The more efficient form should look like this:
> {code}
> select emp.empno, count(*), avg(distinct dept.deptno) 
> from sales.emp emp inner join sales.dept dept 
> on emp.deptno = dept.deptno 
> group by emp.empno
> LogicalAggregate(group=[{0}], EXPR$1=[SUM($2)], EXPR$2=[AVG($1)])
>   LogicalAggregate(group=[{0, 1}], EXPR$1=[COUNT()])
>     LogicalProject(EMPNO=[$0], DEPTNO0=[$9])
>       LogicalJoin(condition=[=($7, $9)], joinType=[inner])
>         LogicalTableScan(table=[[CATALOG, SALES, EMP]])
>         LogicalTableScan(table=[[CATALOG, SALES, DEPT]])
> {code}
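To make the equivalence of the two forms concrete, here is a small sketch (SQLite via Python's sqlite3, not Drill; the toy emp/dept contents are made up) checking that the multi-phase plan returns the same rows as the join query:

```python
# Hedged sketch (SQLite, not Drill): the multi-phase aggregate form
# returns the same result as the direct query with AVG(DISTINCT).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, deptno INTEGER)")
conn.execute("CREATE TABLE dept (deptno INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?)", [(1, 10), (1, 20), (2, 10)])
conn.executemany("INSERT INTO dept VALUES (?)", [(10,), (20,)])

# Direct form: one distinct and one non-distinct aggregate over the join.
direct = conn.execute(
    "SELECT emp.empno, COUNT(*), AVG(DISTINCT dept.deptno) "
    "FROM emp JOIN dept ON emp.deptno = dept.deptno "
    "GROUP BY emp.empno ORDER BY emp.empno").fetchall()

# Multi-phase form, mirroring the efficient plan above: the inner
# aggregate groups on (empno, deptno) and carries COUNT(*); the outer
# aggregate sums the counts and averages the now-distinct deptno values.
multiphase = conn.execute(
    "SELECT empno, SUM(c), AVG(deptno) FROM "
    "  (SELECT emp.empno AS empno, dept.deptno AS deptno, COUNT(*) AS c "
    "   FROM emp JOIN dept ON emp.deptno = dept.deptno "
    "   GROUP BY emp.empno, dept.deptno) "
    "GROUP BY empno ORDER BY empno").fetchall()

print(direct, multiphase)
```

Because the inner GROUP BY on (empno, deptno) already deduplicates deptno per empno, a plain AVG in the outer phase computes AVG(DISTINCT), and summing the per-group counts recovers COUNT(*), so no second join is needed.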



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)