GitHub user wzhfy opened a pull request:

    https://github.com/apache/spark/pull/19855

    [SPARK-22662] [SQL] Failed to prune columns after rewriting predicate subquery

    ## What changes were proposed in this pull request?
    
    As a simple example:
    ```
    spark-sql> create table base (a int, b int) using parquet;
    Time taken: 0.066 seconds
    spark-sql> create table relInSubq ( x int, y int, z int) using parquet;
    Time taken: 0.042 seconds
    spark-sql> explain select a from base where a in (select x from relInSubq);
    == Physical Plan ==
    *Project [a#83]
    +- *BroadcastHashJoin [a#83], [x#85], LeftSemi, BuildRight
       :- *FileScan parquet default.base[a#83,b#84] Batched: true, Format: Parquet, Location: InMemoryFileIndex[hdfs://100.0.0.4:9000/wzh/base], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<a:int,b:int>
       +- BroadcastExchange HashedRelationBroadcastMode(List(cast(input[0, int, true] as bigint)))
          +- *Project [x#85]
             +- *FileScan parquet default.relinsubq[x#85] Batched: true, Format: Parquet, Location: InMemoryFileIndex[hdfs://100.0.0.4:9000/wzh/relinsubq], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<x:int>
    ```
    We only need column `a` from table `base`, but all of its columns (`a`, `b`) are fetched.
    
    The reason is that, in the "Operator Optimizations" batch, `ColumnPruning` first produces a `Project` on table `base`, but it is then removed by `removeProjectBeforeFilter` because, at that point, the predicate subquery is still in filter form. Later, in the "Rewrite Subquery" batch, `RewritePredicateSubquery` converts the subquery into a LeftSemi join, but that batch does not include the `ColumnPruning` rule. As a result, all columns of table `base` are read.
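    
    For context, here is a minimal sketch of the kind of change this implies: re-running column pruning in the "Rewrite Subquery" batch of `Optimizer.scala` once the subquery has been turned into a join. The rule list below is illustrative only, not necessarily the exact change in this PR:
    ```scala
    // Sketch only (batch definition inside org.apache.spark.sql.catalyst.optimizer.Optimizer):
    // run ColumnPruning again after RewritePredicateSubquery has produced the
    // LeftSemi join, so the Project on `base` is re-introduced and pushed into the scan.
    Batch("RewriteSubquery", Once,
      RewritePredicateSubquery,
      ColumnPruning,    // prune columns now that the subquery is a join
      CollapseProject)  // merge any adjacent Projects created by pruning
    ```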
    
    ## How was this patch tested?
    Added a new test case.
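    
    For illustration, a hypothetical sketch of such a test follows (the suite, helpers, and exact assertions are assumptions, not the actual test added in this PR); it assumes a `QueryTest` suite mixing in `SharedSQLContext`:
    ```scala
    // Hypothetical regression test sketch for SPARK-22662.
    // Assumes: import org.apache.spark.sql.execution.FileSourceScanExec
    test("column pruning after rewriting predicate subquery") {
      withTable("base", "relInSubq") {
        sql("CREATE TABLE base (a INT, b INT) USING parquet")
        sql("CREATE TABLE relInSubq (x INT, y INT, z INT) USING parquet")
        val plan = sql("SELECT a FROM base WHERE a IN (SELECT x FROM relInSubq)")
          .queryExecution.executedPlan
        val baseScans = plan.collect {
          case s: FileSourceScanExec if s.tableIdentifier.exists(_.table == "base") => s
        }
        // Only column `a` of `base` should be read from the Parquet files.
        assert(baseScans.nonEmpty &&
          baseScans.forall(_.requiredSchema.fieldNames.toSeq == Seq("a")))
      }
    }
    ```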


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/wzhfy/spark column_pruning_subquery

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/19855.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #19855
    
----
commit 1f9fccff696137ae2dd26a50fcf81b7cd267338d
Author: Zhenhua Wang <[email protected]>
Date:   2017-11-30T08:46:21Z

    Column pruning after rewriting predicate subquery

----


---

