[ https://issues.apache.org/jira/browse/SPARK-11390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Armbrust resolved SPARK-11390.
--------------------------------------
       Resolution: Fixed
    Fix Version/s: 1.6.0

Issue resolved by pull request 9679
[https://github.com/apache/spark/pull/9679]

> Query plan with/without filterPushdown indistinguishable
> --------------------------------------------------------
>
>                 Key: SPARK-11390
>                 URL: https://issues.apache.org/jira/browse/SPARK-11390
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.5.1
>         Environment: All
>            Reporter: Vishesh Garg
>            Priority: Minor
>             Fix For: 1.6.0
>
>
> The execution plan of a query remains the same regardless of whether the 
> filterPushdown flag has been set to "true" or "false", as can be seen below: 
> ======
> scala> sqlContext.setConf("spark.sql.orc.filterPushdown", "false")
> scala>     sqlContext.sql("SELECT name FROM people WHERE age = 15").explain()
> == Physical Plan ==
> Project [name#6]
>  Filter (age#7 = 15)
>   Scan OrcRelation[hdfs://localhost:9000/user/spec/people][name#6,age#7]
> scala> sqlContext.setConf("spark.sql.orc.filterPushdown", "true")
> scala>     sqlContext.sql("SELECT name FROM people WHERE age = 15").explain()
> == Physical Plan ==
> Project [name#6]
>  Filter (age#7 = 15)
>   Scan OrcRelation[hdfs://localhost:9000/user/spec/people][name#6,age#7]
> ======
> Ideally, when the filterPushdown flag is set to "true", the scan and the 
> filter nodes should be merged so that the plan makes it clear that the 
> filtering is being done by the data source itself.
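
A minimal Scala sketch of where pushed-down predicates actually arrive in a data source, using Spark's PrunedFilteredScan interface. The PeopleRelation class and its schema are hypothetical and purely illustrative; the point is that a predicate such as age = 15 is handed to buildScan as a Filter even though the 1.5.x explain() output above does not surface it:
======
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, Filter, PrunedFilteredScan}
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

// Hypothetical relation, sketched only to show where pushed filters arrive.
class PeopleRelation(@transient val sqlContext: SQLContext)
  extends BaseRelation with PrunedFilteredScan {

  override def schema: StructType =
    StructType(Seq(StructField("name", StringType), StructField("age", IntegerType)))

  // When the planner pushes a predicate such as age = 15, it shows up here as
  // a Filter (e.g. EqualTo("age", 15)), regardless of what explain() prints.
  override def buildScan(requiredColumns: Array[String], filters: Array[Filter]): RDD[Row] = {
    println(s"Pushed filters: ${filters.mkString(", ")}")
    sqlContext.sparkContext.emptyRDD[Row]
  }
}
======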



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
