[ https://issues.apache.org/jira/browse/SPARK-1487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13999034#comment-13999034 ]

Michael Armbrust commented on SPARK-1487:
-----------------------------------------

PR here: https://github.com/apache/spark/pull/511/

> Support record filtering via predicate pushdown in Parquet
> ----------------------------------------------------------
>
>                 Key: SPARK-1487
>                 URL: https://issues.apache.org/jira/browse/SPARK-1487
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 1.0.0
>            Reporter: Andre Schumacher
>            Assignee: Andre Schumacher
>             Fix For: 1.1.0
>
>
> Parquet has support for column filters (predicates), which can be used to avoid 
> reading and deserializing records that fail the filter condition. This can lead 
> to potentially large savings, depending on how many columns are filtered on 
> and how many records actually pass the filter.
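
For context, a minimal Scala sketch of the kind of query this optimization targets, assuming a Spark 1.1-era SQLContext; the Parquet path, table name, and column names are placeholders for illustration and are not taken from the issue or from PR #511:

  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.sql.SQLContext

  val sc = new SparkContext(new SparkConf().setAppName("ParquetPushdownSketch"))
  val sqlContext = new SQLContext(sc)

  // Load an existing Parquet file as a SchemaRDD and register it as a table.
  val people = sqlContext.parquetFile("hdfs:///data/people.parquet")
  people.registerTempTable("people")

  // With record filtering pushed down, the single-column predicate in the
  // WHERE clause can be evaluated by the Parquet reader, so records whose
  // 'age' value fails the filter are skipped rather than deserialized and
  // filtered afterwards in Spark.
  val adults = sqlContext.sql("SELECT name FROM people WHERE age > 21")
  adults.collect().foreach(println)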



--
This message was sent by Atlassian JIRA
(v6.2#6252)
