GitHub user dongjoon-hyun commented on the issue:

    https://github.com/apache/spark/pull/18991
  
    Ur, it's not record-level filtering. Maybe it's because I explained it too abstractly [here](https://github.com/apache/spark/pull/19943#discussion_r160251456). It's stripe-level, so the current ORC support in Spark works the same way as the current Parquet behavior in Spark. Spark just passes a **hint** to the underlying data formats via `spark.sql.(orc|parquet).filterPushdown` and does the filtering again later inside Spark.
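
    To make the **hint** semantics concrete, here's a minimal sketch of enabling both options (the input path and the `id` column are hypothetical): the predicate is handed to the reader, which can skip data at stripe/row-group granularity, and Spark re-evaluates the filter on whatever rows come back.

    ```scala
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("filter-pushdown-hint")
      .config("spark.sql.orc.filterPushdown", "true")      // hint for the ORC reader
      .config("spark.sql.parquet.filterPushdown", "true")  // hint for the Parquet reader
      .getOrCreate()

    // Hypothetical path and column. The pushed-down predicate lets the reader
    // skip whole stripes/row groups; Spark still applies the filter to the
    // surviving rows, so results are correct even if the reader skips nothing.
    val df = spark.read.orc("/path/to/table")
    df.filter("id > 100").show()
    ```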
    
    If both of you require that, let's revisit it later in the 2.4 timeframe. When I reopened this two days ago, the purpose was just to confirm that option in Apache.

