Github user rdblue commented on the issue:

    https://github.com/apache/spark/pull/21143
  
    I think we would only need `DataSourceReader` to implement 
`SupportsPushDownFilter`, because that interface is primarily used to push 
filters down to the data source, and the query's filters are determined while 
planning.
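
    For context, here is a rough sketch of the planning-time half. This is a 
simplified, hypothetical trait loosely modeled on the existing pushdown 
support, not the exact Spark interface:

```scala
import org.apache.spark.sql.sources.Filter

// Hypothetical, simplified sketch of planning-time pushdown: Spark hands the
// reader the query's filters once, during planning, and the reader returns
// whichever filters it cannot evaluate so Spark keeps evaluating those itself.
trait SimplePushDownFilters {
  // Accept the filters this source can evaluate; return the rest to Spark.
  def pushFilters(filters: Array[Filter]): Array[Filter]

  // Report what was actually pushed, e.g. for explain output.
  def pushedFilters: Array[Filter]
}
```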
    
    What we need here is a way for tasks to pass filters back to Spark. For 
that, I would just add a method to `InputPartition`: `residualFilter`. If it 
returns `Some(filter)`, then Spark would generate a filter for the records 
coming out of the source.
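
    To make the proposal concrete, here is a rough sketch under those 
assumptions. The trait, the `residualFilter` method, and the `evalFilter` 
helper are hypothetical illustrations of the idea, not existing Spark APIs:

```scala
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.sources.Filter

// Hypothetical, simplified stand-in for InputPartition with the proposed
// method added. None means the source fully applied the pushed filters for
// this partition; Some(filter) means Spark still has to evaluate that filter
// over the partition's output rows.
trait PartitionWithResidual {
  def residualFilter: Option[Filter]
}

object ResidualFilterSketch {
  // One way Spark could honor the residual per task: wrap the partition's
  // row iterator with the residual predicate when one is reported.
  // `evalFilter` stands in for whatever mechanism Spark would use to turn a
  // source Filter into a predicate on rows (e.g. a Catalyst expression).
  def applyResidual(
      partition: PartitionWithResidual,
      rows: Iterator[InternalRow],
      evalFilter: (Filter, InternalRow) => Boolean): Iterator[InternalRow] = {
    partition.residualFilter match {
      case Some(f) => rows.filter(row => evalFilter(f, row))
      case None    => rows
    }
  }
}
```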

