Github user AndreSchumacher commented on the pull request:

    https://github.com/apache/spark/pull/511#issuecomment-41363813
  
    @marmbrus @mateiz Thanks a lot for the comments and the fast response.
    
    About the config setting: I would feel more comfortable setting a default 
after there has been some experience with realistic workloads and schemas. But 
I renamed it now, as suggested by Matei.
    
    The bigger changes in my last commit are to keep track of what is 
actually pushed and why. Predicates that are "completely" pushed are then 
removed inside the Planner. Note that attempting to push "A & B" can result 
in only "A" being pushed, because B contains something other than a simple 
comparison of a column value. In this case "A & B" should be kept for now 
(IMHO). There is still an advantage in pushing A, since hopefully fewer 
records pass the filter up to the higher level.
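    To illustrate the idea (this is a toy sketch with a hypothetical expression ADT, not Spark's actual Catalyst classes or the code in this PR): a conjunction is split into a pushable part and a residual part, and when the push is only partial, the original filter is kept above the scan.

    ```scala
    // Toy expression tree for the sketch; names are illustrative only.
    sealed trait Expr
    case class Col(name: String) extends Expr
    case class Lit(value: Int) extends Expr
    case class Eq(left: Expr, right: Expr) extends Expr
    case class And(left: Expr, right: Expr) extends Expr
    case class Udf(child: Expr) extends Expr // stands in for anything non-pushable

    // A predicate is pushable only if it is a simple column/literal comparison
    // (or a conjunction of such comparisons).
    def isPushable(e: Expr): Boolean = e match {
      case Eq(Col(_), Lit(_)) => true
      case And(l, r)          => isPushable(l) && isPushable(r)
      case _                  => false
    }

    // Split a conjunction into the predicates we can push to the scan
    // and the residual predicates that must stay in the Filter above it.
    def splitConjunction(e: Expr): (Seq[Expr], Seq[Expr]) = e match {
      case And(l, r) =>
        val (pl, rl) = splitConjunction(l)
        val (pr, rr) = splitConjunction(r)
        (pl ++ pr, rl ++ rr)
      case other if isPushable(other) => (Seq(other), Seq.empty)
      case other                      => (Seq.empty, Seq(other))
    }
    ```

    Here pushing `And(A, B)` where B wraps a UDF yields `pushed = Seq(A)` and `residual = Seq(B)`; because the residual is non-empty, the push was only partial and the full "A & B" filter is retained.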

