Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/9399#issuecomment-152892964
One more consideration for this improvement: we probably need to optimize the
filters by constant-folding the expressions, since the partition keys are
effectively constant values at execution time, so simply adding
`unhandledFilters` probably does not work for partition-based data sources. So I
am wondering if we can leave `unhandledFilters` and `handledFilters` to the
data source implementation itself, and provide utilities or a default
implementation for the common operations within `buildScan` (a rough sketch below).
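As a rough sketch only (not the actual proposal in this PR): assuming the
`unhandledFilters(filters: Array[Filter]): Array[Filter]` hook on `BaseRelation`
under discussion, a partition-based source could decide for itself which filters
it handles by treating simple comparisons on partition columns as handled, since
those keys are constants at execution time. The names `MyPartitionedRelation`,
`partitionColumns`, and `isPartitionKeyFilter` are hypothetical.

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources._
import org.apache.spark.sql.types.StructType

// Hypothetical partition-aware relation: it reports filters on partition
// columns as handled, because those filters can be resolved by pruning
// partitions rather than by re-evaluating them on every row.
abstract class MyPartitionedRelation(
    override val sqlContext: SQLContext,
    override val schema: StructType,
    partitionColumns: Set[String])   // hypothetical: names of the partition keys
  extends BaseRelation with PrunedFilteredScan {

  // The default in BaseRelation is to report every filter as unhandled; here
  // the data source itself keeps partition-key filters out of that list.
  override def unhandledFilters(filters: Array[Filter]): Array[Filter] =
    filters.filterNot(isPartitionKeyFilter)

  // Hypothetical utility: simple comparisons on a partition column are
  // considered handled by the source (via partition pruning).
  private def isPartitionKeyFilter(f: Filter): Boolean = f match {
    case EqualTo(attr, _)     => partitionColumns.contains(attr)
    case In(attr, _)          => partitionColumns.contains(attr)
    case GreaterThan(attr, _) => partitionColumns.contains(attr)
    case LessThan(attr, _)    => partitionColumns.contains(attr)
    case _                    => false
  }

  // Concrete subclasses still implement
  // buildScan(requiredColumns: Array[String], filters: Array[Filter]): RDD[Row]
  // from PrunedFilteredScan, and can reuse the same utility there to prune
  // partitions before reading any data.
}
```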