Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/9399#discussion_r43755700
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSourceStrategy.scala ---
@@ -266,47 +267,75 @@ private[sql] object DataSourceStrategy extends Strategy with Logging {
relation,
projects,
filterPredicates,
- (requestedColumns, pushedFilters) => {
- scanBuilder(requestedColumns, selectFilters(pushedFilters).toArray)
+ (requestedColumns, _, pushedFilters) => {
+ scanBuilder(requestedColumns, pushedFilters.toArray)
})
}
- // Based on Catalyst expressions.
+ // Based on Catalyst expressions. The `scanBuilder` function accepts three arguments:
+ //
+ // 1. A `Seq[Attribute]`, containing all required column attributes, used to handle traits like
+ //    `PrunedFilteredScan`.
+ // 2. A `Seq[Expression]`, containing all gathered Catalyst filter expressions, used by
+ //    `CatalystScan`.
+ // 3. A `Seq[Filter]`, containing all data source `Filter`s that are converted from (possibly a
+ //    subset of) Catalyst filter expressions and can be handled by `relation`.
--- End diff ---
The `Seq[Expression]` argument (the second) is only used to handle `CatalystScan`, which is left only for experimental purposes; no built-in concrete data source implements `CatalystScan` now. The `Seq[Filter]` argument (the third) is used to handle all other relation traits that support filter push-down, e.g. `PrunedFilteredScan` and `HadoopFsRelation`. Added comments to explain this.
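For context, here is a minimal, self-contained sketch of the three-argument `scanBuilder` shape described above. The types (`Attribute`, `Expression`, `Filter`, `GreaterThanExpr`, `GreaterThan`) and the `selectFilters`/`buildScan` helpers below are simplified stand-ins for illustration, not Spark's actual classes:

```scala
// Hypothetical, simplified stand-ins for Catalyst and data source types
// (not the real Spark classes).
case class Attribute(name: String)
sealed trait Expression
case class GreaterThanExpr(col: String, value: Int) extends Expression
sealed trait Filter
case class GreaterThan(col: String, value: Int) extends Filter

object ScanBuilderSketch {
  // Convert Catalyst expressions into data source `Filter`s where possible;
  // expressions with no translation are simply not pushed down.
  def selectFilters(exprs: Seq[Expression]): Seq[Filter] = exprs.collect {
    case GreaterThanExpr(col, v) => GreaterThan(col, v)
  }

  // A `CatalystScan`-style relation would consume the raw Catalyst
  // expressions (2nd argument); a `PrunedFilteredScan`-style relation
  // consumes the converted `Filter`s (3rd argument). Both receive the
  // required columns (1st argument).
  def buildScan(
      requestedColumns: Seq[Attribute],
      catalystFilters: Seq[Expression],
      pushedFilters: Seq[Filter]): String = {
    s"scan(cols=${requestedColumns.map(_.name).mkString(",")}, " +
      s"pushed=${pushedFilters.size})"
  }

  def main(args: Array[String]): Unit = {
    val exprs: Seq[Expression] = Seq(GreaterThanExpr("age", 18))
    println(buildScan(Seq(Attribute("age")), exprs, selectFilters(exprs)))
  }
}
```

The point of the extra middle argument is exactly what the comment says: the planner gathers both representations once, and each relation trait picks the one it understands.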
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]