Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/22357#discussion_r216559045
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaPruning.scala ---
@@ -110,7 +110,17 @@ private[sql] object ParquetSchemaPruning extends Rule[LogicalPlan] {
val projectionRootFields = projects.flatMap(getRootFields)
val filterRootFields = filters.flatMap(getRootFields)
- (projectionRootFields ++ filterRootFields).distinct
+      // Some kinds of expressions don't need to access any fields of a root field, e.g., `IsNotNull`.
+      // For them, if any nested fields are accessed in the query, we don't need to add root
+      // field access for those expressions.
+      // For example, for the query `SELECT name.first FROM contacts WHERE name IS NOT NULL`,
+      // we don't need to read any nested fields of the `name` struct other than `first`.
--- End diff --
For the first query, the constraint is `employer IS NOT NULL`.
When `employer.id` is not `null`, `employer` is always non-null; as a result, this PR works in that case.
However, when `employer.id` is `null`, `employer` can be either `null` or a non-null struct, so we still need to check whether `employer` itself is non-null in order to return a `null` `employer.id`.
I checked in `ParquetFilter`: `IsNotNull(employer)` is ignored since it's not a valid Parquet filter (Parquet doesn't support pushdown on structs); thus, with this PR, this query will return a wrong answer.
I think in this scenario, as @mallman suggested, we might need to read the full data.
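To make the two cases concrete, here is a small, self-contained Scala sketch. The `Employer`/`Contact` case classes and helper names are hypothetical (not from the PR); they only model why a null `employer.id` alone cannot determine whether `employer` itself is null:

```scala
// Hypothetical case classes (not from the PR) modeling the `contacts` schema.
object PruningExample {
  case class Employer(id: Option[Int])
  case class Contact(employer: Option[Employer])

  // `employer IS NOT NULL`: needs the struct itself, not any leaf field.
  def employerIsNotNull(c: Contact): Boolean = c.employer.isDefined

  // `employer.id`: null if either `employer` or `employer.id` is null.
  def employerId(c: Contact): Option[Int] = c.employer.flatMap(_.id)

  def main(args: Array[String]): Unit = {
    val withId   = Contact(Some(Employer(Some(1)))) // id non-null => employer non-null
    val nullRow  = Contact(None)                    // employer itself is null
    val nullLeaf = Contact(Some(Employer(None)))    // employer non-null, id null

    // Both rows below have a null `employer.id` ...
    assert(employerId(nullRow).isEmpty && employerId(nullLeaf).isEmpty)
    // ... yet `employer IS NOT NULL` must distinguish them, so reading only
    // the pruned `id` leaf is not enough to evaluate the filter correctly.
    assert(!employerIsNotNull(nullRow) && employerIsNotNull(nullLeaf))
    assert(employerId(withId).contains(1))
  }
}
```

This is exactly the case where pruning to the `id` leaf loses the information the `IsNotNull(employer)` filter needs.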
---