thinkharderdev opened a new pull request, #3380:
URL: https://github.com/apache/arrow-datafusion/pull/3380
# Which issue does this PR close?
Closes #3360
# Rationale for this change
# What changes are included in this PR?
Putting this out there for comment, since it's at a point where I'm more or
less happy with the design and all the existing tests pass.
The key points for this PR:
1. Introduce a helper to build a `RowFilter` from a filter `Expr` pushed
down to the `ParquetExec`.
2. If a pruning predicate is available on the `ParquetExec`, build a
`RowFilter` (if we can) and construct our `ParquetRecordBatchStream` with it.
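For reference, the `parquet` crate API this plugs into looks roughly like the
following. This is a minimal sketch with a hard-coded non-null predicate and a
hypothetical function name; the actual PR derives the `ProjectionMask` and the
predicate closure from the pushed-down `Expr`:

```rust
use parquet::arrow::arrow_reader::{ArrowPredicateFn, RowFilter};
use parquet::arrow::ProjectionMask;
use parquet::file::metadata::ParquetMetaData;

/// Illustrative only: build a `RowFilter` that keeps rows where the first
/// projected leaf column is non-null.
fn example_row_filter(metadata: &ParquetMetaData) -> RowFilter {
    // Only decode the column(s) the predicate actually needs.
    let mask = ProjectionMask::leaves(metadata.file_metadata().schema_descr(), [0]);
    let predicate = ArrowPredicateFn::new(mask, |batch| {
        arrow::compute::is_not_null(batch.column(0))
    });
    RowFilter::new(vec![Box::new(predicate)])
}
```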
To build a `RowFilter` we need to go through a few steps:
1. Take the pruning `Expr` (which has been combined into a single predicate)
and tear it back apart into the separate predicates that were ANDed together.
2. For each predicate, first determine whether we can use it as a row
filter. We consider it valid for that purpose if it does not reference any
non-primitive columns (per @tustvold's suggestion) and if it does not reference
any projected columns (which are not yet available).
3. Rewrite the `Expr` to replace any columns not present in the file schema
with a null literal (to handle merged schemas without involving the
`SchemaAdapter`).
4. Gather some stats to estimate the cost of evaluating the expression
(currently just the total size of the referenced columns and whether they are
sorted).
5. Now each predicate is a `FilterCandidate` and we can sort the candidates
by evaluation cost so that the "cheap" filters are applied first (see the
sketch after this list).
6. Convert each candidate to a `DatafusionArrowPredicate` and build a
`RowFilter` from the whole lot of them.
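To make the shape of steps 1, 3, and 5 concrete, here is a self-contained
sketch over a toy `Expr` enum. The real code operates on DataFusion's `Expr`
tree with a proper rewriter; every name below is a stand-in, not the PR's
actual code:

```rust
#[derive(Debug, Clone)]
enum Expr {
    Column(String),
    Literal(Option<i64>), // `None` models a null literal
    And(Box<Expr>, Box<Expr>),
    Eq(Box<Expr>, Box<Expr>),
}

/// Step 1: tear a conjunctive predicate back apart into its ANDed parts.
fn split_conjunction(expr: &Expr, out: &mut Vec<Expr>) {
    match expr {
        Expr::And(l, r) => {
            split_conjunction(l, out);
            split_conjunction(r, out);
        }
        other => out.push(other.clone()),
    }
}

/// Step 3: rewrite columns missing from the file schema to a null literal.
fn rewrite_missing_columns(expr: Expr, file_columns: &[&str]) -> Expr {
    match expr {
        Expr::Column(name) if !file_columns.contains(&name.as_str()) => {
            Expr::Literal(None)
        }
        Expr::And(l, r) => Expr::And(
            Box::new(rewrite_missing_columns(*l, file_columns)),
            Box::new(rewrite_missing_columns(*r, file_columns)),
        ),
        Expr::Eq(l, r) => Expr::Eq(
            Box::new(rewrite_missing_columns(*l, file_columns)),
            Box::new(rewrite_missing_columns(*r, file_columns)),
        ),
        other => other,
    }
}

/// Steps 4-5: each candidate carries an estimated cost; sort cheapest first.
struct FilterCandidate {
    expr: Expr,
    required_bytes: u64, // total compressed size of the referenced columns
}

fn sort_by_cost(mut candidates: Vec<FilterCandidate>) -> Vec<FilterCandidate> {
    candidates.sort_by_key(|c| c.required_bytes);
    candidates
}
```

Splitting only at `And` nodes is what makes step 1 safe: each conjunct can be
applied independently, since a row must pass all of them anyway.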
TODOs:
1. Need some more specific unit tests
2. Benchmarks!
3. I can't actually figure out how to tell whether columns are sorted from
the `ParquetMetadata` :)
4. The current cost estimation (based purely on the compressed size of the
referenced columns) is the simplest thing that could possibly work, but I'm
not sure whether there is a better way to do it (sketched below)...
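On point 4, the "total compressed size" heuristic over the `parquet` crate's
metadata could look roughly like this (the function name and `column_indices`
parameter are illustrative, not the PR's actual code):

```rust
use parquet::file::metadata::ParquetMetaData;

/// Rough evaluation cost of a filter candidate: the total compressed size of
/// the columns it references, summed across all row groups.
fn estimated_cost(metadata: &ParquetMetaData, column_indices: &[usize]) -> i64 {
    metadata
        .row_groups()
        .iter()
        .map(|rg| {
            column_indices
                .iter()
                .map(|&i| rg.column(i).compressed_size())
                .sum::<i64>()
        })
        .sum()
}
```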
A separate conceptual question is how to optimize the number of distinct
filters. In this design we simply assume that we want to break the filter into
as many distinct predicates as we can, but I'm not sure that is always the
best choice, since it forces serial evaluation of the filters. I can imagine
many cases where it would be better to group predicates together for
evaluation. I didn't want to make the initial implementation too complicated,
so I punted on that for now, but we may eventually want to do cost estimation
at a higher level to determine the optimal grouping.
# Are there any user-facing changes?