Dandandan commented on code in PR #12039:
URL: https://github.com/apache/datafusion/pull/12039#discussion_r1720962983
##########
datafusion/physical-plan/src/filter.rs:
##########
@@ -345,19 +360,55 @@ struct FilterExecStream {
    input: SendableRecordBatchStream,
    /// runtime metrics recording
    baseline_metrics: BaselineMetrics,
+    /// Whether to allow an input batch to be returned unmodified in the case
+    /// where the predicate evaluates to true for all rows in the batch
+    reuse_input_batches: bool,
}

pub(crate) fn batch_filter(
    batch: &RecordBatch,
    predicate: &Arc<dyn PhysicalExpr>,
+    reuse_input_batches: bool,
) -> Result<RecordBatch> {
    predicate
        .evaluate(batch)
        .and_then(|v| v.into_array(batch.num_rows()))
        .and_then(|array| {
            Ok(match as_boolean_array(&array) {
                // apply filter array to record batch
-                Ok(filter_array) => filter_record_batch(batch, filter_array)?,
+                Ok(filter_array) => {
+                    if reuse_input_batches {
+                        filter_record_batch(batch, filter_array)?
+                    } else {
+                        if filter_array.true_count() == batch.num_rows() {
+                            // special case where we just make an exact copy
Review Comment:
That would be the same as the filter.
The use case, as far as I understand it, is that for Comet the data needs to
be a new copy, as Spark will reuse the existing data/arrays.
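
For anyone following along, here is a minimal sketch of what that copy special
case could look like, assuming the deep copy is done via arrow's
`MutableArrayData`; the `copy_array` and `filter_batch` names are illustrative
helpers for this sketch, not the PR's actual code:

```rust
use arrow::array::{make_array, ArrayRef, BooleanArray, MutableArrayData};
use arrow::compute::filter_record_batch;
use arrow::error::ArrowError;
use arrow::record_batch::RecordBatch;

/// Deep-copy an array into freshly allocated buffers so the result
/// shares no memory with the input.
fn copy_array(array: &ArrayRef) -> ArrayRef {
    let data = array.to_data();
    let mut mutable = MutableArrayData::new(vec![&data], false, data.len());
    mutable.extend(0, 0, data.len());
    make_array(mutable.freeze())
}

fn filter_batch(
    batch: &RecordBatch,
    filter_array: &BooleanArray,
    reuse_input_batches: bool,
) -> Result<RecordBatch, ArrowError> {
    if reuse_input_batches {
        // The filter kernel may hand back the input buffers unchanged
        // (e.g. as a zero-copy slice) when the predicate is all-true,
        // which is fine if the caller never mutates them.
        filter_record_batch(batch, filter_array)
    } else if filter_array.true_count() == batch.num_rows() {
        // All rows pass: return an exact, independent copy instead of
        // the original arrays, since the caller (Spark via Comet) may
        // reuse the input buffers.
        let columns: Vec<ArrayRef> =
            batch.columns().iter().map(copy_array).collect();
        RecordBatch::try_new(batch.schema(), columns)
    } else {
        // Some rows are dropped: the filter kernel allocates new
        // arrays anyway, so no extra copy is needed.
        filter_record_batch(batch, filter_array)
    }
}
```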