This is an automated email from the ASF dual-hosted git repository.
github-bot pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/datafusion.git
The following commit(s) were added to refs/heads/main by this push:
new adb8c8a716 minor: Move metric `page_index_rows_pruned` to verbose
level in `EXPLAIN ANALYZE` (#20026)
adb8c8a716 is described below
commit adb8c8a7162e5d57ad2cafd915c779525f4fa5d2
Author: Yongting You <[email protected]>
AuthorDate: Wed Jan 28 00:31:14 2026 +0800
minor: Move metric `page_index_rows_pruned` to verbose level in `EXPLAIN
ANALYZE` (#20026)
## Which issue does this PR close?
Follow up to https://github.com/apache/datafusion/pull/19977
## Rationale for this change
There are two similar parquet page pruning metrics:
1. `page_index_pages_pruned` -- how many pages were pruned/kept
2. `page_index_rows_pruned` -- the same as 1, but reported as a number of rows
instead of a page count
Displaying both of them at the `summary` `EXPLAIN ANALYZE` level can be too
verbose, so the row-count metric is changed to display only at the verbose
(`dev`) level (see the registration sketch after the demo).
Demo:
```sh
> set datafusion.explain.analyze_level='summary';
0 row(s) fetched.
Elapsed 0.039 seconds.
> explain analyze
select * from '/Users/yongting/data/tpch_sf1/lineitem.parquet'
where l_orderkey < 2000000;
> ...
DataSourceExec: ...metrics=[...page_index_pages_pruned=102 total → 98
matched...]
> set datafusion.explain.analyze_level='dev';
0 row(s) fetched.
Elapsed 0.039 seconds.
> explain analyze
select * from '/Users/yongting/data/tpch_sf1/lineitem.parquet'
where l_orderkey < 2000000;
> ...
DataSourceExec: ...metrics=[...page_index_pages_pruned=102 total → 98
matched, page_index_rows_pruned=2.08 M total → 2.00 M matched,...]
```
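For reference, whether a metric shows up at the `summary` level is determined
by how it is registered: metrics tagged with `MetricType::SUMMARY` appear at
both levels, while untagged ones appear only at the `dev` level. Below is a
minimal sketch of that registration pattern, mirroring the builder calls in the
diff further down (the import path and the standalone function are
illustrative, not part of this PR):
```rust
// Sketch only: the exact import path may differ between DataFusion versions.
use datafusion::physical_plan::metrics::{ExecutionPlanMetricsSet, MetricBuilder, MetricType};

fn register_page_index_metrics(
    metrics: &ExecutionPlanMetricsSet,
    filename: &str,
    partition: usize,
) {
    // Tagged SUMMARY: shown at both the `summary` and `dev` analyze levels.
    let _pages_pruned = MetricBuilder::new(metrics)
        .with_new_label("filename", filename.to_string())
        .with_type(MetricType::SUMMARY)
        .pruning_metrics("page_index_pages_pruned", partition);

    // No explicit type: shown only when
    // `datafusion.explain.analyze_level = 'dev'`.
    let _rows_pruned = MetricBuilder::new(metrics)
        .with_new_label("filename", filename.to_string())
        .pruning_metrics("page_index_rows_pruned", partition);
}
```
The `metrics.rs` change boils down to dropping the
`.with_type(MetricType::SUMMARY)` call from this metric's registration; the two
`.slt` expected outputs are updated to match.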
## What changes are included in this PR?
## Are these changes tested?
## Are there any user-facing changes?
---
datafusion/datasource-parquet/src/metrics.rs | 9 ++++-----
.../sqllogictest/test_files/dynamic_filter_pushdown_config.slt | 2 +-
datafusion/sqllogictest/test_files/limit_pruning.slt | 4 ++--
3 files changed, 7 insertions(+), 8 deletions(-)
diff --git a/datafusion/datasource-parquet/src/metrics.rs
b/datafusion/datasource-parquet/src/metrics.rs
index 317612fac1..2d6fb69270 100644
--- a/datafusion/datasource-parquet/src/metrics.rs
+++ b/datafusion/datasource-parquet/src/metrics.rs
@@ -118,11 +118,6 @@ impl ParquetFileMetrics {
.with_type(MetricType::SUMMARY)
.pruning_metrics("row_groups_pruned_statistics", partition);
- let page_index_rows_pruned = MetricBuilder::new(metrics)
- .with_new_label("filename", filename.to_string())
- .with_type(MetricType::SUMMARY)
- .pruning_metrics("page_index_rows_pruned", partition);
-
let page_index_pages_pruned = MetricBuilder::new(metrics)
.with_new_label("filename", filename.to_string())
.with_type(MetricType::SUMMARY)
@@ -179,6 +174,10 @@ impl ParquetFileMetrics {
.with_new_label("filename", filename.to_string())
.subset_time("page_index_eval_time", partition);
+ let page_index_rows_pruned = MetricBuilder::new(metrics)
+ .with_new_label("filename", filename.to_string())
+ .pruning_metrics("page_index_rows_pruned", partition);
+
let predicate_cache_inner_records = MetricBuilder::new(metrics)
.with_new_label("filename", filename.to_string())
.gauge("predicate_cache_inner_records", partition);
diff --git
a/datafusion/sqllogictest/test_files/dynamic_filter_pushdown_config.slt
b/datafusion/sqllogictest/test_files/dynamic_filter_pushdown_config.slt
index 54418f0509..b112d70f42 100644
--- a/datafusion/sqllogictest/test_files/dynamic_filter_pushdown_config.slt
+++ b/datafusion/sqllogictest/test_files/dynamic_filter_pushdown_config.slt
@@ -104,7 +104,7 @@ Plan with Metrics
03)----ProjectionExec: expr=[id@0 as id, value@1 as v, value@1 + id@0 as
name], metrics=[output_rows=10, <slt:ignore>]
04)------FilterExec: value@1 > 3, metrics=[output_rows=10, <slt:ignore>,
selectivity=100% (10/10)]
05)--------RepartitionExec: partitioning=RoundRobinBatch(4),
input_partitions=1, metrics=[output_rows=10, <slt:ignore>]
-06)----------DataSourceExec: file_groups={1 group:
[[WORKSPACE_ROOT/datafusion/sqllogictest/test_files/scratch/dynamic_filter_pushdown_config/test_data.parquet]]},
projection=[id, value], file_type=parquet, predicate=value@1 > 3 AND
DynamicFilter [ value@1 IS NULL OR value@1 > 800 ],
pruning_predicate=value_null_count@1 != row_count@2 AND value_max@0 > 3 AND
(value_null_count@1 > 0 OR value_null_count@1 != row_count@2 AND value_max@0 >
800), required_guarantees=[], metrics=[output_rows=1 [...]
+06)----------DataSourceExec: file_groups={1 group:
[[WORKSPACE_ROOT/datafusion/sqllogictest/test_files/scratch/dynamic_filter_pushdown_config/test_data.parquet]]},
projection=[id, value], file_type=parquet, predicate=value@1 > 3 AND
DynamicFilter [ value@1 IS NULL OR value@1 > 800 ],
pruning_predicate=value_null_count@1 != row_count@2 AND value_max@0 > 3 AND
(value_null_count@1 > 0 OR value_null_count@1 != row_count@2 AND value_max@0 >
800), required_guarantees=[], metrics=[output_rows=1 [...]
statement ok
set datafusion.explain.analyze_level = dev;
diff --git a/datafusion/sqllogictest/test_files/limit_pruning.slt
b/datafusion/sqllogictest/test_files/limit_pruning.slt
index 5dae82516d..72672b707d 100644
--- a/datafusion/sqllogictest/test_files/limit_pruning.slt
+++ b/datafusion/sqllogictest/test_files/limit_pruning.slt
@@ -63,7 +63,7 @@ set datafusion.explain.analyze_level = summary;
query TT
explain analyze select * from tracking_data where species > 'M' AND s >= 50
limit 3;
----
-Plan with Metrics DataSourceExec: file_groups={1 group:
[[WORKSPACE_ROOT/datafusion/sqllogictest/test_files/scratch/limit_pruning/data.parquet]]},
projection=[species, s], limit=3, file_type=parquet, predicate=species@0 > M
AND s@1 >= 50, pruning_predicate=species_null_count@1 != row_count@2 AND
species_max@0 > M AND s_null_count@4 != row_count@2 AND s_max@3 >= 50,
required_guarantees=[], metrics=[output_rows=3, elapsed_compute=<slt:ignore>,
output_bytes=<slt:ignore>, files_ranges_pruned [...]
+Plan with Metrics DataSourceExec: file_groups={1 group:
[[WORKSPACE_ROOT/datafusion/sqllogictest/test_files/scratch/limit_pruning/data.parquet]]},
projection=[species, s], limit=3, file_type=parquet, predicate=species@0 > M
AND s@1 >= 50, pruning_predicate=species_null_count@1 != row_count@2 AND
species_max@0 > M AND s_null_count@4 != row_count@2 AND s_max@3 >= 50,
required_guarantees=[], metrics=[output_rows=3, elapsed_compute=<slt:ignore>,
output_bytes=<slt:ignore>, files_ranges_pruned [...]
# limit_pruned_row_groups=0 total → 0 matched
# because of order by, scan needs to preserve sort, so limit pruning is
disabled
@@ -72,7 +72,7 @@ explain analyze select * from tracking_data where species >
'M' AND s >= 50 orde
----
Plan with Metrics
01)SortExec: TopK(fetch=3), expr=[species@0 ASC NULLS LAST],
preserve_partitioning=[false], filter=[species@0 < Nlpine Sheep],
metrics=[output_rows=3, elapsed_compute=<slt:ignore>, output_bytes=<slt:ignore>]
-02)--DataSourceExec: file_groups={1 group:
[[WORKSPACE_ROOT/datafusion/sqllogictest/test_files/scratch/limit_pruning/data.parquet]]},
projection=[species, s], file_type=parquet, predicate=species@0 > M AND s@1 >=
50 AND DynamicFilter [ species@0 < Nlpine Sheep ],
pruning_predicate=species_null_count@1 != row_count@2 AND species_max@0 > M AND
s_null_count@4 != row_count@2 AND s_max@3 >= 50 AND species_null_count@1 !=
row_count@2 AND species_min@5 < Nlpine Sheep, required_guarantees=[], me [...]
+02)--DataSourceExec: file_groups={1 group:
[[WORKSPACE_ROOT/datafusion/sqllogictest/test_files/scratch/limit_pruning/data.parquet]]},
projection=[species, s], file_type=parquet, predicate=species@0 > M AND s@1 >=
50 AND DynamicFilter [ species@0 < Nlpine Sheep ],
pruning_predicate=species_null_count@1 != row_count@2 AND species_max@0 > M AND
s_null_count@4 != row_count@2 AND s_max@3 >= 50 AND species_null_count@1 !=
row_count@2 AND species_min@5 < Nlpine Sheep, required_guarantees=[], me [...]
statement ok
drop table tracking_data;
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]