comphead commented on code in PR #11994:
URL: https://github.com/apache/datafusion/pull/11994#discussion_r1718558273


##########
datafusion/core/src/datasource/physical_plan/parquet/mod.rs:
##########
@@ -144,6 +142,29 @@ pub use writer::plan_to_parquet;
 /// * User provided  [`ParquetAccessPlan`]s to skip row groups and/or pages
 ///   based on external information. See "Implementing External Indexes" below
 ///
+/// # Predicate Pushdown
+///
+/// `ParquetExec` uses the provided [`PhysicalExpr`] predicate as a filter to
+/// skip reading data and improve query performance using several techniques:

Review Comment:
   ```suggestion
   /// skip reading unnecessary data and improve query performance using several techniques:
   ```



##########
datafusion/core/src/datasource/physical_plan/parquet/mod.rs:
##########
@@ -144,6 +142,29 @@ pub use writer::plan_to_parquet;
 /// * User provided  [`ParquetAccessPlan`]s to skip row groups and/or pages
 ///   based on external information. See "Implementing External Indexes" below
 ///
+/// # Predicate Pushdown
+///
+/// `ParquetExec` uses the provided [`PhysicalExpr`] predicate as a filter to
+/// skip reading data and improve query performance using several techniques:
+///
+/// * Row group pruning: skips entire row groups based on min/max statistics
+///   found in [`ParquetMetaData`] and any Bloom filters that are present.
+///
+/// * Page pruning: skips individual pages within a ColumnChunk using the
+///   [Parquet PageIndex], if present.
+///
+/// * Row filtering: skips rows within a page based using a form of late
+///   materialization. When possible, predicates are applied by the parquet
+///   decoder *during* decode (see [`ArrowPredicate`] and [`RowFilter`] for more
+///   details). This is only enabled if `pushdown_filters` is set to true.

Review Comment:
   ```suggestion
   ///   details). This is only enabled if `ParquetScanOptions::pushdown_filters` is set to true.
   ```
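The row group pruning technique described in the hunk above can be sketched with a toy example. This is not the DataFusion or `parquet` crate API; the struct and function below are hypothetical stand-ins for the min/max statistics that really live in [`ParquetMetaData`]:

```rust
// Hypothetical, simplified row-group statistics for a single i64 column.
// The real statistics come from `ParquetMetaData` and are consulted by
// DataFusion's pruning logic.
struct RowGroupStats {
    min: i64,
    max: i64,
}

// A row group can be skipped for the predicate `col = value` only when the
// value falls outside the [min, max] range recorded in its statistics.
fn can_skip_equality(stats: &RowGroupStats, value: i64) -> bool {
    value < stats.min || value > stats.max
}

fn main() {
    let stats = RowGroupStats { min: 10, max: 20 };
    // `a = 1` can never match a row in this group, so decoding is skipped.
    assert!(can_skip_equality(&stats, 1));
    // `a = 15` might match, so the group must be read.
    assert!(!can_skip_equality(&stats, 15));
}
```

Bloom filters and the PageIndex refine this further, but the principle is the same: cheap metadata checks decide whether the (expensive) decode can be avoided entirely.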



##########
datafusion/core/src/datasource/physical_plan/parquet/mod.rs:
##########
@@ -144,6 +142,29 @@ pub use writer::plan_to_parquet;
 /// * User provided  [`ParquetAccessPlan`]s to skip row groups and/or pages
 ///   based on external information. See "Implementing External Indexes" below
 ///
+/// # Predicate Pushdown
+///
+/// `ParquetExec` uses the provided [`PhysicalExpr`] predicate as a filter to
+/// skip reading data and improve query performance using several techniques:
+///
+/// * Row group pruning: skips entire row groups based on min/max statistics
+///   found in [`ParquetMetaData`] and any Bloom filters that are present.
+///
+/// * Page pruning: skips individual pages within a ColumnChunk using the
+///   [Parquet PageIndex], if present.
+///
+/// * Row filtering: skips rows within a page based using a form of late

Review Comment:
   ```suggestion
   /// * Row filtering: skips rows within a page using a form of late
   ```



##########
datafusion/core/src/datasource/physical_plan/parquet/row_filter.rs:
##########
@@ -15,6 +15,50 @@
 // specific language governing permissions and limitations
 // under the License.
 
+//! Utilities to push down DataFusion filter predicates (any DataFusion
+//! `PhysicalExpr` that evaluates to a [`BooleanArray`]) to the parquet decoder
+//! level in `arrow-rs`.
+//!
+//! DataFusion will use a `ParquetRecordBatchStream` to read data from parquet
+//! into [`RecordBatch`]es.
+//!
+//! The `ParquetRecordBatchStream` takes an optional `RowFilter` which is itself
+//! a Vec of `Box<dyn ArrowPredicate>`. During decoding, the predicates are
+//! evaluated in order to generate a mask, which is used to avoid decoding rows
+//! in projected columns that do not pass the filter. This can significantly
+//! reduce the amount of compute required for decoding and thus improve query
+//! performance.
+//!
+//! Since the predicates are applied serially in the order defined in the
+//! `RowFilter`, the optimal ordering depends on the exact filters. The best
+//! filters to execute first have two properties:
+//!
+//! 1. They are relatively inexpensive to evaluate (e.g. they read
+//!    column chunks which are relatively small)
+//!
+//! 2. They filter many (contiguous) rows, reducing the amount of decoding
+//!    required for subsequent filters and projected columns
+//!
+//! If requested, this code will reorder the filters based on heuristics to
+//! try and reduce the evaluation cost.
+//!
+//! The basic algorithm for constructing the `RowFilter` is as follows:
+//!
+//! 1. Break conjunctions into separate predicates. An expression
+//!    like `a = 1 AND (b = 2 AND c = 3)` would be
+//!    separated into the expressions `a = 1`, `b = 2`, and `c = 3`.
+//! 2. Determine whether each predicate can be evaluated as an `ArrowPredicate`.
+//! 3. Determine, for each predicate, the total compressed size of all
+//!    columns required to evaluate the predicate.
+//! 4. Determine, for each predicate, whether all columns required to
+//!    evaluate the expression are sorted.
+//! 5. Re-order the predicates by total size (from step 3).
+//! 6. Partition the predicates according to whether they are sorted (from step 4).
+//! 7. "Compile" each predicate `Expr` to a `DatafusionArrowPredicate`.
+//! 8. Build the `RowFilter` with the sorted predicates followed by
+//!    the unsorted predicates. Within each partition, predicates are

Review Comment:
   this explanation is a gem
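   Steps 1 and 5 of the algorithm quoted above can be illustrated with a toy sketch. The `Expr` enum below is a hypothetical stand-in for DataFusion's `PhysicalExpr`, and the sizes are made up; the real values come from the parquet column chunk metadata:
   
   ```rust
   // Hypothetical, simplified expression tree; not the DataFusion API.
   #[derive(Debug, Clone, PartialEq)]
   enum Expr {
       Eq(String, i64),           // e.g. `a = 1`
       And(Box<Expr>, Box<Expr>), // conjunction
   }
   
   // Step 1: break conjunctions into separate predicates, so
   // `a = 1 AND (b = 2 AND c = 3)` becomes [`a = 1`, `b = 2`, `c = 3`].
   fn split_conjunction(expr: &Expr, out: &mut Vec<Expr>) {
       match expr {
           Expr::And(l, r) => {
               split_conjunction(l, out);
               split_conjunction(r, out);
           }
           other => out.push(other.clone()),
       }
   }
   
   // Step 5: reorder predicates by the total compressed size of the columns
   // each one reads, cheapest first, so inexpensive filters run before
   // expensive ones.
   fn order_by_size(mut preds: Vec<(Expr, u64)>) -> Vec<(Expr, u64)> {
       preds.sort_by_key(|(_, size)| *size);
       preds
   }
   
   fn main() {
       let expr = Expr::And(
           Box::new(Expr::Eq("a".into(), 1)),
           Box::new(Expr::And(
               Box::new(Expr::Eq("b".into(), 2)),
               Box::new(Expr::Eq("c".into(), 3)),
           )),
       );
       let mut preds = Vec::new();
       split_conjunction(&expr, &mut preds);
       assert_eq!(preds.len(), 3);
   
       // Pretend `a` is a large column (300 bytes) and `b` is small (100).
       let ordered = order_by_size(vec![(preds[0].clone(), 300), (preds[1].clone(), 100)]);
       assert_eq!(ordered[0].1, 100); // the cheaper predicate now runs first
   }
   ```
   
   The remaining steps (candidate checking, sortedness partitioning, compiling to `DatafusionArrowPredicate`) layer on top of this same split-then-order skeleton.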



##########
datafusion/core/src/datasource/physical_plan/parquet/row_filter.rs:
##########
@@ -194,6 +235,13 @@ impl<'a> FilterCandidateBuilder<'a> {
         }
     }
 
+    /// Attempt to build a `FilterCandidate` from the expression
+    ///
+    /// # Return values
+    ///
+    /// * `Ok(Some(candidate))` if the expression can be used as an ArrowFilter
+    /// * `Ok(None)` if the expression cannot be used as an ArrowFilter

Review Comment:
   👍 
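   The three-way contract documented in the hunk above maps naturally onto Rust's `Result<Option<T>>`. A minimal sketch, with a hypothetical stand-in type (the real builder returns a `FilterCandidate`, and the `Err` arm here is an assumption since the hunk is cut off before any error case):
   
   ```rust
   // Hypothetical stand-in for the real FilterCandidate.
   struct FilterCandidate;
   
   // `Ok(Some(..))`: the expression can be used as an ArrowFilter.
   // `Ok(None)`: the expression is valid but cannot be used as an ArrowFilter.
   // `Err(..)`: something went wrong while analyzing the expression (assumed).
   fn build(usable: bool, valid: bool) -> Result<Option<FilterCandidate>, String> {
       if !valid {
           return Err("malformed expression".to_string());
       }
       if usable {
           Ok(Some(FilterCandidate))
       } else {
           Ok(None)
       }
   }
   
   fn main() {
       assert!(build(true, true).unwrap().is_some());
       assert!(build(false, true).unwrap().is_none());
       assert!(build(true, false).is_err());
   }
   ```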



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

