This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 366550689fc7 [SPARK-55692][SQL] Fix `SupportsRuntimeFiltering` and `SupportsRuntimeV2Filtering` documentation
366550689fc7 is described below

commit 366550689fc79d863e8bf8ae60ca994acaffe49c
Author: Peter Toth <[email protected]>
AuthorDate: Thu Feb 26 08:59:18 2026 -0800

    [SPARK-55692][SQL] Fix `SupportsRuntimeFiltering` and `SupportsRuntimeV2Filtering` documentation
    
    ### What changes were proposed in this pull request?
    This is a follow-up to https://github.com/apache/spark/pull/38924 to clarify the behaviour of scans with runtime filters.
    
    ### Why are the changes needed?
    Please see the discussion at https://github.com/apache/spark/pull/54330#discussion_r2847645387.
    
    ### Does this PR introduce _any_ user-facing change?
    No.
    
    ### How was this patch tested?
    This is a documentation change.
    
    ### Was this patch authored or co-authored using generative AI tooling?
    No.
    
    Closes #54490 from peter-toth/SPARK-55692-fix-supportsruntimefiltering-docs.
    
    Authored-by: Peter Toth <[email protected]>
    Signed-off-by: Dongjoon Hyun <[email protected]>
---
 .../spark/sql/connector/read/SupportsRuntimeFiltering.java     |  9 +++++----
 .../spark/sql/connector/read/SupportsRuntimeV2Filtering.java   | 10 +++++-----
 2 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsRuntimeFiltering.java b/sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsRuntimeFiltering.java
index 0921a90ac22a..927d4a53e22f 100644
--- a/sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsRuntimeFiltering.java
+++ b/sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsRuntimeFiltering.java
@@ -49,10 +49,11 @@ public interface SupportsRuntimeFiltering extends SupportsRuntimeV2Filtering {
    * <p>
    * If the scan also implements {@link SupportsReportPartitioning}, it must preserve
    * the originally reported partitioning during runtime filtering. While applying runtime filters,
-   * the scan may detect that some {@link InputPartition}s have no matching data. It can omit
-   * such partitions entirely only if it does not report a specific partitioning. Otherwise,
-   * the scan can replace the initially planned {@link InputPartition}s that have no matching
-   * data with empty {@link InputPartition}s but must preserve the overall number of partitions.
+   * the scan may detect that some {@link InputPartition}s have no matching data, in which case
+   * it can either replace the initially planned {@link InputPartition}s that have no matching data
+   * with empty {@link InputPartition}s, or report only a subset of the original partition values
+   * (omitting those with no data) via {@link Batch#planInputPartitions()}. The scan must not report
+   * new partition values that were not present in the original partitioning.
    * <p>
    * Note that Spark will call {@link Scan#toBatch()} again after filtering the scan at runtime.
    *
diff --git a/sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsRuntimeV2Filtering.java b/sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsRuntimeV2Filtering.java
index 7c238bde969b..f5acdf885bf5 100644
--- a/sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsRuntimeV2Filtering.java
+++ b/sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsRuntimeV2Filtering.java
@@ -53,11 +53,11 @@ public interface SupportsRuntimeV2Filtering extends Scan {
    * <p>
    * If the scan also implements {@link SupportsReportPartitioning}, it must preserve
    * the originally reported partitioning during runtime filtering. While applying runtime
-   * predicates, the scan may detect that some {@link InputPartition}s have no matching data. It
-   * can omit such partitions entirely only if it does not report a specific partitioning.
-   * Otherwise, the scan can replace the initially planned {@link InputPartition}s that have no
-   * matching data with empty {@link InputPartition}s but must preserve the overall number of
-   * partitions.
+   * predicates, the scan may detect that some {@link InputPartition}s have no matching data, in
+   * which case it can either replace the initially planned {@link InputPartition}s that have no
+   * matching data with empty {@link InputPartition}s, or report only a subset of the original
+   * partition values (omitting those with no data) via {@link Batch#planInputPartitions()}. The
+   * scan must not report new partition values that were not present in the original partitioning.
    * <p>
    * Note that Spark will call {@link Scan#toBatch()} again after filtering the scan at runtime.
    *
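The contract described in the updated javadoc can be illustrated with a small standalone sketch. Note this uses hypothetical stand-in types, not Spark's actual `InputPartition`/`Batch` classes: a `Partition` record models a planned input partition, and the two methods model the two options the documentation allows a partitioning-aware scan after runtime filtering, i.e. keeping every planned partition but emptying the pruned ones, or reporting only the surviving subset of the original partition values.

```java
import java.util.*;
import java.util.stream.*;

// Hypothetical sketch (not Spark's real DSv2 classes): models the two ways a
// scan that reports a specific partitioning may apply a runtime filter.
public class RuntimeFilterSketch {

    // Stand-in for an InputPartition: a partition value plus its rows.
    record Partition(String partitionValue, List<String> rows) {
        boolean isEmpty() { return rows.isEmpty(); }
    }

    // Option 1: keep every originally planned partition, but replace the data
    // of partitions whose value was pruned, so the partition list is stable.
    static List<Partition> filterPreservingCount(List<Partition> planned,
                                                 Set<String> matchingValues) {
        return planned.stream()
            .map(p -> matchingValues.contains(p.partitionValue())
                ? p
                : new Partition(p.partitionValue(), List.of())) // emptied
            .collect(Collectors.toList());
    }

    // Option 2: report only the subset of the original partition values that
    // still have data; values absent from the original plan never appear.
    static List<Partition> filterToSubset(List<Partition> planned,
                                          Set<String> matchingValues) {
        return planned.stream()
            .filter(p -> matchingValues.contains(p.partitionValue()))
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Partition> planned = List.of(
            new Partition("a", List.of("row1", "row2")),
            new Partition("b", List.of("row3")),
            new Partition("c", List.of("row4")));
        Set<String> matching = Set.of("a", "c"); // runtime filter kept a and c

        List<Partition> kept = filterPreservingCount(planned, matching);
        System.out.println(kept.size());           // 3: count preserved
        System.out.println(kept.get(1).isEmpty()); // true: "b" was emptied

        List<Partition> subset = filterToSubset(planned, matching);
        System.out.println(subset.size());         // 2: subset of originals
    }
}
```

Either option keeps the reported partition values a subset of the originally planned ones, which is the invariant the javadoc change calls out: a scan must never introduce new partition values during runtime filtering.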


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
