This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 38e51eb5e8d4 [SPARK-41398][SQL][FOLLOWUP] Update runtime filtering javadoc to reflect relaxed partition constraints
38e51eb5e8d4 is described below

commit 38e51eb5e8d49bed5276a2fc71f1709f643d050f
Author: Yan Yan <[email protected]>
AuthorDate: Thu Jan 29 10:54:26 2026 +0800

    [SPARK-41398][SQL][FOLLOWUP] Update runtime filtering javadoc to reflect relaxed partition constraints
    
    Update the javadoc for `SupportsRuntimeV2Filtering.filter()` and `SupportsRuntimeFiltering.filter()` to reflect the changes made in [SPARK-41398](https://issues.apache.org/jira/browse/SPARK-41398), which relaxed the constraint on partition values during runtime filtering.
    
    After that change, scans can now either:
    - Replace partitions with no matching data with empty InputPartitions, or
    - Report only a subset of the original partition values (omitting those with no data)
    
    The previous documentation stated that the "overall number of partitions" must be preserved, which is no longer required. The only constraint is that new partition values not present in the original partitioning cannot be introduced.
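    The two permitted behaviors, and the one forbidden behavior, can be sketched in plain Java. This is a minimal stand-in model, not the real Spark API: the actual `SupportsRuntimeV2Filtering` interface needs a Spark classpath, so the `Partition` record and both helper methods below are hypothetical names invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical model of runtime-filtered partition planning; the names
// Partition, replaceWithEmpty, and reportSubset are illustrative only.
public class RuntimeFilterSketch {

  // Each planned partition carries the partition value it was grouped by.
  record Partition(String partitionValue, List<String> rows) {}

  // Option A: replace partitions with no matching data by empty ones.
  // Partition count and partition values are both preserved.
  static List<Partition> replaceWithEmpty(List<Partition> planned, Set<String> matching) {
    List<Partition> out = new ArrayList<>();
    for (Partition p : planned) {
      out.add(matching.contains(p.partitionValue())
          ? p
          : new Partition(p.partitionValue(), List.of()));
    }
    return out;
  }

  // Option B: report only the subset of partitions that still have data.
  // Partition values may be dropped, but none may be introduced.
  static List<Partition> reportSubset(List<Partition> planned, Set<String> matching) {
    List<Partition> out = new ArrayList<>();
    for (Partition p : planned) {
      if (matching.contains(p.partitionValue())) {
        out.add(p);
      }
    }
    return out;
  }

  public static void main(String[] args) {
    List<Partition> planned = List.of(
        new Partition("2026-01-01", List.of("a", "b")),
        new Partition("2026-01-02", List.of("c")),
        new Partition("2026-01-03", List.of("d")));
    Set<String> matching = Set.of("2026-01-01", "2026-01-03");

    // Option A keeps all three partitions; the middle one becomes empty.
    System.out.println(replaceWithEmpty(planned, matching).size());
    // Option B keeps only the two partitions that still have data.
    System.out.println(reportSubset(planned, matching).size());
  }
}
```

    In both options the output partition values are a subset of the planned ones; emitting a partition value such as `"2026-01-04"` that was never planned is what the updated javadoc forbids.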
    
    ### What changes were proposed in this pull request?
    
    A javadoc update to follow up on https://github.com/apache/spark/pull/38924
    
    ### Why are the changes needed?
    
    To keep the javadoc up to date for future implementers.
    
    ### Does this PR introduce _any_ user-facing change?
    No
    
    ### How was this patch tested?
    
    Compiled and ran checkstyle.
    
    ### Was this patch authored or co-authored using generative AI tooling?
    Yes - Claude Opus 4.5
    
    Closes #54046 from yyanyy/spark-41398-update-javadoc.
    
    Authored-by: Yan Yan <[email protected]>
    Signed-off-by: Wenchen Fan <[email protected]>
---
 .../apache/spark/sql/connector/read/SupportsRuntimeFiltering.java  | 6 ++++--
 .../spark/sql/connector/read/SupportsRuntimeV2Filtering.java       | 7 ++++---
 2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsRuntimeFiltering.java b/sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsRuntimeFiltering.java
index 0921a90ac22a..34bce404f375 100644
--- a/sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsRuntimeFiltering.java
+++ b/sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsRuntimeFiltering.java
@@ -51,8 +51,10 @@ public interface SupportsRuntimeFiltering extends SupportsRuntimeV2Filtering {
    * the originally reported partitioning during runtime filtering. While applying runtime filters,
    * the scan may detect that some {@link InputPartition}s have no matching data. It can omit
    * such partitions entirely only if it does not report a specific partitioning. Otherwise,
-   * the scan can replace the initially planned {@link InputPartition}s that have no matching
-   * data with empty {@link InputPartition}s but must preserve the overall number of partitions.
+   * the scan can either replace the initially planned {@link InputPartition}s that have no
+   * matching data with empty {@link InputPartition}s, or report only a subset of the original
+   * partition values (omitting those with no data). The scan must not report new partition values
+   * that were not present in the original partitioning.
    * <p>
    * Note that Spark will call {@link Scan#toBatch()} again after filtering the scan at runtime.
    *
diff --git a/sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsRuntimeV2Filtering.java b/sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsRuntimeV2Filtering.java
index 7c238bde969b..1bec81fe8184 100644
--- a/sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsRuntimeV2Filtering.java
+++ b/sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsRuntimeV2Filtering.java
@@ -55,9 +55,10 @@ public interface SupportsRuntimeV2Filtering extends Scan {
    * the originally reported partitioning during runtime filtering. While applying runtime
    * predicates, the scan may detect that some {@link InputPartition}s have no matching data. It
    * can omit such partitions entirely only if it does not report a specific partitioning.
-   * Otherwise, the scan can replace the initially planned {@link InputPartition}s that have no
-   * matching data with empty {@link InputPartition}s but must preserve the overall number of
-   * partitions.
+   * Otherwise, the scan can either replace the initially planned {@link InputPartition}s that
+   * have no matching data with empty {@link InputPartition}s, or report only a subset of the
+   * original partition values (omitting those with no data). The scan must not report new
+   * partition values that were not present in the original partitioning.
    * <p>
    * Note that Spark will call {@link Scan#toBatch()} again after filtering the scan at runtime.
    *

