This is an automated email from the ASF dual-hosted git repository.
dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/master by this push:
new d99e5a1449b0 Revert "[SPARK-41398][SQL][FOLLOWUP] Update runtime filtering javadoc to reflect relaxed partition constraints"
d99e5a1449b0 is described below
commit d99e5a1449b02bc02754a448e5237ee0359d25e2
Author: Dongjoon Hyun <[email protected]>
AuthorDate: Wed Feb 25 11:08:05 2026 -0800
Revert "[SPARK-41398][SQL][FOLLOWUP] Update runtime filtering javadoc to reflect relaxed partition constraints"
### What changes were proposed in this pull request?
This reverts commit 38e51eb5e8d49bed5276a2fc71f1709f643d050f.
### Why are the changes needed?
Not only does it not make sense to create a follow-up for a 3-year-old patch, but there is also a concern about the doc update. We had better open a new JIRA issue for traceability.
- https://github.com/apache/spark/pull/54330#discussion_r2851832853
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Manual review.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #54489 from dongjoon-hyun/SPARK-41398.
Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
---
.../apache/spark/sql/connector/read/SupportsRuntimeFiltering.java | 6 ++----
.../spark/sql/connector/read/SupportsRuntimeV2Filtering.java | 7 +++----
2 files changed, 5 insertions(+), 8 deletions(-)
diff --git a/sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsRuntimeFiltering.java b/sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsRuntimeFiltering.java
index 34bce404f375..0921a90ac22a 100644
--- a/sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsRuntimeFiltering.java
+++ b/sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsRuntimeFiltering.java
@@ -51,10 +51,8 @@ public interface SupportsRuntimeFiltering extends SupportsRuntimeV2Filtering {
  * the originally reported partitioning during runtime filtering. While applying runtime filters,
  * the scan may detect that some {@link InputPartition}s have no matching data. It can omit
  * such partitions entirely only if it does not report a specific partitioning. Otherwise,
- * the scan can either replace the initially planned {@link InputPartition}s that have no
- * matching data with empty {@link InputPartition}s, or report only a subset of the original
- * partition values (omitting those with no data). The scan must not report new partition values
- * that were not present in the original partitioning.
+ * the scan can replace the initially planned {@link InputPartition}s that have no matching
+ * data with empty {@link InputPartition}s but must preserve the overall number of partitions.
  * <p>
  * Note that Spark will call {@link Scan#toBatch()} again after filtering the scan at runtime.
  *
diff --git a/sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsRuntimeV2Filtering.java b/sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsRuntimeV2Filtering.java
index 1bec81fe8184..7c238bde969b 100644
--- a/sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsRuntimeV2Filtering.java
+++ b/sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsRuntimeV2Filtering.java
@@ -55,10 +55,9 @@ public interface SupportsRuntimeV2Filtering extends Scan {
  * the originally reported partitioning during runtime filtering. While applying runtime
  * predicates, the scan may detect that some {@link InputPartition}s have no matching data. It
  * can omit such partitions entirely only if it does not report a specific partitioning.
- * Otherwise, the scan can either replace the initially planned {@link InputPartition}s that
- * have no matching data with empty {@link InputPartition}s, or report only a subset of the
- * original partition values (omitting those with no data). The scan must not report new
- * partition values that were not present in the original partitioning.
+ * Otherwise, the scan can replace the initially planned {@link InputPartition}s that have no
+ * matching data with empty {@link InputPartition}s but must preserve the overall number of
+ * partitions.
  * <p>
  * Note that Spark will call {@link Scan#toBatch()} again after filtering the scan at runtime.
  *
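For context, the javadoc contract restored by this revert (replace partitions that have no matching data with empty partitions, but keep the overall partition count unchanged) can be sketched as follows. This is a simplified illustration with hypothetical stand-in types (`DataPartition`, `EmptyPartition`, `filterPartitions`), not the real Spark connector API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Simplified stand-ins for Spark's connector types; names are hypothetical.
public class RuntimeFilterSketch {
    // Stand-in for org.apache.spark.sql.connector.read.InputPartition.
    interface InputPartition {}

    // A partition that carries a single partition value.
    record DataPartition(String partitionValue) implements InputPartition {}

    // A placeholder partition that would yield no rows.
    record EmptyPartition() implements InputPartition {}

    /**
     * Applies a runtime filter under the restored contract: partitions with no
     * matching data are replaced by empty partitions rather than dropped, so
     * the number of partitions reported back stays the same.
     */
    static List<InputPartition> filterPartitions(
            List<DataPartition> planned, Set<String> matchingValues) {
        List<InputPartition> result = new ArrayList<>(planned.size());
        for (DataPartition p : planned) {
            result.add(matchingValues.contains(p.partitionValue())
                ? p
                : new EmptyPartition()); // replaced, not omitted
        }
        return result; // same size as `planned`
    }

    public static void main(String[] args) {
        List<DataPartition> planned = List.of(
            new DataPartition("2026-02-24"),
            new DataPartition("2026-02-25"),
            new DataPartition("2026-02-26"));
        List<InputPartition> filtered =
            filterPartitions(planned, Set.of("2026-02-25"));
        System.out.println(filtered.size()); // partition count preserved: 3
    }
}
```

By contrast, the relaxed wording removed by this revert would also have allowed reporting only the subset of partition values with matching data, i.e. shrinking the list instead of padding it with empty partitions.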
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]