sunchao commented on a change in pull request #32921:
URL: https://github.com/apache/spark/pull/32921#discussion_r660321387



##########
File path: sql/catalyst/src/main/java/org/apache/spark/sql/connector/read/SupportsRuntimeFiltering.java
##########
@@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.connector.read;
+
+import org.apache.spark.annotation.Experimental;
+import org.apache.spark.sql.connector.expressions.NamedReference;
+import org.apache.spark.sql.sources.Filter;
+
+/**
+ * A mix-in interface for {@link Scan}. Data sources can implement this interface if they can
+ * filter initially planned {@link InputPartition}s using predicates Spark infers at runtime.
+ * <p>
+ * Note that Spark will push runtime filters only if they are beneficial.
+ *
+ * @since 3.2.0
+ */
+@Experimental
+public interface SupportsRuntimeFiltering extends Scan {
+  /**
+   * Returns attributes this scan can be filtered by at runtime.
+   * <p>
+   * Spark will call {@link #filter(Filter[], boolean)} if it can derive a runtime
+   * predicate for any of the filter attributes.
+   */
+  NamedReference[] filterAttributes();
+
+  /**
+   * Filters this scan using runtime filters.
+   * <p>
+   * The provided expressions must be interpreted as a set of filters that are ANDed together.
+   * Implementations may use the filters to prune initially planned {@link InputPartition}s.
+   * <p>
+   * The scan must always preserve the original data distribution during runtime filtering.
+   * It is allowed to change the number of partitions iff the `canChangeNumPartitions` flag is true.
+   * During runtime filtering, the scan may detect that some {@link InputPartition}s have no
+   * matching data. It can omit such partitions entirely only if `canChangeNumPartitions` is true.
+   * If `canChangeNumPartitions` is false, the scan can replace the initially planned
+   * {@link InputPartition}s that have no matching data with empty {@link InputPartition}s.
+   * <p>
+   * Note that Spark will call {@link Scan#toBatch()} again after filtering the scan at runtime.
+   *
+   * @param filters data source filters used to filter the scan at runtime
+   * @param canChangeNumPartitions a flag whether the scan can change the number of partitions
+   */
+  void filter(Filter[] filters, boolean canChangeNumPartitions);

Review comment:
This looks OK to me too. The current design of storage-partitioned join checks both the number of partitions and the partition values, e.g., `[1, 2]` is considered to be co-partitioned with `[1, 2]` but not with `[1, 3]`. I can't imagine a case where the number of tasks remains the same after runtime filtering but the partition values have changed.
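To make that concrete, here is a toy sketch of the check described above. This is not Spark's actual co-partitioning code; it only illustrates that requiring the same partition values, in order, implies requiring the same number of partitions:

```java
import java.util.Arrays;

// Toy illustration only -- NOT Spark's actual co-partitioning check.
class CoPartitionCheck {
  // Co-partitioned means the same partition count AND the same values, in order:
  // {1, 2} vs {1, 2} -> true; {1, 2} vs {1, 3} -> false; {1, 2} vs {1, 2, 3} -> false.
  static boolean coPartitioned(int[] leftValues, int[] rightValues) {
    return Arrays.equals(leftValues, rightValues);
  }
}
```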
   
   
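For anyone following along, a rough sketch of what a connector-side implementation of the new interface could look like. `ExampleScan`, its no-op pruning placeholder, and the `part_col` column are hypothetical and only illustrate the contract quoted in the diff above; building the `NamedReference` via `Expressions.column` is my assumption about the connector expressions API, not something this PR prescribes.

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.spark.sql.connector.expressions.Expressions;
import org.apache.spark.sql.connector.expressions.NamedReference;
import org.apache.spark.sql.connector.read.Batch;
import org.apache.spark.sql.connector.read.InputPartition;
import org.apache.spark.sql.connector.read.PartitionReaderFactory;
import org.apache.spark.sql.connector.read.SupportsRuntimeFiltering;
import org.apache.spark.sql.sources.Filter;
import org.apache.spark.sql.types.StructType;

// Hypothetical connector scan; only the SupportsRuntimeFiltering
// contract itself comes from this PR.
public class ExampleScan implements SupportsRuntimeFiltering {
  // InputPartition has no abstract methods, so an empty partition can be a stub.
  private static final InputPartition EMPTY = new InputPartition() {};

  // Populated during initial planning (omitted in this sketch).
  private List<InputPartition> planned = new ArrayList<>();

  @Override
  public NamedReference[] filterAttributes() {
    // Advertise the column(s) Spark may derive runtime predicates for.
    return new NamedReference[] {Expressions.column("part_col")};
  }

  @Override
  public void filter(Filter[] filters, boolean canChangeNumPartitions) {
    List<InputPartition> surviving = prune(filters);
    if (canChangeNumPartitions) {
      // Partitions with no matching data may be dropped outright.
      planned = surviving;
    } else {
      // Partition count must stay stable: swap pruned-out entries for empty ones.
      List<InputPartition> result = new ArrayList<>();
      for (InputPartition p : planned) {
        result.add(surviving.contains(p) ? p : EMPTY);
      }
      planned = result;
    }
  }

  private List<InputPartition> prune(Filter[] filters) {
    // A real connector would evaluate the ANDed filters against partition
    // metadata; this placeholder keeps everything.
    return new ArrayList<>(planned);
  }

  @Override
  public StructType readSchema() {
    return new StructType();  // connector-specific in practice
  }

  @Override
  public Batch toBatch() {
    // Spark calls toBatch() again after filter(), so the pruned plan is picked up here.
    return new Batch() {
      @Override
      public InputPartition[] planInputPartitions() {
        return planned.toArray(new InputPartition[0]);
      }

      @Override
      public PartitionReaderFactory createReaderFactory() {
        throw new UnsupportedOperationException("reader omitted in sketch");
      }
    };
  }
}
```

The key point from the javadoc is the `canChangeNumPartitions` branch: when it is false, pruned partitions are replaced with empty ones so the partition count, and hence the original data distribution, is preserved.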



