gengliangwang commented on code in PR #54459:
URL: https://github.com/apache/spark/pull/54459#discussion_r2881766809
##########
sql/catalyst/src/main/java/org/apache/spark/sql/connector/expressions/filter/PartitionPredicate.java:
##########
@@ -0,0 +1,83 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.connector.expressions.filter;
+
+import org.apache.spark.sql.catalyst.InternalRow;
+import org.apache.spark.sql.connector.expressions.Expression;
+
+/**
+ * Represents a partition filter expression (an expression targeting only the schema of
+ * {@link org.apache.spark.sql.connector.catalog.Table#partitioning()}).
+ * <p>
+ * This can be used to evaluate individual partition keys against this partition expression
+ * by {@link #accept(InternalRow)}.
+ * </p>
+ * @since 4.2.0
+ */
+public abstract class PartitionPredicate extends Predicate {

Review Comment:
   **Missing `@Evolving` annotation.** The parent class `Predicate` is annotated with `@Evolving`. This new public API class, exposed to connector implementors, should carry the same annotation to signal that it may change in future minor releases:
   ```java
   @Evolving
   public abstract class PartitionPredicate extends Predicate {
   ```
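   For context on how connector authors are expected to consume this API, here is a rough sketch of the `accept(InternalRow)` flow inside a source's `pushPredicates` implementation. This is only an illustration: the `prunePartitions` helper and the `partitionRows` input are hypothetical, not part of this PR.
   ```scala
   import org.apache.spark.sql.catalyst.InternalRow
   import org.apache.spark.sql.connector.expressions.filter.{PartitionPredicate, Predicate}

   // Hypothetical helper: keep only the partitions whose key rows satisfy every
   // pushed PartitionPredicate; non-partition predicates are handled elsewhere.
   def prunePartitions(
       pushed: Array[Predicate],
       partitionRows: Seq[InternalRow]): Seq[InternalRow] = {
     val partitionPreds = pushed.collect { case p: PartitionPredicate => p }
     partitionRows.filter(row => partitionPreds.forall(_.accept(row)))
   }
   ```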
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/PushDownUtils.scala:
##########
@@ -99,15 +141,40 @@ object PushDownUtils {
     // Data source filters that need to be evaluated again after scanning. which means
     // the data source cannot guarantee the rows returned can pass these filters.
     // As a result we must return it so Spark can plan an extra filter operator.
-    val postScanFilters = r.pushPredicates(translatedFilters.toArray).map { predicate =>
-      DataSourceV2Strategy.rebuildExpressionFromFilter(predicate, translatedFilterToExpr)
+    val firstPassPostScanFilters = r.pushPredicates(translatedFilters.toArray).map {
+      predicate =>
+        DataSourceV2Strategy.rebuildExpressionFromFilter(predicate, translatedFilterToExpr)
+    }
+
+    // When partition schema is available (enhanced partition filter enabled as per
+    // SPARK-55596), calculate PartitionPredicates
+    val secondPassFilterExprs = untranslatableExprs.toSeq ++ firstPassPostScanFilters
+    val (partitionPredicates, untranslatableDataFilters) = partitionSchema match {
+      case Some(structType) =>
+        val (partitionExprs, dataExprs) =
+          DataSourceUtils.getPartitionFiltersAndDataFilters(structType, secondPassFilterExprs)
+        val preds = partitionExprs.map(expr =>
+          new PartitionPredicateImpl(expr, toAttributes(structType)))
+        (preds, dataExprs)
+      case None =>
+        (Seq.empty[PartitionPredicate], secondPassFilterExprs)
+    }
+
+    val finalPostScanFilters = if (r.supportsEnhancedPartitionFiltering()) {
+      val secondPassPostScanFilters = r.pushPredicates(partitionPredicates.toArray).map {
+        predicate =>
+          DataSourceV2Strategy.rebuildExpressionFromFilter(predicate, translatedFilterToExpr)
+      }
+      firstPassPostScanFilters ++ secondPassPostScanFilters ++ untranslatableDataFilters

Review Comment:
   **Potential duplicate post-scan filters.** If a filter was rejected in the first pass (appears in `firstPassPostScanFilters`), then re-extracted as a partition predicate and rejected *again* by the source in the second pass (appears in `secondPassPostScanFilters`), the same expression will appear **twice** in `finalPostScanFilters`. While not a correctness bug (duplicate ANDed filters are safe), it is wasteful and could confuse plan analysis. Consider deduplicating, e.g.:
   ```scala
   val finalPostScanFilters = if (r.supportsEnhancedPartitionFiltering()) {
     ...
     ExpressionSet(
       firstPassPostScanFilters ++ secondPassPostScanFilters ++ untranslatableDataFilters
     ).toSeq
   } else {
     ...
   }
   ```
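   For reference, `ExpressionSet` compares canonicalized expressions, so the double-rejected filter collapses to a single entry. A quick standalone illustration (the attribute and values are made up for the example):
   ```scala
   import org.apache.spark.sql.catalyst.expressions.{AttributeReference, ExpressionSet, GreaterThan, Literal}
   import org.apache.spark.sql.types.IntegerType

   val p = AttributeReference("p", IntegerType)()
   // The same predicate surviving both pushdown passes...
   val firstPass = Seq(GreaterThan(p, Literal(1)))
   val secondPass = Seq(GreaterThan(p, Literal(1)))
   // ...deduplicates to a single post-scan filter.
   assert(ExpressionSet(firstPass ++ secondPass).size == 1)
   ```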
+ * <p>
+ * Supporting data sources receive these via
+ * [[org.apache.spark.sql.connector.read.SupportsPushDownV2Filters#pushPredicates pushPredicates]]
+ * and may use them for additional partition filtering.
+ */
+class PartitionPredicateImpl(
+    private val catalystExpression: Expression,
+    private val partitionSchema: Seq[AttributeReference])
+  extends PartitionPredicate(
+    PartitionPredicate.NAME,
+    org.apache.spark.sql.connector.expressions.Expression.EMPTY_EXPRESSION) with Logging {
+
+  /** The wrapped partition filter Catalyst Expression. */
+  def expression: Expression = catalystExpression
+
+  override def toString(): String =
+    s"PartitionPredicate(${catalystExpression.sql})"
+
+  override def accept(partitionValues: InternalRow): Boolean = {
+    // defensive checks
+    if (partitionSchema.isEmpty) {
+      logWarning(s"Cannot evaluate partition predicate ${catalystExpression.sql}: "

Review Comment:
   **Should use structured logging instead of string interpolation.** All four `logWarning` calls in this file use `s"..."` string interpolation. Spark has been migrating to structured logging (`log"..."` with `MDC`). These should follow that pattern, e.g.:
   ```scala
   logWarning(log"Cannot evaluate partition predicate " +
     log"${MDC(LogKeys.EXPRESSION, catalystExpression.sql)}: partition schema is empty, including partition")
   ```
   (The same applies to the `logWarning` calls on lines 54, 62, and 79.)
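   A self-contained sketch of the pattern, for reference while reworking the four calls. It assumes `LogKeys.EXPRESSION` exists in the structured-logging key registry; if it does not, substitute the closest existing key. The class and method here are placeholders, only the logging style is the point:
   ```scala
   import org.apache.spark.internal.{Logging, LogKeys, MDC}

   // Placeholder class standing in for PartitionPredicateImpl; the log"..."
   // interpolator comes from the Logging trait, and MDC tags the dynamic value
   // so it lands in the structured log output as a keyed field.
   class PredicateLogger extends Logging {
     def warnEmptySchema(predicateSql: String): Unit = {
       logWarning(log"Cannot evaluate partition predicate " +
         log"${MDC(LogKeys.EXPRESSION, predicateSql)}: " +
         log"partition schema is empty, including partition")
     }
   }
   ```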
