GitHub user rdblue commented on a diff in the pull request:
https://github.com/apache/spark/pull/19424#discussion_r143591559
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/PushDownOperatorsToDataSource.scala ---
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution.datasources.v2
+
+import org.apache.spark.sql.catalyst.expressions.{And, Attribute, AttributeMap, Expression, PredicateHelper}
+import org.apache.spark.sql.catalyst.optimizer.{PushDownPredicate, RemoveRedundantProject}
+import org.apache.spark.sql.catalyst.plans.logical.{Filter, LogicalPlan, Project}
+import org.apache.spark.sql.catalyst.rules.Rule
+import org.apache.spark.sql.execution.datasources.DataSourceStrategy
+import org.apache.spark.sql.sources
+import org.apache.spark.sql.sources.v2.reader._
+
+/**
+ * Pushes down various operators to the underlying data source for better performance. Operators
+ * are pushed down in a specific order: for example, if a LIMIT has a FILTER child, the LIMIT
+ * cannot be pushed down unless the FILTER is completely pushed down as well. When both are pushed
+ * down, the data source should execute the FILTER before the LIMIT. Required columns are
+ * calculated last, because the more operators are pushed down, the fewer columns Spark needs.
+ */
+object PushDownOperatorsToDataSource extends Rule[LogicalPlan] with PredicateHelper {
+  override def apply(plan: LogicalPlan): LogicalPlan = {
+    // Make sure filters are at the very bottom.
+    val prepared = PushDownPredicate(plan)
--- End diff --
Why apply this rule one more time? Is there reason to suspect that
predicates won't already be pushed and that one more run would be worth it?
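
To make the question concrete, here is a minimal sketch (hypothetical local
session and query, not from this PR) of the plan shape PushDownPredicate
normalizes, where the filter is written above a projection:

```scala
import org.apache.spark.sql.SparkSession

object PushDownSketch extends App {
  // Hypothetical local session, purely for illustration.
  val spark = SparkSession.builder()
    .master("local[1]")
    .appName("pushdown-sketch")
    .getOrCreate()

  // The filter is written above the projection, i.e. Filter(Project(Range)).
  // PushDownPredicate moves the Filter below the Project so it ends up
  // adjacent to the scan.
  val df = spark.range(100)
    .selectExpr("id", "id * 2 AS doubled")
    .filter("id > 10")

  // In the extended explain output, the optimized plan already shows the
  // Filter directly above the Range scan, without any extra
  // PushDownPredicate pass from this v2 rule.
  df.explain(true)

  spark.stop()
}
```

If the optimizer batch ordering already guarantees that normalization, the
extra application here looks redundant; if it doesn't, a comment explaining
which plans need it would help.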
---