Github user rdblue commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19424#discussion_r143593605
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/PushDownOperatorsToDataSource.scala ---
    @@ -0,0 +1,104 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.spark.sql.execution.datasources.v2
    +
    +import org.apache.spark.sql.catalyst.expressions.{And, Attribute, AttributeMap, Expression, PredicateHelper}
    +import org.apache.spark.sql.catalyst.optimizer.{PushDownPredicate, RemoveRedundantProject}
    +import org.apache.spark.sql.catalyst.plans.logical.{Filter, LogicalPlan, Project}
    +import org.apache.spark.sql.catalyst.rules.Rule
    +import org.apache.spark.sql.execution.datasources.DataSourceStrategy
    +import org.apache.spark.sql.sources
    +import org.apache.spark.sql.sources.v2.reader._
    +
    +/**
    + * Pushes down various operators to the underlying data source for better performance. Operators are
    + * pushed down in a specific order. As an example, given a LIMIT that has a FILTER child, you
    + * can't push down LIMIT if FILTER is not completely pushed down. When both are pushed down, the
    + * data source should execute FILTER before LIMIT. The required columns are calculated at the end,
    + * because the more operators are pushed down, the fewer columns are needed on the Spark side.
    + */
    +object PushDownOperatorsToDataSource extends Rule[LogicalPlan] with PredicateHelper {
    +  override def apply(plan: LogicalPlan): LogicalPlan = {
    +    // Make sure filters are at the very bottom.
    +    val prepared = PushDownPredicate(plan)
    +    val afterPushDown = prepared transformUp {
    +      case Filter(condition, r @ DataSourceV2Relation(_, reader)) =>
    +        val (candidates, containingNonDeterministic) =
    +          splitConjunctivePredicates(condition).span(_.deterministic)
    --- End diff --
    
    It isn't immediately clear why you would use `span` here instead of
    `partition`. I think it is because `span` will produce all deterministic
    predicates that would be run before the first non-deterministic predicate in an
    in-order traversal of the condition, right? If so, then a comment would be
    really useful to make this clear. I'd also like to see a comment about why
    deterministic predicates "after" the first non-deterministic predicate
    shouldn't be pushed down. An example would really help, too.
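    
    For what it's worth, here is a small, self-contained sketch of the difference
    (plain Scala, not Spark code; `Pred` and its `deterministic` flag are
    hypothetical stand-ins for Catalyst expressions):
    
    ```scala
    case class Pred(name: String, deterministic: Boolean)
    
    val conjuncts = Seq(
      Pred("a > 1", deterministic = true),          // deterministic, before rand()
      Pred("rand() < 0.5", deterministic = false),  // non-deterministic
      Pred("b IS NOT NULL", deterministic = true))  // deterministic, but after rand()
    
    // `span` keeps only the prefix of deterministic predicates, so just `a > 1`
    // becomes a push-down candidate; everything from `rand() < 0.5` onward stays
    // in the Spark-side Filter.
    val (candidates, rest) = conjuncts.span(_.deterministic)
    // candidates: Seq(Pred(a > 1, true))
    // rest:       Seq(Pred(rand() < 0.5, false), Pred(b IS NOT NULL, true))
    
    // `partition` would also pull out `b IS NOT NULL`, which would let the source
    // filter rows before `rand() < 0.5` runs and change how many rows the
    // non-deterministic predicate sees.
    val (deterministicPreds, nonDeterministicPreds) = conjuncts.partition(_.deterministic)
    // deterministicPreds: Seq(Pred(a > 1, true), Pred(b IS NOT NULL, true))
    ```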


---
