Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/20069#discussion_r159141412
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala ---
@@ -851,7 +851,7 @@ object PushDownPredicate extends Rule[LogicalPlan] with PredicateHelper {
    case filter @ Filter(condition, union: Union) =>
      // Union could change the rows, so non-deterministic predicate can't be pushed down
-     val (pushDown, stayUp) = splitConjunctivePredicates(condition).span(_.deterministic)
+     val (pushDown, stayUp) = splitConjunctivePredicates(condition).partition(_.deterministic)
--- End diff ---
What does "after the first non-deterministic" mean? Doesn't this simply partition the predicates into deterministic and non-deterministic ones? Does it actually consider the "first" non-deterministic predicate?
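
The distinction the diff hinges on: `span` splits at the first element failing the predicate (so deterministic conjuncts appearing *after* the first non-deterministic one would wrongly stay up), while `partition` classifies every element independently. A minimal standalone sketch (using string/boolean stand-ins for Catalyst expressions, not the actual Spark types):

```scala
// Stand-ins for conjuncts: the Boolean marks whether each is deterministic.
val preds = List(
  ("a = 1", true),          // deterministic
  ("rand() < 0.5", false),  // non-deterministic
  ("b = 2", true)           // deterministic, but AFTER a non-deterministic one
)

// span stops at the FIRST non-deterministic predicate:
// "b = 2" is left behind even though it is deterministic.
val (spanDown, spanUp) = preds.span(_._2)
println(spanDown.map(_._1)) // List(a = 1)

// partition inspects EVERY predicate independently:
// "b = 2" is correctly selected for push-down.
val (partDown, partUp) = preds.partition(_._2)
println(partDown.map(_._1)) // List(a = 1, b = 2)
```

This is why the change matters in practice: with `span`, the set of pushed-down predicates depends on the order of the conjuncts, not only on their determinism.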
---