Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/12306#discussion_r60008557
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala ---
@@ -110,6 +110,31 @@ trait CheckAnalysis {
               s"filter expression '${f.condition.sql}' " +
                 s"of type ${f.condition.dataType.simpleString} is not a boolean.")
+          case f @ Filter(condition, child) =>
+            // Make sure no correlated predicate is in an OUTER join, because this could change the
+            // semantics of the join.
+            lazy val attributes: Set[Expression] = child.output.toSet
+            def checkCorrelatedPredicates(p: PredicateSubquery): Unit = p.query.foreach {
+              case j @ Join(left, right, jt, _) if jt != Inner =>
+                j.transformAllExpressions {
+                  case e if attributes.contains(e) =>
+                    failAnalysis(s"Accessing outer query column is not allowed in outer joins: $e")
--- End diff ---
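For reference, the shape under discussion (a correlated column appearing in the outer join's ON clause) would look roughly like the sketch below; the l/r/s names just mirror the example further down, and this is presumably the form the new check rejects with "Accessing outer query column is not allowed in outer joins":

    SELECT *
    FROM l
    WHERE EXISTS(SELECT *
                 FROM r
                 LEFT JOIN (SELECT * FROM s) t
                 ON t.id = r.id AND t.id = l.id)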
`l.id` shouldn't be part of the join condition either. It would not make
any difference; the join could only produce more null values, and that is all.
The correlated predicate should go in a WHERE clause after the join, e.g.:

    SELECT *
    FROM l
    WHERE EXISTS(SELECT *
                 FROM r
                 LEFT JOIN (SELECT * FROM s) t
                 ON t.id = r.id
                 WHERE t.id = l.id)

It could be useful to pull correlated predicates out of an inner join,
though (I have never seen this in the wild).
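As a rough sketch of that last point, with the same illustrative tables: for an INNER join, a correlated predicate in the ON clause is equivalent to the same predicate in a WHERE clause above the join, so it could be pulled out without changing the result:

    SELECT *
    FROM l
    WHERE EXISTS(SELECT *
                 FROM r
                 JOIN (SELECT * FROM s) t
                 ON t.id = r.id AND t.id = l.id)

    -- equivalent, with the correlated predicate pulled out of the join condition:
    SELECT *
    FROM l
    WHERE EXISTS(SELECT *
                 FROM r
                 JOIN (SELECT * FROM s) t
                 ON t.id = r.id
                 WHERE t.id = l.id)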