maropu commented on a change in pull request #28898:
URL: https://github.com/apache/spark/pull/28898#discussion_r446502299



##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/NestedColumnAliasing.scala
##########
@@ -39,6 +39,22 @@ object NestedColumnAliasing {
          NestedColumnAliasing.replaceToAliases(plan, nestedFieldToAlias, attrToAliases)
       }
 
+    /**
+     * This is to solve a `LogicalPlan` like `Project`->`Filter`->`Window`.
+     * In this case, `Window` can be a plan listed in `canProjectPushThrough`.
+     * Adding this case allows nested columns to be passed on to the next stages.
+     * We currently do not add `Filter` to `canProjectPushThrough` because doing so
+     * can cause an infinite loop in the optimizer during the predicate push-down rule.
+     */

Review comment:
      How about rephrasing it like this?
   ```
    /**
     * This pattern is needed to support [[Filter]] plan cases like
     * [[Project]]->[[Filter]]->(a plan listed in `canProjectPushThrough`, e.g., [[Window]]).
     * The reason why we don't simply add [[Filter]] to `canProjectPushThrough` is that
     * the optimizer can hit an infinite loop during the [[PushDownPredicates]] rule.
     */
   ```
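
To make the shape of the rule concrete, here is a minimal sketch of the pattern match being discussed. It uses a simplified, hypothetical plan ADT and helper names (`Plan`, `matchesFilterCase`, etc.) for illustration only; these are not Spark's actual `LogicalPlan` classes or the real `NestedColumnAliasing` implementation.

```scala
// Hypothetical, simplified plan ADT (NOT Spark's actual LogicalPlan hierarchy).
sealed trait Plan
case class Relation(name: String) extends Plan
case class Window(child: Plan) extends Plan
case class Filter(condition: String, child: Plan) extends Plan
case class Project(columns: Seq[String], child: Plan) extends Plan

object NestedColumnAliasingSketch {
  // Plans that a Project may be pushed through. Filter is deliberately
  // excluded: per the comment under review, listing it here can make the
  // optimizer loop with the predicate push-down rule.
  def canProjectPushThrough(plan: Plan): Boolean = plan match {
    case _: Window => true
    case _         => false
  }

  // The extra case the PR adds: Project -> Filter -> a pushable plan.
  // Matching it lets nested-column aliasing apply even with a Filter
  // sitting between the Project and the pushable node.
  def matchesFilterCase(plan: Plan): Boolean = plan match {
    case Project(_, Filter(_, grandChild)) if canProjectPushThrough(grandChild) => true
    case _ => false
  }
}
```

With this sketch, `Project -> Filter -> Window` matches, while a bare `Project -> Relation` does not:

```scala
val p = Project(Seq("a.b"), Filter("a.b > 1", Window(Relation("t"))))
NestedColumnAliasingSketch.matchesFilterCase(p)  // true
```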




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
