viirya commented on code in PR #38511:
URL: https://github.com/apache/spark/pull/38511#discussion_r1023726173


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/SchemaPruning.scala:
##########
@@ -138,18 +132,18 @@ object SchemaPruning extends Rule[LogicalPlan] {
   private def buildNewProjection(
       projects: Seq[NamedExpression],
       normalizedProjects: Seq[NamedExpression],
-      filters: Seq[Expression],
+      filters: Seq[Seq[Expression]],
       leafNode: LeafNode,
       projectionOverSchema: ProjectionOverSchema): Project = {
     // Construct a new target for our projection by rewriting and
     // including the original filters where available
     val projectionChild =
       if (filters.nonEmpty) {
-        val projectedFilters = filters.map(_.transformDown {
+        val projectedFilters = filters.map(_.map(_.transformDown {
           case projectionOverSchema(expr) => expr
-        })
-        val newFilterCondition = projectedFilters.reduce(And)
-        Filter(newFilterCondition, leafNode)
+        }))
+        val newFilterConditions = projectedFilters.map(_.reduce(And))
+        newFilterConditions.foldRight[LogicalPlan](leafNode)((cond, plan) => Filter(cond, plan))

Review Comment:
   Hmm, is this the same as before?
   
   The previous code constructed a single new `Filter` whose condition was all the projected predicates reduced by `And`. But this change combines projected predicates across all adjoining `Filter`s, some of which can be non-deterministic?
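   To illustrate the shape difference being discussed, here is a minimal sketch using hypothetical stand-ins for the Catalyst types (`Pred`, `And`, `Filter`, and `Leaf` below are simplified mock-ups for illustration, not the real Spark classes):
   
   ```scala
   // Hypothetical simplified stand-ins for Catalyst expressions and plans
   sealed trait Expr
   case class Pred(name: String) extends Expr
   case class And(left: Expr, right: Expr) extends Expr
   
   sealed trait Plan
   case object Leaf extends Plan
   case class Filter(cond: Expr, child: Plan) extends Plan
   
   // Two adjoining Filters' predicate groups, mirroring filters: Seq[Seq[Expression]]
   val filters: Seq[Seq[Expr]] = Seq(Seq(Pred("a"), Pred("b")), Seq(Pred("c")))
   
   // New shape: each group reduced by And, then foldRight builds a stack of Filters
   val newPlan = filters.map(_.reduce(And(_, _)))
     .foldRight[Plan](Leaf)((cond, plan) => Filter(cond, plan))
   // Filter(And(Pred(a),Pred(b)), Filter(Pred(c), Leaf))
   
   // Old shape: all predicates flattened and reduced into a single Filter condition
   val oldPlan = Filter(filters.flatten.reduce(And(_, _)), Leaf)
   // Filter(And(And(Pred(a),Pred(b)), Pred(c)), Leaf)
   ```
   
   The new version keeps one `Filter` node per original `Filter`, whereas collapsing everything into one condition would merge predicates across `Filter` boundaries.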



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

