chenzhx commented on a change in pull request #35660:
URL: https://github.com/apache/spark/pull/35660#discussion_r815752475



##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/QueryPlan.scala
##########
@@ -327,16 +323,7 @@ abstract class QueryPlan[PlanType <: QueryPlan[PlanType]]
           val existingAttrMappingSet = transferAttrMapping.map(_._2).toSet
          newValidAttrMapping.filterNot { case (_, a) => existingAttrMappingSet.contains(a) }
        }
-        val resultAttrMapping = if (canGetOutput(plan)) {
-          // We propagate the attributes mapping to the parent plan node to update attributes, so
-          // the `newAttr` must be part of this plan's output.

Review comment:
       As for why there is a problem with the view but not with the table:
   ```
   'Join Inner, ('l1.id = 'l2.id)
   :- SubqueryAlias l1
   :  +- SubqueryAlias spark_catalog.default.t
   :     +- Relation default.t[id#20,name#21] parquet
   +- SubqueryAlias l2
      +- !Filter (count(distinct tempresolvedcolumn(name#23, name)) > cast(1 as bigint))
         +- Aggregate [id#22], [id#22]
            +- SubqueryAlias spark_catalog.default.t
               +- Relation default.t[id#22,name#23] parquet
   ```
   For tables, the `id` and `name` attributes on the two sides of the join carry different expression IDs (`id#20`/`name#21` vs. `id#22`/`name#23`), so they are not treated as duplicates and the removed code path is never executed.
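   To illustrate the point, here is a minimal plain-Scala sketch (the `Attr` case class is hypothetical, standing in for Spark's `AttributeReference`): attribute identity includes the expression ID, so the two sides of the self-join contribute distinct attributes even though both are named `id`, and the duplicate filter removes nothing.

   ```scala
   // Hypothetical stand-in for AttributeReference: identity is (name, exprId).
   case class Attr(name: String, exprId: Long)

   object AttrDedupSketch {
     def main(args: Array[String]): Unit = {
       // Left side of the join: Relation default.t[id#20, name#21]
       val left  = Seq(Attr("id", 20), Attr("name", 21))
       // Right side of the join: Relation default.t[id#22, name#23]
       val right = Seq(Attr("id", 22), Attr("name", 23))

       // The duplicate check drops only attributes already present;
       // equality includes the expression ID, not just the name.
       val existing    = left.toSet
       val newMappings = right.filterNot(existing.contains)

       // Nothing is filtered out: id#22 != id#20 and name#23 != name#21.
       println(newMappings.size) // prints 2
     }
   }
   ```

   With a view, by contrast, both sides can end up referring to attributes with the same expression IDs, which is what triggers the problematic deduplication path.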




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


