prakharjain09 commented on a change in pull request #30300:
URL: https://github.com/apache/spark/pull/30300#discussion_r521211539



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/AliasAwareOutputExpression.scala
##########
@@ -25,20 +25,14 @@ import org.apache.spark.sql.catalyst.plans.physical.{HashPartitioning, Partition
 trait AliasAwareOutputExpression extends UnaryExecNode {
   protected def outputExpressions: Seq[NamedExpression]
 
-  protected def hasAlias: Boolean = outputExpressions.collectFirst { case _: Alias => }.isDefined
+  lazy val aliasMap = AttributeMap(outputExpressions.collect {
+    case a @ Alias(child: AttributeReference, _) => (child, a.toAttribute)
+  })
 
-  protected def replaceAliases(exprs: Seq[Expression]): Seq[Expression] = {
-    exprs.map {
-      case a: AttributeReference => replaceAlias(a).getOrElse(a)
-      case other => other
-    }
-  }
+  protected def hasAlias: Boolean = aliasMap.nonEmpty
 
   protected def replaceAlias(attr: AttributeReference): Option[Attribute] = {

Review comment:
       I looked into AliasHelper - it provides utility methods for pushing expressions down. For example, given a filter f1 on top of a Project, it helps transform f1 so that it can be evaluated below the Project. For that it builds a newName -> oldExpression map.
   
   In our case we need a different kind of map: given an expression coming from below the Project, how does it look above the Project? For that we need an oldAttributeReference -> newAttributeReference map.
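   To make the direction of the two maps concrete, here is a minimal, self-contained sketch. It does not use Spark's actual Catalyst classes; `AttrRef` and `Alias` below are simplified stand-ins, just to show an AliasHelper-style pushdown map versus the pull-up map built in this PR's `aliasMap`:

```scala
// Simplified expression nodes (hypothetical, not Catalyst's AttributeReference/Alias).
object AliasMapSketch {
  sealed trait Expr
  case class AttrRef(name: String) extends Expr
  case class Alias(child: Expr, name: String) extends Expr {
    // The attribute this alias exposes to operators above the Project.
    def toAttribute: AttrRef = AttrRef(name)
  }

  // Project list: SELECT a AS x, b
  val projectList: Seq[Expr] = Seq(Alias(AttrRef("a"), "x"), AttrRef("b"))

  // AliasHelper-style map (newAttribute -> oldExpression): used to rewrite an
  // expression sitting above the Project so it can be evaluated below it.
  val pushDownMap: Map[AttrRef, Expr] = projectList.collect {
    case al @ Alias(child, _) => al.toAttribute -> child
  }.toMap

  // Map needed here (oldAttributeReference -> newAttributeReference): used to
  // rewrite an expression coming from below the Project so it is valid above it.
  val pullUpMap: Map[AttrRef, AttrRef] = projectList.collect {
    case al @ Alias(child: AttrRef, _) => child -> al.toAttribute
  }.toMap

  def main(args: Array[String]): Unit = {
    println(pushDownMap) // Map(AttrRef(x) -> AttrRef(a))
    println(pullUpMap)   // Map(AttrRef(a) -> AttrRef(x))
  }
}
```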




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
