cloud-fan commented on code in PR #34929:
URL: https://github.com/apache/spark/pull/34929#discussion_r893624590


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala:
##########
@@ -64,17 +62,6 @@ class QueryExecution(
   // TODO: Move the planner an optimizer into here from SessionState.
   protected def planner = sparkSession.sessionState.planner
 
-  // The CTE map for the planner shared by the main query and all subqueries.
-  private val cteMap = mutable.HashMap.empty[Long, CTERelationDef]

Review Comment:
   It turns out to be a bug in the plan stability test suite.
   
   The test suite normalizes expr IDs by using the regex `"#\\d+L?".r` to match 
the explain string. However, the exchange node has a special string arg 
`id=#...`: 
https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/exchange/Exchange.scala#L40
   
   The regex can't distinguish between an expr ID and an exchange plan id, so it 
may normalize the plan wrongly.
   
   I'll try to fix it tomorrow.
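   
   To illustrate the collision, here is a minimal standalone sketch (not the actual test suite code; the explain-string fragments below are made up for illustration) showing that a regex keyed on `#<digits>` rewrites both expression IDs and the exchange node's `id=#...` argument:

```scala
object ExprIdNormalizationSketch {
  def main(args: Array[String]): Unit = {
    // The same regex the plan stability suite uses to normalize expr IDs.
    val exprIdRegex = "#\\d+L?".r

    // Hypothetical explain fragments: one with expression IDs,
    // one with an exchange plan id. Both contain "#<digits>".
    val exprFragment     = "Project [id#123L, name#124]"
    val exchangeFragment = "Exchange hashpartitioning(id#123L, 200), [id=#42]"

    // Blindly replacing every match also rewrites the exchange plan id.
    println(exprIdRegex.replaceAllIn(exprFragment, "#x"))
    // Project [id#x, name#x]
    println(exprIdRegex.replaceAllIn(exchangeFragment, "#x"))
    // Exchange hashpartitioning(id#x, 200), [id=#x]   <- plan id normalized too
  }
}
```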


