StevenChenDatabricks commented on code in PR #40385: URL: https://github.com/apache/spark/pull/40385#discussion_r1142811318
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/ExplainUtils.scala:
##########
@@ -73,14 +78,34 @@ object ExplainUtils extends AdaptiveSparkPlanHelper {
    */
   def processPlan[T <: QueryPlan[T]](plan: T, append: String => Unit): Unit = {
     try {
+      // Initialize a reference-unique set of operators to avoid accidental overwrites and to
+      // allow intentional overwriting of IDs generated in a previous AQE iteration
+      val operators = newSetFromMap[QueryPlan[_]](new IdentityHashMap())
+      // Initialize an array of ReusedExchanges to help find Adaptively Optimized Out
+      // Exchanges as part of SPARK-42753
+      val reusedExchanges = ArrayBuffer.empty[ReusedExchangeExec]

Review Comment:
   This `ArrayBuffer` is guaranteed to contain unique elements because we only insert `ReusedExchange` nodes in the `setOpId` function, which checks against the `IdentityHashMap`.

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
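The reference-uniqueness the comment relies on comes from backing the set with `java.util.IdentityHashMap`, which compares keys with `==` rather than `equals()`. A minimal plain-Java sketch (not the Spark code itself; the `String` instances here are just stand-in objects) showing how an identity-backed set differs from an ordinary `HashSet`:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.IdentityHashMap;
import java.util.Set;

public class IdentitySetDemo {
    public static void main(String[] args) {
        // Two distinct instances that are equal by value (a.equals(b) is true, a == b is false)
        String a = new String("plan");
        String b = new String("plan");

        // Set backed by IdentityHashMap: membership is decided by reference identity
        Set<Object> identitySet = Collections.newSetFromMap(new IdentityHashMap<>());
        identitySet.add(a);
        identitySet.add(b); // kept as a separate element: different references

        // Ordinary HashSet: membership is decided by equals()/hashCode()
        Set<Object> valueSet = new HashSet<>();
        valueSet.add(a);
        valueSet.add(b); // dropped: value-equal to the element already present

        System.out.println(identitySet.size()); // 2
        System.out.println(valueSet.size());    // 1
    }
}
```

This mirrors the pattern in the diff: two structurally equal query-plan nodes are still distinct operator instances, so an identity-based set lets each get (or intentionally overwrite) its own ID without one instance accidentally shadowing the other.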