StevenChenDatabricks commented on code in PR #40385: URL: https://github.com/apache/spark/pull/40385#discussion_r1136498864
##########
sql/core/src/test/scala/org/apache/spark/sql/ExplainSuite.scala:
##########
@@ -771,6 +775,130 @@ class ExplainSuiteAE extends ExplainSuiteHelper with EnableAdaptiveExecutionSuite
     FormattedMode, statistics)
   }
 
+  test("SPARK-42753: Process subtree for ReusedExchange with unknown child") {
+    // Simulate a simplified subtree with a ReusedExchange pointing to an Exchange node that
+    // has no ID. This is a rare edge case that could arise during AQE if there are multiple
+    // ReusedExchanges. We check to make sure the child Exchange gets an ID and gets printed.
+    val exchange = ShuffleExchangeExec(UnknownPartitioning(10),
+      RangeExec(org.apache.spark.sql.catalyst.plans.logical.Range(0, 1000, 1, 10)))
+    val reused = ReusedExchangeExec(Seq.empty, exchange)
+
+    var results = ""
+    def appendStr(str: String): Unit = {
+      results = results + str
+    }
+
+    ExplainUtils.processPlan[SparkPlan](reused, appendStr(_))
+
+    val expectedTree = """|ReusedExchange (1)
+                          |+- Exchange (3)
+                          |   +- Range (2)""".stripMargin
+
+    assert(results.contains(expectedTree))
+    assert(results.contains("(1) ReusedExchange [Reuses operator id: 3]"))
+  }
+
+  test("SPARK-42753: Two ReusedExchange Sharing Same Subtree") {

Review Comment:
   @cloud-fan yes, we only print one copy of the `Exchange` if it is shared between multiple `ReusedExchange`s. The order is sort of "random" because the `Exchange` is printed the first time it is traversed. I added a unit test here to demonstrate this.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
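The reviewer's point (a subtree shared by several `ReusedExchange` nodes is printed only once, at the point where it is first traversed) can be illustrated outside Spark with a minimal, hypothetical sketch. The `Node`/`Exchange`/`Reused` classes and the ID-assignment logic below are simplified stand-ins, not Spark's actual `ExplainUtils` implementation:

```scala
// Hypothetical sketch, not Spark's ExplainUtils: shows why a subtree shared by
// multiple ReusedExchange nodes is expanded only at its first traversal.
object SharedSubtreeDemo {
  sealed trait Node { def children: Seq[Node] }
  case class Range(name: String) extends Node { def children: Seq[Node] = Nil }
  case class Exchange(child: Node) extends Node { def children: Seq[Node] = Seq(child) }
  case class Reused(target: Exchange) extends Node { def children: Seq[Node] = Nil }

  private def label(n: Node): String = n match {
    case _: Range    => "Range"
    case _: Exchange => "Exchange"
    case _: Reused   => "ReusedExchange"
  }

  def printPlan(roots: Seq[Node]): String = {
    // Track visited nodes by identity so the same Exchange instance,
    // shared between ReusedExchanges, is only expanded once.
    val ids = new java.util.IdentityHashMap[Node, Int]()
    var nextId = 0
    val sb = new StringBuilder

    def visit(n: Node, depth: Int): Unit = {
      if (!ids.containsKey(n)) { nextId += 1; ids.put(n, nextId) }
      sb.append(("  " * depth) + s"${label(n)} (${ids.get(n)})\n")
      n match {
        case Reused(target) =>
          // Expand the reused subtree only the first time it is reached; later
          // ReusedExchanges just reference the already-assigned operator ID.
          if (!ids.containsKey(target)) visit(target, depth + 1)
        case _ => n.children.foreach(visit(_, depth + 1))
      }
    }

    roots.foreach(visit(_, 0))
    sb.toString
  }

  def main(args: Array[String]): Unit = {
    val shared = Exchange(Range("r"))
    // Two ReusedExchanges pointing at the same Exchange instance: the subtree
    // appears under whichever one is traversed first, hence the "random" order.
    print(printPlan(Seq(Reused(shared), Reused(shared))))
  }
}
```

Running the sketch prints the `Exchange`/`Range` subtree under the first `ReusedExchange` only; the second `ReusedExchange` line carries its own ID but no expanded child.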