liuzqt commented on code in PR #48484:
URL: https://github.com/apache/spark/pull/48484#discussion_r1803759517


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AdaptiveSparkPlanExec.scala:
##########
@@ -815,6 +818,12 @@ case class AdaptiveSparkPlanExec(
     }
   }
 
+  private def assertStageNotFailed(stage: QueryStageExec): Unit = {
+    if (stage.hasFailed) {
+      throw stage.error.get().get
+    }
+  }

Review Comment:
   Updated to apply the same logic as `cleanUpAndThrowException`.
   
   Regarding plan-level exceptions: in the AQE world it makes sense to 
cache the error, since stages might be reused. In the non-AQE world, a collect 
action triggers a new job from the compiled RDD (though the RDD itself is 
cached). I'm not sure how we want to define the error-caching behavior, but 
it's definitely a good question worth further discussion. Can we address it in 
a follow-up item? A sketch of the reuse argument follows below.
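   
   To make the reuse argument concrete, here is a minimal, self-contained sketch of the caching pattern. The `DemoStage` class and its `materialize`/`assertNotFailed` methods are hypothetical stand-ins for illustration only, not `QueryStageExec`'s actual API:
   
   ```scala
   import java.util.concurrent.atomic.AtomicReference
   
   // Hypothetical stand-in for a reusable query stage; not Spark's API.
   class DemoStage(compute: () => Int) {
     // Caches the first failure seen while materializing this stage.
     private val error = new AtomicReference[Option[Throwable]](None)
   
     def hasFailed: Boolean = error.get().isDefined
   
     // Runs the stage's work; on failure, records the error for later reuse.
     def materialize(): Option[Int] =
       try Some(compute())
       catch {
         case t: Throwable =>
           error.compareAndSet(None, Some(t))
           None
       }
   
     // Mirrors the assertStageNotFailed idea: rethrow the cached error on reuse.
     def assertNotFailed(): Unit = {
       if (hasFailed) {
         throw error.get().get
       }
     }
   }
   
   object DemoStageExample extends App {
     val stage = new DemoStage(() => sys.error("boom"))
     stage.materialize()         // first use fails and caches the error
     try stage.assertNotFailed() // a reusing consumer sees the original error
     catch {
       case t: Throwable => println(s"reused stage rethrew: ${t.getMessage}")
     }
   }
   ```
   
   The point is that once a stage has failed, every later consumer of the reused stage observes the original `Throwable` instead of silently re-running the work.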



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

