lordk911 commented on issue #5876:
URL: https://github.com/apache/iceberg/issues/5876#issuecomment-1261648466

   The error message comes from `DAGScheduler.cleanUpAfterSchedulerStop()`:
   
   ```scala
     private[scheduler] def cleanUpAfterSchedulerStop(): Unit = {
       for (job <- activeJobs) {
         val error =
           new SparkException(s"Job ${job.jobId} cancelled because SparkContext was shut down")
         job.listener.jobFailed(error)
         // Tell the listeners that all of the running stages have ended. Don't bother
         // cancelling the stages because if the DAG scheduler is stopped, the entire application
         // is in the process of getting stopped.
         val stageFailedMessage = "Stage cancelled because SparkContext was shut down"
         // The `toArray` here is necessary so that we don't iterate over `runningStages` while
         // mutating it.
         runningStages.toArray.foreach { stage =>
           markStageAsFinished(stage, Some(stageFailedMessage))
         }
         listenerBus.post(SparkListenerJobEnd(job.jobId, clock.getTimeMillis(), JobFailed(error)))
       }
     }
   ```
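
   The `toArray` snapshot pattern in the code above is worth noting: `markStageAsFinished` removes the stage from `runningStages`, so Spark copies the set before iterating to avoid mutating a collection mid-iteration. A minimal standalone sketch of the same pattern (names here are illustrative, not from Spark):

   ```scala
   import scala.collection.mutable

   object SnapshotIterationDemo {
     def main(args: Array[String]): Unit = {
       val runningStages = mutable.HashSet("stage-0", "stage-1", "stage-2")
       // Snapshot with toArray so that removing elements inside the loop
       // does not mutate the collection we are currently iterating over.
       runningStages.toArray.foreach { stage =>
         runningStages -= stage // analogous to markStageAsFinished(...)
       }
       // All stages have been cleanly removed.
       println(runningStages.size)
     }
   }
   ```

   Iterating `runningStages` directly while removing from it would be undefined behavior for a mutable `HashSet`; the defensive copy makes the cleanup loop safe.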


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

