Ngone51 commented on a change in pull request #28746:
URL: https://github.com/apache/spark/pull/28746#discussion_r440905867



##########
File path: core/src/main/scala/org/apache/spark/deploy/LocalSparkCluster.scala
##########
@@ -63,23 +65,34 @@ class LocalSparkCluster(
 
     /* Start the Workers */
     for (workerNum <- 1 to numWorkers) {
-      val workerEnv = Worker.startRpcEnvAndEndpoint(localHostname, 0, 0, coresPerWorker,
-        memoryPerWorker, masters, null, Some(workerNum), _conf,
-        conf.get(config.Worker.SPARK_WORKER_RESOURCE_FILE))
+      val (workerEnv, workerRef) = Worker.startRpcEnvAndEndpoint(localHostname, 0, 0,
+        coresPerWorker, memoryPerWorker, masters, null, Some(workerNum), _conf,
+        conf.get(config.Worker.SPARK_WORKER_RESOURCE_FILE), isLocalCluster = true)
       workerRpcEnvs += workerEnv
+      workerRefs += workerRef
     }
 
     masters
   }
 
   def stop(): Unit = {
     logInfo("Shutting down local Spark cluster.")
+    // SPARK-31922: make sure all the workers have handled the messages (`KillExecutor`,
+    // `ApplicationFinished`) from the Master before we shut down the workers' rpcEnvs.
+    // Otherwise, we could hit "RpcEnv already stopped" error.
+    var busyWorkers = workerRefs
+    while (busyWorkers.nonEmpty) {

Review comment:
       >  Actually, I doubt whether it's worth making so many changes to avoid an error message that doesn't hurt anything.
   
   Hmm... yet this is the smallest set of changes I can think of for a **deterministic** fix.
   
   Do you have any other ideas?
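
   For reference, here is a minimal sketch of how the truncated wait loop above could continue. `IsWorkerIdle` and `waitForWorkersToDrain` are hypothetical names used only for illustration; the actual change may probe the workers differently:

   ```scala
   import scala.collection.mutable.ArrayBuffer
   import org.apache.spark.rpc.RpcEndpointRef

   // Hypothetical probe message, used here only for illustration.
   case object IsWorkerIdle

   // Poll every worker until it reports an empty inbox, so that
   // `KillExecutor`/`ApplicationFinished` are handled before the
   // workers' rpcEnvs are shut down.
   def waitForWorkersToDrain(workerRefs: ArrayBuffer[RpcEndpointRef]): Unit = {
     var busyWorkers: Seq[RpcEndpointRef] = workerRefs.toSeq
     while (busyWorkers.nonEmpty) {
       // Keep only the workers that still report in-flight messages.
       busyWorkers = busyWorkers.filterNot(_.askSync[Boolean](IsWorkerIdle))
       if (busyWorkers.nonEmpty) {
         Thread.sleep(100) // back off briefly before the next round of probes
       }
     }
   }
   ```

   Polling like this keeps the shutdown ordering deterministic instead of relying on a fixed sleep.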



