prakharjain09 commented on pull request #28370:
URL: https://github.com/apache/spark/pull/28370#issuecomment-622929394


   > So I tried it out locally, and if instead of disabling the entire stop (we want to keep the interrupt call), we call interrupt and, inside of the catch block for the interrupt, set stopped to true and remove the join call, I had the executor exit successfully. Would you be willing to try that @prakharjain09 ?
   
   @holdenk Thanks for helping with this. I tried running the tests locally and never saw a hanging worker. I used the following command:
   
   ```
   time ./build/mvn -Phadoop-3.2 \
     -Dsuites="org.apache.spark.storage.BlockManagerDecommissionSuite,org.apache.spark.storage.BlockManagerSuite,org.apache.spark.util.UtilsSuite" \
     -pl=core test
   ```
   
   Here I am running the three test suites one after another, and the worker JVMs terminated immediately after the tests in BlockManagerDecommissionSuite finished. Could you please share the scenario in which you are seeing a hanging worker JVM?
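   
   For reference, below is a minimal sketch (in plain Scala, with hypothetical names such as `DecommissionWorker`, `stopped`, and `shutdown()` that are not the actual identifiers in this PR) of the pattern I understand you to be suggesting: keep the interrupt, treat it inside the catch block as a signal to set the stopped flag, and drop the join so the caller never blocks on the thread.
   
   ```scala
   // Illustrative sketch only: class, field, and method names are hypothetical
   // and do not necessarily match the code in this PR.
   class DecommissionWorker extends Thread("block-manager-decommission-thread") {
     @volatile var stopped = false
   
     override def run(): Unit = {
       while (!stopped) {
         try {
           // ... periodic decommissioning work would go here ...
           Thread.sleep(1000)
         } catch {
           case _: InterruptedException =>
             // Treat the interrupt as a stop signal rather than rethrowing,
             // so the loop condition lets the thread exit cleanly.
             stopped = true
         }
       }
     }
   
     def shutdown(): Unit = {
       stopped = true
       // Keep the interrupt so a sleeping thread wakes up promptly,
       // but skip join() so the caller cannot hang waiting on this thread.
       interrupt()
     }
   }
   ```
   
   Skipping the join avoids the caller waiting on a thread that might never exit, while the volatile flag still lets the loop terminate on its next check.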
   
   

