AngersZhuuuu opened a new pull request, #43906:
URL: https://github.com/apache/spark/pull/43906

   ### What changes were proposed in this pull request?
   We encountered a case where a user calls sc.stop() after all custom code has finished, but the stop gets stuck at some point.
   
   This leads to the following situation:
   
   1. The user calls sc.stop().
   2. sc.stop() gets stuck in some step, but SchedulerBackend.stop has already been called.
   3. Since the YARN ApplicationMaster has not finished, it still calls YarnAllocator.allocateResources().
   4. Since the driver endpoint has stopped, newly allocated executors fail to register.
   5. This repeats until the max number of executor failures is triggered.
   
   Root cause: before stopping, CoarseGrainedSchedulerBackend.stop() calls YarnSchedulerBackend.requestTotalExecutors() to clean up the request info.
   
![image](https://github.com/apache/spark/assets/46485123/4a61fb40-5986-4ecc-9329-369187d5311d)
   
   
   When YarnAllocator handles this empty resource request, resourceTotalExecutorsWithPreferedLocalities is empty, so it skips cleaning targetNumExecutorsPerResourceProfileId and keeps requesting executors.
   
![image](https://github.com/apache/spark/assets/46485123/0133f606-e1d7-4db7-95fe-140c61379102)
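   The intended behavior can be sketched with a minimal, hypothetical model of the allocator's bookkeeping (class and method names here are illustrative, not Spark's actual implementation): an empty request must reset all per-ResourceProfile targets to zero rather than being silently ignored.

   ```scala
   import scala.collection.mutable

   // Hypothetical, simplified stand-in for YarnAllocator's target bookkeeping.
   class SimpleAllocator {
     // Target executor count per ResourceProfile id.
     val targetNumExecutorsPerResourceProfileId: mutable.Map[Int, Int] = mutable.Map()

     // Sketch of the fixed request handler.
     def requestTotalExecutors(resourceProfileToTotalExecs: Map[Int, Int]): Unit = {
       if (resourceProfileToTotalExecs.isEmpty) {
         // The fix: an empty request clears stale targets so no further
         // containers are requested after the scheduler backend stops.
         targetNumExecutorsPerResourceProfileId.keys.toSeq.foreach { id =>
           targetNumExecutorsPerResourceProfileId(id) = 0
         }
       } else {
         resourceProfileToTotalExecs.foreach { case (id, n) =>
           targetNumExecutorsPerResourceProfileId(id) = n
         }
       }
     }
   }
   ```

   With this shape, the stop path's empty request zeroes out the targets instead of leaving the previously requested counts in place.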
   
   
   
   
   
   ### Why are the changes needed?
   Fix the bug described above: after sc.stop(), the YARN allocator should stop requesting executors instead of running until the max executor failure limit is hit.
   
   ### Does this PR introduce _any_ user-facing change?
   No
   
   
   ### How was this patch tested?
   No new tests added.
   
   ### Was this patch authored or co-authored using generative AI tooling?
   No


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

