[
https://issues.apache.org/jira/browse/SPARK-46006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Dongjoon Hyun closed SPARK-46006.
---------------------------------
> YarnAllocator fails to clear targetNumExecutorsPerResourceProfileId after
> YarnSchedulerBackend calls stop
> ----------------------------------------------------------------------------------------------------
>
> Key: SPARK-46006
> URL: https://issues.apache.org/jira/browse/SPARK-46006
> Project: Spark
> Issue Type: Bug
> Components: YARN
> Affects Versions: 3.1.3, 3.2.4, 3.3.2, 3.4.1, 3.5.0
> Reporter: angerszhu
> Assignee: angerszhu
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.2, 3.5.1, 3.3.4, 4.0.0
>
> Attachments: image-2023-11-20-17-56-45-212.png,
> image-2023-11-20-17-56-56-507.png
>
>
> We hit a case where the user calls sc.stop() after all the custom code has
> run, but the stop gets stuck at some point.
> This causes the following situation:
> # The user calls sc.stop()
> # sc.stop() gets stuck partway through, but SchedulerBackend.stop has already been called
> # Since the YARN ApplicationMaster has not finished, it still calls
> YarnAllocator.allocateResources()
> # Since the driver endpoint is stopped, newly allocated executors fail to register
> # This repeats until the max number of executor failures is triggered
> Root cause:
> Before stopping, CoarseGrainedSchedulerBackend.stop() calls
> YarnSchedulerBackend.requestTotalExecutors() to clean up the request info
> !image-2023-11-20-17-56-56-507.png|width=898,height=297!
>
> From the log we can confirm that CoarseGrainedSchedulerBackend.stop() was
> called.
>
>
> When YarnAllocator handles the resulting empty resource request, because
> resourceTotalExecutorsWithPreferedLocalities is empty it skips clearing
> targetNumExecutorsPerResourceProfileId.
> !image-2023-11-20-17-56-45-212.png|width=708,height=379!
>
>
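> The behaviour above can be sketched as follows. This is a minimal model of the allocator's per-ResourceProfile bookkeeping, not the actual Spark code: the map name targetNumExecutorsPerResourceProfileId mirrors the field in YarnAllocator, but the updateTarget methods are hypothetical. The bug is that an empty request map leaves stale targets behind; resetting the profile ids that are absent from the request fixes it.

```scala
import scala.collection.mutable

// Hypothetical model of YarnAllocator's per-ResourceProfile target counts.
object AllocatorSketch {
  // resource profile id -> requested number of executors
  val targetNumExecutorsPerResourceProfileId = mutable.HashMap[Int, Int]()

  // Buggy version: only touches ids present in the incoming request.
  // The empty request sent during shutdown changes nothing, so the
  // allocator keeps asking YARN for executors that can never register.
  def updateTargetBuggy(request: Map[Int, Int]): Unit = {
    request.foreach { case (rpId, num) =>
      targetNumExecutorsPerResourceProfileId(rpId) = num
    }
  }

  // Fixed version: ids missing from the request are reset to 0, so an
  // empty request clears every remaining target.
  def updateTargetFixed(request: Map[Int, Int]): Unit = {
    val stale = targetNumExecutorsPerResourceProfileId.keys.toSet -- request.keySet
    stale.foreach(rpId => targetNumExecutorsPerResourceProfileId(rpId) = 0)
    request.foreach { case (rpId, num) =>
      targetNumExecutorsPerResourceProfileId(rpId) = num
    }
  }
}
```

> With the buggy version, calling updateTargetBuggy(Map.empty) after a target of, say, Map(0 -> 10) leaves the target at 10, so allocateResources() keeps requesting executors against a stopped driver; the fixed version drops it to 0.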
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]