[
https://issues.apache.org/jira/browse/SPARK-55257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18054843#comment-18054843
]
Wenjun Ruan commented on SPARK-55257:
-------------------------------------
[~dongjoon] Ping you again
> Is it possible to allocate executor pod in parallel
> ---------------------------------------------------
>
> Key: SPARK-55257
> URL: https://issues.apache.org/jira/browse/SPARK-55257
> Project: Spark
> Issue Type: Improvement
> Components: Kubernetes
> Affects Versions: 4.1.1
> Reporter: Wenjun Ruan
> Priority: Major
>
> Currently, the driver allocates executor pods one by one.
> When a large number of executors needs to be allocated, this step can become a
> bottleneck.
> Since the allocation work has already been split into per-resource-profile slots
> at this stage, would it be possible to perform the pod creation in parallel?
>
> ```
> ExecutorPodsAllocator.splitSlots(podsToAllocateWithRpId, remainingSlotFromPendingPods)
>   .foreach { case ((rpId, podCountForRpId, targetNum, pendingPodCountForRpId),
>       sharedSlotFromPendingPods) =>
>     val remainingSlotsForRpId = maxPendingPodsPerRpid - pendingPodCountForRpId
>     val numMissingPodsForRpId = targetNum - podCountForRpId
>     val numExecutorsToAllocate = Seq(numMissingPodsForRpId, podAllocationSize,
>       sharedSlotFromPendingPods, remainingSlotsForRpId).min
>     logInfo(log"Going to request ${MDC(LogKeys.COUNT, numExecutorsToAllocate)} executors from" +
>       log" Kubernetes for ResourceProfile Id: ${MDC(LogKeys.RESOURCE_PROFILE_ID, rpId)}, " +
>       log"target: ${MDC(LogKeys.NUM_POD_TARGET, targetNum)}, " +
>       log"known: ${MDC(LogKeys.NUM_POD, podCountForRpId)}, sharedSlotFromPendingPods: " +
>       log"${MDC(LogKeys.NUM_POD_SHARED_SLOT, sharedSlotFromPendingPods)}.")
>     requestNewExecutors(numExecutorsToAllocate, applicationId, rpId, k8sKnownPVCNames)
>   }
> ```
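
To illustrate the idea, the sequential `foreach` above could instead fire one request per slot concurrently on a bounded thread pool. This is only a minimal, self-contained sketch: `ParallelAllocationSketch`, `allocateAll`, and the stubbed `requestNewExecutors` are hypothetical names for illustration, not Spark APIs, and the pool size of 4 is an arbitrary assumption (a real change would need to bound concurrency so the Kubernetes API server is not flooded).

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

object ParallelAllocationSketch {
  // Hypothetical stand-in for ExecutorPodsAllocator.requestNewExecutors:
  // here it just returns the number of pods it would have requested.
  def requestNewExecutors(numExecutorsToAllocate: Int, rpId: Int): Int =
    numExecutorsToAllocate

  // Issue one request per resource-profile slot concurrently on a bounded
  // pool, instead of looping over the slots one by one.
  def allocateAll(slots: Seq[(Int, Int)]): Int = {
    val pool = Executors.newFixedThreadPool(4) // bound concurrent API calls
    implicit val ec: ExecutionContext = ExecutionContext.fromExecutor(pool)
    try {
      val requests = slots.map { case (rpId, numExecutorsToAllocate) =>
        Future(requestNewExecutors(numExecutorsToAllocate, rpId))
      }
      // Wait for every in-flight request and sum the pods requested.
      Await.result(Future.sequence(requests), 30.seconds).sum
    } finally pool.shutdown()
  }
}
```

The total requested count is unchanged versus the sequential loop; only the wall-clock time of the allocation round shrinks, since the per-slot Kubernetes calls overlap.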
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]