[ https://issues.apache.org/jira/browse/SPARK-4214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14206098#comment-14206098 ]
Apache Spark commented on SPARK-4214:
-------------------------------------
User 'sryza' has created a pull request for this issue:
https://github.com/apache/spark/pull/3204
> With dynamic allocation, avoid outstanding requests for more executors than
> pending tasks need
> ----------------------------------------------------------------------------------------------
>
> Key: SPARK-4214
> URL: https://issues.apache.org/jira/browse/SPARK-4214
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core, YARN
> Affects Versions: 1.2.0
> Reporter: Sandy Ryza
> Assignee: Sandy Ryza
>
> Dynamic allocation tries to allocate more executors while pending tasks
> remain. Our current policy can end up with more outstanding executor
> requests than are needed to fulfill all the pending tasks. Capping the
> executor requests at the number of cores needed to fulfill all pending
> tasks would make dynamic allocation's behavior less sensitive to the
> maxExecutors setting.
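
As a rough illustration of the capping policy described above (a sketch only, not the code from the linked pull request), the snippet below computes how many additional executors to request so that running executors plus outstanding requests never exceed what the pending tasks need, nor the configured maximum. The names pendingTasks, tasksPerExecutor, runningExecutors, outstandingRequests, and maxExecutors are assumptions for illustration:

    // Sketch of the capping policy (hypothetical names, not the actual patch).
    object ExecutorRequestCap {

      // Executors needed to run every pending task at once, given how many
      // tasks fit on one executor (executor cores / cores per task).
      def maxExecutorsNeeded(pendingTasks: Int, tasksPerExecutor: Int): Int =
        (pendingTasks + tasksPerExecutor - 1) / tasksPerExecutor  // ceiling division

      // Additional executors to request, keeping running + outstanding
      // requests at or below the cap implied by pending tasks and maxExecutors.
      def executorsToRequest(pendingTasks: Int,
                             tasksPerExecutor: Int,
                             runningExecutors: Int,
                             outstandingRequests: Int,
                             maxExecutors: Int): Int = {
        val needed = maxExecutorsNeeded(pendingTasks, tasksPerExecutor)
        val cap = math.min(needed, maxExecutors)
        math.max(0, cap - runningExecutors - outstandingRequests)
      }
    }

For example, with 100 pending tasks, 4 tasks per executor, 10 running executors, and 5 outstanding requests, this would request at most 10 more executors (ceil(100/4) = 25 needed, minus 15 already running or requested) instead of ramping up toward maxExecutors.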