[ https://issues.apache.org/jira/browse/SPARK-10582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15026284#comment-15026284 ]

Apache Spark commented on SPARK-10582:
--------------------------------------

User 'jerryshao' has created a pull request for this issue:
https://github.com/apache/spark/pull/9963

> Using dynamic executor allocation, if the AM fails, a new AM is started, 
> but the new AM does not allocate executors to the driver
> ---------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-10582
>                 URL: https://issues.apache.org/jira/browse/SPARK-10582
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.4.1, 1.5.1
>            Reporter: KaiXinXIaoLei
>
> While tasks are running, suppose the total number of executors has reached 
> the value of spark.dynamicAllocation.maxExecutors and then the AM fails, so 
> a new AM is restarted. Because the executor total tracked in 
> ExecutorAllocationManager has not changed, the driver does not send a 
> RequestExecutors message to the new AM to ask for executors. The new AM 
> therefore falls back to spark.dynamicAllocation.initialExecutors, so the 
> driver and the AM disagree about the total number of executors.
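The desynchronization described in the report can be sketched with a small, self-contained simulation. This is not Spark code: all class and method names below are hypothetical stand-ins for the driver-side ExecutorAllocationManager state and the AM's executor target, chosen only to illustrate why a change-triggered sync misses an AM restart.

```python
# Minimal sketch (not Spark code) of the driver/AM desync described above.
# All class and method names are hypothetical, for illustration only.

class AppMaster:
    """Stands in for the YARN ApplicationMaster's executor target."""
    def __init__(self, initial_executors):
        # A freshly started AM only knows the configured initial target.
        self.target = initial_executors

    def request_executors(self, n):
        # Stands in for handling the driver's RequestExecutors message.
        self.target = n


class Driver:
    """Stands in for the driver-side view of the executor target."""
    def __init__(self, am, target):
        self.am = am
        self.target = target

    def sync_if_changed(self, new_target):
        # The bug pattern: the driver only notifies the AM when its own
        # target changes, so an AM restart goes unnoticed.
        if new_target != self.target:
            self.target = new_target
            self.am.request_executors(new_target)


# Driver ramps up to maxExecutors (say 100); the AM is in sync.
am = AppMaster(initial_executors=2)
driver = Driver(am, target=2)
driver.sync_if_changed(100)
assert am.target == 100

# The AM fails and a new one starts with only initialExecutors.
am = AppMaster(initial_executors=2)
driver.am = am

# The driver's target is still 100, so no message is sent...
driver.sync_if_changed(100)

# ...and the two sides now disagree about the executor total.
print(driver.target, am.target)  # 100 2
```

A fix along the lines of the linked pull request would have the driver re-send its current target whenever a new AM registers, rather than only when the target value changes.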



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
