[ https://issues.apache.org/jira/browse/SPARK-12027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hunter Kelly resolved SPARK-12027.
----------------------------------
    Resolution: Not A Problem

To be clear, this was a configuration issue with EMR: it was setting 
"spark.executor.instances" behind my back even though I had set 
"spark.dynamicAllocation.enabled" to true.  The solution is to explicitly set 
"spark.executor.instances" to 0.
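
A minimal sketch of that fix as it would look in spark-defaults.conf (the exact
mechanism for overriding EMR's injected defaults may differ by EMR release):

```properties
# Let Spark scale the executor count with the cluster
spark.dynamicAllocation.enabled   true
# Required for dynamic allocation on YARN in Spark 1.x
spark.shuffle.service.enabled     true
# Override the fixed executor count EMR injects; 0 defers
# the decision entirely to dynamic allocation
spark.executor.instances          0
```

The same override can be passed per-job, e.g.
`spark-submit --conf spark.executor.instances=0 ...`.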

> Spark on YARN won't ever ask for more executors than there were containers at 
> time of context creation
> ------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-12027
>                 URL: https://issues.apache.org/jira/browse/SPARK-12027
>             Project: Spark
>          Issue Type: Bug
>          Components: Scheduler, YARN
>    Affects Versions: 1.5.2
>            Reporter: Hunter Kelly
>              Labels: scheduler, scheduling, yarn
>         Attachments: doc-emr-patch.txt
>
>
> Looking at YarnSchedulerBackend, it appears that totalExpectedExecutors is 
> only ever set at startup.
> Based on my experience of running on EMR (and a quick browse through the code 
> supports this), Spark will never ask for more executors than what this was 
> set to.
> This means that if I add more nodes to my YARN cluster, Spark will never pick 
> them up.  This is bad.  The whole point of using Spark on EMR/YARN is to be 
> able to add or remove nodes to the cluster and have Spark "Do The Right 
> Thing".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
