[ https://issues.apache.org/jira/browse/SPARK-18769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15729689#comment-15729689 ]

Marcelo Vanzin commented on SPARK-18769:
----------------------------------------

A little clarification in case the summary is not clear: Spark's dynamic 
allocation will keep growing the number of requested executors until it reaches 
the upper limit, even when the cluster manager hasn't actually been allocating 
new executors. This is sub-optimal: I believe it increases Spark's memory usage 
unnecessarily, and it may also put unnecessary load on the cluster manager. It 
also causes exceptions like the one in SPARK-18750.

That exception should be fixed regardless of this bug. But it would be nice for 
Spark to behave better here and not blindly increase the number of requested 
executors when doing so has no effect.

(Note that, unlike what the summary seems to say, no executors are actually 
created; Spark just keeps requesting them.)
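
To make the behavior concrete, here is a minimal, self-contained Scala sketch. 
This is not Spark's actual ExecutorAllocationManager code, just a toy model of 
the ramp-up policy described above; the upper bound and granted count are 
made-up numbers:

    // Toy model of the ramp-up described above (hypothetical, not Spark source).
    object RampUpSketch {
      def main(args: Array[String]): Unit = {
        val upperBound = 10000  // e.g. spark.dynamicAllocation.maxExecutors
        val granted    = 20     // executors the cluster manager actually delivered
        var target     = 1      // current requested-executor target

        // While tasks are pending, the target keeps doubling up to the upper
        // bound; the number of executors actually granted is never consulted,
        // which is the behavior this issue complains about.
        while (target < upperBound) {
          target = math.min(target * 2, upperBound)
          println(s"requesting $target executors (granted so far: $granted)")
        }
      }
    }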

>  Spark should be smarter about what the upper bound is and restrict the 
> number of executors when dynamic allocation is enabled
> ------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-18769
>                 URL: https://issues.apache.org/jira/browse/SPARK-18769
>             Project: Spark
>          Issue Type: New Feature
>            Reporter: Neerja Khattar
>
> Currently, when dynamic allocation is enabled, 
> spark.dynamicAllocation.maxExecutors is effectively infinite by default, and 
> Spark requests so many executors that it can exceed the YARN NodeManager 
> memory and vcore limits.
> There should be a check so requests do not exceed the YARN resource limits.
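
Until something smarter is implemented, the practical mitigation is to set an 
explicit cap. A minimal sketch, assuming a standard Spark-on-YARN setup (the 
configuration keys are Spark's real ones; the value 100 is a placeholder to be 
sized to your queue's capacity):

    import org.apache.spark.SparkConf

    // Cap dynamic allocation explicitly so the requested target cannot
    // outrun what YARN can actually grant.
    val conf = new SparkConf()
      .set("spark.dynamicAllocation.enabled", "true")
      // The external shuffle service is required for dynamic allocation.
      .set("spark.shuffle.service.enabled", "true")
      .set("spark.dynamicAllocation.maxExecutors", "100")  // placeholder cap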



