[ https://issues.apache.org/jira/browse/SPARK-18769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15729675#comment-15729675 ]

Sean Owen commented on SPARK-18769:
-----------------------------------

Hm, is this actually related to 
https://issues.apache.org/jira/browse/SPARK-18750? I'm unclear on whether these 
are actually intended to be the same issue.

Spark should ramp up executor allocation somewhat smoothly. If the app wants a 
bunch of executors and hasn't limited the max, is there a reason to forbid it? 
The caller should cap executors if desired. If YARN can't fulfill the request, 
it will reject it. That's normal. Why would Spark have to check with YARN to 
see if YARN is enforcing its own limits?
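
For concreteness, the caller-side cap already exists as plain configuration. A 
minimal Scala sketch (the keys are the stock dynamic allocation settings; the 
app name and values are illustrative, not from this issue):

    import org.apache.spark.{SparkConf, SparkContext}

    // Cap dynamic allocation explicitly. spark.dynamicAllocation.maxExecutors
    // defaults to Int.MaxValue, i.e. effectively unbounded, which is what the
    // report below runs into.
    val conf = new SparkConf()
      .setAppName("capped-dynamic-allocation")
      .set("spark.dynamicAllocation.enabled", "true")
      .set("spark.shuffle.service.enabled", "true") // required for dynamic allocation on YARN
      .set("spark.dynamicAllocation.minExecutors", "1")
      .set("spark.dynamicAllocation.maxExecutors", "50") // illustrative cap
    val sc = new SparkContext(conf)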

>  Spark to be smarter about what the upper bound is and to restrict the number 
> of executors when dynamic allocation is enabled
> ------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-18769
>                 URL: https://issues.apache.org/jira/browse/SPARK-18769
>             Project: Spark
>          Issue Type: New Feature
>            Reporter: Neerja Khattar
>
> Currently, when dynamic allocation is enabled, the executor maximum defaults 
> to infinity, and Spark creates so many executors that it can even exceed the 
> YARN NodeManager memory and vcore limits.
> There should be a check so that requests do not exceed the YARN resource limits.
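
If Spark (or the caller) wanted to derive the cap from YARN's advertised 
capacity instead of hard-coding it, the ResourceManager's cluster metrics REST 
endpoint (/ws/v1/cluster/metrics) exposes the totals. A rough sketch only: 
yarnDerivedCap is a hypothetical helper, and the regex-based JSON parsing is 
deliberately simplistic:

    import scala.io.Source

    // Hypothetical: derive an executor cap from the YARN ResourceManager's
    // cluster totals, sized by per-executor memory (MB) and cores.
    // Note these are cluster-wide totals, not the per-queue limits that YARN
    // itself would actually enforce on the request.
    def yarnDerivedCap(rmUrl: String, executorMemMb: Long, executorCores: Long): Int = {
      val json = Source.fromURL(rmUrl + "/ws/v1/cluster/metrics").mkString
      // Naive extraction of a numeric field from the JSON response.
      def field(name: String): Long =
        ("\"" + name + "\":(\\d+)").r
          .findFirstMatchIn(json).map(_.group(1).toLong).getOrElse(0L)
      val byMem   = field("totalMB") / executorMemMb
      val byCores = field("totalVirtualCores") / executorCores
      math.max(1L, math.min(byMem, byCores)).toInt
    }

    // e.g.: conf.set("spark.dynamicAllocation.maxExecutors",
    //   yarnDerivedCap("http://rm-host:8088", 4096, 2).toString)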


