[ https://issues.apache.org/jira/browse/SPARK-18769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889412#comment-15889412 ]

Marcelo Vanzin edited comment on SPARK-18769 at 3/1/17 3:04 AM:
----------------------------------------------------------------

bq. What do you mean by this?

I mean that, if I remember the code correctly, Spark's code will generate one 
container request per container it wants. So if the scheduler decides it wants 
50k containers, the Spark allocator will be sending 50k request objects per 
heartbeat. That's memory used in the Spark allocator code, and it results in 
memory used in the RM that receives the message. And if the RM code is 
naive, it will also look at all of those requests when trying to allocate 
resources (increasing latency for the reply).
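
To make that request-per-container pattern concrete, here is a minimal Scala sketch (not Spark's actual YarnAllocator code; the object/method names, the flat priority, and the absence of locality preferences are placeholders) of an allocator that files one YARN ContainerRequest per missing executor, so a 50k target means 50k request objects built and tracked on the client side ahead of the next heartbeat:

{code}
import org.apache.hadoop.yarn.api.records.{Priority, Resource}
import org.apache.hadoop.yarn.client.api.AMRMClient
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest

object NaiveAllocatorSketch {
  // Files one ContainerRequest per missing executor, mirroring the
  // "one request object per wanted container" behavior described above.
  def requestMissingContainers(
      amClient: AMRMClient[ContainerRequest],
      targetNumExecutors: Int,   // e.g. 50000 when the scheduler wants 50k containers
      runningExecutors: Int,
      executorMemoryMb: Int,
      executorCores: Int): Unit = {
    val missing = targetNumExecutors - runningExecutors
    val capability = Resource.newInstance(executorMemoryMb, executorCores)
    // With a 50k target this loop builds 50k ContainerRequest objects, all of
    // which the AMRMClient keeps around until they are satisfied or cancelled.
    (1 to missing).foreach { _ =>
      amClient.addContainerRequest(
        new ContainerRequest(capability, null, null, Priority.newInstance(1)))
    }
  }
}
{code}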



>  Spark to be smarter about what the upper bound is and to restrict the number of 
> executors when dynamic allocation is enabled
> ------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-18769
>                 URL: https://issues.apache.org/jira/browse/SPARK-18769
>             Project: Spark
>          Issue Type: New Feature
>            Reporter: Neerja Khattar
>
> Currently, when dynamic allocation is enabled, the maximum number of executors 
> is effectively infinite, and Spark creates so many executors that it can exceed 
> the YARN NodeManager memory and vcore limits.
> It should have a check so that it does not exceed the YARN resource limits; 
> see the sketch below.
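
As a rough illustration of the kind of check being requested, here is a hypothetical Scala sketch (not a proposed patch; the executor size is passed in as plain parameters) that derives an upper bound for the executor count from what the running NodeManagers report as their capacity:

{code}
import scala.collection.JavaConverters._
import org.apache.hadoop.yarn.api.records.NodeState
import org.apache.hadoop.yarn.client.api.YarnClient
import org.apache.hadoop.yarn.conf.YarnConfiguration

object ExecutorCapSketch {
  // Sums, across the RUNNING NodeManagers, how many executors of the given
  // size would fit on each node, limited by both memory and vcores.
  def maxExecutorsForCluster(executorMemoryMb: Int, executorCores: Int): Int = {
    val yarnClient = YarnClient.createYarnClient()
    yarnClient.init(new YarnConfiguration())
    yarnClient.start()
    try {
      yarnClient.getNodeReports(NodeState.RUNNING).asScala.map { node =>
        val capability = node.getCapability   // total node capacity, not current headroom
        math.min(capability.getMemory / executorMemoryMb,
                 capability.getVirtualCores / executorCores)
      }.sum
    } finally {
      yarnClient.stop()
    }
  }
}
{code}

A cap like this (or simply a sane default for the max-executors setting) would keep the allocator from asking for far more containers than the cluster can ever grant.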


