[ https://issues.apache.org/jira/browse/SPARK-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572233#comment-14572233 ]

Saisai Shao commented on SPARK-4352:
------------------------------------

Hi [~sandyr], I've started to wonder whether we really need to handle the 
situation where task number <= executor number * cores. If dynamic allocation 
is enabled, the over-demanded containers will soon be ramped down; and if 
there are already enough containers, a request for additional containers 
should be a no-op.

So task number >= executor number * cores is the normal situation, while 
task number <= executor number * cores is transient and will soon be 
corrected by ramp-down. So I don't think we need to treat it as a special 
case. What do you think? :)
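To make the argument concrete, here is a minimal sketch (illustrative only, not Spark's actual ExecutorAllocationManager code; the function names and the simplifying assumption of one core per task are mine) of why the over-demanded case is transient: the target executor count is derived from the task backlog, so any executors beyond that target are candidates for ramp-down on the next evaluation.

```python
import math

def max_executors_needed(num_pending_tasks: int, cores_per_executor: int) -> int:
    """Upper bound on executors the current task backlog can keep busy,
    assuming each task occupies one core (a simplification)."""
    return math.ceil(num_pending_tasks / cores_per_executor)

def over_demanded(current_executors: int, num_pending_tasks: int,
                  cores_per_executor: int) -> int:
    """Executors beyond what the backlog needs; these are the ones
    dynamic allocation would soon ramp down."""
    needed = max_executors_needed(num_pending_tasks, cores_per_executor)
    return max(0, current_executors - needed)
```

For example, with 10 executors of 4 cores each but only 16 pending tasks, 6 executors are surplus and would be released, restoring the normal `task number >= executor number * cores` regime.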

> Incorporate locality preferences in dynamic allocation requests
> ---------------------------------------------------------------
>
>                 Key: SPARK-4352
>                 URL: https://issues.apache.org/jira/browse/SPARK-4352
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core, YARN
>    Affects Versions: 1.2.0
>            Reporter: Sandy Ryza
>            Assignee: Saisai Shao
>            Priority: Critical
>         Attachments: Supportpreferrednodelocationindynamicallocation.pdf
>
>
> Currently, achieving data locality in Spark is difficult unless an 
> application takes resources on every node in the cluster.  
> preferredNodeLocalityData provides a sort of hacky workaround that has been 
> broken since 1.0.
> With dynamic executor allocation, Spark requests executors in response to 
> demand from the application.  When this occurs, it would be useful to look at 
> the pending tasks and communicate their location preferences to the cluster 
> resource manager. 
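The idea in the description — inspect the pending tasks and surface their location preferences to the resource manager — can be sketched roughly as follows (a hypothetical helper, not Spark's or YARN's actual API; the input shape and function name are assumptions): aggregate each pending task's preferred hosts into per-host counts, which a YARN-style allocator could then translate into node-local container requests.

```python
from collections import Counter

def host_preferences(pending_tasks):
    """pending_tasks: list of per-task preferred-host lists,
    e.g. [["host1", "host2"], ["host1"], []].
    Returns a Counter mapping host -> number of pending tasks
    that prefer it, for use when requesting containers."""
    counts = Counter()
    for preferred_hosts in pending_tasks:
        counts.update(preferred_hosts)
    return counts
```

A request for N new executors could then be skewed toward the hosts with the highest counts, instead of being location-agnostic as it is today.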



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
