[ https://issues.apache.org/jira/browse/YARN-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365506#comment-15365506 ]

Karthik Kambatla commented on YARN-5215:
----------------------------------------

YARN-1011 and I assume YARN-5202 primarily target using those resources that 
have been allocated to other containers but not used. I see the value in 
extending this to all unused resources on the node, especially if we can 
release resources immediately in case of resource contention.

My concern is with aggressively scheduling non-YARN resources *without* 
immediate preemption in case of resource contention. It might also be nice to 
have a way for other (white-listed) processes to actively reclaim resources 
from YARN. Maybe the preemption code could be shared between this and 
YARN-1011? 

[~elgoiri] - do you know how long it takes to compute node utilization and 
whether there is a need to improve that too? 

If we look only at cpu and memory utilization, maybe we could oversubscribe on 
disk/network. Any chance we could get the node-level utilization for 
disk/network from the Tetris work? [~asuresh], [~srikanthkandula]? 
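For concreteness, the estimation idea discussed in this thread (derive the external, non-YARN load as total node utilization minus the sum of container utilizations, then schedule against the remaining headroom) could be sketched as below. This is an illustrative sketch only; the class and field names are hypothetical and are not actual YARN APIs:

```python
# Illustrative sketch -- NOT YARN code. NodeStatus and its fields are
# hypothetical stand-ins for the utilization reports a NodeManager sends.
from dataclasses import dataclass

@dataclass
class NodeStatus:
    capacity_mb: int        # physical memory on the node
    node_used_mb: int       # total measured utilization of the node
    container_used_mb: int  # sum of measured utilization across containers
    allocated_mb: int       # memory currently allocated to containers

def external_load_mb(node: NodeStatus) -> int:
    """Estimate memory consumed by processes outside YARN's control:
    whatever the node reports used beyond what containers account for."""
    return max(0, node.node_used_mb - node.container_used_mb)

def schedulable_mb(node: NodeStatus) -> int:
    """Headroom left after reserving room for the external load.
    Conservative: subtracts both the allocation bookkeeping and the
    estimated external load from capacity."""
    return max(0, node.capacity_mb - node.allocated_mb
                  - external_load_mb(node))

node = NodeStatus(capacity_mb=16384, node_used_mb=9000,
                  container_used_mb=6000, allocated_mb=8192)
print(external_load_mb(node))  # 9000 - 6000 = 3000
print(schedulable_mb(node))    # 16384 - 8192 - 3000 = 5192
```

A real implementation would also have to decide how stale such measurements may be, and what to preempt when the external load grows after containers were placed.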

> Scheduling containers based on external load in the servers
> -----------------------------------------------------------
>
>                 Key: YARN-5215
>                 URL: https://issues.apache.org/jira/browse/YARN-5215
>             Project: Hadoop YARN
>          Issue Type: Improvement
>            Reporter: Inigo Goiri
>         Attachments: YARN-5215.000.patch, YARN-5215.001.patch
>
>
> Currently YARN runs containers in the servers assuming that they own all the 
> resources. The proposal is to use the utilization information from the node 
> and the containers to estimate how much is consumed by external processes, 
> and to schedule based on this estimate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
