[ https://issues.apache.org/jira/browse/YARN-5864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15674481#comment-15674481 ]

Wangda Tan commented on YARN-5864:
----------------------------------

Thanks [~curino] for sharing the Firmament paper. I just read it, and it provides a 
lot of insightful ideas. I believe it can work pretty well for a cluster with a 
homogeneous workload, but it may not be able to solve the mixed-workload problem, 
as the paper itself states:

bq. Firmament shows that a single scheduler can attain scalability, but its 
MCMF optimization does not trivially admit multiple independent schedulers. 

So in my mind, for YARN we need a Borg-like architecture so that different kinds 
of workloads can be scheduled by different pluggable scheduling policies and 
scorers. Firmament could be one of these scheduling policies.
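
Just to illustrate the direction I am thinking of, below is a minimal sketch of what 
a pluggable per-workload policy could look like. The interface name and method 
signatures are hypothetical, not an existing YARN or CapacityScheduler API:

{code}
// Hypothetical sketch only -- not an existing YARN interface.
// In a Borg-like setup, each workload class could plug in its own
// policy/scorer; Firmament's MCMF-based scheduler would be one of them.
public interface WorkloadSchedulingPolicy {

  // Whether this policy should handle the given workload class
  // (e.g. "batch", "service", "long-running").
  boolean accepts(String workloadClass);

  // Score how well a node fits a request; higher is better.
  // Simplified resource model: memory in MB plus a vcore count.
  double scoreNode(long nodeFreeMemMb, int nodeFreeVcores,
                   long reqMemMb, int reqVcores);
}
{code}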

I agree with your comment that we should define better semantics for this feature. 
I will think it over and keep you posted.

> Capacity Scheduler preemption for fragmented cluster 
> -----------------------------------------------------
>
>                 Key: YARN-5864
>                 URL: https://issues.apache.org/jira/browse/YARN-5864
>             Project: Hadoop YARN
>          Issue Type: New Feature
>            Reporter: Wangda Tan
>            Assignee: Wangda Tan
>         Attachments: YARN-5864.poc-0.patch
>
>
> YARN-4390 added preemption for reserved containers. However, we found a case 
> where a large container cannot be allocated even though all queues are under 
> their limits.
> For example, we have:
> {code}
> Two queues, a and b, with capacity 50:50 
> Two nodes, n1 and n2, each with 50 resource 
> queue-a currently uses 10 on n1 and 10 on n2 
> queue-b asks for a single container with resource=45.
> {code} 
> The container can be reserved on either host, but no preemption will happen 
> because all queues are under their limits.
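
To make the fragmentation arithmetic concrete, here is a minimal standalone sketch 
(plain Java, not CapacityScheduler code; all numbers come from the example above) 
showing why the resource=45 request can only be reserved but never allocated 
without a new form of preemption:

{code}
public class FragmentationExample {
  public static void main(String[] args) {
    int nodeCapacity = 50;             // n1 and n2 each have 50 resource
    int usedOnN1 = 10, usedOnN2 = 10;  // queue-a's current usage
    int request = 45;                  // queue-b asks for one container of 45

    // Neither node has 45 free (each has 40), so the container can only
    // be reserved on a node, never allocated as-is.
    System.out.println("fits on n1: " + (request <= nodeCapacity - usedOnN1)); // false
    System.out.println("fits on n2: " + (request <= nodeCapacity - usedOnN2)); // false

    // Capacity-based preemption only fires when some queue exceeds its
    // guaranteed capacity, but queue-a (20 of 50) is well under it, so
    // nothing is preempted and the reservation never gets satisfied.
    int queueAGuaranteed = 50;               // capacity 50:50 over 100 total
    int queueAUsed = usedOnN1 + usedOnN2;    // 20
    System.out.println("queue-a over its guarantee: "
        + (queueAUsed > queueAGuaranteed));  // false
  }
}
{code}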


