[ https://issues.apache.org/jira/browse/YARN-1013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16619771#comment-16619771 ]

Íñigo Goiri commented on YARN-1013:
-----------------------------------

Thanks [~asuresh] for [^YARN-1013-001.branch-2.patch].
A couple of general questions:
* Can we get a patch for trunk so that Yetus is able to run (branch-2 has 
issues)?
* Can you give an overview comparing this to the FS approach? I went through the 
patch, and it is hard to compare since this one uses the allocator.

Comments on the patch itself:
* Some of the debug messages seem intended for development. Should we keep all of 
them?
* Can you add more comments to {{testContainerOverAllocation()}}? For example, 
we set up one node without overallocation and one with it. Why those numbers, and 
what is the goal?
* Can we add a couple of lower-level unit tests, just testing the allocator or the 
scheduler?
* There are many whitespace-only fixes; can we avoid most of them? In particular, 
pass null by default as the second parameter to registerNode for TestAMRestart and 
TestReservations, as in the sketch below.
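
To make the registerNode suggestion concrete, here is a minimal sketch of what I 
mean; the overloaded signature and the extra parameter are assumptions about the 
patch, not taken from it:
{code:java}
// Placeholder sketch, not from the patch: keep the pre-patch registerNode
// signature and have it delegate to the new overload with a null
// over-allocation argument, so the existing call sites in TestAMRestart and
// TestReservations do not need to be touched.
public MockNM registerNode(String nodeIdStr, int memory) throws Exception {
  // null = no over-allocation configured; same behavior as before the patch.
  return registerNode(nodeIdStr, memory, null);
}
{code}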

> CS should watch resource utilization of containers and allocate speculative 
> containers if appropriate
> -----------------------------------------------------------------------------------------------------
>
>                 Key: YARN-1013
>                 URL: https://issues.apache.org/jira/browse/YARN-1013
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>            Reporter: Arun C Murthy
>            Assignee: Arun Suresh
>            Priority: Major
>         Attachments: YARN-1013-001.branch-2.patch
>
>
> CS should watch resource utilization of containers (provided by NM in 
> heartbeat) and allocate speculative containers (at lower OS priority) if 
> appropriate.


