[ https://issues.apache.org/jira/browse/YARN-6194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15879094#comment-15879094 ]

ASF GitHub Bot commented on YARN-6194:
--------------------------------------

Github user kambatla commented on a diff in the pull request:

    https://github.com/apache/hadoop/pull/196#discussion_r102556457
  
    --- Diff: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java ---
    @@ -3369,7 +3369,48 @@ public void testBasicDRFWithQueues() throws Exception {
         scheduler.handle(updateEvent);
         Assert.assertEquals(1, app2.getLiveContainers().size());
       }
    -  
    +
    +  @Test
    +  public void testDRFWithClusterResourceChanges() throws Exception {
    --- End diff ---
    
    Using a "real" scheduler with mock nodes seems excessive for this. Can we just mock the scheduler and context? Also, this might be a better fit for TestDRF than TestFairScheduler.
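
    A rough sketch of the mock-based shape being suggested (all interface and class names here are hypothetical stand-ins, not the actual Hadoop test APIs or a Mockito-based harness): stub out just the narrow cluster-resource view the policy needs, and drive the policy against that stub directly instead of standing up a real scheduler with mock nodes.

    ```java
    // Hypothetical sketch: test a scheduling policy against a stubbed
    // context rather than a real scheduler. Names are illustrative only,
    // not the actual Hadoop YARN classes.
    public class MockContextSketch {
        /** Minimal view of what the policy needs from the scheduler context. */
        interface SchedulerContext {
            int clusterMemoryMb();
        }

        /** Unit under test: depends only on the narrow context interface. */
        static class DominantSharePolicy {
            double share(int appMemoryMb, SchedulerContext ctx) {
                return (double) appMemoryMb / ctx.clusterMemoryMb();
            }
        }

        public static void main(String[] args) {
            // Hand-rolled stub standing in for a mocked context.
            final int[] capacity = {4096};
            SchedulerContext stub = () -> capacity[0];

            DominantSharePolicy policy = new DominantSharePolicy();
            System.out.println(policy.share(1024, stub)); // prints 0.25

            capacity[0] = 8192;  // simulate the cluster growing mid-test
            System.out.println(policy.share(1024, stub)); // prints 0.125
        }
    }
    ```

    Because the stub's capacity can be changed between calls, a test written this way can cover the cluster-resource-change case without any node bookkeeping.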


> Cluster capacity in SchedulingPolicy is updated only on allocation file reload
> ------------------------------------------------------------------------------
>
>                 Key: YARN-6194
>                 URL: https://issues.apache.org/jira/browse/YARN-6194
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: fairscheduler
>    Affects Versions: 2.8.0
>            Reporter: Karthik Kambatla
>            Assignee: Yufei Gu
>
> Some of the {{SchedulingPolicy}} methods need the cluster capacity, which is
> currently set via {{#initialize}}. However, {{initialize()}} is called only on
> allocation-file reload. If nodes are added between reloads, the updated cluster
> capacity is not taken into account until the next reload.
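
The behavior reported above can be illustrated with a minimal, self-contained sketch (all class and method names here are hypothetical stand-ins, not the actual Hadoop YARN APIs): a policy that caches the cluster capacity once at initialize() goes stale when nodes join, while one that reads the current capacity on each call does not.

```java
// Minimal sketch of the stale-capacity problem. ClusterTracker,
// CachingPolicy, and LivePolicy are hypothetical illustrations,
// not the real Hadoop YARN classes.
public class StaleCapacityDemo {
    /** Stands in for the scheduler's view of total cluster resources. */
    static class ClusterTracker {
        private int totalMemoryMb = 0;
        void addNode(int memoryMb) { totalMemoryMb += memoryMb; }
        int getTotalMemoryMb() { return totalMemoryMb; }
    }

    /** Caches capacity once at initialize(), like the reported behavior. */
    static class CachingPolicy {
        private int cachedCapacity;
        void initialize(ClusterTracker cluster) {
            cachedCapacity = cluster.getTotalMemoryMb();
        }
        int capacity() { return cachedCapacity; }
    }

    /** Reads the current capacity on every call instead of caching it. */
    static class LivePolicy {
        private ClusterTracker cluster;
        void initialize(ClusterTracker cluster) { this.cluster = cluster; }
        int capacity() { return cluster.getTotalMemoryMb(); }
    }

    public static void main(String[] args) {
        ClusterTracker cluster = new ClusterTracker();
        cluster.addNode(4096);

        CachingPolicy caching = new CachingPolicy();
        LivePolicy live = new LivePolicy();
        caching.initialize(cluster);   // happens only on allocation reload
        live.initialize(cluster);

        cluster.addNode(4096);         // a node joins between reloads

        System.out.println("caching=" + caching.capacity()); // prints caching=4096
        System.out.println("live=" + live.capacity());       // prints live=8192
    }
}
```

The live variant corresponds to looking the capacity up at call time (or having the scheduler push updates on node events) rather than snapshotting it at initialization.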



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
