[ https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15981830#comment-15981830 ]

Eric Payne commented on YARN-2113:
----------------------------------

[~sunilg],
It looks like
{{IntraQueueCandidatesSelector#initializeUsageAndUserLimitForCompute}} should
be cloning the used resources it reads from the {{LeafQueue}}:
{code}

       // Initialize used resource of a given user for rolling computation.
       rollingResourceUsagePerUser.put(user,
-          leafQueue.getUser(user).getResourceUsage().getUsed(partition));
+        Resources.clone(
+          leafQueue.getUser(user).getResourceUsage().getUsed(partition)));
       if (LOG.isDebugEnabled()) {
         LOG.debug("Rolling resource usage for user:" + user + " is : "
             + rollingResourceUsagePerUser.get(user));
{code}
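To illustrate the point above: without {{Resources.clone}}, the map holds a reference to the live usage object, so later scheduler updates silently change the "snapshot" mid-computation. A minimal sketch of that aliasing pitfall, using a hypothetical stand-in {{Resource}} class rather than Hadoop's actual {{Resource}}/{{Resources}} types:

```java
import java.util.HashMap;
import java.util.Map;

public class CloneSnapshotDemo {
    // Hypothetical stand-in for YARN's mutable Resource record.
    static class Resource {
        long memory;
        Resource(long memory) { this.memory = memory; }
        // Stand-in for Resources.clone(resource).
        Resource copy() { return new Resource(memory); }
    }

    public static void main(String[] args) {
        Resource liveUsage = new Resource(1024);          // live, mutable usage object

        Map<String, Resource> snapshotByRef = new HashMap<>();
        Map<String, Resource> snapshotCloned = new HashMap<>();
        snapshotByRef.put("user1", liveUsage);            // stores an alias to the live object
        snapshotCloned.put("user1", liveUsage.copy());    // stores an independent copy

        liveUsage.memory += 512;                          // scheduler updates usage later

        System.out.println(snapshotByRef.get("user1").memory);  // 1536: "snapshot" drifted
        System.out.println(snapshotCloned.get("user1").memory); // 1024: snapshot stays fixed
    }
}
```

The cloned map behaves as a true point-in-time snapshot for the rolling computation, while the un-cloned one tracks every subsequent mutation.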


> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---------------------------------------------------------------
>
>                 Key: YARN-2113
>                 URL: https://issues.apache.org/jira/browse/YARN-2113
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: scheduler
>            Reporter: Vinod Kumar Vavilapalli
>            Assignee: Sunil G
>         Attachments: 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, 
> YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
