[
https://issues.apache.org/jira/browse/YARN-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15811817#comment-15811817
]
Sunil G commented on YARN-5889:
-------------------------------
Thanks [~eepayne] for the detailed explanation.
I was thinking along similar lines. Just one point to clarify here:
bq. if number of active users has increased or decreased, all active users in
preComputedActiveUserLimit are invalidated, and not just the one that was
activated/deactivated. This requires recalculation for other users when it is
not necessary.
Since the number of active users has changed, we need to recalculate all
active users' limits, correct? Because we divide the total resource used by
active users by the active-user count. In the proposed patch as well, the
cached limit will disagree with the actual active-user count when we query
the user-limit for that user. In my patch, I cleared the whole map for that
reason. Could you please help to elaborate a little more?
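To make this concrete, here is a minimal sketch of why every cached limit goes
stale when the active-user count changes. The class, method name and the
simplified formula below are illustrative assumptions only, not the actual
CapacityScheduler code:
{code:java}
// Minimal sketch, NOT the real CapacityScheduler implementation.
// Assumes the simplified relation: per-user limit is roughly the resource
// consumed by active users divided by the active-user count.
public class UserLimitSketch {

  // Hypothetical helper: because the result depends on activeUserCount,
  // activating or deactivating one user makes every cached value derived
  // from the old count stale, not just the entry for that one user.
  static long computeActiveUserLimit(long resourceUsedByActiveUsers,
                                     int activeUserCount) {
    return resourceUsedByActiveUsers / Math.max(activeUserCount, 1);
  }

  public static void main(String[] args) {
    long used = 120L; // e.g. 120 GB consumed by active users
    System.out.println(computeActiveUserLimit(used, 3)); // 40 per user
    System.out.println(computeActiveUserLimit(used, 4)); // 30 per user: all limits change
  }
}
{code}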
I also feel cachedLimit makes the code simpler, so I have no issue making that
change. However, I would need two cached limits in the user data structure
(one for active users and another for all users). Is my thinking in line with
yours? Please help to clarify. Thank you.
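A rough sketch of what I mean by keeping two cached limits per user. Class
and field names here are illustrative only and do not come from any attached
patch; the version counter is just one possible way to invalidate lazily
instead of clearing a whole map:
{code:java}
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch only; not the structure used in any attached patch.
class CachedUserLimit {
  volatile long activeUsersLimit = -1; // cached limit vs. active-user count
  volatile long allUsersLimit = -1;    // cached limit vs. all-users count
  volatile long cachedVersion = -1;    // user-set version these were computed for
}

class UsersManagerSketch {
  // Bumped whenever a user is activated/deactivated or added/removed.
  private final AtomicLong userSetVersion = new AtomicLong();

  void onUserSetChanged() {
    userSetVersion.incrementAndGet();
  }

  long getActiveUserLimit(CachedUserLimit u, long freshlyComputedLimit) {
    long current = userSetVersion.get();
    if (u.cachedVersion != current || u.activeUsersLimit < 0) {
      // Cache is stale: store the recomputed value (supplied by the caller here).
      u.activeUsersLimit = freshlyComputedLimit;
      u.cachedVersion = current;
    }
    return u.activeUsersLimit;
  }
}
{code}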
> Improve user-limit calculation in capacity scheduler
> ----------------------------------------------------
>
> Key: YARN-5889
> URL: https://issues.apache.org/jira/browse/YARN-5889
> Project: Hadoop YARN
> Issue Type: Bug
> Components: capacity scheduler
> Reporter: Sunil G
> Assignee: Sunil G
> Attachments: YARN-5889.0001.patch,
> YARN-5889.0001.suggested.patchnotes, YARN-5889.v0.patch, YARN-5889.v1.patch,
> YARN-5889.v2.patch
>
>
> Currently user-limit is computed during every heartbeat allocation cycle with
> a write lock. To improve performance, this ticket focuses on moving the
> user-limit calculation out of the heartbeat allocation flow.