[
https://issues.apache.org/jira/browse/YARN-6947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
YunFan Zhou updated YARN-6947:
------------------------------
Description:
Each time the FairScheduler assigns a container, it checks whether the
resources used by the queue exceed its Max Share. However, the current
calculation of a queue's resource usage is particularly inefficient: it
recursively iterates over all child queues, giving high time complexity.
We can refactor this logic by using a lazy-update approach.
{code:java}
@Override
public Resource assignContainer(FSSchedulerNode node) {
  Resource assigned = Resources.none();
  // If this queue is over its limit, reject
  if (!assignContainerPreCheck(node)) {
    return assigned;
  }
{code}
{code:java}
/**
 * Helper method to check if the queue should attempt assigning resources
 *
 * @return true if check passes (can assign) or false otherwise
 */
boolean assignContainerPreCheck(FSSchedulerNode node) {
  if (node.getReservedContainer() != null) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("Assigning container failed on node '" + node.getNodeName()
          + "' because it has reserved containers.");
    }
    return false;
  } else if (!Resources.fitsIn(getResourceUsage(), maxShare)) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("Assigning container failed on node '" + node.getNodeName()
          + "' because queue resource usage is larger than MaxShare: "
          + dumpState());
    }
    return false;
  } else {
    return true;
  }
}
{code}
{code:java}
@Override
public Resource getResourceUsage() {
  Resource usage = Resources.createResource(0);
  readLock.lock();
  try {
    for (FSQueue child : childQueues) {
      Resources.addTo(usage, child.getResourceUsage());
    }
  } finally {
    readLock.unlock();
  }
  return usage;
}
{code}
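The lazy-update idea can be sketched as follows: each queue keeps a cached usage counter that is adjusted incrementally when a container is allocated or released, with the delta propagated up to the parent, so the Max Share check reads a cached value in O(1) instead of recursing over all children on every assignment. This is only an illustrative sketch, not the actual patch; the LazyQueue class and its methods are hypothetical.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch (hypothetical class): each queue caches its
// aggregated memory usage, updated incrementally and propagated to
// its parent, so reads are O(1) rather than O(subtree size).
class LazyQueue {
  private final LazyQueue parent;               // null for the root queue
  private final AtomicLong usedMemoryMb = new AtomicLong();

  LazyQueue(LazyQueue parent) {
    this.parent = parent;
  }

  // Called on container allocation (positive delta) or release
  // (negative delta); walks up the queue tree once, O(depth).
  void incUsage(long deltaMb) {
    for (LazyQueue q = this; q != null; q = q.parent) {
      q.usedMemoryMb.addAndGet(deltaMb);
    }
  }

  // O(1) read that replaces the recursive aggregation done by
  // getResourceUsage() in the snippet above.
  long getUsedMemoryMb() {
    return usedMemoryMb.get();
  }
}
```

With this shape, the pre-check would compare the cached value against Max Share; the cost moves from every scheduling attempt to the much rarer allocate/release events.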
was:
{code:java}
@Override
public Resource assignContainer(FSSchedulerNode node) {
Resource assigned = Resources.none();
// If this queue is over its limit, reject
if (!assignContainerPreCheck(node)) {
return assigned;
}
{code}
> The implementation of Schedulable#getResourceUsage is so inefficient that it
> can reduce scheduling performance
> ----------------------------------------------------------------------------------------------------------------
>
> Key: YARN-6947
> URL: https://issues.apache.org/jira/browse/YARN-6947
> Project: Hadoop YARN
> Issue Type: Improvement
> Components: fairscheduler
> Reporter: YunFan Zhou
> Priority: Critical
>
> Each time the FairScheduler assigns a container, it checks whether the
> resources used by the queue exceed its Max Share. However, the current
> calculation of a queue's resource usage is particularly inefficient: it
> recursively iterates over all child queues, giving high time complexity.
> We can refactor this logic by using a lazy-update approach.
> {code:java}
> @Override
> public Resource assignContainer(FSSchedulerNode node) {
>   Resource assigned = Resources.none();
>   // If this queue is over its limit, reject
>   if (!assignContainerPreCheck(node)) {
>     return assigned;
>   }
> {code}
> {code:java}
> /**
>  * Helper method to check if the queue should attempt assigning resources
>  *
>  * @return true if check passes (can assign) or false otherwise
>  */
> boolean assignContainerPreCheck(FSSchedulerNode node) {
>   if (node.getReservedContainer() != null) {
>     if (LOG.isDebugEnabled()) {
>       LOG.debug("Assigning container failed on node '" + node.getNodeName()
>           + "' because it has reserved containers.");
>     }
>     return false;
>   } else if (!Resources.fitsIn(getResourceUsage(), maxShare)) {
>     if (LOG.isDebugEnabled()) {
>       LOG.debug("Assigning container failed on node '" + node.getNodeName()
>           + "' because queue resource usage is larger than MaxShare: "
>           + dumpState());
>     }
>     return false;
>   } else {
>     return true;
>   }
> }
> {code}
> {code:java}
> @Override
> public Resource getResourceUsage() {
>   Resource usage = Resources.createResource(0);
>   readLock.lock();
>   try {
>     for (FSQueue child : childQueues) {
>       Resources.addTo(usage, child.getResourceUsage());
>     }
>   } finally {
>     readLock.unlock();
>   }
>   return usage;
> }
> {code}
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]