[
https://issues.apache.org/jira/browse/YARN-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15951798#comment-15951798
]
Wangda Tan commented on YARN-6406:
----------------------------------
Thanks [~asuresh] for working on the fix. My comments:
1) Why are the changes to AppInfo required?
2) Not caused by your patch (it was actually caused by mine): in
LocalitySchedulingPlacementSet, decrementOutstanding calls appSchedulingInfo
directly, which could potentially cause trouble since the child is modifying
its parent's state. Is it possible to move this logic to
AppSchedulingInfo#allocate? If that is a non-trivial change, I can take it up
in a separate JIRA; a rough sketch of the idea is below.
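To make 2) concrete, here is a minimal, hypothetical sketch with simplified stand-in classes (not the real YARN types or signatures): the placement set only maintains its own outstanding counter, and the parent-side allocate() decides when a scheduler key has become unused and removes it.
{code:java}
// Hypothetical, simplified stand-ins -- not the actual YARN classes or signatures.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

class SketchPlacementSet {
  private final AtomicInteger outstanding = new AtomicInteger();

  void addRequests(int numContainers) {
    outstanding.addAndGet(numContainers);
  }

  // The child only updates its own counter and reports whether anything
  // remains outstanding; it no longer reaches back into the parent.
  boolean decrementOutstanding() {
    return outstanding.decrementAndGet() > 0;
  }
}

class SketchAppSchedulingInfo {
  // scheduler key (modelled here as a plain Integer priority) -> placement set
  private final Map<Integer, SketchPlacementSet> placementSets =
      new ConcurrentHashMap<>();

  void updateRequest(int schedulerKey, int numContainers) {
    placementSets.computeIfAbsent(schedulerKey, k -> new SketchPlacementSet())
        .addRequests(numContainers);
  }

  // Parent-side allocate(): the parent decides when a key becomes unused
  // and garbage-collects it, based on what the child reports.
  void allocate(int schedulerKey) {
    SketchPlacementSet ps = placementSets.get(schedulerKey);
    if (ps != null && !ps.decrementOutstanding()) {
      placementSets.remove(schedulerKey);  // GC the now-unused scheduler key
    }
  }
}
{code}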
> Garbage Collect unused SchedulerRequestKeys
> -------------------------------------------
>
> Key: YARN-6406
> URL: https://issues.apache.org/jira/browse/YARN-6406
> Project: Hadoop YARN
> Issue Type: Improvement
> Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha2
> Reporter: Arun Suresh
> Assignee: Arun Suresh
> Attachments: YARN-6406.001.patch
>
>
> YARN-5540 introduced some optimizations to remove satisfied SchedulerKeys
> from the AppSchedulingInfo. It looks like after YARN-6040,
> SchedulerRequestKeys are removed only if the Application sends a request
> with numContainers == 0, whereas earlier the outstanding schedulerKeys were
> also removed as soon as a container was allocated.
> An additional optimization we were hoping to include is to remove the
> ResourceRequests themselves once numContainers == 0, since we see in our
> clusters that RM heap consumption increases drastically due to a large
> number of ResourceRequests with 0 numContainers.
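Regarding the ResourceRequest point in the description above, a rough sketch of the proposed optimization, again with hypothetical, simplified types rather than the real AppSchedulingInfo code: drop a ResourceRequest entry as soon as its numContainers reaches 0, so zero-sized requests stop accumulating on the RM heap.
{code:java}
// Hypothetical, simplified sketch -- not the real YARN ResourceRequest handling.
import java.util.HashMap;
import java.util.Map;

class SketchResourceRequest {
  final String resourceName;  // e.g. a host, a rack, or "*"
  int numContainers;

  SketchResourceRequest(String resourceName, int numContainers) {
    this.resourceName = resourceName;
    this.numContainers = numContainers;
  }
}

class SketchRequestTable {
  // resourceName -> outstanding request for one scheduler key
  private final Map<String, SketchResourceRequest> requests = new HashMap<>();

  void update(SketchResourceRequest req) {
    if (req.numContainers == 0) {
      // Remove satisfied requests outright instead of keeping a 0-container
      // entry around, which is what inflates RM heap usage.
      requests.remove(req.resourceName);
    } else {
      requests.put(req.resourceName, req);
    }
  }

  void containerAllocated(String resourceName) {
    SketchResourceRequest req = requests.get(resourceName);
    if (req != null && --req.numContainers == 0) {
      requests.remove(resourceName);
    }
  }

  boolean isEmpty() {
    // An empty table means the SchedulerRequestKey itself can be GC'd.
    return requests.isEmpty();
  }
}
{code}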