[jira] [Commented] (YARN-6406) Garbage Collect unused SchedulerRequestKeys

2017-04-03 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954485#comment-15954485
 ] 

Wangda Tan commented on YARN-6406:
--

LGTM, +1. Thanks [~asuresh], will commit tomorrow.

> Garbage Collect unused SchedulerRequestKeys
> ---
>
> Key: YARN-6406
> URL: https://issues.apache.org/jira/browse/YARN-6406
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha2
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-6406.001.patch, YARN-6406.002.patch
>
>
> YARN-5540 introduced some optimizations to remove satisfied SchedulerKeys
> from the AppSchedulingInfo. It looks like after YARN-6040,
> SchedulerRequestKeys are removed only if the Application sends a request
> with 0 numContainers, whereas earlier, outstanding schedulerKeys were
> also removed as soon as a container was allocated.
> An additional optimization we were hoping to include is to remove the
> ResourceRequest itself once numContainers == 0, since we see in our
> clusters that RM heap space consumption increases drastically due to a
> large number of ResourceRequests with 0 numContainers.
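A minimal sketch of the proposed cleanup, assuming a simplified stand-in for the real {{AppSchedulingInfo}} bookkeeping ({{SchedulerKeyTable}} and its string keys are illustrative, not the actual YARN classes):

{code:java}
import java.util.HashMap;
import java.util.Map;

/** Illustrative stand-in for AppSchedulingInfo's per-key bookkeeping. */
class SchedulerKeyTable {
  // schedulerKey -> outstanding numContainers
  private final Map<String, Integer> outstanding = new HashMap<>();

  void updateRequest(String schedulerKey, int numContainers) {
    if (numContainers <= 0) {
      // Proposed optimization: drop the entry entirely instead of keeping
      // a ResourceRequest with numContainers == 0 on the RM heap.
      outstanding.remove(schedulerKey);
    } else {
      outstanding.put(schedulerKey, numContainers);
    }
  }

  void allocate(String schedulerKey) {
    Integer n = outstanding.get(schedulerKey);
    if (n == null) {
      return;
    }
    if (n <= 1) {
      // Key is satisfied: garbage-collect it as soon as the last container
      // is allocated, instead of waiting for a 0-container update.
      outstanding.remove(schedulerKey);
    } else {
      outstanding.put(schedulerKey, n - 1);
    }
  }
}
{code}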






[jira] [Commented] (YARN-6406) Garbage Collect unused SchedulerRequestKeys

2017-04-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15954339#comment-15954339
 ] 

Hadoop QA commented on YARN-6406:
-

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 26s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
| +1 | mvninstall | 15m 38s | trunk passed |
| +1 | compile | 0m 32s | trunk passed |
| +1 | checkstyle | 0m 33s | trunk passed |
| +1 | mvnsite | 0m 34s | trunk passed |
| +1 | mvneclipse | 0m 15s | trunk passed |
| +1 | findbugs | 1m 1s | trunk passed |
| +1 | javadoc | 0m 23s | trunk passed |
| +1 | mvninstall | 0m 33s | the patch passed |
| +1 | compile | 0m 30s | the patch passed |
| +1 | javac | 0m 30s | the patch passed |
| -0 | checkstyle | 0m 29s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 7 new + 577 unchanged - 7 fixed = 584 total (was 584) |
| +1 | mvnsite | 0m 31s | the patch passed |
| +1 | mvneclipse | 0m 12s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 1m 4s | the patch passed |
| +1 | javadoc | 0m 18s | the patch passed |
| -1 | unit | 38m 32s | hadoop-yarn-server-resourcemanager in the patch failed. |
| +1 | asflicense | 0m 17s | The patch does not generate ASF License warnings. |
| | | 63m 8s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |

|| Subsystem || Report/Notes ||
| Docker | Image: yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6406 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12861764/YARN-6406.002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux dfd376479e47 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5faa949 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/15492/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| unit | https://builds.apache.org/job/PreCommit-YARN-Build/15492/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15492/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15492/console |

[jira] [Commented] (YARN-6406) Garbage Collect unused SchedulerRequestKeys

2017-04-03 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15953895#comment-15953895
 ] 

Wangda Tan commented on YARN-6406:
--

Filed YARN-6429 and will work on that once this patch is done.

> Garbage Collect unused SchedulerRequestKeys
> ---
>
> Key: YARN-6406
> URL: https://issues.apache.org/jira/browse/YARN-6406
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha2
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-6406.001.patch
>
>
> YARN-5540 introduced some optimizations to remove satisfied SchedulerKeys
> from the AppSchedulingInfo. It looks like after YARN-6040,
> SchedulerRequestKeys are removed only if the Application sends a request
> with 0 numContainers, whereas earlier, outstanding schedulerKeys were
> also removed as soon as a container was allocated.
> An additional optimization we were hoping to include is to remove the
> ResourceRequest itself once numContainers == 0, since we see in our
> clusters that RM heap space consumption increases drastically due to a
> large number of ResourceRequests with 0 numContainers.






[jira] [Commented] (YARN-6406) Garbage Collect unused SchedulerRequestKeys

2017-04-03 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15953889#comment-15953889
 ] 

Wangda Tan commented on YARN-6406:
--

bq. I think the right approach is to fix the test case (which might be harder)... thoughts?
I would suggest fixing the test case instead.

bq. Please feel free to open another JIRA for that (I can help review), but for the time being, I think we can remove the scheduler key as is done in this patch?
Will do that.

> Garbage Collect unused SchedulerRequestKeys
> ---
>
> Key: YARN-6406
> URL: https://issues.apache.org/jira/browse/YARN-6406
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha2
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-6406.001.patch
>
>
> YARN-5540 introduced some optimizations to remove satisfied SchedulerKeys
> from the AppSchedulingInfo. It looks like after YARN-6040,
> SchedulerRequestKeys are removed only if the Application sends a request
> with 0 numContainers, whereas earlier, outstanding schedulerKeys were
> also removed as soon as a container was allocated.
> An additional optimization we were hoping to include is to remove the
> ResourceRequest itself once numContainers == 0, since we see in our
> clusters that RM heap space consumption increases drastically due to a
> large number of ResourceRequests with 0 numContainers.






[jira] [Commented] (YARN-6406) Garbage Collect unused SchedulerRequestKeys

2017-03-31 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951845#comment-15951845
 ] 

Arun Suresh commented on YARN-6406:
---

Thanks for the review [~leftnoteasy],

bq. Why are the AppInfo changes required?
Hmmm... {{TestRMWebServicesApps}} was complaining because it expects a resource
request object. With this patch, if there are no outstanding resource requests,
the AppInfo will not contain any resource request objects, so I decided to send
a dummy ResourceRequest object when none exist. I think the right approach is
to fix the test case (which might be harder)... thoughts?

bq. In LocalitySchedulingPlacementSet: it calls appSchedulingInfo directly in
decrementOutstanding ...
I don't think it is too much of a problem (based on existing code paths), but
yes, maybe we should clean it up, since it could lead to circular references to
the same placement set object.
Please feel free to open another JIRA for that (I can help review), but for the
time being, I think we can remove the scheduler key as is done in this patch?
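A rough illustration of the dummy-request workaround described above; {{AppInfoSketch}}, {{ResourceRequestInfo}} and the {{dummy()}} helper are hypothetical names for this sketch, not the actual web-service classes:

{code:java}
import java.util.Collections;
import java.util.List;

/** Hypothetical sketch of the AppInfo workaround discussed above. */
class AppInfoSketch {
  static List<ResourceRequestInfo> resourceRequestsForRest(
      List<ResourceRequestInfo> outstanding) {
    if (outstanding.isEmpty()) {
      // Once satisfied requests are garbage-collected, the REST view may
      // have nothing to report; return a single dummy entry so callers
      // (such as TestRMWebServicesApps) still see a request object.
      return Collections.singletonList(ResourceRequestInfo.dummy());
    }
    return outstanding;
  }
}

/** Illustrative placeholder for the web-service DTO. */
class ResourceRequestInfo {
  static ResourceRequestInfo dummy() {
    return new ResourceRequestInfo(); // all fields left at defaults
  }
}
{code}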

> Garbage Collect unused SchedulerRequestKeys
> ---
>
> Key: YARN-6406
> URL: https://issues.apache.org/jira/browse/YARN-6406
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha2
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-6406.001.patch
>
>
> YARN-5540 introduced some optimizations to remove satisfied SchedulerKeys
> from the AppSchedulingInfo. It looks like after YARN-6040,
> SchedulerRequestKeys are removed only if the Application sends a request
> with 0 numContainers, whereas earlier, outstanding schedulerKeys were
> also removed as soon as a container was allocated.
> An additional optimization we were hoping to include is to remove the
> ResourceRequest itself once numContainers == 0, since we see in our
> clusters that RM heap space consumption increases drastically due to a
> large number of ResourceRequests with 0 numContainers.






[jira] [Commented] (YARN-6406) Garbage Collect unused SchedulerRequestKeys

2017-03-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951819#comment-15951819
 ] 

Hadoop QA commented on YARN-6406:
-

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 20s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
| +1 | mvninstall | 13m 49s | trunk passed |
| +1 | compile | 0m 31s | trunk passed |
| +1 | checkstyle | 0m 32s | trunk passed |
| +1 | mvnsite | 0m 34s | trunk passed |
| +1 | mvneclipse | 0m 14s | trunk passed |
| +1 | findbugs | 0m 59s | trunk passed |
| +1 | javadoc | 0m 21s | trunk passed |
| +1 | mvninstall | 0m 30s | the patch passed |
| +1 | compile | 0m 31s | the patch passed |
| +1 | javac | 0m 31s | the patch passed |
| -0 | checkstyle | 0m 30s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 3 new + 622 unchanged - 6 fixed = 625 total (was 628) |
| +1 | mvnsite | 0m 31s | the patch passed |
| +1 | mvneclipse | 0m 12s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 1m 7s | the patch passed |
| +1 | javadoc | 0m 22s | the patch passed |
| -1 | unit | 39m 16s | hadoop-yarn-server-resourcemanager in the patch failed. |
| +1 | asflicense | 0m 16s | The patch does not generate ASF License warnings. |
| | | 61m 58s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |

|| Subsystem || Report/Notes ||
| Docker | Image: yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6406 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12861534/YARN-6406.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 10356231056e 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 73835c7 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/15461/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| unit | https://builds.apache.org/job/PreCommit-YARN-Build/15461/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15461/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15461/console |

[jira] [Commented] (YARN-6406) Garbage Collect unused SchedulerRequestKeys

2017-03-31 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951798#comment-15951798
 ] 

Wangda Tan commented on YARN-6406:
--

Thanks [~asuresh] for working on the fix. My comments:

1) Why are the AppInfo changes required?
2) Not caused by your patch (actually caused by mine): LocalitySchedulingPlacementSet
calls appSchedulingInfo directly in decrementOutstanding, which could
potentially cause trouble since it modifies the parent from the child. Is it
possible to move this logic to AppSchedulingInfo#allocate? If it is a
non-trivial change, I can take it up in a separate JIRA (see the sketch below).
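A hedged sketch of the refactoring suggested in 2), using simplified stand-ins (the class names and fields below are assumptions for illustration, not the real scheduler internals):

{code:java}
import java.util.HashMap;
import java.util.Map;

/** Parent owns the key lifecycle; the child only reports its own state. */
class AppSchedulingInfoSketch {
  private final Map<String, PlacementSetSketch> placementSets = new HashMap<>();

  void addKey(String schedulerKey, int numContainers) {
    placementSets.put(schedulerKey, new PlacementSetSketch(numContainers));
  }

  void allocate(String schedulerKey) {
    PlacementSetSketch ps = placementSets.get(schedulerKey);
    if (ps == null) {
      return;
    }
    if (ps.decrementOutstanding()) {
      // The parent removes the satisfied key itself, instead of the child
      // placement set reaching back into AppSchedulingInfo to modify it.
      placementSets.remove(schedulerKey);
    }
  }
}

class PlacementSetSketch {
  private int outstanding;

  PlacementSetSketch(int outstanding) {
    this.outstanding = outstanding;
  }

  /** Returns true when no containers remain outstanding for this key. */
  boolean decrementOutstanding() {
    return --outstanding <= 0;
  }
}
{code}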

> Garbage Collect unused SchedulerRequestKeys
> ---
>
> Key: YARN-6406
> URL: https://issues.apache.org/jira/browse/YARN-6406
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha2
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-6406.001.patch
>
>
> YARN-5540 introduced some optimizations to remove satisfied SchedulerKeys
> from the AppSchedulingInfo. It looks like after YARN-6040,
> SchedulerRequestKeys are removed only if the Application sends a request
> with 0 numContainers, whereas earlier, outstanding schedulerKeys were
> also removed as soon as a container was allocated.
> An additional optimization we were hoping to include is to remove the
> ResourceRequest itself once numContainers == 0, since we see in our
> clusters that RM heap space consumption increases drastically due to a
> large number of ResourceRequests with 0 numContainers.






[jira] [Commented] (YARN-6406) Garbage Collect unused SchedulerRequestKeys

2017-03-31 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951542#comment-15951542
 ] 

Jason Lowe commented on YARN-6406:
--

Yep, the refcount was only added because of the possibility of two types of
requests. When there are multiple refs to a key, we can't assume that removing
the last request of one type removes all references to the key. If only one
type can reference the scheduler key, then we don't need to refcount it
separately.
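Roughly, the two-type invariant looks like this; {{SchedulerKeyRefCount}} is a hypothetical simplified table (the real bookkeeping lives in {{AppSchedulingInfo}} and differs in detail):

{code:java}
import java.util.HashMap;
import java.util.Map;

/** Sketch of why a refcount was needed with two request types per key. */
class SchedulerKeyRefCount {
  private final Map<String, Integer> refs = new HashMap<>();

  // Both a normal ResourceRequest and an increase request could pin the
  // same scheduler key, so each registration bumps the count.
  void register(String schedulerKey) {
    refs.merge(schedulerKey, 1, Integer::sum);
  }

  // Removing the last request of ONE type must not drop the key while a
  // request of the OTHER type still references it.
  void unregister(String schedulerKey) {
    refs.computeIfPresent(schedulerKey,
        (key, n) -> n > 1 ? n - 1 : null); // returning null removes the key
  }
}
{code}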


> Garbage Collect unused SchedulerRequestKeys
> ---
>
> Key: YARN-6406
> URL: https://issues.apache.org/jira/browse/YARN-6406
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> YARN-5540 introduced some optimizations to remove satisfied SchedulerKeys
> from the AppSchedulingInfo. It looks like after YARN-6040,
> SchedulerRequestKeys are removed only if the Application sends a request
> with 0 numContainers, whereas earlier, outstanding schedulerKeys were
> also removed as soon as a container was allocated.
> An additional optimization we were hoping to include is to remove the
> ResourceRequest itself once numContainers == 0, since we see in our
> clusters that RM heap space consumption increases drastically due to a
> large number of ResourceRequests with 0 numContainers.






[jira] [Commented] (YARN-6406) Garbage Collect unused SchedulerRequestKeys

2017-03-31 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951526#comment-15951526
 ] 

Arun Suresh commented on YARN-6406:
---

bq. Given that, I don't think we need the refcount any more, correct?
Yeah... I think we can move this back to a set.

Will post a patch with this shortly.



> Garbage Collect unused SchedulerRequestKeys
> ---
>
> Key: YARN-6406
> URL: https://issues.apache.org/jira/browse/YARN-6406
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> YARN-5540 introduced some optimizations to remove satisfied SchedulerKeys
> from the AppSchedulingInfo. It looks like after YARN-6040,
> SchedulerRequestKeys are removed only if the Application sends a request
> with 0 numContainers, whereas earlier, outstanding schedulerKeys were
> also removed as soon as a container was allocated.
> An additional optimization we were hoping to include is to remove the
> ResourceRequest itself once numContainers == 0, since we see in our
> clusters that RM heap space consumption increases drastically due to a
> large number of ResourceRequests with 0 numContainers.






[jira] [Commented] (YARN-6406) Garbage Collect unused SchedulerRequestKeys

2017-03-31 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951418#comment-15951418
 ] 

Wangda Tan commented on YARN-6406:
--

Nice catch [~arun.sur...@gmail.com].

[~asuresh]/[~jlowe], I totally agree with removing the key once #pending-req = 0.

[~jlowe], IIRC, the reference count for a scheduler key was added because we
previously had two different request types: increase requests and resource
requests. A recent change by [~arun.sur...@gmail.com] removed increase
requests, so every increase request is now a normal resource request. Given
that, I don't think we need the refcount any more, correct? To me, a set of
SchedulerKeys would be good enough.

Please share your thoughts.
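With increase requests folded into normal resource requests, the refcount collapses to a plain set; a minimal sketch with illustrative types:

{code:java}
import java.util.HashSet;
import java.util.Set;

/** Sketch: one request type per key means a plain set suffices. */
class SchedulerKeySet {
  private final Set<String> activeKeys = new HashSet<>();

  void register(String schedulerKey) {
    activeKeys.add(schedulerKey);
  }

  void unregister(String schedulerKey) {
    // Only normal resource requests reference a key now, so removing the
    // last one can safely remove the key itself.
    activeKeys.remove(schedulerKey);
  }
}
{code}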

> Garbage Collect unused SchedulerRequestKeys
> ---
>
> Key: YARN-6406
> URL: https://issues.apache.org/jira/browse/YARN-6406
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> YARN-5540 introduced some optimizations to remove satisfied SchedulerKeys
> from the AppSchedulingInfo. It looks like after YARN-6040,
> SchedulerRequestKeys are removed only if the Application sends a request
> with 0 numContainers, whereas earlier, outstanding schedulerKeys were
> also removed as soon as a container was allocated.
> An additional optimization we were hoping to include is to remove the
> ResourceRequest itself once numContainers == 0, since we see in our
> clusters that RM heap space consumption increases drastically due to a
> large number of ResourceRequests with 0 numContainers.






[jira] [Commented] (YARN-6406) Garbage Collect unused SchedulerRequestKeys

2017-03-28 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15946028#comment-15946028
 ] 

Jason Lowe commented on YARN-6406:
--

I haven't dug into YARN-6040, but in general I'm a big +1 for having the RM
aggressively remove bookkeeping entries that aren't necessary, both to improve
lookup/iteration performance and to reduce heap pressure. That was the whole
idea behind YARN-5540. I don't see why we would need to keep scheduler keys or
requests around once there are no more containers to allocate for them.


> Garbage Collect unused SchedulerRequestKeys
> ---
>
> Key: YARN-6406
> URL: https://issues.apache.org/jira/browse/YARN-6406
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> YARN-5540 introduced some optimizations to remove satisfied SchedulerKeys
> from the AppSchedulingInfo. It looks like after YARN-6040,
> SchedulerRequestKeys are removed only if the Application sends a request
> with 0 numContainers, whereas earlier, outstanding schedulerKeys were
> also removed as soon as a container was allocated.
> An additional optimization we were hoping to include is to remove the
> ResourceRequest itself once numContainers == 0, since we see in our
> clusters that RM heap space consumption increases drastically due to a
> large number of ResourceRequests with 0 numContainers.






[jira] [Commented] (YARN-6406) Garbage Collect unused SchedulerRequestKeys

2017-03-28 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15945990#comment-15945990
 ] 

Arun Suresh commented on YARN-6406:
---

[~leftnoteasy] / [~jlowe], thoughts?

> Garbage Collect unused SchedulerRequestKeys
> ---
>
> Key: YARN-6406
> URL: https://issues.apache.org/jira/browse/YARN-6406
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> YARN-5540 introduced some optimizations to remove satisfied SchedulerKeys
> from the AppSchedulingInfo. It looks like after YARN-6040,
> SchedulerRequestKeys are removed only if the Application sends a request
> with 0 numContainers, whereas earlier, outstanding schedulerKeys were
> also removed as soon as a container was allocated.
> An additional optimization we were hoping to include is to remove the
> ResourceRequest itself once numContainers == 0, since we see in our
> clusters that RM heap space consumption increases drastically due to a
> large number of ResourceRequests with 0 numContainers.


