[jira] [Commented] (YARN-5658) YARN should have a hook to delete a path from HDFS when an application ends

2016-09-20 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508863#comment-15508863
 ] 

Rohith Sharma K S commented on YARN-5658:
-

I think the intention of this JIRA is more or less similar to, or a part of, YARN-2261.

> YARN should have a hook to delete a path from HDFS when an application ends
> ---
>
> Key: YARN-5658
> URL: https://issues.apache.org/jira/browse/YARN-5658
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>
> There are many cases when a client uploads data to HDFS and then needs to 
> subsequently clean it up, such as with the distributed cache.  It would be 
> helpful if YARN would do that cleanup automatically on job completion.
> The hook could be generic to any URI supported by {{FileSystem}}.
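For context, the manual cleanup a client has to do today looks roughly like the sketch below; the proposed hook would perform the equivalent delete automatically when the application ends. The staging path and class name are hypothetical; the only real API involved is {{FileSystem#delete}}.

{code}
// Minimal sketch of the manual cleanup a client does today (hypothetical
// staging path); the proposed hook would run the equivalent automatically
// when the application completes.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppStagingCleanup {
  public static void cleanup(Configuration conf, String appId) throws Exception {
    Path staging = new Path("/tmp/staging/" + appId);   // hypothetical upload location
    // Resolving the FileSystem from the path's URI keeps the cleanup generic:
    // the same call works for HDFS or any other FileSystem implementation.
    FileSystem fs = staging.getFileSystem(conf);
    if (fs.exists(staging)) {
      fs.delete(staging, true);   // recursive delete of the uploaded data
    }
  }
}
{code}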



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3139) Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler

2016-09-20 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-3139:
-
Attachment: YARN-3139.1.patch

Attached ver.1 patch, which fixes the findbugs warning and the checkstyle issues.

> Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler
> --
>
> Key: YARN-3139
> URL: https://issues.apache.org/jira/browse/YARN-3139
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-3139.0.patch, YARN-3139.1.patch
>
>
> Enhance locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler. As 
> mentioned in YARN-3091, a possible solution is using a read/write lock. Other 
> fine-grained locks for specific purposes / bugs should be addressed in 
> separate tickets.
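As a rough illustration of the read/write-lock direction (a sketch, not code from the attached patches): read-mostly scheduler calls would share a read lock, while calls that mutate shared scheduler state would take the write lock.

{code}
// Illustrative sketch only, not taken from YARN-3139.x.patch.
import java.util.concurrent.locks.ReentrantReadWriteLock;

class SchedulerLockSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  // Read-mostly paths (e.g. queue info, app reports) can proceed concurrently.
  Object getQueueInfo(String queueName) {
    lock.readLock().lock();
    try {
      return null; // read shared scheduler state here
    } finally {
      lock.readLock().unlock();
    }
  }

  // Mutating paths (e.g. allocation, container completion) are exclusive.
  void allocate(String applicationId) {
    lock.writeLock().lock();
    try {
      // update queues / applications here
    } finally {
      lock.writeLock().unlock();
    }
  }
}
{code}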






[jira] [Commented] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508807#comment-15508807
 ] 

Hadoop QA commented on YARN-5609:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
47s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 54s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 0s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
34s {color} | {color:green} root: The patch generated 0 new + 491 unchanged - 
17 fixed = 491 total (was 508) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 17s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 42s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 34m 0s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 46s 
{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 115m 8s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829491/YARN-5609.006.patch |
| JIRA Issue | YARN-5609 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 5c171a87473e 3.13.0-36-lowlatency 

[jira] [Commented] (YARN-4205) Add a service for monitoring application life time out

2016-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508769#comment-15508769
 ] 

Hadoop QA commented on YARN-4205:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 58s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 22s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 43s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 13 
new + 502 unchanged - 3 fixed = 515 total (was 505) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 16s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 35m 6s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 10s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829492/0006-YARN-4205.patch |
| JIRA Issue | YARN-4205 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  xml  |
| uname | Linux 2991a91b8ebf 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 964e546 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| 

[jira] [Commented] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-20 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508770#comment-15508770
 ] 

Jian He commented on YARN-5609:
---

bq.  since it is always called in conjunctions with a container.canRollback() 
which returns true only if oldLaunchContext is non null.
I see. Should we remove the if-null condition then, since it will never happen?

> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch, YARN-5609.002.patch, 
> YARN-5609.003.patch, YARN-5609.004.patch, YARN-5609.005.patch, 
> YARN-5609.006.patch
>
>
> YARN-5620 and YARN-5637 allows an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container as well. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *reInitializeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*
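For illustration, the additions could take roughly the following shape from the AM's point of view; the signatures below are hypothetical placeholders, not the ones in the attached patches.

{code}
// Hypothetical sketch of the four proposed calls, not the patch's signatures.
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;

public interface ContainerUpgradeSketch {
  // Re-launch the container with a new launch context (the "upgrade").
  void reInitializeContainer(ContainerId containerId,
      ContainerLaunchContext newLaunchContext, boolean autoCommit);

  // Undo the last upgrade and return to the previous launch context.
  void rollbackLastUpgrade(ContainerId containerId);

  // Discard the saved previous context, making the upgrade final.
  void commitLastUpgrade(ContainerId containerId);

  // Restart the container with its current launch context unchanged.
  void restartContainer(ContainerId containerId);
}
{code}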






[jira] [Comment Edited] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-20 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508741#comment-15508741
 ] 

Jian He edited comment on YARN-5609 at 9/21/16 4:58 AM:


bq. I had intentionally kept it that way (my thinking was that the Tracker will 
then verify that the resources.. directories etc. are still good)
Yep, I also had that in mind for rollback, and I think it is indeed needed there, 
because the old resource may have a chance of getting purged.
But for restart, since the same resources are re-used, we don't need to re-check. 
So, I guess we need to retain the behavior for rollback?


was (Author: jianhe):
bq. I had intentionally kept it that way (my thinking was that the Tracker will 
then verify that the resources.. directories etc. are still good)
Yep, I also had that in mind for rollback. And I think it is indeed needed for 
rollback, because the old resource may have a change to get purged...
But for restart, as the same resources are re-used, we don't need to re-check. 
So, I guess we need to retain the behavior for rollback ?

> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch, YARN-5609.002.patch, 
> YARN-5609.003.patch, YARN-5609.004.patch, YARN-5609.005.patch, 
> YARN-5609.006.patch
>
>
> YARN-5620 and YARN-5637 allows an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container as well. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *reInitializeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*






[jira] [Commented] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-20 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508741#comment-15508741
 ] 

Jian He commented on YARN-5609:
---

bq. I had intentionally kept it that way (my thinking was that the Tracker will 
then verify that the resources.. directories etc. are still good)
Yep, I also had that in mind for rollback, and I think it is indeed needed there, 
because the old resource may have a chance of getting purged.
But for restart, since the same resources are re-used, we don't need to re-check. 
So, I guess we need to retain the behavior for rollback?

> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch, YARN-5609.002.patch, 
> YARN-5609.003.patch, YARN-5609.004.patch, YARN-5609.005.patch, 
> YARN-5609.006.patch
>
>
> YARN-5620 and YARN-5637 allows an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container as well. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *reInitializeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*






[jira] [Comment Edited] (YARN-5621) Support LinuxContainerExecutor to create symlinks for continuously localized resources

2016-09-20 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508689#comment-15508689
 ] 

Jian He edited comment on YARN-5621 at 9/21/16 4:40 AM:


Thanks for your explanation; there are some parts I'm missing:
bq. Both will start ContainerLocalizer instances
Is it starting both instances now? Not sure if I read the code wrong, but it 
seems not to be the case. Based on the code, if it's an already existing resource, 
it will NOT start the ContainerLocalizer; see e.g. the comments below from 
ResourceLocalizationService. Could you point me to the right code? Maybe I'm 
missing something.
{code}
 /*
  * Multiple containers will try to download the same resource. So the
  * resource download should start only if
  * 1) We can acquire a non blocking semaphore lock on resource
  * 2) Resource is still in DOWNLOADING state
  */
{code}
bq.  it may be used by both, concurrently.
This approach may not easily work for new containers without a structural 
change, because for new containers the work dirs are not set up yet when the 
localizer is started, so it cannot create the symlinks during localization for 
them. The work dirs are created later, when the container is launched.
bq.  for services that upgrade over minutes/hours,
Not only for upgrades, it's also used by Tez for localizing resources on 
demand. 

I think I understand your approach now. Basically, to create the symlinks:
1. we start the localizer process,
2. send the symlinks over on the localizer heartbeat,
3. the localizer process creates the symlinks.
Right?


was (Author: jianhe):
Thanks for your explanation,  some parts I'm missing: 
bq. Both will start ContainerLocalizer instances
Is it starting both instances now?  Not sure if I read the code wrong...  It 
seems not the case. Based on the code, if it's an already existing resource,  
it will NOT start the ContainerLocalizer. e.g. below comments in 
ResourceLocalizationService.  Could you point me the right cod... maybe I'm 
missing something.
{code}
 /*
  * Multiple containers will try to download the same resource. So the
  * resource download should start only if
  * 1) We can acquire a non blocking semaphore lock on resource
  * 2) Resource is still in DOWNLOADING state
  */
{code}
bq.  it may be used by both, concurrently.
This approach may not be easily worked for the new containers without 
structural change, because for new containers, when localizer is started, the 
work-dirs are not setup yet. It cannot create symlinks on localization for new 
containers. The work-dirs are created later when launching the container.
bq.  for services that upgrade over minutes/hours,
Not only for upgrades, it's also used by Tez for localizing resources on 
demand. 

I think I understand your approach now,  basically,
To create the symlinks,
1. we start the localizer process,
2. send the symlinks over on localizer heartbeat, 
3. localizer process create symlinks. 

One question is, should we keep the localizer process around or terminates it 
immediately after symlinks created. If we keep it around for certain time, we 
need to add some life timeout for the localizer process. If it's immediately 
terminated,  that means every localize request by AM would spawn/kill a new 
process, and I think that could churn over the machine resources.

> Support LinuxContainerExecutor to create symlinks for continuously localized 
> resources
> --
>
> Key: YARN-5621
> URL: https://issues.apache.org/jira/browse/YARN-5621
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5621.1.patch, YARN-5621.2.patch, YARN-5621.3.patch, 
> YARN-5621.4.patch, YARN-5621.5.patch
>
>
> When new resources are localized, new symlink needs to be created for the 
> localized resource. This is the change for the LinuxContainerExecutor to 
> create the symlinks.






[jira] [Commented] (YARN-5621) Support LinuxContainerExecutor to create symlinks for continuously localized resources

2016-09-20 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508689#comment-15508689
 ] 

Jian He commented on YARN-5621:
---

Thanks for your explanation; there are some parts I'm missing:
bq. Both will start ContainerLocalizer instances
Is it starting both instances now? Not sure if I read the code wrong, but it 
seems not to be the case. Based on the code, if it's an already existing resource, 
it will NOT start the ContainerLocalizer; see e.g. the comments below from 
ResourceLocalizationService. Could you point me to the right code? Maybe I'm 
missing something.
{code}
 /*
  * Multiple containers will try to download the same resource. So the
  * resource download should start only if
  * 1) We can acquire a non blocking semaphore lock on resource
  * 2) Resource is still in DOWNLOADING state
  */
{code}
bq.  it may be used by both, concurrently.
This approach may not easily work for new containers without a structural 
change, because for new containers the work dirs are not set up yet when the 
localizer is started, so it cannot create the symlinks during localization for 
them. The work dirs are created later, when the container is launched.
bq.  for services that upgrade over minutes/hours,
Not only for upgrades, it's also used by Tez for localizing resources on 
demand. 

I think I understand your approach now. Basically, to create the symlinks:
1. we start the localizer process,
2. send the symlinks over on the localizer heartbeat,
3. the localizer process creates the symlinks (a sketch of this step follows below).

One question is whether we should keep the localizer process around or terminate 
it immediately after the symlinks are created. If we keep it around for a certain 
time, we need to add some lifetime timeout for the localizer process. If it is 
terminated immediately, then every localize request by the AM would spawn and 
kill a new process, and I think that could churn the machine's resources.
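As a minimal sketch of step 3 above (not taken from the attached patches), assuming the heartbeat hands the localizer a map from link name to already-localized path:

{code}
// Sketch of the localizer-side symlink step; the map shape is an assumption,
// not the patch's wire format.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;

class SymlinkStepSketch {
  static void createSymlinks(Path containerWorkDir,
      Map<String, Path> linkNameToLocalizedPath) throws IOException {
    for (Map.Entry<String, Path> e : linkNameToLocalizedPath.entrySet()) {
      Path link = containerWorkDir.resolve(e.getKey());
      if (!Files.exists(link)) {
        // Point the new link name at the resource already on local disk.
        Files.createSymbolicLink(link, e.getValue());
      }
    }
  }
}
{code}

In the LinuxContainerExecutor case the link presumably has to be created as the container user, which is why a process running as that user (rather than the NM itself) is the natural place for this step.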

> Support LinuxContainerExecutor to create symlinks for continuously localized 
> resources
> --
>
> Key: YARN-5621
> URL: https://issues.apache.org/jira/browse/YARN-5621
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5621.1.patch, YARN-5621.2.patch, YARN-5621.3.patch, 
> YARN-5621.4.patch, YARN-5621.5.patch
>
>
> When new resources are localized, new symlink needs to be created for the 
> localized resource. This is the change for the LinuxContainerExecutor to 
> create the symlinks.






[jira] [Updated] (YARN-4205) Add a service for monitoring application life time out

2016-09-20 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-4205:

Attachment: 0006-YARN-4205.patch

Updated patch fixing review comments

> Add a service for monitoring application life time out
> --
>
> Key: YARN-4205
> URL: https://issues.apache.org/jira/browse/YARN-4205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-4205.patch, 0002-YARN-4205.patch, 
> 0003-YARN-4205.patch, 0004-YARN-4205.patch, 0005-YARN-4205.patch, 
> 0006-YARN-4205.patch, YARN-4205_01.patch, YARN-4205_02.patch, 
> YARN-4205_03.patch
>
>
> This JIRA intends to provide a lifetime monitor service. 
> The service will monitor the applications for which a lifetime is configured. 
> If an application runs beyond its lifetime, it will be killed. 
> The lifetime is counted from the submit time.
> The monitoring thread's interval is configurable.






[jira] [Updated] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-20 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5609:
--
Attachment: YARN-5609.006.patch

Updating patch for make testcase less fragile..

> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch, YARN-5609.002.patch, 
> YARN-5609.003.patch, YARN-5609.004.patch, YARN-5609.005.patch, 
> YARN-5609.006.patch
>
>
> YARN-5620 and YARN-5637 allows an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container as well. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *reInitializeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*






[jira] [Comment Edited] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-20 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508595#comment-15508595
 ] 

Arun Suresh edited comment on YARN-5609 at 9/21/16 3:28 AM:


Updating patch to make testcase less fragile..


was (Author: asuresh):
Updating patch for make testcase less fragile..

> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch, YARN-5609.002.patch, 
> YARN-5609.003.patch, YARN-5609.004.patch, YARN-5609.005.patch, 
> YARN-5609.006.patch
>
>
> YARN-5620 and YARN-5637 allows an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container as well. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *reInitializeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*






[jira] [Commented] (YARN-4205) Add a service for monitoring application life time out

2016-09-20 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508541#comment-15508541
 ] 

Rohith Sharma K S commented on YARN-4205:
-

bq. Shouldn't it be counted from the time YARN allocates resource for the AM 
and launches it? What if YARN takes more time than the lifetime to allocate 
resource for the app? Seems like the KILL event will be raised immediately 
after the app reaches the RUNNING state in this case. Am I correct?
Basically this point was discussed in earlier comments as well; this JIRA is about 
tracking the lifetime of an application. Lifetime here is simply the overall 
execution time of the application, counted from submission time, and the 
application can be killed at any point once that time is exceeded. Essentially, 
the user is imposing a timeout on execution time. The use case is that a user 
submits an application every 5 minutes and needs its output within 5 minutes; 
the user does not care about time consumed by the state store, allocation, or 
anything else that is not user-facing.
If other stages need their own timeouts in the future, they can be added to the 
ApplicationTimeouts class. The ApplicationTimeouts javadoc also gives the 
definition of lifetime.

bq. Should AMRMClientAsync.onShutdownRequest callback be raised to give AM to 
do some last minute work/cleanup/graceful-shutdown-opportunity?
Good point, and it needs more discussion in general. IIUC, killing an application 
does not give the AM containers a chance to do any cleanup. I think YARN-2261 is 
intended to handle such post-kill cleanup.
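To make the semantics above concrete, a minimal sketch of the monitor's periodic check (all names are hypothetical, not from the attached patches):

{code}
// Hypothetical sketch: every monitoring interval, kill any application whose
// configured lifetime (counted from submit time) has elapsed.
import java.util.Map;
import java.util.concurrent.TimeUnit;

class LifetimeMonitorSketch {
  /** appId -> [submitTimeMillis, lifetimeSeconds], only apps with a lifetime set. */
  private final Map<String, long[]> monitoredApps;

  LifetimeMonitorSketch(Map<String, long[]> monitoredApps) {
    this.monitoredApps = monitoredApps;
  }

  void runOneCheck(long nowMillis) {
    for (Map.Entry<String, long[]> e : monitoredApps.entrySet()) {
      long submitTime = e.getValue()[0];
      long lifetimeMillis = TimeUnit.SECONDS.toMillis(e.getValue()[1]);
      // Lifetime is measured from submission, regardless of how long
      // allocation or other non-user-facing stages took.
      if (nowMillis - submitTime > lifetimeMillis) {
        kill(e.getKey(), "Application exceeded its configured lifetime");
      }
    }
  }

  private void kill(String appId, String diagnostic) {
    // would dispatch a KILL event for this application to the RM
  }
}
{code}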

> Add a service for monitoring application life time out
> --
>
> Key: YARN-4205
> URL: https://issues.apache.org/jira/browse/YARN-4205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-4205.patch, 0002-YARN-4205.patch, 
> 0003-YARN-4205.patch, 0004-YARN-4205.patch, 0005-YARN-4205.patch, 
> YARN-4205_01.patch, YARN-4205_02.patch, YARN-4205_03.patch
>
>
> This JIRA intends to provide a lifetime monitor service. 
> The service will monitor the applications for which a lifetime is configured. 
> If an application runs beyond its lifetime, it will be killed. 
> The lifetime is counted from the submit time.
> The monitoring thread's interval is configurable.






[jira] [Commented] (YARN-5659) getPathFromYarnURL should use standard methods

2016-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508524#comment-15508524
 ] 

Hadoop QA commented on YARN-5659:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
1s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 19s 
{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 20s 
{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 20s {color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 19s 
{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 12s 
{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 20s {color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 53s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829487/YARN-5659.01.patch |
| JIRA Issue | YARN-5659 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 54b22a3583bc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 964e546 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/13171/artifact/patchprocess/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/13171/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/13171/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-YARN-Build/13171/artifact/patchprocess/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/13171/artifact/patchprocess/patch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 |
| unit | 

[jira] [Updated] (YARN-5659) getPathFromYarnURL should use standard methods

2016-09-20 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated YARN-5659:
---
Attachment: YARN-5659.01.patch

Here is the patch for trunk, where this method has been moved. What's the difference 
between trunk and master? Do both need to be fixed?

> getPathFromYarnURL should use standard methods
> --
>
> Key: YARN-5659
> URL: https://issues.apache.org/jira/browse/YARN-5659
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: YARN-5659.01.patch, YARN-5659.patch
>
>
> getPathFromYarnURL does some string shenanigans where standard ctors should 
> suffice.
> There are also bugs in it, e.g. passing an empty scheme to the URI ctor is 
> invalid; null should be used instead.
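For illustration, the standard-constructor approach could look roughly like the sketch below, passing null (not an empty string) for absent URI components; this is a sketch under those assumptions, not the attached patch.

{code}
// Sketch only: build the Path via java.net.URI's multi-argument ctor instead
// of string concatenation; absent components must be null, not "".
import java.net.URI;
import java.net.URISyntaxException;
import org.apache.hadoop.fs.Path;

class YarnUrlToPathSketch {
  static Path toPath(String scheme, String host, int port, String file)
      throws URISyntaxException {
    String safeScheme = (scheme == null || scheme.isEmpty()) ? null : scheme;
    String safeHost = (host == null || host.isEmpty()) ? null : host;
    int safePort = (safeHost == null) ? -1 : port;   // no authority, no port
    URI uri = new URI(safeScheme, null, safeHost, safePort, file, null, null);
    return new Path(uri);
  }
}
{code}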






[jira] [Updated] (YARN-4758) Enable discovery of AMs by containers

2016-09-20 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-4758:
-
Target Version/s: 2.9.0, MAPREDUCE-6608  (was: MAPREDUCE-6608)

> Enable discovery of AMs by containers
> -
>
> Key: YARN-4758
> URL: https://issues.apache.org/jira/browse/YARN-4758
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Junping Du
> Attachments: YARN-4758. AM Discovery Service for YARN Container.pdf
>
>
> {color:red}
> This is already discussed on the umbrella JIRA YARN-1489.
> Copying some of my condensed summary from the design doc (section 3.2.10.3) 
> of YARN-4692.
> {color}
> Even after the existing work on work-preserving AM restart (Section 3.1.2 / 
> YARN-1489), we still haven't solved the problem of old running containers not 
> knowing where the new AM starts running after the previous AM crashes. This 
> is an especially important problem to solve for long-running services, 
> where we'd like to avoid killing service containers when AMs fail over. So 
> far, we have left this as a task for the apps, but solving it in YARN is much 
> more desirable. (Task) This looks very much like the service registry (YARN-913), 
> but for app containers to discover their own AMs.
> Combining this requirement (of any container being able to find its AM 
> across fail-overs) with those of services (to be able to find through DNS 
> where a service container is running - YARN-4757) will push our registry 
> scalability needs much higher than those of just service endpoints. 
> This calls for a more distributed solution for registry readers, something 
> that is discussed in the comments section of YARN-1489 and MAPREDUCE-6608.
> See comment 
> https://issues.apache.org/jira/browse/YARN-1489?focusedCommentId=13862359=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13862359






[jira] [Commented] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508366#comment-15508366
 ] 

Hadoop QA commented on YARN-5609:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 6s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 18s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
35s {color} | {color:green} root: The patch generated 0 new + 492 unchanged - 
17 fixed = 492 total (was 509) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 20s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 40s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 34m 43s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 47s 
{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 115m 48s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.TestContainerManagerRegression |
|   | hadoop.yarn.server.nodemanager.containermanager.TestContainerManager |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-5638) Introduce a collector timestamp to uniquely identify collectors creation order in collector discovery

2016-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508348#comment-15508348
 ] 

Hadoop QA commented on YARN-5638:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 31s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 31s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The 
patch generated 4 new + 386 unchanged - 10 fixed = 390 total (was 396) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 39s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 33m 36s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 3s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829470/YARN-5638-trunk.v2.patch
 |
| JIRA Issue | YARN-5638 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 1877021e365a 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0e918df |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Commented] (YARN-4591) YARN Web UIs should provide a robots.txt

2016-09-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508311#comment-15508311
 ] 

Hudson commented on YARN-4591:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10470 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10470/])
YARN-4591. YARN Web UIs should provide a robots.txt. (Sidharta Seethana) 
(wangda: rev 5a58bfee30a662b1b556048504f66f9cf00d182a)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/Dispatcher.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/TestWebApp.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/RobotsTextPage.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApp.java


> YARN Web UIs should provide a robots.txt
> 
>
> Key: YARN-4591
> URL: https://issues.apache.org/jira/browse/YARN-4591
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Lars Francke
>Assignee: Sidharta Seethana
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: YARN-4591.001.patch, YARN-4591.002.patch
>
>
> To prevent well-behaved crawlers from indexing public YARN UIs.
> Similar to HDFS-330 / HDFS-9651.
> I took a quick look at the Webapp stuff in YARN and it looks complicated so I 
> can't provide a quick patch. If anyone can point me in the right direction I 
> might take a look.
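For reference, a robots.txt that opts all well-behaved crawlers out of the entire UI is just two lines (the exact content served by the committed RobotsTextPage is in the patch above, not reproduced here):

{code}
User-agent: *
Disallow: /
{code}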






[jira] [Commented] (YARN-3250) Support admin cli interface in for Application Priority

2016-09-20 Thread stefanlee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508306#comment-15508306
 ] 

stefanlee commented on YARN-3250:
-

ok, thank you.

> Support admin cli interface in for Application Priority
> ---
>
> Key: YARN-3250
> URL: https://issues.apache.org/jira/browse/YARN-3250
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Sunil G
>Assignee: Rohith Sharma K S
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: 0001-YARN-3250-V1.patch, 0002-YARN-3250.patch, 
> 0003-YARN-3250.patch
>
>
> The current Application Priority Manager supports configuration only via file. 
> To support runtime configuration for the admin CLI and REST, a common management 
> interface has to be added which can be shared with NodeLabelsManager. 






[jira] [Comment Edited] (YARN-3250) Support admin cli interface in for Application Priority

2016-09-20 Thread stefanlee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508306#comment-15508306
 ] 

stefanlee edited comment on YARN-3250 at 9/21/16 12:59 AM:
---

thank you.


was (Author: imstefanlee):
ok,thank you。

> Support admin cli interface in for Application Priority
> ---
>
> Key: YARN-3250
> URL: https://issues.apache.org/jira/browse/YARN-3250
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Sunil G
>Assignee: Rohith Sharma K S
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: 0001-YARN-3250-V1.patch, 0002-YARN-3250.patch, 
> 0003-YARN-3250.patch
>
>
> The current Application Priority Manager supports configuration only via file. 
> To support runtime configuration for the admin CLI and REST, a common management 
> interface has to be added which can be shared with NodeLabelsManager. 






[jira] [Commented] (YARN-3692) Allow REST API to set a user generated message when killing an application

2016-09-20 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508296#comment-15508296
 ] 

Naganarasimha G R commented on YARN-3692:
-

Thanks for the latest patch, [~rohithsharma].
+1, LGTM. If there are no further comments, I will commit it today.


> Allow REST API to set a user generated message when killing an application
> --
>
> Key: YARN-3692
> URL: https://issues.apache.org/jira/browse/YARN-3692
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajat Jain
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-3692.patch, 0002-YARN-3692.patch, 
> 0003-YARN-3692.patch, 0004-YARN-3692.patch, 0005-YARN-3692.1.patch, 
> 0005-YARN-3692.patch, 0006-YARN-3692.patch, 0007-YARN-3692.1.patch, 
> 0007-YARN-3692.patch
>
>
> Currently YARN's REST API supports killing an application without setting a 
> diagnostic message. It would be good to provide that support.
> *Use Case*
> Usually this helps in workflow management in a multi-tenant environment when 
> the workflow scheduler (or the Hadoop admin) wants to kill a job and let 
> the user know the reason why the job was killed. Killing the job while setting a 
> diagnostic message is a very good solution for that. Ideally, we can set the 
> diagnostic message on all such interfaces:
> yarn kill -applicationId ... -diagnosticMessage "some message added by 
> admin/workflow"
> REST API { 'state': 'KILLED', 'diagnosticMessage': 'some message added by 
> admin/workflow'}






[jira] [Commented] (YARN-5079) [Umbrella] Native YARN framework layer for services and beyond

2016-09-20 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508270#comment-15508270
 ] 

Gour Saha commented on YARN-5079:
-

Given that we are focused on next-gen Slider as per YARN-4692, it is not very 
beneficial for the Slider community to undertake any new feature, especially one 
that needs significant changes on the agent side (with corresponding AM-side 
changes). All such work would be thrown away in the new agent-less 
architecture.

Specifically, SLIDER-1167 is an application packaging detail. With native YARN 
support for services, where we focus on Docker containers, we don't really care 
whether a Docker image has a single simple service or multiple complex services. It 
is up to the application owner to start as many or as few as needed. An upgrade 
would need a new Docker image. Hence it is unlikely that feature request 
SLIDER-1167 will be worked on.

New releases for Slider are mostly going to focus on bug fixes and minor 
enhancements (if at all demanded by existing users).

> [Umbrella] Native YARN framework layer for services and beyond
> --
>
> Key: YARN-5079
> URL: https://issues.apache.org/jira/browse/YARN-5079
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>
> (See overview doc at YARN-4692, modifying and copy-pasting some of the 
> relevant pieces and sub-section 3.3.1 to track the specific sub-item.)
> (This is a companion to YARN-4793 in our effort to simplify the entire story, 
> but focusing on APIs)
> So far, YARN by design has restricted itself to having a very low-level API 
> that can support any type of application. Frameworks like Apache Hadoop 
> MapReduce, Apache Tez, Apache Spark, Apache REEF, Apache Twill, Apache Helix 
> and others ended up exposing higher level APIs that end-users can directly 
> leverage to build their applications on top of YARN. On the services side, 
> Apache Slider has done something similar.
> With our current attention on making services first-class and simplified, 
> it's time to take a fresh look at how we can make Apache Hadoop YARN support 
> services well out of the box. Beyond the functionality that I outlined in the 
> previous sections in the doc on how NodeManagers can be enhanced to help 
> services, the biggest missing piece is the framework itself. There is a lot 
> of very important functionality that a services' framework can own together 
> with YARN in executing services end-to-end.
> In this JIRA I propose we look at having a native Apache Hadoop framework for 
> running services natively on YARN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5659) getPathFromYarnURL should use standard methods

2016-09-20 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508231#comment-15508231
 ] 

Junping Du commented on YARN-5659:
--

Thanks [~templedf] for the review and comments! The patch looks OK to me, but I 
agree with Daniel that we should add a simple unit test for a normal URL and for a 
URL without a scheme (where you mentioned a bug currently exists).
+1 once a UT is added there.
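A minimal sketch of the kind of round-trip test being asked for, assuming the conversion still lives in {{ConverterUtils}}; the scheme-less case is the one that exercises the bug mentioned above, so it may fail until the patch lands.

{code}
import static org.junit.Assert.assertEquals;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.yarn.api.records.URL;
import org.apache.hadoop.yarn.util.ConverterUtils;
import org.junit.Test;

public class TestYarnUrlPathRoundTrip {

  // Round-trips a fully qualified URL and a scheme-less path through the
  // YARN URL record and back to a Path, expecting the original Path back.
  @Test
  public void testRoundTrip() throws Exception {
    for (String s : new String[] {
        "hdfs://namenode:8020/user/foo/app.jar", // normal URL with a scheme
        "/tmp/local/resource.jar" }) {           // no scheme
      Path original = new Path(s);
      URL yarnUrl = ConverterUtils.getYarnUrlFromPath(original);
      assertEquals(original, ConverterUtils.getPathFromYarnURL(yarnUrl));
    }
  }
}
{code}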

> getPathFromYarnURL should use standard methods
> --
>
> Key: YARN-5659
> URL: https://issues.apache.org/jira/browse/YARN-5659
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: YARN-5659.patch
>
>
> getPathFromYarnURL does some string shenanigans where  standard ctors should 
> suffice.
> There are also bugs in it e.g. passing an empty scheme to the URI ctor is 
> invalid, null should be used. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4493) move queue can make app don't belong to any queue

2016-09-20 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508157#comment-15508157
 ] 

Daniel Templeton commented on YARN-4493:


Looks generally good to me.  Thanks, [~yufeigu]!  Please complete the javadocs 
for the methods you added, i.e. params, returns, etc.  Also, as was suggested 
before, some unit tests would be good to add.

> move queue can make app don't belong to any queue
> -
>
> Key: YARN-4493
> URL: https://issues.apache.org/jira/browse/YARN-4493
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.4.0, 2.6.0, 2.7.1
>Reporter: jiangyu
>Assignee: Yufei Gu
>Priority: Minor
> Attachments: YARN-4493.001.patch, yarn-4493.patch.1
>
>
> When moving a running application to a different queue, the current implementation 
> doesn't check whether the app can run in the new queue before removing it from the 
> current queue. So if the destination queue is full, the move will throw an exception 
> and the app will no longer belong to any queue.
> After that, the app is orphaned and cannot be scheduled any resources. If you 
> kill the app, the removeApp method in FSLeafQueue will throw an 
> IllegalStateException of "Given app to remove app does not exist in queue ...". 
> So I think we should check whether the destination queue can run the app before 
> removing it from the current queue.
> The patch is from our revision.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5638) Introduce a collector timestamp to uniquely identify collectors creation order in collector discovery

2016-09-20 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5638:

Attachment: YARN-5638-trunk.v2.patch

Sorry wrong patch... let me try a new one... 

> Introduce a collector timestamp to uniquely identify collectors creation 
> order in collector discovery
> -
>
> Key: YARN-5638
> URL: https://issues.apache.org/jira/browse/YARN-5638
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5638-trunk.v1.patch, YARN-5638-trunk.v2.patch
>
>
> As discussed in YARN-3359, we need to further identify timeline collectors' 
> creation order to rebuild collector discovery data in the RM. This JIRA 
> proposes to use  to order collectors 
> for each application in the RM. This timestamp can then be used when a 
> standby RM becomes active and rebuild collector discovery data. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5638) Introduce a collector timestamp to uniquely identify collectors creation order in collector discovery

2016-09-20 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5638:

Attachment: (was: YARN-5638-trunk.v2.patch)

> Introduce a collector timestamp to uniquely identify collectors creation 
> order in collector discovery
> -
>
> Key: YARN-5638
> URL: https://issues.apache.org/jira/browse/YARN-5638
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5638-trunk.v1.patch, YARN-5638-trunk.v2.patch
>
>
> As discussed in YARN-3359, we need to further identify timeline collectors' 
> creation order to rebuild collector discovery data in the RM. This JIRA 
> proposes to use  to order collectors 
> for each application in the RM. This timestamp can then be used when a 
> standby RM becomes active and rebuild collector discovery data. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5655) TestContainerManagerSecurity#testNMTokens is asserting

2016-09-20 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508129#comment-15508129
 ] 

Robert Kanter commented on YARN-5655:
-

Thanks [~jlowe]!

> TestContainerManagerSecurity#testNMTokens is asserting
> --
>
> Key: YARN-5655
> URL: https://issues.apache.org/jira/browse/YARN-5655
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Jason Lowe
>Assignee: Robert Kanter
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: YARN-5655.001.patch
>
>
> TestContainerManagerSecurity has been failing recently in 2.8:
> {noformat}
> Running org.apache.hadoop.yarn.server.TestContainerManagerSecurity
> Tests run: 2, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 80.928 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.server.TestContainerManagerSecurity
> testContainerManager[0](org.apache.hadoop.yarn.server.TestContainerManagerSecurity)
>   Time elapsed: 44.478 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.waitForContainerToFinishOnNM(TestContainerManagerSecurity.java:394)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testNMTokens(TestContainerManagerSecurity.java:337)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager(TestContainerManagerSecurity.java:157)
> testContainerManager[1](org.apache.hadoop.yarn.server.TestContainerManagerSecurity)
>   Time elapsed: 34.964 sec  <<< FAILURE!
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testNMTokens(TestContainerManagerSecurity.java:333)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager(TestContainerManagerSecurity.java:157)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-20 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5609:
--
Attachment: YARN-5609.005.patch

Uploading patch:

* Fixed test failures, checkstyle issues, and javadoc warnings.
* Added NMAudit logging messages when reinitialization starts and completes.
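As a point of reference, a hypothetical sketch of the API surface described below; the real protocol works in terms of request/response records and the exact signatures are part of the patch, so the method shapes here are assumptions only.

{code}
import java.io.IOException;

import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.exceptions.YarnException;

// Illustration of the four operations named in the description, not the
// committed ContainerManagementProtocol additions.
public interface ContainerReInitSketch {

  /** Re-initialize a running container with a new launch context. */
  void reInitializeContainer(ContainerId containerId,
      ContainerLaunchContext newContext, boolean autoCommit)
      throws YarnException, IOException;

  /** Roll back to the launch context in use before the last upgrade. */
  void rollbackLastUpgrade(ContainerId containerId)
      throws YarnException, IOException;

  /** Make the last upgrade permanent and discard the rollback state. */
  void commitLastUpgrade(ContainerId containerId)
      throws YarnException, IOException;

  /** Restart the container with its current launch context. */
  void restartContainer(ContainerId containerId)
      throws YarnException, IOException;
}
{code}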

> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch, YARN-5609.002.patch, 
> YARN-5609.003.patch, YARN-5609.004.patch, YARN-5609.005.patch
>
>
> YARN-5620 and YARN-5637 allow an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container as well. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *reInitializeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-20 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5609:
--
Description: 
YARN-5620 and YARN-5637 allow an AM to explicitly *upgrade* a container with a 
new launch context and subsequently *rollback* / *commit* the change on the 
Container. This can also be used to simply *restart* the Container as well. 

This JIRA proposes to extend the ContainerManagementProtocol with the following 
API:
* *reInitializeContainer*
* *rollbackLastUpgrade*
* *commitLastUpgrade*
* *restartContainer*


  was:
YARN-5620 and YARN-5637 allow an AM to explicitly *upgrade* a container with a 
new launch context and subsequently *rollback* / *commit* the change on the 
Container. This can also be used to simply *restart* the Container as well. 

This JIRA proposes to extend the ContainerManagementProtocol with the following 
API:
* *upgradeContainer*
* *rollbackLastUpgrade*
* *commitLastUpgrade*
* *restartContainer*



> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch, YARN-5609.002.patch, 
> YARN-5609.003.patch, YARN-5609.004.patch
>
>
> YARN-5620 and YARN-5637 allow an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container as well. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *reInitializeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3139) Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler

2016-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15508019#comment-15508019
 ] 

Hadoop QA commented on YARN-3139:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 20s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 18 new + 236 unchanged - 51 fixed = 254 total (was 287) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 3s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m 49s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 23s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$AllocationReloadListener.onReload(AllocationConfiguration)
 does not release lock on all exception paths  At FairScheduler.java:on all 
exception paths  At FairScheduler.java:[line 1651] |
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828352/YARN-3139.0.patch |
| JIRA Issue | YARN-3139 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 82f046cef257 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e80386d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Commented] (YARN-5638) Introduce a collector timestamp to uniquely identify collectors creation order in collector discovery

2016-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507973#comment-15507973
 ] 

Hadoop QA commented on YARN-5638:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 17s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-yarn-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 14s {color} | 
{color:red} hadoop-yarn-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 14s {color} 
| {color:red} hadoop-yarn-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 32s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The 
patch generated 3 new + 384 unchanged - 12 fixed = 387 total (was 396) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 19s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 12s 
{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 17s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 12s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common 
generated 26 new + 159 unchanged - 0 fixed = 185 total (was 159) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 13s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 11 new + 240 unchanged - 0 fixed = 251 total (was 240) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 17s 
{color} | {color:red} 

[jira] [Commented] (YARN-5659) getPathFromYarnURL should use standard methods

2016-09-20 Thread Hitesh Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507971#comment-15507971
 ] 

Hitesh Shah commented on YARN-5659:
---

\cc [~leftnoteasy] [~vvasudev] [~djp]

> getPathFromYarnURL should use standard methods
> --
>
> Key: YARN-5659
> URL: https://issues.apache.org/jira/browse/YARN-5659
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: YARN-5659.patch
>
>
> getPathFromYarnURL does some string shenanigans where  standard ctors should 
> suffice.
> There are also bugs in it e.g. passing an empty scheme to the URI ctor is 
> invalid, null should be used. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5659) getPathFromYarnURL should use standard methods

2016-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507958#comment-15507958
 ] 

Hadoop QA commented on YARN-5659:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} YARN-5659 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829444/YARN-5659.patch |
| JIRA Issue | YARN-5659 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13168/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> getPathFromYarnURL should use standard methods
> --
>
> Key: YARN-5659
> URL: https://issues.apache.org/jira/browse/YARN-5659
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: YARN-5659.patch
>
>
> getPathFromYarnURL does some string shenanigans where  standard ctors should 
> suffice.
> There are also bugs in it e.g. passing an empty scheme to the URI ctor is 
> invalid, null should be used. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-09-20 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507846#comment-15507846
 ] 

Wangda Tan commented on YARN-2009:
--

bq. I see. I wasn't suggesting that preemption should balance all users, only 
those that are asking.

But I think we need an overhaul of the user-limit related logic in CS so we can 
better balance usage between users, which should be a part of the 
intra-queue preemption story. We can discuss more once this JIRA is done.

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5659) getPathFromYarnURL should use standard methods

2016-09-20 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507840#comment-15507840
 ] 

Daniel Templeton edited comment on YARN-5659 at 9/20/16 9:26 PM:
-

Looks right to me.  It would be really nice to add some unit tests to make sure 
the change isn't breaking anything.


was (Author: templedf):
Looks right to me.  +1 (non-binding)

> getPathFromYarnURL should use standard methods
> --
>
> Key: YARN-5659
> URL: https://issues.apache.org/jira/browse/YARN-5659
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: YARN-5659.patch
>
>
> getPathFromYarnURL does some string shenanigans where  standard ctors should 
> suffice.
> There are also bugs in it e.g. passing an empty scheme to the URI ctor is 
> invalid, null should be used. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5638) Introduce a collector timestamp to uniquely identify collectors creation order in collector discovery

2016-09-20 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5638:

Attachment: YARN-5638-trunk.v2.patch

Thanks for the review, [~rohithsharma]! I addressed most of your comments except 
these two:

bq. Can happensBefore comparison method name can be changed something 
meaningful ? May we can define comparator method itself.
Happens-before has a very concrete meaning in distributed systems theory, as 
defined in Lamport's "Time, Clocks, and the Ordering of Events in a Distributed 
System" 
(http://research.microsoft.com/en-us/um/people/lamport/pubs/time-clocks.pdf). 
Here, we're assigning each collector's data a timestamp, and then we use the 
timestamps to reason about the happens-before order in the system. Personally I'd 
prefer using this formal definition to capture our use case here. 

bq. In stamped method, I think check for both rmIdentifiers && version?
Did not quite get the point here... I think we're checking both fields? 

About the design question, a pull-based method (IIUC, pulling collector data from 
the RM) would work. The current approach offloads from the RM the burden of 
deciding where collectors should run. At this early stage we can also reuse some 
well-established heartbeat mechanisms. I believe most of these choices are for 
engineering reasons, though. 

One thing to note is that we will not send known collector information from NMs 
to the RM in *every* heartbeat. A collector's information is sent only when it 
needs to register with the RM, or when the RM needs a resync. On the other hand, 
the RM will only send the related collectors' information to each NM. 
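To make the happens-before point concrete, a self-contained illustration of ordering collector stamps; the field names, and the assumption that the RM identifier is its start timestamp, are mine and not the patch's.

{code}
// Illustration only: field names are assumptions based on the discussion above.
final class CollectorStamp {
  final long rmStartTime;  // identifies the RM instance that issued the stamp
  final long version;      // monotonically increasing within one RM instance

  CollectorStamp(long rmStartTime, long version) {
    this.rmStartTime = rmStartTime;
    this.version = version;
  }

  /**
   * True if this stamp precedes {@code other}: issued by an earlier RM
   * instance, or by the same RM instance with a smaller version.
   */
  boolean happensBefore(CollectorStamp other) {
    if (rmStartTime != other.rmStartTime) {
      return rmStartTime < other.rmStartTime;
    }
    return version < other.version;
  }
}
{code}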

> Introduce a collector timestamp to uniquely identify collectors creation 
> order in collector discovery
> -
>
> Key: YARN-5638
> URL: https://issues.apache.org/jira/browse/YARN-5638
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5638-trunk.v1.patch, YARN-5638-trunk.v2.patch
>
>
> As discussed in YARN-3359, we need to further identify timeline collectors' 
> creation order to rebuild collector discovery data in the RM. This JIRA 
> proposes to use  to order collectors 
> for each application in the RM. This timestamp can then be used when a 
> standby RM becomes active and rebuild collector discovery data. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5659) getPathFromYarnURL should use standard methods

2016-09-20 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507840#comment-15507840
 ] 

Daniel Templeton commented on YARN-5659:


Looks right to me.  +1 (non-binding)

> getPathFromYarnURL should use standard methods
> --
>
> Key: YARN-5659
> URL: https://issues.apache.org/jira/browse/YARN-5659
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: YARN-5659.patch
>
>
> getPathFromYarnURL does some string shenanigans where  standard ctors should 
> suffice.
> There are also bugs in it e.g. passing an empty scheme to the URI ctor is 
> invalid, null should be used. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-09-20 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507832#comment-15507832
 ] 

Eric Payne commented on YARN-2009:
--

{quote}
In my above example, the #active_users is 2 instead of 3 (because B has no more 
pending resource). The reason why it uses #active-user is: existing user-limit 
is used to balance available resource to active users, it doesn't consider the 
needs to re-balance (via preemption) usages of users. To make intra-queue user 
limit preemption can correctly balance usages between users, we need to fix the 
scheduling logic as well.
{quote}
I see. I wasn't suggesting that preemption should balance all users, only those 
that are asking.
{quote}
{code}
...
for app in sort-by-fifo-or-priority(apps) {
  if (user-to-allocated.get(app.user) < user-limit-resource) {
    app.allocated = min(app.used + pending,
        user-limit-resource - user-to-allocated.get(app.user));
    user-to-allocated.get(app.user) += app.allocated;
    ...
{code}
{quote}
Yes, that would work. Thanks.

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5356) NodeManager should communicate physical resource capability to ResourceManager

2016-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507792#comment-15507792
 ] 

Hadoop QA commented on YARN-5356:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 1s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 29s 
{color} | {color:red} root: The patch generated 4 new + 161 unchanged - 3 fixed 
= 165 total (was 164) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 35s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 33m 52s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 1s 
{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 51s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829425/YARN-5356.004.patch |
| JIRA Issue | YARN-5356 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux f586f1d5482d 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 9f03b40 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Updated] (YARN-5659) getPathFromYarnURL should use standard methods

2016-09-20 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated YARN-5659:
---
Description: 
getPathFromYarnURL does some string shenanigans where  standard ctors should 
suffice.
There are also bugs in it e.g. passing an empty scheme to the URI ctor is 
invalid, null should be used. 

  was:
getPathFromYarnURL does some string shenanigans where  standard ctors should 
suffice.
There are also bugs in it e.g. passing an empty string to the URI ctor is 
invalid, null should be used. 


> getPathFromYarnURL should use standard methods
> --
>
> Key: YARN-5659
> URL: https://issues.apache.org/jira/browse/YARN-5659
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: YARN-5659.patch
>
>
> getPathFromYarnURL does some string shenanigans where  standard ctors should 
> suffice.
> There are also bugs in it e.g. passing an empty scheme to the URI ctor is 
> invalid, null should be used. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5659) getPathFromYarnURL should use standard methods

2016-09-20 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated YARN-5659:
---
Attachment: YARN-5659.patch

The patch is based on the reversal of getYarnURLFromPath/URI. An explicit 
normalize is also unneeded, since Path already does that.

[~hitesh] can you take a look?
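For readers following along, a rough sketch of the direction described (build the URI with the standard multi-argument constructor and pass null, not "", for absent parts); accessor names are assumed from the URL record's fields and edge cases are glossed over, so this is not the patch itself.

{code}
import java.net.URI;
import java.net.URISyntaxException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.yarn.api.records.URL;

public final class UrlToPathSketch {
  public static Path getPathFromYarnURL(URL url) throws URISyntaxException {
    // Pass null (not "") for an absent scheme; the URI constructor rejects "".
    String scheme = (url.getScheme() == null || url.getScheme().isEmpty())
        ? null : url.getScheme();
    URI uri = new URI(scheme, url.getUserInfo(), url.getHost(), url.getPort(),
        url.getFile(), null, null);
    // Path normalizes on construction, so no explicit normalize step is needed.
    return new Path(uri);
  }
}
{code}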

> getPathFromYarnURL should use standard methods
> --
>
> Key: YARN-5659
> URL: https://issues.apache.org/jira/browse/YARN-5659
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: YARN-5659.patch
>
>
> getPathFromYarnURL does some string shenanigans where  standard ctors should 
> suffice.
> There are also bugs in it e.g. passing an empty string to the URI ctor is 
> invalid, null should be used. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5659) getPathFromYarnURL should use standard methods

2016-09-20 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned YARN-5659:
--

Assignee: Sergey Shelukhin

> getPathFromYarnURL should use standard methods
> --
>
> Key: YARN-5659
> URL: https://issues.apache.org/jira/browse/YARN-5659
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>
> getPathFromYarnURL does some string shenanigans where  standard ctors should 
> suffice.
> There are also bugs in it e.g. passing an empty string to the URI ctor is 
> invalid, null should be used. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5659) getPathFromYarnURL should use standard methods

2016-09-20 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created YARN-5659:
--

 Summary: getPathFromYarnURL should use standard methods
 Key: YARN-5659
 URL: https://issues.apache.org/jira/browse/YARN-5659
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Sergey Shelukhin


getPathFromYarnURL does some string shenanigans where  standard ctors should 
suffice.
There are also bugs in it e.g. passing an empty string to the URI ctor is 
invalid, null should be used. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5656) Fix ReservationACLsTestBase

2016-09-20 Thread Sean Po (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507677#comment-15507677
 ] 

Sean Po commented on YARN-5656:
---

Thanks for the review and commit [~asuresh]!

> Fix ReservationACLsTestBase
> ---
>
> Key: YARN-5656
> URL: https://issues.apache.org/jira/browse/YARN-5656
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Sean Po
>Assignee: Sean Po
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5656.v1.patch, YARN-5656.v2.patch
>
>
> ReservationACLsTestBase fails when verifying that a reservation can be 
> successfully updated by a user who did not submit the reservation but who 
> has an admin ACL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5656) Fix ReservationACLsTestBase

2016-09-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507668#comment-15507668
 ] 

Hudson commented on YARN-5656:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10468 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10468/])
YARN-5656. Fix ReservationACLsTestBase. (Sean Po via asuresh) (arun suresh: rev 
9f03b403ec69658fc57bc0f6b832da0e3c746497)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/NoOverCommitPolicy.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/TestNoOverCommitPolicy.java
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/exceptions/MismatchedUserException.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/ReservationACLsTestBase.java


> Fix ReservationACLsTestBase
> ---
>
> Key: YARN-5656
> URL: https://issues.apache.org/jira/browse/YARN-5656
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Sean Po
>Assignee: Sean Po
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5656.v1.patch, YARN-5656.v2.patch
>
>
> ReservationACLsTestBase fails when verifying that a reservation can be 
> successfully updated by a user who did not submit the reservation but who 
> has an admin ACL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507633#comment-15507633
 ] 

Hadoop QA commented on YARN-5609:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 13s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
50s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 22s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in trunk failed. 
{color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 59s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 34s 
{color} | {color:red} root: The patch generated 28 new + 494 unchanged - 1 
fixed = 522 total (was 495) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
37s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 18s 
{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api generated 
3 new + 123 unchanged - 0 fixed = 126 total (was 123) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 21s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 20s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 28s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 42s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 34m 8s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 45s 
{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 112m 47s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 

[jira] [Updated] (YARN-5656) Fix ReservationACLsTestBase

2016-09-20 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5656:
--
Summary: Fix ReservationACLsTestBase  (was: ReservationACLsTestBase fails 
on trunk)

> Fix ReservationACLsTestBase
> ---
>
> Key: YARN-5656
> URL: https://issues.apache.org/jira/browse/YARN-5656
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Sean Po
>Assignee: Sean Po
> Attachments: YARN-5656.v1.patch, YARN-5656.v2.patch
>
>
> ReservationACLsTestBase fails when verifying that a reservation can be 
> successfully updated by a user who did not submit the reservation but who 
> has an admin ACL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5323) Policies APIs (for Router and AMRMProxy policies)

2016-09-20 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507531#comment-15507531
 ] 

Carlo Curino commented on YARN-5323:


Thanks [~subru] for reviews and committing.

> Policies APIs (for Router and AMRMProxy policies)
> -
>
> Key: YARN-5323
> URL: https://issues.apache.org/jira/browse/YARN-5323
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Fix For: YARN-2915
>
> Attachments: YARN-5323-YARN-2915.05.patch, 
> YARN-5323-YARN-2915.06.patch, YARN-5323-YARN-2915.07.patch, 
> YARN-5323-YARN-2915.08.patch, YARN-5323-YARN-2915.09.patch, 
> YARN-5323-YARN-2915.10.patch, YARN-5323-YARN-2915.11.patch, 
> YARN-5323.01.patch, YARN-5323.02.patch, YARN-5323.03.patch, YARN-5323.04.patch
>
>
> This JIRA tracks APIs for the policies that will guide the Router and 
> AMRMProxy decisions on where to fwd the jobs submission/query requests as 
> well as ResourceRequests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5599) Post AM launcher artifacts to ATS

2016-09-20 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507524#comment-15507524
 ] 

Daniel Templeton commented on YARN-5599:


Looks like the patch adds back the log line in 
{{createAMContainerLaunchContext()}}, which reintroduces the security 
vulnerability we were trying to eliminate.  The log line should be dropped 
altogether.

> Post AM launcher artifacts to ATS
> -
>
> Key: YARN-5599
> URL: https://issues.apache.org/jira/browse/YARN-5599
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5599.patch
>
>
> To aid in debugging launch failures, it would be valuable to have an 
> application's launch script and logs posted to ATS.  Because the 
> application's command line may contain private credentials or other secure 
> information, access to the data in ATS should be restricted to the job owner, 
> including the at-rest data.
> Along with making the data available through ATS, the configuration parameter 
> introduced in YARN-5549 and the log line that it guards should be removed.
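As a sketch of the idea (owner-restricted data via a timeline domain), assuming ATSv1-style {{TimelineClient}} usage; the entity type, domain id, reader list, and field names below are hypothetical, not what an eventual patch would necessarily use.

{code}
import org.apache.hadoop.yarn.api.records.timeline.TimelineDomain;
import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
import org.apache.hadoop.yarn.client.api.TimelineClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class PostLaunchContextSketch {
  public static void main(String[] args) throws Exception {
    TimelineClient client = TimelineClient.createTimelineClient();
    client.init(new YarnConfiguration());
    client.start();
    try {
      // Restrict reads to the job owner via a per-application domain.
      TimelineDomain domain = new TimelineDomain();
      domain.setId("AM_LAUNCH_application_1474400000000_0001"); // hypothetical
      domain.setReaders("jobowner");                            // hypothetical
      domain.setWriters("rm");
      client.putDomain(domain);

      TimelineEntity entity = new TimelineEntity();
      entity.setEntityType("AM_LAUNCH_CONTEXT");                // hypothetical
      entity.setEntityId("application_1474400000000_0001");
      entity.setDomainId(domain.getId());
      entity.addOtherInfo("launchScript", "<command line goes here>");
      client.putEntities(entity);
    } finally {
      client.stop();
    }
  }
}
{code}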



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5656) ReservationACLsTestBase fails on trunk

2016-09-20 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507491#comment-15507491
 ] 

Arun Suresh commented on YARN-5656:
---

+1 Committing this shortly..

> ReservationACLsTestBase fails on trunk
> --
>
> Key: YARN-5656
> URL: https://issues.apache.org/jira/browse/YARN-5656
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Sean Po
>Assignee: Sean Po
> Attachments: YARN-5656.v1.patch, YARN-5656.v2.patch
>
>
> ReservationACLsTestBase fails when verifying that a reservation can be 
> successfully updated by a user who did not submit the reservation but who 
> has an admin ACL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5656) ReservationACLsTestBase fails on trunk

2016-09-20 Thread Sean Po (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507475#comment-15507475
 ] 

Sean Po commented on YARN-5656:
---

The Javadoc failures are not caused by the latest patch. 

Changes were made to only four files: NoOverCommitPolicy.java, 
MismatchedUserException.java, ReservationACLsTestBase.java and 
TestNoOverCommitPolicy.java.

None of these were referenced in the Javadoc failure results.

> ReservationACLsTestBase fails on trunk
> --
>
> Key: YARN-5656
> URL: https://issues.apache.org/jira/browse/YARN-5656
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Sean Po
>Assignee: Sean Po
> Attachments: YARN-5656.v1.patch, YARN-5656.v2.patch
>
>
> ReservationACLsTestBase fails when verifying that a reservation can be 
> successfully updated by a user who did not submit the reservation but who 
> has an admin ACL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5356) NodeManager should communicate physical resource capability to ResourceManager

2016-09-20 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated YARN-5356:
--
Attachment: YARN-5356.004.patch

Fixing compilation.

> NodeManager should communicate physical resource capability to ResourceManager
> --
>
> Key: YARN-5356
> URL: https://issues.apache.org/jira/browse/YARN-5356
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Nathan Roberts
>Assignee: Inigo Goiri
> Attachments: YARN-5356.000.patch, YARN-5356.001.patch, 
> YARN-5356.002.patch, YARN-5356.002.patch, YARN-5356.003.patch, 
> YARN-5356.004.patch
>
>
> Currently ResourceUtilization contains absolute quantities of resource used 
> (e.g. 4096MB memory used). It would be good if the NM also communicated the 
> actual physical resource capabilities of the node so that the RM can use this 
> data to schedule more effectively (overcommit, etc)
> Currently the only available information is the Resource the node registered 
> with (or later updated using updateNodeResource). However, these aren't 
> really sufficient to get a good view of how utilized a resource is. For 
> example, if a node reports 400% CPU utilization, does that mean it's 
> completely full, or barely utilized? Today there is no reliable way to figure 
> this out.
> [~elgoiri] - Lots of good work is happening in YARN-2965 so curious if you 
> have thoughts/opinions on this?
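
To make the concern above concrete, here is a toy illustration (made-up numbers, 
not actual NM/RM code) of why an absolute utilization report is only meaningful 
once the node's physical capability is known:
{code}
// Toy illustration (made-up numbers, not actual NM/RM code): the same absolute
// CPU utilization report means very different things depending on the node's
// physical capability, which the RM currently does not know.
public class UtilizationExample {
  public static void main(String[] args) {
    double cpuUsedPercent = 400.0;  // absolute utilization: 100% == one fully busy core

    for (int physicalCores : new int[] {4, 8}) {
      double fractionUsed = cpuUsedPercent / (physicalCores * 100.0);
      System.out.printf("%d physical cores -> %.0f%% of the node is busy%n",
          physicalCores, fractionUsed * 100);
    }
    // 4 cores -> 100% busy (completely full)
    // 8 cores ->  50% busy (plenty of headroom)
  }
}
{code}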



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4205) Add a service for monitoring application life time out

2016-09-20 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507403#comment-15507403
 ] 

Gour Saha commented on YARN-4205:
-

Thanks [~rohithsharma]. A few minor cosmetic follow-up comments, and 2 
additional fundamental questions.

h6. \[ApplicationTimeouts.java\]
{code}
   * Get life timeout of an application. The application will be killed
{code}
Change *life timeout* to lifetime.

{code}
   * @param lifeTimeout of an application in seconds.
{code}
Change *lifeTimeout* to lifetime.

{code}
  public abstract void setLifetime(long lifeTime);
{code}
Change *lifeTime* to lifetime (lowercase t)

h6. \[yarn-default.xml\]
{code}
The RMAppLifeTimeMonitor Service uses this value as monitor interval.
{code}
Change to "The RMAppLifetimeMonitor Service uses this value as lifetime monitor 
interval." (note, lower-cased t in RMAppLifetimeMonitor and added lifetime 
after "value as")

h6. \[TestApplicationLifetimeMonitor.java\]
{code}
  Assert.assertTrue("Applicaiton killed before life timeout value",
{code}
Change "life timeout" to "lifetime" (note, this change is needed in 2 lines)

{code}
  public void testApplicationLifeTimeMonitor() throws Exception {
{code}
testApplicationLifeTimeMonitor -> testApplicationLifetimeMonitor (lowercase t)

{code}
  public void testApplicationLifeTimeOnRMRestart() throws Exception {
{code}
testApplicationLifeTimeOnRMRestart -> testApplicationLifetimeOnRMRestart 
(lowercase t)

h6. \[RMContextImpl.java\]
{code}
  RMAppLifetimeMonitor rmAppLifeTimeMonitor) {
{code}
rmAppLifeTimeMonitor -> rmAppLifetimeMonitor (lowercase t)

h6. \[MockRM.java\]
{code}
  long applicationLifeTime) throws Exception {
{code}
applicationLifeTime -> applicationLifetime (lowercase t)

There are 2 fundamental questions that come to mind that I wanted to run by 
you -

1. Should the *AMRMClientAsync.onShutdownRequest* callback be raised to give the 
AM a chance to do some last-minute work/cleanup/graceful shutdown? I don't think 
we need to, but I still wanted to call it out and hear your thoughts on this.

2. It seems the lifetime is counted from the time of application submission. 
Shouldn't it be counted from the time YARN allocates resources for the AM and 
launches it? What if YARN takes longer than the lifetime to allocate resources 
for the app? It seems the KILL event would then be raised immediately after the 
app reaches the RUNNING state. Am I correct? (A small sketch of this concern 
follows below.)
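
A toy sketch of question 2 (illustrative numbers only, not RM code): if the 
deadline is anchored at submit time and AM allocation takes longer than the 
lifetime, the app becomes killable the moment it starts running.
{code}
// Toy sketch (illustrative numbers, not RM code) of anchoring the lifetime
// deadline at submit time vs. at AM launch time.
public class LifetimeAnchorExample {
  public static void main(String[] args) {
    long lifetimeMs = 60_000;          // configured lifetime: 60s
    long submitTime = 0;               // app submitted at t = 0 ms
    long amLaunchTime = 90_000;        // AM only allocated/launched at t = 90s

    long deadlineFromSubmit = submitTime + lifetimeMs;    // t = 60s, already past at launch
    long deadlineFromLaunch = amLaunchTime + lifetimeMs;  // t = 150s, app gets its full lifetime

    System.out.println("killed as soon as it reaches RUNNING? "
        + (deadlineFromSubmit <= amLaunchTime));          // true
    System.out.println("deadline if anchored at AM launch: t = " + deadlineFromLaunch + " ms");
  }
}
{code}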


> Add a service for monitoring application life time out
> --
>
> Key: YARN-4205
> URL: https://issues.apache.org/jira/browse/YARN-4205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-4205.patch, 0002-YARN-4205.patch, 
> 0003-YARN-4205.patch, 0004-YARN-4205.patch, 0005-YARN-4205.patch, 
> YARN-4205_01.patch, YARN-4205_02.patch, YARN-4205_03.patch
>
>
> This JIRA intends to provide a lifetime monitor service. 
> The service will monitor the applications where the lifetime is configured. 
> If an application runs beyond its lifetime, it will be killed. 
> The lifetime will be counted from the submit time.
> The thread monitoring interval is configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3141) Improve locks in SchedulerApplicationAttempt/FSAppAttempt/FiCaSchedulerApp

2016-09-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507395#comment-15507395
 ] 

Hudson commented on YARN-3141:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10467 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10467/])
Addendum patch for fix javadocs failure which is caused by YARN-3141. (wangda: 
rev e45307c9a063248fcfb08281025d87c4abd343b1)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java


> Improve locks in SchedulerApplicationAttempt/FSAppAttempt/FiCaSchedulerApp
> --
>
> Key: YARN-3141
> URL: https://issues.apache.org/jira/browse/YARN-3141
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 2.9.0
>
> Attachments: YARN-3141.1.patch, YARN-3141.2.patch, YARN-3141.3.patch, 
> YARN-3141.4.patch, YARN-3141.5.patch, YARN-3141.6.patch, 
> YARN-3141.addendum-0.patch
>
>
> Enhance locks in SchedulerApplicationAttempt/FSAppAttempt/FiCaSchedulerApp; 
> as mentioned in YARN-3091, a possible solution is using a read/write lock. 
> Other fine-grained locks for specific purposes / bugs should be addressed in 
> separate tickets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3142) Improve locks in AppSchedulingInfo

2016-09-20 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507385#comment-15507385
 ] 

Wangda Tan commented on YARN-3142:
--

Thanks [~varun_saxena]. I don't have a patch yet; it would be great if you 
could find some cycles in the next one or two days to get the patch ready. I 
can help with reviews.

> Improve locks in AppSchedulingInfo
> --
>
> Key: YARN-3142
> URL: https://issues.apache.org/jira/browse/YARN-3142
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Varun Saxena
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3142) Improve locks in AppSchedulingInfo

2016-09-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507374#comment-15507374
 ] 

Varun Saxena commented on YARN-3142:


Sorry [~leftnoteasy], I missed your comment as I was on leave.

Let me see if I can find cycles to update the patch by tomorrow; otherwise you 
can take it up. I will let you know offline.
If you already have a patch ready for it, though, feel free to pick it up.


> Improve locks in AppSchedulingInfo
> --
>
> Key: YARN-3142
> URL: https://issues.apache.org/jira/browse/YARN-3142
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Varun Saxena
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-09-20 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507356#comment-15507356
 ] 

Wangda Tan commented on YARN-2009:
--

[~eepayne],

The existing user-limit is computed by the following logic:
{code}
user_limit = min {
  max { current_capacity / #active_users, current_capacity * user_limit_percent },
  queue_capacity * user_limit_factor
}
{code}

In my example above, #active_users is 2 instead of 3 (because B has no more 
pending resource). The reason it uses #active_users is that the existing 
user-limit is meant to balance *available resources among active users*; it 
doesn't consider the need to re-balance (via preemption) the usage of existing 
users. For intra-queue user-limit preemption to correctly balance usage between 
users, we need to fix the scheduling logic as well.
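
To make the arithmetic concrete, a standalone sketch (not the CapacityScheduler 
code) that plugs the example's numbers into the formula above:
{code}
// Standalone sketch (not the CapacityScheduler code) of the user-limit formula
// above, using the example numbers: used = guaranteed = max = 12,
// minimum-user-limit-percent = 33, user-limit-factor = 1, and 2 active users
// (user B has no pending resource, so it is not counted as active).
public class UserLimitExample {
  static double userLimit(double currentCapacity, int activeUsers,
      double userLimitPercent, double queueCapacity, double userLimitFactor) {
    double perActiveUser = currentCapacity / activeUsers;          // 12 / 2 = 6
    double byPercent = currentCapacity * userLimitPercent;         // 12 * 0.33 ~= 4
    double byFactor = queueCapacity * userLimitFactor;             // 12 * 1 = 12
    return Math.min(Math.max(perActiveUser, byPercent), byFactor); // min(max(6, 4), 12) = 6
  }

  public static void main(String[] args) {
    // Because only 2 users are active, the limit works out to 6, not 4.
    System.out.println(userLimit(12, 2, 0.33, 12, 1.0));
  }
}
{code}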

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3141) Improve locks in SchedulerApplicationAttempt/FSAppAttempt/FiCaSchedulerApp

2016-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507329#comment-15507329
 ] 

Hadoop QA commented on YARN-3141:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
58s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 21s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in trunk failed. 
{color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 19s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 941 new + 0 unchanged - 942 fixed = 941 total (was 942) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 34m 18s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 8s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829413/YARN-3141.addendum-0.patch
 |
| JIRA Issue | YARN-3141 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a5169d83015a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c6d1d74 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/13163/artifact/patchprocess/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/13163/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13163/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output 

[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-09-20 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507326#comment-15507326
 ] 

Eric Payne commented on YARN-2009:
--

[~leftnoteasy], I am confused by your above example:
{quote}
Queue's user-limit-percent = 33
Queue's used=guaranteed=max=12. 
There're 3 users (A,B,C) in the queue, order of applications are A/B/C
...
So the computed user-limit-resource will be 6.
...
The actual user-ideal-assignment when doing scheduling is 6/6/0 !
{quote}
If {{minimum-user-limit-percent == 33}}, why is the {{user-limit-resource == 
6}}?

Shouldn't {{idealAssigned}} be 4/4/4, not 6/6/0?

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5621) Support LinuxContainerExecutor to create symlinks for continuously localized resources

2016-09-20 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507320#comment-15507320
 ] 

Chris Douglas commented on YARN-5621:
-

I think I see where the CL proposal was unclear.

It is an alternative to CE changes; container start remains as-is. The proposal 
was scoped only to localizing resources for running containers. The CE is 
agnostic to new/running containers for an application; it may be used by both 
concurrently. By adding a new command {{LINK}} to its protocol, the NM can 
instruct the {{ContainerLocalizer}} to create a symlink to a resource for a 
running container. Again, these commands could be grouped.

{quote}
> a case that already exists for containers on the same node requesting the 
> same resource
Do you mean this is an existing implemented functionality or this is an 
existing use-case?
{quote}

Neither. The case where running containers (c~1x~, c~2y~) for different 
applications (a~1~, a~2~) request the same resource _R_ exists. Both will 
start {{ContainerLocalizer}} instances, but only one will download the resource 
to the private cache. In the CL proposal, this is the same as rollback, where 
the CL starts, heartbeats, then receives a command to LINK an existing resource 
without downloading anything. By "a case that already exists", I meant it's a 
case the CL proposal handles implicitly.

bq. yeah, I feel it's inefficient to start a localizer process to only create 
symlinks..

No question. But if localizing a new resource takes a few seconds, and services 
upgrade over minutes/hours, then saving a few hundred milliseconds is not worth 
adding {{RUN_SCRIPT}} to the CE.
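
As a rough sketch of what the CL proposal amounts to (names are illustrative 
only and do not match the actual Hadoop localizer protocol): the heartbeat 
response gains one more action that asks the already-running 
{{ContainerLocalizer}} to symlink a cached resource instead of fetching it.
{code}
// Illustrative sketch only; names do not match the actual Hadoop localizer
// protocol. The idea: the NM's heartbeat response to a running ContainerLocalizer
// can carry a LINK action pointing at an already-downloaded resource, so no new
// process and no re-download is needed.
enum LocalizerActionSketch {
  LIVE,   // keep heartbeating, nothing to do
  FETCH,  // download a new resource into the private cache
  LINK,   // proposed: symlink an existing cached resource into the container dir
  DIE     // localization finished, shut down
}
{code}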

> Support LinuxContainerExecutor to create symlinks for continuously localized 
> resources
> --
>
> Key: YARN-5621
> URL: https://issues.apache.org/jira/browse/YARN-5621
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5621.1.patch, YARN-5621.2.patch, YARN-5621.3.patch, 
> YARN-5621.4.patch, YARN-5621.5.patch
>
>
> When new resources are localized, a new symlink needs to be created for the 
> localized resource. This is the change for the LinuxContainerExecutor to 
> create the symlinks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5599) Post AM launcher artifacts to ATS

2016-09-20 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507292#comment-15507292
 ] 

Daniel Templeton commented on YARN-5599:


The original intent was to cover for the removal of a log message from the RM 
that output the launch command.  With the log message gone, we wanted to have 
another source for the launch command so that launch failures can be debugged.

> Post AM launcher artifacts to ATS
> -
>
> Key: YARN-5599
> URL: https://issues.apache.org/jira/browse/YARN-5599
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5599.patch
>
>
> To aid in debugging launch failures, it would be valuable to have an 
> application's launch script and logs posted to ATS.  Because the 
> application's command line may contain private credentials or other secure 
> information, access to the data in ATS should be restricted to the job owner, 
> including the at-rest data.
> Along with making the data available through ATS, the configuration parameter 
> introduced in YARN-5549 and the log line that it guards should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-20 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507260#comment-15507260
 ] 

Arun Suresh edited comment on YARN-5609 at 9/20/16 5:50 PM:


Updating the patch. Thanks, [~jianhe].

* Addressed most of your concerns.
* Added some more comments to clarify some assumptions.
* Added test cases to verify that explicit rollback is still possible AFTER an 
upgraded container has been restarted.

bq.  I think this will cause the resources to be re-requested on restart. Even 
though the effect might still be the same, because the resources are already 
localized and the requests will be ignored,  but I think we can still try to 
avoid sending these unnecessary events in case the resource set is large ?
I had intentionally kept it that way (my thinking was that the Tracker would 
then verify that the resources, directories, etc. are still good)...
But I agree with you; it is inefficient. I've updated the patch to make sure that 
in the case of rollback and restart, this won't happen. I've also put some comments 
there; do take a look and let me know if it's fine.


With regard to this:
{noformat} 
private ReInitializationContext createContextForRollback() {
  if (oldLaunchContext == null) {
return null;
  } else {
{noformat}
There should not be an NPE, since it is always called in conjunction with 
{{container.canRollback()}}, which returns true only if _oldLaunchContext_ is 
non-null.





was (Author: asuresh):
Updating patch, Thanks [~jianhe]..

* Addressed most of your concerns.
* Added some more comments to clarify some assumptions.
* Added testcases to verify that Explicit rollback is still possible AFTER 
upgraded container has been restarted.

bq.  I think this will cause the resources to be re-requested on restart. Even 
though the effect might still be the same, because the resources are already 
localized and the requests will be ignored,  but I think we can still try to 
avoid sending these unnecessary events in case the resource set is large ?
I had intentionally kept it that way (my thinking was that the Tracker will 
then verify that the resources.. directories etc. are still good)...
But I agree with you.. It is inefficient. I've updated patch to make sure that 
in care of rollback and restart, this wont happen.. do take a look and let me 
know if its fine.


With regard to this:
{noformat} 
private ReInitializationContext createContextForRollback() {
  if (oldLaunchContext == null) {
return null;
  } else {
{noformat}
There should not be a NPE, since it is always called in conjunctions with a 
{{container.canRollback()}} which returns true only if _oldLaunchContext_ is 
non null.




> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch, YARN-5609.002.patch, 
> YARN-5609.003.patch, YARN-5609.004.patch
>
>
> YARN-5620 and YARN-5637 allow an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *upgradeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-20 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5609:
--
Attachment: YARN-5609.004.patch

Updating the patch. Thanks, [~jianhe].

* Addressed most of your concerns.
* Added some more comments to clarify some assumptions.
* Added test cases to verify that explicit rollback is still possible AFTER an 
upgraded container has been restarted.

bq.  I think this will cause the resources to be re-requested on restart. Even 
though the effect might still be the same, because the resources are already 
localized and the requests will be ignored,  but I think we can still try to 
avoid sending these unnecessary events in case the resource set is large ?
I had intentionally kept it that way (my thinking was that the Tracker would 
then verify that the resources, directories, etc. are still good)...
But I agree with you; it is inefficient. I've updated the patch to make sure that 
in the case of rollback and restart, this won't happen. Do take a look and let me 
know if it's fine.


With regard to this:
{noformat} 
private ReInitializationContext createContextForRollback() {
  if (oldLaunchContext == null) {
return null;
  } else {
{noformat}
There should not be an NPE, since it is always called in conjunction with 
{{container.canRollback()}}, which returns true only if _oldLaunchContext_ is 
non-null.
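
For reference, a rough sketch of the API surface proposed in the description 
below (signatures are illustrative only; the actual ContainerManagementProtocol 
additions use request/response wrappers and may differ in shape):
{code}
import java.io.IOException;

import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.exceptions.YarnException;

// Illustrative sketch only: method names follow the JIRA description; the real
// protocol methods may differ in signatures and wrapper types.
public interface ContainerUpgradeSketch {
  // Re-initialize a running container with a new launch context.
  void upgradeContainer(ContainerId containerId, ContainerLaunchContext newContext)
      throws YarnException, IOException;

  // Revert to the launch context that was in effect before the last upgrade.
  void rollbackLastUpgrade(ContainerId containerId) throws YarnException, IOException;

  // Discard the saved pre-upgrade context, making the upgrade permanent.
  void commitLastUpgrade(ContainerId containerId) throws YarnException, IOException;

  // Re-launch the container with its current launch context.
  void restartContainer(ContainerId containerId) throws YarnException, IOException;
}
{code}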




> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch, YARN-5609.002.patch, 
> YARN-5609.003.patch, YARN-5609.004.patch
>
>
> YARN-5620 and YARN-5637 allow an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *upgradeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4205) Add a service for monitoring application life time out

2016-09-20 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507237#comment-15507237
 ] 

Rohith Sharma K S commented on YARN-4205:
-

It looks like the 3 javadoc warnings are related; I will upload a new patch. 
Before that I will wait for Gour to review the last patch.

> Add a service for monitoring application life time out
> --
>
> Key: YARN-4205
> URL: https://issues.apache.org/jira/browse/YARN-4205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-4205.patch, 0002-YARN-4205.patch, 
> 0003-YARN-4205.patch, 0004-YARN-4205.patch, 0005-YARN-4205.patch, 
> YARN-4205_01.patch, YARN-4205_02.patch, YARN-4205_03.patch
>
>
> This JIRA intends to provide a lifetime monitor service. 
> The service will monitor the applications where the lifetime is configured. 
> If an application runs beyond its lifetime, it will be killed. 
> The lifetime will be counted from the submit time.
> The thread monitoring interval is configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3141) Improve locks in SchedulerApplicationAttempt/FSAppAttempt/FiCaSchedulerApp

2016-09-20 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-3141:
-
Attachment: YARN-3141.addendum-0.patch

Ah, my bad, there's a one-line change which causes the javadoc build to fail.

There are tons of messages in the javadoc build which start with "[ERROR]"; 
however, most of them are warnings, and the real error message is submerged in 
that output:

bq. [ERROR] 
/Users/wtan/project/github/hadoop-common-trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java:331:
 error: @param name not found

Uploaded addendum-0 patch.
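
For context, this class of failure comes from a {{@param}} tag whose name no 
longer matches any method parameter; a minimal illustration (not the actual 
FiCaSchedulerApp code) follows:
{code}
// Minimal illustration (not the actual FiCaSchedulerApp code) of the
// "error: @param name not found" failure: the @param tag still references a
// parameter that was renamed, so strict javadoc/doclint reports it as an error.
public class JavadocParamExample {
  /**
   * Assigns work to a node.
   *
   * @param name the node to assign on
   */
  public void assign(String node) {   // parameter renamed to "node"; @param name is now stale
  }
}
{code}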

> Improve locks in SchedulerApplicationAttempt/FSAppAttempt/FiCaSchedulerApp
> --
>
> Key: YARN-3141
> URL: https://issues.apache.org/jira/browse/YARN-3141
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 2.9.0
>
> Attachments: YARN-3141.1.patch, YARN-3141.2.patch, YARN-3141.3.patch, 
> YARN-3141.4.patch, YARN-3141.5.patch, YARN-3141.6.patch, 
> YARN-3141.addendum-0.patch
>
>
> Enhance locks in SchedulerApplicationAttempt/FSAppAttempt/FiCaSchedulerApp; 
> as mentioned in YARN-3091, a possible solution is using a read/write lock. 
> Other fine-grained locks for specific purposes / bugs should be addressed in 
> separate tickets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3141) Improve locks in SchedulerApplicationAttempt/FSAppAttempt/FiCaSchedulerApp

2016-09-20 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507166#comment-15507166
 ] 

Wangda Tan commented on YARN-3141:
--

Thanks for pointing that out, [~aw]. Looking at it now.

> Improve locks in SchedulerApplicationAttempt/FSAppAttempt/FiCaSchedulerApp
> --
>
> Key: YARN-3141
> URL: https://issues.apache.org/jira/browse/YARN-3141
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 2.9.0
>
> Attachments: YARN-3141.1.patch, YARN-3141.2.patch, YARN-3141.3.patch, 
> YARN-3141.4.patch, YARN-3141.5.patch, YARN-3141.6.patch
>
>
> Enhance locks in SchedulerApplicationAttempt/FSAppAttempt/FiCaSchedulerApp; 
> as mentioned in YARN-3091, a possible solution is using a read/write lock. 
> Other fine-grained locks for specific purposes / bugs should be addressed in 
> separate tickets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3692) Allow REST API to set a user generated message when killing an application

2016-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507069#comment-15507069
 ] 

Hadoop QA commented on YARN-3692:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 45s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
46s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 21s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in trunk failed. 
{color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 8m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
36s {color} | {color:green} root: The patch generated 0 new + 232 unchanged - 1 
fixed = 232 total (was 233) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
33s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 20s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 15s 
{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client 
generated 1 new + 157 unchanged - 0 fixed = 158 total (was 157) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 17s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 34m 4s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 5s 
{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 118m 20s 
{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
33s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 222m 5s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829380/0007-YARN-3692.1.patch
 |
| JIRA Issue | YARN-3692 |
| Optional Tests |  asflicense  compile  

[jira] [Commented] (YARN-4855) Should check if node exists when replace nodelabels

2016-09-20 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507056#comment-15507056
 ] 

Sunil G commented on YARN-4855:
---

Thanks [~Tao Jie] and [~Naganarasimha] for taking the discussions forward.

We already support sub options as below.
{noformat}
[-refreshNodes [-g|graceful [timeout in seconds] -client|server]]
{noformat}
So I think *"-fail-on-unknown-nodes"* makes more sense to me. However, 
RMAdminCLI options generally use camel casing, and we are not supporting many 
sub-options here so far, unlike in ApplicationCLI. ApplicationCLI already has 
{{appStates}} etc., so maybe we could make this sub-option something like 
{{failOnUnknownNodes}}.

> Should check if node exists when replace nodelabels
> ---
>
> Key: YARN-4855
> URL: https://issues.apache.org/jira/browse/YARN-4855
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: YARN-4855.001.patch, YARN-4855.002.patch, 
> YARN-4855.003.patch, YARN-4855.004.patch, YARN-4855.005.patch, 
> YARN-4855.006.patch, YARN-4855.007.patch, YARN-4855.008.patch, 
> YARN-4855.009.patch, YARN-4855.010.patch, YARN-4855.011.patch, 
> YARN-4855.012.patch
>
>
> Today when we add node labels to nodes, it succeeds without any message even 
> if the nodes are not existing NodeManagers in the cluster.
> It could be like this:
> When we use *yarn rmadmin -replaceLabelsOnNode --fail-on-unknown-nodes 
> "node1=label1"*, it would be denied if the node is unknown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5366) Add support for toggling the removal of completed and failed docker containers

2016-09-20 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507053#comment-15507053
 ] 

Allen Wittenauer commented on YARN-5366:


And another question:

Why are we even using container-executor for this and some of these other 
docker commands?  If yarn is in the docker group, we shouldn't need privilege 
at all to run docker.

> Add support for toggling the removal of completed and failed docker containers
> --
>
> Key: YARN-5366
> URL: https://issues.apache.org/jira/browse/YARN-5366
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5366.001.patch, YARN-5366.002.patch, 
> YARN-5366.003.patch, YARN-5366.004.patch, YARN-5366.005.patch, 
> YARN-5366.006.patch
>
>
> Currently, completed and failed docker containers are removed by 
> container-executor. Add a job level environment variable to 
> DockerLinuxContainerRuntime to allow the user to toggle whether they want the 
> container deleted or not and remove the logic from container-executor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5256) Add REST endpoint to support detailed NodeLabel Informations

2016-09-20 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5256:
--
Summary: Add REST endpoint to support detailed NodeLabel Informations  
(was: [YARN-3368] Add REST endpoint to support detailed NodeLabel Informations)

> Add REST endpoint to support detailed NodeLabel Informations
> 
>
> Key: YARN-5256
> URL: https://issues.apache.org/jira/browse/YARN-5256
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-5256-YARN-3368.1.patch, YARN-5256-YARN-3368.2.patch
>
>
> Add a new REST endpoint to fetch more detailed information about node 
> labels, such as resources, list of nodes, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5256) [YARN-3368] Add REST endpoint to support detailed NodeLabel Informations

2016-09-20 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5256:
--
Issue Type: Bug  (was: Sub-task)
Parent: (was: YARN-3368)

> [YARN-3368] Add REST endpoint to support detailed NodeLabel Informations
> 
>
> Key: YARN-5256
> URL: https://issues.apache.org/jira/browse/YARN-5256
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-5256-YARN-3368.1.patch, YARN-5256-YARN-3368.2.patch
>
>
> Add a new REST endpoint to fetch more detailed information about node 
> labels, such as resources, list of nodes, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4855) Should check if node exists when replace nodelabels

2016-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15507006#comment-15507006
 ] 

Hadoop QA commented on YARN-4855:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
26s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 21s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in trunk failed. 
{color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
38s {color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch 
generated 0 new + 165 unchanged - 2 fixed = 165 total (was 167) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
56s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 19s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 17s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 34m 35s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 45s 
{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m 29s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829400/YARN-4855.012.patch |
| JIRA Issue | YARN-4855 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux fe65b3fed91b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Reopened] (YARN-3141) Improve locks in SchedulerApplicationAttempt/FSAppAttempt/FiCaSchedulerApp

2016-09-20 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reopened YARN-3141:


bq. -1  javadoc 0m 21s  hadoop-yarn-server-resourcemanager in the patch 
failed. 

I'm not sure why this error from precommit was ignored, but this patch just 
broke javadoc generation for everyone.  Please fix or revert ASAP.

> Improve locks in SchedulerApplicationAttempt/FSAppAttempt/FiCaSchedulerApp
> --
>
> Key: YARN-3141
> URL: https://issues.apache.org/jira/browse/YARN-3141
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 2.9.0
>
> Attachments: YARN-3141.1.patch, YARN-3141.2.patch, YARN-3141.3.patch, 
> YARN-3141.4.patch, YARN-3141.5.patch, YARN-3141.6.patch
>
>
> Enhance locks in SchedulerApplicationAttempt/FSAppAttempt/FiCaSchedulerApp; 
> as mentioned in YARN-3091, a possible solution is using a read/write lock. 
> Other fine-grained locks for specific purposes / bugs should be addressed in 
> separate tickets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5324) Stateless router policies implementation

2016-09-20 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15504769#comment-15504769
 ] 

Subru Krishnan edited comment on YARN-5324 at 9/20/16 3:58 PM:
---

Thanks [~curino] for addressing my comments. 

The patch looks very close; I have a few follow-up comments:
  * {{PriorityRouterPolicy}} seems to be missing in the latest version.
  * In {{BaseWeightedRouterPolicy}}, use a Logger instead of 
_e.printStackTrace_.
  * Are we handling the null case for *policyInfo* in 
{{BaseWeightedRouterPolicy}}?

  
  * bq. check for active subclusters is indeed somewhat repeated
  In that case, we should have a base version in {{BaseWeightedRouterPolicy}} 
which others can override in case they have custom logic.

  
  * The suggestion of adding *selectSubCluster* is not for API purposes but 
purely for readability as every _RouterPolicy_ has the same pattern.
  * Rename {{BaseFederationPoliciesTest}} to 
{{BaseFederationRouterPoliciesTest}}
  * Why can't we move *testNoSubclusters* to 
{{BaseFederationRouterPoliciesTest}}?

  
  * bq. In all/most tests the set of "activeSubclusters" is chosen to be a 
subset of the one specified in the policy weights. All policies are basically 
stateless, previous decisions should not affect following ones so the multi 
invocation tests are only relevant if we check statistical properties 
  IIUC then, the Javadoc _Generate large number of randomized tests_ in the tests 
seems misleading; can you update it?


  * bq. Some of the method in FederationPoliciesTestUtil are used by the 
upcoming patches for AMRMProxy (I was trying to avoid editing that class over 
and over at every patch).
  We should _only_ have related changes in the patch. Editing the same files 
incrementally over multiple patches is the norm, as otherwise we will lose 
track of provenance, which is required for selective cherry-picking, roll-backs, 
etc.




was (Author: subru):
Thanks [~curino] for addressing my comments. 

The patch looks very close, have a few follow up comments:
  * {{PriorityRouterPolicy}} seems to be missing in the latest version.
  * Are we handling the null case for *policyInfo* in 
{{BaseWeightedRouterPolicy}}?

  
  * bq. check for active subclusters is indeed somewhat repeated
  In that case, we should have a base version in  {{BaseWeightedRouterPolicy}} 
which others can override in case they have acustom logic.

  
  * The suggestion of adding *selectSubCluster* is not for API purposes but 
purely for readability as every _RouterPolicy_ has the same pattern.
  * Rename {{BaseFederationPoliciesTest}} to 
{{BaseFederationRouterPoliciesTest}}
  * Why can't we move *testNoSubclusters* to 
{{BaseFederationRouterPoliciesTest}}?

  
  * bq. In all/most tests the set of "activeSubclusters" is chosen to be a 
subset of the one specified in the policy weights. All policies are basically 
stateless, previous decisions should not affect following ones so the multi 
invocation tests are only relevant if we check statistical properties 
  IIUC then, the Javadocs _Generate large number of randomized tests_ in tests 
seem misleading, can you update.


  * bq. Some of the method in FederationPoliciesTestUtil are used by the 
upcoming patches for AMRMProxy (I was trying to avoid editing that class over 
and over at every patch).
  We should _only_ have related changes in the patch. Editing same files 
incrementally over multiple patches is the norm as otherwise we will loose 
track of provenance which is required for selective cherry-picking, roll-backs 
etc.



> Stateless router policies implementation
> 
>
> Key: YARN-5324
> URL: https://issues.apache.org/jira/browse/YARN-5324
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5324-YARN-2915.06.patch, 
> YARN-5324-YARN-2915.07.patch, YARN-5324-YARN-2915.08.patch, 
> YARN-5324-YARN-2915.09.patch, YARN-5324-YARN-2915.10.patch, 
> YARN-5324-YARN-2915.11.patch, YARN-5324-YARN-2915.12.patch, 
> YARN-5324-YARN-2915.13.patch, YARN-5324.01.patch, YARN-5324.02.patch, 
> YARN-5324.03.patch, YARN-5324.04.patch, YARN-5324.05.patch
>
>
> These are policies at the Router that do not require maintaining state across 
> choices (e.g., weighted random).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4855) Should check if node exists when replace nodelabels

2016-09-20 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506856#comment-15506856
 ] 

Naganarasimha G R commented on YARN-4855:
-

Thanks [~Tao Jie],
I agree that updating all options would take considerable effort, but what about 
doing it only for this command? 
The reason I am insisting is that once we provide *"--fail-on-unknown-nodes"*, 
changing it later to *"-fail-on-unknown-nodes"* would be a compatibility break, 
and a new JIRA only for this command would also not make much sense. 
Thoughts from others? cc/ [~sunilg] & [~wangda]

> Should check if node exists when replace nodelabels
> ---
>
> Key: YARN-4855
> URL: https://issues.apache.org/jira/browse/YARN-4855
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: YARN-4855.001.patch, YARN-4855.002.patch, 
> YARN-4855.003.patch, YARN-4855.004.patch, YARN-4855.005.patch, 
> YARN-4855.006.patch, YARN-4855.007.patch, YARN-4855.008.patch, 
> YARN-4855.009.patch, YARN-4855.010.patch, YARN-4855.011.patch, 
> YARN-4855.012.patch
>
>
> Today when we add node labels to nodes, it succeeds without any message even 
> if the nodes are not existing NodeManagers in the cluster.
> It could be like this:
> When we use *yarn rmadmin -replaceLabelsOnNode --fail-on-unknown-nodes 
> "node1=label1"*, it would be denied if the node is unknown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5655) TestContainerManagerSecurity#testNMTokens is asserting

2016-09-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506815#comment-15506815
 ] 

Hudson commented on YARN-5655:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10466 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10466/])
YARN-5655. TestContainerManagerSecurity#testNMTokens is asserting. (jlowe: rev 
c6d1d742e70e7b8f1d89cf9a4780657646e6a367)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestContainerManagerSecurity.java


> TestContainerManagerSecurity#testNMTokens is asserting
> --
>
> Key: YARN-5655
> URL: https://issues.apache.org/jira/browse/YARN-5655
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Jason Lowe
>Assignee: Robert Kanter
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: YARN-5655.001.patch
>
>
> TestContainerManagerSecurity has been failing recently in 2.8:
> {noformat}
> Running org.apache.hadoop.yarn.server.TestContainerManagerSecurity
> Tests run: 2, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 80.928 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.server.TestContainerManagerSecurity
> testContainerManager[0](org.apache.hadoop.yarn.server.TestContainerManagerSecurity)
>   Time elapsed: 44.478 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.waitForContainerToFinishOnNM(TestContainerManagerSecurity.java:394)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testNMTokens(TestContainerManagerSecurity.java:337)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager(TestContainerManagerSecurity.java:157)
> testContainerManager[1](org.apache.hadoop.yarn.server.TestContainerManagerSecurity)
>   Time elapsed: 34.964 sec  <<< FAILURE!
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testNMTokens(TestContainerManagerSecurity.java:333)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager(TestContainerManagerSecurity.java:157)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5655) TestContainerManagerSecurity#testNMTokens is asserting

2016-09-20 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-5655:
-
Hadoop Flags: Reviewed
 Summary: TestContainerManagerSecurity#testNMTokens is asserting  (was: 
TestContainerManagerSecurity is failing)

+1 for the patch.  It fixes the problem introduced in YARN-5566, and we can 
address the other issues with this test in YARN-4342.  I updated the summary to 
better differentiate it from YARN-4342.

Committing this.

> TestContainerManagerSecurity#testNMTokens is asserting
> --
>
> Key: YARN-5655
> URL: https://issues.apache.org/jira/browse/YARN-5655
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Jason Lowe
>Assignee: Robert Kanter
> Attachments: YARN-5655.001.patch
>
>
> TestContainerManagerSecurity has been failing recently in 2.8:
> {noformat}
> Running org.apache.hadoop.yarn.server.TestContainerManagerSecurity
> Tests run: 2, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 80.928 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.server.TestContainerManagerSecurity
> testContainerManager[0](org.apache.hadoop.yarn.server.TestContainerManagerSecurity)
>   Time elapsed: 44.478 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.waitForContainerToFinishOnNM(TestContainerManagerSecurity.java:394)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testNMTokens(TestContainerManagerSecurity.java:337)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager(TestContainerManagerSecurity.java:157)
> testContainerManager[1](org.apache.hadoop.yarn.server.TestContainerManagerSecurity)
>   Time elapsed: 34.964 sec  <<< FAILURE!
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testNMTokens(TestContainerManagerSecurity.java:333)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager(TestContainerManagerSecurity.java:157)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4855) Should check if node exists when replace nodelabels

2016-09-20 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated YARN-4855:
--
Attachment: YARN-4855.012.patch

> Should check if node exists when replace nodelabels
> ---
>
> Key: YARN-4855
> URL: https://issues.apache.org/jira/browse/YARN-4855
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: YARN-4855.001.patch, YARN-4855.002.patch, 
> YARN-4855.003.patch, YARN-4855.004.patch, YARN-4855.005.patch, 
> YARN-4855.006.patch, YARN-4855.007.patch, YARN-4855.008.patch, 
> YARN-4855.009.patch, YARN-4855.010.patch, YARN-4855.011.patch, 
> YARN-4855.012.patch
>
>
> Today when we add nodelabels to nodes, it succeeds even if the nodes are not 
> existing NodeManagers in the cluster, without any message.
> It could be like this:
> When we use *yarn rmadmin -replaceLabelsOnNode --fail-on-unknown-nodes 
> "node1=label1"* , it would be denied if the node is unknown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-09-20 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-2009:
--
Attachment: YARN-2009.0002.patch

Thanks [~eepayne] and [~leftnoteasy]. Attaching new patch.

Regarding the user-limit discussion, I think the general approach looks fine. I feel 
we could also add a dead-zone around the user-limit, which would help avoid thrashing 
scenarios. (We do need it for priority, as that is more of a direct calculation.)
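
A minimal sketch of the dead-zone idea, assuming hypothetical names and a 5% slack 
value rather than anything from the patch: a user is considered for preemption only 
when usage exceeds the limit by more than the dead-zone, so small oscillations around 
the user-limit do not trigger preempt/allocate thrashing.

{code}
// Sketch only: a dead-zone around the user limit to avoid preemption thrashing.
public class UserLimitDeadZoneSketch {
  // 5% slack is an assumed value, not something from the patch.
  static final double DEAD_ZONE_FRACTION = 0.05;

  static boolean overLimitBeyondDeadZone(long userUsageMB, long userLimitMB) {
    return userUsageMB > userLimitMB * (1.0 + DEAD_ZONE_FRACTION);
  }

  public static void main(String[] args) {
    System.out.println(overLimitBeyondDeadZone(104, 100)); // false: inside the dead-zone
    System.out.println(overLimitBeyondDeadZone(110, 100)); // true: clearly over the limit
  }
}
{code}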

I am also thinking about the case where we have a complete preemption scenario: a few 
containers coming from inter-queue preemption, a few others from user-limit, and 
finally some more containers coming from the priority-based policy.

I think we might need to improve the preemption metrics here. I would like to design a 
preemption metrics module so that we get clear information about how much preemption 
each module performs. Thoughts?

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4855) Should check if node exists when replace nodelabels

2016-09-20 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506713#comment-15506713
 ] 

Tao Jie commented on YARN-4855:
---

[~Naganarasimha], thank you for comments!
Yes, it would be better to use org.apache.commons.cli.CommandLine to parse 
CLI commands, but it would take a lot of work to change the existing code in 
RMAdminCLI. I think it is better to do that refactoring in another JIRA, do 
you agree? 
3&4 are fixed in the latest patch.


> Should check if node exists when replace nodelabels
> ---
>
> Key: YARN-4855
> URL: https://issues.apache.org/jira/browse/YARN-4855
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: YARN-4855.001.patch, YARN-4855.002.patch, 
> YARN-4855.003.patch, YARN-4855.004.patch, YARN-4855.005.patch, 
> YARN-4855.006.patch, YARN-4855.007.patch, YARN-4855.008.patch, 
> YARN-4855.009.patch, YARN-4855.010.patch, YARN-4855.011.patch
>
>
> Today when we add nodelabels to nodes, it succeeds even if the nodes are not 
> existing NodeManagers in the cluster, without any message.
> It could be like this:
> When we use *yarn rmadmin -replaceLabelsOnNode --fail-on-unknown-nodes 
> "node1=label1"* , it would be denied if the node is unknown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5641) Localizer leaves behind tarballs after container is complete

2016-09-20 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506564#comment-15506564
 ] 

Eric Badger commented on YARN-5641:
---

The test failure is unrelated to this patch and passes for me locally. 
[~Naganarasimha], [~jlowe], could you review the most recent patch? Thanks!

> Localizer leaves behind tarballs after container is complete
> 
>
> Key: YARN-5641
> URL: https://issues.apache.org/jira/browse/YARN-5641
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: YARN-5641.001.patch, YARN-5641.002.patch, 
> YARN-5641.003.patch
>
>
> The localizer sometimes fails to clean up extracted tarballs leaving large 
> footprints that persist on the nodes indefinitely. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3692) Allow REST API to set a user generated message when killing an application

2016-09-20 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-3692:

Attachment: 0007-YARN-3692.1.patch

Updated the patch

> Allow REST API to set a user generated message when killing an application
> --
>
> Key: YARN-3692
> URL: https://issues.apache.org/jira/browse/YARN-3692
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajat Jain
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-3692.patch, 0002-YARN-3692.patch, 
> 0003-YARN-3692.patch, 0004-YARN-3692.patch, 0005-YARN-3692.1.patch, 
> 0005-YARN-3692.patch, 0006-YARN-3692.patch, 0007-YARN-3692.1.patch, 
> 0007-YARN-3692.patch
>
>
> Currently YARN's REST API supports killing an application without setting a 
> diagnostic message. It would be good to provide that support.
> *Use Case*
> Usually this helps in workflow management in a multi-tenant environment when 
> the workflow scheduler (or the hadoop admin) wants to kill a job - and let 
> the user know the reason why the job was killed. Killing the job by setting a 
> diagnostic message is a very good solution for that. Ideally, we can set the 
> diagnostic message on all such interfaces:
> yarn kill -applicationId ... -diagnosticMessage "some message added by 
> admin/workflow"
> REST API { 'state': 'KILLED', 'diagnosticMessage': 'some message added by 
> admin/workflow'}
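
For illustration, a hedged sketch of what the proposed request could look like against 
the existing application-state REST endpoint ({{/ws/v1/cluster/apps/{appid}/state}}); 
the {{diagnosticMessage}} field is the proposal above, not something the current API 
accepts, and the RM host and application id are placeholders.

{code}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Sketch only: kill an application with a diagnostic message through the RM
// app-state REST endpoint. The diagnosticMessage field is the proposal in this
// JIRA; host and application id are placeholders.
public class KillWithDiagnosticsSketch {
  public static void main(String[] args) throws Exception {
    String rm = "http://rm-host:8088";                     // assumption
    String appId = "application_1474357000000_0001";       // assumption
    URL url = new URL(rm + "/ws/v1/cluster/apps/" + appId + "/state");

    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    String body = "{\"state\":\"KILLED\","
        + "\"diagnosticMessage\":\"some message added by admin/workflow\"}";
    try (OutputStream os = conn.getOutputStream()) {
      os.write(body.getBytes(StandardCharsets.UTF_8));
    }
    System.out.println("HTTP " + conn.getResponseCode());
  }
}
{code}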



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3692) Allow REST API to set a user generated message when killing an application

2016-09-20 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506238#comment-15506238
 ] 

Rohith Sharma K S commented on YARN-3692:
-

Oops, let me upload a new patch. Thanks.

> Allow REST API to set a user generated message when killing an application
> --
>
> Key: YARN-3692
> URL: https://issues.apache.org/jira/browse/YARN-3692
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajat Jain
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-3692.patch, 0002-YARN-3692.patch, 
> 0003-YARN-3692.patch, 0004-YARN-3692.patch, 0005-YARN-3692.1.patch, 
> 0005-YARN-3692.patch, 0006-YARN-3692.patch, 0007-YARN-3692.patch
>
>
> Currently YARN's REST API supports killing an application without setting a 
> diagnostic message. It would be good to provide that support.
> *Use Case*
> Usually this helps in workflow management in a multi-tenant environment when 
> the workflow scheduler (or the hadoop admin) wants to kill a job - and let 
> the user know the reason why the job was killed. Killing the job by setting a 
> diagnostic message is a very good solution for that. Ideally, we can set the 
> diagnostic message on all such interfaces:
> yarn kill -applicationId ... -diagnosticMessage "some message added by 
> admin/workflow"
> REST API { 'state': 'KILLED', 'diagnosticMessage': 'some message added by 
> admin/workflow'}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3359) Recover collector list in RM failed over

2016-09-20 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506221#comment-15506221
 ] 

Rohith Sharma K S commented on YARN-3359:
-

The overall patch looks good. I think the right place for the reregisterCollectors 
method is in NodeManager#resyncWithRM when rmWorkPreservingRestartEnabled is true.

> Recover collector list in RM failed over
> 
>
> Key: YARN-3359
> URL: https://issues.apache.org/jira/browse/YARN-3359
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Junping Du
>Assignee: Li Lu
>  Labels: YARN-5355
> Attachments: YARN-3359-YARN-5638.patch
>
>
> Per discussion in YARN-3039, split the recover work from RMStateStore in a 
> separated JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3692) Allow REST API to set a user generated message when killing an application

2016-09-20 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506198#comment-15506198
 ] 

Naganarasimha G R commented on YARN-3692:
-

hi [~rohithsharma],
It seems the patch was created erroneously; can you check and recreate it? 
Overall, things look fine; I can get it committed if there are no other issues 
from others. 

> Allow REST API to set a user generated message when killing an application
> --
>
> Key: YARN-3692
> URL: https://issues.apache.org/jira/browse/YARN-3692
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rajat Jain
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-3692.patch, 0002-YARN-3692.patch, 
> 0003-YARN-3692.patch, 0004-YARN-3692.patch, 0005-YARN-3692.1.patch, 
> 0005-YARN-3692.patch, 0006-YARN-3692.patch, 0007-YARN-3692.patch
>
>
> Currently YARN's REST API supports killing an application without setting a 
> diagnostic message. It would be good to provide that support.
> *Use Case*
> Usually this helps in workflow management in a multi-tenant environment when 
> the workflow scheduler (or the hadoop admin) wants to kill a job - and let 
> the user know the reason why the job was killed. Killing the job by setting a 
> diagnostic message is a very good solution for that. Ideally, we can set the 
> diagnostic message on all such interfaces:
> yarn kill -applicationId ... -diagnosticMessage "some message added by 
> admin/workflow"
> REST API { 'state': 'KILLED', 'diagnosticMessage': 'some message added by 
> admin/workflow'}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5599) Post AM launcher artifacts to ATS

2016-09-20 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506160#comment-15506160
 ] 

Naganarasimha G R commented on YARN-5599:
-

IMO it would be better to capture it in any version of ATS, as it will help in 
analysis.

bq. in case of container launch failures YARN already keeps track of 
diagnostics message and also publishes to ATS.
Yes, but IIRC it collects the 4k-byte log message for the launch failure, not the 
launch command that we are trying to capture here, right?

> Post AM launcher artifacts to ATS
> -
>
> Key: YARN-5599
> URL: https://issues.apache.org/jira/browse/YARN-5599
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5599.patch
>
>
> To aid in debugging launch failures, it would be valuable to have an 
> application's launch script and logs posted to ATS.  Because the 
> application's command line may contain private credentials or other secure 
> information, access to the data in ATS should be restricted to the job owner, 
> including the at-rest data.
> Along with making the data available through ATS, the configuration parameter 
> introduced in YARN-5549 and the log line that it guards should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4855) Should check if node exists when replace nodelabels

2016-09-20 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506152#comment-15506152
 ] 

Naganarasimha G R commented on YARN-4855:
-

[~Tao Jie], thanks for patiently redoing most of the rework. 
bq. It looks to me that verify nodes on server side(actually I did with this 
approach in the very earlier patch).
I had forgotten your initial approach, thanks for pointing it out. I think you 
started to change the approach after Wangda's comment, and [~wangda] has already 
reviewed your latest patch, so I think he too is fine with your latest approach.

bq. I think we also need to check getInactiveRMNodes, node which is 
decommissioning should be treated as known node
Yes Wangda, I think it would be good to support this too, and I think it is covered 
in the latest patch.

The latest patch looks good approach-wise; just a few small nits:
# The way we handle "--fail-on-unknown-nodes" in the CLI is not correct. While 
testing I mistyped it as "--fail-on-unkown-nodes" and it silently passed, with 
"--fail-on-unkown-nodes" taken as the host name by the existing code. For handling 
additional options in a command we need to follow an approach similar to the one in 
NodeCLI, where the option is read from the command parser 
({{cliParser.hasOption(NODE_SHOW_DETAILS)}}) instead of from its position, so that a 
wrong command pops up an error (a minimal sketch follows this list).
# If you agree with the above, we need to change "--fail-on-unknown-nodes" to 
"-fail-on-unknown-nodes". I would also like to see whether camel casing reads better 
than this.
# AdminService, line 850: "Replace labels on unknown nodes:" => "Failed to replace 
labels as there are unknown nodes:" would be better, as we fail the update for all 
the mappings.
# For the test failure message, "Should not fail on unknown node when 
fail-on-unknown-nodes is set to false" would read better than "Should not fail on 
unknown node when not verify nodes".
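
A minimal sketch of the parser-based option handling suggested in nit 1, using 
org.apache.commons.cli directly; the option name, camel casing and argument layout 
are illustrative only, not the final RMAdminCLI code.

{code}
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.Options;

// Sketch only: read the flag from the command parser instead of by position,
// so a mistyped option fails fast rather than being taken as a host name.
public class ReplaceLabelsCliSketch {
  public static void main(String[] args) throws Exception {
    Options opts = new Options();
    opts.addOption("failOnUnknownNodes", false,
        "Fail the request if any node is not a known NodeManager");

    // Throws on an unrecognized option such as a mistyped flag.
    CommandLine cli = new DefaultParser().parse(opts, args);
    boolean failOnUnknownNodes = cli.hasOption("failOnUnknownNodes");
    String[] nodeToLabelMappings = cli.getArgs();          // e.g. "node1=label1"

    System.out.println("failOnUnknownNodes=" + failOnUnknownNodes);
    for (String mapping : nodeToLabelMappings) {
      System.out.println("mapping: " + mapping);
    }
  }
}
{code}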

> Should check if node exists when replace nodelabels
> ---
>
> Key: YARN-4855
> URL: https://issues.apache.org/jira/browse/YARN-4855
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: YARN-4855.001.patch, YARN-4855.002.patch, 
> YARN-4855.003.patch, YARN-4855.004.patch, YARN-4855.005.patch, 
> YARN-4855.006.patch, YARN-4855.007.patch, YARN-4855.008.patch, 
> YARN-4855.009.patch, YARN-4855.010.patch, YARN-4855.011.patch
>
>
> Today when we add nodelabels to nodes, it succeeds even if the nodes are not 
> existing NodeManagers in the cluster, without any message.
> It could be like this:
> When we use *yarn rmadmin -replaceLabelsOnNode --fail-on-unknown-nodes 
> "node1=label1"* , it would be denied if the node is unknown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5638) Introduce a collector timestamp to uniquely identify collectors creation order in collector discovery

2016-09-20 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506151#comment-15506151
 ] 

Rohith Sharma K S commented on YARN-5638:
-

Thanks [~gtCarrera9] for updating the patch. From the above discussions and the 
attached patch, I understand that we always take the latest collector address when 
multiple collector addresses have been reported from different NMs for the same 
application. The overall approach of the patch looks good. 

A few comments on the patch:
h6. AppCollectorData.java
# Can the *happensBefore* comparison method name be changed to something more 
meaningful? Maybe we can define a comparator method itself (a rough sketch follows 
this list).
# stamped --> isStamped?
# In the stamped method, I think we should check both {{rmIdentifiers && version}}?
# {{public static final long UNSTAMPED_VERSION_NUMBER = -1;}} public --> 
private? 
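
A rough sketch of the comparator idea from item 1, assuming the fields mentioned above 
(an RM identifier plus a per-RM version); the class and field names are illustrative, 
not the actual AppCollectorData code.

{code}
import java.util.Comparator;

// Sketch only: order collector data first by RM identifier, then by the per-RM
// version number, so the most recently created registration wins.
public class CollectorDataSketch {
  final long rmIdentifier;
  final long version;

  CollectorDataSketch(long rmIdentifier, long version) {
    this.rmIdentifier = rmIdentifier;
    this.version = version;
  }

  static final Comparator<CollectorDataSketch> CREATION_ORDER =
      Comparator.comparingLong((CollectorDataSketch d) -> d.rmIdentifier)
          .thenComparingLong(d -> d.version);

  public static void main(String[] args) {
    CollectorDataSketch older = new CollectorDataSketch(1, 5);
    CollectorDataSketch newer = new CollectorDataSketch(2, 1);
    // Negative result: "older" was created before "newer".
    System.out.println(CREATION_ORDER.compare(older, newer));
  }
}
{code}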

h6. ResourceTrackerService
# line 662, need not have explicit null check for {{previousCollectorData == 
null}}.

h6. RMApp.java
# I think this interface can have only the getter method. Update and remove happen 
within the app, so these 2 APIs need not be exposed in the interface.

h6. yarn_server_common_service_protos.proto
# app_collectors_map --> app_collectors_data ?

h6. NodeStatusUpdaterImpl
# In line 966, the null check existingData == null is not necessarily required.
# In line 962, the application is verified against null before proceeding to add the 
collector address. I think the else part should directly remove the entry from 
{{context.getRegisteringCollectors()}}, otherwise a leak would occur.

h6. Design comment
# On every NM heartbeat the collector addresses are sent to the RM; the RM processes 
them and sends collector data back to the NM in the response, and this happens for 
every heartbeat. Instead of sending them on every NM heartbeat, why can't these data 
be sent through a pull mechanism only when the collector address has changed? Let 
RMAppImpl trigger an event to the running nodes, and these nodes can pull the 
collector address on heartbeat. 

> Introduce a collector timestamp to uniquely identify collectors creation 
> order in collector discovery
> -
>
> Key: YARN-5638
> URL: https://issues.apache.org/jira/browse/YARN-5638
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5638-trunk.v1.patch
>
>
> As discussed in YARN-3359, we need to further identify timeline collectors' 
> creation order to rebuild collector discovery data in the RM. This JIRA 
> proposes to use  to order collectors 
> for each application in the RM. This timestamp can then be used when a 
> standby RM becomes active and rebuild collector discovery data. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506069#comment-15506069
 ] 

Hadoop QA commented on YARN-5609:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 59s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 52s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
52s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 21s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in trunk failed. 
{color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 13s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 35s 
{color} | {color:red} root: The patch generated 38 new + 494 unchanged - 1 
fixed = 532 total (was 495) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
49s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 17s 
{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api generated 
3 new + 123 unchanged - 0 fixed = 126 total (was 123) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 22s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 18s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 19s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 34m 2s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 47s 
{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 113m 34s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 

[jira] [Commented] (YARN-4205) Add a service for monitoring application life time out

2016-09-20 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506046#comment-15506046
 ] 

Rohith Sharma K S commented on YARN-4205:
-

Many checkstyle and javadoc errors are reported at project level. 

> Add a service for monitoring application life time out
> --
>
> Key: YARN-4205
> URL: https://issues.apache.org/jira/browse/YARN-4205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-4205.patch, 0002-YARN-4205.patch, 
> 0003-YARN-4205.patch, 0004-YARN-4205.patch, 0005-YARN-4205.patch, 
> YARN-4205_01.patch, YARN-4205_02.patch, YARN-4205_03.patch
>
>
> This JIRA intend to provide a lifetime monitor service. 
> The service will monitor the applications where the life time is configured. 
> If the application is running beyond the lifetime, it will be killed. 
> The lifetime will be considered from the submit time.
> The thread monitoring interval is configurable.
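
For illustration only, a minimal sketch of the monitoring loop described above; the 
class, method names and kill action are hypothetical, not the RM service being added 
in this JIRA.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: a periodic sweep that kills applications whose configured
// lifetime, measured from submit time, has elapsed.
public class LifetimeMonitorSketch {
  static class AppRecord {
    final long submitTimeMs;
    final long lifetimeMs;
    AppRecord(long submitTimeMs, long lifetimeMs) {
      this.submitTimeMs = submitTimeMs;
      this.lifetimeMs = lifetimeMs;
    }
  }

  private final Map<String, AppRecord> apps = new ConcurrentHashMap<>();

  void register(String appId, long submitTimeMs, long lifetimeMs) {
    apps.put(appId, new AppRecord(submitTimeMs, lifetimeMs));
  }

  // A real service would schedule this at the configurable monitoring interval.
  void sweepOnce(long nowMs) {
    apps.forEach((appId, rec) -> {
      if (nowMs - rec.submitTimeMs > rec.lifetimeMs) {
        System.out.println("Killing " + appId + ": lifetime exceeded");
        apps.remove(appId);
      }
    });
  }

  public static void main(String[] args) {
    LifetimeMonitorSketch monitor = new LifetimeMonitorSketch();
    monitor.register("application_0001", 0, 5000);
    monitor.sweepOnce(6000); // application_0001 exceeds its 5s lifetime
  }
}
{code}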



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-20 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506031#comment-15506031
 ] 

Jian He commented on YARN-5609:
---

- I think this will cause the resources to be re-requested on restart. Even 
though the effect might still be the same, because the resources are already 
localized and the requests will be ignored, I think we can still try to 
avoid sending these unnecessary events in case the resource set is large.
{code}
// This is a Restart...
return new ReInitializationContext(
container.launchContext, container.resourceSet, null, null);
{code}
Also, suppose this is a restart after an upgrade: the old contexts are wiped out 
by this call, and the user won't be able to roll back after the restart.
- Can we add some comments about what ReInitializationContext#newResourceSet 
contains? On upgrade it contains only pendingResources, while on rollback it 
contains a full copy of the original resources.

- While looking at the previous code: is it possible for this call to return null? 
If it is, then the later code will throw an NPE.
{code}
private ReInitializationContext createContextForRollback() {
  if (oldLaunchContext == null) {
return null;
  } else {
{code}
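
To illustrate the NPE concern, a hedged sketch of the kind of guard a caller could 
add; the class names are illustrative, not the actual ContainerImpl code.

{code}
// Sketch only: fail fast when there is nothing to roll back to, instead of
// letting a null re-initialization context propagate and NPE later.
class ReInitializationContextSketch { }

class RollbackCallerSketch {
  private Object oldLaunchContext;   // null when no upgrade has happened yet

  private ReInitializationContextSketch createContextForRollback() {
    // Mirrors the structure quoted above: may return null.
    return oldLaunchContext == null ? null : new ReInitializationContextSketch();
  }

  void rollback() {
    ReInitializationContextSketch ctx = createContextForRollback();
    if (ctx == null) {
      throw new IllegalStateException(
          "Nothing to roll back to: no prior upgrade recorded");
    }
    // ... proceed with re-initialization using ctx ...
  }
}
{code}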

> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch, YARN-5609.002.patch, 
> YARN-5609.003.patch
>
>
> YARN-5620 and YARN-5637 allow an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container as well. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *upgradeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5256) [YARN-3368] Add REST endpoint to support detailed NodeLabel Informations

2016-09-20 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505943#comment-15505943
 ] 

Sunil G commented on YARN-5256:
---

I think whether it is for all partitions or for a single label, this new api can help 
in getting all of them (provided changing get-node-labels is not planned). I 
personally feel we should have only the required REST end points, each serving the 
critical information needed by the user.
I don't see any problem in having the list for all labels, but I wanted to make a 
point of checking whether reusing NodeLabelInfo was considered or not.

This jira was trying to gather as much information as possible for a label, so 
such a detailed info class can be used in general too, and you have mentioned 
that there are use cases for the full list. I think I can update this ticket to 
handle it if that is fine.

> [YARN-3368] Add REST endpoint to support detailed NodeLabel Informations
> 
>
> Key: YARN-5256
> URL: https://issues.apache.org/jira/browse/YARN-5256
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-5256-YARN-3368.1.patch, YARN-5256-YARN-3368.2.patch
>
>
> Add a new REST endpoint to fetch few more detailed information about node 
> labels such as resource, list of nodes etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3140) Improve locks in AbstractCSQueue/LeafQueue/ParentQueue

2016-09-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505890#comment-15505890
 ] 

Hudson commented on YARN-3140:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10464 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10464/])
YARN-3140. Improve locks in AbstractCSQueue/LeafQueue/ParentQueue. (jianhe: rev 
2b66d9ec5bdaec7e6b278926fbb6f222c4e3afaa)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestContainerResizing.java
* (edit) hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/PlanQueue.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ReservationQueue.java


> Improve locks in AbstractCSQueue/LeafQueue/ParentQueue
> --
>
> Key: YARN-3140
> URL: https://issues.apache.org/jira/browse/YARN-3140
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 2.9.0
>
> Attachments: YARN-3140.1.patch, YARN-3140.2.patch, YARN-3140.3.patch, 
> YARN-3140.4.patch
>
>
> Enhance locks in AbstractCSQueue/LeafQueue/ParentQueue, as mentioned in 
> YARN-3091, a possible solution is using read/write lock. Other fine-graind 
> locks for specific purposes / bugs should be addressed in separated tickets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5256) [YARN-3368] Add REST endpoint to support detailed NodeLabel Informations

2016-09-20 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505844#comment-15505844
 ] 

Naganarasimha G R commented on YARN-5256:
-

[~sunilg], I was initially thinking of improving {{get-node-labels}} itself, 
since it would just mean adding the resource info to {{NodeLabelInfo}}, as 
??get-node-labels?? returns a list of NodeLabelInfos. But as you had already started 
this jira, I thought of requesting modifications to it. Our purpose is to 
get the resources of all labels through REST.

bq. Do you see use case to get this stats info for few labels together in 
single request.
As mentioned in an earlier comment, what we require is to get the resources 
available for all partitions, through REST. 

So would it be ok to raise and work on another jira to modify 
{{get-node-labels}} to return NodeLabelsInfo (List), with 
NodeLabelInfo containing the node resource info, while you modify this one in the 
suitable way required for the web UI?

> [YARN-3368] Add REST endpoint to support detailed NodeLabel Informations
> 
>
> Key: YARN-5256
> URL: https://issues.apache.org/jira/browse/YARN-5256
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-5256-YARN-3368.1.patch, YARN-5256-YARN-3368.2.patch
>
>
> Add a new REST endpoint to fetch few more detailed information about node 
> labels such as resource, list of nodes etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-20 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505812#comment-15505812
 ] 

Arun Suresh edited comment on YARN-5609 at 9/20/16 6:58 AM:


Uploading updated patch. Thanks [~jianhe].. 

* Added a new testcase for restart
* Added an authorization check to ensure only applications that started the 
container can reinit/restart/rollback etc.
* Added some metrics to monitor the number of reinitialized and auto-rolledback 
containers.

Will fix the checkstyles (and add the Audit logs) once we are fine with the API 
/ class names etc.


was (Author: asuresh):
Uploading updated patch. Thanks [~jianhe].. 

* Added a new testcase for restart
* Added an authorization check to ensure only applications that started the 
container can reinit/restart/rollback etc.
* Added some metrics to monitor the number of reinitialized and auto-rolledback 
containers.

Will fix the checkstyles (and add the Audit logs) once we are fine with the 
ames an

> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch, YARN-5609.002.patch, 
> YARN-5609.003.patch
>
>
> YARN-5620 and YARN-5637 allow an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container as well. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *upgradeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-20 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5609:
--
Attachment: YARN-5609.003.patch

Uploading updated patch. Thanks [~jianhe].. 

* Added a new testcase for restart
* Added an authorization check to ensure only applications that started the 
container can reinit/restart/rollback etc.
* Added some metrics to monitor the number of reinitialized and auto-rolledback 
containers.

Will fix the checkstyles (and add the Audit logs) once we are fine with the 
ames an

> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch, YARN-5609.002.patch, 
> YARN-5609.003.patch
>
>
> YARN-5620 and YARN-5637 allow an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container as well. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *upgradeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4205) Add a service for monitoring application life time out

2016-09-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15505803#comment-15505803
 ] 

Hadoop QA commented on YARN-4205:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 3s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
59s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 22s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in trunk failed. 
{color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 26s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 44s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 13 
new + 502 unchanged - 3 fixed = 515 total (was 505) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
12s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 15s 
{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 18s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 15s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 35m 5s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 13s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829326/0005-YARN-4205.patch |
| JIRA Issue | YARN-4205 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  xml  |
| uname | Linux 9b565b799e60 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64