[jira] [Commented] (YARN-6923) Metrics for Federation Router

2017-08-15 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16126872#comment-16126872
 ] 

Arun Suresh commented on YARN-6923:
---

Thanks for the patch [~giovanni.fumarola].
The only nit I can find is that instead of initializing the metrics lazily 
when "getMetrics()" is called, you could simply initialize them directly in 
the init() method of the interceptor.
+1 otherwise.
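
A minimal, self-contained sketch of the eager-init suggestion (all names here 
are hypothetical stand-ins, not code from the patch):

{code:java}
class Interceptor {
  private Metrics metrics;

  void init() {
    // Eager initialization: create the metrics once when the interceptor is
    // initialized, instead of on the first getMetrics() call.
    metrics = new Metrics();
  }

  Metrics getMetrics() {
    return metrics;  // no lazy-init branch or synchronization needed here
  }

  static class Metrics { }
}
{code}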

> Metrics for Federation Router
> -
>
> Key: YARN-6923
> URL: https://issues.apache.org/jira/browse/YARN-6923
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-6923.v1.patch, YARN-6923.v2.patch
>
>
> This JIRA proposes the addition of metrics for the Federation StateStore.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6972) Adding RM ClusterId in AppInfo

2017-08-15 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16126873#comment-16126873
 ] 

Arun Suresh commented on YARN-6972:
---

[~giovanni.fumarola], to me, clusterId actually seems more appropriate than 
clusterTimestamp. Also, I would be wary of changing this, since everything from 
appId to appAttemptId to containerId is based on the cluster id. I would 
recommend leaving it as it is until we find a stronger argument in favor of 
changing it.

> Adding RM ClusterId in AppInfo
> --
>
> Key: YARN-6972
> URL: https://issues.apache.org/jira/browse/YARN-6972
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Tanuj Nayak
> Attachments: YARN-6972.001.patch, YARN-6972.002.patch, 
> YARN-6972.003.patch, YARN-6972.004.patch, YARN-6972.005.patch, 
> YARN-6972.006.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6589) Recover all resources when NM restart

2017-08-15 Thread Yang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang reassigned YARN-6589:
---

Assignee: Yang Wang

> Recover all resources when NM restart
> -
>
> Key: YARN-6589
> URL: https://issues.apache.org/jira/browse/YARN-6589
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yang Wang
>Assignee: Yang Wang
>
> When the NM restarts, containers are recovered. However, only the memory and 
> vcores in the capability are recovered. All resource types need to be 
> recovered.
> {code:title=ContainerImpl.java}
>   // resource capability had been updated before NM was down
>   this.resource = 
> Resource.newInstance(recoveredCapability.getMemorySize(),
>   recoveredCapability.getVirtualCores());
> {code}
> It should be like this.
> {code:title=ContainerImpl.java}
>   // resource capability had been updated before NM was down
>   // need to recover all resources, not only memory and vcores
>   this.resource = Resources.clone(recoveredCapability);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4166) Support changing container cpu resource

2017-08-15 Thread Yang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang updated YARN-4166:

Attachment: (was: YARN-4166-branch2.8-001.patch)

> Support changing container cpu resource
> ---
>
> Key: YARN-4166
> URL: https://issues.apache.org/jira/browse/YARN-4166
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, nodemanager, resourcemanager
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Jian He
>Assignee: Yang Wang
> Attachments: YARN-4166.001.patch, YARN-4166.002.patch, 
> YARN-4166.003.patch, YARN-4166.004.patch
>
>
> Memory resizing is now supported; we need to support the same for CPU.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6966) NodeManager metrics may return wrong negative values when NM restart

2017-08-15 Thread Yang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang updated YARN-6966:

Attachment: YARN-6966.003.patch

> NodeManager metrics may return wrong negative values when NM restart
> 
>
> Key: YARN-6966
> URL: https://issues.apache.org/jira/browse/YARN-6966
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yang Wang
>Assignee: Yang Wang
> Attachments: YARN-6966.001.patch, YARN-6966.002.patch, 
> YARN-6966.003.patch
>
>
> Just as in YARN-6212. However, I think it is not a duplicate of YARN-3933.
> The primary cause of the negative values is that the metrics do not recover 
> properly when the NM restarts.
> AllocatedContainers, ContainersLaunched, AllocatedGB, AvailableGB, 
> AllocatedVCores, and AvailableVCores in the metrics also need to be recovered 
> when the NM restarts.
> This should be done in ContainerManagerImpl#recoverContainer.
> The scenario can be reproduced by the following steps:
> # Make sure YarnConfiguration.NM_RECOVERY_ENABLED=true and 
> YarnConfiguration.NM_RECOVERY_SUPERVISED=true in the NM
> # Submit an application and keep it running
> # Restart the NM
> # Stop the application
> # Now you get the negative values
> {code}
> /jmx?qry=Hadoop:service=NodeManager,name=NodeManagerMetrics
> {code}
> {code}
> {
> name: "Hadoop:service=NodeManager,name=NodeManagerMetrics",
> modelerType: "NodeManagerMetrics",
> tag.Context: "yarn",
> tag.Hostname: "hadoop.com",
> ContainersLaunched: 0,
> ContainersCompleted: 0,
> ContainersFailed: 2,
> ContainersKilled: 0,
> ContainersIniting: 0,
> ContainersRunning: 0,
> AllocatedGB: 0,
> AllocatedContainers: -2,
> AvailableGB: 160,
> AllocatedVCores: -11,
> AvailableVCores: 3611,
> ContainerLaunchDurationNumOps: 2,
> ContainerLaunchDurationAvgTime: 6,
> BadLocalDirs: 0,
> BadLogDirs: 0,
> GoodLocalDirsDiskUtilizationPerc: 2,
> GoodLogDirsDiskUtilizationPerc: 2
> }
> {code}
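
For reference, a hedged sketch of the recovery hook the description points at 
(this assumes the NodeManagerMetrics methods on trunk; the exact wiring inside 
ContainerManagerImpl#recoverContainer is an assumption, not the actual patch):

{code:java}
// Sketch only: re-apply the recovered container's resources to the metrics so
// the Allocated*/Available* gauges survive an NM restart. The capability comes
// from the recovered container token; the accessor names are assumptions.
Resource capability = recoveredTokenIdentifier.getResource();
metrics.allocateContainer(capability);  // restores AllocatedGB/VCores etc.
metrics.launchedContainer();            // restores ContainersLaunched
{code}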



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4166) Support changing container cpu resource

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16126897#comment-16126897
 ] 

Hadoop QA commented on YARN-4166:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} YARN-4166 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-4166 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12866330/YARN-4166.004.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16902/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Support changing container cpu resource
> ---
>
> Key: YARN-4166
> URL: https://issues.apache.org/jira/browse/YARN-4166
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, nodemanager, resourcemanager
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Jian He
>Assignee: Yang Wang
> Attachments: YARN-4166.001.patch, YARN-4166.002.patch, 
> YARN-4166.003.patch, YARN-4166.004.patch
>
>
> Memory resizing is now supported; we need to support the same for CPU.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7007) NPE in RM while using YarnClientImpl.getApplications() to get all applications

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16126920#comment-16126920
 ] 

Hadoop QA commented on YARN-7007:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 35s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7007 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881881/YARN-7007.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux dbda4c79e50f 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 645a8f2 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16900/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16900/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16900/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> NPE in RM while using YarnClientImpl.getApplications() to get all applications
> -

[jira] [Commented] (YARN-6966) NodeManager metrics may return wrong negative values when NM restart

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16126937#comment-16126937
 ] 

Hadoop QA commented on YARN-6966:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
46s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 2 new + 90 unchanged - 2 fixed = 92 total (was 92) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 12s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.TestContainerManagerRecovery |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6966 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881884/YARN-6966.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bbcec4e095ba 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 645a8f2 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/16901/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16901/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16901/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16901/testRepo

[jira] [Updated] (YARN-6589) Recover all resources when NM restart

2017-08-15 Thread Yang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang updated YARN-6589:

Attachment: YARN-6589-YARN-3926.001.patch

> Recover all resources when NM restart
> -
>
> Key: YARN-6589
> URL: https://issues.apache.org/jira/browse/YARN-6589
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yang Wang
>Assignee: Yang Wang
> Attachments: YARN-6589-YARN-3926.001.patch
>
>
> When the NM restarts, containers are recovered. However, only the memory and 
> vcores in the capability are recovered. All resource types need to be 
> recovered.
> {code:title=ContainerImpl.java}
>   // resource capability had been updated before NM was down
>   this.resource = 
> Resource.newInstance(recoveredCapability.getMemorySize(),
>   recoveredCapability.getVirtualCores());
> {code}
> It should be like this.
> {code:title=ContainerImpl.java}
>   // resource capability had been updated before NM was down
>   // need to recover all resources, not only memory and vcores
>   this.resource = Resources.clone(recoveredCapability);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6589) Recover all resources when NM restart

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127005#comment-16127005
 ] 

Hadoop QA commented on YARN-6589:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3926 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
56s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} YARN-3926 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
57s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in YARN-3926 has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} YARN-3926 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 34s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 27s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6589 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881894/YARN-6589-YARN-3926.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3b34d7dc347e 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3926 / 8f80907 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/16903/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/16903/artifact/patchprocess/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/16903/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/16903/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-ser

[jira] [Commented] (YARN-7006) [ATSv2 Security] Changes for authentication for CollectorNodemanagerProtocol

2017-08-15 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127031#comment-16127031
 ] 

Varun Saxena commented on YARN-7006:


Thanks [~jianhe] for the review.
bq. why is this change required?
This is required because I noticed that we do not pass the user from 
ContainerImpl, where the CONTAINER_INIT aux services event is generated. Hence 
the change.
We rely on this user (i.e., the AM user) to fill in the owner of the token.

This is not directly related to the title of this JIRA and is a bug in the 
current code. As it was only a one-line change, I included it here instead of 
raising a new JIRA.

> [ATSv2 Security] Changes for authentication for CollectorNodemanagerProtocol
> 
>
> Key: YARN-7006
> URL: https://issues.apache.org/jira/browse/YARN-7006
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-7006-YARN-5355.01.patch, 
> YARN-7006-YARN-5355.02.patch
>
>
> Communication between Collector and NM is via RPC.
> We would do kerberos based authentication for communication between these 2 
> components, as of now.  Added SecurityInfo implementation for it.
> We can think of adding token based access once collector starts outside of NM.
> Also creation of timeline client within NMTimelinePublisher would be done 
> using NM login UGI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-65) Reduce RM app memory footprint once app has completed

2017-08-15 Thread Manikandan R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-65?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R updated YARN-65:
-
Attachment: YARN-65.008.patch

> Reduce RM app memory footprint once app has completed
> -
>
> Key: YARN-65
> URL: https://issues.apache.org/jira/browse/YARN-65
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 0.23.3
>Reporter: Jason Lowe
>Assignee: Manikandan R
> Attachments: YARN-65.001.patch, YARN-65.002.patch, YARN-65.003.patch, 
> YARN-65.004.patch, YARN-65.005.patch, YARN-65.006.patch, YARN-65.007.patch, 
> YARN-65.008.patch
>
>
> The ResourceManager holds onto a configurable number of completed 
> applications (yarn.resourcemanager.max-completed-applications, defaults to 
> 10000), and the memory footprint of these completed applications can be 
> significant.  
> For example, the {{submissionContext}} in RMAppImpl contains references to 
> protocolbuffer objects and other items that probably aren't necessary to keep 
> around once the application has completed.  We could significantly reduce the 
> memory footprint of the RM by releasing objects that are no longer necessary 
> once an application completes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-65) Reduce RM app memory footprint once app has completed

2017-08-15 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-65?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127102#comment-16127102
 ] 

Manikandan R commented on YARN-65:
--

[~rohithsharma] [~bibinchundatt] [~Naganarasimha] Thanks for taking a closer 
look and for the suggestions.

Since the ACLs are stored in {{ApplicationACLsManager}} as part of 
{{RMAppManager#createAndPopulateNewRMApp}}, we are setting {{AMContainerSpec}} 
to null and have attached a patch for the same. Test cases using 
{{MemoryRMStateStore}} were not passing because of an NPE during the recovery 
process. Copy of the stack trace:

java.lang.NullPointerException
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:432)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recoverApplication(RMAppManager.java:347)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recover(RMAppManager.java:537)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.recover(ResourceManager.java:1403)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:767)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1156)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1196)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)

To fix this NPE and pass these test cases, we preserve the {{AMContainerSpec}} 
from {{MemoryRMStateStore}} after app submission to the running RM, and restore 
it into {{MemoryRMStateStore}} before starting the RM again. The attached patch 
contains these test case changes as well.
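
In code form, the idea is roughly the following (the exact hook point is an 
assumption; only the setAMContainerSpec(null) part comes from the discussion):

{code:java}
// Once the application completes, the AM container launch context (tokens,
// local resources, environment, commands) is no longer needed, because the
// ACLs were already registered during RMAppManager#createAndPopulateNewRMApp.
ApplicationSubmissionContext ctx = app.getApplicationSubmissionContext();
ctx.setAMContainerSpec(null);  // release the largest retained references
{code}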

> Reduce RM app memory footprint once app has completed
> -
>
> Key: YARN-65
> URL: https://issues.apache.org/jira/browse/YARN-65
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 0.23.3
>Reporter: Jason Lowe
>Assignee: Manikandan R
> Attachments: YARN-65.001.patch, YARN-65.002.patch, YARN-65.003.patch, 
> YARN-65.004.patch, YARN-65.005.patch, YARN-65.006.patch, YARN-65.007.patch, 
> YARN-65.008.patch
>
>
> The ResourceManager holds onto a configurable number of completed 
> applications (yarn.resourcemanager.max-completed-applications, defaults to 
> 10000), and the memory footprint of these completed applications can be 
> significant.  
> For example, the {{submissionContext}} in RMAppImpl contains references to 
> protocolbuffer objects and other items that probably aren't necessary to keep 
> around once the application has completed.  We could significantly reduce the 
> memory footprint of the RM by releasing objects that are no longer necessary 
> once an application completes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-65) Reduce RM app memory footprint once app has completed

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-65?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127169#comment-16127169
 ] 

Hadoop QA commented on YARN-65:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 26 new + 389 unchanged - 1 fixed = 415 total (was 390) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 50s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-65 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881914/YARN-65.008.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d9e5d6ec8b72 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2e43c28 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16904/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16904/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16904/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16904/console |
| Powered by

[jira] [Commented] (YARN-6995) Improve use of ResourceNotFoundException in resource types code

2017-08-15 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127175#comment-16127175
 ] 

Sunil G commented on YARN-6995:
---

I have some doubts about the exception message.

{noformat}
private static final String MESSAGE = "The resource manager encountered a "
    + "problem that should not occur under normal circumstances. "
    + "Please report this error to the Hadoop community by opening a "
    + "JIRA ticket at http://issues.apache.org/jira and including the "
    + "following information:\n* Resource type requested: %s\n* Resource "
    + "object: %s\n* The stack trace for this exception\nAfter encountering "
    + "this error, the resource manager is in an inconsistent state. It is "
    + "safe for the resource manager to be restarted as the error "
    + "encountered should be transitive. If high availability is enabled, "
    + "failing over to a standby resource manager is also safe.";
{noformat}

I think when an AM or NM tries to contact the RM with a resource capability 
including a new resource type, the RM could simply ignore such resource types 
since they are not supported. So in a few such cases, the above message may 
not be accurate.
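
To make that alternative concrete, a hedged sketch of what "ignore unknown 
resource types" could look like (the filtering hook and variable names are 
assumptions, not code from the patch):

{code:java}
// Instead of failing with ResourceNotFoundException, the RM could drop
// resource types it does not recognize while normalizing a request:
for (ResourceInformation ri : requested.getResources()) {
  if (typesKnownToRM.contains(ri.getName())) {
    normalized.setResourceInformation(ri.getName(), ri);
  }
  // else: silently ignore the unsupported type sent by the AM/NM
}
{code}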

> Improve use of ResourceNotFoundException in resource types code
> ---
>
> Key: YARN-6995
> URL: https://issues.apache.org/jira/browse/YARN-6995
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: YARN-6995.YARN-3926.001.patch, 
> YARN-6995.YARN-3926.002.patch, YARN-6995.YARN-3926.003.patch, 
> YARN-6995.YARN-3926.004.patch
>
>
> Now that all the YarnExceptions have been replaced with 
> ResourceNotFoundExceptions, we should make the ResourceNotFoundExceptions as 
> useful as possible.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7013) merge related work for YARN-3926 branch

2017-08-15 Thread Sunil G (JIRA)
Sunil G created YARN-7013:
-

 Summary: merge related work for YARN-3926 branch
 Key: YARN-7013
 URL: https://issues.apache.org/jira/browse/YARN-7013
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Sunil G
Assignee: Sunil G


To run Jenkins for the whole branch.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-15 Thread Yuqi Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuqi Wang updated YARN-6959:

Target Version/s: 3.0.0-alpha4, 2.7.1, 2.8.0  (was: 2.7.1, 3.0.0-alpha4)

> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959.yarn_nm.log.zip, 
> YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> A previous attempt's ResourceRequests may be recorded into the current 
> attempt's ResourceRequests. These mis-recorded ResourceRequests may confuse 
> the AM Container request and allocation for the current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // The previous precondition check for the attempt id may be outdated here,
> // i.e. currentAttempt may not be the attempt corresponding to the attemptId.
> // For example, the attempt id may correspond to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // The previous attempt's ResourceRequests may be recorded into the current
> // attempt's ResourceRequests.
> currentAttempt.updateResourceRequests(ask) ->
> // The RM may allocate the wrong AM Container for the current attempt,
> // because its ResourceRequests may come from the previous attempt (which can
> // be any ResourceRequests the previous AM asked for), and there is no
> // matching logic between the original AM Container ResourceRequest and
> // the returned amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> After this patch, the RM will record ResourceRequests from different attempts 
> into different SchedulerApplicationAttempt.AppSchedulingInfo objects.
> So, even if the RM still records ResourceRequests from an old attempt at any 
> time, they will be recorded in the old AppSchedulingInfo object, which will 
> not impact the current attempt's resource requests and allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; we 
> should rename it to getCurrentApplicationAttempt and reconsider whether there 
> are any other bugs related to getApplicationAttempt.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-15 Thread Yuqi Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuqi Wang updated YARN-6959:

Fix Version/s: 2.8.0

> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959.yarn_nm.log.zip, 
> YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> A previous attempt's ResourceRequests may be recorded into the current 
> attempt's ResourceRequests. These mis-recorded ResourceRequests may confuse 
> the AM Container request and allocation for the current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // The previous precondition check for the attempt id may be outdated here,
> // i.e. currentAttempt may not be the attempt corresponding to the attemptId.
> // For example, the attempt id may correspond to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // The previous attempt's ResourceRequests may be recorded into the current
> // attempt's ResourceRequests.
> currentAttempt.updateResourceRequests(ask) ->
> // The RM may allocate the wrong AM Container for the current attempt,
> // because its ResourceRequests may come from the previous attempt (which can
> // be any ResourceRequests the previous AM asked for), and there is no
> // matching logic between the original AM Container ResourceRequest and
> // the returned amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> After this patch, the RM will record ResourceRequests from different attempts 
> into different SchedulerApplicationAttempt.AppSchedulingInfo objects.
> So, even if the RM still records ResourceRequests from an old attempt at any 
> time, they will be recorded in the old AppSchedulingInfo object, which will 
> not impact the current attempt's resource requests and allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; we 
> should rename it to getCurrentApplicationAttempt and reconsider whether there 
> are any other bugs related to getApplicationAttempt.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-15 Thread Yuqi Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuqi Wang updated YARN-6959:

Attachment: YARN-6959-branch-2.7.002.patch
YARN-6959-branch-2.8.001.patch

Added an updated patch for 2.7 and a new patch for 2.8.

> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959-branch-2.7.002.patch, 
> YARN-6959-branch-2.8.001.patch, YARN-6959.yarn_nm.log.zip, 
> YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> A previous attempt's ResourceRequests may be recorded into the current 
> attempt's ResourceRequests. These mis-recorded ResourceRequests may confuse 
> the AM Container request and allocation for the current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // The previous precondition check for the attempt id may be outdated here,
> // i.e. currentAttempt may not be the attempt corresponding to the attemptId.
> // For example, the attempt id may correspond to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // The previous attempt's ResourceRequests may be recorded into the current
> // attempt's ResourceRequests.
> currentAttempt.updateResourceRequests(ask) ->
> // The RM may allocate the wrong AM Container for the current attempt,
> // because its ResourceRequests may come from the previous attempt (which can
> // be any ResourceRequests the previous AM asked for), and there is no
> // matching logic between the original AM Container ResourceRequest and
> // the returned amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> After this patch, the RM will record ResourceRequests from different attempts 
> into different SchedulerApplicationAttempt.AppSchedulingInfo objects.
> So, even if the RM still records ResourceRequests from an old attempt at any 
> time, they will be recorded in the old AppSchedulingInfo object, which will 
> not impact the current attempt's resource requests and allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; we 
> should rename it to getCurrentApplicationAttempt and reconsider whether there 
> are any other bugs related to getApplicationAttempt.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6882) AllocationFileLoaderService.reloadAllocations() should use the diamond operator

2017-08-15 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127247#comment-16127247
 ] 

Daniel Templeton commented on YARN-6882:


Not really.  The problem is that Java 7 is not as capable as Java 8 at type 
inference.  Don't worry about it, though.  The patch is still committed to 
trunk, which is what matters.
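
A self-contained illustration of the inference gap (hypothetical stand-in 
types; the real code uses FSQueueType and XML Elements):

{code:java}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class DiamondDemo {
  public static void main(String[] args) {
    Map<String, Set<String>> configuredQueues = new HashMap<>();
    // In argument position, Java 8's target typing infers HashSet<String>;
    // Java 7's weaker inference treats this as HashSet<Object> and fails to
    // compile, which is why the diamond cleanup only applies on trunk.
    configuredQueues.put("root.default", new HashSet<>());
    System.out.println(configuredQueues);
  }
}
{code}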

> AllocationFileLoaderService.reloadAllocations() should use the diamond 
> operator
> ---
>
> Key: YARN-6882
> URL: https://issues.apache.org/jira/browse/YARN-6882
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Larry Lo
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6882.001.patch
>
>
> Here:{code}for (FSQueueType queueType : FSQueueType.values()) {
>   configuredQueues.put(queueType, new HashSet<String>());
> }{code} and here:{code}List<Element> queueElements = new 
> ArrayList<Element>();{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6610) DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate

2017-08-15 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127261#comment-16127261
 ] 

Sunil G commented on YARN-6610:
---

Yes [~templedf]
I am still not completely sure. I suggested those optimizations because I saw 
a small performance dip with this patch after YARN-6892. I am working on some 
concrete numbers and will share them soon.

> DominantResourceCalculator.getResourceAsValue() dominant param is no longer 
> appropriate
> ---
>
> Key: YARN-6610
> URL: https://issues.apache.org/jira/browse/YARN-6610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-6610.001.patch, YARN-6610.YARN-3926.002.patch, 
> YARN-6610.YARN-3926.003.patch, YARN-6610.YARN-3926.004.patch, 
> YARN-6610.YARN-3926.005.patch
>
>
> The {{dominant}} param assumes there are only two resources, i.e. true means 
> to compare the dominant, and false means to compare the subordinate.  Now 
> that there are _n_ resources, this parameter no longer makes sense.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6780) ResourceWeights.toString() cleanup

2017-08-15 Thread weiyuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

weiyuan updated YARN-6780:
--
Attachment: YARN-6780.002.patch

> ResourceWeights.toString() cleanup
> --
>
> Key: YARN-6780
> URL: https://issues.apache.org/jira/browse/YARN-6780
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 2.8.1, 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: weiyuan
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6780.001.patch, YARN-6780.002.patch
>
>
> The {{toString()}} method should have {{@Override}} and should use a 
> {{StringBuilder}} instead of a {{StringBuffer}}.
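
A minimal sketch of the requested cleanup (the weights array and ResourceType 
enum are assumed from the class, not copied from the patch):

{code:java}
@Override
public String toString() {
  // StringBuilder avoids StringBuffer's unnecessary synchronization.
  StringBuilder sb = new StringBuilder("<");
  for (int i = 0; i < weights.length; i++) {
    if (i > 0) {
      sb.append(", ");
    }
    sb.append(ResourceType.values()[i]).append(" weight=").append(weights[i]);
  }
  return sb.append(">").toString();
}
{code}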



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7014) test-container-executor is failing with exit code 139

2017-08-15 Thread Shane Kumpf (JIRA)
Shane Kumpf created YARN-7014:
-

 Summary: test-container-executor is failing with exit code 139
 Key: YARN-7014
 URL: https://issues.apache.org/jira/browse/YARN-7014
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Reporter: Shane Kumpf


test-container-executor is failing in trunk.

{code}

[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-beta1-SNAPSHOT:cmake-test 
(test-container-executor) @ hadoop-yarn-server-nodemanager ---
[INFO] ---
[INFO]  C M A K E B U I L D E R   T E S T
[INFO] ---
[INFO] test-container-executor: running 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/target/usr/local/bin/test-container-executor
[INFO] with extra environment variables {}
[INFO] STATUS: ERROR CODE 134 after 3714 millisecond(s).
[INFO] ---
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 13:47 min
[INFO] Finished at: 2017-08-12T12:58:55+00:00
[INFO] Final Memory: 19M/296M
[INFO] 
[WARNING] The requested profile "parallel-tests" could not be activated because 
it does not exist.
[WARNING] The requested profile "yarn-ui" could not be activated because it 
does not exist.
[ERROR] Failed to execute goal 
org.apache.hadoop:hadoop-maven-plugins:3.0.0-beta1-SNAPSHOT:cmake-test 
(test-container-executor) on project hadoop-yarn-server-nodemanager: Test 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/target/usr/local/bin/test-container-executor
 returned ERROR CODE 134 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6930) Admins should be able to explicitly enable specific LinuxContainerRuntime in the NodeManager

2017-08-15 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127274#comment-16127274
 ] 

Shane Kumpf commented on YARN-6930:
---

The unit test failure does not appear to be related to this patch. It is 
failing on trunk today. I've opened YARN-7014 to investigate. I believe the 
latest patch has addressed the comments. /cc [~jianhe]
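
For illustration, the admin-facing knob could look like this in yarn-site.xml 
(the property name is an assumption, not necessarily what the committed patch 
uses):

{code:xml}
<property>
  <!-- Hypothetical: admins whitelist only the runtimes they trust;
       everything else is rejected by the NM. -->
  <name>yarn.nodemanager.runtime.linux.allowed-runtimes</name>
  <value>default,docker</value>
</property>
{code}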

> Admins should be able to explicitly enable specific LinuxContainerRuntime in 
> the NodeManager
> 
>
> Key: YARN-6930
> URL: https://issues.apache.org/jira/browse/YARN-6930
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Shane Kumpf
> Attachments: YARN-6930.001.patch, YARN-6930.002.patch, 
> YARN-6930.003.patch
>
>
> Today, in the Java land, all LinuxContainerRuntimes are always enabled when 
> using the LinuxContainerExecutor, and the user can simply invoke whichever 
> one he/she wants - default, docker, java-sandbox.
> We should have a way for admins to explicitly enable only the specific 
> runtimes that they choose for the cluster. And by default, we should have 
> everything other than the default one disabled.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2402) NM restart: Container recovery for Windows

2017-08-15 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127292#comment-16127292
 ] 

Jason Lowe commented on YARN-2402:
--

So it does sound like we need this patch before declaring container recovery on 
Windows completely working, correct?  Unfortunately I cannot get Jenkins to 
comment on this since the parent JIRA has been Closed.  We can file a separate 
JIRA for this so we can get a proper Jenkins run on the patch.  We can then 
mark this as a duplicate of the new one.

> NM restart: Container recovery for Windows
> --
>
> Key: YARN-2402
> URL: https://issues.apache.org/jira/browse/YARN-2402
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Jason Lowe
>Assignee: Yuqi Wang
> Attachments: YARN-2402-v1.patch, YARN-2402-v2.patch
>
>
> We should add container recovery for NM restart on Windows.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2402) NM restart: Container recovery for Windows

2017-08-15 Thread Yuqi Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127294#comment-16127294
 ] 

Yuqi Wang commented on YARN-2402:
-

Thanks, I will do it later.

> NM restart: Container recovery for Windows
> --
>
> Key: YARN-2402
> URL: https://issues.apache.org/jira/browse/YARN-2402
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Jason Lowe
>Assignee: Yuqi Wang
> Attachments: YARN-2402-v1.patch, YARN-2402-v2.patch
>
>
> We should add container recovery for NM restart on Windows.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6623) Add support to turn off launching privileged containers in the container-executor

2017-08-15 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-6623:

Attachment: YARN-6623.002.patch

Uploaded a new patch to fix the test failures and findbugs warnings.

> Add support to turn off launching privileged containers in the 
> container-executor
> -
>
> Key: YARN-6623
> URL: https://issues.apache.org/jira/browse/YARN-6623
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>Priority: Blocker
> Attachments: YARN-6623.001.patch, YARN-6623.002.patch
>
>
> Currently, launching privileged containers is controlled by the NM. We should 
> add a flag to the container-executor.cfg allowing admins to disable launching 
> privileged containers at the container-executor level.
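For illustration, the flag could take a shape like this in container-executor.cfg 
(a hedged sketch; the section and key names are assumptions, not the final 
syntax):

{code}
# Assumed container-executor.cfg fragment: disable privileged docker
# containers regardless of what the NM-side configuration permits.
[docker]
  docker.privileged-containers.enabled=false
{code}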



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7014) container-executor has off-by-one error which can corrupt the heap

2017-08-15 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reassigned YARN-7014:


 Assignee: Jason Lowe
Affects Version/s: 3.0.0-beta1
 Priority: Critical  (was: Major)
  Summary: container-executor has off-by-one error which can 
corrupt the heap  (was: test-container-executor is failing with exit code 139)
 Target Version/s: 3.0.0-beta1

The test is failing because there's a memory corruption error in the 
container-executor code, and that triggers a crash in malloc.  At least one 
problem is in string_utils.c which has this classic off-by-one error:
{code}
  char* input_cpy = malloc(strlen(input));
  strcpy(input_cpy, input);
{code}

strlen does not account for the terminating NUL character, so we end up 
allocating one byte less than we need and then promptly scribble past the end 
of the allocation.

Looks like this was introduced by YARN-6726.
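A minimal sketch of the bug and the conventional fix (illustrative function 
names, not the actual patch):

{code}
#include <stdlib.h>
#include <string.h>

/* Buggy: strlen() excludes the terminating NUL, so strcpy() writes one
 * byte past the end of the heap allocation. */
char* copy_buggy(const char* input) {
  char* input_cpy = malloc(strlen(input));
  strcpy(input_cpy, input);
  return input_cpy;
}

/* Fixed: allocate strlen(input) + 1 bytes, or simply call strdup(),
 * which performs the correctly sized malloc and the copy in one step. */
char* copy_fixed(const char* input) {
  return strdup(input);
}
{code}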

> container-executor has off-by-one error which can corrupt the heap
> --
>
> Key: YARN-7014
> URL: https://issues.apache.org/jira/browse/YARN-7014
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-beta1
>Reporter: Shane Kumpf
>Assignee: Jason Lowe
>Priority: Critical
>
> test-container-executor is failing in trunk.
> {code}
> [INFO] 
> [INFO] --- hadoop-maven-plugins:3.0.0-beta1-SNAPSHOT:cmake-test 
> (test-container-executor) @ hadoop-yarn-server-nodemanager ---
> [INFO] ---
> [INFO]  C M A K E B U I L D E R T E S T
> [INFO] ---
> [INFO] test-container-executor: running 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/target/usr/local/bin/test-container-executor
> [INFO] with extra environment variables {}
> [INFO] STATUS: ERROR CODE 134 after 3714 millisecond(s).
> [INFO] ---
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 13:47 min
> [INFO] Finished at: 2017-08-12T12:58:55+00:00
> [INFO] Final Memory: 19M/296M
> [INFO] 
> 
> [WARNING] The requested profile "parallel-tests" could not be activated 
> because it does not exist.
> [WARNING] The requested profile "yarn-ui" could not be activated because it 
> does not exist.
> [ERROR] Failed to execute goal 
> org.apache.hadoop:hadoop-maven-plugins:3.0.0-beta1-SNAPSHOT:cmake-test 
> (test-container-executor) on project hadoop-yarn-server-nodemanager: Test 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/target/usr/local/bin/test-container-executor
>  returned ERROR CODE 134 -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7014) container-executor has off-by-one error which can corrupt the heap

2017-08-15 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-7014:
-
Attachment: YARN-7014.001.patch

Attaching a patch that uses strdup instead of separate strlen/malloc/strcpy 
calls.  Arguably the existing unit test can be leveraged here since it was 
failing before.

> container-executor has off-by-one error which can corrupt the heap
> --
>
> Key: YARN-7014
> URL: https://issues.apache.org/jira/browse/YARN-7014
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-beta1
>Reporter: Shane Kumpf
>Assignee: Jason Lowe
>Priority: Critical
> Attachments: YARN-7014.001.patch
>
>
> test-container-executor is failing in trunk.
> {code}
> [INFO] 
> [INFO] --- hadoop-maven-plugins:3.0.0-beta1-SNAPSHOT:cmake-test 
> (test-container-executor) @ hadoop-yarn-server-nodemanager ---
> [INFO] ---
> [INFO]  C M A K E B U I L D E R T E S T
> [INFO] ---
> [INFO] test-container-executor: running 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/target/usr/local/bin/test-container-executor
> [INFO] with extra environment variables {}
> [INFO] STATUS: ERROR CODE 134 after 3714 millisecond(s).
> [INFO] ---
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 13:47 min
> [INFO] Finished at: 2017-08-12T12:58:55+00:00
> [INFO] Final Memory: 19M/296M
> [INFO] 
> 
> [WARNING] The requested profile "parallel-tests" could not be activated 
> because it does not exist.
> [WARNING] The requested profile "yarn-ui" could not be activated because it 
> does not exist.
> [ERROR] Failed to execute goal 
> org.apache.hadoop:hadoop-maven-plugins:3.0.0-beta1-SNAPSHOT:cmake-test 
> (test-container-executor) on project hadoop-yarn-server-nodemanager: Test 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/target/usr/local/bin/test-container-executor
>  returned ERROR CODE 134 -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6882) AllocationFileLoaderService.reloadAllocations() should use the diamond operator

2017-08-15 Thread Larry Lo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127368#comment-16127368
 ] 

Larry Lo commented on YARN-6882:


Cool, really appreciate it.

> AllocationFileLoaderService.reloadAllocations() should use the diamond 
> operator
> ---
>
> Key: YARN-6882
> URL: https://issues.apache.org/jira/browse/YARN-6882
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Larry Lo
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6882.001.patch
>
>
> Here:{code}for (FSQueueType queueType : FSQueueType.values()) {
>   configuredQueues.put(queueType, new HashSet<String>());
> }{code} and here:{code}List<Element> queueElements = new 
> ArrayList<Element>();{code}
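With the diamond operator (Java 7+), the compiler infers the type arguments 
from the declaration, so the cleanup amounts to (type parameters as in the 
snippet above):

{code}
// Before: the type arguments are repeated at the construction site.
configuredQueues.put(queueType, new HashSet<String>());
List<Element> queueElements = new ArrayList<Element>();

// After: the diamond operator lets the compiler infer them.
configuredQueues.put(queueType, new HashSet<>());
List<Element> queueElements = new ArrayList<>();
{code}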



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6780) ResourceWeights.toString() cleanup

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127370#comment-16127370
 ] 

Hadoop QA commented on YARN-6780:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 48m 34s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestOpportunisticContainerAllocatorAMService 
|
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6780 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881936/YARN-6780.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b9a6404665fe 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2e43c28 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16906/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16906/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcem

[jira] [Commented] (YARN-6780) ResourceWeights.toString() cleanup

2017-08-15 Thread weiyuan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127381#comment-16127381
 ] 

weiyuan commented on YARN-6780:
---

[~templedf], the patch is updated again, thanks for your reminder.

> ResourceWeights.toString() cleanup
> --
>
> Key: YARN-6780
> URL: https://issues.apache.org/jira/browse/YARN-6780
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 2.8.1, 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: weiyuan
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6780.001.patch, YARN-6780.002.patch
>
>
> The {{toString()}} method should have {{@Override}} and should use a 
> {{StringBuilder}} instead of a {{StringBuffer}}.
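A minimal sketch of the requested cleanup (the {{weights}} field is 
illustrative, not the exact class internals):

{code}
@Override  // the compiler now verifies this really overrides Object.toString()
public String toString() {
  // A method-local buffer needs no synchronization, so the unsynchronized
  // StringBuilder is preferable to StringBuffer here.
  StringBuilder sb = new StringBuilder("<");
  for (int i = 0; i < weights.length; i++) {
    if (i > 0) {
      sb.append(", ");
    }
    sb.append(weights[i]);
  }
  return sb.append(">").toString();
}
{code}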



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7014) container-executor has off-by-one error which can corrupt the heap

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127387#comment-16127387
 ] 

Hadoop QA commented on YARN-7014:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
34s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7014 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881943/YARN-7014.001.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 561e6f7cca19 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2e43c28 |
| Default Java | 1.8.0_144 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16909/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16909/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> container-executor has off-by-one error which can corrupt the heap
> --
>
> Key: YARN-7014
> URL: https://issues.apache.org/jira/browse/YARN-7014
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-beta1
>Reporter: Shane Kumpf
>Assignee: Jason Lowe
>Priority: Critical
> Attachments: YARN-7014.001.patch
>
>
> test-container-executor is failing in trunk.
> {code}
> [INFO] 
> [INFO] --- hadoop-maven-plugins:3.0.0-beta1-SNAPSHOT:cmake-test 
> (test-container-executor) @ hadoop-yarn-server-nodemanager ---
> [INFO] ---
> [INFO]  C M A K E B U I L D E R T E S T
> [INFO] ---
> [INFO] test-container-executor: running 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/target/usr/local/bin/test-container-executor
> [INFO] with extra environment variables {}
> [INFO] STATUS: ERROR CODE 

[jira] [Updated] (YARN-7015) Handle Container ExecType update (Promotion/Demotion) in cgroups resource handlers

2017-08-15 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7015:
--
Description: 
YARN-5085 adds support for change of container execution type 
(Promotion/Demotion).
Modifications to the ContainerManagementProtocol, ContainerManager and 
ContainerScheduler to handle this change are now in trunk. Opening this JIRA to 
track changes (if any) required in the cgroups resourcehandlers to accommodate 
this in the context of YARN-1011. (cc [~kasha], [~kkaranasos], [~haibochen], 
[~miklos.szeg...@cloudera.com])

  was:
YARN-5085 allows support for change of container execution type 
(Promotion/Demotion).
Modifications to the ContainerManagementProtocol, ContainerManager and 
ContainerScheduler to handle this change are now in trunk. Opening this JIRA to 
track changes (if any) required in the cgroups resourcehandlers to accommodate 
this in the context of YARN-1011. (cc [~kasha], [~kkaranasos], [~haibochen], 
[~miklos.szeg...@cloudera.com])


> Handle Container ExecType update (Promotion/Demotion) in cgroups resource 
> handlers
> --
>
> Key: YARN-7015
> URL: https://issues.apache.org/jira/browse/YARN-7015
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>
> YARN-5085 adds support for change of container execution type 
> (Promotion/Demotion).
> Modifications to the ContainerManagementProtocol, ContainerManager and 
> ContainerScheduler to handle this change are now in trunk. Opening this JIRA 
> to track changes (if any) required in the cgroups resourcehandlers to 
> accommodate this in the context of YARN-1011. (cc [~kasha], [~kkaranasos], 
> [~haibochen], [~miklos.szeg...@cloudera.com])



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7015) Handle Container ExecType update (Promotion/Demotion) in cgroups resource handlers

2017-08-15 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-7015:
-

 Summary: Handle Container ExecType update (Promotion/Demotion) in 
cgroups resource handlers
 Key: YARN-7015
 URL: https://issues.apache.org/jira/browse/YARN-7015
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun Suresh


YARN-5085 allows support for change of container execution type 
(Promotion/Demotion).
Modifications to the ContainerManagementProtocol, ContainerManager and 
ContainerScheduler to handle this change are now in trunk. Opening this JIRA to 
track changes (if any) required in the cgroups resourcehandlers to accommodate 
this in the context of YARN-1011. (cc [~kasha], [~kkaranasos], [~haibochen], 
[~miklos.szeg...@cloudera.com])



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7015) Handle Container ExecType update (Promotion/Demotion) in cgroups resource handlers

2017-08-15 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127418#comment-16127418
 ] 

Arun Suresh commented on YARN-7015:
---

cc: [~kasha], [~haibochen], [~kkaranasos], [~miklos.szeg...@cloudera.com]

> Handle Container ExecType update (Promotion/Demotion) in cgroups resource 
> handlers
> --
>
> Key: YARN-7015
> URL: https://issues.apache.org/jira/browse/YARN-7015
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>
> YARN-5085 adds support for change of container execution type 
> (Promotion/Demotion).
> Modifications to the ContainerManagementProtocol, ContainerManager and 
> ContainerScheduler to handle this change are now in trunk. Opening this JIRA 
> to track changes (if any) required in the cgroups resourcehandlers to 
> accommodate this in the context of YARN-1011.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7015) Handle Container ExecType update (Promotion/Demotion) in cgroups resource handlers

2017-08-15 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7015:
--
Description: 
YARN-5085 adds support for change of container execution type 
(Promotion/Demotion).
Modifications to the ContainerManagementProtocol, ContainerManager and 
ContainerScheduler to handle this change are now in trunk. Opening this JIRA to 
track changes (if any) required in the cgroups resourcehandlers to accommodate 
this in the context of YARN-1011.

  was:
YARN-5085 adds support for change of container execution type 
(Promotion/Demotion).
Modifications to the ContainerManagementProtocol, ContainerManager and 
ContainerScheduler to handle this change are now in trunk. Opening this JIRA to 
track changes (if any) required in the cgroups resourcehandlers to accommodate 
this in the context of YARN-1011. (cc [~kasha], [~kkaranasos], [~haibochen], 
[~miklos.szeg...@cloudera.com])


> Handle Container ExecType update (Promotion/Demotion) in cgroups resource 
> handlers
> --
>
> Key: YARN-7015
> URL: https://issues.apache.org/jira/browse/YARN-7015
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>
> YARN-5085 adds support for change of container execution type 
> (Promotion/Demotion).
> Modifications to the ContainerManagementProtocol, ContainerManager and 
> ContainerScheduler to handle this change are now in trunk. Opening this JIRA 
> to track changes (if any) required in the cgroups resourcehandlers to 
> accommodate this in the context of YARN-1011.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6781) ResourceUtils.initializeResourcesMap() takes an unnecessary Map parameter

2017-08-15 Thread Yu-Tang Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127433#comment-16127433
 ] 

Yu-Tang Lin commented on YARN-6781:
---

Hi Daniel, thanks for submitting this issue. I would like to take this task!

> ResourceUtils.initializeResourcesMap() takes an unnecessary Map parameter
> -
>
> Key: YARN-6781
> URL: https://issues.apache.org/jira/browse/YARN-6781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Priority: Minor
>  Labels: newbie
>
> The {{resourceInformationMap}} parameter is always passed in as a new {{Map}} 
> object, and it's never referenced again after the call.  The parameter can be 
> eliminated.  Instead the {{Map}} can be created inside the 
> {{initializeResourcesMap()}} method.
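A hedged sketch of the simplification (types reduced to {{String}} for 
brevity; the real method deals with {{ResourceInformation}} and reads from the 
configuration):

{code}
import java.util.HashMap;
import java.util.Map;

class ResourceUtilsSketch {
  // Before: the caller must allocate a map that is only ever written
  // inside the method and never read again afterwards.
  static void initializeResourcesMap(Map<String, String> resourceInformationMap) {
    resourceInformationMap.put("memory-mb", "mandatory");
    resourceInformationMap.put("vcores", "mandatory");
  }

  // After: the method creates the map itself, shrinking its signature.
  static void initializeResourcesMap() {
    Map<String, String> resourceInformationMap = new HashMap<>();
    resourceInformationMap.put("memory-mb", "mandatory");
    resourceInformationMap.put("vcores", "mandatory");
  }
}
{code}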



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127436#comment-16127436
 ] 

Hadoop QA commented on YARN-6959:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.7 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
56s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 32s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 7 new + 1298 unchanged - 3 fixed = 1305 total (was 1301) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 57s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}129m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_144 Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.7.0_131 Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:67e87c9 |
| JIRA Issue | YARN-6959 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/att

[jira] [Assigned] (YARN-6781) ResourceUtils.initializeResourcesMap() takes an unnecessary Map parameter

2017-08-15 Thread Yu-Tang Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu-Tang Lin reassigned YARN-6781:
-

Assignee: Yu-Tang Lin

> ResourceUtils.initializeResourcesMap() takes an unnecessary Map parameter
> -
>
> Key: YARN-6781
> URL: https://issues.apache.org/jira/browse/YARN-6781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Yu-Tang Lin
>Priority: Minor
>  Labels: newbie
>
> The {{resourceInformationMap}} parameter is always passed in as a new {{Map}} 
> object, and it's never referenced again after the call.  The parameter can be 
> eliminated.  Instead the {{Map}} can be created inside the 
> {{initializeResourcesMap()}} method.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7015) Handle Container ExecType update (Promotion/Demotion) in cgroups resource handlers

2017-08-15 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen reassigned YARN-7015:


Assignee: Miklos Szegedi

> Handle Container ExecType update (Promotion/Demotion) in cgroups resource 
> handlers
> --
>
> Key: YARN-7015
> URL: https://issues.apache.org/jira/browse/YARN-7015
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Miklos Szegedi
>
> YARN-5085 adds support for change of container execution type 
> (Promotion/Demotion).
> Modifications to the ContainerManagementProtocol, ContainerManager and 
> ContainerScheduler to handle this change are now in trunk. Opening this JIRA 
> to track changes (if any) required in the cgroups resourcehandlers to 
> accommodate this in the context of YARN-1011.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5146) [YARN-3368] Supports Fair Scheduler in new YARN UI

2017-08-15 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127457#comment-16127457
 ] 

Sunil G commented on YARN-5146:
---

+1 committing shortly.

> [YARN-3368] Supports Fair Scheduler in new YARN UI
> --
>
> Key: YARN-5146
> URL: https://issues.apache.org/jira/browse/YARN-5146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Abdullah Yousufi
> Attachments: YARN-5146.001.patch, YARN-5146.002.patch, 
> YARN-5146.003.patch, YARN-5146.004.patch, YARN-5146.005.patch
>
>
> The current implementation in branch YARN-3368 only supports the capacity 
> scheduler; we want to make it support the fair scheduler.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5146) Support for Fair Scheduler in new YARN UI

2017-08-15 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5146:
--
Summary: Support for Fair Scheduler in new YARN UI  (was: [YARN-3368] 
Supports Fair Scheduler in new YARN UI)

> Support for Fair Scheduler in new YARN UI
> -
>
> Key: YARN-5146
> URL: https://issues.apache.org/jira/browse/YARN-5146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Abdullah Yousufi
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-5146.001.patch, YARN-5146.002.patch, 
> YARN-5146.003.patch, YARN-5146.004.patch, YARN-5146.005.patch
>
>
> The current implementation in branch YARN-3368 only supports the capacity 
> scheduler; we want to make it support the fair scheduler.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6623) Add support to turn off launching privileged containers in the container-executor

2017-08-15 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127494#comment-16127494
 ] 

Eric Badger commented on YARN-6623:
---

I'll review the patch sometime this week.

> Add support to turn off launching privileged containers in the 
> container-executor
> -
>
> Key: YARN-6623
> URL: https://issues.apache.org/jira/browse/YARN-6623
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>Priority: Blocker
> Attachments: YARN-6623.001.patch, YARN-6623.002.patch
>
>
> Currently, launching privileged containers is controlled by the NM. We should 
> add a flag to the container-executor.cfg allowing admins to disable launching 
> privileged containers at the container-executor level.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6988) container-executor fails for docker when command length > 4096 B

2017-08-15 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127493#comment-16127493
 ] 

Eric Badger commented on YARN-6988:
---

[~vvasudev], if we just increase EXECUTOR_PATH_MAX, then we would waste a lot of 
memory whenever the system's maximum argument length is lower. Also, YARN-6623 is a 
pretty large patch and might take some time to go through review. I would 
prefer to put up a patch here, if that's alright. I can put up the patch today. 

> container-executor fails for docker when command length > 4096 B
> 
>
> Key: YARN-6988
> URL: https://issues.apache.org/jira/browse/YARN-6988
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Eric Badger
>Assignee: Eric Badger
>
> {{run_docker}} and {{launch_docker_container_as_user}} allocate their command 
> arrays using EXECUTOR_PATH_MAX, which is hardcoded to 4096 in 
> configuration.h. Because of this, the full docker command can only be 4096 
> characters. If it is longer, it will be truncated and the command will fail 
> with a parsing error. Because of the bind-mounting of volumes, the arguments 
> to the docker command can quickly get large. For example, I passed the 4096 
> limit with an 11-disk node. 
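One shape such a patch could take (a hedged sketch with a hypothetical helper 
name; it assumes dynamic sizing is acceptable in this code path): size the 
buffer from the actual command instead of a fixed EXECUTOR_PATH_MAX array.

{code}
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: builds "docker <args>" in a heap buffer sized to
 * the real command length, so long commands are neither truncated nor
 * padded out to a fixed 4096-byte maximum. Caller frees the result. */
char* build_docker_command(const char* docker_binary, const char* args) {
  size_t len = strlen(docker_binary) + 1 + strlen(args) + 1; /* space + NUL */
  char* cmd = malloc(len);
  if (cmd == NULL) {
    return NULL;
  }
  snprintf(cmd, len, "%s %s", docker_binary, args);
  return cmd;
}
{code}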



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7014) container-executor has off-by-one error which can corrupt the heap

2017-08-15 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127522#comment-16127522
 ] 

Shane Kumpf commented on YARN-7014:
---

Thanks for identifying the issue and for the patch, [~jlowe]. I have tested the 
patch locally, and it resolves the test failure. +1 from me.

{code}
[INFO]
[INFO] --- hadoop-maven-plugins:3.0.0-beta1-SNAPSHOT:cmake-test 
(test-container-executor) @ hadoop-yarn-server-nodemanager ---
[INFO] ---
[INFO]  C M A K E B U I L D E R T E S T
[INFO] ---
[INFO] test-container-executor: running 
/hadoop_staging/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/target/usr/local/bin/test-container-executor
[INFO] with extra environment variables {}
[INFO] STATUS: SUCCESS after 5309 millisecond(s).
[INFO] ---
{code}

> container-executor has off-by-one error which can corrupt the heap
> --
>
> Key: YARN-7014
> URL: https://issues.apache.org/jira/browse/YARN-7014
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-beta1
>Reporter: Shane Kumpf
>Assignee: Jason Lowe
>Priority: Critical
> Attachments: YARN-7014.001.patch
>
>
> test-container-executor is failing in trunk.
> {code}
> [INFO] 
> [INFO] --- hadoop-maven-plugins:3.0.0-beta1-SNAPSHOT:cmake-test 
> (test-container-executor) @ hadoop-yarn-server-nodemanager ---
> [INFO] ---
> [INFO]  C M A K E B U I L D E R T E S T
> [INFO] ---
> [INFO] test-container-executor: running 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/target/usr/local/bin/test-container-executor
> [INFO] with extra environment variables {}
> [INFO] STATUS: ERROR CODE 134 after 3714 millisecond(s).
> [INFO] ---
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 13:47 min
> [INFO] Finished at: 2017-08-12T12:58:55+00:00
> [INFO] Final Memory: 19M/296M
> [INFO] 
> 
> [WARNING] The requested profile "parallel-tests" could not be activated 
> because it does not exist.
> [WARNING] The requested profile "yarn-ui" could not be activated because it 
> does not exist.
> [ERROR] Failed to execute goal 
> org.apache.hadoop:hadoop-maven-plugins:3.0.0-beta1-SNAPSHOT:cmake-test 
> (test-container-executor) on project hadoop-yarn-server-nodemanager: Test 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/target/usr/local/bin/test-container-executor
>  returned ERROR CODE 134 -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5146) Support for Fair Scheduler in new YARN UI

2017-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127526#comment-16127526
 ] 

Hudson commented on YARN-5146:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12190 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12190/])
YARN-5146. Support for Fair Scheduler in new YARN UI. Contributed by (sunilg: 
rev dadb0c2225adef5cb0126610733c285b51f4f43e)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/utils/color-utils.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/capacity-queue-conf-table.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/fair-queue-info.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/fifo-queue-conf-table.hbs
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/queues-selector.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-queue/capacity-queue.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-queue/fair-queue.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-queue/yarn-queue.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-queue/fair-queue.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-queue/capacity-queue.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/capacity-queue-info.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-queue/fifo-queue.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-queue.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/cluster-overview.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/capacity-queue.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-queue/capacity-queue.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/fair-queue.hbs
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/index.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-queue.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/fair-queue-conf-table.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/fifo-queue.hbs
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-queue.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/fifo-queue-info.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue/info.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-queue/fifo-queue.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-queue/fair-queue.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/queue-navigator.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-queue/yarn-queue.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queue.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-queue/fifo-queue.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/queue-configuration-table.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queues.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-queue/yarn-queue.js


> Support for Fair Scheduler in new YARN UI
> -
>
> Key: YARN-5146
> URL: https://issues.apache.org/jira/browse/YARN-5146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Abdullah Yousufi
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-5146.001.patch, YARN-5146.002.patch, 
> YARN-5146.003.patch, YARN-5146.004.patch, YARN-5146.005.patch
>
>
> The current implementation in branch YARN-3368 only supports the capacity 
> scheduler; we want to make it support the fair scheduler.

[jira] [Commented] (YARN-7014) container-executor has off-by-one error which can corrupt the heap

2017-08-15 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127528#comment-16127528
 ] 

Eric Badger commented on YARN-7014:
---

I'm +1 (non-binding) on the change. Because the case is so specific and is 
already covered by the failing validate_container_id test, I'm on board with 
using the existing test. However, it might not be a bad idea to add some 
general tests to test-container-executor to make sure that we aren't leaking 
memory or susceptible to simple overflows in the invocation.

> container-executor has off-by-one error which can corrupt the heap
> --
>
> Key: YARN-7014
> URL: https://issues.apache.org/jira/browse/YARN-7014
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-beta1
>Reporter: Shane Kumpf
>Assignee: Jason Lowe
>Priority: Critical
> Attachments: YARN-7014.001.patch
>
>
> test-container-executor is failing in trunk.
> {code}
> [INFO] 
> [INFO] --- hadoop-maven-plugins:3.0.0-beta1-SNAPSHOT:cmake-test 
> (test-container-executor) @ hadoop-yarn-server-nodemanager ---
> [INFO] ---
> [INFO]  C M A K E B U I L D E R T E S T
> [INFO] ---
> [INFO] test-container-executor: running 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/target/usr/local/bin/test-container-executor
> [INFO] with extra environment variables {}
> [INFO] STATUS: ERROR CODE 134 after 3714 millisecond(s).
> [INFO] ---
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 13:47 min
> [INFO] Finished at: 2017-08-12T12:58:55+00:00
> [INFO] Final Memory: 19M/296M
> [INFO] 
> 
> [WARNING] The requested profile "parallel-tests" could not be activated 
> because it does not exist.
> [WARNING] The requested profile "yarn-ui" could not be activated because it 
> does not exist.
> [ERROR] Failed to execute goal 
> org.apache.hadoop:hadoop-maven-plugins:3.0.0-beta1-SNAPSHOT:cmake-test 
> (test-container-executor) on project hadoop-yarn-server-nodemanager: Test 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/target/usr/local/bin/test-container-executor
>  returned ERROR CODE 134 -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6610) DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate

2017-08-15 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127533#comment-16127533
 ] 

Daniel Templeton commented on YARN-6610:


I would expect a small performance dip, because the code is now doing the right 
thing rather than the inexpensive thing.  Without this patch the code is fast 
but wrong.  I'll keep thinking about a way to do this faster, but I'm not sure 
there's much more that can be squeezed out.  Maybe a cheaper sort like radix 
sort would buy us a little.  The real answer is to go through the capacity 
scheduler and evaluate whether all of the uses of {{ResourceCalculator}} 
(mostly via {{Resources}}) are correct.  I just did that for the fair 
scheduler, and there were many places where it was being misused.
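For context, the n-resource comparison is roughly of this shape (a hedged 
sketch, not the patch itself; it assumes both arrays cover the same resource 
types): compute each resource's share of the cluster, sort, and compare from 
the most dominant share downwards, which is where the per-comparison sort cost 
above comes from.

{code}
import java.util.Arrays;

final class DominantShareSketch {
  // Compare two usages by sorted resource shares: the dominant (largest)
  // shares first, then the next largest, and so on down the list.
  static int compare(double[] sharesA, double[] sharesB) {
    double[] a = sharesA.clone();
    double[] b = sharesB.clone();
    Arrays.sort(a);  // ascending
    Arrays.sort(b);
    for (int i = a.length - 1; i >= 0; i--) {  // walk from dominant down
      int c = Double.compare(a[i], b[i]);
      if (c != 0) {
        return c;
      }
    }
    return 0;
  }
}
{code}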

> DominantResourceCalculator.getResourceAsValue() dominant param is no longer 
> appropriate
> ---
>
> Key: YARN-6610
> URL: https://issues.apache.org/jira/browse/YARN-6610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-6610.001.patch, YARN-6610.YARN-3926.002.patch, 
> YARN-6610.YARN-3926.003.patch, YARN-6610.YARN-3926.004.patch, 
> YARN-6610.YARN-3926.005.patch
>
>
> The {{dominant}} param assumes there are only two resources, i.e. true means 
> to compare the dominant, and false means to compare the subordinate.  Now 
> that there are _n_ resources, this parameter no longer makes sense.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6623) Add support to turn off launching privileged containers in the container-executor

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127536#comment-16127536
 ] 

Hadoop QA commented on YARN-6623:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 13 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
31s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
49s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 56s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 6 new + 25 unchanged - 2 fixed = 31 total (was 27) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 12s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 30s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | TEST-cetest |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | TEST-cetest |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6623 |
| JIRA Patch UR

[jira] [Updated] (YARN-6781) ResourceUtils.initializeResourcesMap() takes an unnecessary Map parameter

2017-08-15 Thread Yu-Tang Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu-Tang Lin updated YARN-6781:
--
Attachment: YARN-6781.001.patch

> ResourceUtils.initializeResourcesMap() takes an unnecessary Map parameter
> -
>
> Key: YARN-6781
> URL: https://issues.apache.org/jira/browse/YARN-6781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Yu-Tang Lin
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6781.001.patch
>
>
> The {{resourceInformationMap}} parameter is always passed in as a new {{Map}} 
> object, and it's never referenced again after the call.  The parameter can be 
> eliminated.  Instead the {{Map}} can be created inside the 
> {{initializeResourcesMap()}} method.
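
A sketch of the shape of the proposed change (simplified signatures; the Hadoop {{Configuration}} and {{ResourceInformation}} types are assumed, and the population logic is elided):

{code:java}
import java.util.HashMap;
import java.util.Map;
// org.apache.hadoop.conf.Configuration and
// org.apache.hadoop.yarn.api.records.ResourceInformation are assumed.

class ResourceUtilsSketch {
  // Before: every caller allocates a Map that the method consumes and that
  // is never read again after the call:
  //   initializeResourcesMap(conf, new HashMap<String, ResourceInformation>());

  // After: the method creates and owns the Map it populates.
  static Map<String, ResourceInformation> initializeResourcesMap(Configuration conf) {
    Map<String, ResourceInformation> resourceInformationMap = new HashMap<>();
    // ... populate the map from the configuration, exactly as before ...
    return resourceInformationMap;
  }
}
{code}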



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6781) ResourceUtils.initializeResourcesMap() takes an unnecessary Map parameter

2017-08-15 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127563#comment-16127563
 ] 

Daniel Templeton commented on YARN-6781:


LGTM.  Let's see what Jenkins says.

> ResourceUtils.initializeResourcesMap() takes an unnecessary Map parameter
> -
>
> Key: YARN-6781
> URL: https://issues.apache.org/jira/browse/YARN-6781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Yu-Tang Lin
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6781.001.patch
>
>
> The {{resourceInformationMap}} parameter is always passed in as a new {{Map}} 
> object, and it's never referenced again after the call.  The parameter can be 
> eliminated.  Instead the {{Map}} can be created inside the 
> {{initializeResourcesMap()}} method.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6781) ResourceUtils.initializeResourcesMap() takes an unnecessary Map parameter

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127568#comment-16127568
 ] 

Hadoop QA commented on YARN-6781:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 21s{color} 
| {color:red} YARN-6781 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6781 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881972/YARN-6781.001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16910/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ResourceUtils.initializeResourcesMap() takes an unnecessary Map parameter
> -
>
> Key: YARN-6781
> URL: https://issues.apache.org/jira/browse/YARN-6781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Yu-Tang Lin
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6781.001.patch
>
>
> The {{resourceInformationMap}} parameter is always passed in as a new {{Map}} 
> object, and it's never referenced again after the call.  The parameter can be 
> eliminated.  Instead the {{Map}} can be created inside the 
> {{initializeResourcesMap()}} method.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6781) ResourceUtils.initializeResourcesMap() takes an unnecessary Map parameter

2017-08-15 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127591#comment-16127591
 ] 

Daniel Templeton commented on YARN-6781:


Looks like you need to rebase.

> ResourceUtils.initializeResourcesMap() takes an unnecessary Map parameter
> -
>
> Key: YARN-6781
> URL: https://issues.apache.org/jira/browse/YARN-6781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Yu-Tang Lin
>Priority: Minor
>  Labels: newbie
> Fix For: YARN-3926
>
> Attachments: YARN-6781.001.patch
>
>
> The {{resourceInformationMap}} parameter is always passed in as a new {{Map}} 
> object, and it's never referenced again after the call.  The parameter can be 
> eliminated.  Instead the {{Map}} can be created inside the 
> {{initializeResourcesMap()}} method.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-65) Reduce RM app memory footprint once app has completed

2017-08-15 Thread Manikandan R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-65?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R updated YARN-65:
-
Attachment: YARN-65.009.patch

> Reduce RM app memory footprint once app has completed
> -
>
> Key: YARN-65
> URL: https://issues.apache.org/jira/browse/YARN-65
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 0.23.3
>Reporter: Jason Lowe
>Assignee: Manikandan R
> Attachments: YARN-65.001.patch, YARN-65.002.patch, YARN-65.003.patch, 
> YARN-65.004.patch, YARN-65.005.patch, YARN-65.006.patch, YARN-65.007.patch, 
> YARN-65.008.patch, YARN-65.009.patch
>
>
> The ResourceManager holds onto a configurable number of completed 
> applications (yarn.resourcemanager.max-completed-applications, defaults to 10000), 
> and the memory footprint of these completed applications can be significant.  
> For example, the {{submissionContext}} in RMAppImpl contains references to 
> protocolbuffer objects and other items that probably aren't necessary to keep 
> around once the application has completed.  We could significantly reduce the 
> memory footprint of the RM by releasing objects that are no longer necessary 
> once an application completes.
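
As an illustration of the direction (method placement and names are assumptions, not the actual patch; {{setAMContainerSpec}} and {{setLogAggregationContext}} are existing {{ApplicationSubmissionContext}} setters), the app could drop heavyweight references once it reaches a terminal state:

{code:java}
// Illustrative sketch: invoked from a terminal-state transition in RMAppImpl
// once the data is no longer needed.
private void clearUnusedFields() {
  if (submissionContext != null) {
    // The AM container launch context (local resources, environment, tokens)
    // is protobuf-backed and irrelevant once the application has completed.
    submissionContext.setAMContainerSpec(null);
    submissionContext.setLogAggregationContext(null);
  }
}
{code}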



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-65) Reduce RM app memory footprint once app has completed

2017-08-15 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-65?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127608#comment-16127608
 ] 

Manikandan R commented on YARN-65:
--

Fixed the checkstyle issues. The test case failure is not related to this patch.

> Reduce RM app memory footprint once app has completed
> -
>
> Key: YARN-65
> URL: https://issues.apache.org/jira/browse/YARN-65
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 0.23.3
>Reporter: Jason Lowe
>Assignee: Manikandan R
> Attachments: YARN-65.001.patch, YARN-65.002.patch, YARN-65.003.patch, 
> YARN-65.004.patch, YARN-65.005.patch, YARN-65.006.patch, YARN-65.007.patch, 
> YARN-65.008.patch, YARN-65.009.patch
>
>
> The ResourceManager holds onto a configurable number of completed 
> applications (yarn.resourcemanager.max-completed-applications, defaults to 10000), 
> and the memory footprint of these completed applications can be significant.  
> For example, the {{submissionContext}} in RMAppImpl contains references to 
> protocolbuffer objects and other items that probably aren't necessary to keep 
> around once the application has completed.  We could significantly reduce the 
> memory footprint of the RM by releasing objects that are no longer necessary 
> once an application completes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6781) ResourceUtils.initializeResourcesMap() takes an unnecessary Map parameter

2017-08-15 Thread Yu-Tang Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu-Tang Lin updated YARN-6781:
--
Attachment: YARN-6781-YARN-3926.002.patch

> ResourceUtils.initializeResourcesMap() takes an unnecessary Map parameter
> -
>
> Key: YARN-6781
> URL: https://issues.apache.org/jira/browse/YARN-6781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Yu-Tang Lin
>Priority: Minor
>  Labels: newbie
> Fix For: YARN-3926
>
> Attachments: YARN-6781.001.patch, YARN-6781-YARN-3926.002.patch
>
>
> The {{resourceInformationMap}} parameter is always passed in as a new {{Map}} 
> object, and it's never referenced again after the call.  The parameter can be 
> eliminated.  Instead the {{Map}} can be created inside the 
> {{initializeResourcesMap()}} method.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6781) ResourceUtils.initializeResourcesMap() takes an unnecessary Map parameter

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127660#comment-16127660
 ] 

Hadoop QA commented on YARN-6781:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-3926 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
20s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} YARN-3926 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6781 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881978/YARN-6781-YARN-3926.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7190fea4ff0e 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3926 / 8f80907 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16912/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16912/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ResourceUtils.initializeResourcesMap() takes an unnecessary Map parameter
> -
>
> Key: YARN-6781
> URL: https://issues.apache.org/jira/browse/YARN-6781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Yu-Tang Lin
>  

[jira] [Updated] (YARN-6964) Fair scheduler misuses Resources operations

2017-08-15 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-6964:
---
Attachment: YARN-6964.006.patch

Per offline discussion, 1, 2.1, 3, and 4 are fine.  Attaching a patch that 
addresses 2.2 better.

> Fair scheduler misuses Resources operations
> ---
>
> Key: YARN-6964
> URL: https://issues.apache.org/jira/browse/YARN-6964
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: YARN-6964.001.patch, YARN-6964.002.patch, 
> YARN-6964.003.patch, YARN-6964.004.patch, YARN-6964.005.patch, 
> YARN-6964.006.patch
>
>
> There are several places where YARN uses the {{Resources}} class to do 
> comparisons of {{Resource}} instances incorrectly.  This patch corrects those 
> mistakes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6739) Crash NM at start time if oversubscription is on but LinuxContainerExcutor or cgroup is off

2017-08-15 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127664#comment-16127664
 ] 

Haibo Chen commented on YARN-6739:
--

As a follow-up, turn on the CPU and memory cgroups and strict resource usage mode if 
oversubscription is enabled.

> Crash NM at start time if oversubscription is on but LinuxContainerExcutor or 
> cgroup is off
> ---
>
> Key: YARN-6739
> URL: https://issues.apache.org/jira/browse/YARN-6739
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6996) Change javax.cache library implementation from JSR107 to Apache Geronimo

2017-08-15 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127671#comment-16127671
 ] 

Ray Chiang commented on YARN-6996:
--

Thanks [~subru] and [~busbey]!

> Change javax.cache library implementation from JSR107 to Apache Geronimo
> 
>
> Key: YARN-6996
> URL: https://issues.apache.org/jira/browse/YARN-6996
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Blocker
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6996.001.patch
>
>
> With YARN Federation, we added YARN-3672, which adds the following dependency: 
> {noformat}
> <dependency>
>   <groupId>javax.cache</groupId>
>   <artifactId>cache-api</artifactId>
> </dependency>
> {noformat}
> This third-party library has some murky license history, as documented in 
> this [really long comment 
> thread|https://github.com/jsr107/jsr107spec/issues/333].  The summary of the 
> thread is that "the library is officially APL (take our word for it), but 
> there hasn't been a subsequent release with the license file change".
> LEGAL-325 has been filed to discuss the validity of this license for Apache.
> Before we get to final Hadoop 3 release, I'm wondering if anyone else has 
> concerns about using this library.  Just from looking at the various javax 
> Maven artifacts in our pom.xml files, I see a lot of other javax.* library 
> entries (although we may not ship the .jars if they're part of the Java 
> runtime).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-15 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127673#comment-16127673
 ] 

Vrushali C commented on YARN-6820:
--

Okay, so the branch-2 branch name is "YARN-5355_branch2". 

Here are the latest commits
https://github.com/apache/hadoop/commits/YARN-5355

https://github.com/apache/hadoop/commits/YARN-5355_branch2

The commits are not in the same order but pretty much the same across both.


> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Fix For: YARN-5355
>
> Attachments: YARN-6820-YARN-5355.0001.patch, 
> YARN-6820-YARN-5355.002.patch, YARN-6820-YARN-5355.003.patch, 
> YARN-6820-YARN-5355.004.patch, YARN-6820-YARN-5355.005.patch, 
> YARN-6820-YARN-5355_branch_2.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data; 
> no other user can read any data. The restriction can be turned off so that all 
> users can read all data.
> Could be stored in a "domain" table in HBase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, a domain offers a namespace for the Timeline server, allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSv1 primarily stores owner info, read and write ACL 
> information, and created/modified timestamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.
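
A minimal sketch of the simple whitelist approach described above (class and flag names are assumptions, not the actual patch):

{code:java}
import java.util.Set;

/** Hypothetical cluster-level read allow-list for ATSv2 data. */
public class TimelineReaderWhitelist {
  private final boolean aclCheckEnabled;  // e.g. driven by a yarn-site flag
  private final Set<String> allowedReaders;

  public TimelineReaderWhitelist(boolean aclCheckEnabled,
      Set<String> allowedReaders) {
    this.aclCheckEnabled = aclCheckEnabled;
    this.allowedReaders = allowedReaders;
  }

  /** When the check is disabled, every user may read all data. */
  public boolean canRead(String user) {
    return !aclCheckEnabled || allowedReaders.contains(user);
  }
}
{code}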



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6959) RM may allocate wrong AM Container for new attempt

2017-08-15 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127696#comment-16127696
 ] 

Jian He commented on YARN-6959:
---

[~yqwang], TestFairScheduler is failing with the patch; can you take a look?

> RM may allocate wrong AM Container for new attempt
> --
>
> Key: YARN-6959
> URL: https://issues.apache.org/jira/browse/YARN-6959
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, fairscheduler, scheduler
>Affects Versions: 2.7.1
>Reporter: Yuqi Wang
>Assignee: Yuqi Wang
>  Labels: patch
> Fix For: 2.8.0, 2.7.1, 3.0.0-alpha4
>
> Attachments: YARN-6959.001.patch, YARN-6959.002.patch, 
> YARN-6959.003.patch, YARN-6959.004.patch, YARN-6959.005.patch, 
> YARN-6959-branch-2.7.001.patch, YARN-6959-branch-2.7.002.patch, 
> YARN-6959-branch-2.8.001.patch, YARN-6959.yarn_nm.log.zip, 
> YARN-6959.yarn_rm.log.zip
>
>
> *Issue Summary:*
> A previous attempt's ResourceRequests may be recorded into the current attempt's 
> ResourceRequests. These mis-recorded ResourceRequests may confuse the AM 
> container request and allocation for the current attempt.
> *Issue Pipeline:*
> {code:java}
> // Executing precondition check for the incoming attempt id.
> ApplicationMasterService.allocate() ->
> scheduler.allocate(attemptId, ask, ...) ->
> // The previous precondition check for the attempt id may be outdated here, 
> // i.e. the currentAttempt may not be the attempt corresponding to the 
> // attemptId; e.g. the attempt id may correspond to the previous attempt.
> currentAttempt = scheduler.getApplicationAttempt(attemptId) ->
> // The previous attempt's ResourceRequests may be recorded into the current 
> // attempt's ResourceRequests.
> currentAttempt.updateResourceRequests(ask) ->
> // RM may allocate the wrong AM Container for the current attempt, because its 
> // ResourceRequests may come from the previous attempt, which can be any 
> // ResourceRequests the previous AM asked for, and there is no matching logic 
> // between the original AM Container ResourceRequest and the returned 
> // amContainerAllocation below.
> AMContainerAllocatedTransition.transition(...) ->
> amContainerAllocation = scheduler.allocate(currentAttemptId, ...)
> {code}
> *Patch Correctness:*
> Because after this patch, the RM will record ResourceRequests from different 
> attempts into different objects of 
> SchedulerApplicationAttempt.AppSchedulingInfo.
> So even if the RM still records ResourceRequests from an old attempt at any time, 
> those ResourceRequests will be recorded in the old AppSchedulingInfo object, which 
> will not impact the current attempt's resource requests and allocation.
> *Concerns:*
> The getApplicationAttempt function in AbstractYarnScheduler is confusing; we 
> should rename it to getCurrentApplicationAttempt, and reconsider whether 
> there are any other bugs related to getApplicationAttempt.
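
A minimal sketch of the kind of guard the summary implies (illustrative only; {{LOG}} and {{EMPTY_ALLOCATION}} are assumed scheduler fields, and this is not the committed patch): reject an allocate() call whose attempt id no longer matches the scheduler's current attempt, instead of silently recording its asks.

{code:java}
SchedulerApplicationAttempt attempt = getApplicationAttempt(appAttemptId);
if (attempt == null
    || !attempt.getApplicationAttemptId().equals(appAttemptId)) {
  // The id belongs to a previous (or removed) attempt; do not record its
  // ResourceRequests into the current attempt's AppSchedulingInfo.
  LOG.error("Calling allocate on previous or removed application attempt "
      + appAttemptId);
  return EMPTY_ALLOCATION;
}
{code}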



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-65) Reduce RM app memory footprint once app has completed

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-65?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127700#comment-16127700
 ] 

Hadoop QA commented on YARN-65:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 388 unchanged - 2 fixed = 389 total (was 390) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 41s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-65 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881977/YARN-65.009.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ebb291734826 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dadb0c2 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16911/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16911/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16911/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16911/console |
| Powered by 

[jira] [Updated] (YARN-6589) Recover all resources when NM restart

2017-08-15 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6589:
-
Issue Type: Sub-task  (was: Bug)
Parent: YARN-3926

> Recover all resources when NM restart
> -
>
> Key: YARN-6589
> URL: https://issues.apache.org/jira/browse/YARN-6589
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yang Wang
>Assignee: Yang Wang
> Attachments: YARN-6589-YARN-3926.001.patch
>
>
> When NM restart, containers will be recovered. However, only memory and 
> vcores in capability have been recovered. All resources need to be recovered.
> {code:title=ContainerImpl.java}
>   // resource capability had been updated before NM was down
>   this.resource = 
> Resource.newInstance(recoveredCapability.getMemorySize(),
>   recoveredCapability.getVirtualCores());
> {code}
> It should be like this.
> {code:title=ContainerImpl.java}
>   // resource capability had been updated before NM was down
> >   // need to recover all resources, not only <memory, vcores>
>   this.resource = Resources.clone(recoveredCapability);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7016) Consider using ZKCuratorManager in CuratorService

2017-08-15 Thread Íñigo Goiri (JIRA)
Íñigo Goiri created YARN-7016:
-

 Summary: Consider using ZKCuratorManager in CuratorService
 Key: YARN-7016
 URL: https://issues.apache.org/jira/browse/YARN-7016
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Íñigo Goiri


{{CuratorService}} uses the Curator framework, which has been wrapped in 
{{ZKCuratorManager}}. It would be good to make it use the common framework.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6589) Recover all resources when NM restart

2017-08-15 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127712#comment-16127712
 ] 

Wangda Tan commented on YARN-6589:
--

Thanks [~fly_in_gis] for reporting and working on this JIRA. 

Converted to a YARN-3926 subtask; I think this is a blocker for the YARN-3926 merge. 

Yang, could you update the patch to address the Jenkins-reported issues? The patch 
looks good.

> Recover all resources when NM restart
> -
>
> Key: YARN-6589
> URL: https://issues.apache.org/jira/browse/YARN-6589
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yang Wang
>Assignee: Yang Wang
> Attachments: YARN-6589-YARN-3926.001.patch
>
>
> When NM restart, containers will be recovered. However, only memory and 
> vcores in capability have been recovered. All resources need to be recovered.
> {code:title=ContainerImpl.java}
>   // resource capability had been updated before NM was down
>   this.resource = 
> Resource.newInstance(recoveredCapability.getMemorySize(),
>   recoveredCapability.getVirtualCores());
> {code}
> It should be like this.
> {code:title=ContainerImpl.java}
>   // resource capability had been updated before NM was down
> >   // need to recover all resources, not only <memory, vcores>
>   this.resource = Resources.clone(recoveredCapability);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6589) Recover all resources when NM restart

2017-08-15 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6589:
-
Priority: Blocker  (was: Major)

> Recover all resources when NM restart
> -
>
> Key: YARN-6589
> URL: https://issues.apache.org/jira/browse/YARN-6589
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Blocker
> Attachments: YARN-6589-YARN-3926.001.patch
>
>
> When NM restart, containers will be recovered. However, only memory and 
> vcores in capability have been recovered. All resources need to be recovered.
> {code:title=ContainerImpl.java}
>   // resource capability had been updated before NM was down
>   this.resource = 
> Resource.newInstance(recoveredCapability.getMemorySize(),
>   recoveredCapability.getVirtualCores());
> {code}
> It should be like this.
> {code:title=ContainerImpl.java}
>   // resource capability had been updated before NM was down
> >   // need to recover all resources, not only <memory, vcores>
>   this.resource = Resources.clone(recoveredCapability);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7017) Enable preemption for a single queue.

2017-08-15 Thread sarun (JIRA)
sarun created YARN-7017:
---

 Summary: Enable preemption for a single queue.
 Key: YARN-7017
 URL: https://issues.apache.org/jira/browse/YARN-7017
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: yarn
Reporter: sarun


PROBLEM
How to enable preemption on a single queue in a cluster?
DESCRIPTION
As of today the only way to enable preemption at a queue level is:
* Enable cluster level preemption
* Disable preemption on the queues where not required using 
yarn.scheduler.capacity.<queue-path>.disable_preemption to true

Can we have some sort of a parameter like 
*_yarn.scheduler.capacity.<queue-path>.enable_preemption_*
which would just enable preemption per queue instead of going the other way 
round which is more time consuming and error prone.





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7017) Enable preemption for a single queue.

2017-08-15 Thread sarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sarun updated YARN-7017:

Priority: Critical  (was: Major)

> Enable preemption for a single queue.
> -
>
> Key: YARN-7017
> URL: https://issues.apache.org/jira/browse/YARN-7017
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: sarun
>Priority: Critical
>
> PROBLEM
> How to enable preemption on a single queue in a cluster?
> DESCRIPTION
> As of today the only way to enable preemption at a queue level is:
> * Enable cluster level preemption
> * Disable preemption on the queues where not required using 
> yarn.scheduler.capacity.<queue-path>.disable_preemption to true
> Can we have some sort of a parameter like 
> *_yarn.scheduler.capacity.<queue-path>.enable_preemption_*
> which would just enable preemption per queue instead of going the other way 
> round which is more time consuming and error prone.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7017) Enable preemption for a single queue.

2017-08-15 Thread sarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sarun updated YARN-7017:

Priority: Major  (was: Critical)

> Enable preemption for a single queue.
> -
>
> Key: YARN-7017
> URL: https://issues.apache.org/jira/browse/YARN-7017
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: sarun
>
> PROBLEM
> How to enable preemption on a single queue in a cluster?
> DESCRIPTION
> As of today the only way to enable preemption at a queue level is:
> * Enable cluster level preemption
> * Disable preemption on the queues where not required using 
> yarn.scheduler.capacity.<queue-path>.disable_preemption to true
> Can we have some sort of a parameter like 
> *_yarn.scheduler.capacity.<queue-path>.enable_preemption_*
> which would just enable preemption per queue instead of going the other way 
> round which is more time consuming and error prone.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7017) Enable preemption for a single queue.

2017-08-15 Thread sarun (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sarun updated YARN-7017:

Description: 
*PROBLEM*
How to enable preemption on a single queue in a cluster?
*DESCRIPTION*
As of today the only way to enable preemption at a queue level is:
* Enable cluster level preemption
* Disable preemption on the queues where not required using 
yarn.scheduler.capacity.<queue-path>.disable_preemption to true

Can we have some sort of a parameter like 
*_yarn.scheduler.capacity.<queue-path>.enable_preemption_*
which would just enable preemption per queue instead of going the other way 
round which is more time consuming and error prone.



  was:
PROBLEM
How to enable preemption on a single queue in a cluster?
DESCRIPTION
As of today the only way to enable preemption at a queue level is:
* Enable cluster level preemption
* Disable preemption on the queues where not required using 
yarn.scheduler.capacity.<queue-path>.disable_preemption to true

Can we have some sort of a parameter like 
*_yarn.scheduler.capacity.<queue-path>.enable_preemption_*
which would just enable preemption per queue instead of going the other way 
round which is more time consuming and error prone.




> Enable preemption for a single queue.
> -
>
> Key: YARN-7017
> URL: https://issues.apache.org/jira/browse/YARN-7017
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn
>Reporter: sarun
>
> *PROBLEM*
> How to enable preemption on a single queue in a cluster?
> *DESCRIPTION*
> As of today the only way to enable preemption at a queue level is:
> * Enable cluster level preemption
> * Disable preemption on the queues where not required using 
> yarn.scheduler.capacity.<queue-path>.disable_preemption to true
> Can we have some sort of a parameter like 
> *_yarn.scheduler.capacity.<queue-path>.enable_preemption_*
> which would just enable preemption per queue instead of going the other way 
> round which is more time consuming and error prone.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7017) Enable preemption for a single queue.

2017-08-15 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7017:
---
Component/s: capacity scheduler

> Enable preemption for a single queue.
> -
>
> Key: YARN-7017
> URL: https://issues.apache.org/jira/browse/YARN-7017
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler, yarn
>Reporter: sarun
>
> *PROBLEM*
> How to enable preemption on a single queue in a cluster?
> *DESCRIPTION*
> As of today the only way to enable preemption at a queue level is:
> * Enable cluster level preemption
> * Disable preemption on the queues where not required using 
> yarn.scheduler.capacity.<queue-path>.disable_preemption to true
> Can we have some sort of a parameter like 
> *_yarn.scheduler.capacity.<queue-path>.enable_preemption_*
> which would just enable preemption per queue instead of going the other way 
> round which is more time consuming and error prone.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7017) Enable preemption for a single queue.

2017-08-15 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127774#comment-16127774
 ] 

Eric Payne commented on YARN-7017:
--

Thanks [~saruntek] for raising this issue.

{quote}
As of today the only way to enable preemption at a queue level is:
- Enable cluster level preemption
- Disable preemption on the queues where not required using 
yarn.scheduler.capacity.<queue-path>.disable_preemption to true

{quote}
Actually, you don't need to {{disable_preemption}} for every queue. The 
{{disable_preemption}} property is inherited, so you can:
- Enable cluster level preemption
- Disable preemption on the root queue using 
{{yarn.scheduler.capacity.root.disable_preemption = true}}
- Enable preemption on the queues where required using 
{{yarn.scheduler.capacity.<queue-path>.disable_preemption = true}}
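
For concreteness, a minimal sketch of that inherited-override setup (using the corrected {{= false}} form from the comment edit below; {{root.preemptable}} is a hypothetical queue path):

{noformat}
yarn.resourcemanager.scheduler.monitor.enable = true
yarn.scheduler.capacity.root.disable_preemption = true
yarn.scheduler.capacity.root.preemptable.disable_preemption = false
{noformat}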

> Enable preemption for a single queue.
> -
>
> Key: YARN-7017
> URL: https://issues.apache.org/jira/browse/YARN-7017
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler, yarn
>Reporter: sarun
>
> *PROBLEM*
> How to enable preemption on a single queue in a cluster?
> *DESCRIPTION*
> As of today the only way to enable preemption at a queue level is:
> * Enable cluster level preemption
> * Disable preemption on the queues where not required using 
> yarn.scheduler.capacity.<queue-path>.disable_preemption to true
> Can we have some sort of a parameter like 
> *_yarn.scheduler.capacity.<queue-path>.enable_preemption_*
> which would just enable preemption per queue instead of going the other way 
> round which is more time consuming and error prone.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7017) Enable preemption for a single queue.

2017-08-15 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127774#comment-16127774
 ] 

Eric Payne edited comment on YARN-7017 at 8/15/17 7:56 PM:
---

Thanks [~saruntek] for raising this issue.

{quote}
As of today the only way to enable preemption at a queue level is:
- Enable cluster level preemption
- Disable preemption on the queues where not required using 
yarn.scheduler.capacity.<queue-path>.disable_preemption to true

{quote}
Actually, you don't need to {{disable_preemption}} for every queue. The 
{{disable_preemption}} property is inherited, so you can:
- Enable cluster level preemption
- Disable preemption on the root queue using 
{{yarn.scheduler.capacity.root.disable_preemption = true}}
- Enable preemption on the queues where required using 
{{yarn.scheduler.capacity.<queue-path>.disable_preemption = false}}


was (Author: eepayne):
Thanks [~saruntek] for raising this issue.

{quote}
As of today the only way to enable preemption at a queue level is:
- Enable cluster level preemption
- Disable preemption on the queues where not required using 
yarn.scheduler.capacity.<queue-path>.disable_preemption to true

{quote}
Actually, you don't need to {{disable_preemption}} for every queue. The 
{{disable_preemption}} property is inherited, so you can:
- Enable cluster level preemption
- Disable preemption on the root queue using 
{{yarn.scheduler.capacity.root.disable_preemption = true}}
- Enable preemption on the queues where required using 
{{yarn.scheduler.capacity.<queue-path>.disable_preemption = true}}

> Enable preemption for a single queue.
> -
>
> Key: YARN-7017
> URL: https://issues.apache.org/jira/browse/YARN-7017
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler, yarn
>Reporter: sarun
>
> *PROBLEM*
> How to enable preemption on a single queue in a cluster?
> *DESCRIPTION*
> As of today the only way to enable preemption at a queue level is:
> * Enable cluster level preemption
> * Disable preemption on the queues where not required using 
> yarn.scheduler.capacity.<queue-path>.disable_preemption to true
> Can we have some sort of a parameter like 
> *_yarn.scheduler.capacity.<queue-path>.enable_preemption_*
> which would just enable preemption per queue instead of going the other way 
> round which is more time consuming and error prone.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6964) Fair scheduler misuses Resources operations

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127790#comment-16127790
 ] 

Hadoop QA commented on YARN-6964:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
37s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 28s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6964 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881985/YARN-6964.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 98ec729bf67e 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dadb0c2 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16913/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16913/testReport/ |
| modules | C

[jira] [Commented] (YARN-7014) container-executor has off-by-one error which can corrupt the heap

2017-08-15 Thread Nathan Roberts (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127824#comment-16127824
 ] 

Nathan Roberts commented on YARN-7014:
--

+1 on the patch. I will commit shortly.
Thanks [~jlowe] for the patch and  [~ebadger] and [~shaneku...@gmail.com] for 
the reviews!

> container-executor has off-by-one error which can corrupt the heap
> --
>
> Key: YARN-7014
> URL: https://issues.apache.org/jira/browse/YARN-7014
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-beta1
>Reporter: Shane Kumpf
>Assignee: Jason Lowe
>Priority: Critical
> Attachments: YARN-7014.001.patch
>
>
> test-container-executor is failing in trunk.
> {code}
> [INFO] 
> [INFO] --- hadoop-maven-plugins:3.0.0-beta1-SNAPSHOT:cmake-test 
> (test-container-executor) @ hadoop-yarn-server-nodemanager ---
> [INFO] ---
> [INFO]  C M A K E B U I L D E RT E S T
> [INFO] ---
> [INFO] test-container-executor: running 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/target/usr/local/bin/test-container-executor
> [INFO] with extra environment variables {}
> [INFO] STATUS: ERROR CODE 134 after 3714 millisecond(s).
> [INFO] ---
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 13:47 min
> [INFO] Finished at: 2017-08-12T12:58:55+00:00
> [INFO] Final Memory: 19M/296M
> [INFO] 
> 
> [WARNING] The requested profile "parallel-tests" could not be activated 
> because it does not exist.
> [WARNING] The requested profile "yarn-ui" could not be activated because it 
> does not exist.
> [ERROR] Failed to execute goal 
> org.apache.hadoop:hadoop-maven-plugins:3.0.0-beta1-SNAPSHOT:cmake-test 
> (test-container-executor) on project hadoop-yarn-server-nodemanager: Test 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/target/usr/local/bin/test-container-executor
>  returned ERROR CODE 134 -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-15 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-6820:
-
Fix Version/s: YARN-5355-branch-2

Thanks, Vrushali!  I committed the branch-2 patch to YARN-5355_branch2.

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Fix For: YARN-5355, YARN-5355-branch-2
>
> Attachments: YARN-6820-YARN-5355.0001.patch, 
> YARN-6820-YARN-5355.002.patch, YARN-6820-YARN-5355.003.patch, 
> YARN-6820-YARN-5355.004.patch, YARN-6820-YARN-5355.005.patch, 
> YARN-6820-YARN-5355_branch_2.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSv1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.
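
For the simple whitelist approach sketched in the description, a minimal 
illustration of the read check (the class name, constructor wiring, and any 
config property backing it are assumptions for illustration, not taken from 
the attached patches):

{code:java}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

/** Hypothetical whitelist-based ATSv2 read check; not from the patch. */
public class TimelineReadWhitelist {
  private final boolean enabled;
  private final Set<String> allowedUsers;

  public TimelineReadWhitelist(boolean enabled, String commaSeparatedUsers) {
    this.enabled = enabled;
    this.allowedUsers =
        new HashSet<>(Arrays.asList(commaSeparatedUsers.split("\\s*,\\s*")));
  }

  /** Everyone may read when the check is disabled; otherwise only listed users. */
  public boolean canRead(String user) {
    return !enabled || allowedUsers.contains(user);
  }
}
{code}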



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7014) container-executor has off-by-one error which can corrupt the heap

2017-08-15 Thread Nathan Roberts (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Roberts updated YARN-7014:
-
Fix Version/s: 3.0.0-beta1

> container-executor has off-by-one error which can corrupt the heap
> --
>
> Key: YARN-7014
> URL: https://issues.apache.org/jira/browse/YARN-7014
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-beta1
>Reporter: Shane Kumpf
>Assignee: Jason Lowe
>Priority: Critical
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-7014.001.patch
>
>
> test-container-executor is failing in trunk.
> {code}
> [INFO] 
> [INFO] --- hadoop-maven-plugins:3.0.0-beta1-SNAPSHOT:cmake-test 
> (test-container-executor) @ hadoop-yarn-server-nodemanager ---
> [INFO] ---
> [INFO]  C M A K E   B U I L D E R   T E S T
> [INFO] ---
> [INFO] test-container-executor: running 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/target/usr/local/bin/test-container-executor
> [INFO] with extra environment variables {}
> [INFO] STATUS: ERROR CODE 134 after 3714 millisecond(s).
> [INFO] ---
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 13:47 min
> [INFO] Finished at: 2017-08-12T12:58:55+00:00
> [INFO] Final Memory: 19M/296M
> [INFO] 
> 
> [WARNING] The requested profile "parallel-tests" could not be activated 
> because it does not exist.
> [WARNING] The requested profile "yarn-ui" could not be activated because it 
> does not exist.
> [ERROR] Failed to execute goal 
> org.apache.hadoop:hadoop-maven-plugins:3.0.0-beta1-SNAPSHOT:cmake-test 
> (test-container-executor) on project hadoop-yarn-server-nodemanager: Test 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/target/usr/local/bin/test-container-executor
>  returned ERROR CODE 134 -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7018) Interface for adding extra behavior to node heartbeats

2017-08-15 Thread Jason Lowe (JIRA)
Jason Lowe created YARN-7018:


 Summary: Interface for adding extra behavior to node heartbeats
 Key: YARN-7018
 URL: https://issues.apache.org/jira/browse/YARN-7018
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: resourcemanager
Reporter: Jason Lowe
Assignee: Jason Lowe


This JIRA tracks an interface for plugging in new behavior to node heartbeat 
processing.  Adding a formal interface for additional node heartbeat processing 
would allow admins to configure new functionality that is scheduler-independent 
without needing to replace the entire scheduler.  For example, both YARN-5202 
and YARN-5215 had approaches where node heartbeat processing was extended to 
implement new functionality that was essentially scheduler-independent and 
could be implemented as a plugin with this interface.
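
As a rough illustration of the kind of extension point proposed here - the 
interface name and method signatures below are invented, since the JIRA does 
not yet define an API - something like:

{code:java}
import org.apache.hadoop.conf.Configuration;

/** Hypothetical plugin point invoked on every node heartbeat. */
public interface NodeHeartbeatProcessor {
  /** One-time setup before the first heartbeat is processed. */
  void init(Configuration conf);

  /** Called for each heartbeat; adds scheduler-independent behavior. */
  void onHeartbeat(String nodeId, long heartbeatTimeMillis);
}
{code}

The RM could then load a configurable list of such processors and call each 
of them from its heartbeat handler, independent of which scheduler is in use.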



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6820) Restrict read access to timelineservice v2 data

2017-08-15 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127883#comment-16127883
 ] 

Vrushali C commented on YARN-6820:
--

Thanks [~jlowe], appreciate it.

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Fix For: YARN-5355, YARN-5355-branch-2
>
> Attachments: YARN-6820-YARN-5355.0001.patch, 
> YARN-6820-YARN-5355.002.patch, YARN-6820-YARN-5355.003.patch, 
> YARN-6820-YARN-5355.004.patch, YARN-6820-YARN-5355.005.patch, 
> YARN-6820-YARN-5355_branch_2.patch
>
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSv1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7019) Ability for applications to notify YARN about container reuse

2017-08-15 Thread Jason Lowe (JIRA)
Jason Lowe created YARN-7019:


 Summary: Ability for applications to notify YARN about container 
reuse
 Key: YARN-7019
 URL: https://issues.apache.org/jira/browse/YARN-7019
 Project: Hadoop YARN
  Issue Type: New Feature
Reporter: Jason Lowe


During preemption calculations YARN can try to reduce the amount of work lost 
by considering how long a container has been running.  However when an 
application framework like Tez reuses a container across multiple tasks it 
changes the work lost calculation since the container has essentially 
checkpointed between task assignments.  It would be nice if applications could 
inform YARN when a container has been reused/checkpointed and therefore is a 
better candidate for preemption wrt. lost work than other, younger containers.
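
To make the work-lost intuition concrete, a toy calculation under the 
assumption that a reuse notification acts as a checkpoint (all names are 
hypothetical):

{code:java}
/** Toy model of work lost on preemption when reuse acts as a checkpoint. */
public final class WorkLostModel {
  private WorkLostModel() {
  }

  /**
   * Without reuse information the whole runtime is at risk; with it, only
   * the time since the last task assignment (0 means never reused).
   */
  public static long workAtRiskMillis(long startMillis, long lastReuseMillis,
      long nowMillis) {
    long checkpoint = lastReuseMillis > 0 ? lastReuseMillis : startMillis;
    return nowMillis - checkpoint;
  }
}
{code}

Under this model a long-running but recently reused container scores low work 
at risk, making it the cheaper preemption victim.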



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7014) container-executor has off-by-one error which can corrupt the heap

2017-08-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127889#comment-16127889
 ] 

Hudson commented on YARN-7014:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12191 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12191/])
YARN-7014. Fix off-by-one error causing heap corruption (Jason Lowe via 
nroberts: rev d265459024b8e5f5eccf421627f684ca8f162112)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/string-utils.c


> container-executor has off-by-one error which can corrupt the heap
> --
>
> Key: YARN-7014
> URL: https://issues.apache.org/jira/browse/YARN-7014
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-beta1
>Reporter: Shane Kumpf
>Assignee: Jason Lowe
>Priority: Critical
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-7014.001.patch
>
>
> test-container-executor is failing in trunk.
> {code}
> [INFO] 
> [INFO] --- hadoop-maven-plugins:3.0.0-beta1-SNAPSHOT:cmake-test 
> (test-container-executor) @ hadoop-yarn-server-nodemanager ---
> [INFO] ---
> [INFO]  C M A K E   B U I L D E R   T E S T
> [INFO] ---
> [INFO] test-container-executor: running 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/target/usr/local/bin/test-container-executor
> [INFO] with extra environment variables {}
> [INFO] STATUS: ERROR CODE 134 after 3714 millisecond(s).
> [INFO] ---
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 13:47 min
> [INFO] Finished at: 2017-08-12T12:58:55+00:00
> [INFO] Final Memory: 19M/296M
> [INFO] 
> 
> [WARNING] The requested profile "parallel-tests" could not be activated 
> because it does not exist.
> [WARNING] The requested profile "yarn-ui" could not be activated because it 
> does not exist.
> [ERROR] Failed to execute goal 
> org.apache.hadoop:hadoop-maven-plugins:3.0.0-beta1-SNAPSHOT:cmake-test 
> (test-container-executor) on project hadoop-yarn-server-nodemanager: Test 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/target/usr/local/bin/test-container-executor
>  returned ERROR CODE 134 -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6736) Consider writing to both ats v1 & v2 from RM for smoother upgrades

2017-08-15 Thread Aaron Gresch (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Gresch updated YARN-6736:
---
Attachment: YARN-6736-YARN-5355.002.patch

> Consider writing to both ats v1 & v2 from RM for smoother upgrades
> --
>
> Key: YARN-6736
> URL: https://issues.apache.org/jira/browse/YARN-6736
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Aaron Gresch
> Attachments: YARN-6736-YARN-5355.001.patch, 
> YARN-6736-YARN-5355.002.patch
>
>
> When the cluster is being upgraded from atsv1 to v2, it may be good to have a 
> brief time period during which RM writes to both atsv1 and v2. This will help 
> frameworks like Tez migrate more smoothly. 
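
One plausible shape for the dual-write period is a composite publisher in the 
RM that fans each event out to both versions; a minimal sketch under invented 
interface names (the real RM publisher classes differ):

{code:java}
import java.util.List;

/** Invented sink abstraction standing in for the RM's v1/v2 publishers. */
interface TimelineEventSink {
  void publish(String entityId, String eventType);
}

/** Fans each system-metrics event out to every configured sink. */
class CompositeTimelinePublisher implements TimelineEventSink {
  private final List<TimelineEventSink> sinks;

  CompositeTimelinePublisher(List<TimelineEventSink> sinks) {
    this.sinks = sinks;
  }

  @Override
  public void publish(String entityId, String eventType) {
    for (TimelineEventSink sink : sinks) {
      try {
        sink.publish(entityId, eventType);
      } catch (RuntimeException e) {
        // Best effort: a failure writing one version must not block the other.
      }
    }
  }
}
{code}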



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6988) container-executor fails for docker when command length > 4096 B

2017-08-15 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-6988:
--
Attachment: YARN-6988.001.patch

Attaching a patch that increases the docker command buffer size to 128 KB. This 
decouples it from EXECUTOR_PATH_MAX and does not override the change that 
[~vvasudev] is making in YARN-6623. 

> container-executor fails for docker when command length > 4096 B
> 
>
> Key: YARN-6988
> URL: https://issues.apache.org/jira/browse/YARN-6988
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: YARN-6988.001.patch
>
>
> {{run_docker}} and {{launch_docker_container_as_user}} allocate their command 
> arrays using EXECUTOR_PATH_MAX, which is hardcoded to 4096 in 
> configuration.h. Because of this, the full docker command can only be 4096 
> characters. If it is longer, it will be truncated and the command will fail 
> with a parsing error. Because of the bind-mounting of volumes, the arguments 
> to the docker command can quickly get large. For example, I passed the 4096 
> limit with an 11 disk node. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6992) "Kill application" button is present even if the application is FINISHED in RM UI

2017-08-15 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-6992:
---
Description: Kill button should not be displayed for FAILED, KILLED and 
FINISHED apps

> "Kill application" button is present even if the application is FINISHED in 
> RM UI
> -
>
> Key: YARN-6992
> URL: https://issues.apache.org/jira/browse/YARN-6992
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Suma Shivaprasad
>
> Kill button should not be displayed for FAILED, KILLED and FINISHED apps
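
The condition itself is small; a minimal sketch of the visibility rule, 
assuming a hypothetical AppState enum mirroring the RM's terminal states:

{code:java}
import java.util.EnumSet;

/** Sketch of the visibility rule; AppState is a stand-in for the RM's states. */
public class KillButtonRule {
  enum AppState { NEW, ACCEPTED, RUNNING, FINISHED, FAILED, KILLED }

  private static final EnumSet<AppState> TERMINAL =
      EnumSet.of(AppState.FINISHED, AppState.FAILED, AppState.KILLED);

  /** Render the kill button only while the application can still be killed. */
  static boolean showKillButton(AppState state) {
    return !TERMINAL.contains(state);
  }
}
{code}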



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6992) "Kill application" button is present even if the application is FINISHED in RM UI

2017-08-15 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-6992:
---
Attachment: YARN-6992.001.patch

> "Kill application" button is present even if the application is FINISHED in 
> RM UI
> -
>
> Key: YARN-6992
> URL: https://issues.apache.org/jira/browse/YARN-6992
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Suma Shivaprasad
> Attachments: YARN-6992.001.patch
>
>
> Kill button should not be displayed for FAILED, KILLED and FINISHED apps in 
> Application specific landing page



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6992) "Kill application" button is present even if the application is FINISHED in RM UI

2017-08-15 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-6992:
---
Description: 
Kill button should not be displayed for FAILED, KILLED and FINISHED apps in 
Application specific landing page


  was:Kill button should not be displayed for FAILED, KILLED and FINISHED apps


> "Kill application" button is present even if the application is FINISHED in 
> RM UI
> -
>
> Key: YARN-6992
> URL: https://issues.apache.org/jira/browse/YARN-6992
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Suma Shivaprasad
> Attachments: YARN-6992.001.patch
>
>
> Kill button should not be displayed for FAILED, KILLED and FINISHED apps in 
> Application specific landing page



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6736) Consider writing to both ats v1 & v2 from RM for smoother upgrades

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127919#comment-16127919
 ] 

Hadoop QA commented on YARN-6736:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
1s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} YARN-6736 does not apply to YARN-5355. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6736 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882015/YARN-6736-YARN-5355.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16914/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Consider writing to both ats v1 & v2 from RM for smoother upgrades
> --
>
> Key: YARN-6736
> URL: https://issues.apache.org/jira/browse/YARN-6736
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Aaron Gresch
> Attachments: YARN-6736-YARN-5355.001.patch, 
> YARN-6736-YARN-5355.002.patch
>
>
> When the cluster is being upgraded from atsv1 to v2, it may be good to have a 
> brief time period during which RM writes to both atsv1 and v2. This will help 
> frameworks like Tez migrate more smoothly. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6892) Improve API implementation in Resources and DominantResourceCalculator in align to ResourceInformation

2017-08-15 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6892:
-
Attachment: YARN-6892-YARN-3926.004.patch

Fixed findbugs warning in the .004 patch.

> Improve API implementation in Resources and DominantResourceCalculator in 
> align to ResourceInformation
> --
>
> Key: YARN-6892
> URL: https://issues.apache.org/jira/browse/YARN-6892
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-6892-YARN-3926.001.patch, 
> YARN-6892-YARN-3926.002.patch, YARN-6892-YARN-3926.003.patch, 
> YARN-6892-YARN-3926.004.patch
>
>
> In YARN-3926, the APIs in Resources and DRC spend significant CPU cycles in most 
> of their calls. For better performance, it is better to improve these APIs, since 
> the resource types order is defined at the system level (the ResourceUtils class 
> ensures this post YARN-6788).
> This work precedes YARN-6788.
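
The gist of the optimization: once a system-wide ordering of resource types is 
fixed, per-call name lookups can become plain index walks over parallel arrays. 
A simplified sketch of that idea (illustrative only, not the actual patch):

{code:java}
/** Toy componentwise add over resources stored in a fixed system-wide order. */
public final class ResourceVectorOps {
  private ResourceVectorOps() {
  }

  /**
   * lhs and rhs hold one value per resource type at the index assigned by the
   * global ordering, so no name-based map lookups are needed per call.
   */
  public static long[] add(long[] lhs, long[] rhs) {
    long[] out = new long[lhs.length];
    for (int i = 0; i < lhs.length; i++) {
      out[i] = lhs[i] + rhs[i];
    }
    return out;
  }
}
{code}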



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6988) container-executor fails for docker when command length > 4096 B

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127949#comment-16127949
 ] 

Hadoop QA commented on YARN-6988:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m 36s{color} | 
{color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 36s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 35s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6988 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882017/YARN-6988.001.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux dd978eebe5b9 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d265459 |
| Default Java | 1.8.0_144 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/16915/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| cc | 
https://builds.apache.org/job/PreCommit-YARN-Build/16915/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/16915/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16915/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16915/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16915/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> container-executor fails for docker when command length > 4096 B
> 
>
> Key: YARN-6988
> URL: https://issues.apache.org/jira/browse/YARN-6988
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn

[jira] [Commented] (YARN-6257) CapacityScheduler REST API produces incorrect JSON - JSON object operationsInfo contains duplicate key

2017-08-15 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127958#comment-16127958
 ] 

Wangda Tan commented on YARN-6257:
--

[~Tao Yang], 

Thanks for the explanation. I just checked 
https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Scheduler_API;
 the health-info related references and the whole CapacitySchedulerHealthInfo are 
not part of the RM REST API doc. According to 
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html#REST_APIs,
 we should be good to modify such fields. 

[~sunilg], what do you think?

> CapacityScheduler REST API produces incorrect JSON - JSON object 
> operationsInfo contains duplicate key
> --
>
> Key: YARN-6257
> URL: https://issues.apache.org/jira/browse/YARN-6257
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 2.8.1
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Minor
> Attachments: YARN-6257.001.patch
>
>
> In response string of CapacityScheduler REST API, 
> scheduler/schedulerInfo/health/operationsInfo have duplicate key 'entry' as a 
> JSON object :
> {code}
> "operationsInfo":{
>   
> "entry":{"key":"last-preemption","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
>   
> "entry":{"key":"last-reservation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
>   
> "entry":{"key":"last-allocation","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}},
>   
> "entry":{"key":"last-release","value":{"nodeId":"N/A","containerId":"N/A","queue":"N/A"}}
> }
> {code}
> To solve this problem, I propose converting the type of the operationsInfo 
> field in the CapacitySchedulerHealthInfo class from Map to List (a DTO sketch 
> follows the JSON example below). After converting to a List, the 
> operationsInfo string will be:
> {code}
> "operationInfos":[
>   
> {"operation":"last-allocation","nodeId":"N/A","containerId":"N/A","queue":"N/A"},
>   
> {"operation":"last-release","nodeId":"N/A","containerId":"N/A","queue":"N/A"},
>   
> {"operation":"last-preemption","nodeId":"N/A","containerId":"N/A","queue":"N/A"},
>   
> {"operation":"last-reservation","nodeId":"N/A","containerId":"N/A","queue":"N/A"}
> ]
> {code}
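
In JAXB terms, the proposal swaps the Map-typed field (which marshals as 
repeated {{entry}} keys) for a list of small beans; a hedged sketch of what 
the DTO could look like (class and field names are assumptions):

{code:java}
import java.util.ArrayList;
import java.util.List;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

/** One element of the proposed operationInfos list. */
@XmlAccessorType(XmlAccessType.FIELD)
class OperationInfoSketch {
  String operation;    // e.g. "last-allocation"
  String nodeId;
  String containerId;
  String queue;
}

/** A List field marshals as a JSON array instead of repeated "entry" keys. */
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
class HealthInfoSketch {
  List<OperationInfoSketch> operationInfos = new ArrayList<>();
}
{code}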



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6610) DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate

2017-08-15 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-6610:
---
Attachment: YARN-6610.YARN-3926.006.patch

Added unit tests and fixed an error.

> DominantResourceCalculator.getResourceAsValue() dominant param is no longer 
> appropriate
> ---
>
> Key: YARN-6610
> URL: https://issues.apache.org/jira/browse/YARN-6610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-6610.001.patch, YARN-6610.YARN-3926.002.patch, 
> YARN-6610.YARN-3926.003.patch, YARN-6610.YARN-3926.004.patch, 
> YARN-6610.YARN-3926.005.patch, YARN-6610.YARN-3926.006.patch
>
>
> The {{dominant}} param assumes there are only two resources, i.e. true means 
> to compare the dominant, and false means to compare the subordinate.  Now 
> that there are _n_ resources, this parameter no longer makes sense.
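
One way to generalize beyond a dominant/subordinate pair is to compare the 
sorted per-resource share vectors largest-first; a small illustrative sketch 
(not necessarily the patch's exact approach):

{code:java}
import java.util.Arrays;

/** Toy comparison of two usages by their sorted per-resource shares. */
public final class SortedShareComparison {
  private SortedShareComparison() {
  }

  /** Shares are usage/clusterTotal per resource type; compare largest-first. */
  public static int compare(double[] lhsShares, double[] rhsShares) {
    double[] l = lhsShares.clone();
    double[] r = rhsShares.clone();
    Arrays.sort(l);
    Arrays.sort(r);
    for (int i = l.length - 1; i >= 0; i--) {
      int c = Double.compare(l[i], r[i]);
      if (c != 0) {
        return c; // first difference, walking down from the dominant share
      }
    }
    return 0;
  }
}
{code}

With two resources this degenerates to the old dominant/subordinate check, 
which is why the boolean parameter no longer carries its weight for _n_ types.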



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6610) DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate

2017-08-15 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127978#comment-16127978
 ] 

Daniel Templeton commented on YARN-6610:


My previous comment about performance was assuming that you're testing with 
more than 2 resources.  If you're testing with only 2 resources, then I'd be 
surprised to see much of a difference.  If that's the case, I can take a closer 
look.

> DominantResourceCalculator.getResourceAsValue() dominant param is no longer 
> appropriate
> ---
>
> Key: YARN-6610
> URL: https://issues.apache.org/jira/browse/YARN-6610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-6610.001.patch, YARN-6610.YARN-3926.002.patch, 
> YARN-6610.YARN-3926.003.patch, YARN-6610.YARN-3926.004.patch, 
> YARN-6610.YARN-3926.005.patch, YARN-6610.YARN-3926.006.patch
>
>
> The {{dominant}} param assumes there are only two resources, i.e. true means 
> to compare the dominant, and false means to compare the subordinate.  Now 
> that there are _n_ resources, this parameter no longer makes sense.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6992) "Kill application" button is present even if the application is FINISHED in RM UI

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127982#comment-16127982
 ] 

Hadoop QA commented on YARN-6992:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: 
The patch generated 1 new + 21 unchanged - 0 fixed = 22 total (was 21) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
18s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6992 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882021/YARN-6992.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 77405a570e88 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d265459 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16916/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16916/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16916/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> "Kill application" button is present even if the application is FINISHED in 
> RM UI
> --

[jira] [Commented] (YARN-7019) Ability for applications to notify YARN about container reuse

2017-08-15 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127996#comment-16127996
 ] 

Arun Suresh commented on YARN-7019:
---

[~jlowe], given that container reuse, especially Tez's implementation of it, is 
pretty much NM agnostic, I was wondering whether - instead of notifying the RM of 
how many times a container has been re-used - a more general way to solve this 
might be to introduce a *preempt-ability* score for a container. Initially, all 
containers of the AM are equally preemptible, but once the AM has, say, 're-used' 
a container a certain number of times, or perhaps decided to use the container 
for some best-effort task, it can lower the preemptability score of the container 
at the RM in the next allocate call. Thoughts?

> Ability for applications to notify YARN about container reuse
> -
>
> Key: YARN-7019
> URL: https://issues.apache.org/jira/browse/YARN-7019
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Jason Lowe
>
> During preemption calculations YARN can try to reduce the amount of work lost 
> by considering how long a container has been running.  However when an 
> application framework like Tez reuses a container across multiple tasks it 
> changes the work lost calculation since the container has essentially 
> checkpointed between task assignments.  It would be nice if applications 
> could inform YARN when a container has been reused/checkpointed and therefore 
> is a better candidate for preemption wrt. lost work than other, younger 
> containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7019) Ability for applications to notify YARN about container reuse

2017-08-15 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127996#comment-16127996
 ] 

Arun Suresh edited comment on YARN-7019 at 8/15/17 10:43 PM:
-

[~jlowe], given that container reuse, especially Tez's implementation of it, is 
pretty much NM agnostic (it is essentially a Tez AM - Tez container protocol), I 
was wondering whether - instead of notifying the RM of how many times a container 
has been re-used - a more general way to solve this might be to introduce a 
*preempt-ability* score for a container. Initially, all containers of the AM are 
equally preemptible, but once the AM has, say, 're-used' a container a certain 
number of times, or perhaps decided to use the container for some best-effort 
task, it can lower the preemptability score of the container at the RM in the 
next allocate call. Thoughts?


was (Author: asuresh):
[~jlowe], given that container reuse, especially Tez's implementation of it, is 
pretty much NM agnostic, I was wondering whether - instead of notifying the RM of 
how many times a container has been re-used - a more general way to solve this 
might be to introduce a *preempt-ability* score for a container. Initially, all 
containers of the AM are equally preemptible, but once the AM has, say, 're-used' 
a container a certain number of times, or perhaps decided to use the container 
for some best-effort task, it can lower the preemptability score of the container 
at the RM in the next allocate call. Thoughts?

> Ability for applications to notify YARN about container reuse
> -
>
> Key: YARN-7019
> URL: https://issues.apache.org/jira/browse/YARN-7019
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Jason Lowe
>
> During preemption calculations YARN can try to reduce the amount of work lost 
> by considering how long a container has been running.  However when an 
> application framework like Tez reuses a container across multiple tasks it 
> changes the work lost calculation since the container has essentially 
> checkpointed between task assignments.  It would be nice if applications 
> could inform YARN when a container has been reused/checkpointed and therefore 
> is a better candidate for preemption wrt. lost work than other, younger 
> containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7020) TestAMRMProxy#testAMRMProxyTokenRenewal is flakey

2017-08-15 Thread Robert Kanter (JIRA)
Robert Kanter created YARN-7020:
---

 Summary: TestAMRMProxy#testAMRMProxyTokenRenewal is flakey
 Key: YARN-7020
 URL: https://issues.apache.org/jira/browse/YARN-7020
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0-beta1
Reporter: Robert Kanter
Assignee: Robert Kanter


{{TestAMRMProxy#testAMRMProxyTokenRenewal}} is flakey.  It infrequently fails 
with:
{noformat}
testAMRMProxyTokenRenewal(org.apache.hadoop.yarn.client.api.impl.TestAMRMProxy) 
 Time elapsed: 19.036 sec  <<< ERROR!
org.apache.hadoop.yarn.exceptions.ApplicationAttemptNotFoundException: 
Application attempt appattempt_1502837054903_0001_01 doesn't exist in 
ApplicationMasterService cache.
at 
org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:355)
at 
org.apache.hadoop.yarn.server.nodemanager.amrmproxy.DefaultRequestInterceptor$3.allocate(DefaultRequestInterceptor.java:224)
at 
org.apache.hadoop.yarn.server.nodemanager.amrmproxy.DefaultRequestInterceptor.allocate(DefaultRequestInterceptor.java:135)
at 
org.apache.hadoop.yarn.server.nodemanager.amrmproxy.AMRMProxyService.allocate(AMRMProxyService.java:279)
at 
org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
at 
org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)

at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1490)
at org.apache.hadoop.ipc.Client.call(Client.java:1436)
at org.apache.hadoop.ipc.Client.call(Client.java:1346)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy90.allocate(Unknown Source)
at 
org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:77)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy91.allocate(Unknown Source)
at 
org.apache.hadoop.yarn.client.api.impl.TestAMRMProxy.testAMRMProxyTokenRenewal(TestAMRMProxy.java:190)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6610) DominantResourceCalculator.getResourceAsValue() dominant param is no longer appropriate

2017-08-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16128019#comment-16128019
 ] 

Hadoop QA commented on YARN-6610:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3926 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
58s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} YARN-3926 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common 
generated 0 new + 4569 unchanged - 5 fixed = 4569 total (was 4574) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
29s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6610 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12882033/YARN-6610.YARN-3926.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bf843fd1c873 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3926 / 8f80907 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16918/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16918/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DominantResourceCalculator.getResourceAsValue() dominant param is no longer 
> appropriate
> ---
>
> Key: YARN-6610
> URL: https://issues.apache.org/jira/browse/YARN-6610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Daniel 

[jira] [Updated] (YARN-5764) NUMA awareness support for launching containers

2017-08-15 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated YARN-5764:

Attachment: YARN-5764-v3.patch

> NUMA awareness support for launching containers
> ---
>
> Key: YARN-5764
> URL: https://issues.apache.org/jira/browse/YARN-5764
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager, yarn
>Reporter: Olasoji
>Assignee: Devaraj K
> Attachments: NUMA Awareness for YARN Containers.pdf, NUMA Performance 
> Results.pdf, YARN-5764-v0.patch, YARN-5764-v1.patch, YARN-5764-v2.patch, 
> YARN-5764-v3.patch
>
>
> The purpose of this feature is to improve Hadoop performance by minimizing 
> costly remote memory accesses on non-SMP systems. YARN containers, on launch, 
> will be pinned to a specific NUMA node and all subsequent memory allocations 
> will be served by the same node, reducing remote memory accesses. The current 
> default behavior is to spread memory across all NUMA nodes.
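
Mechanically, pinning usually means prefixing the container launch command 
with numactl binds. A hypothetical sketch of building such a prefix (the 
numactl flags are real options; the wrapper class is invented, not from the 
attached patches):

{code:java}
import java.util.ArrayList;
import java.util.List;

/** Sketch: wrap a container command so CPU and memory come from one NUMA node. */
public final class NumaLaunch {
  private NumaLaunch() {
  }

  public static List<String> pinToNode(int numaNode, List<String> containerCmd) {
    List<String> cmd = new ArrayList<>();
    cmd.add("numactl");
    cmd.add("--cpunodebind=" + numaNode); // run only on this node's CPUs
    cmd.add("--membind=" + numaNode);     // serve allocations from this node's memory
    cmd.addAll(containerCmd);
    return cmd;
  }
}
{code}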



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


