[jira] [Updated] (YARN-8722) Failed to get native service application status when security is enabled

2018-08-27 Thread Zac Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zac Zhou updated YARN-8722:
---
Environment: 
The environment context is as follows:
 1) Security enabled (Kerberos).

2) Klist output
 Ticket cache: FILE:/tmp/krb5cc_1010
 Default principal: hadoop/admin@HADOOP..COM

Valid starting Expires Service principal
 08/28/2018 10:50:07 08/28/2018 20:50:07 krbtgt/HADOOP..COM@HADOOP..COM
 renew until 08/29/2018 10:50:07

3) Service spec json.
 standlone-tf.json in the attachment

4) Service HDFS path permissions.
 drwxr-x--- - hadoop hdfs 0 2018-08-27 15:54 
hdfs://submarine/user/hadoop/.yarn/services/standlone-tf

drwxr-x--- - hadoop hdfs 0 2018-08-27 15:54 
hdfs://submarine/user/hadoop/.yarn/services/standlone-tf/lib
 -rw-rw-rw- 2 hadoop hdfs 2228 2018-08-27 15:54 
hdfs://submarine/user/hadoop/.yarn/services/standlone-tf/standlone-tf.json

5) Stacktrace.
 stack.txt in the attachment

6) yarn app -status -> error.
 bin/yarn app -status standlone-tf (service name)
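
For reference, a minimal sketch of an explicit Kerberos login before querying YARN, which can help rule out ticket-cache issues when reproducing this; the principal, keytab path, and class name are illustrative placeholders rather than values from this environment:

{code:java}
// Minimal sketch (not from this issue): log in from a keytab instead of relying
// on the ticket cache, then list applications with the standard YarnClient API.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class SecureStatusCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new YarnConfiguration();
    conf.set("hadoop.security.authentication", "kerberos");
    UserGroupInformation.setConfiguration(conf);
    // Placeholder principal and keytab path, not values from this report.
    UserGroupInformation.loginUserFromKeytab("hadoop/admin@EXAMPLE.COM",
        "/etc/security/keytabs/hadoop.keytab");

    YarnClient yarnClient = YarnClient.createYarnClient();
    yarnClient.init(conf);
    yarnClient.start();
    // The submitted yarn-service application should still be listed here even
    // when "yarn app -status" on the service name fails.
    yarnClient.getApplications().forEach(report ->
        System.out.println(report.getApplicationId() + " " + report.getName()));
    yarnClient.stop();
  }
}
{code}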

  was:
The environment context is as follows:
1) Security enabled.
kerberos

2) Klist output
Ticket cache: FILE:/tmp/krb5cc_1010
Default principal: hadoop/admin@HADOOP..COM

Valid starting Expires Service principal
08/28/2018 10:50:07 08/28/2018 20:50:07 krbtgt/HADOOP..COM@HADOOP..COM
 renew until 08/29/2018 10:50:07
 
3) Service spec json.
standlone-tf.json in the attachment
 
4) HDFS permission. 
drwxr-x--- - hadoop hdfs 0 2018-08-27 15:54 
hdfs://submarine/user/hadoop/.yarn/services/standlone-tf

drwxr-x--- - hadoop hdfs 0 2018-08-27 15:54 
hdfs://submarine/user/hadoop/.yarn/services/standlone-tf/lib
-rw-rw-rw- 2 hadoop hdfs 2228 2018-08-27 15:54 
hdfs://submarine/user/hadoop/.yarn/services/standlone-tf/standlone-tf.json

5) Stacktrace.
stack.txt in the attachment

6) yarn app -status -> error.
bin/yarn app -status standlone-tf (service name)


> Failed to get native service application status when security is enabled 
> -
>
> Key: YARN-8722
> URL: https://issues.apache.org/jira/browse/YARN-8722
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
> Environment: The environment context is as follows:
>  1) Security enabled.
>  kerberos
> 2) Klist output
>  Ticket cache: FILE:/tmp/krb5cc_1010
>  Default principal: hadoop/admin@HADOOP..COM
> Valid starting Expires Service principal
>  08/28/2018 10:50:07 08/28/2018 20:50:07 
> krbtgt/HADOOP..COM@HADOOP..COM
>  renew until 08/29/2018 10:50:07
> 3) Service spec json.
>  standlone-tf.json in the attachment
> 4) service HDFS path permission. 
>  drwxr-x--- - hadoop hdfs 0 2018-08-27 15:54 
> hdfs://submarine/user/hadoop/.yarn/services/standlone-tf
> drwxr-x--- - hadoop hdfs 0 2018-08-27 15:54 
> hdfs://submarine/user/hadoop/.yarn/services/standlone-tf/lib
>  -rw-rw-rw- 2 hadoop hdfs 2228 2018-08-27 15:54 
> hdfs://submarine/user/hadoop/.yarn/services/standlone-tf/standlone-tf.json
> 5) Stacktrace.
>  stack.txt in the attachment
> 6) yarn app -status -> error.
>  bin/yarn app -status standlone-tf (service name)
>Reporter: Zac Zhou
>Priority: Major
> Attachments: stack.txt, standlone-tf.json
>
>
> Can't get job status with the following command, after a submarine job is 
> submitted.
> bin/yarn app -status standlone-tf (service name)
>  






[jira] [Updated] (YARN-8722) Failed to get native service application status when security is enabled

2018-08-27 Thread Zac Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zac Zhou updated YARN-8722:
---
Attachment: standlone-tf.json
stack.txt

> Failed to get native service application status when security is enabled 
> -
>
> Key: YARN-8722
> URL: https://issues.apache.org/jira/browse/YARN-8722
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
> Environment: The environment context is as follows:
> 1) Security enabled.
> kerberos
> 2) Klist output
> Ticket cache: FILE:/tmp/krb5cc_1010
> Default principal: hadoop/admin@HADOOP..COM
> Valid starting Expires Service principal
> 08/28/2018 10:50:07 08/28/2018 20:50:07 krbtgt/HADOOP..COM@HADOOP..COM
>  renew until 08/29/2018 10:50:07
>  
> 3) Service spec json.
> standlone-tf.json in the attachment
>  
> 4) HDFS permission. 
> drwxr-x--- - hadoop hdfs 0 2018-08-27 15:54 
> hdfs://submarine/user/hadoop/.yarn/services/standlone-tf
> drwxr-x--- - hadoop hdfs 0 2018-08-27 15:54 
> hdfs://submarine/user/hadoop/.yarn/services/standlone-tf/lib
> -rw-rw-rw- 2 hadoop hdfs 2228 2018-08-27 15:54 
> hdfs://submarine/user/hadoop/.yarn/services/standlone-tf/standlone-tf.json
> 5) Stacktrace.
> stack.txt in the attachment
> 6) yarn app -status -> error.
> bin/yarn app -status standlone-tf (service name)
>Reporter: Zac Zhou
>Priority: Major
> Attachments: stack.txt, standlone-tf.json
>
>
> Can't get job status with the following command, after a submarine job is 
> submitted.
> bin/yarn app -status standlone-tf (service name)
>  






[jira] [Created] (YARN-8722) Failed to get native service application status when security is enabled

2018-08-27 Thread Zac Zhou (JIRA)
Zac Zhou created YARN-8722:
--

 Summary: Failed to get native service application status when 
security is enabled 
 Key: YARN-8722
 URL: https://issues.apache.org/jira/browse/YARN-8722
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-native-services
 Environment: The environment context is as follows:
1) Security enabled.
kerberos

2) Klist output
Ticket cache: FILE:/tmp/krb5cc_1010
Default principal: hadoop/admin@HADOOP..COM

Valid starting Expires Service principal
08/28/2018 10:50:07 08/28/2018 20:50:07 krbtgt/HADOOP..COM@HADOOP..COM
 renew until 08/29/2018 10:50:07
 
3) Service spec json.
standlone-tf.json in the attachment
 
4) HDFS permission. 
drwxr-x--- - hadoop hdfs 0 2018-08-27 15:54 
hdfs://submarine/user/hadoop/.yarn/services/standlone-tf

drwxr-x--- - hadoop hdfs 0 2018-08-27 15:54 
hdfs://submarine/user/hadoop/.yarn/services/standlone-tf/lib
-rw-rw-rw- 2 hadoop hdfs 2228 2018-08-27 15:54 
hdfs://submarine/user/hadoop/.yarn/services/standlone-tf/standlone-tf.json

5) Stacktrace.
stack.txt in the attachment

6) yarn app -status -> error.
bin/yarn app -status standlone-tf (service name)
Reporter: Zac Zhou


After a submarine job is submitted, its status cannot be retrieved with the 
following command:
bin/yarn app -status standlone-tf (service name)

 






[jira] [Commented] (YARN-7865) Node attributes documentation

2018-08-27 Thread Naganarasimha G R (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594540#comment-16594540
 ] 

Naganarasimha G R commented on YARN-7865:
-

[~sunilg], [~cheersyang] & [~bibinchundatt],

Could all of you have a look at the documentation and share whether any 
modifications are required?

> Node attributes documentation
> -
>
> Key: YARN-7865
> URL: https://issues.apache.org/jira/browse/YARN-7865
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Weiwei Yang
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: NodeAttributes.html, YARN-7865-YARN-3409.001.patch
>
>
> We need proper docs to introduce how to enable node-attributes, how to 
> configure providers, how to specify script paths and arguments in configuration, 
> what the proper permissions of the script should be, and who will run the 
> script. It would also be good to add more info to the descriptions of the 
> configuration properties.






[jira] [Commented] (YARN-8721) Scheduler should accept nodes which doesnt have an attribute when NodeAttributeType.NE is used

2018-08-27 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594534#comment-16594534
 ] 

genericqa commented on YARN-8721:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3409 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
26s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
54s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
18s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
43s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} YARN-3409 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
26s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 21s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-8721 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12937365/YARN-8721-YARN-3409.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0c6413e857e1 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-3409 / 87f236c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Updated] (YARN-8720) CapacityScheduler does not enforce yarn.scheduler.capacity..maximum-allocation-mb/vcores when configured

2018-08-27 Thread Tarun Parimi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tarun Parimi updated YARN-8720:
---
Attachment: YARN-8720.001.patch

> CapacityScheduler does not enforce 
> yarn.scheduler.capacity..maximum-allocation-mb/vcores when 
> configured
> 
>
> Key: YARN-8720
> URL: https://issues.apache.org/jira/browse/YARN-8720
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Major
> Attachments: YARN-8720.001.patch
>
>
> The value of 
> yarn.scheduler.capacity..maximum-allocation-mb/vcores is not 
> strictly enforced when applications request containers. An 
> InvalidResourceRequestException is thrown only when the ResourceRequest is 
> greater than the global value of yarn.scheduler.maximum-allocation-mb/vcores. 
> For an example configuration such as the one below,
>  
> {code:java}
> yarn.scheduler.maximum-allocation-mb=4096
> yarn.scheduler.capacity.root.test.maximum-allocation-mb=2048
> {code}
>  
> The DSShell command below runs successfully and asks for an AM container of 
> 4096 MB, which is greater than the 2048 MB maximum configured for the test queue.
> {code:java}
> yarn jar $YARN_HOME/hadoop-yarn-applications-distributedshell.jar 
> -num_containers 1 -jar 
> $YARN_HOME/hadoop-yarn-applications-distributedshell.jar -shell_command 
> "sleep 60" -container_memory=4096 -master_memory=4096 -queue=test{code}
> Instead, it should fail to launch the application with an 
> InvalidResourceRequestException. The child container, however, will be 
> requested with 2048 MB because the DSShell AppMaster performs the check below 
> before sending the ResourceRequest to the RM.
> {code:java}
> // A resource ask cannot exceed the max.
> if (containerMemory > maxMem) {
>  LOG.info("Container memory specified above max threshold of cluster."
>  + " Using max value." + ", specified=" + containerMemory + ", max="
>  + maxMem);
>  containerMemory = maxMem;
> }{code}
>  
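
For illustration, a minimal client-side sketch of the queue-level check that the description argues is missing; it reuses the property names from the example configuration above, but it is not the CapacityScheduler's actual enforcement path:

{code:java}
// Rough illustration only: clamp a container request against the queue-level
// maximum from the example configuration, then against the cluster maximum.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class QueueMaxCheck {
  public static void main(String[] args) {
    Configuration conf = new YarnConfiguration();
    long clusterMax = conf.getLong("yarn.scheduler.maximum-allocation-mb", 8192);
    // Property name taken from the example above; the queue path is "root.test".
    long queueMax = conf.getLong(
        "yarn.scheduler.capacity.root.test.maximum-allocation-mb", clusterMax);

    long requestedMb = 4096;
    long effectiveMax = Math.min(clusterMax, queueMax);
    if (requestedMb > effectiveMax) {
      // With strict enforcement this is where an InvalidResourceRequestException
      // would be raised; here we only report the violation.
      System.out.println("Request of " + requestedMb + " MB exceeds queue max of "
          + effectiveMax + " MB");
    }
  }
}
{code}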






[jira] [Commented] (YARN-8720) CapacityScheduler does not enforce yarn.scheduler.capacity..maximum-allocation-mb/vcores when configured

2018-08-27 Thread Tarun Parimi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594528#comment-16594528
 ] 

Tarun Parimi commented on YARN-8720:


Spark, for example, does its AM-side check based only on 
yarn.scheduler.maximum-allocation-mb/vcores.
 So the following spark command works, and YARN allocates an executor of size <4505 
Memory, 4 VCores>, even when 
yarn.scheduler.capacity.root.test.maximum-allocation-mb=3072 and 
yarn.scheduler.capacity.root.test.maximum-allocation-vcores=2
{code:java}
spark-submit --class org.apache.spark.examples.SparkPi     --master yarn     
--deploy-mode cluster     --driver-memory 1g     --executor-memory 4g     
--executor-cores 4     --queue test 
$SPARK_HOME/examples/jars/spark-examples.jar 100
{code}

> CapacityScheduler does not enforce 
> yarn.scheduler.capacity..maximum-allocation-mb/vcores when 
> configured
> 
>
> Key: YARN-8720
> URL: https://issues.apache.org/jira/browse/YARN-8720
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Major
>
> The value of 
> yarn.scheduler.capacity..maximum-allocation-mb/vcores is not 
> strictly enforced when applications request containers. An 
> InvalidResourceRequestException is thrown only when the ResourceRequest is 
> greater than the global value of yarn.scheduler.maximum-allocation-mb/vcores 
> . So for an example configuration such as below,
>  
> {code:java}
> yarn.scheduler.maximum-allocation-mb=4096
> yarn.scheduler.capacity.root.test.maximum-allocation-mb=2048
> {code}
>  
> The below DSShell command runs successfully and asks an AM container of size 
> 4096MB which is greater than max 2048MB configured in test queue.
> {code:java}
> yarn jar $YARN_HOME/hadoop-yarn-applications-distributedshell.jar 
> -num_containers 1 -jar 
> $YARN_HOME/hadoop-yarn-applications-distributedshell.jar -shell_command 
> "sleep 60" -container_memory=4096 -master_memory=4096 -queue=test{code}
> Instead it should not launch the application and fail with 
> InvalidResourceRequestException . The child container however will be 
> requested with size 2048MB as DSShell AppMaster does the below check before 
> ResourceRequest ask with RM.
> {code:java}
> // A resource ask cannot exceed the max.
> if (containerMemory > maxMem) {
>  LOG.info("Container memory specified above max threshold of cluster."
>  + " Using max value." + ", specified=" + containerMemory + ", max="
>  + maxMem);
>  containerMemory = maxMem;
> }{code}
>  






[jira] [Commented] (YARN-7865) Node attributes documentation

2018-08-27 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594527#comment-16594527
 ] 

genericqa commented on YARN-7865:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} YARN-3409 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
24s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
34m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 12 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-7865 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12937373/YARN-7865-YARN-3409.001.patch
 |
| Optional Tests |  dupname  asflicense  mvnsite  |
| uname | Linux 7bb2ede531a0 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-3409 / 87f236c |
| maven | version: Apache Maven 3.3.9 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/21696/artifact/out/whitespace-eol.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/21696/artifact/out/whitespace-tabs.txt
 |
| Max. process+thread count | 304 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21696/console |
| Powered by | Apache Yetus 0.9.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Node attributes documentation
> -
>
> Key: YARN-7865
> URL: https://issues.apache.org/jira/browse/YARN-7865
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Weiwei Yang
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: NodeAttributes.html, YARN-7865-YARN-3409.001.patch
>
>
> We need proper docs to introduce how to enable node-attributes how to 
> configure providers, how to specify script paths, arguments in configuration, 
> what should be the proper permission of the script and who will run the 
> script. Also it would be good to add more info to the description of the 
> configuration properties.






[jira] [Commented] (YARN-8719) Typo correction for yarn configuration in OpportunisticContainers(federation) docs

2018-08-27 Thread Y. SREENIVASULU REDDY (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594515#comment-16594515
 ] 

Y. SREENIVASULU REDDY commented on YARN-8719:
-

Thanks for the review and commit [~cheersyang]

> Typo correction for yarn configuration in OpportunisticContainers(federation) 
> docs
> --
>
> Key: YARN-8719
> URL: https://issues.apache.org/jira/browse/YARN-8719
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, federation
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Y. SREENIVASULU REDDY
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: YARN-8719.001.patch
>
>
> "yarn.resourcemanger.scheduler.address"  This configuration have typo mistake.
> https://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Quick_Guide






[jira] [Updated] (YARN-7865) Node attributes documentation

2018-08-27 Thread Naganarasimha G R (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-7865:

Attachment: YARN-7865-YARN-3409.001.patch

> Node attributes documentation
> -
>
> Key: YARN-7865
> URL: https://issues.apache.org/jira/browse/YARN-7865
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Weiwei Yang
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: NodeAttributes.html, YARN-7865-YARN-3409.001.patch
>
>
> We need proper docs to introduce how to enable node-attributes how to 
> configure providers, how to specify script paths, arguments in configuration, 
> what should be the proper permission of the script and who will run the 
> script. Also it would be good to add more info to the description of the 
> configuration properties.






[jira] [Updated] (YARN-7865) Node attributes documentation

2018-08-27 Thread Naganarasimha G R (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-7865:

Attachment: NodeAttributes.html

> Node attributes documentation
> -
>
> Key: YARN-7865
> URL: https://issues.apache.org/jira/browse/YARN-7865
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Weiwei Yang
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: NodeAttributes.html
>
>
> We need proper docs to introduce how to enable node-attributes how to 
> configure providers, how to specify script paths, arguments in configuration, 
> what should be the proper permission of the script and who will run the 
> script. Also it would be good to add more info to the description of the 
> configuration properties.






[jira] [Updated] (YARN-7865) Node attributes documentation

2018-08-27 Thread Naganarasimha G R (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-7865:

Attachment: (was: YARN-7865-YARN-3409.001.patch)

> Node attributes documentation
> -
>
> Key: YARN-7865
> URL: https://issues.apache.org/jira/browse/YARN-7865
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Weiwei Yang
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: NodeAttributes.html
>
>
> We need proper docs to introduce how to enable node-attributes how to 
> configure providers, how to specify script paths, arguments in configuration, 
> what should be the proper permission of the script and who will run the 
> script. Also it would be good to add more info to the description of the 
> configuration properties.






[jira] [Updated] (YARN-7865) Node attributes documentation

2018-08-27 Thread Naganarasimha G R (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-7865:

Attachment: (was: YARN-8721-YARN-3409.001.patch)

> Node attributes documentation
> -
>
> Key: YARN-7865
> URL: https://issues.apache.org/jira/browse/YARN-7865
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Weiwei Yang
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-7865-YARN-3409.001.patch
>
>
> We need proper docs to introduce how to enable node-attributes how to 
> configure providers, how to specify script paths, arguments in configuration, 
> what should be the proper permission of the script and who will run the 
> script. Also it would be good to add more info to the description of the 
> configuration properties.






[jira] [Updated] (YARN-7865) Node attributes documentation

2018-08-27 Thread Naganarasimha G R (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-7865:

Attachment: YARN-7865-YARN-3409.001.patch

> Node attributes documentation
> -
>
> Key: YARN-7865
> URL: https://issues.apache.org/jira/browse/YARN-7865
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Weiwei Yang
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-7865-YARN-3409.001.patch
>
>
> We need proper docs to introduce how to enable node-attributes how to 
> configure providers, how to specify script paths, arguments in configuration, 
> what should be the proper permission of the script and who will run the 
> script. Also it would be good to add more info to the description of the 
> configuration properties.






[jira] [Updated] (YARN-7865) Node attributes documentation

2018-08-27 Thread Naganarasimha G R (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-7865:

Attachment: YARN-8721-YARN-3409.001.patch

> Node attributes documentation
> -
>
> Key: YARN-7865
> URL: https://issues.apache.org/jira/browse/YARN-7865
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Weiwei Yang
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-7865-YARN-3409.001.patch
>
>
> We need proper docs to introduce how to enable node-attributes how to 
> configure providers, how to specify script paths, arguments in configuration, 
> what should be the proper permission of the script and who will run the 
> script. Also it would be good to add more info to the description of the 
> configuration properties.






[jira] [Commented] (YARN-8717) set memory.limit_in_bytes when NodeManager starting

2018-08-27 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594458#comment-16594458
 ] 

Weiwei Yang commented on YARN-8717:
---

Hi [~yangjiandan]

Thanks for raising this. According to the cgroups doc,

{{memory.limit_in_bytes}}

specifies the maximum usage permitted for user memory, including the file cache.

Could this have the side effect of making it easier to run into OOM?

+[~szegedim] for discussion.

Thanks

> set memory.limit_in_bytes when NodeManager starting
> ---
>
> Key: YARN-8717
> URL: https://issues.apache.org/jira/browse/YARN-8717
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Major
>
> CGroupsCpuResourceHandlerImpl sets the cpu quota at the hadoop-yarn hierarchy to 
> restrict the total cpu resource of the NM when the NM starts; 
> CGroupsMemoryResourceHandlerImpl should likewise set memory.limit_in_bytes at the 
> hadoop-yarn hierarchy to control the memory resource of the NM.
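
For illustration, a minimal sketch of what setting this limit at NM start could look like; the cgroup mount path and the 8 GiB value are assumptions, and this is not the CGroupsMemoryResourceHandlerImpl code:

{code:java}
// Hedged sketch only: write a memory limit to the hadoop-yarn cgroup hierarchy,
// mirroring what the cpu handler does for cpu quota. The mount path below is a
// common default and is assumed, not taken from this issue.
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class YarnCgroupMemoryLimit {
  public static void main(String[] args) throws IOException {
    long nmMemoryLimitBytes = 8L * 1024 * 1024 * 1024; // e.g. 8 GiB for all containers
    Path limitFile =
        Paths.get("/sys/fs/cgroup/memory/hadoop-yarn/memory.limit_in_bytes");
    // Caps user memory (including the file cache) for everything under hadoop-yarn.
    Files.write(limitFile, Long.toString(nmMemoryLimitBytes)
        .getBytes(StandardCharsets.UTF_8));
  }
}
{code}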






[jira] [Commented] (YARN-8721) Scheduler should accept nodes which doesnt have an attribute when NodeAttributeType.NE is used

2018-08-27 Thread Naganarasimha G R (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594432#comment-16594432
 ] 

Naganarasimha G R commented on YARN-8721:
-

[~sunilg], IIRC the idea for the boolean was not to support a different type but to 
have the value as an empty string and have the expression support "exist" and "not 
exist" operators. I am fine with achieving it after the merge, but my concern is 
whether others like Microsoft will agree during the merge.

> Scheduler should accept nodes which doesnt have an attribute when 
> NodeAttributeType.NE is used
> --
>
> Key: YARN-8721
> URL: https://issues.apache.org/jira/browse/YARN-8721
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8721-YARN-3409.001.patch, 
> YARN-8721-YARN-3409.002.patch
>
>







[jira] [Updated] (YARN-8717) set memory.limit_in_bytes when NodeManager starting

2018-08-27 Thread Jiandan Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiandan Yang  updated YARN-8717:

Description: CGroupsCpuResourceHandlerImpl sets cpu quota at hirarchy of 
hadoop-yarn  to restrict total resource of cpu of NM when NM starting; 
CGroupsMemoryResourceHandlerImpl also should set memory.limit_in_bytes at 
hirachy of hadoop-yarn to control memory resource of NM  (was: 
CGroupsCpuResourceHandlerImpl sets cpu quota at hirarchy of hadoop-yarn  to 
restrict total resource of cpu of NM when NM starting; 
CGroupsMemoryResourceHandlerImpl also should set memory.limit_in_bytes at 
hirachy of hadoop-yarn to control cpu resource of NM)

> set memory.limit_in_bytes when NodeManager starting
> ---
>
> Key: YARN-8717
> URL: https://issues.apache.org/jira/browse/YARN-8717
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Major
>
> CGroupsCpuResourceHandlerImpl sets cpu quota at hirarchy of hadoop-yarn  to 
> restrict total resource of cpu of NM when NM starting; 
> CGroupsMemoryResourceHandlerImpl also should set memory.limit_in_bytes at 
> hirachy of hadoop-yarn to control memory resource of NM






[jira] [Commented] (YARN-8721) Scheduler should accept nodes which doesnt have an attribute when NodeAttributeType.NE is used

2018-08-27 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594415#comment-16594415
 ] 

Sunil Govindan commented on YARN-8721:
--

Thanks [~cheersyang]. Attaching v2 patch.

> Scheduler should accept nodes which doesnt have an attribute when 
> NodeAttributeType.NE is used
> --
>
> Key: YARN-8721
> URL: https://issues.apache.org/jira/browse/YARN-8721
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8721-YARN-3409.001.patch, 
> YARN-8721-YARN-3409.002.patch
>
>







[jira] [Updated] (YARN-8721) Scheduler should accept nodes which doesnt have an attribute when NodeAttributeType.NE is used

2018-08-27 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8721:
-
Attachment: YARN-8721-YARN-3409.002.patch

> Scheduler should accept nodes which doesnt have an attribute when 
> NodeAttributeType.NE is used
> --
>
> Key: YARN-8721
> URL: https://issues.apache.org/jira/browse/YARN-8721
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8721-YARN-3409.001.patch, 
> YARN-8721-YARN-3409.002.patch
>
>







[jira] [Commented] (YARN-8721) Scheduler should accept nodes which doesnt have an attribute when NodeAttributeType.NE is used

2018-08-27 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594411#comment-16594411
 ] 

Weiwei Yang commented on YARN-8721:
---

Hi [~sunilg]
{quote}When we add next node attribute types apart from STRING (might be 
boolean), we just need to register op handler to a util class and use same in 
while comparing in scheduler
{quote}
Agree.

The patch looks good; could you fix the checkstyle issues, please? Thanks

 

> Scheduler should accept nodes which doesnt have an attribute when 
> NodeAttributeType.NE is used
> --
>
> Key: YARN-8721
> URL: https://issues.apache.org/jira/browse/YARN-8721
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8721-YARN-3409.001.patch
>
>







[jira] [Commented] (YARN-8721) Scheduler should accept nodes which doesnt have an attribute when NodeAttributeType.NE is used

2018-08-27 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594398#comment-16594398
 ] 

Sunil Govindan commented on YARN-8721:
--

Thanks [~Naganarasimha].

Yes, I actually thought about the same. When we add the next node attribute types 
apart from STRING (might be boolean), we just need to register an op handler in a 
util class and use the same while comparing in the scheduler.

We could do this post merge for simplicity. Thoughts?

cc [~cheersyang]
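
A minimal sketch of the op-handler registry idea from the comment above; the enum, operator names, and registry class are illustrative stand-ins, not the actual YARN-3409 types:

{code:java}
// Illustrative only: per-type registry of operator handlers. The NE handler also
// accepts nodes that do not carry the attribute at all (nodeValue == null), which
// is the behaviour this JIRA asks the scheduler to honour.
import java.util.EnumMap;
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiPredicate;

public final class AttributeOpHandlers {
  enum AttrType { STRING } // future types (e.g. BOOLEAN) would register their own handlers

  private static final Map<AttrType, Map<String, BiPredicate<String, String>>> HANDLERS =
      new EnumMap<>(AttrType.class);

  static {
    Map<String, BiPredicate<String, String>> stringOps = new HashMap<>();
    stringOps.put("EQ", (nodeValue, wanted) ->
        nodeValue != null && nodeValue.equals(wanted));
    stringOps.put("NE", (nodeValue, wanted) ->
        nodeValue == null || !nodeValue.equals(wanted));
    HANDLERS.put(AttrType.STRING, stringOps);
  }

  public static boolean evaluate(AttrType type, String op, String nodeValue, String wanted) {
    Map<String, BiPredicate<String, String>> ops = HANDLERS.get(type);
    BiPredicate<String, String> handler = ops == null ? null : ops.get(op);
    return handler != null && handler.test(nodeValue, wanted);
  }
}
{code}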

> Scheduler should accept nodes which doesnt have an attribute when 
> NodeAttributeType.NE is used
> --
>
> Key: YARN-8721
> URL: https://issues.apache.org/jira/browse/YARN-8721
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8721-YARN-3409.001.patch
>
>







[jira] [Commented] (YARN-8051) TestRMEmbeddedElector#testCallbackSynchronization is flakey

2018-08-27 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594381#comment-16594381
 ] 

genericqa commented on YARN-8051:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
29s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}336m  4s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
32s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}375m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:a716388 |
| JIRA Issue | YARN-8051 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12937056/YARN-8051-branch-2.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b0a466a84d63 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / 548a595 |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/21693/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21693/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/21693/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 873 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21693/console |
| Powered by | Apache Yetus 0.9.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestRMEmbeddedElector#testCallbackSynchronization is flakey
> ---
>
> Key: YARN-8051
> 

[jira] [Comment Edited] (YARN-8706) DelayedProcessKiller is executed for Docker containers even though docker stop sends a KILL signal after the specified grace period

2018-08-27 Thread Chandni Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594348#comment-16594348
 ] 

Chandni Singh edited comment on YARN-8706 at 8/28/18 12:25 AM:
---

I can see 2 ways to address this:

Approach 1:
1. Deprecate {{YarnConfiguration.NM_DOCKER_STOP_GRACE_PERIOD}}. 
{{YarnConfiguration.NM_SLEEP_DELAY_BEFORE_SIGKILL_MS}} will trigger container 
kill after the delay ms. 
2. Nothing else changes. By default, docker stop uses grace period of 10 
seconds and even if {{DelayedProcessKiller}} executes after this, it will check 
whether the process is in stoppable state.

This requires no code change except deprecating 
{{YarnConfiguration.NM_DOCKER_STOP_GRACE_PERIOD}}


Approach 2: 
1.  Deprecate {{YarnConfiguration.NM_DOCKER_STOP_GRACE_PERIOD}}. 
{{YarnConfiguration.NM_SLEEP_DELAY_BEFORE_SIGKILL_MS}}
2. For Docker Runtime,  rely only on docker stop to calculate grace period in 
seconds from 
{{YarnConfiguration.NM_SLEEP_DELAY_BEFORE_SIGKILL_MS}} 
3. {{DelayedProcessKiller}} is NOT executed for Docker Runtime but executed for 
the other runtimes.

This requires a lot of change:
1. {{YarnConfiguration.NM_SLEEP_DELAY_BEFORE_SIGKILL_MS}}  needs to be passed 
to {{DockerLinuxContainerRuntime}}
2. {{DelayedProcessKiller}} should be executed for all runtimes except 
{{DockerLinuxContainerRuntime}}


NOTE: {{YarnConfiguration.NM_DOCKER_STOP_GRACE_PERIOD}} should be deprecated in 
both cases



was (Author: csingh):
I can see 2 ways for addressing this:

Approach 1:
1. Deprecate {{YarnConfiguration.NM_DOCKER_STOP_GRACE_PERIOD}}. 
{{YarnConfiguration.NM_SLEEP_DELAY_BEFORE_SIGKILL_MS}} will trigger container 
kill after the delay ms. 
2. Nothing else changes. By default, docker stop uses grace period of 10 
seconds and even if {{DelayedProcessKiller}} executes after this, it will check 
whether the process is in stoppable state.

This requires no code change except deprecating 
{{YarnConfiguration.NM_DOCKER_STOP_GRACE_PERIOD}}


Approach 2: 
1.  Deprecate {{YarnConfiguration.NM_DOCKER_STOP_GRACE_PERIOD}}. 
{{YarnConfiguration.NM_SLEEP_DELAY_BEFORE_SIGKILL_MS}}
2. For Docker Runtime,  rely only on docker stop to calculate grace period in 
seconds from 
{{YarnConfiguration.NM_SLEEP_DELAY_BEFORE_SIGKILL_MS}} 
3. {{DelayedProcessKiller}} is NOT executed for Docker Runtime but executed for 
the other runtimes.

This requires a lot of change:
1. {{YarnConfiguration.NM_SLEEP_DELAY_BEFORE_SIGKILL_MS}}  needs to be passed 
to {{DockerLinuxContainerRuntime}}
2. {{DelayedProcessKiller}} should be executed for all runtimes except 
{{DockerLinuxContainerRuntime}}


NOTE: {{YarnConfiguration.NM_DOCKER_STOP_GRACE_PERIOD}} should be deprecated in 
both cases


> DelayedProcessKiller is executed for Docker containers even though docker 
> stop sends a KILL signal after the specified grace period
> ---
>
> Key: YARN-8706
> URL: https://issues.apache.org/jira/browse/YARN-8706
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>  Labels: docker
>
> {{DockerStopCommand}} adds a grace period of 10 seconds.
> 10 seconds is also the default grace time used by docker stop
>  [https://docs.docker.com/engine/reference/commandline/stop/]
> Documentation of the docker stop:
> {quote}the main process inside the container will receive {{SIGTERM}}, and 
> after a grace period, {{SIGKILL}}.
> {quote}
> There is a {{DelayedProcessKiller}} in {{ContainerExecutor}} which executes 
> for all containers after a delay when {{sleepDelayBeforeSigKill>0}}. By 
> default this is set to {{250 milliseconds}}, so irrespective of the 
> container type, it will always get executed.
>  
> For a docker container, {{docker stop}} takes care of sending a {{SIGKILL}} 
> after the grace period
> - when sleepDelayBeforeSigKill > 10 seconds, there is no point in 
> executing DelayedProcessKiller
> - when sleepDelayBeforeSigKill < 1 second, the grace period should be 
> the smallest value, which is 1 second, because we force a kill 
> after 250 ms anyway
>  
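
A minimal sketch of the grace-period arithmetic discussed in this issue, assuming only that docker stop takes whole seconds; the class and method names are illustrative, not the actual DockerStopCommand code:

{code:java}
// Hedged sketch: derive a docker stop grace period from the NM sigkill delay,
// never going below 1 second, since docker cannot express a sub-second grace period.
import java.util.concurrent.TimeUnit;

public class DockerStopGracePeriod {
  static int graceSecondsFrom(long sleepDelayBeforeSigKillMs) {
    long seconds = TimeUnit.MILLISECONDS.toSeconds(sleepDelayBeforeSigKillMs);
    return (int) Math.max(1, seconds);
  }

  public static void main(String[] args) {
    System.out.println(graceSecondsFrom(250));    // 250 ms default -> 1 second floor
    System.out.println(graceSecondsFrom(15000));  // 15 s delay -> 15 second grace
  }
}
{code}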






[jira] [Commented] (YARN-8706) DelayedProcessKiller is executed for Docker containers even though docker stop sends a KILL signal after the specified grace period

2018-08-27 Thread Chandni Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594348#comment-16594348
 ] 

Chandni Singh commented on YARN-8706:
-

I can see 2 ways for addressing this:

Approach 1:
1. Deprecate {{YarnConfiguration.NM_DOCKER_STOP_GRACE_PERIOD}}. 
{{YarnConfiguration.NM_SLEEP_DELAY_BEFORE_SIGKILL_MS}} will trigger container 
kill after the delay ms. 
2. Nothing else changes. By default, docker stop uses grace period of 10 
seconds and even if {{DelayedProcessKiller}} executes after this, it will check 
whether the process is in stoppable state.

This requires no code change except deprecating 
{{YarnConfiguration.NM_DOCKER_STOP_GRACE_PERIOD}}


Approach 2: 
1.  Deprecate {{YarnConfiguration.NM_DOCKER_STOP_GRACE_PERIOD}}. 
{{YarnConfiguration.NM_SLEEP_DELAY_BEFORE_SIGKILL_MS}}
2. For Docker Runtime,  rely only on docker stop to calculate grace period in 
seconds from 
{{YarnConfiguration.NM_SLEEP_DELAY_BEFORE_SIGKILL_MS}} 
3. {{DelayedProcessKiller}} is NOT executed for Docker Runtime but executed for 
the other runtimes.

This requires a lot of change:
1. {{YarnConfiguration.NM_SLEEP_DELAY_BEFORE_SIGKILL_MS}}  needs to be passed 
to {{DockerLinuxContainerRuntime}}
2. {{DelayedProcessKiller}} should be executed for all runtimes except 
{{DockerLinuxContainerRuntime}}


NOTE: {{YarnConfiguration.NM_DOCKER_STOP_GRACE_PERIOD}} should be deprecated in 
both cases


> DelayedProcessKiller is executed for Docker containers even though docker 
> stop sends a KILL signal after the specified grace period
> ---
>
> Key: YARN-8706
> URL: https://issues.apache.org/jira/browse/YARN-8706
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>  Labels: docker
>
> {{DockerStopCommand}} adds a grace period of 10 seconds.
> 10 seconds is also the default grace time use by docker stop
>  [https://docs.docker.com/engine/reference/commandline/stop/]
> Documentation of the docker stop:
> {quote}the main process inside the container will receive {{SIGTERM}}, and 
> after a grace period, {{SIGKILL}}.
> {quote}
> There is a {{DelayedProcessKiller}} in {{ContainerExcecutor}} which executes 
> for all containers after a delay when {{sleepDelayBeforeSigKill>0}}. By 
> default this is set to {{250 milliseconds}} and so irrespective of the 
> container type, it will always get executed.
>  
> For a docker container, {{docker stop}} takes care of sending a {{SIGKILL}} 
> after the grace period
> - when sleepDelayBeforeSigKill > 10 seconds, then there is no point of 
> executing DelayedProcessKiller
> - when sleepDelayBeforeSigKill < 1 second, then the grace period should be 
> the smallest value, which is 1 second, because anyways we are forcing kill 
> after 250 ms
>  






[jira] [Commented] (YARN-8696) FederationInterceptor upgrade: home sub-cluster heartbeat async

2018-08-27 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594342#comment-16594342
 ] 

genericqa commented on YARN-8696:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 25s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 231 unchanged - 0 fixed = 233 total (was 231) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
21s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
35s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 11s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m  7s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}127m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.TestContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-8696 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-8696) FederationInterceptor upgrade: home sub-cluster heartbeat async

2018-08-27 Thread Botong Huang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594283#comment-16594283
 ] 

Botong Huang commented on YARN-8696:


v3 patch uploaded, rebased after YARN-8705

> FederationInterceptor upgrade: home sub-cluster heartbeat async
> ---
>
> Key: YARN-8696
> URL: https://issues.apache.org/jira/browse/YARN-8696
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-8696.v1.patch, YARN-8696.v2.patch, 
> YARN-8696.v3.patch
>
>
> Today in _FederationInterceptor_, the heartbeat to the home sub-cluster is 
> synchronous. After the heartbeat is sent out to the home sub-cluster, the 
> interceptor waits for the home response to come back before merging and 
> returning the (merged) heartbeat result back to the AM. If the home 
> sub-cluster is suffering from connection issues, or is down during a YarnRM 
> master-slave switch, all heartbeat threads in _FederationInterceptor_ will 
> be blocked waiting for the home response. As a result, the successful UAM 
> heartbeats from secondary sub-clusters will not be returned to the AM at 
> all. Additionally, because we keep the same heartbeat responseId between the 
> AM and the home RM, a lot of tricky handling is needed for the responseId 
> resync when it comes to _FederationInterceptor_ (part of AMRMProxy, NM) 
> work-preserving restart (YARN-6127, YARN-1336), home RM master-slave switch, 
> etc. 
> In this patch, we change the heartbeat to the home sub-cluster to be 
> asynchronous, the same way we handle UAM heartbeats in the secondaries, so 
> that a sub-cluster outage or connection issue won't prevent the AM from 
> getting responses from other sub-clusters. The responseId is also managed 
> separately for the home sub-cluster and the AM, and they increment 
> independently. The resync logic becomes much cleaner. 
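
An illustrative, self-contained sketch of the idea (this is not the YARN-8696 
patch; all class and method names below are hypothetical): a dedicated thread 
drains queued heartbeats to the home RM and keeps its own responseId, so the 
AM-facing allocate path never blocks on the home sub-cluster.

{code:java}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

/**
 * Hypothetical sketch: heartbeats to the home sub-cluster are drained by a
 * dedicated daemon thread, so the AM-facing allocate path never blocks on
 * the home RM.
 */
public class AsyncHomeHeartbeat {

  /** Stand-in for the home RM allocate protocol. */
  public interface HomeRM {
    String allocate(int responseId, String request) throws Exception;
  }

  private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();
  private volatile boolean running = true;
  private int homeResponseId = 0; // tracked for the home RM only, independent of the AM

  public void start(HomeRM homeRM, Consumer<String> responseCallback) {
    Thread t = new Thread(() -> {
      while (running) {
        try {
          String request = pending.poll(1, TimeUnit.SECONDS);
          if (request == null) {
            continue; // nothing queued yet
          }
          String response = homeRM.allocate(homeResponseId, request);
          homeResponseId++; // responseId handling stays local to this thread
          responseCallback.accept(response); // merged later with UAM responses
        } catch (Exception e) {
          // A home RM outage or failover only stalls this thread; responses
          // from secondary sub-clusters keep flowing back to the AM.
        }
      }
    }, "home-heartbeat");
    t.setDaemon(true);
    t.start();
  }

  /** Called from the AM-facing allocate path; returns immediately. */
  public void enqueue(String heartbeatRequest) {
    pending.add(heartbeatRequest);
  }

  public void stop() {
    running = false;
  }
}
{code}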



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8696) FederationInterceptor upgrade: home sub-cluster heartbeat async

2018-08-27 Thread Botong Huang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-8696:
---
Attachment: YARN-8696.v3.patch

> FederationInterceptor upgrade: home sub-cluster heartbeat async
> ---
>
> Key: YARN-8696
> URL: https://issues.apache.org/jira/browse/YARN-8696
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-8696.v1.patch, YARN-8696.v2.patch, 
> YARN-8696.v3.patch
>
>
> Today in _FederationInterceptor_, the heartbeat to the home sub-cluster is 
> synchronous. After the heartbeat is sent out to the home sub-cluster, the 
> interceptor waits for the home response to come back before merging and 
> returning the (merged) heartbeat result back to the AM. If the home 
> sub-cluster is suffering from connection issues, or is down during a YarnRM 
> master-slave switch, all heartbeat threads in _FederationInterceptor_ will 
> be blocked waiting for the home response. As a result, the successful UAM 
> heartbeats from secondary sub-clusters will not be returned to the AM at 
> all. Additionally, because we keep the same heartbeat responseId between the 
> AM and the home RM, a lot of tricky handling is needed for the responseId 
> resync when it comes to _FederationInterceptor_ (part of AMRMProxy, NM) 
> work-preserving restart (YARN-6127, YARN-1336), home RM master-slave switch, 
> etc. 
> In this patch, we change the heartbeat to the home sub-cluster to be 
> asynchronous, the same way we handle UAM heartbeats in the secondaries, so 
> that a sub-cluster outage or connection issue won't prevent the AM from 
> getting responses from other sub-clusters. The responseId is also managed 
> separately for the home sub-cluster and the AM, and they increment 
> independently. The resync logic becomes much cleaner. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8721) Scheduler should accept nodes which doesnt have an attribute when NodeAttributeType.NE is used

2018-08-27 Thread Naganarasimha G R (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594248#comment-16594248
 ] 

Naganarasimha G R commented on YARN-8721:
-

Hi [~sunilg],

          Just had an overview and the patch seems to be fine, but one query I 
had is how we are supporting boolean-type attributes. That is, we currently 
support EQ (=) and NE (!=) only, but during the requirements discussion there 
were a lot of use cases (from Microsoft) where they wanted an exists / not 
exists kind of operation instead of a value-based comparison. I think this 
might pop up during the merge. Thoughts?

> Scheduler should accept nodes which doesnt have an attribute when 
> NodeAttributeType.NE is used
> --
>
> Key: YARN-8721
> URL: https://issues.apache.org/jira/browse/YARN-8721
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8721-YARN-3409.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (YARN-8721) Scheduler should accept nodes which doesnt have an attribute when NodeAttributeType.NE is used

2018-08-27 Thread Naganarasimha G R (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-8721:

Comment: was deleted

(was: Hi [~sunilg],

There are some checkstyle issues in YARN-8718; can those be addressed along 
with this patch itself?)

> Scheduler should accept nodes which doesnt have an attribute when 
> NodeAttributeType.NE is used
> --
>
> Key: YARN-8721
> URL: https://issues.apache.org/jira/browse/YARN-8721
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8721-YARN-3409.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8721) Scheduler should accept nodes which doesnt have an attribute when NodeAttributeType.NE is used

2018-08-27 Thread Naganarasimha G R (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594234#comment-16594234
 ] 

Naganarasimha G R commented on YARN-8721:
-

Hi [~sunilg],

There are some checkstyle issues in YARN-8718; can those be addressed along 
with this patch itself?

> Scheduler should accept nodes which doesnt have an attribute when 
> NodeAttributeType.NE is used
> --
>
> Key: YARN-8721
> URL: https://issues.apache.org/jira/browse/YARN-8721
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8721-YARN-3409.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8721) Scheduler should accept nodes which doesnt have an attribute when NodeAttributeType.NE is used

2018-08-27 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594214#comment-16594214
 ] 

genericqa commented on YARN-8721:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3409 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 
56s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
3s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
34s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} YARN-3409 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 10s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 6 new + 135 unchanged - 0 fixed = 141 total (was 135) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
16s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 23s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-8721 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12937319/YARN-8721-YARN-3409.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a7cc212a4d0a 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-3409 / 87f236c |
| maven | version: 

[jira] [Commented] (YARN-2098) App priority support in Fair Scheduler

2018-08-27 Thread Ashwin Shankar (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-2098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594168#comment-16594168
 ] 

Ashwin Shankar commented on YARN-2098:
--

Hey [~ywskycn] [~templedf] any updates on this one?

> App priority support in Fair Scheduler
> --
>
> Key: YARN-2098
> URL: https://issues.apache.org/jira/browse/YARN-2098
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.5.0
>Reporter: Ashwin Shankar
>Assignee: Wei Yan
>Priority: Major
> Attachments: YARN-2098.patch, YARN-2098.patch
>
>
> This jira is created for supporting app priorities in the fair scheduler. 
> AppSchedulable hardcodes the priority of apps to 1; we should
> change this to get the priority from ApplicationSubmissionContext.
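
A minimal sketch of what reading the priority from the submission context 
could look like (illustration only, not a patch; the fallback constant and 
the helper class are assumptions):

{code:java}
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.api.records.Priority;

/**
 * Illustration only: derive an app's priority from its submission context
 * instead of hardcoding it to 1. The helper class itself is hypothetical.
 */
public final class AppPriorityHelper {

  private static final Priority DEFAULT_PRIORITY = Priority.newInstance(1);

  public static Priority resolvePriority(ApplicationSubmissionContext ctx) {
    Priority requested = (ctx == null) ? null : ctx.getPriority();
    // Fall back to the old behavior (priority 1) when the client set none.
    return (requested == null) ? DEFAULT_PRIORITY : requested;
  }

  private AppPriorityHelper() {
  }
}
{code}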



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8675) Setting hostname of docker container breaks with "host" networking mode for Apps which do not run as a YARN service

2018-08-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594107#comment-16594107
 ] 

Hudson commented on YARN-8675:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14840 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14840/])
YARN-8675. Remove default hostname for docker containers when net=host. 
(billie: rev 05b2bbeb357d4fa03e71f2bfd5d8eeb0ea6c3f60)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java


> Setting hostname of docker container breaks with "host" networking mode for 
> Apps which do not run as a YARN service
> ---
>
> Key: YARN-8675
> URL: https://issues.apache.org/jira/browse/YARN-8675
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1
>Reporter: Yesha Vora
>Assignee: Suma Shivaprasad
>Priority: Major
>  Labels: Docker
> Fix For: 3.2.0, 3.1.2
>
> Attachments: YARN-8675.1.patch, YARN-8675.2.patch, YARN-8675.3.patch, 
> YARN-8675.4.patch
>
>
> Applications like the Spark AM currently do not run as a YARN service and 
> setting hostname breaks driver/executor communication if docker version 
> >=1.13.1 , especially with wire-encryption turned on.
> YARN-8027 sets the hostname if YARN DNS is enabled. But the cluster could 
> have a mix of YARN service/native Applications.
> The proposal is to not set the hostname when "host" networking mode is 
> enabled.
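
A minimal sketch of the proposed guard (hypothetical helper, not the actual 
change in DockerLinuxContainerRuntime): only pass a hostname to docker when 
the network is not "host".

{code:java}
/**
 * Illustrative sketch only: skip setting a container hostname when the Docker
 * network is "host". Class and method names are hypothetical.
 */
public final class DockerHostnamePolicy {

  /** Returns the hostname to pass to docker run, or null to leave it unset. */
  public static String hostnameFor(String network, String requestedHostname) {
    if ("host".equalsIgnoreCase(network)) {
      // With --net=host the container shares the host's hostname; overriding
      // it breaks apps (e.g. the Spark AM) that do not run as a YARN service.
      return null;
    }
    return requestedHostname;
  }

  private DockerHostnamePolicy() {
  }
}
{code}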



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8675) Setting hostname of docker container breaks with "host" networking mode for Apps which do not run as a YARN service

2018-08-27 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-8675:
-
Affects Version/s: 3.1.0

> Setting hostname of docker container breaks with "host" networking mode for 
> Apps which do not run as a YARN service
> ---
>
> Key: YARN-8675
> URL: https://issues.apache.org/jira/browse/YARN-8675
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1
>Reporter: Yesha Vora
>Assignee: Suma Shivaprasad
>Priority: Major
>  Labels: Docker
> Fix For: 3.2.0, 3.1.2
>
> Attachments: YARN-8675.1.patch, YARN-8675.2.patch, YARN-8675.3.patch, 
> YARN-8675.4.patch
>
>
> Applications like the Spark AM currently do not run as a YARN service and 
> setting hostname breaks driver/executor communication if docker version 
> >=1.13.1 , especially with wire-encryption turned on.
> YARN-8027 sets the hostname if YARN DNS is enabled. But the cluster could 
> have a mix of YARN service/native Applications.
> The proposal is to not set the hostname when "host" networking mode is 
> enabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8675) Setting hostname of docker container breaks with "host" networking mode for Apps which do not run as a YARN service

2018-08-27 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-8675:
-
Affects Version/s: 3.1.1

> Setting hostname of docker container breaks with "host" networking mode for 
> Apps which do not run as a YARN service
> ---
>
> Key: YARN-8675
> URL: https://issues.apache.org/jira/browse/YARN-8675
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Yesha Vora
>Assignee: Suma Shivaprasad
>Priority: Major
>  Labels: Docker
> Fix For: 3.2.0, 3.1.2
>
> Attachments: YARN-8675.1.patch, YARN-8675.2.patch, YARN-8675.3.patch, 
> YARN-8675.4.patch
>
>
> Applications like the Spark AM currently do not run as a YARN service and 
> setting hostname breaks driver/executor communication if docker version 
> >=1.13.1 , especially with wire-encryption turned on.
> YARN-8027 sets the hostname if YARN DNS is enabled. But the cluster could 
> have a mix of YARN service/native Applications.
> The proposal is to not set the hostname when "host" networking mode is 
> enabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8675) Setting hostname of docker container breaks with "host" networking mode for Apps which do not run as a YARN service

2018-08-27 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594081#comment-16594081
 ] 

Billie Rinaldi commented on YARN-8675:
--

+1 for patch 4. Thanks for the patch [~suma.shivaprasad] and for the review 
[~Jim_Brennan]!

> Setting hostname of docker container breaks with "host" networking mode for 
> Apps which do not run as a YARN service
> ---
>
> Key: YARN-8675
> URL: https://issues.apache.org/jira/browse/YARN-8675
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yesha Vora
>Assignee: Suma Shivaprasad
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8675.1.patch, YARN-8675.2.patch, YARN-8675.3.patch, 
> YARN-8675.4.patch
>
>
> Applications like the Spark AM currently do not run as a YARN service and 
> setting hostname breaks driver/executor communication if docker version 
> >=1.13.1 , especially with wire-encryption turned on.
> YARN-8027 sets the hostname if YARN DNS is enabled. But the cluster could 
> have a mix of YARN service/native Applications.
> The proposal is to not set the hostname when "host" networking mode is 
> enabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8705) Refactor the UAM heartbeat thread in preparation for YARN-8696

2018-08-27 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594060#comment-16594060
 ] 

Giovanni Matteo Fumarola edited comment on YARN-8705 at 8/27/18 6:31 PM:
-

Thanks [~botong] . Committed to trunk and branch-2.


was (Author: giovanni.fumarola):
Thanks [~botong] . Committed to trunk.

> Refactor the UAM heartbeat thread in preparation for YARN-8696
> --
>
> Key: YARN-8705
> URL: https://issues.apache.org/jira/browse/YARN-8705
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-8705.v1.patch, YARN-8705.v2.patch
>
>
> Refactor the UAM heartbeat thread as well as call back method in preparation 
> for YARN-8696 FederationInterceptor upgrade



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8705) Refactor the UAM heartbeat thread in preparation for YARN-8696

2018-08-27 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-8705:
---
Fix Version/s: 3.2.0

> Refactor the UAM heartbeat thread in preparation for YARN-8696
> --
>
> Key: YARN-8705
> URL: https://issues.apache.org/jira/browse/YARN-8705
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-8705.v1.patch, YARN-8705.v2.patch
>
>
> Refactor the UAM heartbeat thread as well as call back method in preparation 
> for YARN-8696 FederationInterceptor upgrade



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8705) Refactor the UAM heartbeat thread in preparation for YARN-8696

2018-08-27 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594060#comment-16594060
 ] 

Giovanni Matteo Fumarola commented on YARN-8705:


Thanks [~botong] . Committed to trunk.

> Refactor the UAM heartbeat thread in preparation for YARN-8696
> --
>
> Key: YARN-8705
> URL: https://issues.apache.org/jira/browse/YARN-8705
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-8705.v1.patch, YARN-8705.v2.patch
>
>
> Refactor the UAM heartbeat thread as well as call back method in preparation 
> for YARN-8696 FederationInterceptor upgrade



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8705) Refactor the UAM heartbeat thread in preparation for YARN-8696

2018-08-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594054#comment-16594054
 ] 

Hudson commented on YARN-8705:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14839 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14839/])
YARN-8705. Refactor the UAM heartbeat thread in preparation for (gifuma: rev 
f1525825623a1307b5aa55c456b6afa3e0c61135)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedApplicationManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/FederationInterceptor.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/AMHeartbeatRequestHandler.java


> Refactor the UAM heartbeat thread in preparation for YARN-8696
> --
>
> Key: YARN-8705
> URL: https://issues.apache.org/jira/browse/YARN-8705
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-8705.v1.patch, YARN-8705.v2.patch
>
>
> Refactor the UAM heartbeat thread as well as call back method in preparation 
> for YARN-8696 FederationInterceptor upgrade



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8721) Scheduler should accept nodes which doesnt have an attribute when NodeAttributeType.NE is used

2018-08-27 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594049#comment-16594049
 ] 

Sunil Govindan commented on YARN-8721:
--

Fixed the below issues.
 # When python!=3 is given as a placement constraint, we should accept nodes 
which don't have the "python" attribute added (see the sketch below).
 # When the RM is restarted, the stored attributes have to be refreshed for 
the scheduler; we do the same in the partition case. If we don't do this, the 
scheduler won't have the attribute information per node. Tested and it works 
fine.
 # Added more unit tests for scheduler cases.

cc [~cheersyang] [~naganarasimha...@apache.org] pls help to check.
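
A self-contained sketch of the NE semantics from the first point above (the 
types here are hypothetical stand-ins, not the scheduler classes touched by 
the patch): a node that has no value at all for the attribute satisfies an 
attr!=value constraint.

{code:java}
import java.util.Map;

/**
 * Illustrative sketch of NE matching for node attributes; hypothetical types,
 * not the YARN-3409 scheduler code.
 */
public final class NeAttributeMatcher {

  public enum Op { EQ, NE }

  /** @param nodeAttributes attribute name to value, as reported by the node */
  public static boolean matches(Map<String, String> nodeAttributes,
      String attribute, Op op, String value) {
    String actual = nodeAttributes.get(attribute);
    if (op == Op.NE) {
      // "python != 3" is satisfied both when the value differs and when the
      // node has no "python" attribute at all.
      return actual == null || !actual.equals(value);
    }
    return actual != null && actual.equals(value);
  }

  private NeAttributeMatcher() {
  }
}
{code}

With this reading, a node that never reported a "python" attribute is accepted 
for python!=3, which is the behavior described in the first point.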

> Scheduler should accept nodes which doesnt have an attribute when 
> NodeAttributeType.NE is used
> --
>
> Key: YARN-8721
> URL: https://issues.apache.org/jira/browse/YARN-8721
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8721-YARN-3409.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8721) Scheduler should accept nodes which doesnt have an attribute when NodeAttributeType.NE is used

2018-08-27 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8721:
-
Attachment: YARN-8721-YARN-3409.001.patch

> Scheduler should accept nodes which doesnt have an attribute when 
> NodeAttributeType.NE is used
> --
>
> Key: YARN-8721
> URL: https://issues.apache.org/jira/browse/YARN-8721
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8721-YARN-3409.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8721) Scheduler should accept nodes which doesnt have an attribute when NodeAttributeType.NE is used

2018-08-27 Thread Sunil Govindan (JIRA)
Sunil Govindan created YARN-8721:


 Summary: Scheduler should accept nodes which doesnt have an 
attribute when NodeAttributeType.NE is used
 Key: YARN-8721
 URL: https://issues.apache.org/jira/browse/YARN-8721
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Sunil Govindan
Assignee: Sunil Govindan






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-8289) Modify distributedshell to support Node Attributes

2018-08-27 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan resolved YARN-8289.
--
Resolution: Duplicate

> Modify distributedshell to support Node Attributes
> --
>
> Key: YARN-8289
> URL: https://issues.apache.org/jira/browse/YARN-8289
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: distributed-shell
>Affects Versions: YARN-3409
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
>
> Modifications required in Distributed shell to support NodeAttributes



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8719) Typo correction for yarn configuration in OpportunisticContainers(federation) docs

2018-08-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16593993#comment-16593993
 ] 

Hudson commented on YARN-8719:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14837 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14837/])
YARN-8719. Typo correction for yarn configuration in (wwei: rev 
e8b063f63049d781f4bd67e2ac928c03fd7b7941)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/OpportunisticContainers.md.vm


> Typo correction for yarn configuration in OpportunisticContainers(federation) 
> docs
> --
>
> Key: YARN-8719
> URL: https://issues.apache.org/jira/browse/YARN-8719
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, federation
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Y. SREENIVASULU REDDY
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: YARN-8719.001.patch
>
>
> "yarn.resourcemanger.scheduler.address"  This configuration have typo mistake.
> https://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Quick_Guide



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8699) Add Yarnclient#yarnclusterMetrics API implementation in router

2018-08-27 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16593989#comment-16593989
 ] 

Giovanni Matteo Fumarola commented on YARN-8699:


Overall [^YARN-8699.001.patch] looks good to me.
Please fix the findbugs warnings.

> Add Yarnclient#yarnclusterMetrics API implementation in router
> --
>
> Key: YARN-8699
> URL: https://issues.apache.org/jira/browse/YARN-8699
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8699.001.patch
>
>
> Implement YarnclusterMetrics API in FederationClientInterceptor



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8719) Typo correction for yarn configuration in OpportunisticContainers(federation) docs

2018-08-27 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16593953#comment-16593953
 ] 

Weiwei Yang edited comment on YARN-8719 at 8/27/18 5:14 PM:


Just committed to trunk, branch-3.0 and branch-3.1. Thanks for the contribution 
[~sreenivasulureddy].


was (Author: cheersyang):
Just committed to trunk and branch-3.1. Thanks for the contribution 
[~sreenivasulureddy].

> Typo correction for yarn configuration in OpportunisticContainers(federation) 
> docs
> --
>
> Key: YARN-8719
> URL: https://issues.apache.org/jira/browse/YARN-8719
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, federation
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Y. SREENIVASULU REDDY
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: YARN-8719.001.patch
>
>
> "yarn.resourcemanger.scheduler.address"  This configuration have typo mistake.
> https://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Quick_Guide



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8719) Typo correction for yarn configuration in OpportunisticContainers(federation) docs

2018-08-27 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8719:
--
Fix Version/s: (was: 3.0.3)
   3.0.4

> Typo correction for yarn configuration in OpportunisticContainers(federation) 
> docs
> --
>
> Key: YARN-8719
> URL: https://issues.apache.org/jira/browse/YARN-8719
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, federation
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Y. SREENIVASULU REDDY
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: YARN-8719.001.patch
>
>
> "yarn.resourcemanger.scheduler.address"  This configuration have typo mistake.
> https://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Quick_Guide



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8719) Typo correction for yarn configuration in OpportunisticContainers(federation) docs

2018-08-27 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8719:
--
Fix Version/s: 3.0.3

> Typo correction for yarn configuration in OpportunisticContainers(federation) 
> docs
> --
>
> Key: YARN-8719
> URL: https://issues.apache.org/jira/browse/YARN-8719
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, federation
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Y. SREENIVASULU REDDY
>Priority: Major
> Fix For: 3.2.0, 3.0.3, 3.1.2
>
> Attachments: YARN-8719.001.patch
>
>
> "yarn.resourcemanger.scheduler.address"  This configuration have typo mistake.
> https://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Quick_Guide



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8675) Setting hostname of docker container breaks with "host" networking mode for Apps which do not run as a YARN service

2018-08-27 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16593955#comment-16593955
 ] 

genericqa commented on YARN-8675:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
15s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-8675 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12937303/YARN-8675.4.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e102bcbde73b 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6eecd25 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21691/testReport/ |
| Max. process+thread count | 305 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21691/console |
| Powered by | Apache Yetus 0.9.0-SNAPSHOT   

[jira] [Assigned] (YARN-8719) Typo correction for yarn configuration in OpportunisticContainers(federation) docs

2018-08-27 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang reassigned YARN-8719:
-

Assignee: Y. SREENIVASULU REDDY

> Typo correction for yarn configuration in OpportunisticContainers(federation) 
> docs
> --
>
> Key: YARN-8719
> URL: https://issues.apache.org/jira/browse/YARN-8719
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, federation
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Y. SREENIVASULU REDDY
>Priority: Major
> Attachments: YARN-8719.001.patch
>
>
> "yarn.resourcemanger.scheduler.address"  This configuration have typo mistake.
> https://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Quick_Guide



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8699) Add Yarnclient#yarnclusterMetrics API implementation in router

2018-08-27 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16593943#comment-16593943
 ] 

genericqa commented on YARN-8699:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
11s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 22s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 22 new + 214 unchanged - 0 fixed = 236 total (was 214) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
48s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router 
generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
48s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
30s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
36s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
43s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router
 |
|  |  org.apache.hadoop.yarn.server.router.clientrm.ClientMethod.getParams() 
may expose 

[jira] [Commented] (YARN-8719) Typo correction for yarn configuration in OpportunisticContainers(federation) docs

2018-08-27 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16593939#comment-16593939
 ] 

Weiwei Yang commented on YARN-8719:
---

+1, will commit this shortly. Thanks [~sreenivasulureddy].

> Typo correction for yarn configuration in OpportunisticContainers(federation) 
> docs
> --
>
> Key: YARN-8719
> URL: https://issues.apache.org/jira/browse/YARN-8719
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, federation
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Y. SREENIVASULU REDDY
>Priority: Major
> Attachments: YARN-8719.001.patch
>
>
> "yarn.resourcemanger.scheduler.address"  This configuration have typo mistake.
> https://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Quick_Guide



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8675) Setting hostname of docker container breaks with "host" networking mode for Apps which do not run as a YARN service

2018-08-27 Thread Suma Shivaprasad (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16593869#comment-16593869
 ] 

Suma Shivaprasad commented on YARN-8675:


Fixed checkstyle issues

> Setting hostname of docker container breaks with "host" networking mode for 
> Apps which do not run as a YARN service
> ---
>
> Key: YARN-8675
> URL: https://issues.apache.org/jira/browse/YARN-8675
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yesha Vora
>Assignee: Suma Shivaprasad
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8675.1.patch, YARN-8675.2.patch, YARN-8675.3.patch, 
> YARN-8675.4.patch
>
>
> Applications like the Spark AM currently do not run as a YARN service and 
> setting hostname breaks driver/executor communication if docker version 
> >=1.13.1 , especially with wire-encryption turned on.
> YARN-8027 sets the hostname if YARN DNS is enabled. But the cluster could 
> have a mix of YARN service/native Applications.
> The proposal is to not set the hostname when "host" networking mode is 
> enabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8675) Setting hostname of docker container breaks with "host" networking mode for Apps which do not run as a YARN service

2018-08-27 Thread Suma Shivaprasad (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-8675:
---
Attachment: YARN-8675.4.patch

> Setting hostname of docker container breaks with "host" networking mode for 
> Apps which do not run as a YARN service
> ---
>
> Key: YARN-8675
> URL: https://issues.apache.org/jira/browse/YARN-8675
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yesha Vora
>Assignee: Suma Shivaprasad
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8675.1.patch, YARN-8675.2.patch, YARN-8675.3.patch, 
> YARN-8675.4.patch
>
>
> Applications like the Spark AM currently do not run as a YARN service and 
> setting hostname breaks driver/executor communication if docker version 
> >=1.13.1 , especially with wire-encryption turned on.
> YARN-8027 sets the hostname if YARN DNS is enabled. But the cluster could 
> have a mix of YARN service/native Applications.
> The proposal is to not set the hostname when "host" networking mode is 
> enabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8642) Add support for tmpfs mounts with the Docker runtime

2018-08-27 Thread Eric Badger (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16593856#comment-16593856
 ] 

Eric Badger commented on YARN-8642:
---

{noformat}
+  const char *regex_str = "^/[^:]+$";
{noformat}
This regex is pretty permissive. Is there anything stopping a user from 
maliciously crafting the tmpfs variable? Something like  
"YARN_CONTAINER_RUNTIME_DOCKER_TMPFS_MOUNTS=/foo/bar || "? 

> Add support for tmpfs mounts with the Docker runtime
> 
>
> Key: YARN-8642
> URL: https://issues.apache.org/jira/browse/YARN-8642
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Craig Condit
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8642.001.patch, YARN-8642.002.patch
>
>
> Add support to the existing Docker runtime to allow the user to request tmpfs 
> mounts for their containers. For example:
> {code}/usr/bin/docker run --name=container_name --tmpfs /run image 
> /bootstrap/start-systemd
> {code}
> One use case is to allow systemd to run as PID 1 in a non-privileged 
> container; /run is expected to be a tmpfs mount in the container for that to 
> work.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8699) Add Yarnclient#yarnclusterMetrics API implementation in router

2018-08-27 Thread Bibin A Chundatt (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-8699:
---
Attachment: YARN-8699.001.patch

> Add Yarnclient#yarnclusterMetrics API implementation in router
> --
>
> Key: YARN-8699
> URL: https://issues.apache.org/jira/browse/YARN-8699
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8699.001.patch
>
>
> Implement YarnclusterMetrics API in FederationClientInterceptor



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8699) Add Yarnclient#yarnclusterMetrics API implementation in router

2018-08-27 Thread Bibin A Chundatt (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt reassigned YARN-8699:
--

Assignee: Bibin A Chundatt

> Add Yarnclient#yarnclusterMetrics API implementation in router
> --
>
> Key: YARN-8699
> URL: https://issues.apache.org/jira/browse/YARN-8699
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
>
> Implement YarnclusterMetrics API in FederationClientInterceptor



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8680) YARN NM: Implement Iterable Abstraction for LocalResourceTrackerstate

2018-08-27 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16593804#comment-16593804
 ] 

genericqa commented on YARN-8680:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 0 new + 238 unchanged - 4 fixed = 238 total (was 242) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
40s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-8680 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12937296/YARN-8680.01.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 696eac07d8eb 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 744ce20 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21689/testReport/ |
| Max. process+thread count | 407 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21689/console |
| Powered by | Apache Yetus 0.9.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Assigned] (YARN-8720) CapacityScheduler does not enforce yarn.scheduler.capacity.<queue-path>.maximum-allocation-mb/vcores when configured

2018-08-27 Thread Tarun Parimi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tarun Parimi reassigned YARN-8720:
--

Assignee: Tarun Parimi

> CapacityScheduler does not enforce 
> yarn.scheduler.capacity.<queue-path>.maximum-allocation-mb/vcores when 
> configured
> 
>
> Key: YARN-8720
> URL: https://issues.apache.org/jira/browse/YARN-8720
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Tarun Parimi
>Assignee: Tarun Parimi
>Priority: Major
>
> The value of yarn.scheduler.capacity.<queue-path>.maximum-allocation-mb/vcores 
> is not strictly enforced when applications request containers. An 
> InvalidResourceRequestException is thrown only when the ResourceRequest exceeds 
> the global value of yarn.scheduler.maximum-allocation-mb/vcores. Consider, for 
> example, the following configuration:
>  
> {code:java}
> yarn.scheduler.maximum-allocation-mb=4096
> yarn.scheduler.capacity.root.test.maximum-allocation-mb=2048
> {code}
>  
> The DSShell command below runs successfully and requests an AM container of 
> 4096 MB, which exceeds the 2048 MB maximum configured for the test queue:
> {code:java}
> yarn jar $YARN_HOME/hadoop-yarn-applications-distributedshell.jar 
> -num_containers 1 -jar 
> $YARN_HOME/hadoop-yarn-applications-distributedshell.jar -shell_command 
> "sleep 60" -container_memory=4096 -master_memory=4096 -queue=test{code}
> Instead, the application should not be launched and should fail with an 
> InvalidResourceRequestException. The child containers, however, are requested 
> at 2048 MB because the DSShell AppMaster performs the check below before 
> sending the ResourceRequest to the RM.
> {code:java}
> // A resource ask cannot exceed the max.
> if (containerMemory > maxMem) {
>  LOG.info("Container memory specified above max threshold of cluster."
>  + " Using max value." + ", specified=" + containerMemory + ", max="
>  + maxMem);
>  containerMemory = maxMem;
> }{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8720) CapacityScheduler does not enforce yarn.scheduler.capacity.<queue-path>.maximum-allocation-mb/vcores when configured

2018-08-27 Thread Tarun Parimi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16593755#comment-16593755
 ] 

Tarun Parimi commented on YARN-8720:


This issue applies to all applications: the ResourceManager should validate 
against yarn.scheduler.capacity.<queue-path>.maximum-allocation-mb/vcores, when 
it is configured, instead of only yarn.scheduler.maximum-allocation-mb.

I think the fix here can be to call 
CapacityScheduler#getMaximumResourceCapability(queueName) instead of 
CapacityScheduler#getMaximumResourceCapability() whenever we validate/normalize 
ResourceRequests after the application has been placed into a queue.
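
A minimal sketch of that idea, assuming a validation step that runs once the 
application has been placed into its queue; everything except 
CapacityScheduler#getMaximumResourceCapability(queueName) is an illustrative 
name, not the actual RM code path:

{code:java}
// Sketch only: check the ask against the per-queue maximum rather than the
// cluster-wide yarn.scheduler.maximum-allocation-mb/vcores.
private static void validateAgainstQueueMaximum(CapacityScheduler scheduler,
    String queueName, ResourceRequest request)
    throws InvalidResourceRequestException {
  Resource queueMax = scheduler.getMaximumResourceCapability(queueName);
  Resource asked = request.getCapability();
  if (asked.getMemorySize() > queueMax.getMemorySize()
      || asked.getVirtualCores() > queueMax.getVirtualCores()) {
    throw new InvalidResourceRequestException("Requested resource " + asked
        + " exceeds the maximum allocation " + queueMax
        + " of queue " + queueName);
  }
}
{code}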

> CapacityScheduler does not enforce 
> yarn.scheduler.capacity.<queue-path>.maximum-allocation-mb/vcores when 
> configured
> 
>
> Key: YARN-8720
> URL: https://issues.apache.org/jira/browse/YARN-8720
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Tarun Parimi
>Priority: Major
>
> The value of yarn.scheduler.capacity.<queue-path>.maximum-allocation-mb/vcores 
> is not strictly enforced when applications request containers. An 
> InvalidResourceRequestException is thrown only when the ResourceRequest exceeds 
> the global value of yarn.scheduler.maximum-allocation-mb/vcores. Consider, for 
> example, the following configuration:
>  
> {code:java}
> yarn.scheduler.maximum-allocation-mb=4096
> yarn.scheduler.capacity.root.test.maximum-allocation-mb=2048
> {code}
>  
> The DSShell command below runs successfully and requests an AM container of 
> 4096 MB, which exceeds the 2048 MB maximum configured for the test queue:
> {code:java}
> yarn jar $YARN_HOME/hadoop-yarn-applications-distributedshell.jar 
> -num_containers 1 -jar 
> $YARN_HOME/hadoop-yarn-applications-distributedshell.jar -shell_command 
> "sleep 60" -container_memory=4096 -master_memory=4096 -queue=test{code}
> Instead, the application should not be launched and should fail with an 
> InvalidResourceRequestException. The child containers, however, are requested 
> at 2048 MB because the DSShell AppMaster performs the check below before 
> sending the ResourceRequest to the RM.
> {code:java}
> // A resource ask cannot exceed the max.
> if (containerMemory > maxMem) {
>  LOG.info("Container memory specified above max threshold of cluster."
>  + " Using max value." + ", specified=" + containerMemory + ", max="
>  + maxMem);
>  containerMemory = maxMem;
> }{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8720) CapacityScheduler does not enforce yarn.scheduler.capacity.<queue-path>.maximum-allocation-mb/vcores when configured

2018-08-27 Thread Tarun Parimi (JIRA)
Tarun Parimi created YARN-8720:
--

 Summary: CapacityScheduler does not enforce 
yarn.scheduler.capacity.<queue-path>.maximum-allocation-mb/vcores when 
configured
 Key: YARN-8720
 URL: https://issues.apache.org/jira/browse/YARN-8720
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacity scheduler, capacityscheduler, resourcemanager
Affects Versions: 2.7.0
Reporter: Tarun Parimi


The value of yarn.scheduler.capacity.<queue-path>.maximum-allocation-mb/vcores 
is not strictly enforced when applications request containers. An 
InvalidResourceRequestException is thrown only when the ResourceRequest exceeds 
the global value of yarn.scheduler.maximum-allocation-mb/vcores. Consider, for 
example, the following configuration:

 
{code:java}
yarn.scheduler.maximum-allocation-mb=4096
yarn.scheduler.capacity.root.test.maximum-allocation-mb=2048
{code}
 

The DSShell command below runs successfully and requests an AM container of 
4096 MB, which exceeds the 2048 MB maximum configured for the test queue:
{code:java}
yarn jar $YARN_HOME/hadoop-yarn-applications-distributedshell.jar 
-num_containers 1 -jar $YARN_HOME/hadoop-yarn-applications-distributedshell.jar 
-shell_command "sleep 60" -container_memory=4096 -master_memory=4096 
-queue=test{code}
Instead, the application should not be launched and should fail with an 
InvalidResourceRequestException. The child containers, however, are requested 
at 2048 MB because the DSShell AppMaster performs the check below before 
sending the ResourceRequest to the RM.
{code:java}
// A resource ask cannot exceed the max.
if (containerMemory > maxMem) {
 LOG.info("Container memory specified above max threshold of cluster."
 + " Using max value." + ", specified=" + containerMemory + ", max="
 + maxMem);
 containerMemory = maxMem;
}{code}
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8680) YARN NM: Implement Iterable Abstraction for LocalResourceTrackerstate

2018-08-27 Thread Pradeep Ambati (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pradeep Ambati updated YARN-8680:
-
Attachment: YARN-8680.01.patch

> YARN NM: Implement Iterable Abstraction for LocalResourceTrackerstate
> -
>
> Key: YARN-8680
> URL: https://issues.apache.org/jira/browse/YARN-8680
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Reporter: Pradeep Ambati
>Assignee: Pradeep Ambati
>Priority: Critical
> Attachments: YARN-8680.00.patch, YARN-8680.01.patch
>
>
> Similar to YARN-8242, implement an iterable abstraction for 
> LocalResourceTrackerState so that completed and in-progress resources are 
> loaded on demand rather than all at once for the respective state.
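
A minimal, generic sketch of that pattern; RawCursor and the parse function are 
hypothetical stand-ins for the NM state-store cursor and protobuf parsing, not 
actual NodeManager classes. The idea is to wrap the underlying cursor in an 
Iterable so entries are deserialized one at a time as the caller walks them:

{code:java}
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.function.Function;

/** Hypothetical stand-in for a state-store cursor (e.g. a LevelDB iterator). */
interface RawCursor {
  boolean hasNext();
  byte[] next();
}

/** Lazily parses state-store records instead of loading them all into a list. */
final class LazyTrackerState<T> implements Iterable<T> {
  private final RawCursor cursor;
  private final Function<byte[], T> parse;

  LazyTrackerState(RawCursor cursor, Function<byte[], T> parse) {
    this.cursor = cursor;
    this.parse = parse;
  }

  @Override
  public Iterator<T> iterator() {
    // Note: backed by a single cursor, so this Iterable is single-pass.
    return new Iterator<T>() {
      @Override
      public boolean hasNext() {
        return cursor.hasNext();
      }

      @Override
      public T next() {
        if (!cursor.hasNext()) {
          throw new NoSuchElementException();
        }
        return parse.apply(cursor.next()); // deserialize on demand
      }
    };
  }
}
{code}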



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8644) Make RMAppImpl$FinalTransition more readable + add more test coverage

2018-08-27 Thread Zoltan Siegl (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Siegl updated YARN-8644:
---
Attachment: MAPREDUCE-6861.002.patch

> Make RMAppImpl$FinalTransition more readable + add more test coverage
> -
>
> Key: YARN-8644
> URL: https://issues.apache.org/jira/browse/YARN-8644
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-8644.001.patch, YARN-8644.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8644) Make RMAppImpl$FinalTransition more readable + add more test coverage

2018-08-27 Thread Zoltan Siegl (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Siegl updated YARN-8644:
---
Attachment: (was: MAPREDUCE-6861.002.patch)

> Make RMAppImpl$FinalTransition more readable + add more test coverage
> -
>
> Key: YARN-8644
> URL: https://issues.apache.org/jira/browse/YARN-8644
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-8644.001.patch, YARN-8644.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8644) Make RMAppImpl$FinalTransition more readable + add more test coverage

2018-08-27 Thread Zoltan Siegl (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16593650#comment-16593650
 ] 

Zoltan Siegl commented on YARN-8644:


Hi [~snemeth]!
 Thanks, I am done with the review. I have a few minor comments.

 * 
{{org/apache/hadoop/yarn/server/resourcemanager/rmapp/AppCreationTestHelper.java:295}}
{code:java}
Assert.assertEquals("application tracking url is not correct",
null, application.getTrackingUrl());
{code}
Shouldn't we consider using assertNull here?

 * 
{{org/apache/hadoop/yarn/server/resourcemanager/rmapp/TestRMAppTransitions.java:951}}
{code:java}
Assert.assertTrue("RMAppImpl: can't handle " + rmAppEventType
 + " at state " + state, false);
{code}
Since you are boy-scouting around here anyway, could this be changed to an 
{{Assert.fail}}?

Everything else looks good to me; these are just minor things that you could 
consider touching if you create a new patch anyway.
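
For clarity, the two suggested replacements would look roughly like this:

{code:java}
// Suggestion 1: assertNull reads better than assertEquals(..., null, ...)
Assert.assertNull("application tracking url is not correct",
    application.getTrackingUrl());

// Suggestion 2: an unconditional failure is clearer than assertTrue(msg, false)
Assert.fail("RMAppImpl: can't handle " + rmAppEventType + " at state " + state);
{code}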
 

> Make RMAppImpl$FinalTransition more readable + add more test coverage
> -
>
> Key: YARN-8644
> URL: https://issues.apache.org/jira/browse/YARN-8644
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-8644.001.patch, YARN-8644.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8719) Typo correction for yarn configuration in OpportunisticContainers(federation) docs

2018-08-27 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16593588#comment-16593588
 ] 

genericqa commented on YARN-8719:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
38m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
51s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-8719 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12937271/YARN-8719.001.patch |
| Optional Tests |  dupname  asflicense  mvnsite  |
| uname | Linux 31709b0ac817 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 91836f0 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 303 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21688/console |
| Powered by | Apache Yetus 0.9.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Typo correction for yarn configuration in OpportunisticContainers(federation) 
> docs
> --
>
> Key: YARN-8719
> URL: https://issues.apache.org/jira/browse/YARN-8719
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, federation
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Y. SREENIVASULU REDDY
>Priority: Major
> Attachments: YARN-8719.001.patch
>
>
> "yarn.resourcemanger.scheduler.address"  This configuration have typo mistake.
> https://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Quick_Guide



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8095) Allow disable non-exclusive allocation

2018-08-27 Thread kyungwan nam (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16593584#comment-16593584
 ] 

kyungwan nam commented on YARN-8095:


[~leftnoteasy], sorry to bother you, but could you share your thoughts?

Thanks

> Allow disable non-exclusive allocation
> --
>
> Key: YARN-8095
> URL: https://issues.apache.org/jira/browse/YARN-8095
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Affects Versions: 2.8.3
>Reporter: kyungwan nam
>Priority: Major
> Attachments: YARN-8095-branch-2.8.001.patch
>
>
> We have a 'longlived' queue, which is used for long-lived apps.
> When the default Partition does not have enough resources, containers for a 
> long-lived app can be allocated on a sharable Partition.
> Those containers can then be easily preempted, and we don't want long-lived 
> apps to be killed abruptly.
> Currently, non-exclusive allocation can happen regardless of whether the 
> queue is accessible to the sharable Partition.
> It would be good if non-exclusive allocation could be disabled at the queue 
> level.
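
Purely as a hypothetical illustration of what such a queue-level switch could 
look like (this property does not exist today; the name is made up):

{code}
# hypothetical capacity-scheduler.xml setting, for illustration only
yarn.scheduler.capacity.root.longlived.allow-non-exclusive-allocation=false
{code}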



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8719) Typo correction for yarn configuration in OpportunisticContainers(federation) docs

2018-08-27 Thread Y. SREENIVASULU REDDY (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Y. SREENIVASULU REDDY updated YARN-8719:

Attachment: YARN-8719.001.patch

> Typo correction for yarn configuration in OpportunisticContainers(federation) 
> docs
> --
>
> Key: YARN-8719
> URL: https://issues.apache.org/jira/browse/YARN-8719
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, federation
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Y. SREENIVASULU REDDY
>Priority: Major
> Attachments: YARN-8719.001.patch
>
>
> "yarn.resourcemanger.scheduler.address"  This configuration have typo mistake.
> https://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Quick_Guide



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8719) Typo correction for yarn configuration in OpportunisticContainers(federation) docs

2018-08-27 Thread Y. SREENIVASULU REDDY (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16593515#comment-16593515
 ] 

Y. SREENIVASULU REDDY commented on YARN-8719:
-

I will attach the patch for the same.

> Typo correction for yarn configuration in OpportunisticContainers(federation) 
> docs
> --
>
> Key: YARN-8719
> URL: https://issues.apache.org/jira/browse/YARN-8719
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, federation
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Y. SREENIVASULU REDDY
>Priority: Major
>
> "yarn.resourcemanger.scheduler.address"  This configuration have typo mistake.
> https://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Quick_Guide



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8719) Typo correction for yarn configuration in OpportunisticContainers(federation) docs

2018-08-27 Thread Y. SREENIVASULU REDDY (JIRA)
Y. SREENIVASULU REDDY created YARN-8719:
---

 Summary: Typo correction for yarn configuration in 
OpportunisticContainers(federation) docs
 Key: YARN-8719
 URL: https://issues.apache.org/jira/browse/YARN-8719
 Project: Hadoop YARN
  Issue Type: Bug
  Components: documentation, federation
Affects Versions: 3.1.1, 3.2.0
Reporter: Y. SREENIVASULU REDDY


"yarn.resourcemanger.scheduler.address"  This configuration have typo mistake.

https://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html#Quick_Guide
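
For reference, the corrected property name (the value shown is only an example; 
8030 is the default scheduler address port):

{code}
# docs currently read "yarn.resourcemanger.scheduler.address"
yarn.resourcemanager.scheduler.address=<rm-hostname>:8030
{code}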



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-1013) CS should watch resource utilization of containers and allocate speculative containers if appropriate

2018-08-27 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-1013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16593499#comment-16593499
 ] 

Weiwei Yang commented on YARN-1013:
---

Hi [~asuresh], please go ahead. I am busy with something else right now, so I 
won't be able to get to this one any time soon. Thank you.

> CS should watch resource utilization of containers and allocate speculative 
> containers if appropriate
> -
>
> Key: YARN-1013
> URL: https://issues.apache.org/jira/browse/YARN-1013
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun C Murthy
>Assignee: Weiwei Yang
>Priority: Major
>
> CS should watch resource utilization of containers (provided by NM in 
> heartbeat) and allocate speculative containers (at lower OS priority) if 
> appropriate.
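
A very rough sketch of the kind of check this implies, driven by the 
NM-reported utilization from the heartbeat; the method name, the 
overAllocationFactor knob, and the memory-only comparison are assumptions for 
illustration only (CPU would be handled analogously), not the proposed CS code:

{code:java}
// Illustration only: allow a speculative (e.g. OPPORTUNISTIC, lower-priority)
// container on a node when its actual memory utilization leaves enough
// headroom for the ask, even though the node is fully allocated on paper.
boolean canAllocateSpeculative(ResourceUtilization nodeUtilization,
    Resource nodeCapacity, Resource speculativeAsk, float overAllocationFactor) {
  long usableMemoryMb =
      (long) (nodeCapacity.getMemorySize() * overAllocationFactor);
  long memoryHeadroomMb = usableMemoryMb - nodeUtilization.getPhysicalMemory();
  return memoryHeadroomMb >= speculativeAsk.getMemorySize();
}
{code}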



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-1013) CS should watch resource utilization of containers and allocate speculative containers if appropriate

2018-08-27 Thread Arun Suresh (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-1013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16593495#comment-16593495
 ] 

Arun Suresh commented on YARN-1013:
---

[~cheersyang], if you haven't started on this yet, I was wondering if I might 
take it up...


> CS should watch resource utilization of containers and allocate speculative 
> containers if appropriate
> -
>
> Key: YARN-1013
> URL: https://issues.apache.org/jira/browse/YARN-1013
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun C Murthy
>Assignee: Weiwei Yang
>Priority: Major
>
> CS should watch resource utilization of containers (provided by NM in 
> heartbeat) and allocate speculative containers (at lower OS priority) if 
> appropriate.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8535) DistributedShell unit tests are failing

2018-08-27 Thread Abhishek Modi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16593347#comment-16593347
 ] 

Abhishek Modi commented on YARN-8535:
-

[~bibinchundatt] I will upload a patch by EOD today. Thanks.

> DistributedShell unit tests are failing
> ---
>
> Key: YARN-8535
> URL: https://issues.apache.org/jira/browse/YARN-8535
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: distributed-shell, timelineservice
>Reporter: Eric Yang
>Assignee: Abhishek Modi
>Priority: Major
>
> These tests have been failing for a while in trunk:
> |[testDSShellWithoutDomainV2|https://builds.apache.org/job/PreCommit-YARN-Build/21243/testReport/org.apache.hadoop.yarn.applications.distributedshell/TestDistributedShell/testDSShellWithoutDomainV2]|1
>  min 20 sec|Failed|
> |[testDSShellWithoutDomainV2CustomizedFlow|https://builds.apache.org/job/PreCommit-YARN-Build/21243/testReport/org.apache.hadoop.yarn.applications.distributedshell/TestDistributedShell/testDSShellWithoutDomainV2CustomizedFlow]|1
>  min 20 sec|Failed|
> |[testDSShellWithoutDomainV2DefaultFlow|https://builds.apache.org/job/PreCommit-YARN-Build/21243/testReport/org.apache.hadoop.yarn.applications.distributedshell/TestDistributedShell/testDSShellWithoutDomainV2DefaultFlow]|1
>  min 20 sec|Failed|
> The root causes are the same:
> {code:java}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell.verifyEntityTypeFileExists(TestDistributedShell.java:628)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell.checkTimelineV2(TestDistributedShell.java:546)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell.testDSShell(TestDistributedShell.java:451)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell.testDSShell(TestDistributedShell.java:310)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell.testDSShellWithoutDomainV2(TestDistributedShell.java:306)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8535) DistributedShell unit tests are failing

2018-08-27 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16593339#comment-16593339
 ] 

Bibin A Chundatt commented on YARN-8535:


[~abmodi]

Could you post a patch for the failure?

> DistributedShell unit tests are failing
> ---
>
> Key: YARN-8535
> URL: https://issues.apache.org/jira/browse/YARN-8535
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: distributed-shell, timelineservice
>Reporter: Eric Yang
>Assignee: Abhishek Modi
>Priority: Major
>
> These tests have been failing for a while in trunk:
> |[testDSShellWithoutDomainV2|https://builds.apache.org/job/PreCommit-YARN-Build/21243/testReport/org.apache.hadoop.yarn.applications.distributedshell/TestDistributedShell/testDSShellWithoutDomainV2]|1
>  min 20 sec|Failed|
> |[testDSShellWithoutDomainV2CustomizedFlow|https://builds.apache.org/job/PreCommit-YARN-Build/21243/testReport/org.apache.hadoop.yarn.applications.distributedshell/TestDistributedShell/testDSShellWithoutDomainV2CustomizedFlow]|1
>  min 20 sec|Failed|
> |[testDSShellWithoutDomainV2DefaultFlow|https://builds.apache.org/job/PreCommit-YARN-Build/21243/testReport/org.apache.hadoop.yarn.applications.distributedshell/TestDistributedShell/testDSShellWithoutDomainV2DefaultFlow]|1
>  min 20 sec|Failed|
> The root causes are the same:
> {code:java}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell.verifyEntityTypeFileExists(TestDistributedShell.java:628)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell.checkTimelineV2(TestDistributedShell.java:546)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell.testDSShell(TestDistributedShell.java:451)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell.testDSShell(TestDistributedShell.java:310)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell.testDSShellWithoutDomainV2(TestDistributedShell.java:306)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8718) Merge related work for YARN-3409

2018-08-27 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8718:
-
Attachment: YARN-3409.002.patch

> Merge related work for YARN-3409
> 
>
> Key: YARN-8718
> URL: https://issues.apache.org/jira/browse/YARN-8718
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil Govindan
>Priority: Major
> Attachments: YARN-3409.001.patch, YARN-3409.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8718) Merge related work for YARN-3409

2018-08-27 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16593260#comment-16593260
 ] 

Sunil Govindan commented on YARN-8718:
--

Updated v2 patch with correct files.

> Merge related work for YARN-3409
> 
>
> Key: YARN-8718
> URL: https://issues.apache.org/jira/browse/YARN-8718
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil Govindan
>Priority: Major
> Attachments: YARN-3409.001.patch, YARN-3409.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8718) Merge related work for YARN-3409

2018-08-27 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8718:
-
Attachment: YARN-3409.001.patch

> Merge related work for YARN-3409
> 
>
> Key: YARN-8718
> URL: https://issues.apache.org/jira/browse/YARN-8718
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil Govindan
>Priority: Major
> Attachments: YARN-3409.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8718) Merge related work for YARN-3409

2018-08-27 Thread Sunil Govindan (JIRA)
Sunil Govindan created YARN-8718:


 Summary: Merge related work for YARN-3409
 Key: YARN-8718
 URL: https://issues.apache.org/jira/browse/YARN-8718
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Sunil Govindan






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org