[jira] [Commented] (YARN-7449) Split up class TestYarnClient to TestYarnClient and TestYarnClientImpl

2018-06-19 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517835#comment-16517835
 ] 

genericqa commented on YARN-7449:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 28m 20s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.impl.TestAMRMProxy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-7449 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928430/YARN-7449.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b32edd5acfd0 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2d87592 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/21059/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21059/testReport/ |
| Max. process+thread count | 713 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21059/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (YARN-8440) Typo in YarnConfiguration javadoc: "Miniumum request grant-able.."

2018-06-19 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517836#comment-16517836
 ] 

genericqa commented on YARN-8440:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8440 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928436/YARN-8440.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9b25fa9df6f6 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2d87592 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21060/testReport/ |
| Max. process+thread count | 312 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21060/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Typo in YarnConfiguration javadoc: "Miniumum request grant-able.."

[jira] [Updated] (YARN-8442) Strange characters and missing spaces in FairScheduler documentation

2018-06-19 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-8442:
-
Attachment: YARN-8442.001.patch

> Strange characters and missing spaces in FairScheduler documentation
> 
>
> Key: YARN-8442
> URL: https://issues.apache.org/jira/browse/YARN-8442
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-8442.001.patch
>
>
> [https://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/FairScheduler.html]
> There are several missing spaces and strange characters in: 
> Allocation file format / queuePlacementPolicy element / nestedUserQueue
> Quoting the erroneous part of the document: 
> {code:java}
> This is similar to ‘user’ rule,the difference being in ‘nestedUserQueue’ 
> rule,user...
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8442) Strange characters and missing spaces in FairScheduler documentation

2018-06-19 Thread Szilard Nemeth (JIRA)
Szilard Nemeth created YARN-8442:


 Summary: Strange characters and missing spaces in FairScheduler 
documentation
 Key: YARN-8442
 URL: https://issues.apache.org/jira/browse/YARN-8442
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Szilard Nemeth
Assignee: Szilard Nemeth


[https://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/FairScheduler.html]

There are several missing spaces and strange characters in: 

Allocation file format / queuePlacementPolicy element / nestedUserQueue

Quoting the erroneous part of the document: 
{code:java}
This is similar to ‘user’ rule,the difference being in ‘nestedUserQueue’ 
rule,user...
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4606) CapacityScheduler: applications could get starved because computation of #activeUsers considers pending apps

2018-06-19 Thread Manikandan R (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-4606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517812#comment-16517812
 ] 

Manikandan R commented on YARN-4606:


Ok, [~eepayne]. I thought of doing it once we settled a few more open issues 
and after a thorough check on the JUnit tests. Anyway, not a problem. Can you 
share your thoughts on the move-app flow?

> CapacityScheduler: applications could get starved because computation of 
> #activeUsers considers pending apps 
> -
>
> Key: YARN-4606
> URL: https://issues.apache.org/jira/browse/YARN-4606
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler
>Affects Versions: 2.8.0, 2.7.1
>Reporter: Karam Singh
>Assignee: Manikandan R
>Priority: Critical
> Attachments: YARN-4606.001.patch, YARN-4606.002.patch, 
> YARN-4606.003.patch, YARN-4606.004.patch, YARN-4606.1.poc.patch, 
> YARN-4606.POC.2.patch, YARN-4606.POC.patch
>
>
> Currently, if all applications belonging to the same user in a LeafQueue are 
> pending (caused by max-am-percent, etc.), ActiveUsersManager still considers 
> the user an active user. This can lead to starvation of active applications, 
> for example:
> - App1 (belongs to user1) and app2 (belongs to user2) are active; app3 
> (belongs to user3) and app4 (belongs to user4) are pending
> - ActiveUsersManager returns #active-users=4
> - However, only two users (user1/user2) are able to allocate new resources, 
> so the computed user-limit-resource could be lower than expected.
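
To make the arithmetic in the description above concrete, here is a minimal, 
self-contained sketch; the numbers and names are illustrative only, not the 
CapacityScheduler API:
{code:java}
// Sketch of the user-limit arithmetic from the description above.
// All names and values are illustrative, not actual CapacityScheduler code.
public class UserLimitSketch {
  public static void main(String[] args) {
    int queueResource = 100;      // e.g. 100 GB available to the LeafQueue
    int usersAbleToAllocate = 2;  // user1, user2 (have active apps)
    int usersCountedAsActive = 4; // ActiveUsersManager also counts user3, user4

    // user-limit-resource is roughly queueResource / #active-users
    System.out.println("expected share: " + queueResource / usersAbleToAllocate);  // 50
    System.out.println("computed share: " + queueResource / usersCountedAsActive); // 25
  }
}
{code}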



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8441) Typo in CSQueueUtils local variable names: queueGuranteedResource

2018-06-19 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-8441:
-
Attachment: YARN-8441.001.patch

> Typo in CSQueueUtils local variable names: queueGuranteedResource
> -
>
> Key: YARN-8441
> URL: https://issues.apache.org/jira/browse/YARN-8441
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Trivial
> Attachments: YARN-8441.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8441) Typo in CSQueueUtils local variable names: queueGuranteedResource

2018-06-19 Thread Szilard Nemeth (JIRA)
Szilard Nemeth created YARN-8441:


 Summary: Typo in CSQueueUtils local variable names: 
queueGuranteedResource
 Key: YARN-8441
 URL: https://issues.apache.org/jira/browse/YARN-8441
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Reporter: Szilard Nemeth
Assignee: Szilard Nemeth






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8440) Typo in YarnConfiguration javadoc: "Miniumum request grant-able.."

2018-06-19 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-8440:
-
Attachment: YARN-8440.001.patch

> Typo in YarnConfiguration javadoc: "Miniumum request grant-able.."
> --
>
> Key: YARN-8440
> URL: https://issues.apache.org/jira/browse/YARN-8440
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Trivial
> Attachments: YARN-8440.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8440) Typo in YarnConfiguration javadoc: "Miniumum request grant-able.."

2018-06-19 Thread Szilard Nemeth (JIRA)
Szilard Nemeth created YARN-8440:


 Summary: Typo in YarnConfiguration javadoc: "Miniumum request 
grant-able.."
 Key: YARN-8440
 URL: https://issues.apache.org/jira/browse/YARN-8440
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Szilard Nemeth
Assignee: Szilard Nemeth






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8439) Typos in test names in TestTaskAttempt: "testAppDiognostic"

2018-06-19 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth reassigned YARN-8439:


Assignee: Szilard Nemeth

> Typos in test names in TestTaskAttempt: "testAppDiognostic"
> ---
>
> Key: YARN-8439
> URL: https://issues.apache.org/jira/browse/YARN-8439
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
>
> These two methods need to be renamed: 
>  * testAppDiognosticEventOnUnassignedTask
>  * testAppDiognosticEventOnNewTask



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8439) Typos in test names in TestTaskAttempt: "testAppDiognostic"

2018-06-19 Thread Szilard Nemeth (JIRA)
Szilard Nemeth created YARN-8439:


 Summary: Typos in test names in TestTaskAttempt: 
"testAppDiognostic"
 Key: YARN-8439
 URL: https://issues.apache.org/jira/browse/YARN-8439
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Szilard Nemeth


These two methods need to be renamed: 
 * testAppDiognosticEventOnUnassignedTask
 * testAppDiognosticEventOnNewTask



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8435) NullPointerException when client first time connect to Yarn Router

2018-06-19 Thread rangjiaheng (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517802#comment-16517802
 ] 

rangjiaheng edited comment on YARN-8435 at 6/20/18 5:01 AM:


Thanks [~giovanni.fumarola] for the review. I can reproduce the exception in 
trunk using the following test case:
{code:java}
@Test
public void testPipelineConcurrent() throws InterruptedException {
  final String user = "yarn";

  class ClientThread extends Thread {
@Override public void run() {
  try {
pipeline();
  } catch (IOException | InterruptedException e) {
e.printStackTrace();
  }
}

private void pipeline() throws IOException, InterruptedException {
  UserGroupInformation.createRemoteUser(user)
      .doAs(new PrivilegedExceptionAction<ClientRequestInterceptor>() {
@Override public ClientRequestInterceptor run() throws Exception {
  RequestInterceptorChainWrapper pipeline =
  getRouterClientRMService().getInterceptorChain();
  ClientRequestInterceptor root = pipeline.getRootInterceptor();
  Assert.assertNotNull(root);
  System.out.println("init new interceptor for user " + user);
  return root;
}
  });
}
  }

  new ClientThread().start();
  Thread.sleep(1);
  new ClientThread().start();
}
{code}
 

This outputs:
{code:java}
Exception in thread "Thread-56" java.lang.NullPointerException
    at org.apache.hadoop.yarn.server.router.clientrm.TestRouterClientRMService$1ClientThread$1.run(TestRouterClientRMService.java:231)
    at org.apache.hadoop.yarn.server.router.clientrm.TestRouterClientRMService$1ClientThread$1.run(TestRouterClientRMService.java:227)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1683)
    at org.apache.hadoop.yarn.server.router.clientrm.TestRouterClientRMService$1ClientThread.pipeline(TestRouterClientRMService.java:227)
    at org.apache.hadoop.yarn.server.router.clientrm.TestRouterClientRMService$1ClientThread.run(TestRouterClientRMService.java:219)
{code}
 

With YARN-8435.v1.patch applied, this test case passes.

However, this test case code is not elegant. Should I put the test case and 
YARN-8435.patch together?
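
Separately, as a minimal sketch of the kind of guard that avoids this race; 
the class, field, and method names here are assumptions based on the stack 
trace, not the actual YARN-8435 patch:
{code:java}
// Sketch: make per-user interceptor-chain creation race-free, so two
// concurrent first-time callers never observe a null entry.
// Names are assumptions, not the real RouterClientRMService code.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

class PipelineRegistry<P> {
  private final ConcurrentMap<String, P> userPipelineMap =
      new ConcurrentHashMap<>();

  /** Returns the existing pipeline for the user, or atomically creates one. */
  P getOrCreatePipeline(String user, Function<String, P> factory) {
    return userPipelineMap.computeIfAbsent(user, factory);
  }
}
{code}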

 



> NullPointerException when client first time connect to Yarn Router
> --
>
> Key: YARN-8435
> URL: https://issues.apache.org/jira/browse/YARN-8435
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: router
>Affects Versions: 2.9.0, 3.0.2

[jira] [Commented] (YARN-8435) NullPointerException when client first time connect to Yarn Router

2018-06-19 Thread rangjiaheng (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517802#comment-16517802
 ] 

rangjiaheng commented on YARN-8435:
---

Thanks [~giovanni.fumarola] for the review. I can reproduce the exception in 
trunk using the following test case:

 
{code:java}
@Test
public void testPipelineConcurrent() throws InterruptedException {
  final String user = "yarn";

  class ClientThread extends Thread {
@Override public void run() {
  try {
pipeline();
  } catch (IOException | InterruptedException e) {
e.printStackTrace();
  }
}

private void pipeline() throws IOException, InterruptedException {
  UserGroupInformation.createRemoteUser(user)
      .doAs(new PrivilegedExceptionAction<ClientRequestInterceptor>() {
@Override public ClientRequestInterceptor run() throws Exception {
  RequestInterceptorChainWrapper pipeline =
  getRouterClientRMService().getInterceptorChain();
  ClientRequestInterceptor root = pipeline.getRootInterceptor();
  Assert.assertNotNull(root);
  System.out.println("init new interceptor for user " + user);
  return root;
}
  });
}
  }

  new ClientThread().start();
  Thread.sleep(1);
  new ClientThread().start();
}
{code}
 

This outputs:

 
{code:java}
Exception in thread "Thread-56" java.lang.NullPointerException
    at org.apache.hadoop.yarn.server.router.clientrm.TestRouterClientRMService$1ClientThread$1.run(TestRouterClientRMService.java:231)
    at org.apache.hadoop.yarn.server.router.clientrm.TestRouterClientRMService$1ClientThread$1.run(TestRouterClientRMService.java:227)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1683)
    at org.apache.hadoop.yarn.server.router.clientrm.TestRouterClientRMService$1ClientThread.pipeline(TestRouterClientRMService.java:227)
    at org.apache.hadoop.yarn.server.router.clientrm.TestRouterClientRMService$1ClientThread.run(TestRouterClientRMService.java:219)
{code}
 

With YARN-8435.v1.patch applied, this test case passes.

However, this test case code is not elegant. Should I put the test case and 
YARN-8435.patch together?

 

> NullPointerException when client first time connect to Yarn Router
> --
>
> Key: YARN-8435
> URL: https://issues.apache.org/jira/browse/YARN-8435
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: router
>Affects Versions: 2.9.0, 3.0.2
>Reporter: rangjiaheng
>Priority: Critical
> Attachments: YARN-8435.v1.patch
>
>
> When two client processes (with the same user name and the same hostname) 
> connect to the YARN Router at the same time, to submit an application, kill 
> an application, and so on, a java.lang.NullPointerException may be thrown 
> from the YARN Router.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8438) TestContainer.testKillOnNew flaky on trunk

2018-06-19 Thread Szilard Nemeth (JIRA)
Szilard Nemeth created YARN-8438:


 Summary: TestContainer.testKillOnNew flaky on trunk
 Key: YARN-8438
 URL: https://issues.apache.org/jira/browse/YARN-8438
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Reporter: Szilard Nemeth
Assignee: Szilard Nemeth


Running this test several times (e.g. 30 runs), it fails ~5-10 times.

Stacktrace: 
{code:java}
java.lang.AssertionError
    at org.junit.Assert.fail(Assert.java:86)
    at org.junit.Assert.assertTrue(Assert.java:41)
    at org.junit.Assert.assertTrue(Assert.java:52)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.container.TestContainer.testKillOnNew(TestContainer.java:594)
{code}
TestContainer:594 is currently the following code in trunk:
{code:java}
Assert.assertTrue(containerMetrics.finishTime.value()
    > containerMetrics.startTime.value());
{code}
So sometimes the finish time is not greater than the start time.
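
As a self-contained illustration of why a strict greater-than check can flake 
(this is not the actual test; a real fix might use a >= check or an injected 
clock):
{code:java}
// With millisecond clock granularity, a container that starts and finishes
// within the same tick yields finishTime == startTime, so '>' fails.
public class TimestampFlakeSketch {
  public static void main(String[] args) {
    long startTime = System.currentTimeMillis();
    long finishTime = System.currentTimeMillis(); // often the same millisecond

    System.out.println("strict check (>) passes:    " + (finishTime > startTime));
    System.out.println("tolerant check (>=) passes: " + (finishTime >= startTime));
  }
}
{code}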



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7449) Split up class TestYarnClient to TestYarnClient and TestYarnClientImpl

2018-06-19 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517797#comment-16517797
 ] 

Szilard Nemeth commented on YARN-7449:
--

Hi [~szegedim]!

Please check the updated patch!

I accidentally uploaded a wrong patch as patch 002 and then re-uploaded the 
correct one with the same name.

I hope Yetus comes back with results on the latest patch.

Should I update the latest patch with an increased number in the filename in 
this case?

 

Thanks!

> Split up class TestYarnClient to TestYarnClient and TestYarnClientImpl
> --
>
> Key: YARN-7449
> URL: https://issues.apache.org/jira/browse/YARN-7449
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: client, yarn
>Reporter: Yufei Gu
>Assignee: Szilard Nemeth
>Priority: Minor
>  Labels: newbie, newbie++
> Attachments: YARN-7449-001.patch, YARN-7449.002.patch
>
>
> {{TestYarnClient}} tests both {{YarnClient}} and {{YarnClientImpl}}. We 
> should test {{YarnClient}} without thinking of its implementation; that is 
> the whole point of {{YarnClient}}. There are a bunch of refactorings we 
> could do. The first step is to split up class {{TestYarnClient}} into 
> {{TestYarnClient}} and {{TestYarnClientImpl}}: let {{TestYarnClient}} test 
> only {{YarnClient}}, and move all implementation-related tests to 
> {{TestYarnClientImpl}}.
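
As an illustration of the intended split (only {{YarnClient}} and its 
{{createYarnClient}} factory are real YARN API below; the test class and 
method are a hypothetical sketch):
{code:java}
// Interface-level tests stay in TestYarnClient: they exercise only the
// public YarnClient contract, never YarnClientImpl internals.
import org.apache.hadoop.yarn.client.api.YarnClient;

class TestYarnClientSketch {
  void interfaceLevelTest() throws Exception {
    YarnClient client = YarnClient.createYarnClient();
    // ... init/start the client and assert only on behavior that is
    // observable through the YarnClient API.
  }
  // Anything that casts to YarnClientImpl, or touches its fields or
  // protected methods, belongs in TestYarnClientImpl instead.
}
{code}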



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8103) Add CLI interface to query node attributes

2018-06-19 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517796#comment-16517796
 ] 

Bibin A Chundatt commented on YARN-8103:


Thank you [~Naganarasimha] for the review.

Apologies for missing a few comments. I have tried to handle all of them in 
the latest patch.

{quote}
NodeCLI line 317: IMO it would be better to wrap each attribute in a new line?
{quote}

I think it is better to keep them on the same line, since in the normal case 
we expect 1-3 attributes per node. I hope that is not a major blocking issue. 
Let's see the usability during the demo and update it if required; is that 
fine?



> Add CLI interface to  query node attributes
> ---
>
> Key: YARN-8103
> URL: https://issues.apache.org/jira/browse/YARN-8103
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8103-YARN-3409.001.patch, 
> YARN-8103-YARN-3409.002.patch, YARN-8103-YARN-3409.003.patch, 
> YARN-8103-YARN-3409.004.patch, YARN-8103-YARN-3409.WIP.patch
>
>
> YARN-8100 will add an API for querying the attributes. This issue adds a CLI 
> for querying the node attributes of each node and for listing all attributes 
> in the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7449) Split up class TestYarnClient to TestYarnClient and TestYarnClientImpl

2018-06-19 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-7449:
-
Attachment: YARN-7449.002.patch

> Split up class TestYarnClient to TestYarnClient and TestYarnClientImpl
> --
>
> Key: YARN-7449
> URL: https://issues.apache.org/jira/browse/YARN-7449
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: client, yarn
>Reporter: Yufei Gu
>Assignee: Szilard Nemeth
>Priority: Minor
>  Labels: newbie, newbie++
> Attachments: YARN-7449-001.patch, YARN-7449.002.patch
>
>
> {{TestYarnClient}} tests both {{YarnClient}} and {{YarnClientImpl}}. We 
> should test {{YarnClient}} without thinking of its implementation; that is 
> the whole point of {{YarnClient}}. There are a bunch of refactorings we 
> could do. The first step is to split up class {{TestYarnClient}} into 
> {{TestYarnClient}} and {{TestYarnClientImpl}}: let {{TestYarnClient}} test 
> only {{YarnClient}}, and move all implementation-related tests to 
> {{TestYarnClientImpl}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8103) Add CLI interface to query node attributes

2018-06-19 Thread Bibin A Chundatt (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-8103:
---
Attachment: YARN-8103-YARN-3409.004.patch

> Add CLI interface to  query node attributes
> ---
>
> Key: YARN-8103
> URL: https://issues.apache.org/jira/browse/YARN-8103
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8103-YARN-3409.001.patch, 
> YARN-8103-YARN-3409.002.patch, YARN-8103-YARN-3409.003.patch, 
> YARN-8103-YARN-3409.004.patch, YARN-8103-YARN-3409.WIP.patch
>
>
> YARN-8100 will add an API for querying the attributes. This issue adds a CLI 
> for querying the node attributes of each node and for listing all attributes 
> in the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7449) Split up class TestYarnClient to TestYarnClient and TestYarnClientImpl

2018-06-19 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-7449:
-
Attachment: (was: YARN-7449.002.patch)

> Split up class TestYarnClient to TestYarnClient and TestYarnClientImpl
> --
>
> Key: YARN-7449
> URL: https://issues.apache.org/jira/browse/YARN-7449
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: client, yarn
>Reporter: Yufei Gu
>Assignee: Szilard Nemeth
>Priority: Minor
>  Labels: newbie, newbie++
> Attachments: YARN-7449-001.patch
>
>
> {{TestYarnClient}} tests both {{YarnClient}} and {{YarnClientImpl}}. We 
> should test {{YarnClient}} without thinking of its implementation; that is 
> the whole point of {{YarnClient}}. There are a bunch of refactorings we 
> could do. The first step is to split up class {{TestYarnClient}} into 
> {{TestYarnClient}} and {{TestYarnClientImpl}}: let {{TestYarnClient}} test 
> only {{YarnClient}}, and move all implementation-related tests to 
> {{TestYarnClientImpl}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7449) Split up class TestYarnClient to TestYarnClient and TestYarnClientImpl

2018-06-19 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-7449:
-
Attachment: YARN-7449.002.patch

> Split up class TestYarnClient to TestYarnClient and TestYarnClientImpl
> --
>
> Key: YARN-7449
> URL: https://issues.apache.org/jira/browse/YARN-7449
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: client, yarn
>Reporter: Yufei Gu
>Assignee: Szilard Nemeth
>Priority: Minor
>  Labels: newbie, newbie++
> Attachments: YARN-7449-001.patch, YARN-7449.002.patch
>
>
> {{TestYarnClient}} tests both {{YarnClient}} and {{YarnClientImpl}}. We 
> should test {{YarnClient}} without thinking of its implementation; that is 
> the whole point of {{YarnClient}}. There are a bunch of refactorings we 
> could do. The first step is to split up class {{TestYarnClient}} into 
> {{TestYarnClient}} and {{TestYarnClientImpl}}: let {{TestYarnClient}} test 
> only {{YarnClient}}, and move all implementation-related tests to 
> {{TestYarnClientImpl}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-4929) Explore a better way than sleeping for a while in some test cases

2018-06-19 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-4929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth reassigned YARN-4929:


Assignee: Szilard Nemeth

> Explore a better way than sleeping for a while in some test cases
> -
>
> Key: YARN-4929
> URL: https://issues.apache.org/jira/browse/YARN-4929
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yufei Gu
>Assignee: Szilard Nemeth
>Priority: Major
>
> The following unit test cases failed because we removed the minimum wait 
> time for attempts in YARN-4807. I manually added sleeps so the tests pass 
> and added a TODO in the code. We can explore a better way to do it (see the 
> sketch below).
> - TestAMRestart.testRMAppAttemptFailuresValidityInterval 
> - TestApplicationMasterService.testResourceTypes
> - TestContainerResourceUsage.testUsageAfterAMRestartWithMultipleContainers
> - TestRMApplicationHistoryWriter.testRMWritingMassiveHistoryForFairSche
> - TestRMApplicationHistoryWriter.testRMWritingMassiveHistoryForCapacitySche
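
One better way than a fixed sleep, as a sketch: Hadoop's own test utility 
{{GenericTestUtils.waitFor}} polls a condition until it holds or a timeout 
elapses. The object under test below is a hypothetical stand-in, not one of 
the tests listed above:
{code:java}
// Sketch: replace a fixed Thread.sleep() with a polling wait.
// GenericTestUtils.waitFor checks the condition every checkEveryMillis
// until it holds, or throws TimeoutException after waitForMillis.
import java.util.concurrent.TimeoutException;
import org.apache.hadoop.test.GenericTestUtils;

public class WaitInsteadOfSleepSketch {
  /** Hypothetical stand-in for the object under test. */
  interface App { int getAttemptCount(); }

  static void waitForAttempts(final App app, final int expected)
      throws TimeoutException, InterruptedException {
    // Instead of: Thread.sleep(3000);
    GenericTestUtils.waitFor(
        () -> app.getAttemptCount() >= expected, // condition to poll
        100,     // check every 100 ms
        10000);  // give up after 10 s
  }
}
{code}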



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5803) Random test failure TestAMRestart#testRMAppAttemptFailuresValidityInterval

2018-06-19 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-5803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth reassigned YARN-5803:


Assignee: Szilard Nemeth

> Random test failure TestAMRestart#testRMAppAttemptFailuresValidityInterval
> --
>
> Key: YARN-5803
> URL: https://issues.apache.org/jira/browse/YARN-5803
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Bibin A Chundatt
>Assignee: Szilard Nemeth
>Priority: Major
>
> Random test failure TestAMRestart#testRMAppAttemptFailuresValidityInterval
> {noformat}
> java.lang.AssertionError: expected:<2> but was:<3>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart.testRMAppAttemptFailuresValidityInterval(TestAMRestart.java:742)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8433) TestAMRestart flaky in trunk

2018-06-19 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth reassigned YARN-8433:


Assignee: Szilard Nemeth

> TestAMRestart flaky in trunk
> 
>
> Key: YARN-8433
> URL: https://issues.apache.org/jira/browse/YARN-8433
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Botong Huang
>Assignee: Szilard Nemeth
>Priority: Major
>
>  
> [org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart.testContainersFromPreviousAttemptsWithRMRestart[FAIR]|https://builds.apache.org/job/PreCommit-YARN-Build/21002/testReport/org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager/TestAMRestart/testContainersFromPreviousAttemptsWithRMRestart_FAIR_/]
> Attempt state is not correct (timeout). expected: but 
> was:
>  
> [org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart.testPreemptedAMRestartOnRMRestart[FAIR]|https://builds.apache.org/job/PreCommit-YARN-Build/21014/testReport/org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager/TestAMRestart/testPreemptedAMRestartOnRMRestart_FAIR_/]
> test timed out after 6 milliseconds



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8391) Investigate AllocationFileLoaderService.reloadListener locking issue

2018-06-19 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517776#comment-16517776
 ] 

Szilard Nemeth commented on YARN-8391:
--

Hey [~szegedim]!

No, it's not related.

The failed test case is flaky in trunk, see: 
https://issues.apache.org/jira/browse/YARN-8433

So, in that sense, I think it is safe to commit this.

Thanks!

> Investigate AllocationFileLoaderService.reloadListener locking issue
> 
>
> Key: YARN-8391
> URL: https://issues.apache.org/jira/browse/YARN-8391
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.2.0
>Reporter: Haibo Chen
>Assignee: Szilard Nemeth
>Priority: Critical
> Attachments: YARN-8191.001.patch, YARN-8391.002.patch
>
>
> Per the findbugs report in YARN-8390, there is some inconsistent locking of 
> reloadListener.
>  
> h1. Warnings
> Click on a warning row to see full context information.
> h2. Multithreaded correctness Warnings
> ||Code||Warning||
> |IS|Inconsistent synchronization of 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadListener;
>  locked 75% of time|
> | |[Bug type IS2_INCONSISTENT_SYNC (click for 
> details)|https://builds.apache.org/job/PreCommit-YARN-Build/20939/artifact/out/new-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.html#IS2_INCONSISTENT_SYNC]
>  
> In class 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService
> Field 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadListener
> Synchronized 75% of the time
> Unsynchronized access at AllocationFileLoaderService.java:[line 117]
> Synchronized access at AllocationFileLoaderService.java:[line 212]
> Synchronized access at AllocationFileLoaderService.java:[line 228]
> Synchronized access at AllocationFileLoaderService.java:[line 269]|
> h1. Details
> h2. IS2_INCONSISTENT_SYNC: Inconsistent synchronization
> The fields of this class appear to be accessed inconsistently with respect to 
> synchronization.  This bug report indicates that the bug pattern detector 
> judged that
>  * The class contains a mix of locked and unlocked accesses,
>  * The class is *not* annotated as javax.annotation.concurrent.NotThreadSafe,
>  * At least one locked access was performed by one of the class's own 
> methods, and
>  * The number of unsynchronized field accesses (reads and writes) was no more 
> than one third of all accesses, with writes being weighed twice as high as 
> reads
> A typical bug matching this bug pattern is forgetting to synchronize one of 
> the methods in a class that is intended to be thread-safe.
> You can select the nodes labeled "Unsynchronized access" to show the code 
> locations where the detector believed that a field was accessed without 
> synchronization.
> Note that there are various sources of inaccuracy in this detector; for 
> example, the detector cannot statically detect all situations in which a lock 
> is held.  Also, even when the detector is accurate in distinguishing locked 
> vs. unlocked accesses, the code in question may still be correct.
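
A minimal, self-contained example of the pattern this detector flags; it is 
deliberately generic and is not the AllocationFileLoaderService code:
{code:java}
// Shape of an IS2_INCONSISTENT_SYNC finding: the same field is written
// without the lock in one place and accessed under the lock everywhere else.
class Reloader {
  interface Listener { void onReload(); }

  private Listener reloadListener;

  void setListener(Listener l) {
    reloadListener = l;            // unsynchronized write: the ~25%
  }

  synchronized void reload() {
    if (reloadListener != null) {  // synchronized accesses: the ~75%
      reloadListener.onReload();
    }
  }
  // Typical fixes: also synchronize setListener, make the field volatile,
  // or document/annotate the intended threading model.
}
{code}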



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8435) NullPointerException when client first time connect to Yarn Router

2018-06-19 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517772#comment-16517772
 ] 

genericqa commented on YARN-8435:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
25s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8435 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928422/YARN-8435.v1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 976aa406fc80 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2d87592 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21057/testReport/ |
| Max. process+thread count | 719 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21057/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Updated] (YARN-8435) NullPointerException when client first time connect to Yarn Router

2018-06-19 Thread rangjiaheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rangjiaheng updated YARN-8435:
--
Attachment: YARN-8435.v1.patch

> NullPointerException when client first time connect to Yarn Router
> --
>
> Key: YARN-8435
> URL: https://issues.apache.org/jira/browse/YARN-8435
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: router
>Affects Versions: 2.9.0, 3.0.2
>Reporter: rangjiaheng
>Priority: Critical
> Attachments: YARN-8435.v1.patch
>
>
> When two client processes (with the same user name and the same hostname) 
> connect to the YARN Router at the same time, to submit an application, kill 
> an application, and so on, a java.lang.NullPointerException may be thrown 
> from the YARN Router.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8435) NullPointerException when client first time connect to Yarn Router

2018-06-19 Thread rangjiaheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rangjiaheng updated YARN-8435:
--
Attachment: (was: YARN-8435.patch)

> NullPointerException when client first time connect to Yarn Router
> --
>
> Key: YARN-8435
> URL: https://issues.apache.org/jira/browse/YARN-8435
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: router
>Affects Versions: 2.9.0, 3.0.2
>Reporter: rangjiaheng
>Priority: Critical
> Attachments: YARN-8435.v1.patch
>
>
> When two client processes (with the same user name and the same hostname) 
> connect to the YARN Router at the same time, to submit an application, kill 
> an application, and so on, a java.lang.NullPointerException may be thrown 
> from the YARN Router.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8437) Build oom-listener on older versions

2018-06-19 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517715#comment-16517715
 ] 

genericqa commented on YARN-8437:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
35m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m  
2s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8437 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928419/YARN-8437.000.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 9c2f9b6de36b 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2d87592 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21056/testReport/ |
| Max. process+thread count | 397 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21056/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Build oom-listener on older versions
> 
>
> Key: YARN-8437
> URL: https://issues.apache.org/jira/browse/YARN-8437
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-8437.000.patch
>
>
> oom-listener was introduced in YARN-4599. We have seen some build issues on 
> CentOS 6.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8437) Build oom-listener on older versions

2018-06-19 Thread Miklos Szegedi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-8437:
-
Attachment: YARN-8437.000.patch

> Build oom-listener on older versions
> 
>
> Key: YARN-8437
> URL: https://issues.apache.org/jira/browse/YARN-8437
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-8437.000.patch
>
>
> oom-listener was introduced in YARN-4599. We have seen some build issues on 
> centos6.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8437) Build oom-listener on older versions

2018-06-19 Thread Miklos Szegedi (JIRA)
Miklos Szegedi created YARN-8437:


 Summary: Build oom-listener on older versions
 Key: YARN-8437
 URL: https://issues.apache.org/jira/browse/YARN-8437
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Miklos Szegedi
Assignee: Miklos Szegedi


oom-listener was introduced in YARN-4599. We have seen some build issues on 
centos6.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6794) Fair Scheduler to explicitly promote OPPORTUNISITIC containers locally at the node where they're running

2018-06-19 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517673#comment-16517673
 ] 

genericqa commented on YARN-6794:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-1011 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
56s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
7s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in YARN-1011 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} YARN-1011 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 319 unchanged - 0 fixed = 321 total (was 319) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 50s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
14s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 69m 
18s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt.opportunisticContainerPromoted(RMContainer)
 does not release lock on all exception paths  At FSAppAttempt.java:on all 
exception paths  At FSAppAttempt.java:[line 519] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-6794 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928412/YARN-6794-YARN-1011.01.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0d8bfb67470b 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build 

[jira] [Commented] (YARN-6794) Fair Scheduler to explicitly promote OPPORTUNISITIC containers locally at the node where they're running

2018-06-19 Thread Miklos Szegedi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517650#comment-16517650
 ] 

Miklos Szegedi commented on YARN-6794:
--

Thank you for the update [~haibochen].
{code}
519   writeLock.lock();
520   attemptOpportunisticResourceUsage.decUsed(resource);
521   attemptResourceUsage.incUsed(resource);
522   getQueue().incUsedGuaranteedResource(resource);
523   writeLock.unlock();
{code}
It is safer to unlock in a finally block so that runtime exceptions are handled properly.
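For illustration, a minimal sketch of the finally-based variant, reusing the 
names from the snippet above (a fragment, not a standalone class):
{code:java}
writeLock.lock();
try {
  // All state updates happen while the lock is held.
  attemptOpportunisticResourceUsage.decUsed(resource);
  attemptResourceUsage.incUsed(resource);
  getQueue().incUsedGuaranteedResource(resource);
} finally {
  // The lock is released even if one of the updates throws.
  writeLock.unlock();
}
{code}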

> Fair Scheduler to explicitly promote OPPORTUNISITIC containers locally at the 
> node where they're running
> 
>
> Key: YARN-6794
> URL: https://issues.apache.org/jira/browse/YARN-6794
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, scheduler
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-6794-YARN-1011.00.patch, 
> YARN-6794-YARN-1011.01.patch, YARN-6794-YARN-1011.prelim.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4606) CapacityScheduler: applications could get starved because computation of #activeUsers considers pending apps

2018-06-19 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-4606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517635#comment-16517635
 ] 

genericqa commented on YARN-4606:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
15s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 51 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
31s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 59s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}132m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Switch statement found in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(SchedulerEvent)
 where one case falls through to the next case  At CapacityScheduler.java:where 
one case falls through to the next case  At CapacityScheduler.java:[lines 
1832-1838] |
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMHA |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | hadoop.yarn.server.resourcemanager.scheduler.TestAppSchedulingInfo |
|   | hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy 
|
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-4606 |
| JIRA Patch UR

[jira] [Updated] (YARN-6794) Fair Scheduler to explicitly promote OPPORTUNISITIC containers locally at the node where they're running

2018-06-19 Thread Haibo Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6794:
-
Attachment: YARN-6794-YARN-1011.01.patch

> Fair Scheduler to explicitly promote OPPORTUNISITIC containers locally at the 
> node where they're running
> 
>
> Key: YARN-6794
> URL: https://issues.apache.org/jira/browse/YARN-6794
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, scheduler
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-6794-YARN-1011.00.patch, 
> YARN-6794-YARN-1011.01.patch, YARN-6794-YARN-1011.prelim.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8436) FSParentQueue: Comparison method violates its general contract

2018-06-19 Thread Wilfred Spiegelenburg (JIRA)
Wilfred Spiegelenburg created YARN-8436:
---

 Summary: FSParentQueue: Comparison method violates its general 
contract
 Key: YARN-8436
 URL: https://issues.apache.org/jira/browse/YARN-8436
 Project: Hadoop YARN
  Issue Type: Bug
  Components: fairscheduler
Affects Versions: 3.1.0
Reporter: Wilfred Spiegelenburg


The ResourceManager can fail while sorting queues if an update comes in:
{code:java}
FATAL org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
handling event type NODE_UPDATE to the scheduler
java.lang.IllegalArgumentException: Comparison method violates its general 
contract!
at java.util.TimSort.mergeLo(TimSort.java:777)
at java.util.TimSort.mergeAt(TimSort.java:514)
...
at java.util.Collections.sort(Collections.java:175)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSParentQueue.assignContainer(FSParentQueue.java:223){code}
The reason it breaks is that the sorted objects change while the sort is 
running. This is why it fails:
 * an update from a node comes in as a heartbeat.
 * the update triggers a check to see if we can assign a container on the node.
 * we walk over the queue hierarchy, top down, to find a queue to assign a 
container to.
 * for each parent queue we sort the child queues in {{assignContainer}} to 
decide which queue to descend into.
 * we lock the parent queue while sorting to prevent changes, but we do not 
lock the child queues that we are sorting.

If a different node update changes a child queue during this sorting, we allow 
that. This means that the objects we are trying to sort might now be out of 
order, which causes the issue in the comparator. The comparator itself is not 
broken.
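Not the eventual fix, just a sketch of one possible direction: sort on an 
immutable snapshot of each child's sort key, so that a concurrent update 
cannot reorder elements mid-sort. The types and the single long sort key 
below are hypothetical simplifications, not the real FSQueue comparator:
{code:java}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class QueueSortSketch {
  static class ChildQueue {
    final String name;
    volatile long demand; // mutated concurrently by node updates
    ChildQueue(String name, long demand) {
      this.name = name;
      this.demand = demand;
    }
  }

  // Immutable view of a queue's sort key, captured once before sorting.
  static class Snapshot {
    final ChildQueue queue;
    final long key;
    Snapshot(ChildQueue queue) {
      this.queue = queue;
      this.key = queue.demand;
    }
  }

  static List<ChildQueue> sortedBySnapshot(List<ChildQueue> children) {
    List<Snapshot> snapshots = new ArrayList<>();
    for (ChildQueue q : children) {
      snapshots.add(new Snapshot(q)); // key can no longer change mid-sort
    }
    snapshots.sort(Comparator.comparingLong((Snapshot s) -> s.key));
    List<ChildQueue> sorted = new ArrayList<>();
    for (Snapshot s : snapshots) {
      sorted.add(s.queue);
    }
    return sorted;
  }
}
{code}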



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8436) FSParentQueue: Comparison method violates its general contract

2018-06-19 Thread Wilfred Spiegelenburg (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wilfred Spiegelenburg reassigned YARN-8436:
---

Assignee: Wilfred Spiegelenburg

> FSParentQueue: Comparison method violates its general contract
> --
>
> Key: YARN-8436
> URL: https://issues.apache.org/jira/browse/YARN-8436
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.1.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Minor
>
> The ResourceManager can fail while sorting queues if an update comes in:
> {code:java}
> FATAL org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type NODE_UPDATE to the scheduler
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
>   at java.util.TimSort.mergeLo(TimSort.java:777)
>   at java.util.TimSort.mergeAt(TimSort.java:514)
> ...
>   at java.util.Collections.sort(Collections.java:175)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSParentQueue.assignContainer(FSParentQueue.java:223){code}
> The reason it breaks is that the sorted objects change while the sort is 
> running. This is why it fails:
>  * an update from a node comes in as a heartbeat.
>  * the update triggers a check to see if we can assign a container on the 
> node.
>  * we walk over the queue hierarchy, top down, to find a queue to assign a 
> container to.
>  * for each parent queue we sort the child queues in {{assignContainer}} to 
> decide which queue to descend into.
>  * we lock the parent queue while sorting to prevent changes, but we do not 
> lock the child queues that we are sorting.
> If a different node update changes a child queue during this sorting, we 
> allow that. This means that the objects we are trying to sort might now be 
> out of order, which causes the issue in the comparator. The comparator 
> itself is not broken.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4606) CapacityScheduler: applications could get starved because computation of #activeUsers considers pending apps

2018-06-19 Thread Eric Payne (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-4606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517557#comment-16517557
 ] 

Eric Payne commented on YARN-4606:
--

I put this Jira into PATCH AVAILABLE mode so that it would kick the pre-commit 
build.

> CapacityScheduler: applications could get starved because computation of 
> #activeUsers considers pending apps 
> -
>
> Key: YARN-4606
> URL: https://issues.apache.org/jira/browse/YARN-4606
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler
>Affects Versions: 2.8.0, 2.7.1
>Reporter: Karam Singh
>Assignee: Manikandan R
>Priority: Critical
> Attachments: YARN-4606.001.patch, YARN-4606.002.patch, 
> YARN-4606.003.patch, YARN-4606.004.patch, YARN-4606.1.poc.patch, 
> YARN-4606.POC.2.patch, YARN-4606.POC.patch
>
>
> Currently, if all applications belonging to the same user in a LeafQueue are 
> pending (caused by max-am-percent, etc.), ActiveUsersManager still considers 
> the user an active user. This could lead to starvation of active 
> applications, for example:
> - App1 (belongs to user1)/app2 (belongs to user2) are active, app3 (belongs 
> to user3)/app4 (belongs to user4) are pending
> - ActiveUsersManager returns #active-users=4
> - However, only two users (user1/user2) are able to allocate new resources, 
> so the computed user-limit-resource could be lower than expected.
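A rough arithmetic illustration of the effect described above, assuming (only 
for simplicity) that the user limit is an even split of queue resources 
across #active-users; the real CapacityScheduler formula has more inputs:
{code:java}
public class UserLimitIllustration {
  public static void main(String[] args) {
    int queueResource = 100;     // resource the queue can hand out
    int countedActiveUsers = 4;  // user1..user4, pending-only users counted
    int usableActiveUsers = 2;   // only user1/user2 can actually allocate

    // Counting pending-only users shrinks every user's share.
    System.out.println(queueResource / countedActiveUsers); // 25: too low
    System.out.println(queueResource / usableActiveUsers);  // 50: expected
  }
}
{code}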



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8435) NullPointerException when client first time connect to Yarn Router

2018-06-19 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517549#comment-16517549
 ] 

genericqa commented on YARN-8435:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
20s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8435 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928390/YARN-8435.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 59a615980f67 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2d87592 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21053/testReport/ |
| Max. process+thread count | 690 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21053/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Commented] (YARN-8417) Should skip passing HDFS_HOME, HADOOP_CONF_DIR, JAVA_HOME, etc. to Docker container.

2018-06-19 Thread Jim Brennan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517531#comment-16517531
 ] 

Jim Brennan commented on YARN-8417:
---

Just to clarify - this is only for the ENTRY_POINT case, right? For the 
non-ENTRY_POINT case, we are using a launch_container.sh script to specify 
environment variables, and for those in the NM whitelist, we specify them 
using the syntax that allows the value in the Docker image to override it, 
e.g.:

{noformat}
JAVA_HOME=${JAVA_HOME:-/nm/java/home}
{noformat}

{quote}
For a Docker container, it actually doesn't make sense to pass JAVA_HOME, 
HDFS_HOME, etc., because inside the Docker image we have a separate 
Java/Hadoop installed, or mounted to exactly the same directory as on the 
host machine.
{quote}

In the non-ENTRY_POINT case, we need to support the ability for NM to set these 
if they are in the whitelist and they are not defined in the docker container, 
which is why we use the above syntax.


> Should skip passing HDFS_HOME, HADOOP_CONF_DIR, JAVA_HOME, etc. to Docker 
> container.
> 
>
> Key: YARN-8417
> URL: https://issues.apache.org/jira/browse/YARN-8417
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Priority: Critical
>
> Currently, the YARN NM passes the JAVA_HOME, HDFS_HOME, CLASSPATH 
> environments before launching a Docker container, no matter if ENTRY_POINT 
> is used or not. This will overwrite environments defined inside the 
> Dockerfile (by using \{{ENV}}). For a Docker container, it actually doesn't 
> make sense to pass JAVA_HOME, HDFS_HOME, etc., because inside the Docker 
> image we have a separate Java/Hadoop installed, or mounted to exactly the 
> same directory as on the host machine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6794) Fair Scheduler to explicitly promote OPPORTUNISITIC containers locally at the node where they're running

2018-06-19 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517513#comment-16517513
 ] 

Haibo Chen commented on YARN-6794:
--

Thanks [~szegedim] for the review!
{quote}I think this should refer to allocatedResourceOpportunistic
{quote}
Good catch. I have corrected this.
{quote}It might be safer to do a clone here and return a read only copy.
{quote}
I have addressed this in the new patch.
{quote}Is this really true? I could imagine that we try but cannot assign a 
reserved resource but we do not even try the opportunistic queue in that case.
{quote}
I have made it more explicit by saying that no more resources are available 
for promotion. The purpose here is that we satisfy resource requests that are 
eligible for guaranteed resources in FIFO order. If a reservation is made 
before an opportunistic container is allocated, we should allow opportunistic 
container promotion only after the reservation is assigned successfully. The 
FIFO order also holds in cases where opportunistic containers are allocated 
before a reservation is made.

For the synchronization related issues, I have made quite a few changes that 
move code around in order to improve the locking.

> Fair Scheduler to explicitly promote OPPORTUNISITIC containers locally at the 
> node where they're running
> 
>
> Key: YARN-6794
> URL: https://issues.apache.org/jira/browse/YARN-6794
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, scheduler
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-6794-YARN-1011.00.patch, 
> YARN-6794-YARN-1011.prelim.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8435) NullPointerException when client first time connect to Yarn Router

2018-06-19 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-8435:
---
Affects Version/s: 2.9.0

> NullPointerException when client first time connect to Yarn Router
> --
>
> Key: YARN-8435
> URL: https://issues.apache.org/jira/browse/YARN-8435
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: router
>Affects Versions: 2.9.0, 3.0.2
>Reporter: rangjiaheng
>Priority: Critical
> Attachments: YARN-8435.patch
>
>
> When two client processes (with the same user name and the same hostname) 
> connect to the Yarn Router at the same time, to submit an application, kill 
> an application, and so on, a java.lang.NullPointerException may be thrown 
> by the Yarn Router.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8435) NullPointerException when client first time connect to Yarn Router

2018-06-19 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517419#comment-16517419
 ] 

Giovanni Matteo Fumarola commented on YARN-8435:


Thanks [~NeoMatrix] for working on it. I will take a look at the patch.

Can you please name the patch with the version, e.g. YARN-8435.v1.patch?

> NullPointerException when client first time connect to Yarn Router
> --
>
> Key: YARN-8435
> URL: https://issues.apache.org/jira/browse/YARN-8435
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: router
>Affects Versions: 3.0.2
>Reporter: rangjiaheng
>Priority: Critical
> Attachments: YARN-8435.patch
>
>
> When two client processes (with the same user name and the same hostname) 
> connect to the Yarn Router at the same time, to submit an application, kill 
> an application, and so on, a java.lang.NullPointerException may be thrown 
> by the Yarn Router.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8435) NullPointerException when client first time connect to Yarn Router

2018-06-19 Thread rangjiaheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rangjiaheng updated YARN-8435:
--
Attachment: YARN-8435.patch

> NullPointerException when client first time connect to Yarn Router
> --
>
> Key: YARN-8435
> URL: https://issues.apache.org/jira/browse/YARN-8435
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: router
>Affects Versions: 3.0.2
>Reporter: rangjiaheng
>Priority: Critical
> Attachments: YARN-8435.patch
>
>
> When two client processes (with the same user name and the same hostname) 
> connect to the Yarn Router at the same time, to submit an application, kill 
> an application, and so on, a java.lang.NullPointerException may be thrown 
> by the Yarn Router.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8435) NullPointerException when client first time connect to Yarn Router

2018-06-19 Thread rangjiaheng (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517395#comment-16517395
 ] 

rangjiaheng edited comment on YARN-8435 at 6/19/18 6:20 PM:


 
{code:java}
private RequestInterceptorChainWrapper getInterceptorChain()
throws IOException {
  String user = UserGroupInformation.getCurrentUser().getUserName();
  if (!userPipelineMap.containsKey(user)) {
initializePipeline(user);
  }
  return userPipelineMap.get(user);
}

private void initializePipeline(String user) {
  RequestInterceptorChainWrapper chainWrapper = null;
  synchronized (this.userPipelineMap) {
if (this.userPipelineMap.containsKey(user)) {
  LOG.info("Request to start an already existing user: {}"
  + " was received, so ignoring.", user);
  return;
}

chainWrapper = new RequestInterceptorChainWrapper();
this.userPipelineMap.put(user, chainWrapper);
  }

  // We register the pipeline instance in the map first and then initialize it
  // later because chain initialization can be expensive and we would like to
  // release the lock as soon as possible to prevent other applications from
  // blocking when one application's chain is initializing
  LOG.info("Initializing request processing pipeline for application "
  + "for the user: {}", user);

  try {
ClientRequestInterceptor interceptorChain =
this.createRequestInterceptorChain();
interceptorChain.init(user);
chainWrapper.init(interceptorChain);
  } catch (Exception e) {
synchronized (this.userPipelineMap) {
  this.userPipelineMap.remove(user);
}
throw e;
  }
}
{code}
 

As we can see, when two client processes connect to the router, they may call 
getInterceptorChain() at the same time. Suppose the first thread finds nothing 
in userPipelineMap; it then calls initializePipeline(), builds a chainWrapper 
and puts it into userPipelineMap inside synchronized (this.userPipelineMap) {}. 
The second thread may now get that chainWrapper out of userPipelineMap even 
though it has not been initialized yet.

The second thread then calls pipeline.getRootInterceptor().submitApplication(), 
and a NullPointerException is thrown because pipeline.getRootInterceptor() is 
still null.
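One possible shape of a fix, sketched with the same names as above (this is 
not the attached patch; error handling is elided):
{code:java}
// Sketch only: publish the wrapper to the map only after it is fully
// initialized, so a concurrent caller can never observe it half-built.
// computeIfAbsent blocks a second caller for the same user until the
// initialization finishes, instead of handing out an un-initialized wrapper.
private final ConcurrentHashMap<String, RequestInterceptorChainWrapper>
    userPipelineMap = new ConcurrentHashMap<>();

private RequestInterceptorChainWrapper getInterceptorChain()
    throws IOException {
  String user = UserGroupInformation.getCurrentUser().getUserName();
  return userPipelineMap.computeIfAbsent(user, u -> {
    ClientRequestInterceptor interceptorChain =
        this.createRequestInterceptorChain();
    interceptorChain.init(u);
    RequestInterceptorChainWrapper chainWrapper =
        new RequestInterceptorChainWrapper();
    chainWrapper.init(interceptorChain);
    return chainWrapper;
  });
}
{code}
The trade-off is that a second request for the same user waits for the 
initialization to finish instead of proceeding with a half-built chain.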

 


was (Author: neomatrix):
 
{code:java}
private RequestInterceptorChainWrapper getInterceptorChain()
throws IOException {
  String user = UserGroupInformation.getCurrentUser().getUserName();
  if (!userPipelineMap.containsKey(user)) {
initializePipeline(user);
  }
  return userPipelineMap.get(user);
}

private void initializePipeline(String user) {
  RequestInterceptorChainWrapper chainWrapper = null;
  synchronized (this.userPipelineMap) {
if (this.userPipelineMap.containsKey(user)) {
  LOG.info("Request to start an already existing user: {}"
  + " was received, so ignoring.", user);
  return;
}

chainWrapper = new RequestInterceptorChainWrapper();
this.userPipelineMap.put(user, chainWrapper);
  }

  // We register the pipeline instance in the map first and then initialize it
  // later because chain initialization can be expensive and we would like to
  // release the lock as soon as possible to prevent other applications from
  // blocking when one application's chain is initializing
  LOG.info("Initializing request processing pipeline for application "
  + "for the user: {}", user);

  try {
ClientRequestInterceptor interceptorChain =
this.createRequestInterceptorChain();
interceptorChain.init(user);
chainWrapper.init(interceptorChain);
  } catch (Exception e) {
synchronized (this.userPipelineMap) {
  this.userPipelineMap.remove(user);
}
throw e;
  }
}
{code}
 

As we can see, when two client processes connect to the router, they may call 
getInterceptorChain() at the same time. Suppose the first thread finds nothing 
in userPipelineMap; it then calls initializePipeline(), builds a chainWrapper 
and puts it into userPipelineMap inside synchronized (this.userPipelineMap) {}. 
The second thread may now get that chainWrapper out of userPipelineMap even 
though it has not been initialized yet.

The second thread then calls pipeline.getRootInterceptor().submitApplication(), 
and a NullPointerException is thrown because pipeline.getRootInterceptor() is 
still null.

 

 

> NullPointerException when client first time connect to Yarn Router
> --
>
> Key: YARN-8435
> URL: https://issues.apache.org/jira/browse/YARN-8435
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: router
>Affects Versions: 3.0.2
>Reporter: rangjiaheng
>Priority: Critical
>
> When two client processes (with the same user name and the same hostname) 
> connect to the Yarn Router at the same time, to submit an application, kill 
> an application, and so on, a java.lang.NullPointerException may be thrown 
> by the Yarn Router.

[jira] [Commented] (YARN-8435) NullPointerException when client first time connect to Yarn Router

2018-06-19 Thread rangjiaheng (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517395#comment-16517395
 ] 

rangjiaheng commented on YARN-8435:
---

 
{code:java}
private RequestInterceptorChainWrapper getInterceptorChain()
throws IOException {
  String user = UserGroupInformation.getCurrentUser().getUserName();
  if (!userPipelineMap.containsKey(user)) {
initializePipeline(user);
  }
  return userPipelineMap.get(user);
}

private void initializePipeline(String user) {
  RequestInterceptorChainWrapper chainWrapper = null;
  synchronized (this.userPipelineMap) {
if (this.userPipelineMap.containsKey(user)) {
  LOG.info("Request to start an already existing user: {}"
  + " was received, so ignoring.", user);
  return;
}

chainWrapper = new RequestInterceptorChainWrapper();
this.userPipelineMap.put(user, chainWrapper);
  }

  // We register the pipeline instance in the map first and then initialize it
  // later because chain initialization can be expensive and we would like to
  // release the lock as soon as possible to prevent other applications from
  // blocking when one application's chain is initializing
  LOG.info("Initializing request processing pipeline for application "
  + "for the user: {}", user);

  try {
ClientRequestInterceptor interceptorChain =
this.createRequestInterceptorChain();
interceptorChain.init(user);
chainWrapper.init(interceptorChain);
  } catch (Exception e) {
synchronized (this.userPipelineMap) {
  this.userPipelineMap.remove(user);
}
throw e;
  }
}
{code}
 

As we can see, when two client processes connect to the router, they may call 
getInterceptorChain() at the same time. Suppose the first thread finds nothing 
in userPipelineMap; it then calls initializePipeline(), builds a chainWrapper 
and puts it into userPipelineMap inside synchronized (this.userPipelineMap) {}. 
The second thread may now get that chainWrapper out of userPipelineMap even 
though it has not been initialized yet.

The second thread then calls pipeline.getRootInterceptor().submitApplication(), 
and a NullPointerException is thrown because pipeline.getRootInterceptor() is 
still null.

 

 

> NullPointerException when client first time connect to Yarn Router
> --
>
> Key: YARN-8435
> URL: https://issues.apache.org/jira/browse/YARN-8435
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: router
>Affects Versions: 3.0.2
>Reporter: rangjiaheng
>Priority: Critical
>
> When two client processes (with the same user name and the same hostname) 
> connect to the Yarn Router at the same time, to submit an application, kill 
> an application, and so on, a java.lang.NullPointerException may be thrown 
> by the Yarn Router.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8435) NullPointerException when client first time connect to Yarn Router

2018-06-19 Thread rangjiaheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rangjiaheng updated YARN-8435:
--
Description: 
When two client processes (with the same user name and the same hostname) 
connect to the Yarn Router at the same time, to submit an application, kill 
an application, and so on, a java.lang.NullPointerException may be thrown by 
the Yarn Router.

 

 

> NullPointerException when client first time connect to Yarn Router
> --
>
> Key: YARN-8435
> URL: https://issues.apache.org/jira/browse/YARN-8435
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: router
>Affects Versions: 3.0.2
>Reporter: rangjiaheng
>Priority: Critical
>
> When two client processes (with the same user name and the same hostname) 
> connect to the Yarn Router at the same time, to submit an application, kill 
> an application, and so on, a java.lang.NullPointerException may be thrown 
> by the Yarn Router.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8435) NullPointerException when client first time connect to Yarn Router

2018-06-19 Thread rangjiaheng (JIRA)
rangjiaheng created YARN-8435:
-

 Summary: NullPointerException when client first time connect to 
Yarn Router
 Key: YARN-8435
 URL: https://issues.apache.org/jira/browse/YARN-8435
 Project: Hadoop YARN
  Issue Type: Bug
  Components: router
Affects Versions: 3.0.2
Reporter: rangjiaheng






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8434) Nodemanager not registering to active RM in federation

2018-06-19 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517222#comment-16517222
 ] 

Bibin A Chundatt edited comment on YARN-8434 at 6/19/18 5:50 PM:
-

[~subru]

It seems we also have to add {{RM_RESOURCE_TRACKER_ADDRESS}} to handle cluster 
setup in HA.

# Add the ResourceTracker bind address to SubClusterInfo and set 
{{YarnConfiguration.RM_RESOURCE_TRACKER_ADDRESS}} in {{updateRMAddress}}
# The second option is to handle failover to rm2 inside 
FederationRMFailoverProxyProvider by setting the rmId in the performFailover 
operation.
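A rough sketch of option 1, assuming the provider keeps a {{conf}} that 
{{updateRMAddress}} already applies the other sub-cluster addresses to; 
{{getRMResourceTrackerAddress()}} is hypothetical, since adding that field to 
SubClusterInfo is exactly what is being proposed:
{code:java}
// Sketch of option 1 only; getRMResourceTrackerAddress() does not exist yet.
private void updateRMAddress(SubClusterInfo subClusterInfo) {
  conf.set(YarnConfiguration.RM_ADDRESS,
      subClusterInfo.getClientRMServiceAddress());
  conf.set(YarnConfiguration.RM_RESOURCE_TRACKER_ADDRESS,
      subClusterInfo.getRMResourceTrackerAddress()); // proposed addition
}
{code}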



was (Author: bibinchundatt):
[~subru]

It seems we also have to add {{RM_RESOURCE_TRACKER_ADDRESS}} to handle cluster 
setup in HA.

# Add the ResourceTracker bind address to SubClusterInfo and set 
{{YarnConfiguration.RM_RESOURCE_TRACKER_ADDRESS}} in {{updateRMAddress}}


> Nodemanager not registering to active RM in federation
> --
>
> Key: YARN-8434
> URL: https://issues.apache.org/jira/browse/YARN-8434
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Blocker
>
> FederationRMFailoverProxyProvider doesn't handle connecting to active RM. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8412) Move ResourceRequest.clone logic everywhere into a proper API

2018-06-19 Thread Botong Huang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517358#comment-16517358
 ] 

Botong Huang commented on YARN-8412:


Great! I think we need it in trunk and branch-2; both patches are uploaded. Thx!

> Move ResourceRequest.clone logic everywhere into a proper API
> -
>
> Key: YARN-8412
> URL: https://issues.apache.org/jira/browse/YARN-8412
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-8412-branch-2.v2.patch, YARN-8412.v1.patch, 
> YARN-8412.v2.patch
>
>
> ResourceRequest.clone code is replicated in lots of places, some missing to 
> copy one field or two due to new fields added over time. This JIRA attempts 
> to move them into a proper API so that everyone can use this single 
> implementation. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7145) Identify potential flaky unit tests

2018-06-19 Thread Miklos Szegedi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi reassigned YARN-7145:


Assignee: (was: Miklos Szegedi)

> Identify potential flaky unit tests
> ---
>
> Key: YARN-7145
> URL: https://issues.apache.org/jira/browse/YARN-7145
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: nodemanager, resourcemanager
>Reporter: Miklos Szegedi
>Priority: Minor
> Attachments: YARN-7145.000.patch, YARN-7145.001.patch
>
>
> I intend to add a 200 millisecond sleep into AsyncDispatcher, and run the 
> job to identify the tests that are potentially flaky.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7449) Split up class TestYarnClient to TestYarnClient and TestYarnClientImpl

2018-06-19 Thread Miklos Szegedi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517344#comment-16517344
 ] 

Miklos Szegedi commented on YARN-7449:
--

Thank you for the patch [~snemeth]. I still see references to the 
implementation class in TestYarnClient:
{code:java}
1164YarnClientImpl impl = (YarnClientImpl) client;
1165YarnClientImpl spyClient = spy(impl);{code}
Also, could you address the whitespace issue?
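For the split, a sketch of where each piece could live (assuming Mockito, 
which these tests already use):
{code:java}
// TestYarnClientImpl: implementation-specific spying stays here.
YarnClientImpl impl = (YarnClientImpl) YarnClient.createYarnClient();
YarnClientImpl spyClient = spy(impl);

// TestYarnClient: only the abstract YarnClient API is referenced.
YarnClient client = YarnClient.createYarnClient();
{code}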

> Split up class TestYarnClient to TestYarnClient and TestYarnClientImpl
> --
>
> Key: YARN-7449
> URL: https://issues.apache.org/jira/browse/YARN-7449
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: client, yarn
>Reporter: Yufei Gu
>Assignee: Szilard Nemeth
>Priority: Minor
>  Labels: newbie, newbie++
> Attachments: YARN-7449-001.patch
>
>
> {{TestYarnClient}} tests both {{YarnClient}} and {{YarnClientImpl}}. We 
> should test {{YarnClient}} without thinking of its implementation. That's 
> the whole point of {{YarnClient}}. There are a bunch of refactors we could 
> do. The first thing is to split up class {{TestYarnClient}} into 
> {{TestYarnClient}} and {{TestYarnClientImpl}}. Let {{TestYarnClient}} only 
> test {{YarnClient}}. All implementation related stuff goes to 
> {{TestYarnClientImpl}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8391) Investigate AllocationFileLoaderService.reloadListener locking issue

2018-06-19 Thread Miklos Szegedi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517335#comment-16517335
 ] 

Miklos Szegedi commented on YARN-8391:
--

Thank you for the patch [~snemeth]. The unit test failure does not seem to be 
related, does it?

> Investigate AllocationFileLoaderService.reloadListener locking issue
> 
>
> Key: YARN-8391
> URL: https://issues.apache.org/jira/browse/YARN-8391
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.2.0
>Reporter: Haibo Chen
>Assignee: Szilard Nemeth
>Priority: Critical
> Attachments: YARN-8191.001.patch, YARN-8391.002.patch
>
>
> Per findbugs report in YARN-8390, there is some inconsistent locking of  
> reloadListener
>  
> h1. Warnings
> Click on a warning row to see full context information.
> h2. Multithreaded correctness Warnings
> ||Code||Warning||
> |IS|Inconsistent synchronization of 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadListener;
>  locked 75% of time|
> | |[Bug type IS2_INCONSISTENT_SYNC (click for 
> details)|https://builds.apache.org/job/PreCommit-YARN-Build/20939/artifact/out/new-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.html#IS2_INCONSISTENT_SYNC]
>  
> In class 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService
> Field 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadListener
> Synchronized 75% of the time
> Unsynchronized access at AllocationFileLoaderService.java:[line 117]
> Synchronized access at AllocationFileLoaderService.java:[line 212]
> Synchronized access at AllocationFileLoaderService.java:[line 228]
> Synchronized access at AllocationFileLoaderService.java:[line 269]|
> h1. Details
> h2. IS2_INCONSISTENT_SYNC: Inconsistent synchronization
> The fields of this class appear to be accessed inconsistently with respect to 
> synchronization.  This bug report indicates that the bug pattern detector 
> judged that
>  * The class contains a mix of locked and unlocked accesses,
>  * The class is *not* annotated as javax.annotation.concurrent.NotThreadSafe,
>  * At least one locked access was performed by one of the class's own 
> methods, and
>  * The number of unsynchronized field accesses (reads and writes) was no more 
> than one third of all accesses, with writes being weighed twice as high as 
> reads
> A typical bug matching this bug pattern is forgetting to synchronize one of 
> the methods in a class that is intended to be thread-safe.
> You can select the nodes labeled "Unsynchronized access" to show the code 
> locations where the detector believed that a field was accessed without 
> synchronization.
> Note that there are various sources of inaccuracy in this detector; for 
> example, the detector cannot statically detect all situations in which a lock 
> is held.  Also, even when the detector is accurate in distinguishing locked 
> vs. unlocked accesses, the code in question may still be correct.
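As a generic illustration of the pattern IS2_INCONSISTENT_SYNC flags (a 
made-up sketch, not the actual AllocationFileLoaderService code):
{code:java}
// One access reads the field without the monitor that guards the others,
// which is what findbugs reports as "synchronized 75% of the time".
class ReloadSketch {
  interface Listener { void onReload(); }

  private Listener reloadListener;

  void serviceStart() {
    reloadListener.onReload(); // unsynchronized read: the flagged access
  }

  synchronized void setReloadListener(Listener listener) {
    reloadListener = listener; // synchronized write
  }

  synchronized void reload() {
    reloadListener.onReload(); // synchronized read
  }
}
{code}
Synchronizing the remaining access (or otherwise establishing a 
happens-before, e.g. via a volatile field) is the usual way to clear the 
warning.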



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8326) Yarn 3.0 seems runs slower than Yarn 2.6

2018-06-19 Thread Tanping Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517330#comment-16517330
 ] 

Tanping Wang commented on YARN-8326:


[~shaneku...@gmail.com] When do you think you can have the patch uploaded? 
This is causing a big slowdown for Yarn 3.0 applications.

> Yarn 3.0 seems runs slower than Yarn 2.6
> 
>
> Key: YARN-8326
> URL: https://issues.apache.org/jira/browse/YARN-8326
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0
> Environment: This is the yarn-site.xml for 3.0. 
>  
> 
> 
>  hadoop.registry.dns.bind-port
>  5353
>  
> 
>  hadoop.registry.dns.domain-name
>  hwx.site
>  
> 
>  hadoop.registry.dns.enabled
>  true
>  
> 
>  hadoop.registry.dns.zone-mask
>  255.255.255.0
>  
> 
>  hadoop.registry.dns.zone-subnet
>  172.17.0.0
>  
> 
>  manage.include.files
>  false
>  
> 
>  yarn.acl.enable
>  false
>  
> 
>  yarn.admin.acl
>  yarn
>  
> 
>  yarn.client.nodemanager-connect.max-wait-ms
>  6
>  
> 
>  yarn.client.nodemanager-connect.retry-interval-ms
>  1
>  
> 
>  yarn.http.policy
>  HTTP_ONLY
>  
> 
>  yarn.log-aggregation-enable
>  false
>  
> 
>  yarn.log-aggregation.retain-seconds
>  2592000
>  
> 
>  yarn.log.server.url
>  
> [http://xx:19888/jobhistory/logs|http://whiny2.fyre.ibm.com:19888/jobhistory/logs]
>  
> 
>  yarn.log.server.web-service.url
>  
> [http://xx:8188/ws/v1/applicationhistory|http://whiny2.fyre.ibm.com:8188/ws/v1/applicationhistory]
>  
> 
>  yarn.node-labels.enabled
>  false
>  
> 
>  yarn.node-labels.fs-store.retry-policy-spec
>  2000, 500
>  
> 
>  yarn.node-labels.fs-store.root-dir
>  /system/yarn/node-labels
>  
> 
>  yarn.nodemanager.address
>  0.0.0.0:45454
>  
> 
>  yarn.nodemanager.admin-env
>  MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX
>  
> 
>  yarn.nodemanager.aux-services
>  mapreduce_shuffle,spark2_shuffle,timeline_collector
>  
> 
>  yarn.nodemanager.aux-services.mapreduce_shuffle.class
>  org.apache.hadoop.mapred.ShuffleHandler
>  
> 
>  yarn.nodemanager.aux-services.spark2_shuffle.class
>  org.apache.spark.network.yarn.YarnShuffleService
>  
> 
>  yarn.nodemanager.aux-services.spark2_shuffle.classpath
>  /usr/spark2/aux/*
>  
> 
>  yarn.nodemanager.aux-services.spark_shuffle.class
>  org.apache.spark.network.yarn.YarnShuffleService
>  
> 
>  yarn.nodemanager.aux-services.timeline_collector.class
>  
> org.apache.hadoop.yarn.server.timelineservice.collector.PerNodeTimelineCollectorsAuxService
>  
> 
>  yarn.nodemanager.bind-host
>  0.0.0.0
>  
> 
>  yarn.nodemanager.container-executor.class
>  
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor
>  
> 
>  yarn.nodemanager.container-metrics.unregister-delay-ms
>  6
>  
> 
>  yarn.nodemanager.container-monitor.interval-ms
>  3000
>  
> 
>  yarn.nodemanager.delete.debug-delay-sec
>  0
>  
> 
>  
> yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage
>  90
>  
> 
>  yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb
>  1000
>  
> 
>  yarn.nodemanager.disk-health-checker.min-healthy-disks
>  0.25
>  
> 
>  yarn.nodemanager.health-checker.interval-ms
>  135000
>  
> 
>  yarn.nodemanager.health-checker.script.timeout-ms
>  6
>  
> 
>  
> yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage
>  false
>  
> 
>  yarn.nodemanager.linux-container-executor.group
>  hadoop
>  
> 
>  
> yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users
>  false
>  
> 
>  yarn.nodemanager.local-dirs
>  /hadoop/yarn/local
>  
> 
>  yarn.nodemanager.log-aggregation.compression-type
>  gz
>  
> 
>  yarn.nodemanager.log-aggregation.debug-enabled
>  false
>  
> 
>  yarn.nodemanager.log-aggregation.num-log-files-per-app
>  30
>  
> 
>  
> yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds
>  3600
>  
> 
>  yarn.nodemanager.log-dirs
>  /hadoop/yarn/log
>  
> 
>  yarn.nodemanager.log.retain-seconds
>  604800
>  
> 
>  yarn.nodemanager.pmem-check-enabled
>  false
>  
> 
>  yarn.nodemanager.recovery.dir
>  /var/log/hadoop-yarn/nodemanager/recovery-state
>  
> 
>  yarn.nodemanager.recovery.enabled
>  true
>  
> 
>  yarn.nodemanager.recovery.supervised
>  true
>  
> 
>  yarn.nodemanager.remote-app-log-dir
>  /app-logs
>  
> 
>  yarn.nodemanager.remote-app-log-dir-suffix
>  logs
>  
> 
>  yarn.nodemanager.resource-plugins
>  
>  
> 
>  yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices
>  auto
>  
> 
>  yarn.nodemanager.resource-plugins.gpu.docker-plugin
>  nvidia-docker-v1
>  
> 
>  yarn.nodemanager.resource-plugins.gpu.docker-plugin.nvidiadocker-
>  v1.endpoint
>  [http://localhost:3476/v1.0/docker/cli]
>  
> 
>  
> yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables
>  
>  
> 
>  yarn.nodemanager.resource.cpu-vcores
>  6
>  
> 
>  yarn.no

[jira] [Commented] (YARN-8434) Nodemanager not registering to active RM in federation

2018-06-19 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517222#comment-16517222
 ] 

Bibin A Chundatt commented on YARN-8434:


[~subru]

Seems we have to add {{RM_RESOURCE_TRACKER_ADDRESS}} also to handle cluster 
setup in HA.

# Add ResourceTracker bind address to SubClusterInfo and  set 
YarnConfiguration.RM_RESOURCE_TRACKER_ADDRESS}} in {{updateRMAddress}}


> Nodemanager not registering to active RM in federation
> --
>
> Key: YARN-8434
> URL: https://issues.apache.org/jira/browse/YARN-8434
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Blocker
>
> FederationRMFailoverProxyProvider doesn't handle connecting to active RM. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8434) Nodemanager not registering to active RM in federation

2018-06-19 Thread Bibin A Chundatt (JIRA)
Bibin A Chundatt created YARN-8434:
--

 Summary: Nodemanager not registering to active RM in federation
 Key: YARN-8434
 URL: https://issues.apache.org/jira/browse/YARN-8434
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Bibin A Chundatt
Assignee: Bibin A Chundatt


FederationRMFailoverProxyProvider doesn't handle connecting to the active RM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8326) Yarn 3.0 seems runs slower than Yarn 2.6

2018-06-19 Thread Shane Kumpf (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517066#comment-16517066
 ] 

Shane Kumpf commented on YARN-8326:
---

Thanks for the analysis, [~eyang]. Based on your findings, it does appear this 
should be removed.

> Yarn 3.0 seems runs slower than Yarn 2.6
> 
>
> Key: YARN-8326
> URL: https://issues.apache.org/jira/browse/YARN-8326
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0
> Environment: This is the yarn-site.xml for 3.0. 
>  
> <property>
>   <name>hadoop.registry.dns.bind-port</name>
>   <value>5353</value>
> </property>
> <property>
>   <name>hadoop.registry.dns.domain-name</name>
>   <value>hwx.site</value>
> </property>
> <property>
>   <name>hadoop.registry.dns.enabled</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hadoop.registry.dns.zone-mask</name>
>   <value>255.255.255.0</value>
> </property>
> <property>
>   <name>hadoop.registry.dns.zone-subnet</name>
>   <value>172.17.0.0</value>
> </property>
> <property>
>   <name>manage.include.files</name>
>   <value>false</value>
> </property>
> <property>
>   <name>yarn.acl.enable</name>
>   <value>false</value>
> </property>
> <property>
>   <name>yarn.admin.acl</name>
>   <value>yarn</value>
> </property>
> <property>
>   <name>yarn.client.nodemanager-connect.max-wait-ms</name>
>   <value>6</value>
> </property>
> <property>
>   <name>yarn.client.nodemanager-connect.retry-interval-ms</name>
>   <value>1</value>
> </property>
> <property>
>   <name>yarn.http.policy</name>
>   <value>HTTP_ONLY</value>
> </property>
> <property>
>   <name>yarn.log-aggregation-enable</name>
>   <value>false</value>
> </property>
> <property>
>   <name>yarn.log-aggregation.retain-seconds</name>
>   <value>2592000</value>
> </property>
> <property>
>   <name>yarn.log.server.url</name>
>   <value>http://xx:19888/jobhistory/logs</value>
> </property>
> <property>
>   <name>yarn.log.server.web-service.url</name>
>   <value>http://xx:8188/ws/v1/applicationhistory</value>
> </property>
> <property>
>   <name>yarn.node-labels.enabled</name>
>   <value>false</value>
> </property>
> <property>
>   <name>yarn.node-labels.fs-store.retry-policy-spec</name>
>   <value>2000, 500</value>
> </property>
> <property>
>   <name>yarn.node-labels.fs-store.root-dir</name>
>   <value>/system/yarn/node-labels</value>
> </property>
> <property>
>   <name>yarn.nodemanager.address</name>
>   <value>0.0.0.0:45454</value>
> </property>
> <property>
>   <name>yarn.nodemanager.admin-env</name>
>   <value>MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX</value>
> </property>
> <property>
>   <name>yarn.nodemanager.aux-services</name>
>   <value>mapreduce_shuffle,spark2_shuffle,timeline_collector</value>
> </property>
> <property>
>   <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
>   <value>org.apache.hadoop.mapred.ShuffleHandler</value>
> </property>
> <property>
>   <name>yarn.nodemanager.aux-services.spark2_shuffle.class</name>
>   <value>org.apache.spark.network.yarn.YarnShuffleService</value>
> </property>
> <property>
>   <name>yarn.nodemanager.aux-services.spark2_shuffle.classpath</name>
>   <value>/usr/spark2/aux/*</value>
> </property>
> <property>
>   <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
>   <value>org.apache.spark.network.yarn.YarnShuffleService</value>
> </property>
> <property>
>   <name>yarn.nodemanager.aux-services.timeline_collector.class</name>
>   <value>org.apache.hadoop.yarn.server.timelineservice.collector.PerNodeTimelineCollectorsAuxService</value>
> </property>
> <property>
>   <name>yarn.nodemanager.bind-host</name>
>   <value>0.0.0.0</value>
> </property>
> <property>
>   <name>yarn.nodemanager.container-executor.class</name>
>   <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
> </property>
> <property>
>   <name>yarn.nodemanager.container-metrics.unregister-delay-ms</name>
>   <value>6</value>
> </property>
> <property>
>   <name>yarn.nodemanager.container-monitor.interval-ms</name>
>   <value>3000</value>
> </property>
> <property>
>   <name>yarn.nodemanager.delete.debug-delay-sec</name>
>   <value>0</value>
> </property>
> <property>
>   <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
>   <value>90</value>
> </property>
> <property>
>   <name>yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb</name>
>   <value>1000</value>
> </property>
> <property>
>   <name>yarn.nodemanager.disk-health-checker.min-healthy-disks</name>
>   <value>0.25</value>
> </property>
> <property>
>   <name>yarn.nodemanager.health-checker.interval-ms</name>
>   <value>135000</value>
> </property>
> <property>
>   <name>yarn.nodemanager.health-checker.script.timeout-ms</name>
>   <value>6</value>
> </property>
> <property>
>   <name>yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage</name>
>   <value>false</value>
> </property>
> <property>
>   <name>yarn.nodemanager.linux-container-executor.group</name>
>   <value>hadoop</value>
> </property>
> <property>
>   <name>yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users</name>
>   <value>false</value>
> </property>
> <property>
>   <name>yarn.nodemanager.local-dirs</name>
>   <value>/hadoop/yarn/local</value>
> </property>
> <property>
>   <name>yarn.nodemanager.log-aggregation.compression-type</name>
>   <value>gz</value>
> </property>
> <property>
>   <name>yarn.nodemanager.log-aggregation.debug-enabled</name>
>   <value>false</value>
> </property>
> <property>
>   <name>yarn.nodemanager.log-aggregation.num-log-files-per-app</name>
>   <value>30</value>
> </property>
> <property>
>   <name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name>
>   <value>3600</value>
> </property>
> <property>
>   <name>yarn.nodemanager.log-dirs</name>
>   <value>/hadoop/yarn/log</value>
> </property>
> <property>
>   <name>yarn.nodemanager.log.retain-seconds</name>
>   <value>604800</value>
> </property>
> <property>
>   <name>yarn.nodemanager.pmem-check-enabled</name>
>   <value>false</value>
> </property>
> <property>
>   <name>yarn.nodemanager.recovery.dir</name>
>   <value>/var/log/hadoop-yarn/nodemanager/recovery-state</value>
> </property>
> <property>
>   <name>yarn.nodemanager.recovery.enabled</name>
>   <value>true</value>
> </property>
> <property>
>   <name>yarn.nodemanager.recovery.supervised</name>
>   <value>true</value>
> </property>
> <property>
>   <name>yarn.nodemanager.remote-app-log-dir</name>
>   <value>/app-logs</value>
> </property>
> <property>
>   <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
>   <value>logs</value>
> </property>
> <property>
>   <name>yarn.nodemanager.resource-plugins</name>
>   <value></value>
> </property>
> <property>
>   <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
>   <value>auto</value>
> </property>
> <property>
>   <name>yarn.nodemanager.resource-plugins.gpu.docker-plugin</name>
>   <value>nvidia-docker-v1</value>
> </property>
> <property>
>   <name>yarn.nodemanager.resource-plugins.gpu.docker-plugin.nvidia-docker-v1.endpoint</name>
>   <value>http://localhost:3476/v1.0/docker/cli</value>
> </property>
> <property>
>   <name>yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables</name>
>   <value></value>
> </property>
> <property>
>   <name>yarn.nodemanager.resource.cpu-vcores</name>
>   <value>6</value>
> </property>
> <property>
>   <name>yarn.nodemanager.resource.memory-mb</name>
>   <value>12288</value>
> </property>

[jira] [Commented] (YARN-8180) YARN Federation has not implemented blacklist sub-cluster for AM routing

2018-06-19 Thread Abhishek Modi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16516994#comment-16516994
 ] 

Abhishek Modi commented on YARN-8180:
-

[~giovanni.fumarola] [~elgoiri], could you please review it?

> YARN Federation has not implemented blacklist sub-cluster for AM routing
> 
>
> Key: YARN-8180
> URL: https://issues.apache.org/jira/browse/YARN-8180
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: federation
>Reporter: Shen Yinjie
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-8180.001.patch
>
>
> Property "yarn.federation.blacklist-subclusters" is defined in 
> yarn-fedeartion doc,but it has not been defined and implemented in Java code.
> In FederationClientInterceptor#submitApplication()
> {code:java}
> List blacklist = new ArrayList();
> for (int i = 0; i < numSubmitRetries; ++i) {
> SubClusterId subClusterId = policyFacade.getHomeSubcluster(
> request.getApplicationSubmissionContext(), blacklist);
> {code}
>  
>  
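
For illustration, a minimal sketch of how the documented property could feed 
that loop (the key string comes from the yarn-federation doc; the helper class 
and its wiring are assumptions, not committed code):

{code:java}
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;

// Hypothetical helper: read the documented-but-unimplemented property and
// seed the AM-routing blacklist from it.
class FederationBlacklistLoader {
  // Key name taken from the yarn-federation documentation; it is not
  // defined in YarnConfiguration today, which is the gap reported here.
  static final String BLACKLIST_SUBCLUSTERS =
      "yarn.federation.blacklist-subclusters";

  static List<SubClusterId> loadBlacklist(Configuration conf) {
    List<SubClusterId> blacklist = new ArrayList<>();
    for (String id : conf.getTrimmedStrings(BLACKLIST_SUBCLUSTERS)) {
      blacklist.add(SubClusterId.newInstance(id));
    }
    return blacklist;
  }
}
{code}

The returned list would then replace the empty list that 
{{submitApplication()}} currently starts from.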



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8391) Investigate AllocationFileLoaderService.reloadListener locking issue

2018-06-19 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16516929#comment-16516929
 ] 

genericqa commented on YARN-8391:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
35s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
3s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  1m 
43s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m  7s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8391 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928318/YARN-8391.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8491ae4bea56 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f386e78 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/21052/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn

[jira] [Commented] (YARN-8391) Investigate AllocationFileLoaderService.reloadListener locking issue

2018-06-19 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16516834#comment-16516834
 ] 

Szilard Nemeth commented on YARN-8391:
--

Adding a new patch, as the previous one did not address the issue.

> Investigate AllocationFileLoaderService.reloadListener locking issue
> 
>
> Key: YARN-8391
> URL: https://issues.apache.org/jira/browse/YARN-8391
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.2.0
>Reporter: Haibo Chen
>Assignee: Szilard Nemeth
>Priority: Critical
> Attachments: YARN-8191.001.patch, YARN-8391.002.patch
>
>
> Per the findbugs report in YARN-8390, there is inconsistent locking of 
> {{reloadListener}}.
>  
> h1. Warnings
> Click on a warning row to see full context information.
> h2. Multithreaded correctness Warnings
> ||Code||Warning||
> |IS|Inconsistent synchronization of 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadListener;
>  locked 75% of time|
> | |[Bug type IS2_INCONSISTENT_SYNC (click for 
> details)|https://builds.apache.org/job/PreCommit-YARN-Build/20939/artifact/out/new-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.html#IS2_INCONSISTENT_SYNC]
>  
> In class 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService
> Field 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadListener
> Synchronized 75% of the time
> Unsynchronized access at AllocationFileLoaderService.java:[line 117]
> Synchronized access at AllocationFileLoaderService.java:[line 212]
> Synchronized access at AllocationFileLoaderService.java:[line 228]
> Synchronized access at AllocationFileLoaderService.java:[line 269]|
> h1. Details
> h2. IS2_INCONSISTENT_SYNC: Inconsistent synchronization
> The fields of this class appear to be accessed inconsistently with respect to 
> synchronization.  This bug report indicates that the bug pattern detector 
> judged that
>  * The class contains a mix of locked and unlocked accesses,
>  * The class is *not* annotated as javax.annotation.concurrent.NotThreadSafe,
>  * At least one locked access was performed by one of the class's own 
> methods, and
>  * The number of unsynchronized field accesses (reads and writes) was no more 
> than one third of all accesses, with writes being weighed twice as high as 
> reads
> A typical bug matching this bug pattern is forgetting to synchronize one of 
> the methods in a class that is intended to be thread-safe.
> You can select the nodes labeled "Unsynchronized access" to show the code 
> locations where the detector believed that a field was accessed without 
> synchronization.
> Note that there are various sources of inaccuracy in this detector; for 
> example, the detector cannot statically detect all situations in which a lock 
> is held.  Also, even when the detector is accurate in distinguishing locked 
> vs. unlocked accesses, the code in question may still be correct.
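
To make the flagged pattern concrete, here is a small self-contained example 
of the kind of code that trips IS2_INCONSISTENT_SYNC, together with one 
conventional fix (illustrative only; this is not the 
AllocationFileLoaderService source):

{code:java}
// Illustrative only: a listener field written under the lock but read
// without it, which is exactly what the detector flags.
class ReloadService {
  interface Listener {
    void onReload();
  }

  private Listener reloadListener;

  synchronized void setReloadListener(Listener listener) { // locked write
    this.reloadListener = listener;
  }

  void reloadAllocationsBroken() {
    // Unlocked read: a concurrent setReloadListener() may not be visible
    // here, so this thread can act on a stale reference. FindBugs counts
    // this access toward the unsynchronized share.
    if (reloadListener != null) {
      reloadListener.onReload();
    }
  }

  void reloadAllocationsFixed() {
    Listener listener;
    synchronized (this) { // copy the reference under the same lock
      listener = reloadListener;
    }
    if (listener != null) {
      listener.onReload();
    }
  }
}
{code}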



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8391) Investigate AllocationFileLoaderService.reloadListener locking issue

2018-06-19 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-8391:
-
Attachment: YARN-8391.002.patch

> Investigate AllocationFileLoaderService.reloadListener locking issue
> 
>
> Key: YARN-8391
> URL: https://issues.apache.org/jira/browse/YARN-8391
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.2.0
>Reporter: Haibo Chen
>Assignee: Szilard Nemeth
>Priority: Critical
> Attachments: YARN-8191.001.patch, YARN-8391.002.patch
>
>
> Per the findbugs report in YARN-8390, there is inconsistent locking of 
> {{reloadListener}}; the full findbugs warning is quoted in the comment above.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7668) Remove unused variables from ContainerLocalizer

2018-06-19 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16516808#comment-16516808
 ] 

ASF GitHub Bot commented on YARN-7668:
--

Github user aajisaka commented on the issue:

https://github.com/apache/hadoop/pull/364
  
Thanks!


> Remove unused variables from ContainerLocalizer
> ---
>
> Key: YARN-7668
> URL: https://issues.apache.org/jira/browse/YARN-7668
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Ray Chiang
>Assignee: Dedunu Dhananjaya
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.2.0
>
> Attachments: 364.patch
>
>
> While figuring out something else, I found two class constants in 
> ContainerLocalizer that look like they aren't being used anymore.
> {noformat}
>   public static final String OUTPUTDIR = "output";
>   public static final String WORKDIR = "work";
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7451) Add missing tests to verify the presence of custom resources of RM apps and scheduler webservice endpoints

2018-06-19 Thread Gergo Repas (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16516800#comment-16516800
 ] 

Gergo Repas commented on YARN-7451:
---

[~snemeth] Thanks for the updated patch!

+1 (non-binding)

> Add missing tests to verify the presence of custom resources of RM apps and 
> scheduler webservice endpoints
> --
>
> Key: YARN-7451
> URL: https://issues.apache.org/jira/browse/YARN-7451
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, restapi
>Affects Versions: 3.0.0
>Reporter: Grant Sohn
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-7451.001.patch, YARN-7451.002.patch, 
> YARN-7451.003.patch, YARN-7451.004.patch, YARN-7451.005.patch, 
> YARN-7451.006.patch, YARN-7451.007.patch, YARN-7451.008.patch, 
> YARN-7451.009.patch, YARN-7451.010.patch, YARN-7451.011.patch, 
> YARN-7451.012.patch, YARN-7451.013.patch, YARN-7451.014.patch, 
> YARN-7451__Expose_custom_resource_types_on_RM_scheduler_API_as_flattened_map01_02.patch
>
>
>  
> Originally, this issue was about serializing custom resources along with 
> normal resources in the RM apps and scheduler webservice endpoints.
> However, since YARN-7817 implemented that first, this issue is now a 
> complement to YARN-7817, adding several unit tests to verify the correctness 
> of the responses of these webservice endpoints.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7579) Add support for FPGA information shown in webUI

2018-06-19 Thread Zhankun Tang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16516736#comment-16516736
 ] 

Zhankun Tang commented on YARN-7579:


I'd like to move this forward, but I'm not quite sure which version we should 
implement this for: YARN web UI v2, or both v1 and v2?

Any suggestions? [~leftnoteasy], [~jianhe]

> Add support for FPGA information shown in webUI
> ---
>
> Key: YARN-7579
> URL: https://issues.apache.org/jira/browse/YARN-7579
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
>Priority: Major
>
> Support retrieving FPGA information via REST and viewing it in the web UI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org