[jira] [Commented] (YARN-8644) Make RMAppImpl$FinalTransition more readable + add more test coverage

2018-10-01 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16635024#comment-16635024
 ] 

Szilard Nemeth commented on YARN-8644:
--

patch005 fixes whitespace issues.

> Make RMAppImpl$FinalTransition more readable + add more test coverage
> -
>
> Key: YARN-8644
> URL: https://issues.apache.org/jira/browse/YARN-8644
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-8644.001.patch, YARN-8644.002.patch, 
> YARN-8644.003.patch, YARN-8644.004.patch, YARN-8644.005.patch
>
>







[jira] [Updated] (YARN-8732) Add unit tests of min/max allocation for custom resource types in FairScheduler

2018-10-01 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-8732:
-
Attachment: YARN-8732.005.patch

> Add unit tests of min/max allocation for custom resource types in 
> FairScheduler
> ---
>
> Key: YARN-8732
> URL: https://issues.apache.org/jira/browse/YARN-8732
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
>  Labels: unittest
> Attachments: YARN-8732.001.patch, YARN-8732.002.patch, 
> YARN-8732.003.patch, YARN-8732.004.patch, YARN-8732.005.patch
>
>
> Create testcase like this, but for FS: 
> org.apache.hadoop.yarn.server.resourcemanager.TestApplicationMasterService#testValidateRequestCapacityAgainstMinMaxAllocationFor3rdResourceTypes






[jira] [Commented] (YARN-8732) Add unit tests of min/max allocation for custom resource types in FairScheduler

2018-10-01 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16635023#comment-16635023
 ] 

Szilard Nemeth commented on YARN-8732:
--

patch005 fixes whitespace and checkstyle issues.

> Add unit tests of min/max allocation for custom resource types in 
> FairScheduler
> ---
>
> Key: YARN-8732
> URL: https://issues.apache.org/jira/browse/YARN-8732
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
>  Labels: unittest
> Attachments: YARN-8732.001.patch, YARN-8732.002.patch, 
> YARN-8732.003.patch, YARN-8732.004.patch, YARN-8732.005.patch
>
>
> Create testcase like this, but for FS: 
> org.apache.hadoop.yarn.server.resourcemanager.TestApplicationMasterService#testValidateRequestCapacityAgainstMinMaxAllocationFor3rdResourceTypes






[jira] [Commented] (YARN-4254) ApplicationAttempt stuck for ever due to UnknowHostexception

2018-10-01 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-4254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16635022#comment-16635022
 ] 

Bibin A Chundatt commented on YARN-4254:


[~sunilg]/[~jlowe]

Could you please help review this patch?

> ApplicationAttempt stuck for ever due to UnknowHostexception
> 
>
> Key: YARN-4254
> URL: https://issues.apache.org/jira/browse/YARN-4254
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: 0001-YARN-4254.patch, Logs.txt, Test.patch, 
> YARN-4254.002.patch
>
>
> Scenario
> ===
> 1. RM HA and 5 NMs are available in the cluster and working fine.
> 2. Add one more NM to the same cluster, but the RM's /etc/hosts is not updated.
> 3. Submit an application to the same cluster.
> If the AM gets allocated to the newly added NM, the *application attempt will 
> get stuck forever*. The user will not get to know why this happened.
> Impact
> 1. RM logs get overloaded with exceptions.
> 2. The application gets stuck forever.
> Handling suggestion: YARN-261 allows failing an application attempt.
> If we fail the stuck attempt, the next attempt could get assigned to another NM.
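For illustration, a minimal sketch of the suggested handling (hypothetical method and wiring, not the actual YARN-261 change): fail the attempt with a clear diagnostic when the NM host cannot be resolved, so the next attempt can be scheduled on another node.

{code:java}
import java.net.UnknownHostException;

import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
import org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptEvent;
import org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptEventType;

// Hypothetical handler on the RM side: instead of retrying forever against
// a host the RM cannot resolve, fail the attempt with a diagnostic message.
// rmContext is assumed to be the launcher's RMContext field.
void onAMLaunchFailure(ApplicationAttemptId attemptId, Throwable cause) {
  if (cause instanceof UnknownHostException) {
    String diagnostics = "AM launch failed: NM host could not be resolved ("
        + cause.getMessage() + "). Check /etc/hosts on the RM host.";
    rmContext.getDispatcher().getEventHandler().handle(
        new RMAppAttemptEvent(attemptId,
            RMAppAttemptEventType.LAUNCH_FAILED, diagnostics));
  }
}
{code}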






[jira] [Updated] (YARN-8644) Make RMAppImpl$FinalTransition more readable + add more test coverage

2018-10-01 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-8644:
-
Attachment: YARN-8644.005.patch

> Make RMAppImpl$FinalTransition more readable + add more test coverage
> -
>
> Key: YARN-8644
> URL: https://issues.apache.org/jira/browse/YARN-8644
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-8644.001.patch, YARN-8644.002.patch, 
> YARN-8644.003.patch, YARN-8644.004.patch, YARN-8644.005.patch
>
>







[jira] [Updated] (YARN-8644) Make RMAppImpl$FinalTransition more readable + add more test coverage

2018-10-01 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-8644:
-
Attachment: (was: YARN-8644.005.patch)

> Make RMAppImpl$FinalTransition more readable + add more test coverage
> -
>
> Key: YARN-8644
> URL: https://issues.apache.org/jira/browse/YARN-8644
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-8644.001.patch, YARN-8644.002.patch, 
> YARN-8644.003.patch, YARN-8644.004.patch
>
>







[jira] [Updated] (YARN-8644) Make RMAppImpl$FinalTransition more readable + add more test coverage

2018-10-01 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-8644:
-
Attachment: YARN-8644.005.patch

> Make RMAppImpl$FinalTransition more readable + add more test coverage
> -
>
> Key: YARN-8644
> URL: https://issues.apache.org/jira/browse/YARN-8644
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-8644.001.patch, YARN-8644.002.patch, 
> YARN-8644.003.patch, YARN-8644.004.patch
>
>







[jira] [Commented] (YARN-8644) Make RMAppImpl$FinalTransition more readable + add more test coverage

2018-10-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634873#comment-16634873
 ] 

Hadoop QA commented on YARN-8644:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 14 new + 121 unchanged - 15 fixed = 135 total (was 136) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 74m 
18s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8644 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942055/YARN-8644.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b892ad4e289c 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f6c5ef9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/22024/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/22024/artifact/out/whitespace-eol.txt
 |
|  Test Results | 

[jira] [Commented] (YARN-8732) Add unit tests of min/max allocation for custom resource types in FairScheduler

2018-10-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634840#comment-16634840
 ] 

Hadoop QA commented on YARN-8732:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 3 new + 0 unchanged - 19 fixed = 3 total (was 19) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 73m  
6s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}129m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8732 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942051/YARN-8732.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f45d0777d763 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7d08219 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/22022/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/22022/artifact/out/whitespace-eol.txt
 |
|  Test Results | 

[jira] [Commented] (YARN-8553) Reduce complexity of AHSWebService getApps method

2018-10-01 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634839#comment-16634839
 ] 

Szilard Nemeth commented on YARN-8553:
--

Hi [~bsteinbach]!
Thanks for your comments.
1. Booleans default to false in Java, so I'm simply relying on that (see the sketch below). I also removed the long initializers.
2. Good point, I renamed the local variable.
3. I would keep those tests as well. The tests that pass normal values to the setters prove that each builder method works independently of the other builder methods, acting as a safety net for future changes.
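To illustrate point 1 (field names are hypothetical): Java zero-initializes instance fields, so {{boolean}} fields start as {{false}} and {{long}} fields as {{0L}} without explicit initializers.

{code:java}
// Hypothetical builder fields: the explicit "= false" / "= 0L" initializers
// can be dropped because Java zero-initializes instance fields.
public class AppsRequestBuilder {
  private boolean checkCount; // false by default
  private long countNum;      // 0L by default

  public AppsRequestBuilder withCheckCount(boolean checkCount) {
    this.checkCount = checkCount;
    return this;
  }

  public AppsRequestBuilder withCountNum(long countNum) {
    this.countNum = countNum;
    return this;
  }
}
{code}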

> Reduce complexity of AHSWebService getApps method
> -
>
> Key: YARN-8553
> URL: https://issues.apache.org/jira/browse/YARN-8553
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-8553.001.patch, YARN-8553.001.patch, 
> YARN-8553.002.patch, YARN-8553.003.patch
>
>
> YARN-8501 refactor the RMWebService#getApp. Similar refactoring required in 
> AHSWebservice. 






[jira] [Updated] (YARN-8553) Reduce complexity of AHSWebService getApps method

2018-10-01 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-8553:
-
Attachment: YARN-8553.003.patch

> Reduce complexity of AHSWebService getApps method
> -
>
> Key: YARN-8553
> URL: https://issues.apache.org/jira/browse/YARN-8553
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-8553.001.patch, YARN-8553.001.patch, 
> YARN-8553.002.patch, YARN-8553.003.patch
>
>
> YARN-8501 refactor the RMWebService#getApp. Similar refactoring required in 
> AHSWebservice. 






[jira] [Commented] (YARN-8785) Error Message "Invalid docker rw mount" not helpful

2018-10-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634818#comment-16634818
 ] 

Hadoop QA commented on YARN-8785:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m  
1s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
29m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m  4s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.TestNMProxy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:a607c02 |
| JIRA Issue | YARN-8785 |
| Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 15b5f4a96de3 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7d08219 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/22023/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22023/testReport/ |
| Max. process+thread count | 439 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/22023/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Error Message "Invalid docker rw mount" not helpful
> ---
>
> Key: YARN-8785
> URL: https://issues.apache.org/jira/browse/YARN-8785
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.0, 2.9.1, 3.1.1, 3.1.2
>Reporter: Simon Prewo
>Assignee: Simon Prewo
>Priority: Major
>  Labels: Docker
> Fix For: 3.1.2
>
>  

[jira] [Commented] (YARN-8590) Fair scheduler promotion does not update container execution type and token

2018-10-01 Thread Zoltan Siegl (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634800#comment-16634800
 ] 

Zoltan Siegl commented on YARN-8590:


As for the tests, you could extract some copy-pasted functionality by creating 
something like:
{code:java}
  private List<Container> createAppAttemptAndGetAllocatedContainers(
      RMNode node, int memory, int totalMemory, String queueName,
      String userName, ExecutionType expectedExecutionType,
      int expectedContainerSize) {
    ApplicationAttemptId appAttempt =
        createSchedulingRequest(memory, queueName, userName, 1, false);
    scheduler.handle(new NodeUpdateSchedulerEvent(node));
    assertEquals(totalMemory, scheduler.getQueueManager().getQueue(queueName)
        .getGuaranteedResourceUsage().getMemorySize());
    List<Container> allocatedContainers =
        scheduler.getSchedulerApp(appAttempt).pullNewlyAllocatedContainers();
    assertEquals(expectedContainerSize, allocatedContainers.size());
    assertEquals("unexpected container execution type",
        expectedExecutionType,
        allocatedContainers.get(0).getExecutionType());
    return allocatedContainers;
  }
{code}
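A call site then shrinks to a single line, e.g. (values hypothetical):

{code:java}
List<Container> allocated = createAppAttemptAndGetAllocatedContainers(
    node, 1024, 1024, "queue1", "user1", ExecutionType.GUARANTEED, 1);
{code}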

Otherwise LGTM +1 (Non binding)

> Fair scheduler promotion does not update container execution type and token
> ---
>
> Key: YARN-8590
> URL: https://issues.apache.org/jira/browse/YARN-8590
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: YARN-1011
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-8590-YARN-1011.00.patch, 
> YARN-8590-YARN-1011.01.patch, YARN-8590-YARN-1011.02.patch
>
>
> Fair Scheduler promotion of opportunistic containers does not update 
> container execution type and token. This leads to incorrect resource 
> accounting when the promoted containers are released.






[jira] [Commented] (YARN-8644) Make RMAppImpl$FinalTransition more readable + add more test coverage

2018-10-01 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634795#comment-16634795
 ] 

Szilard Nemeth commented on YARN-8644:
--

Thanks [~haibochen] for your comments!
Removed the AppCreationTestHelper class, as we agreed it did not add much value.
Please check my latest patch!
Thanks!

> Make RMAppImpl$FinalTransition more readable + add more test coverage
> -
>
> Key: YARN-8644
> URL: https://issues.apache.org/jira/browse/YARN-8644
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-8644.001.patch, YARN-8644.002.patch, 
> YARN-8644.003.patch, YARN-8644.004.patch
>
>







[jira] [Updated] (YARN-8644) Make RMAppImpl$FinalTransition more readable + add more test coverage

2018-10-01 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-8644:
-
Attachment: YARN-8644.004.patch

> Make RMAppImpl$FinalTransition more readable + add more test coverage
> -
>
> Key: YARN-8644
> URL: https://issues.apache.org/jira/browse/YARN-8644
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-8644.001.patch, YARN-8644.002.patch, 
> YARN-8644.003.patch, YARN-8644.004.patch
>
>







[jira] [Created] (YARN-8839) Define a protocol exchange between websocket client and server for interactive shell

2018-10-01 Thread Eric Yang (JIRA)
Eric Yang created YARN-8839:
---

 Summary: Define a protocol exchange between websocket client and 
server for interactive shell
 Key: YARN-8839
 URL: https://issues.apache.org/jira/browse/YARN-8839
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Reporter: Eric Yang


Running an interactive shell is more than piping stdio from docker exec through a 
web socket.  To enable terminal-based programs to run, certain functions must 
operate outside of the stdio streams to the destination program.  A couple of 
known functions that improve terminal usability:

# Resize terminal columns and rows
# Set the title of the window
# Upload files via the zmodem protocol
# Set the terminal type
# Heartbeat (poll the server side for more data)
# Send keystroke payload to the server side

If we want to be on par with commonly supported ssh terminal functions, we 
need to develop a set of protocols between the websocket client and server 
(see the sketch below).  Client and server intercept these messages to enable 
functions that normally live outside of the stdio streams.
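A minimal sketch of what such a control message could look like (type and field names are illustrative only, not a proposed protocol): control messages travel as small JSON text frames, multiplexed with the raw stdio frames; zmodem file upload would additionally need binary framing.

{code:java}
// Illustrative envelope for control messages multiplexed with the stdio
// stream over the websocket session.
public final class TerminalControlMessage {
  public enum Type { RESIZE, SET_TITLE, SET_TERM_TYPE, HEARTBEAT, KEYSTROKE }

  private Type type;
  private int columns;    // used by RESIZE
  private int rows;       // used by RESIZE
  private String payload; // window title, terminal type, or keystroke data

  // A resize request would serialize to, e.g.:
  // {"type":"RESIZE","columns":120,"rows":40}
}
{code}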






[jira] [Updated] (YARN-8732) Add unit tests of min/max allocation for custom resource types in FairScheduler

2018-10-01 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-8732:
-
Attachment: YARN-8732.004.patch

> Add unit tests of min/max allocation for custom resource types in 
> FairScheduler
> ---
>
> Key: YARN-8732
> URL: https://issues.apache.org/jira/browse/YARN-8732
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
>  Labels: unittest
> Attachments: YARN-8732.001.patch, YARN-8732.002.patch, 
> YARN-8732.003.patch, YARN-8732.004.patch
>
>
> Create testcase like this, but for FS: 
> org.apache.hadoop.yarn.server.resourcemanager.TestApplicationMasterService#testValidateRequestCapacityAgainstMinMaxAllocationFor3rdResourceTypes






[jira] [Commented] (YARN-8732) Add unit tests of min/max allocation for custom resource types in FairScheduler

2018-10-01 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634751#comment-16634751
 ] 

Szilard Nemeth commented on YARN-8732:
--

A couple of changes in the new patch004 (see the sketch below for the base class pattern):
1. Javadoc added for the new test classes.
2. The Test annotation for testValidateRequestCapacityAgainstMinMaxAllocation 
/ testRequestCapacityMinMaxAllocationForResourceTypes moved to the base test 
class, so I can avoid having every testcase call super. for each abstract 
test method.
3. As a consequence of 2., the queue names for the CS / FS test classes are now 
provided by the implementors of the abstract method from the base class.
4. The atomic integer fields moved to TestApplicationMasterInterceptor, as 
they were used exclusively by that class.
5. TestApplicationMasterInterceptor is no longer a subclass of 
ApplicationMasterServiceTestBase, as it did not use any helper methods and 
only required a configuration instance. Moreover, it is semantically 
different from the CS / FS test classes.
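A condensed sketch of the base class pattern from points 2 and 3 (the subclass name and method bodies are illustrative only):

{code:java}
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.junit.Test;

// Sketch: the @Test methods live in the base class; subclasses only supply
// the scheduler-specific pieces through abstract methods.
public abstract class ApplicationMasterServiceTestBase {
  protected abstract String getDefaultQueueName();
  protected abstract YarnConfiguration createYarnConfig();

  @Test
  public void testValidateRequestCapacityAgainstMinMaxAllocation()
      throws Exception {
    // common test body, parameterized by the abstract methods above
  }
}

// Hypothetical FS subclass for illustration:
class TestApplicationMasterServiceWithFS
    extends ApplicationMasterServiceTestBase {
  @Override
  protected String getDefaultQueueName() {
    return "root.queue1"; // hypothetical queue name
  }

  @Override
  protected YarnConfiguration createYarnConfig() {
    return new YarnConfiguration(); // FS-specific config in the real patch
  }
}
{code}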

> Add unit tests of min/max allocation for custom resource types in 
> FairScheduler
> ---
>
> Key: YARN-8732
> URL: https://issues.apache.org/jira/browse/YARN-8732
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
>  Labels: unittest
> Attachments: YARN-8732.001.patch, YARN-8732.002.patch, 
> YARN-8732.003.patch, YARN-8732.004.patch
>
>
> Create testcase like this, but for FS: 
> org.apache.hadoop.yarn.server.resourcemanager.TestApplicationMasterService#testValidateRequestCapacityAgainstMinMaxAllocationFor3rdResourceTypes






[jira] [Created] (YARN-8838) Add security check for container user is same as websocket user

2018-10-01 Thread Eric Yang (JIRA)
Eric Yang created YARN-8838:
---

 Summary: Add security check for container user is same as 
websocket user
 Key: YARN-8838
 URL: https://issues.apache.org/jira/browse/YARN-8838
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Reporter: Eric Yang


When a user is authenticated via the SPNEGO entry point, the node manager must 
verify that the remote user is the same as the container user before starting 
the web socket session.  One possible solution is to verify that the web 
request user matches the owner of the yarn container local directory during 
onWebSocketConnect.
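A minimal sketch of the ownership check (the java.nio calls are standard; the surrounding method is hypothetical):

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical check: accept the websocket session only if the
// authenticated remote user owns the container's local directory.
static boolean isAuthorized(String remoteUser, String containerLocalDir)
    throws IOException {
  Path dir = Paths.get(containerLocalDir);
  String owner = Files.getOwner(dir).getName();
  return owner.equals(remoteUser);
}
{code}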






[jira] [Comment Edited] (YARN-8750) Refactor TestQueueMetrics

2018-10-01 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634691#comment-16634691
 ] 

Szilard Nemeth edited comment on YARN-8750 at 10/1/18 10:13 PM:


The changes in TestQueueMetrics could have been simpler if I had used a map, 
but using a separate "checker" class for verification has some advantages 
that are not visible at first:

1. Accidentally interchanging Resource metrics and App metrics assertions is 
prevented: there are dedicated checker classes for each, along with their 
respective enums. For example, {{AppMetricsChecker}} only accepts 
{{AppMetricsKey}}s. The same goes for {{ResourceMetricsChecker}} and 
{{ResourceMetricsKey}}s.
2. The mentioned enums guarantee that only existing resource metrics / app 
metrics keys are used in tests.
3. The {{checkAll}} methods in the two checker classes hide the complexity of 
asserting gauge and counter values. Replacing {{checkAll}} with 3 maps in 
every test class that verifies metrics would lead to unnecessary code 
duplication, so the current solution is more reusable.
4. The {{gaugeLong}}, {{gaugeInt}} and {{counter}} methods in 
{{ResourceMetricsChecker}} put the values into the correct map. If the tests 
themselves referenced those maps, it would be easy to put a value into the 
wrong map unintentionally.

I'm open to renaming the {{checkAll}} method if someone comes up with a 
better name, but that's what I have for now.



> Refactor TestQueueMetrics
> -
>
> Key: YARN-8750
> URL: https://issues.apache.org/jira/browse/YARN-8750
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-8750.001.patch, YARN-8750.002.patch, 
> YARN-8750.003.patch
>
>
> {{TestQueueMetrics#checkApps}} and {{TestQueueMetrics#checkResources}} have 8 
> and 14 parameters, respectively.
> It is very hard to read the testcases that are using these methods. 






[jira] [Commented] (YARN-8621) Add test coverage of custom Resource Types for the apps/ REST API endpoint

2018-10-01 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634697#comment-16634697
 ] 

Hudson commented on YARN-8621:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15087 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15087/])
YARN-8621. Add test coverage of custom Resource Types for the (haibochen: rev 
d0ee6fbe281b9edfab5913ca46a0f89ee9a2a6cc)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesAppCustomResourceTypes.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/helper/XmlCustomResourceTypeTestCase.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesApps.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/helper/BufferedClientResponse.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/helper/JsonCustomResourceTypeTestcase.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesAppAttempts.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesAppsCustomResourceTypes.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesCustomResourceTypesCommons.java


> Add test coverage of custom Resource Types for the apps/ REST API 
> endpoint
> -
>
> Key: YARN-8621
> URL: https://issues.apache.org/jira/browse/YARN-8621
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-8621.001.patch, YARN-8621.002.patch
>
>
> This is a complement for YARN-7451 that already added unit tests for the apps 
> and scheduler endpoints.
> The following API endpoints should be tested as well:
> /ws/v1/cluster/apps/
> -/ws/v1/cluster/apps//appattempts-
> -/ws/v1/cluster/apps//appattempts/-






[jira] [Commented] (YARN-8760) [AMRMProxy] Fix concurrent re-register due to YarnRM failover in AMRMClientRelayer

2018-10-01 Thread Botong Huang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634695#comment-16634695
 ] 

Botong Huang commented on YARN-8760:


Thanks [~giovanni.fumarola] for the review and commit!

> [AMRMProxy] Fix concurrent re-register due to YarnRM failover in 
> AMRMClientRelayer
> --
>
> Key: YARN-8760
> URL: https://issues.apache.org/jira/browse/YARN-8760
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-8760.v1.patch
>
>
> When the home YarnRM is failing over, the FinishApplicationMaster call from 
> the AM can have multiple retry threads outstanding in FederationInterceptor. 
> When the new YarnRM comes back up, all retry threads will re-register with 
> YarnRM. The first one will succeed, but the rest will get an "Application 
> Master is already registered" exception. We should catch and swallow this 
> exception and move on.
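For reference, a sketch of the catch-and-swallow shape described above (the wiring is hypothetical; the actual change is in the attached patch):

{code:java}
// Only the first retry thread's re-registration succeeds after failover;
// the others can safely ignore the "already registered" error.
try {
  rmClient.registerApplicationMaster(request);
} catch (InvalidApplicationMasterRequestException e) {
  if (!e.getMessage().contains("Application Master is already registered")) {
    throw e;
  }
  // Swallow: another retry thread has already re-registered.
}
{code}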






[jira] [Commented] (YARN-8750) Refactor TestQueueMetrics

2018-10-01 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634691#comment-16634691
 ] 

Szilard Nemeth commented on YARN-8750:
--

The changes in TestQueueMetrics could have been simpler if I had used a map, 
but using a separate "checker" class for verification has some advantages 
that are not visible at first:

1. Accidentally interchanging Resource metrics and App metrics assertions is 
prevented: there are dedicated checker classes for each, along with their 
respective enums. For example, {{AppMetricsChecker}} only accepts 
{{AppMetricsKey}}s. The same goes for {{ResourceMetricsChecker}} and 
{{ResourceMetricsKey}}s.
2. The mentioned enums guarantee that only existing resource metrics / app 
metrics keys are used in tests.
3. The {{checkAll}} methods in the two checker classes hide the complexity of 
asserting gauge and counter values. Replacing {{checkAll}} with 3 maps in 
every test class that verifies metrics would lead to unnecessary code 
duplication, so the current solution is more reusable.
4. The {{gaugeLong}}, {{gaugeInt}} and {{counter}} methods in 
{{ResourceMetricsChecker}} put the values into the correct map. If the tests 
themselves referenced those maps, it would be easy to put a value into the 
wrong map unintentionally.

I'm open to renaming the {{checkAll}} method if someone comes up with a 
better name, but that's what I have for now. A sketch of the resulting usage 
is below.
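A condensed sketch of the checker usage (the exact method chain is hypothetical; method and enum names follow the comment above):

{code:java}
// Enum keys keep resource-metrics assertions separate from app metrics;
// checkAll hides the gauge/counter assertion details.
ResourceMetricsChecker.create()
    .gaugeLong(ResourceMetricsKey.ALLOCATED_MB, 1024L)
    .gaugeInt(ResourceMetricsKey.ALLOCATED_CONTAINERS, 1)
    .counter(ResourceMetricsKey.AGGREGATE_CONTAINERS_ALLOCATED, 1)
    .checkAll(queueSource); // queueSource: the MetricsSource under test
{code}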

> Refactor TestQueueMetrics
> -
>
> Key: YARN-8750
> URL: https://issues.apache.org/jira/browse/YARN-8750
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-8750.001.patch, YARN-8750.002.patch, 
> YARN-8750.003.patch
>
>
> {{TestQueueMetrics#checkApps}} and {{TestQueueMetrics#checkResources}} have 8 
> and 14 parameters, respectively.
> It is very hard to read the testcases that are using these methods. 






[jira] [Updated] (YARN-8732) Add unit tests of min/max allocation for custom resource types in FairScheduler

2018-10-01 Thread Haibo Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-8732:
-
Labels: unittest  (was: )

> Add unit tests of min/max allocation for custom resource types in 
> FairScheduler
> ---
>
> Key: YARN-8732
> URL: https://issues.apache.org/jira/browse/YARN-8732
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
>  Labels: unittest
> Attachments: YARN-8732.001.patch, YARN-8732.002.patch, 
> YARN-8732.003.patch
>
>
> Create testcase like this, but for FS: 
> org.apache.hadoop.yarn.server.resourcemanager.TestApplicationMasterService#testValidateRequestCapacityAgainstMinMaxAllocationFor3rdResourceTypes






[jira] [Commented] (YARN-8732) Add unit tests of min/max allocation for custom resource types in FairScheduler

2018-10-01 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634665#comment-16634665
 ] 

Szilard Nemeth commented on YARN-8732:
--

Patch003 removes {{TestApplicationMasterService}}; this removal was missing 
from the previous patch.

> Add unit tests of min/max allocation for custom resource types in 
> FairScheduler
> ---
>
> Key: YARN-8732
> URL: https://issues.apache.org/jira/browse/YARN-8732
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-8732.001.patch, YARN-8732.002.patch, 
> YARN-8732.003.patch
>
>
> Create testcase like this, but for FS: 
> org.apache.hadoop.yarn.server.resourcemanager.TestApplicationMasterService#testValidateRequestCapacityAgainstMinMaxAllocationFor3rdResourceTypes






[jira] [Updated] (YARN-8732) Add unit tests of min/max allocation for custom resource types in FairScheduler

2018-10-01 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-8732:
-
Attachment: YARN-8732.003.patch

> Add unit tests of min/max allocation for custom resource types in 
> FairScheduler
> ---
>
> Key: YARN-8732
> URL: https://issues.apache.org/jira/browse/YARN-8732
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-8732.001.patch, YARN-8732.002.patch, 
> YARN-8732.003.patch
>
>
> Create testcase like this, but for FS: 
> org.apache.hadoop.yarn.server.resourcemanager.TestApplicationMasterService#testValidateRequestCapacityAgainstMinMaxAllocationFor3rdResourceTypes






[jira] [Commented] (YARN-8621) Add test coverage of custom Resource Types for the apps/ REST API endpoint

2018-10-01 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634656#comment-16634656
 ] 

Haibo Chen commented on YARN-8621:
--

Thanks [~snemeth] for the patch. +1 on the latest patch. I'll commit it shortly.

> Add test coverage of custom Resource Types for the apps/ REST API 
> endpoint
> -
>
> Key: YARN-8621
> URL: https://issues.apache.org/jira/browse/YARN-8621
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-8621.001.patch, YARN-8621.002.patch
>
>
> This is a complement for YARN-7451 that already added unit tests for the apps 
> and scheduler endpoints.
> The following API endpoints should be tested as well:
> /ws/v1/cluster/apps/
> -/ws/v1/cluster/apps//appattempts-
> -/ws/v1/cluster/apps//appattempts/-






[jira] [Updated] (YARN-8621) Add test coverage of custom Resource Types for the apps/ REST API endpoint

2018-10-01 Thread Haibo Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-8621:
-
Summary: Add test coverage of custom Resource Types for the apps/ 
REST API endpoint  (was: Add REST API tests for Resource Types fields for the 
apps/ endpoint)

> Add test coverage of custom Resource Types for the apps/ REST API 
> endpoint
> -
>
> Key: YARN-8621
> URL: https://issues.apache.org/jira/browse/YARN-8621
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-8621.001.patch, YARN-8621.002.patch
>
>
> This is a complement for YARN-7451 that already added unit tests for the apps 
> and scheduler endpoints.
> The following API endpoints should be tested as well:
> /ws/v1/cluster/apps/
> -/ws/v1/cluster/apps//appattempts-
> -/ws/v1/cluster/apps//appattempts/-






[jira] [Updated] (YARN-8785) Error Message "Invalid docker rw mount" not helpful

2018-10-01 Thread Simon Prewo (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Prewo updated YARN-8785:
--
Attachment: (was: YARN-8785-branch-3.1.002.patch)

> Error Message "Invalid docker rw mount" not helpful
> ---
>
> Key: YARN-8785
> URL: https://issues.apache.org/jira/browse/YARN-8785
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.0, 2.9.1, 3.1.1, 3.1.2
>Reporter: Simon Prewo
>Assignee: Simon Prewo
>Priority: Major
>  Labels: Docker
> Fix For: 3.1.2
>
> Attachments: YARN-8785-branch-3.1.002.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> A user receives the error message _Invalid docker rw mount_ when a container 
> tries to mount a directory which is not configured in property  
> *docker.allowed.rw-mounts*. 
> {code:java}
> Invalid docker rw mount 
> '/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01:/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01',
>  
> realpath=/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01{code}
> The error message makes the user think "It is not possible due to a docker 
> issue". My suggestion would be to put there a message like *Configuration of 
> the container executor does not allow mounting directory.*.
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c
> CURRENT:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Invalid docker mount '%s', realpath=%s\n", 
> values[i], mount_src);
> ...
> {code}
> NEW:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Configuration of the container executor does not 
> allow mounting directory '%s', realpath=%s\n", values[i], mount_src);
> ...
> {code}






[jira] [Commented] (YARN-8785) Error Message "Invalid docker rw mount" not helpful

2018-10-01 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634635#comment-16634635
 ] 

ASF GitHub Bot commented on YARN-8785:
--

GitHub user simonprewo opened a pull request:

https://github.com/apache/hadoop/pull/420

YARN-8785-branch-3.1.002.patch



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/simonprewo/hadoop patch-3

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/420.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #420


commit ef2f56f2ba085e8ad22e24ac3b29b8e67a6929e0
Author: Simon Prewo 
Date:   2018-10-01T21:05:20Z

YARN-8785-branch-3.1.002.patch




> Error Message "Invalid docker rw mount" not helpful
> ---
>
> Key: YARN-8785
> URL: https://issues.apache.org/jira/browse/YARN-8785
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.0, 2.9.1, 3.1.1, 3.1.2
>Reporter: Simon Prewo
>Assignee: Simon Prewo
>Priority: Major
>  Labels: Docker
> Fix For: 3.1.2
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> A user receives the error message _Invalid docker rw mount_ when a container 
> tries to mount a directory which is not configured in property  
> *docker.allowed.rw-mounts*. 
> {code:java}
> Invalid docker rw mount 
> '/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01:/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01',
>  
> realpath=/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01{code}
> The error message makes the user think "It is not possible due to a docker 
> issue". My suggestion would be to put there a message like *Configuration of 
> the container executor does not allow mounting directory.*.
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c
> CURRENT:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Invalid docker mount '%s', realpath=%s\n", 
> values[i], mount_src);
> ...
> {code}
> NEW:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Configuration of the container executor does not 
> allow mounting directory '%s', realpath=%s\n", values[i], mount_src);
> ...
> {code}






[jira] [Updated] (YARN-8785) Error Message "Invalid docker rw mount" not helpful

2018-10-01 Thread Simon Prewo (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Prewo updated YARN-8785:
--
Attachment: (was: YARN-8785-branch-3.1.002.patch)

> Error Message "Invalid docker rw mount" not helpful
> ---
>
> Key: YARN-8785
> URL: https://issues.apache.org/jira/browse/YARN-8785
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.0, 2.9.1, 3.1.1, 3.1.2
>Reporter: Simon Prewo
>Assignee: Simon Prewo
>Priority: Major
>  Labels: Docker
> Fix For: 3.1.2
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> A user receives the error message _Invalid docker rw mount_ when a container 
> tries to mount a directory which is not configured in the property 
> *docker.allowed.rw-mounts*. 
> {code:java}
> Invalid docker rw mount 
> '/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01:/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01',
>  
> realpath=/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01{code}
> The error message makes the user think "It is not possible due to a docker 
> issue". My suggestion would be to use a message like *Configuration of 
> the container executor does not allow mounting directory.*.
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c
> CURRENT:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Invalid docker mount '%s', realpath=%s\n", 
> values[i], mount_src);
> ...
> {code}
> NEW:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Configuration of the container executor does not 
> allow mounting directory '%s', realpath=%s\n", values[i], mount_src);
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8732) Add unit tests of min/max allocation for custom resource types in FairScheduler

2018-10-01 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-8732:
-
Attachment: YARN-8732.002.patch

> Add unit tests of min/max allocation for custom resource types in 
> FairScheduler
> ---
>
> Key: YARN-8732
> URL: https://issues.apache.org/jira/browse/YARN-8732
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-8732.001.patch, YARN-8732.002.patch
>
>
> Create testcase like this, but for FS: 
> org.apache.hadoop.yarn.server.resourcemanager.TestApplicationMasterService#testValidateRequestCapacityAgainstMinMaxAllocationFor3rdResourceTypes



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8732) Add unit tests of min/max allocation for custom resource types in FairScheduler

2018-10-01 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634630#comment-16634630
 ] 

Szilard Nemeth edited comment on YARN-8732 at 10/1/18 9:01 PM:
---

As the testcases in {{ApplicationMasterService}} were tied to either the Fair 
scheduler or the Capacity scheduler and used many helper methods, I decided to 
move the helper methods and the scheduler-agnostic tests into a common base 
class and to create two test classes extending it.
This way, we have separate test classes for {{FS}} and {{CS}} and keep the 
common testcases in the base class.

Some notes for reviewers:
1. The base class contains two abstract methods:
a.) {{createYarnConfig}}: The child test classes create their respective 
{{YarnConfiguration}}s, with the appropriate scheduler in place.
b.) {{getResourceUsageForQueue}}: As the name implies, returns the resource 
usage for a given queue. As the implementation differs between {{CS}} and 
{{FS}}, this method is abstract.

2. Comparing {{TestApplicationMasterService}} (removed class) and 
{{ApplicationMasterServiceTestBase}} gives a diff that only contains the 
methods moved to the specific scheduler test classes plus some minor 
formatting fixes.



was (Author: snemeth):
As the testcases in {{ApplicationMasterService}} were tied to either the Fair 
scheduler or the Capacity scheduler and used many helper methods, I decided to 
move the helper methods and the scheduler-agnostic tests into a common base 
class and to create two test classes extending it.
This way, we have separate test classes for {{FS}} and {{CS}} and keep the 
common testcases in the base class.

Some notes for reviewers:
1. The base class contains two abstract methods:
a.) {{createYarnConfig}}: The child test classes create their respective yarn 
configs, with the appropriate scheduler in place.
b.) {{getResourceUsageForQueue}}: As the name implies, returns the resource 
usage for a given queue. As the implementation differs between {{CS}} and 
{{FS}}, this is abstract.

2. Comparing {{TestApplicationMasterService}} (removed class) and 
{{ApplicationMasterServiceTestBase}} gives a diff that only contains the 
methods moved to the specific scheduler test classes plus some minor 
formatting fixes.


> Add unit tests of min/max allocation for custom resource types in 
> FairScheduler
> ---
>
> Key: YARN-8732
> URL: https://issues.apache.org/jira/browse/YARN-8732
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-8732.001.patch
>
>
> Create testcase like this, but for FS: 
> org.apache.hadoop.yarn.server.resourcemanager.TestApplicationMasterService#testValidateRequestCapacityAgainstMinMaxAllocationFor3rdResourceTypes



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8785) Error Message "Invalid docker rw mount" not helpful

2018-10-01 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634628#comment-16634628
 ] 

ASF GitHub Bot commented on YARN-8785:
--

Github user simonprewo closed the pull request at:

https://github.com/apache/hadoop/pull/417


> Error Message "Invalid docker rw mount" not helpful
> ---
>
> Key: YARN-8785
> URL: https://issues.apache.org/jira/browse/YARN-8785
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.0, 2.9.1, 3.1.1, 3.1.2
>Reporter: Simon Prewo
>Assignee: Simon Prewo
>Priority: Major
>  Labels: Docker
> Fix For: 3.1.2
>
> Attachments: YARN-8785-branch-3.1.002.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> A user receives the error message _Invalid docker rw mount_ when a container 
> tries to mount a directory which is not configured in the property 
> *docker.allowed.rw-mounts*. 
> {code:java}
> Invalid docker rw mount 
> '/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01:/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01',
>  
> realpath=/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01{code}
> The error message makes the user think "It is not possible due to a docker 
> issue". My suggestion would be to use a message like *Configuration of 
> the container executor does not allow mounting directory.*.
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c
> CURRENT:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Invalid docker mount '%s', realpath=%s\n", 
> values[i], mount_src);
> ...
> {code}
> NEW:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Configuration of the container executor does not 
> allow mounting directory '%s', realpath=%s\n", values[i], mount_src);
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8732) Add unit tests of min/max allocation for custom resource types in FairScheduler

2018-10-01 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634630#comment-16634630
 ] 

Szilard Nemeth commented on YARN-8732:
--

As the testcases in {{ApplicationMasterService}} were tied to either the Fair 
scheduler or the Capacity scheduler and used many helper methods, I decided to 
move the helper methods and the scheduler-agnostic tests into a common base 
class and to create two test classes extending it.
This way, we have separate test classes for {{FS}} and {{CS}} and keep the 
common testcases in the base class.

Some notes for reviewers:
1. The base class contains two abstract methods:
a.) {{createYarnConfig}}: The child test classes create their respective yarn 
configs, with the appropriate scheduler in place.
b.) {{getResourceUsageForQueue}}: As the name implies, returns the resource 
usage for a given queue. As the implementation differs between {{CS}} and 
{{FS}}, this is abstract.

2. Comparing {{TestApplicationMasterService}} (removed class) and 
{{ApplicationMasterServiceTestBase}} gives a diff that only contains the 
methods moved to the specific scheduler test classes plus some minor 
formatting fixes.
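
To make the structure concrete, here is a minimal sketch of the base-class 
pattern described above. The class and method names follow the comment; the 
bodies (especially the FS queue lookup) are illustrative assumptions, not the 
actual patch.

{code:java}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler;

// Scheduler-agnostic helpers and testcases live in the base class.
public abstract class ApplicationMasterServiceTestBase {

  // Child classes build a YarnConfiguration with their scheduler set.
  protected abstract YarnConfiguration createYarnConfig();

  // Queue resource usage is looked up differently in CS and FS.
  protected abstract Resource getResourceUsageForQueue(ResourceManager rm,
      String queue);
}

class ApplicationMasterServiceTestWithFS
    extends ApplicationMasterServiceTestBase {

  @Override
  protected YarnConfiguration createYarnConfig() {
    YarnConfiguration conf = new YarnConfiguration();
    conf.setClass(YarnConfiguration.RM_SCHEDULER, FairScheduler.class,
        ResourceScheduler.class);
    return conf;
  }

  @Override
  protected Resource getResourceUsageForQueue(ResourceManager rm,
      String queue) {
    FairScheduler fs = (FairScheduler) rm.getResourceScheduler();
    // Assumed lookup path; FS exposes usage via its queue manager.
    return fs.getQueueManager().getLeafQueue(queue, false).getResourceUsage();
  }
}
{code}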


> Add unit tests of min/max allocation for custom resource types in 
> FairScheduler
> ---
>
> Key: YARN-8732
> URL: https://issues.apache.org/jira/browse/YARN-8732
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-8732.001.patch
>
>
> Create testcase like this, but for FS: 
> org.apache.hadoop.yarn.server.resourcemanager.TestApplicationMasterService#testValidateRequestCapacityAgainstMinMaxAllocationFor3rdResourceTypes



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8808) Use aggregate container utilization instead of node utilization to determine resources available for oversubscription

2018-10-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634625#comment-16634625
 ] 

Hadoop QA commented on YARN-8808:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-1011 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
 1s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} YARN-1011 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 14s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}132m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestCapacitySchedulerMetrics |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueManagementDynamicEditPolicy
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8808 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942000/YARN-8808-YARN-1011.03.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 078d48310785 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-1011 / 8d217ee |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/22018/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22018/testReport/ |
| Max. process+thread count | 1027 (vs. ulimit of 1) |

[jira] [Updated] (YARN-8785) Error Message "Invalid docker rw mount" not helpful

2018-10-01 Thread Simon Prewo (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Prewo updated YARN-8785:
--
Attachment: (was: YARN-8785-branch-3.1.002.patch)

> Error Message "Invalid docker rw mount" not helpful
> ---
>
> Key: YARN-8785
> URL: https://issues.apache.org/jira/browse/YARN-8785
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.0, 2.9.1, 3.1.1, 3.1.2
>Reporter: Simon Prewo
>Assignee: Simon Prewo
>Priority: Major
>  Labels: Docker
> Fix For: 3.1.2
>
> Attachments: YARN-8785-branch-3.1.002.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> A user receives the error message _Invalid docker rw mount_ when a container 
> tries to mount a directory which is not configured in the property 
> *docker.allowed.rw-mounts*. 
> {code:java}
> Invalid docker rw mount 
> '/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01:/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01',
>  
> realpath=/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01{code}
> The error message makes the user think "It is not possible due to a docker 
> issue". My suggestion would be to use a message like *Configuration of 
> the container executor does not allow mounting directory.*.
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c
> CURRENT:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Invalid docker mount '%s', realpath=%s\n", 
> values[i], mount_src);
> ...
> {code}
> NEW:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Configuration of the container executor does not 
> allow mounting directory '%s', realpath=%s\n", values[i], mount_src);
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8785) Error Message "Invalid docker rw mount" not helpful

2018-10-01 Thread Simon Prewo (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Prewo updated YARN-8785:
--
Affects Version/s: 3.1.0

> Error Message "Invalid docker rw mount" not helpful
> ---
>
> Key: YARN-8785
> URL: https://issues.apache.org/jira/browse/YARN-8785
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.0, 2.9.1, 3.1.1
>Reporter: Simon Prewo
>Assignee: Simon Prewo
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8785-branch-3.1.002.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> A user receives the error message _Invalid docker rw mount_ when a container 
> tries to mount a directory which is not configured in the property 
> *docker.allowed.rw-mounts*. 
> {code:java}
> Invalid docker rw mount 
> '/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01:/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01',
>  
> realpath=/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01{code}
> The error message makes the user think "It is not possible due to a docker 
> issue". My suggestion would be to use a message like *Configuration of 
> the container executor does not allow mounting directory.*.
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c
> CURRENT:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Invalid docker mount '%s', realpath=%s\n", 
> values[i], mount_src);
> ...
> {code}
> NEW:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Configuration of the container executor does not 
> allow mounting directory '%s', realpath=%s\n", values[i], mount_src);
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8760) [AMRMProxy] Fix concurrent re-register due to YarnRM failover in AMRMClientRelayer

2018-10-01 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634611#comment-16634611
 ] 

Hudson commented on YARN-8760:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15085 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15085/])
YARN-8760. [AMRMProxy] Fix concurrent re-register due to YarnRM failover 
(gifuma: rev 59d5af21b7a8f52e8c89cbc2d25fe3d449b2657a)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/FederationInterceptor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/AMRMClientRelayer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/TestAMRMClientRelayer.java


> [AMRMProxy] Fix concurrent re-register due to YarnRM failover in 
> AMRMClientRelayer
> --
>
> Key: YARN-8760
> URL: https://issues.apache.org/jira/browse/YARN-8760
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-8760.v1.patch
>
>
> When the home YarnRM is failing over, the FinishApplicationMaster call from 
> the AM can have multiple retry threads outstanding in FederationInterceptor. 
> When the new YarnRM comes back up, all retry threads will re-register with 
> the YarnRM. The first one will succeed, but the rest will get an "Application 
> Master is already registered" exception. We should catch and swallow this 
> exception and move on. 
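
A minimal sketch of the catch-and-swallow behavior the description asks for, 
assuming the retry path calls registerApplicationMaster on the RM proxy; the 
method below is illustrative, not the committed AMRMClientRelayer change.

{code:java}
import java.io.IOException;
import org.apache.hadoop.yarn.api.ApplicationMasterProtocol;
import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterRequest;
import org.apache.hadoop.yarn.exceptions.InvalidApplicationMasterRequestException;
import org.apache.hadoop.yarn.exceptions.YarnException;

final class ReRegisterSketch {
  private ReRegisterSketch() { }

  static void reRegister(ApplicationMasterProtocol rmClient,
      RegisterApplicationMasterRequest request)
      throws YarnException, IOException {
    try {
      rmClient.registerApplicationMaster(request);
    } catch (InvalidApplicationMasterRequestException e) {
      // Another retry thread already re-registered after the failover;
      // treat this as success and move on.
      if (e.getMessage() == null
          || !e.getMessage().contains("already registered")) {
        throw e;
      }
    }
  }
}
{code}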



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8785) Error Message "Invalid docker rw mount" not helpful

2018-10-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634585#comment-16634585
 ] 

Hadoop QA commented on YARN-8785:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} YARN-8785 does not apply to branch-3.1. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-8785 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942032/YARN-8785-branch-3.1.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/22021/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Error Message "Invalid docker rw mount" not helpful
> ---
>
> Key: YARN-8785
> URL: https://issues.apache.org/jira/browse/YARN-8785
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Simon Prewo
>Assignee: Simon Prewo
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8785-branch-3.1.002.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> A user receives the error message _Invalid docker rw mount_ when a container 
> tries to mount a directory which is not configured in the property 
> *docker.allowed.rw-mounts*. 
> {code:java}
> Invalid docker rw mount 
> '/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01:/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01',
>  
> realpath=/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01{code}
> The error message makes the user think "It is not possible due to a docker 
> issue". My suggestion would be to use a message like *Configuration of 
> the container executor does not allow mounting directory.*.
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c
> CURRENT:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Invalid docker mount '%s', realpath=%s\n", 
> values[i], mount_src);
> ...
> {code}
> NEW:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Configuration of the container executor does not 
> allow mounting directory '%s', realpath=%s\n", values[i], mount_src);
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8785) Error Message "Invalid docker rw mount" not helpful

2018-10-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634581#comment-16634581
 ] 

Hadoop QA commented on YARN-8785:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} YARN-8785 does not apply to branch-3.1. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-8785 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942027/YARN-8785-branch-3.1.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/22020/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Error Message "Invalid docker rw mount" not helpful
> ---
>
> Key: YARN-8785
> URL: https://issues.apache.org/jira/browse/YARN-8785
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Simon Prewo
>Assignee: Simon Prewo
>Priority: Major
>  Labels: Docker
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> A user receives the error message _Invalid docker rw mount_ when a container 
> tries to mount a directory which is not configured in the property 
> *docker.allowed.rw-mounts*. 
> {code:java}
> Invalid docker rw mount 
> '/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01:/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01',
>  
> realpath=/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01{code}
> The error message makes the user think "It is not possible due to a docker 
> issue". My suggestion would be to use a message like *Configuration of 
> the container executor does not allow mounting directory.*.
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c
> CURRENT:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Invalid docker mount '%s', realpath=%s\n", 
> values[i], mount_src);
> ...
> {code}
> NEW:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Configuration of the container executor does not 
> allow mounting directory '%s', realpath=%s\n", values[i], mount_src);
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8785) Error Message "Invalid docker rw mount" not helpful

2018-10-01 Thread Simon Prewo (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Prewo updated YARN-8785:
--
Attachment: (was: YARN-8785-branch-3.1.001.patch)

> Error Message "Invalid docker rw mount" not helpful
> ---
>
> Key: YARN-8785
> URL: https://issues.apache.org/jira/browse/YARN-8785
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Simon Prewo
>Assignee: Simon Prewo
>Priority: Major
>  Labels: Docker
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> A user receives the error message _Invalid docker rw mount_ when a container 
> tries to mount a directory which is not configured in the property 
> *docker.allowed.rw-mounts*. 
> {code:java}
> Invalid docker rw mount 
> '/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01:/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01',
>  
> realpath=/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01{code}
> The error message makes the user think "It is not possible due to a docker 
> issue". My suggestion would be to use a message like *Configuration of 
> the container executor does not allow mounting directory.*.
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c
> CURRENT:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Invalid docker mount '%s', realpath=%s\n", 
> values[i], mount_src);
> ...
> {code}
> NEW:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Configuration of the container executor does not 
> allow mounting directory '%s', realpath=%s\n", values[i], mount_src);
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8760) [AMRMProxy] Fix concurrent re-register due to YarnRM failover in AMRMClientRelayer

2018-10-01 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634566#comment-16634566
 ] 

Giovanni Matteo Fumarola commented on YARN-8760:


The change is straightforward. 
Thanks [~botong] for the patch. Committed to trunk.

> [AMRMProxy] Fix concurrent re-register due to YarnRM failover in 
> AMRMClientRelayer
> --
>
> Key: YARN-8760
> URL: https://issues.apache.org/jira/browse/YARN-8760
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-8760.v1.patch
>
>
> When the home YarnRM is failing over, the FinishApplicationMaster call from 
> the AM can have multiple retry threads outstanding in FederationInterceptor. 
> When the new YarnRM comes back up, all retry threads will re-register with 
> the YarnRM. The first one will succeed, but the rest will get an "Application 
> Master is already registered" exception. We should catch and swallow this 
> exception and move on. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8760) [AMRMProxy] Fix concurrent re-register due to YarnRM failover in AMRMClientRelayer

2018-10-01 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-8760:
---
Fix Version/s: 3.2.0

> [AMRMProxy] Fix concurrent re-register due to YarnRM failover in 
> AMRMClientRelayer
> --
>
> Key: YARN-8760
> URL: https://issues.apache.org/jira/browse/YARN-8760
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-8760.v1.patch
>
>
> When the home YarnRM is failing over, the FinishApplicationMaster call from 
> the AM can have multiple retry threads outstanding in FederationInterceptor. 
> When the new YarnRM comes back up, all retry threads will re-register with 
> the YarnRM. The first one will succeed, but the rest will get an "Application 
> Master is already registered" exception. We should catch and swallow this 
> exception and move on. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8644) Make RMAppImpl$FinalTransition more readable + add more test coverage

2018-10-01 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634549#comment-16634549
 ] 

Haibo Chen commented on YARN-8644:
--

Thanks [~snemeth] for the patch! I am not sure that moving methods into 
AppCreationTestHelper helps reduce complexity. Besides moving the methods, we 
now have a new Builder class and more parameters to pass around. I'd suggest 
we get rid of the AppCreationTestHelper changes. Additionally, changes like "r 
= func(); return r;" => "return func()" are not technically necessary. Can we 
revert those two changes so that the patch is smaller and more focused?

> Make RMAppImpl$FinalTransition more readable + add more test coverage
> -
>
> Key: YARN-8644
> URL: https://issues.apache.org/jira/browse/YARN-8644
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-8644.001.patch, YARN-8644.002.patch, 
> YARN-8644.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8785) Error Message "Invalid docker rw mount" not helpful

2018-10-01 Thread Simon Prewo (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Prewo updated YARN-8785:
--
Attachment: (was: YARN-8785.001.patch)

> Error Message "Invalid docker rw mount" not helpful
> ---
>
> Key: YARN-8785
> URL: https://issues.apache.org/jira/browse/YARN-8785
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Simon Prewo
>Assignee: Simon Prewo
>Priority: Major
>  Labels: Docker
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> A user receives the error message _Invalid docker rw mount_ when a container 
> tries to mount a directory which is not configured in the property 
> *docker.allowed.rw-mounts*. 
> {code:java}
> Invalid docker rw mount 
> '/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01:/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01',
>  
> realpath=/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01{code}
> The error message makes the user think "It is not possible due to a docker 
> issue". My suggestion would be to use a message like *Configuration of 
> the container executor does not allow mounting directory.*.
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c
> CURRENT:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Invalid docker mount '%s', realpath=%s\n", 
> values[i], mount_src);
> ...
> {code}
> NEW:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Configuration of the container executor does not 
> allow mounting directory '%s', realpath=%s\n", values[i], mount_src);
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8837) TestNMProxy.testNMProxyRPCRetry Improvement

2018-10-01 Thread Botong Huang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634523#comment-16634523
 ] 

Botong Huang commented on YARN-8837:


Other than improving how the exception message is surfaced, can we try to fix 
this unit test as well? It is failing in trunk now. 

> TestNMProxy.testNMProxyRPCRetry Improvement
> ---
>
> Key: YARN-8837
> URL: https://issues.apache.org/jira/browse/YARN-8837
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Attachments: YARN-8789.1.patch
>
>
> The unit test 
> {{org.apache.hadoop.yarn.server.nodemanager.containermanager.TestNMProxy.testNMProxyRetry()}}
>  has had some issues in the past. You can search JIRA for it, but one example 
> is [YARN-5104]. I recently had some issues with it myself and found the 
> following change helpful in troubleshooting.
> {code:java|title=Current Implementation}
> } catch (IOException e) {
> // socket exception should be thrown immediately, without RPC retries.
> Assert.assertTrue(e instanceof java.net.SocketException);
> }
> {code}
> The issue here is that the test is true/false. The testing framework does 
> not give me any feedback regarding the type of exception that was thrown; it 
> just says "assertion failed."
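
One way to surface that feedback, sketched against the same catch block under 
the assumption that JUnit 4's Assert is in scope, as in the snippet above:

{code:java}
} catch (IOException e) {
  // The failure message now names the actual exception, instead of the
  // bare "assertion failed" of a plain boolean assertion.
  Assert.assertTrue("Expected a java.net.SocketException but caught: " + e,
      e instanceof java.net.SocketException);
}
{code}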



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6989) Ensure timeline service v2 codebase gets UGI from HttpServletRequest in a consistent way

2018-10-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634516#comment-16634516
 ] 

Hadoop QA commented on YARN-6989:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
4s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-6989 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12942013/YARN-6989.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4a67307b36b2 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cc80ac2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22019/testReport/ |
| Max. process+thread count | 406 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/22019/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (YARN-8763) Add WebSocket logic to the Node Manager web server to establish servlet

2018-10-01 Thread Zian Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634486#comment-16634486
 ] 

Zian Chen commented on YARN-8763:
-

Hi [~eyang], makes sense. I'll work on patch 003 to address the comments and 
Jenkins failures. 

> Add WebSocket logic to the Node Manager web server to establish servlet
> ---
>
> Key: YARN-8763
> URL: https://issues.apache.org/jira/browse/YARN-8763
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8763-001.patch, YARN-8763.002.patch
>
>
> The reason we want to use a WebSocket servlet to serve the backend instead 
> of establishing the connection through HTTP is that WebSocket solves a few 
> issues with HTTP that matter for our scenario:
>  # In HTTP, the request is always initiated by the client and the response 
> is processed by the server, making HTTP a unidirectional protocol, while 
> WebSocket provides a bi-directional protocol: either the client or the 
> server can send a message to the other party.
>  # Full-duplex communication: client and server can talk to each other 
> independently at the same time.
>  # Single TCP connection: after upgrading the HTTP connection in the 
> beginning, client and server communicate over that same TCP connection 
> throughout the lifecycle of the WebSocket connection.
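
For illustration, a minimal Jetty 9 WebSocket servlet of the kind described; 
the class names are assumptions for the sketch, not the actual patch.

{code:java}
import java.io.IOException;
import org.eclipse.jetty.websocket.api.Session;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketConnect;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketMessage;
import org.eclipse.jetty.websocket.api.annotations.WebSocket;
import org.eclipse.jetty.websocket.servlet.WebSocketServlet;
import org.eclipse.jetty.websocket.servlet.WebSocketServletFactory;

// The servlet upgrades the incoming HTTP connection and hands it to
// the @WebSocket-annotated class below.
public class NMWebSocketServletSketch extends WebSocketServlet {

  @Override
  public void configure(WebSocketServletFactory factory) {
    factory.register(EchoSocket.class);
  }

  // Bi-directional and full-duplex: either side can send at any time
  // over the single upgraded TCP connection.
  @WebSocket
  public static class EchoSocket {

    @OnWebSocketConnect
    public void onConnect(Session session) throws IOException {
      session.getRemote().sendString("connected");
    }

    @OnWebSocketMessage
    public void onText(Session session, String message) throws IOException {
      session.getRemote().sendString(message); // echo back to the client
    }
  }
}
{code}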



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6989) Ensure timeline service v2 codebase gets UGI from HttpServletRequest in a consistent way

2018-10-01 Thread Abhishek Modi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-6989:

Attachment: YARN-6989.002.patch

> Ensure timeline service v2 codebase gets UGI from HttpServletRequest in a 
> consistent way
> 
>
> Key: YARN-6989
> URL: https://issues.apache.org/jira/browse/YARN-6989
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-6989.001.patch, YARN-6989.002.patch
>
>
> As noticed during discussions in YARN-6820, the webservices in timeline 
> service v2 get the UGI created from the user obtained by invoking 
> getRemoteUser on the HttpServletRequest. 
> It will be good to use getUserPrincipal instead of invoking getRemoteUser on 
> the HttpServletRequest. 
> Filing jira to update the code. 
> Per the Java EE 6 and 7 documentation, the behavior of getRemoteUser and 
> getUserPrincipal is described at:
> http://docs.oracle.com/javaee/6/tutorial/doc/gjiie.html#bncba
> https://docs.oracle.com/javaee/7/tutorial/security-webtier003.htm
> {code}
> getRemoteUser, which determines the user name with which the client 
> authenticated. The getRemoteUser method returns the name of the remote user 
> (the caller) associated by the container with the request. If no user has 
> been authenticated, this method returns null.
> getUserPrincipal, which determines the principal name of the current user and 
> returns a java.security.Principal object. If no user has been authenticated, 
> this method returns null. Calling the getName method on the Principal 
> returned by getUserPrincipal returns the name of the remote user.
> {code}
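
A minimal sketch of the consistent lookup this jira asks for; the helper name 
mirrors the getUser discussed below, but the body is an assumption, not the 
committed patch.

{code:java}
import java.security.Principal;
import javax.servlet.http.HttpServletRequest;
import org.apache.hadoop.security.UserGroupInformation;

final class UgiFromRequestSketch {
  private UgiFromRequestSketch() { }

  // Build the caller UGI from getUserPrincipal() rather than
  // getRemoteUser(); both return null if nobody authenticated.
  static UserGroupInformation getUser(HttpServletRequest req) {
    Principal principal = req.getUserPrincipal();
    return principal == null ? null
        : UserGroupInformation.createRemoteUser(principal.getName());
  }
}
{code}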



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6989) Ensure timeline service v2 codebase gets UGI from HttpServletRequest in a consistent way

2018-10-01 Thread Abhishek Modi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1663#comment-1663
 ] 

Abhishek Modi commented on YARN-6989:
-

[~vrushalic] getUser is not being used anywhere else apart from 
TimelineReaderWebServices and TimelineReaderWhitelistAuthorization.

getCallerUserGroupInformation is not being called from anywhere now - I will 
attach a new patch that removes it and moves the code into the getUser 
function only.

> Ensure timeline service v2 codebase gets UGI from HttpServletRequest in a 
> consistent way
> 
>
> Key: YARN-6989
> URL: https://issues.apache.org/jira/browse/YARN-6989
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-6989.001.patch
>
>
> As noticed during discussions in YARN-6820, the webservices in timeline 
> service v2 get the UGI created from the user obtained by invoking 
> getRemoteUser on the HttpServletRequest. 
> It will be good to use getUserPrincipal instead of invoking getRemoteUser on 
> the HttpServletRequest. 
> Filing jira to update the code. 
> Per the Java EE 6 and 7 documentation, the behavior of getRemoteUser and 
> getUserPrincipal is described at:
> http://docs.oracle.com/javaee/6/tutorial/doc/gjiie.html#bncba
> https://docs.oracle.com/javaee/7/tutorial/security-webtier003.htm
> {code}
> getRemoteUser, which determines the user name with which the client 
> authenticated. The getRemoteUser method returns the name of the remote user 
> (the caller) associated by the container with the request. If no user has 
> been authenticated, this method returns null.
> getUserPrincipal, which determines the principal name of the current user and 
> returns a java.security.Principal object. If no user has been authenticated, 
> this method returns null. Calling the getName method on the Principal 
> returned by getUserPrincipal returns the name of the remote user.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6989) Ensure timeline service v2 codebase gets UGI from HttpServletRequest in a consistent way

2018-10-01 Thread Vrushali C (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634419#comment-16634419
 ] 

Vrushali C commented on YARN-6989:
--

Hmm, so getUser is now changing to always return the principal user, whereas 
earlier it always returned the remote user; the function's behavior is 
changing. Was getUser used anywhere else in the code? 

Also, is getCallerUserGroupInformation used anywhere else in the code? If not, 
we can perhaps remove the remote-user-related code and always return the 
principal user. What do you think?

> Ensure timeline service v2 codebase gets UGI from HttpServletRequest in a 
> consistent way
> 
>
> Key: YARN-6989
> URL: https://issues.apache.org/jira/browse/YARN-6989
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-6989.001.patch
>
>
> As noticed during discussions in YARN-6820, the webservices in timeline 
> service v2 get the UGI created from the user obtained by invoking 
> getRemoteUser on the HttpServletRequest. 
> It will be good to use getUserPrincipal instead of invoking getRemoteUser on 
> the HttpServletRequest. 
> Filing jira to update the code. 
> Per the Java EE 6 and 7 documentation, the behavior of getRemoteUser and 
> getUserPrincipal is described at:
> http://docs.oracle.com/javaee/6/tutorial/doc/gjiie.html#bncba
> https://docs.oracle.com/javaee/7/tutorial/security-webtier003.htm
> {code}
> getRemoteUser, which determines the user name with which the client 
> authenticated. The getRemoteUser method returns the name of the remote user 
> (the caller) associated by the container with the request. If no user has 
> been authenticated, this method returns null.
> getUserPrincipal, which determines the principal name of the current user and 
> returns a java.security.Principal object. If no user has been authenticated, 
> this method returns null. Calling the getName method on the Principal 
> returned by getUserPrincipal returns the name of the remote user.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8758) Support getting PreemptionMessage when using AMRMClientAsync

2018-10-01 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634405#comment-16634405
 ] 

Wangda Tan commented on YARN-8758:
--

Thanks [~Zian Chen], patch LGTM, +1. Will commit tomorrow if no objections.

> Support getting PreemptionMessage when using AMRMClientAsync
> 
>
> Key: YARN-8758
> URL: https://issues.apache.org/jira/browse/YARN-8758
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.1.1
>Reporter: Krishna Kishore
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-8758.001.patch
>
>
> There's no way to get PreemptionMessage sent by RM from AMRMClientAsync, we 
> should add support for that. 
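
For reference, the blocking client already exposes the message on each 
AllocateResponse; below is a minimal sketch of that path (the polling and the 
handling are simplified assumptions):

{code:java}
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.records.PreemptionMessage;
import org.apache.hadoop.yarn.client.api.AMRMClient;

final class PreemptionPollSketch {
  private PreemptionPollSketch() { }

  // With the blocking AMRMClient the AM sees the PreemptionMessage on
  // every allocate round trip; AMRMClientAsync consumes the response
  // internally, which is the gap this jira fills.
  static void pollOnce(AMRMClient<AMRMClient.ContainerRequest> client)
      throws Exception {
    AllocateResponse response = client.allocate(0.1f);
    PreemptionMessage msg = response.getPreemptionMessage();
    if (msg != null) {
      // React before max_wait_before_kill expires, e.g. checkpoint or
      // release the containers named in the preemption contract.
      System.out.println("Preemption requested: " + msg);
    }
  }
}
{code}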



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8758) Support getting PreemptionMessage when using AMRMClientAsync

2018-10-01 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-8758:
-
Summary: Support getting PreemptionMessage when using AMRMClientAsync  
(was: PreemptionMessage when using AMRMClientAsync)

> Support getting PreemptionMessage when using AMRMClientAsync
> 
>
> Key: YARN-8758
> URL: https://issues.apache.org/jira/browse/YARN-8758
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.1.1
>Reporter: Krishna Kishore
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-8758.001.patch
>
>
> There's no way to get the PreemptionMessage sent by the RM from 
> AMRMClientAsync; we should add support for that. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8758) PreemptionMessage when using AMRMClientAsync

2018-10-01 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-8758:
-
Description: There's no way to get the PreemptionMessage sent by the RM from 
AMRMClientAsync; we should add support for that.   (was: Hi,

   The preemption notification messages sent in the time period defined by the 
following parameter now work only on AMRMClient, but not on AMRMClientAsync.

*yarn.resourcemanager.monitor.capacity.preemption.max_wait_before_kill*

We want this to work on AMRMClientAsync as well, because our implementations are 
based on it. 

 

Thanks,

Kishore)

> PreemptionMessage when using AMRMClientAsync
> 
>
> Key: YARN-8758
> URL: https://issues.apache.org/jira/browse/YARN-8758
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.1.1
>Reporter: Krishna Kishore
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-8758.001.patch
>
>
> There's no way to get the PreemptionMessage sent by the RM from 
> AMRMClientAsync; we should add support for that. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8808) Use aggregate container utilization instead of node utilization to determine resources available for oversubscription

2018-10-01 Thread Haibo Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-8808:
-
Attachment: YARN-8808-YARN-1011.03.patch

> Use aggregate container utilization instead of node utilization to determine 
> resources available for oversubscription
> -
>
> Key: YARN-8808
> URL: https://issues.apache.org/jira/browse/YARN-8808
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: YARN-1011
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-8088-YARN-1011.01.patch, 
> YARN-8808-YARN-1011.00.patch, YARN-8808-YARN-1011.02.patch, 
> YARN-8808-YARN-1011.03.patch
>
>
> Resource oversubscription should be bounded by the amount of resources that 
> can be allocated to containers; hence the allocation threshold should be 
> computed with respect to aggregate container utilization.
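
For illustration, a minimal sketch of an allocation threshold computed from 
aggregate container utilization rather than node utilization (variable names 
and the threshold handling are assumptions, not the attached patch):
{code:java}
// RMNode#getAggregatedContainersUtilization() reports what containers actually
// use; node capacity and an over-allocation threshold bound what we may
// oversubscribe. All values below are in MB; names are illustrative.
ResourceUtilization containersUtil = rmNode.getAggregatedContainersUtilization();
long capacityMB = rmNode.getTotalCapability().getMemorySize();
long usedByContainersMB = containersUtil.getPhysicalMemory();
long thresholdMB = (long) (capacityMB * overAllocationMemoryThreshold);
long availableForOversubscriptionMB =
    Math.max(0, thresholdMB - usedByContainersMB);
{code}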



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8146) Remove LinkedList From resourcemanager.reservation.planning Package

2018-10-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634333#comment-16634333
 ] 

Hadoop QA commented on YARN-8146:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 73m 
51s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}128m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8146 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918554/YARN-8146.1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2eadc0b18dbe 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f7ff8c0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/22017/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22017/testReport/ |
| Max. process+thread count | 866 

[jira] [Comment Edited] (YARN-8468) Enable the use of queue based maximum container allocation limit and implement it in FairScheduler

2018-10-01 Thread JIRA


[ 
https://issues.apache.org/jira/browse/YARN-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634230#comment-16634230
 ] 

Antal Bálint Steinbach edited comment on YARN-8468 at 10/1/18 4:03 PM:
---

The feature should be ported to 3.1.x and 3.2.x

 

Sorry, the original plan was 3.2.x; I updated my comment. Anyway, I will upload 
a patch for branch-3.1 and branch-3.2.


was (Author: bsteinbach):
The feature should be ported to 3.1.x and 3.2.x

> Enable the use of queue based maximum container allocation limit and 
> implement it in FairScheduler
> --
>
> Key: YARN-8468
> URL: https://issues.apache.org/jira/browse/YARN-8468
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.1.0
>Reporter: Antal Bálint Steinbach
>Assignee: Antal Bálint Steinbach
>Priority: Critical
> Fix For: 3.2.0, 3.3.0
>
> Attachments: YARN-8468.000.patch, YARN-8468.001.patch, 
> YARN-8468.002.patch, YARN-8468.003.patch, YARN-8468.004.patch, 
> YARN-8468.005.patch, YARN-8468.006.patch, YARN-8468.007.patch, 
> YARN-8468.008.patch, YARN-8468.009.patch, YARN-8468.010.patch, 
> YARN-8468.011.patch, YARN-8468.012.patch, YARN-8468.013.patch, 
> YARN-8468.014.patch, YARN-8468.015.patch, YARN-8468.016.patch, 
> YARN-8468.017.patch, YARN-8468.018.patch
>
>
> When using any scheduler, you can use "yarn.scheduler.maximum-allocation-mb" 
> to limit the overall size of a container. This applies globally to all 
> containers, cannot be limited per queue, and is not scheduler dependent.
> The goal of this ticket is to allow this value to be set on a per-queue basis.
> The use case: a user has two pools, one for ad hoc jobs and one for enterprise 
> apps, and wants to limit ad hoc jobs to small containers but allow 
> enterprise apps to request as many resources as needed. Setting 
> yarn.scheduler.maximum-allocation-mb sets a default maximum container size for 
> all queues; the per-queue maximum is set with the "maxContainerResources" 
> queue config value.
> Suggested solution:
> All the infrastructure is already in the code. We need to do the following:
>  * add the setting to the queue properties for all queue types (parent and 
> leaf); this will cover dynamically created queues.
>  * if we set it on the root, we override the scheduler setting, and we should 
> not allow that.
>  * make sure that the queue resource cap cannot be larger than the scheduler 
> max resource cap in the config.
>  * implement getMaximumResourceCapability(String queueName) in the 
> FairScheduler.
>  * implement getMaximumResourceCapability(String queueName) in both 
> FSParentQueue and FSLeafQueue.
>  * expose the setting in the queue information in the RM web UI.
>  * expose the setting in the metrics etc. for the queue.
>  * enforce the use of the queue-based maximum allocation limit if it is 
> available; if not, use the general scheduler-level setting.
>  ** use it during validation and normalization of requests in 
> scheduler.allocate, app submit and resource request.
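
For illustration, a minimal fair-scheduler.xml sketch of the per-queue limit 
described above (the "maxContainerResources" element name is taken from this 
description; the name in the final patch may differ):
{code:xml}
<?xml version="1.0"?>
<allocations>
  <queue name="adhoc">
    <!-- cap ad hoc jobs at small containers -->
    <maxContainerResources>2048 mb, 2 vcores</maxContainerResources>
  </queue>
  <queue name="enterprise">
    <!-- no per-queue cap: falls back to the scheduler-level
         yarn.scheduler.maximum-allocation-mb setting -->
  </queue>
</allocations>
{code}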



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8468) Enable the use of queue based maximum container allocation limit and implement it in FairScheduler

2018-10-01 Thread JIRA


[ 
https://issues.apache.org/jira/browse/YARN-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634173#comment-16634173
 ] 

Antal Bálint Steinbach edited comment on YARN-8468 at 10/1/18 4:00 PM:
---

Hi [~cheersyang] ,

Thanks again for the detailed reviews; they helped me a lot. I am not completely 
aware of the existing branches, but I would say it makes sense to at least 
put this into 3.2.x, which was the originally planned release.


was (Author: bsteinbach):
Hi [~cheersyang] ,

Thanks again for the detailed reviews; they helped me a lot. I am not completely 
aware of the existing branches, but I would say it makes sense to at least 
put this into 3.1.x, which was the originally planned release.

> Enable the use of queue based maximum container allocation limit and 
> implement it in FairScheduler
> --
>
> Key: YARN-8468
> URL: https://issues.apache.org/jira/browse/YARN-8468
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.1.0
>Reporter: Antal Bálint Steinbach
>Assignee: Antal Bálint Steinbach
>Priority: Critical
> Fix For: 3.2.0, 3.3.0
>
> Attachments: YARN-8468.000.patch, YARN-8468.001.patch, 
> YARN-8468.002.patch, YARN-8468.003.patch, YARN-8468.004.patch, 
> YARN-8468.005.patch, YARN-8468.006.patch, YARN-8468.007.patch, 
> YARN-8468.008.patch, YARN-8468.009.patch, YARN-8468.010.patch, 
> YARN-8468.011.patch, YARN-8468.012.patch, YARN-8468.013.patch, 
> YARN-8468.014.patch, YARN-8468.015.patch, YARN-8468.016.patch, 
> YARN-8468.017.patch, YARN-8468.018.patch
>
>
> When using any scheduler, you can use "yarn.scheduler.maximum-allocation-mb" 
> to limit the overall size of a container. This applies globally to all 
> containers, cannot be limited per queue, and is not scheduler dependent.
> The goal of this ticket is to allow this value to be set on a per-queue basis.
> The use case: a user has two pools, one for ad hoc jobs and one for enterprise 
> apps, and wants to limit ad hoc jobs to small containers but allow 
> enterprise apps to request as many resources as needed. Setting 
> yarn.scheduler.maximum-allocation-mb sets a default maximum container size for 
> all queues; the per-queue maximum is set with the "maxContainerResources" 
> queue config value.
> Suggested solution:
> All the infrastructure is already in the code. We need to do the following:
>  * add the setting to the queue properties for all queue types (parent and 
> leaf); this will cover dynamically created queues.
>  * if we set it on the root, we override the scheduler setting, and we should 
> not allow that.
>  * make sure that the queue resource cap cannot be larger than the scheduler 
> max resource cap in the config.
>  * implement getMaximumResourceCapability(String queueName) in the 
> FairScheduler.
>  * implement getMaximumResourceCapability(String queueName) in both 
> FSParentQueue and FSLeafQueue.
>  * expose the setting in the queue information in the RM web UI.
>  * expose the setting in the metrics etc. for the queue.
>  * enforce the use of the queue-based maximum allocation limit if it is 
> available; if not, use the general scheduler-level setting.
>  ** use it during validation and normalization of requests in 
> scheduler.allocate, app submit and resource request.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-8468) Enable the use of queue based maximum container allocation limit and implement it in FairScheduler

2018-10-01 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/YARN-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antal Bálint Steinbach reopened YARN-8468:
--

The feature should be ported to 3.1.x and 3.2.x

> Enable the use of queue based maximum container allocation limit and 
> implement it in FairScheduler
> --
>
> Key: YARN-8468
> URL: https://issues.apache.org/jira/browse/YARN-8468
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.1.0
>Reporter: Antal Bálint Steinbach
>Assignee: Antal Bálint Steinbach
>Priority: Critical
> Fix For: 3.2.0, 3.3.0
>
> Attachments: YARN-8468.000.patch, YARN-8468.001.patch, 
> YARN-8468.002.patch, YARN-8468.003.patch, YARN-8468.004.patch, 
> YARN-8468.005.patch, YARN-8468.006.patch, YARN-8468.007.patch, 
> YARN-8468.008.patch, YARN-8468.009.patch, YARN-8468.010.patch, 
> YARN-8468.011.patch, YARN-8468.012.patch, YARN-8468.013.patch, 
> YARN-8468.014.patch, YARN-8468.015.patch, YARN-8468.016.patch, 
> YARN-8468.017.patch, YARN-8468.018.patch
>
>
> When using any scheduler, you can use "yarn.scheduler.maximum-allocation-mb" 
> to limit the overall size of a container. This applies globally to all 
> containers, cannot be limited per queue, and is not scheduler dependent.
> The goal of this ticket is to allow this value to be set on a per-queue basis.
> The use case: a user has two pools, one for ad hoc jobs and one for enterprise 
> apps, and wants to limit ad hoc jobs to small containers but allow 
> enterprise apps to request as many resources as needed. Setting 
> yarn.scheduler.maximum-allocation-mb sets a default maximum container size for 
> all queues; the per-queue maximum is set with the "maxContainerResources" 
> queue config value.
> Suggested solution:
> All the infrastructure is already in the code. We need to do the following:
>  * add the setting to the queue properties for all queue types (parent and 
> leaf); this will cover dynamically created queues.
>  * if we set it on the root, we override the scheduler setting, and we should 
> not allow that.
>  * make sure that the queue resource cap cannot be larger than the scheduler 
> max resource cap in the config.
>  * implement getMaximumResourceCapability(String queueName) in the 
> FairScheduler.
>  * implement getMaximumResourceCapability(String queueName) in both 
> FSParentQueue and FSLeafQueue.
>  * expose the setting in the queue information in the RM web UI.
>  * expose the setting in the metrics etc. for the queue.
>  * enforce the use of the queue-based maximum allocation limit if it is 
> available; if not, use the general scheduler-level setting.
>  ** use it during validation and normalization of requests in 
> scheduler.allocate, app submit and resource request.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8760) [AMRMProxy] Fix concurrent re-register due to YarnRM failover in AMRMClientRelayer

2018-10-01 Thread Botong Huang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634229#comment-16634229
 ] 

Botong Huang commented on YARN-8760:


The TestNMProxy failure is unrelated to this patch and is tracked under YARN-8837.

> [AMRMProxy] Fix concurrent re-register due to YarnRM failover in 
> AMRMClientRelayer
> --
>
> Key: YARN-8760
> URL: https://issues.apache.org/jira/browse/YARN-8760
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-8760.v1.patch
>
>
> When the home YarnRM is failing over, the FinishApplicationMaster call from 
> the AM can have multiple retry threads outstanding in FederationInterceptor. 
> When the new YarnRM comes back up, all retry threads will re-register with it. 
> The first one will succeed, but the rest will get an "Application Master is 
> already registered" exception. We should catch and swallow this exception and 
> move on. 
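
A minimal sketch of the swallow-and-continue idea (the exception type and 
message check are assumptions based on this description, not the attached 
patch):
{code:java}
import org.apache.hadoop.yarn.exceptions.InvalidApplicationMasterRequestException;

try {
  registerResponse = rmProxy.registerApplicationMaster(registerRequest);
} catch (InvalidApplicationMasterRequestException e) {
  // Another retry thread already re-registered after the failover; treat
  // "already registered" as success and move on. Anything else is rethrown.
  if (e.getMessage() == null
      || !e.getMessage().contains("Application Master is already registered")) {
    throw e;
  }
}
{code}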



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8468) Enable the use of queue based maximum container allocation limit and implement it in FairScheduler

2018-10-01 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634192#comment-16634192
 ] 

Weiwei Yang commented on YARN-8468:
---

Hi [~bsteinbach]

Sure, good to know. Can you provide a patch for branch-3.1? You can reopen this 
Jira and upload a patch for branch-3.1 to trigger the jenkins job again. I'll 
help review once jenkins gives a +1, and then we can backport this to 
branch-3.1 (for the 3.1.x versions).

> Enable the use of queue based maximum container allocation limit and 
> implement it in FairScheduler
> --
>
> Key: YARN-8468
> URL: https://issues.apache.org/jira/browse/YARN-8468
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.1.0
>Reporter: Antal Bálint Steinbach
>Assignee: Antal Bálint Steinbach
>Priority: Critical
> Fix For: 3.2.0, 3.3.0
>
> Attachments: YARN-8468.000.patch, YARN-8468.001.patch, 
> YARN-8468.002.patch, YARN-8468.003.patch, YARN-8468.004.patch, 
> YARN-8468.005.patch, YARN-8468.006.patch, YARN-8468.007.patch, 
> YARN-8468.008.patch, YARN-8468.009.patch, YARN-8468.010.patch, 
> YARN-8468.011.patch, YARN-8468.012.patch, YARN-8468.013.patch, 
> YARN-8468.014.patch, YARN-8468.015.patch, YARN-8468.016.patch, 
> YARN-8468.017.patch, YARN-8468.018.patch
>
>
> When using any scheduler, you can use "yarn.scheduler.maximum-allocation-mb" 
> to limit the overall size of a container. This applies globally to all 
> containers, cannot be limited per queue, and is not scheduler dependent.
> The goal of this ticket is to allow this value to be set on a per-queue basis.
> The use case: a user has two pools, one for ad hoc jobs and one for enterprise 
> apps, and wants to limit ad hoc jobs to small containers but allow 
> enterprise apps to request as many resources as needed. Setting 
> yarn.scheduler.maximum-allocation-mb sets a default maximum container size for 
> all queues; the per-queue maximum is set with the "maxContainerResources" 
> queue config value.
> Suggested solution:
> All the infrastructure is already in the code. We need to do the following:
>  * add the setting to the queue properties for all queue types (parent and 
> leaf); this will cover dynamically created queues.
>  * if we set it on the root, we override the scheduler setting, and we should 
> not allow that.
>  * make sure that the queue resource cap cannot be larger than the scheduler 
> max resource cap in the config.
>  * implement getMaximumResourceCapability(String queueName) in the 
> FairScheduler.
>  * implement getMaximumResourceCapability(String queueName) in both 
> FSParentQueue and FSLeafQueue.
>  * expose the setting in the queue information in the RM web UI.
>  * expose the setting in the metrics etc. for the queue.
>  * enforce the use of the queue-based maximum allocation limit if it is 
> available; if not, use the general scheduler-level setting.
>  ** use it during validation and normalization of requests in 
> scheduler.allocate, app submit and resource request.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4254) ApplicationAttempt stuck for ever due to UnknowHostexception

2018-10-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-4254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634178#comment-16634178
 ] 

Hadoop QA commented on YARN-4254:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
5s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  
8s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 23s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 269 unchanged - 0 fixed = 272 total (was 269) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
48s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
33s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 74m 
12s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-4254 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12941955/YARN-4254.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 000773b076dc 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 

[jira] [Commented] (YARN-8468) Enable the use of queue based maximum container allocation limit and implement it in FairScheduler

2018-10-01 Thread JIRA


[ 
https://issues.apache.org/jira/browse/YARN-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634173#comment-16634173
 ] 

Antal Bálint Steinbach commented on YARN-8468:
--

Hi [~cheersyang] ,

Thanks again for the detailed reviews; they helped me a lot. I am not completely 
aware of the existing branches, but I would say it makes sense to at least 
put this into 3.1.x, which was the originally planned release.

> Enable the use of queue based maximum container allocation limit and 
> implement it in FairScheduler
> --
>
> Key: YARN-8468
> URL: https://issues.apache.org/jira/browse/YARN-8468
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.1.0
>Reporter: Antal Bálint Steinbach
>Assignee: Antal Bálint Steinbach
>Priority: Critical
> Fix For: 3.2.0, 3.3.0
>
> Attachments: YARN-8468.000.patch, YARN-8468.001.patch, 
> YARN-8468.002.patch, YARN-8468.003.patch, YARN-8468.004.patch, 
> YARN-8468.005.patch, YARN-8468.006.patch, YARN-8468.007.patch, 
> YARN-8468.008.patch, YARN-8468.009.patch, YARN-8468.010.patch, 
> YARN-8468.011.patch, YARN-8468.012.patch, YARN-8468.013.patch, 
> YARN-8468.014.patch, YARN-8468.015.patch, YARN-8468.016.patch, 
> YARN-8468.017.patch, YARN-8468.018.patch
>
>
> When using any scheduler, you can use "yarn.scheduler.maximum-allocation-mb" 
> to limit the overall size of a container. This applies globally to all 
> containers, cannot be limited per queue, and is not scheduler dependent.
> The goal of this ticket is to allow this value to be set on a per-queue basis.
> The use case: a user has two pools, one for ad hoc jobs and one for enterprise 
> apps, and wants to limit ad hoc jobs to small containers but allow 
> enterprise apps to request as many resources as needed. Setting 
> yarn.scheduler.maximum-allocation-mb sets a default maximum container size for 
> all queues; the per-queue maximum is set with the "maxContainerResources" 
> queue config value.
> Suggested solution:
> All the infrastructure is already in the code. We need to do the following:
>  * add the setting to the queue properties for all queue types (parent and 
> leaf); this will cover dynamically created queues.
>  * if we set it on the root, we override the scheduler setting, and we should 
> not allow that.
>  * make sure that the queue resource cap cannot be larger than the scheduler 
> max resource cap in the config.
>  * implement getMaximumResourceCapability(String queueName) in the 
> FairScheduler.
>  * implement getMaximumResourceCapability(String queueName) in both 
> FSParentQueue and FSLeafQueue.
>  * expose the setting in the queue information in the RM web UI.
>  * expose the setting in the metrics etc. for the queue.
>  * enforce the use of the queue-based maximum allocation limit if it is 
> available; if not, use the general scheduler-level setting.
>  ** use it during validation and normalization of requests in 
> scheduler.allocate, app submit and resource request.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8837) TestNMProxy.testNMProxyRPCRetry Improvement

2018-10-01 Thread Jason Lowe (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16634068#comment-16634068
 ] 

Jason Lowe commented on YARN-8837:
--

Thanks for the patch!  Wouldn't it be much simpler to have the patch catch 
SocketException directly rather than catch IOException with a check for not 
SocketException?  Then any other exception type will not be caught, so it will 
bubble up and fail the test with a corresponding exception message and 
stacktrace.
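
A minimal sketch of that suggestion (the container-start call is a placeholder 
for whatever the test actually invokes):
{code:java}
try {
  proxy.startContainers(allRequests);  // placeholder for the test's RPC call
  Assert.fail("Expected an immediate SocketException, without RPC retries");
} catch (java.net.SocketException e) {
  // Expected: thrown immediately. Any other exception type is not caught
  // here, so it bubbles up and fails the test with its own message and
  // stacktrace.
}
{code}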


> TestNMProxy.testNMProxyRPCRetry Improvement
> ---
>
> Key: YARN-8837
> URL: https://issues.apache.org/jira/browse/YARN-8837
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Attachments: YARN-8789.1.patch
>
>
> The unit test 
> {{org.apache.hadoop.yarn.server.nodemanager.containermanager.TestNMProxy.testNMProxyRetry()}}
>  has had some issues in the past. You can search JIRA for it, but one example 
> is [YARN-5104].  I recently had some issues with it myself and found the 
> following change helpful in troubleshooting.
> {code:java|title=Current Implementation}
> } catch (IOException e) {
> // socket exception should be thrown immediately, without RPC retries.
> Assert.assertTrue(e instanceof java.net.SocketException);
> }
> {code}
> The issue here is that the assertion is a plain true/false check.  The testing 
> framework does not give me any feedback regarding the type of exception that 
> was thrown; it just says "assertion failed."



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8146) Remove LinkedList From resourcemanager.reservation.planning Package

2018-10-01 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR reassigned YARN-8146:
-

Assignee: BELUGA BEHR

> Remove LinkedList From resourcemanager.reservation.planning Package
> ---
>
> Key: YARN-8146
> URL: https://issues.apache.org/jira/browse/YARN-8146
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: reservation system
>Affects Versions: 3.0.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Attachments: YARN-8146.1.patch
>
>
> Remove {{LinkedList}} instances in favor of {{ArrayList}}.  {{ArrayList}} is 
> generally more memory efficient, causes less memory fragmentation, and, thanks 
> to memory locality, is faster to iterate.
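
For illustration, the shape of the proposed change (the element type is 
hypothetical):
{code:java}
// Before: List<ReservationInterval> intervals = new LinkedList<>();
// After: contiguous backing array, cheaper to allocate and iterate.
List<ReservationInterval> intervals = new ArrayList<>();
{code}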



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4254) ApplicationAttempt stuck for ever due to UnknowHostexception

2018-10-01 Thread Bibin A Chundatt (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-4254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-4254:
---
Attachment: YARN-4254.002.patch

> ApplicationAttempt stuck for ever due to UnknowHostexception
> 
>
> Key: YARN-4254
> URL: https://issues.apache.org/jira/browse/YARN-4254
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: 0001-YARN-4254.patch, Logs.txt, Test.patch, 
> YARN-4254.002.patch
>
>
> Scenario
> ===
> 1. RM HA and 5 NMs are available in the cluster and working fine. 
> 2. Add one more NM to the same cluster, but RM /etc/hosts is not updated.
> 3. Submit an application to the same cluster.
> If the AM gets allocated to the newly added NM, the *application attempt will 
> get stuck forever*. The user will not get to know why this happened.
> Impact
> 1. RM logs get overloaded with exceptions.
> 2. The application gets stuck forever.
> Handling suggestion: YARN-261 allows failing an application attempt.
> If we fail it, the next attempt could get assigned to another NM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4254) ApplicationAttempt stuck for ever due to UnknowHostexception

2018-10-01 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-4254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16633758#comment-16633758
 ] 

Bibin A Chundatt commented on YARN-4254:


[~jlowe]

As discussed earlier, attaching a patch to skip registration for unresolved 
NodeManagers.
Currently, a wrong configuration really messes up the RM. 
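
A minimal sketch of the idea (the placement in the registration path and the 
response handling are assumptions, not the attached patch):
{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;
import org.apache.hadoop.yarn.server.api.records.NodeAction;

// Hypothetical check during NM registration: refuse unresolvable hosts
// instead of letting AM containers on them hang forever.
try {
  InetAddress.getByName(nodeHostName);
} catch (UnknownHostException e) {
  String message = "Cannot resolve NodeManager host: " + nodeHostName;
  LOG.warn(message, e);
  response.setDiagnosticsMessage(message);
  response.setNodeAction(NodeAction.SHUTDOWN);
  return response;
}
{code}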

> ApplicationAttempt stuck for ever due to UnknowHostexception
> 
>
> Key: YARN-4254
> URL: https://issues.apache.org/jira/browse/YARN-4254
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: 0001-YARN-4254.patch, Logs.txt, Test.patch
>
>
> Scenario
> ===
> 1. RM HA and 5 NMs are available in the cluster and working fine. 
> 2. Add one more NM to the same cluster, but RM /etc/hosts is not updated.
> 3. Submit an application to the same cluster.
> If the AM gets allocated to the newly added NM, the *application attempt will 
> get stuck forever*. The user will not get to know why this happened.
> Impact
> 1. RM logs get overloaded with exceptions.
> 2. The application gets stuck forever.
> Handling suggestion: YARN-261 allows failing an application attempt.
> If we fail it, the next attempt could get assigned to another NM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8834) Provide Java client for fetching entities from TimelineReader

2018-10-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16633687#comment-16633687
 ] 

Hadoop QA commented on YARN-8834:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common: The patch generated 11 new 
+ 0 unchanged - 0 fixed = 11 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
15s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8834 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12941945/YARN-8834.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 282ce8034efc 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / fd6be58 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/22015/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22015/testReport/ |
| Max. process+thread count | 300 (vs. ulimit of 1) |
| modules | C: 

[jira] [Commented] (YARN-8788) mvn package -Pyarn-ui fails on JDK9

2018-10-01 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16633662#comment-16633662
 ] 

Akira Ajisaka commented on YARN-8788:
-

Hi [~vbmudalige], we don't need to wait for the wro4j 1.8.1 release. We can 
update the mockito version in the plugin's dependencies as follows:
{code:title=pom.xml}
<plugin>
  <groupId>ro.isdc.wro4j</groupId>
  <artifactId>wro4j-maven-plugin</artifactId>
  <version>1.7.9</version>
  <dependencies>
    <dependency>
      (snip)
    </dependency>
  </dependencies>
</plugin>
{code}

> mvn package -Pyarn-ui fails on JDK9
> ---
>
> Key: YARN-8788
> URL: https://issues.apache.org/jira/browse/YARN-8788
> Project: Hadoop YARN
>  Issue Type: Bug
> Environment: Java 9.0.4, CentOS 7.5
>Reporter: Akira Ajisaka
>Priority: Major
>  Labels: newbie
>
> {{mvn package -Pdist,native,yarn-ui -Dtar -DskipTests}} failed on trunk.
> {noformat}
> [ERROR] Failed to execute goal ro.isdc.wro4j:wro4j-maven-plugin:1.7.9:run 
> (default) on project hadoop-yarn-ui: Execution default of goal 
> ro.isdc.wro4j:wro4j-maven-plugin:1.7.9:run failed: An API incompatibility was 
> encountered while executing ro.isdc.wro4j:wro4j-maven-plugin:1.7.9:run: 
> java.lang.ExceptionInInitializerError: null
> [ERROR] -
> [ERROR] realm =plugin>ro.isdc.wro4j:wro4j-maven-plugin:1.7.9
> [ERROR] strategy = org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy
> [ERROR] urls[0] = 
> file:/home/aajisaka/.m2/repository/ro/isdc/wro4j/wro4j-maven-plugin/1.7.9/wro4j-maven-plugin-1.7.9.jar
> [ERROR] urls[1] = 
> file:/home/aajisaka/.m2/repository/ro/isdc/wro4j/wro4j-core/1.7.9/wro4j-core-1.7.9.jar
> [ERROR] urls[2] = 
> file:/home/aajisaka/.m2/repository/org/apache/commons/commons-lang3/3.4/commons-lang3-3.4.jar
> (snip)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8834) Provide Java client for fetching entities from TimelineReader

2018-10-01 Thread Rohith Sharma K S (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16633664#comment-16633664
 ] 

Rohith Sharma K S commented on YARN-8834:
-

Thanks [~abmodi] for the patch! A few comments on TimelineReaderClientImpl.java:
# {{client = Client.create(cc);}} should handle security login, so the web 
client should be created like below. Otherwise, try making use of 
TimelineConnector, but it retries internally for a few minutes. 
{code}
webServiceClient = new Client(new URLConnectionClientHandler(
    new HttpURLConnectionFactory() {
      @Override
      public HttpURLConnection getHttpURLConnection(URL url)
          throws IOException {
        AuthenticatedURL.Token token = new AuthenticatedURL.Token();
        HttpURLConnection conn = null;
        try {
          conn = new AuthenticatedURL().openConnection(url, token);
        } catch (AuthenticationException e) {
          throw new IOException(e);
        }
        return conn;
      }
    }));
{code}
# I think we should create a generic method, improving the existing doGetUri to 
take multiple parameters that construct the query parameters, say doGetUri(URI 
base, String path, MultivaluedMap<String, String> params). This could avoid 
code duplication.
# "application/json" could be changed to make use of 
MediaType.APPLICATION_JSON. 
# Add a test case for retrieval of entities. If we refactor the code, then in 
tests the getEntities method could be overridden and mocked.
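
For illustration, a minimal sketch of the generic helper suggested in point 2 
(signature and body are assumptions, not the final patch):
{code:java}
// Jersey 1.x types: com.sun.jersey.api.client.{Client, WebResource,
// ClientResponse}, javax.ws.rs.core.{MultivaluedMap, MediaType}; "client"
// is assumed to be the class's Jersey Client field.
private ClientResponse doGetUri(URI base, String path,
    MultivaluedMap<String, String> params) throws IOException {
  WebResource resource = client.resource(base).path(path);
  if (params != null) {
    resource = resource.queryParams(params);
  }
  return resource.accept(MediaType.APPLICATION_JSON).get(ClientResponse.class);
}
{code}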

> Provide Java client for fetching entities from TimelineReader
> -
>
> Key: YARN-8834
> URL: https://issues.apache.org/jira/browse/YARN-8834
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Abhishek Modi
>Priority: Critical
> Attachments: YARN-8834.001.patch
>
>
> While reviewing YARN-8303, we felt that it is necessary to provide a 
> TimelineReaderClient which wraps all the REST calls so that the user can 
> just provide the EntityType and EntityId along with filters. Currently, 
> fetching entities from TimelineReader is only possible via REST calls, or 
> somebody needs to write a Java client to get entities.
> It would be good to provide a TimelineReaderClient which fetches entities from 
> TimelineReaderServer. This would be more useful.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8834) Provide Java client for fetching entities from TimelineReader

2018-10-01 Thread Abhishek Modi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-8834:

Attachment: (was: YARN-8834.001.patch)

> Provide Java client for fetching entities from TimelineReader
> -
>
> Key: YARN-8834
> URL: https://issues.apache.org/jira/browse/YARN-8834
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Abhishek Modi
>Priority: Critical
> Attachments: YARN-8834.001.patch
>
>
> While reviewing YARN-8303, we felt that it is necessary to provide a 
> TimelineReaderClient which wraps all the REST calls so that the user can 
> just provide the EntityType and EntityId along with filters. Currently, 
> fetching entities from TimelineReader is only possible via REST calls, or 
> somebody needs to write a Java client to get entities.
> It would be good to provide a TimelineReaderClient which fetches entities from 
> TimelineReaderServer. This would be more useful.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8834) Provide Java client for fetching entities from TimelineReader

2018-10-01 Thread Abhishek Modi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-8834:

Attachment: YARN-8834.001.patch

> Provide Java client for fetching entities from TimelineReader
> -
>
> Key: YARN-8834
> URL: https://issues.apache.org/jira/browse/YARN-8834
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Abhishek Modi
>Priority: Critical
> Attachments: YARN-8834.001.patch
>
>
> While reviewing YARN-8303, we felt that it is necessary to provide a 
> TimelineReaderClient which wraps all the REST calls so that the user can 
> just provide the EntityType and EntityId along with filters. Currently, 
> fetching entities from TimelineReader is only possible via REST calls, or 
> somebody needs to write a Java client to get entities.
> It would be good to provide a TimelineReaderClient which fetches entities from 
> TimelineReaderServer. This would be more useful.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8834) Provide Java client for fetching entities from TimelineReader

2018-10-01 Thread Rohith Sharma K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-8834:

Issue Type: Sub-task  (was: Bug)
Parent: YARN-7055

> Provide Java client for fetching entities from TimelineReader
> -
>
> Key: YARN-8834
> URL: https://issues.apache.org/jira/browse/YARN-8834
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Abhishek Modi
>Priority: Critical
> Attachments: YARN-8834.001.patch
>
>
> While reviewing YARN-8303, we felt that it is necessary to provide a 
> TimelineReaderClient which wraps all the REST calls so that the user can 
> just provide the EntityType and EntityId along with filters. Currently, 
> fetching entities from TimelineReader is only possible via REST calls, or 
> somebody needs to write a Java client to get entities.
> It would be good to provide a TimelineReaderClient which fetches entities from 
> TimelineReaderServer. This would be more useful.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8834) Provide Java client for fetching entities from TimelineReader

2018-10-01 Thread Rohith Sharma K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-8834:

Issue Type: Bug  (was: Sub-task)
Parent: (was: YARN-7055)

> Provide Java client for fetching entities from TimelineReader
> -
>
> Key: YARN-8834
> URL: https://issues.apache.org/jira/browse/YARN-8834
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Abhishek Modi
>Priority: Critical
> Attachments: YARN-8834.001.patch
>
>
> While reviewing YARN-8303, we felt that it is necessary to provide a 
> TimelineReaderClient which wraps all the REST calls so that the user can 
> just provide the EntityType and EntityId along with filters. Currently, 
> fetching entities from TimelineReader is only possible via REST calls, or 
> somebody needs to write a Java client to get entities.
> It would be good to provide a TimelineReaderClient which fetches entities from 
> TimelineReaderServer. This would be more useful.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6989) Ensure timeline service v2 codebase gets UGI from HttpServletRequest in a consistent way

2018-10-01 Thread Rohith Sharma K S (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16633635#comment-16633635
 ] 

Rohith Sharma K S commented on YARN-6989:
-

+1, LGTM. cc: [~vrushalic]

> Ensure timeline service v2 codebase gets UGI from HttpServletRequest in a 
> consistent way
> 
>
> Key: YARN-6989
> URL: https://issues.apache.org/jira/browse/YARN-6989
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-6989.001.patch
>
>
> As noticed during discussions in YARN-6820, the webservices in timeline 
> service v2 get the UGI created from the user obtained by invoking 
> getRemoteUser on the HttpServletRequest. 
> It would be good to use getUserPrincipal instead of invoking getRemoteUser on 
> the HttpServletRequest. 
> Filing this jira to update the code. 
> Per the Java EE 6 and 7 documentation, the behavior around getRemoteUser and 
> getUserPrincipal is described at:
> http://docs.oracle.com/javaee/6/tutorial/doc/gjiie.html#bncba
> https://docs.oracle.com/javaee/7/tutorial/security-webtier003.htm
> {code}
> getRemoteUser, which determines the user name with which the client 
> authenticated. The getRemoteUser method returns the name of the remote user 
> (the caller) associated by the container with the request. If no user has 
> been authenticated, this method returns null.
> getUserPrincipal, which determines the principal name of the current user and 
> returns a java.security.Principal object. If no user has been authenticated, 
> this method returns null. Calling the getName method on the Principal 
> returned by getUserPrincipal returns the name of the remote user.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org