[jira] [Commented] (YARN-8452) FairScheduler.update can take long time if yarn.scheduler.fair.sizebasedweight is on

2018-06-25 Thread Wilfred Spiegelenburg (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523246#comment-16523246
 ] 

Wilfred Spiegelenburg commented on YARN-8452:
-

[~szegedim] thank you for looking at this optimisation.

I was wondering if we cannot make the change even simpler:
* The {{weight == -1}} check only matters the first time; after that it is always true. 
Why not use {{weightMemory}} or {{weightPriority}} not being set to trigger the first 
calculation? After the first run those two values will just be used (a fuller sketch 
follows below this list):
{code}
  private float weight = 0;
  private long weightMemory = -1;
  private int weightPriority = -1;
{code}
* The synchronised lock seems to be unneeded.
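
To make the idea concrete, here is a minimal sketch of the caching, assuming hypothetical field and method names and the usual log2-of-memory-demand formula for size-based weight (an illustration only, not the actual FSAppAttempt code):
{code:java}
// Sketch of the caching idea with assumed names; not the actual FSAppAttempt code.
class CachedSizeBasedWeight {
  private float weight = 0;
  private long weightMemory = -1;   // memory demand the cached weight was computed from
  private int weightPriority = -1;  // priority the cached weight was computed from

  float getWeight(long demandMemory, int priority) {
    // Recompute only on the first call or when an input changed.
    if (weightMemory != demandMemory || weightPriority != priority) {
      weight = (float) (Math.log1p(demandMemory) / Math.log(2)) * priority;
      weightMemory = demandMemory;
      weightPriority = priority;
    }
    return weight;
  }
}
{code}
This also drops the {{weight == -1}} sentinel and the synchronised block, at the cost of two extra fields per application.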



> FairScheduler.update can take long time if 
> yarn.scheduler.fair.sizebasedweight is on
> 
>
> Key: YARN-8452
> URL: https://issues.apache.org/jira/browse/YARN-8452
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-8452.000.patch
>
>
> Basically we recalculate the weight every time, even if the inputs did not 
> change. This causes high CPU usage if the cluster has lots of apps.






[jira] [Commented] (YARN-8423) GPU does not get released even though the application gets killed.

2018-06-25 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523208#comment-16523208
 ] 

Wangda Tan commented on YARN-8423:
--

+1, thanks [~sunilg], could you create a JIRA to add tests? Let's get this in 
first.

> GPU does not get released even though the application gets killed.
> --
>
> Key: YARN-8423
> URL: https://issues.apache.org/jira/browse/YARN-8423
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Critical
> Attachments: YARN-8423.001.patch, YARN-8423.002.patch, 
> YARN-8423.003.patch, kill-container-nm.log
>
>
> Run a TensorFlow app requesting one GPU.
> Kill the application once the GPU is allocated.
> Query the nodemanager once the application is killed. We see that the GPU is not 
> being released.
> {code}
>  curl -i /ws/v1/node/resources/yarn.io%2Fgpu
> {"gpuDeviceInformation":{"gpus":[{"productName":"","uuid":"GPU-","minorNumber":0,"gpuUtilizations":{"overallGpuUtilization":0.0},"gpuMemoryUsage":{"usedMemoryMiB":73,"availMemoryMiB":12125,"totalMemoryMiB":12198},"temperature":{"currentGpuTemp":28.0,"maxGpuTemp":85.0,"slowThresholdGpuTemp":82.0}},{"productName":"","uuid":"GPU-","minorNumber":1,"gpuUtilizations":{"overallGpuUtilization":0.0},"gpuMemoryUsage":{"usedMemoryMiB":73,"availMemoryMiB":12125,"totalMemoryMiB":12198},"temperature":{"currentGpuTemp":28.0,"maxGpuTemp":85.0,"slowThresholdGpuTemp":82.0}}],"driverVersion":""},"totalGpuDevices":[{"index":0,"minorNumber":0},{"index":1,"minorNumber":1}],"assignedGpuDevices":[{"index":0,"minorNumber":0,"containerId":"container_"}]}
> {code}






[jira] [Commented] (YARN-8108) RM metrics rest API throws GSSException in kerberized environment

2018-06-25 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523166#comment-16523166
 ] 

genericqa commented on YARN-8108:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
7s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 57s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}150m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8108 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12929122/YARN-8108.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1d06f1a17cff 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 35ec940 |
| maven | 

[jira] [Commented] (YARN-8455) Add basic acl check for all TS v2 REST APIs

2018-06-25 Thread Rohith Sharma K S (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523159#comment-16523159
 ] 

Rohith Sharma K S commented on YARN-8455:
-

bq. Can we also sync the exception handling similar to RMWebServices.
The patch does the same as RMWebServices by throwing a forbidden exception. It is the 
only exception that does not carry a big stack trace compared to the others, since 
it is created in the reader web service.

bq. Also AccessControlException is thrown as INTERNAL_SERVER_ERROR now if the 
table acl is not available for reader rt ??
Currently we don't have any ACL story. This is only a very basic, strict restriction 
so that two authenticated users can't see each other's data. A complete ACL model is 
coming up and is in progress. The default flow doesn't hit this check in any 
case.
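
For reference, a minimal sketch of the kind of owner-only restriction being described, with hypothetical class and method names (not the actual TimelineReaderWebServices code):
{code:java}
import javax.ws.rs.ForbiddenException;
import org.apache.hadoop.security.UserGroupInformation;

// Reject a read when the authenticated caller is not the user owning the entities.
final class TimelineReaderAclSketch {
  static void checkAccess(UserGroupInformation callerUGI, String entityOwner) {
    if (callerUGI != null && !callerUGI.getShortUserName().equals(entityOwner)) {
      throw new ForbiddenException("User " + callerUGI.getShortUserName()
          + " is not allowed to read entities owned by " + entityOwner);
    }
  }
}
{code}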

> Add basic acl check for all TS v2 REST APIs
> ---
>
> Key: YARN-8455
> URL: https://issues.apache.org/jira/browse/YARN-8455
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8455.001.patch
>
>
> YARN-8319 added a filter check for the flows pages. The same behavior needs to be added 
> for all other REST APIs until ATS provides full support for ACLs.






[jira] [Commented] (YARN-8423) GPU does not get released even though the application gets killed.

2018-06-25 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523130#comment-16523130
 ] 

genericqa commented on YARN-8423:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m  
1s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8423 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12929121/YARN-8423.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 784c127504ce 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 35ec940 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21105/testReport/ |
| Max. process+thread count | 334 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21105/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> GPU does not get released even though the application gets 

[jira] [Commented] (YARN-8455) Add basic acl check for all TS v2 REST APIs

2018-06-25 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523128#comment-16523128
 ] 

Bibin A Chundatt commented on YARN-8455:


[~rohithsharma]

Can we also sync the exception handling to be similar to RMWebServices? IIRC all the 
stack traces were avoided in RMWebServices.
Currently {{TimelineReaderWebServices}} returns the complete stack trace as part of the 
response.

Also, {{AccessControlException}} is thrown as {{INTERNAL_SERVER_ERROR}} now if 
the table ACL is not available for the reader, right?
Can we make it more detailed?


> Add basic acl check for all TS v2 REST APIs
> ---
>
> Key: YARN-8455
> URL: https://issues.apache.org/jira/browse/YARN-8455
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8455.001.patch
>
>
> YARN-8319 added a filter check for the flows pages. The same behavior needs to be added 
> for all other REST APIs until ATS provides full support for ACLs.






[jira] [Commented] (YARN-8361) Change App Name Placement Rule to use App Name instead of App Id for configuration

2018-06-25 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523111#comment-16523111
 ] 

genericqa commented on YARN-8361:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 17s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.placement.TestAppNameMappingPlacementRule |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8361 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12929115/YARN-8361.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  

[jira] [Updated] (YARN-8434) Nodemanager not registering to active RM in federation

2018-06-25 Thread Bibin A Chundatt (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-8434:
---
Attachment: YARN-8434.002.patch

> Nodemanager not registering to active RM in federation
> --
>
> Key: YARN-8434
> URL: https://issues.apache.org/jira/browse/YARN-8434
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Blocker
> Attachments: YARN-8434.001.patch, YARN-8434.002.patch
>
>
> FederationRMFailoverProxyProvider doesn't handle connecting to active RM. 






[jira] [Commented] (YARN-8379) Add an option to allow Capacity Scheduler preemption to balance satisfied queues

2018-06-25 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523086#comment-16523086
 ] 

genericqa commented on YARN-8379:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 12s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
36s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}133m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.monitor.capacity.TestProportionalCapacityPreemptionPolicyPreemptToBalance
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8379 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12929105/YARN-8379.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ae8d5c30f508 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7a3c6e9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/21101/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21101/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/21101/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 939 (vs. ulimit of 1) |
| modules 

[jira] [Commented] (YARN-8459) Capacity Scheduler should properly handle container allocation on app/node when app/node being removed by scheduler

2018-06-25 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523075#comment-16523075
 ] 

genericqa commented on YARN-8459:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 28s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8459 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12929112/YARN-8459.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 02b0c0fa91df 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7a3c6e9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/21102/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21102/testReport/ |
| Max. process+thread count | 935 (vs. ulimit of 1) |
| modules | C: 

[jira] [Commented] (YARN-8438) TestContainer.testKillOnNew flaky on trunk

2018-06-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523073#comment-16523073
 ] 

Hudson commented on YARN-8438:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14480 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14480/])
YARN-8438. TestContainer.testKillOnNew flaky on trunk. Contributed by Szilard Nemeth. 
(miklos.szegedi: rev 35ec9401e829bfa10994790659a26b0babacae35)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/TestContainer.java


> TestContainer.testKillOnNew flaky on trunk
> --
>
> Key: YARN-8438
> URL: https://issues.apache.org/jira/browse/YARN-8438
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-8438.001.patch, YARN-8438.002.patch, 
> YARN-8438.003.patch, YARN-8438.004.patch, YARN-8438.005.patch, 
> YARN-8438.006.patch
>
>
> Running this test several times (e.g. 30), it fails ~5-10 times.
> Stacktrace: 
> {code:java}
> java.lang.AssertionError at org.junit.Assert.fail(Assert.java:86) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> org.junit.Assert.assertTrue(Assert.java:52) at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.TestContainer.testKillOnNew(TestContainer.java:594)
> {code}
> TestContainer:594 is the following code in trunk, currently:
> {code:java}
> Assert.assertTrue(containerMetrics.finishTime.value() > 
> containerMetrics.startTime.value());
> {code}
> So sometimes the finish time is not greater than the start time.






[jira] [Commented] (YARN-8108) RM metrics rest API throws GSSException in kerberized environment

2018-06-25 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523071#comment-16523071
 ] 

Sunil Govindan commented on YARN-8108:
--

Post YARN-7605, we need a way to safely protect certain servlet endpoints like the 
API server. However, it does not have to load all existing filters for its 
pathSpecs (or mappings). Currently it loads all pathSpecs across all 
filters, causing the request-replay problem.

{{pathSpecs=*.html,*.jsp,/stacks,/logLevel,/jmx,/conf,/cluster/*,/ws/*,/proxy/*,/app/*,/proxy/*,/*}}

/proxy is added two times. To fix this, it is now up to the servlet to decide 
whether to load all of its pathSpecs under all loaded filters; this patch adds 
that option. Debugged with [~eyang] and [~vinodkv], thank you for the 
support.

Attaching a patch which addresses this issue (a toy illustration of the duplicate 
mapping follows after the test list below).

Also tested in a single-node kerberized cluster and did the below tests:
 # Accessed the */proxy* endpoint directly from a kerberized browser (before this 
patch, we used to get a GSSException saying the request is a replay).
 # Accessed the proxy link of a RUNNING and a COMPLETED application from the RM Web UI. 
This works fine.
 # From a kinited shell, accessed the /proxy endpoint and it loaded fine.
 # From a kinited shell, accessed the /api endpoint to submit an app and also to 
get service details. Both are working as expected.
 # From an unsecure shell, both the /proxy and /api endpoints give a 401 
exception, which is expected.
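
As a toy illustration of the duplicate mapping (plain Java with hypothetical names, not the actual HttpServer2/AmFilterInitializer code): when the same pathSpec, e.g. /proxy/*, is registered for the authentication filter twice, the request is authenticated twice and the second pass rejects the already-used Kerberos token as a replay. One simple way to see the duplicate is to deduplicate the mappings before registering them; the actual patch instead lets the servlet opt out of loading every pathSpec, as described above.
{code:java}
import java.util.LinkedHashSet;
import java.util.Set;

public final class PathSpecDedupSketch {
  public static void main(String[] args) {
    // pathSpecs as accumulated across all filter initializers (note /proxy/* twice).
    String[] configured = {"*.html", "*.jsp", "/stacks", "/logLevel", "/jmx", "/conf",
        "/cluster/*", "/ws/*", "/proxy/*", "/app/*", "/proxy/*", "/*"};
    Set<String> unique = new LinkedHashSet<>();
    for (String spec : configured) {
      if (!unique.add(spec)) {
        System.out.println("Duplicate pathSpec skipped: " + spec);
      }
    }
    System.out.println("Filter mappings to register: " + unique);
  }
}
{code}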

> RM metrics rest API throws GSSException in kerberized environment
> -
>
> Key: YARN-8108
> URL: https://issues.apache.org/jira/browse/YARN-8108
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Kshitij Badani
>Assignee: Eric Yang
>Priority: Blocker
> Attachments: YARN-8108.001.patch, YARN-8108.002.patch
>
>
> The test is trying to pull up metrics data from SHS after kiniting as 'test_user'.
> It is throwing GSSException as follows:
> {code:java}
> b2b460b80713|RUNNING: curl --silent -k -X GET -D 
> /hwqe/hadoopqe/artifacts/tmp-94845 --negotiate -u : 
> http://rm_host:8088/proxy/application_1518674952153_0070/metrics/json2018-02-15
>  07:15:48,757|INFO|MainThread|machine.py:194 - 
> run()||GUID=fc5a3266-28f8-4eed-bae2-b2b460b80713|Exit Code: 0
> 2018-02-15 07:15:48,758|INFO|MainThread|spark.py:1757 - 
> getMetricsJsonData()|metrics:
> 
> 
> 
> Error 403 GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> 
> HTTP ERROR 403
> Problem accessing /proxy/application_1518674952153_0070/metrics/json. 
> Reason:
>  GSSException: Failure unspecified at GSS-API level (Mechanism level: 
> Request is a replay (34))
> 
> 
> {code}
> Root cause: the proxy server on RM can't be supported in a Kerberos-enabled 
> cluster because AuthenticationFilter is applied twice in the Hadoop code (once in 
> httpServer2 for the RM, and another instance from AmFilterInitializer for the proxy 
> server). This will require code changes to the hadoop-yarn-server-web-proxy 
> project.






[jira] [Commented] (YARN-8460) 'yarn.cluster.max-application-priority' need to be exposed via CLI/REST

2018-06-25 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523066#comment-16523066
 ] 

Weiwei Yang commented on YARN-8460:
---

Hi [~ssath...@hortonworks.com]

I think we have the conf exposed via REST; can you try something like:
{code}
curl --header "accept: application/xml" 
http://rm.host.address:8088/conf?name=yarn.cluster.max-application-priority
{code}


> 'yarn.cluster.max-application-priority' need to be exposed via CLI/REST
> ---
>
> Key: YARN-8460
> URL: https://issues.apache.org/jira/browse/YARN-8460
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Priority: Major
>
> Add a method to fetch the value of 'yarn.cluster.max-application-priority'.
> Since the property is not available by default, please add either a REST API 
> or a CLI method.






[jira] [Updated] (YARN-8108) RM metrics rest API throws GSSException in kerberized environment

2018-06-25 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8108:
-
Attachment: YARN-8108.002.patch

> RM metrics rest API throws GSSException in kerberized environment
> -
>
> Key: YARN-8108
> URL: https://issues.apache.org/jira/browse/YARN-8108
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Kshitij Badani
>Assignee: Eric Yang
>Priority: Blocker
> Attachments: YARN-8108.001.patch, YARN-8108.002.patch
>
>
> The test is trying to pull up metrics data from SHS after kiniting as 'test_user'.
> It is throwing GSSException as follows:
> {code:java}
> b2b460b80713|RUNNING: curl --silent -k -X GET -D 
> /hwqe/hadoopqe/artifacts/tmp-94845 --negotiate -u : 
> http://rm_host:8088/proxy/application_1518674952153_0070/metrics/json2018-02-15
>  07:15:48,757|INFO|MainThread|machine.py:194 - 
> run()||GUID=fc5a3266-28f8-4eed-bae2-b2b460b80713|Exit Code: 0
> 2018-02-15 07:15:48,758|INFO|MainThread|spark.py:1757 - 
> getMetricsJsonData()|metrics:
> 
> 
> 
> Error 403 GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> 
> HTTP ERROR 403
> Problem accessing /proxy/application_1518674952153_0070/metrics/json. 
> Reason:
>  GSSException: Failure unspecified at GSS-API level (Mechanism level: 
> Request is a replay (34))
> 
> 
> {code}
> Root cause: the proxy server on RM can't be supported in a Kerberos-enabled 
> cluster because AuthenticationFilter is applied twice in the Hadoop code (once in 
> httpServer2 for the RM, and another instance from AmFilterInitializer for the proxy 
> server). This will require code changes to the hadoop-yarn-server-web-proxy 
> project.






[jira] [Commented] (YARN-8461) Support strict memory control on individual container with elastic control memory mechanism

2018-06-25 Thread Miklos Szegedi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523054#comment-16523054
 ] 

Miklos Szegedi commented on YARN-8461:
--

[~haibochen], thank you for the patch.
{code:java}
if (status.contains(CGroupsHandler.UNDER_OOM)) {
  LOG.warn("Container " + containerId + " under OOM based on cgroups.");
  return Optional.of(true);
}
{code}
The else branch should return {{Optional.of(false);}}
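
For clarity, a minimal sketch of the suggested shape of that block, reusing the names from the snippet above (an illustration, not the exact patch code):
{code:java}
if (status.contains(CGroupsHandler.UNDER_OOM)) {
  LOG.warn("Container " + containerId + " under OOM based on cgroups.");
  return Optional.of(true);
} else {
  // Status was read successfully and does not report OOM: return a definite "false"
  // instead of falling through without an answer.
  return Optional.of(false);
}
{code}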

> Support strict memory control on individual container with elastic control 
> memory mechanism
> ---
>
> Key: YARN-8461
> URL: https://issues.apache.org/jira/browse/YARN-8461
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.2.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-8461.00.patch
>
>
> YARN-4599 adds elastic memory control that disables the OOM killer for the root 
> container cgroup. Hence, all containers have their OOM killer disabled 
> because they inherit the setting from the root container cgroup. As a result, when 
> strict memory control on individual containers is also enabled, the container 
> will be frozen but not killed. We can let the container monitoring thread 
> take care of the frozen containers.






[jira] [Commented] (YARN-8423) GPU does not get released even though the application gets killed.

2018-06-25 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523053#comment-16523053
 ] 

Sunil Govindan commented on YARN-8423:
--

Thanks [~leftnoteasy]

Quickly fixed this issue. Somehow the test case seems tricky, as container states 
are not picked up in the test GPU class. I think I'll create another Jira to improve 
the tests. Thoughts? Could you please add anything I have overlooked in the tests. Thank you.

> GPU does not get released even though the application gets killed.
> --
>
> Key: YARN-8423
> URL: https://issues.apache.org/jira/browse/YARN-8423
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Critical
> Attachments: YARN-8423.001.patch, YARN-8423.002.patch, 
> YARN-8423.003.patch, kill-container-nm.log
>
>
> Run a TensorFlow app requesting one GPU.
> Kill the application once the GPU is allocated.
> Query the nodemanager once the application is killed. We see that the GPU is not 
> being released.
> {code}
>  curl -i /ws/v1/node/resources/yarn.io%2Fgpu
> {"gpuDeviceInformation":{"gpus":[{"productName":"","uuid":"GPU-","minorNumber":0,"gpuUtilizations":{"overallGpuUtilization":0.0},"gpuMemoryUsage":{"usedMemoryMiB":73,"availMemoryMiB":12125,"totalMemoryMiB":12198},"temperature":{"currentGpuTemp":28.0,"maxGpuTemp":85.0,"slowThresholdGpuTemp":82.0}},{"productName":"","uuid":"GPU-","minorNumber":1,"gpuUtilizations":{"overallGpuUtilization":0.0},"gpuMemoryUsage":{"usedMemoryMiB":73,"availMemoryMiB":12125,"totalMemoryMiB":12198},"temperature":{"currentGpuTemp":28.0,"maxGpuTemp":85.0,"slowThresholdGpuTemp":82.0}}],"driverVersion":""},"totalGpuDevices":[{"index":0,"minorNumber":0},{"index":1,"minorNumber":1}],"assignedGpuDevices":[{"index":0,"minorNumber":0,"containerId":"container_"}]}
> {code}






[jira] [Updated] (YARN-8423) GPU does not get released even though the application gets killed.

2018-06-25 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8423:
-
Attachment: YARN-8423.003.patch

> GPU does not get released even though the application gets killed.
> --
>
> Key: YARN-8423
> URL: https://issues.apache.org/jira/browse/YARN-8423
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Sunil Govindan
>Priority: Critical
> Attachments: YARN-8423.001.patch, YARN-8423.002.patch, 
> YARN-8423.003.patch, kill-container-nm.log
>
>
> Run a TensorFlow app requesting one GPU.
> Kill the application once the GPU is allocated.
> Query the nodemanager once the application is killed. We see that the GPU is not 
> being released.
> {code}
>  curl -i /ws/v1/node/resources/yarn.io%2Fgpu
> {"gpuDeviceInformation":{"gpus":[{"productName":"","uuid":"GPU-","minorNumber":0,"gpuUtilizations":{"overallGpuUtilization":0.0},"gpuMemoryUsage":{"usedMemoryMiB":73,"availMemoryMiB":12125,"totalMemoryMiB":12198},"temperature":{"currentGpuTemp":28.0,"maxGpuTemp":85.0,"slowThresholdGpuTemp":82.0}},{"productName":"","uuid":"GPU-","minorNumber":1,"gpuUtilizations":{"overallGpuUtilization":0.0},"gpuMemoryUsage":{"usedMemoryMiB":73,"availMemoryMiB":12125,"totalMemoryMiB":12198},"temperature":{"currentGpuTemp":28.0,"maxGpuTemp":85.0,"slowThresholdGpuTemp":82.0}}],"driverVersion":""},"totalGpuDevices":[{"index":0,"minorNumber":0},{"index":1,"minorNumber":1}],"assignedGpuDevices":[{"index":0,"minorNumber":0,"containerId":"container_"}]}
> {code}






[jira] [Commented] (YARN-8438) TestContainer.testKillOnNew flaky on trunk

2018-06-25 Thread Miklos Szegedi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523050#comment-16523050
 ] 

Miklos Szegedi commented on YARN-8438:
--

+1 LGTM.

> TestContainer.testKillOnNew flaky on trunk
> --
>
> Key: YARN-8438
> URL: https://issues.apache.org/jira/browse/YARN-8438
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-8438.001.patch, YARN-8438.002.patch, 
> YARN-8438.003.patch, YARN-8438.004.patch, YARN-8438.005.patch, 
> YARN-8438.006.patch
>
>
> Running this test several times (e.g. 30), it fails ~5-10 times.
> Stacktrace: 
> {code:java}
> java.lang.AssertionError at org.junit.Assert.fail(Assert.java:86) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> org.junit.Assert.assertTrue(Assert.java:52) at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.TestContainer.testKillOnNew(TestContainer.java:594)
> {code}
> TestContainer:594 is the following code in trunk, currently:
> {code:java}
> Assert.assertTrue(containerMetrics.finishTime.value() > 
> containerMetrics.startTime.value());
> {code}
> So sometimes the finish time is not greater than the start time.






[jira] [Commented] (YARN-8461) Support strict memory control on individual container with elastic control memory mechanism

2018-06-25 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523035#comment-16523035
 ] 

genericqa commented on YARN-8461:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 55s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.TestContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8461 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12929111/YARN-8461.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 698559b3766d 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7a3c6e9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/21103/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21103/testReport/ |
| Max. process+thread count | 301 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 

[jira] [Commented] (YARN-8220) Running Tensorflow on YARN with GPU and Docker - Examples

2018-06-25 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523020#comment-16523020
 ] 

genericqa commented on YARN-8220:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m 
17s{color} | {color:orange} The patch generated 387 new + 0 unchanged - 0 fixed 
= 387 total (was 0) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 24 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 29m 12s{color} 
| {color:red} hadoop-yarn-applications in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-yarn-deep-learning-frameworks in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.applications.distributedshell.TestDistributedShell |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8220 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12929097/YARN-8220.004.patch |
| Optional Tests |  asflicense  mvnsite  xml  compile  javac  javadoc  
mvninstall  unit  shadedclient  pylint  |
| uname | Linux 66762fc11f91 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git 

[jira] [Commented] (YARN-8455) Add basic acl check for all TS v2 REST APIs

2018-06-25 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523006#comment-16523006
 ] 

genericqa commented on YARN-8455:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
0s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8455 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12929110/YARN-8455.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1a71fce6acec 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7a3c6e9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21100/testReport/ |
| Max. process+thread count | 407 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21100/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   

[jira] [Commented] (YARN-8361) Change App Name Placement Rule to use App Name instead of App Id for configuration

2018-06-25 Thread Zian Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522987#comment-16522987
 ] 

Zian Chen commented on YARN-8361:
-

[~suma.shivaprasad], thanks for the comments. Fixed the documentation issue and 
re-uploaded the patch. Could you help review patch 002? Thanks

> Change App Name Placement Rule to use App Name instead of App Id for 
> configuration
> --
>
> Key: YARN-8361
> URL: https://issues.apache.org/jira/browse/YARN-8361
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-8361.001.patch, YARN-8361.002.patch
>
>
> 1. AppNamePlacementRule used app id while specifying queue mapping placement 
> rules, should change to app name
> 2. Change documentation as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8361) Change App Name Placement Rule to use App Name instead of App Id for configuration

2018-06-25 Thread Zian Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zian Chen updated YARN-8361:

Attachment: YARN-8361.002.patch

> Change App Name Placement Rule to use App Name instead of App Id for 
> configuration
> --
>
> Key: YARN-8361
> URL: https://issues.apache.org/jira/browse/YARN-8361
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-8361.001.patch, YARN-8361.002.patch
>
>
> 1. AppNamePlacementRule used app id while specifying queue mapping placement 
> rules, should change to app name
> 2. Change documentation as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8220) Running Tensorflow on YARN with GPU and Docker - Examples

2018-06-25 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522985#comment-16522985
 ] 

genericqa commented on YARN-8220:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m 
15s{color} | {color:orange} The patch generated 387 new + 0 unchanged - 0 fixed 
= 387 total (was 0) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 24 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 29m 31s{color} 
| {color:red} hadoop-yarn-applications in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-yarn-deep-learning-frameworks in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8220 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12929091/YARN-8220.003.patch |
| Optional Tests |  asflicense  mvnsite  xml  compile  javac  javadoc  
mvninstall  unit  shadedclient  pylint  |
| uname | Linux 00045d11878b 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c687a66 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| pylint | v1.9.1 |
| 

[jira] [Commented] (YARN-8379) Add an option to allow Capacity Scheduler preemption to balance satisfied queues

2018-06-25 Thread Zian Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522977#comment-16522977
 ] 

Zian Chen commented on YARN-8379:
-

[~eepayne], regarding your earlier suggestion to do the balancing all at once 
instead of adding the FifoSelector twice, let me explain this from two aspects:

1. In TempQueuePerPartition#offer, when we calculate the ideal assignment, the 
first time we calculate accepted using this:
{code:java}
// accepted = min{avail,
//   max - assigned,
//   current + pending - assigned,
//   # Make sure a queue will not get more than max of its
//   # used/guaranteed, this is to make sure preemption won't
//   # happen if all active queues are beyond their guaranteed
//   # This is for leaf queue only.
//   max(guaranteed, used) - assigned}
{code}
 
The second time, we calculate accepted without the max(guaranteed, used) check. 
As far as I can see, these two steps should be done sequentially rather than in 
one shot (see the sketch at the end of this comment).
2. Another reason is that we add an option to set a configurable timeout for the 
preempt-to-balance containers (the ones selected by the second FifoSelector). 
This lets users kill those containers faster or slower based on their needs, 
and thus control how quickly the balancing happens. The timeout should only 
affect containers selected for balancing, not the preemption that lets an 
underutilized queue reach its guaranteed resource. So we need to keep these two 
processes separate.
 
All other comments should be handled by the latest patch already. Thanks!
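
To make point 1 concrete, here is a minimal, self-contained sketch (this is not 
the actual TempQueuePerPartition code; the names and the single scalar resource 
are simplifications for illustration) of why the two passes behave differently:
{code:java}
/** Illustrative only: the two-pass "accepted" calculation with one scalar resource. */
public class AcceptedCalcSketch {

  // Pass 1: a leaf queue is capped at max(guaranteed, used) - assigned, so no
  // preemption is triggered when all active queues are already beyond their
  // guarantee.
  static long acceptedPass1(long avail, long max, long assigned, long current,
      long pending, long guaranteed, long used) {
    long cap = Math.max(guaranteed, used) - assigned;
    return Math.max(0L, Math.min(Math.min(avail, max - assigned),
        Math.min(current + pending - assigned, cap)));
  }

  // Pass 2 (preempt-to-balance): the same formula without the
  // max(guaranteed, used) cap, so already-satisfied queues can be balanced.
  static long acceptedPass2(long avail, long max, long assigned, long current,
      long pending) {
    return Math.max(0L, Math.min(Math.min(avail, max - assigned),
        current + pending - assigned));
  }

  public static void main(String[] args) {
    // A queue exactly at its guarantee (guaranteed = used = assigned = 30)
    // asking for 40 more, with 20 available in the cluster and a max of 100:
    System.out.println(acceptedPass1(20, 100, 30, 30, 40, 30, 30)); // prints 0
    System.out.println(acceptedPass2(20, 100, 30, 30, 40));         // prints 20
  }
}
{code}
With the cap in place the queue accepts nothing even though capacity is free, 
which is the intended behavior for the first pass but not for the balancing 
pass, hence the two sequential passes.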

> Add an option to allow Capacity Scheduler preemption to balance satisfied 
> queues
> 
>
> Key: YARN-8379
> URL: https://issues.apache.org/jira/browse/YARN-8379
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-8379.001.patch, YARN-8379.002.patch, 
> YARN-8379.003.patch, YARN-8379.004.patch, ericpayne.confs.tgz
>
>
> The existing capacity scheduler only supports preemption for an underutilized 
> queue to reach its guaranteed resource. In addition to that, there is a 
> requirement to get a better balance between queues when all of them have 
> reached their guaranteed resource but use different amounts beyond it.
> An example: 3 queues with capacities queue_a = 30%, queue_b = 30%, queue_c = 
> 40%. At time T, queue_a is using 30% and queue_b is using 70%. Existing 
> scheduler preemption won't happen, but this is unfair to queue_a since it has 
> the same guaranteed resources as queue_b.
> Before YARN-5864, the capacity scheduler did additional preemption to balance 
> queues. We changed the logic since it could preempt too many containers 
> between queues when all queues are satisfied.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8459) Capacity Scheduler should properly handle container allocation on app/node when app/node being removed by scheduler

2018-06-25 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522972#comment-16522972
 ] 

Wangda Tan commented on YARN-8459:
--

Attached ver.1 patch to run Jenkins. I felt it might not be straightforward to 
add tests; we would need a lot of mocking. I'm thinking of adding a 
chaos-monkey-like UT that just randomly starts/stops nodes/apps. We should be 
able to get some interesting results from that. 

Will update ver.2 patch with tests. 

cc: [~sunil.gov...@gmail.com], [~Tao Yang], [~cheersyang]. 

> Capacity Scheduler should properly handle container allocation on app/node 
> when app/node being removed by scheduler
> ---
>
> Key: YARN-8459
> URL: https://issues.apache.org/jira/browse/YARN-8459
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 3.1.0
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-8459.001.patch
>
>
> Thanks [~gopalv] for reporting this issue. 
> In async mode, capacity scheduler can allocate/reserve containers on node/app 
> when node/app is being removed ({{doneApplicationAttempt}}/{{removeNode}}).
> This will cause some issues, for example.
> a. Container for app_1 reserved on node_x.
> b. At the same time, app_1 is being removed.
> c. Reserve on node operation finished after app_1 removed 
> ({{doneApplicationAttempt}}). 
> For all future runs, node_x is completely blocked by the invalid reservation. 
> It keeps reporting "Trying to schedule for a finished app, please double 
> check" for node_x.
> We need a fix to make sure this won't happen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8459) Capacity Scheduler should properly handle container allocation on app/node when app/node being removed by scheduler

2018-06-25 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-8459:
-
Attachment: YARN-8459.001.patch

> Capacity Scheduler should properly handle container allocation on app/node 
> when app/node being removed by scheduler
> ---
>
> Key: YARN-8459
> URL: https://issues.apache.org/jira/browse/YARN-8459
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 3.1.0
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-8459.001.patch
>
>
> Thanks [~gopalv] for reporting this issue. 
> In async mode, capacity scheduler can allocate/reserve containers on node/app 
> when node/app is being removed ({{doneApplicationAttempt}}/{{removeNode}}).
> This will cause some issues, for example.
> a. Container for app_1 reserved on node_x.
> b. At the same time, app_1 is being removed.
> c. Reserve on node operation finished after app_1 removed 
> ({{doneApplicationAttempt}}). 
> For all future runs, node_x is completely blocked by the invalid reservation. 
> It keeps reporting "Trying to schedule for a finished app, please double 
> check" for node_x.
> We need a fix to make sure this won't happen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8461) Support strict memory control on individual container with elastic control memory mechanism

2018-06-25 Thread Haibo Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-8461:
-
Attachment: YARN-8461.00.patch

> Support strict memory control on individual container with elastic control 
> memory mechanism
> ---
>
> Key: YARN-8461
> URL: https://issues.apache.org/jira/browse/YARN-8461
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.2.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-8461.00.patch
>
>
> YARN-4599 adds elastic memory control that disables the oom killer for the 
> root container cgroup. As a result, all containers have their oom killer 
> disabled because they inherit the setting from the root container cgroup. 
> Hence, when strict memory control on individual containers is also enabled, a 
> container that exceeds its limit will be frozen but not killed. We can let 
> the container monitoring thread take care of the frozen containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8461) Support strict memory control on individual container with elastic control memory mechanism

2018-06-25 Thread Haibo Chen (JIRA)
Haibo Chen created YARN-8461:


 Summary: Support strict memory control on individual container 
with elastic control memory mechanism
 Key: YARN-8461
 URL: https://issues.apache.org/jira/browse/YARN-8461
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Affects Versions: 3.2.0
Reporter: Haibo Chen
Assignee: Haibo Chen


YARN-4599 adds elastic memory control that disables the oom killer for the root 
container cgroup. As a result, all containers have their oom killer disabled 
because they inherit the setting from the root container cgroup. Hence, when 
strict memory control on individual containers is also enabled, a container that 
exceeds its limit will be frozen but not killed. We can let the container 
monitoring thread take care of the frozen containers.
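
To illustrate the idea (a hypothetical sketch only, not the attached patch; the 
cgroup hierarchy path and container id below are made-up examples), the 
container monitor could detect a frozen container through the cgroup's 
under_oom flag and then ask the executor to kill it:
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

/** Hypothetical sketch only - not the actual NodeManager change. */
public class FrozenContainerCheck {

  /** Returns true if the container's memory cgroup reports "under_oom 1". */
  static boolean isFrozen(String memCgroupRoot, String containerId)
      throws IOException {
    Path oomControl =
        Paths.get(memCgroupRoot, containerId, "memory.oom_control");
    for (String line : Files.readAllLines(oomControl)) {
      if (line.trim().equals("under_oom 1")) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) throws IOException {
    String root = "/sys/fs/cgroup/memory/hadoop-yarn";            // assumed path
    String container = "container_1530000000000_0001_01_000002";  // example id
    if (isFrozen(root, container)) {
      // The real monitoring thread would kill the container through the
      // container executor; here we only report the decision.
      System.out.println(container + " is frozen above its limit; kill it.");
    }
  }
}
{code}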



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8455) Add basic acl check for all TS v2 REST APIs

2018-06-25 Thread Rohith Sharma K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-8455:

Attachment: YARN-8455.001.patch

> Add basic acl check for all TS v2 REST APIs
> ---
>
> Key: YARN-8455
> URL: https://issues.apache.org/jira/browse/YARN-8455
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8455.001.patch
>
>
> YARN-8319 added a filter check for the flows pages. The same behavior needs 
> to be added for all other REST APIs, as long as ATS provides support for ACLs.
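
For illustration, a "basic acl check" could look roughly like the following 
sketch (hypothetical: the property name, class wiring, and default are 
assumptions, not the attached patch):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.authorize.AccessControlList;

/** Hypothetical sketch of a read ACL check for a timeline REST endpoint. */
public class TimelineReadAclSketch {
  private final AccessControlList readAcl;

  public TimelineReadAclSketch(Configuration conf) {
    // Assumed property name; "*" (allow everyone) as a permissive default.
    this.readAcl = new AccessControlList(
        conf.get("yarn.timeline-service.read.allowed-users", "*"));
  }

  /** Reject the request before serving entities if the caller is not allowed. */
  public void checkAccess(String remoteUser) {
    UserGroupInformation callerUgi =
        UserGroupInformation.createRemoteUser(remoteUser);
    if (!readAcl.isUserAllowed(callerUgi)) {
      throw new SecurityException(
          "User " + remoteUser + " is not allowed to read timeline data");
    }
  }
}
{code}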



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8455) Add basic acl check for all TS v2 REST APIs

2018-06-25 Thread Rohith Sharma K S (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522958#comment-16522958
 ] 

Rohith Sharma K S commented on YARN-8455:
-

cc: [~sunilg]

> Add basic acl check for all TS v2 REST APIs
> ---
>
> Key: YARN-8455
> URL: https://issues.apache.org/jira/browse/YARN-8455
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8455.001.patch
>
>
> YARN-8319 added a filter check for the flows pages. The same behavior needs 
> to be added for all other REST APIs, as long as ATS provides support for ACLs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8214) Change default RegistryDNS port

2018-06-25 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522949#comment-16522949
 ] 

Billie Rinaldi commented on YARN-8214:
--

Thanks for the comment, [~eyang]. Sounds like we should pick a different port. 
Any suggestions?

> Change default RegistryDNS port
> ---
>
> Key: YARN-8214
> URL: https://issues.apache.org/jira/browse/YARN-8214
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-8214.1.patch, YARN-8214.2.patch
>
>
> The current default port (5353) is used by mdns, so we should change the 
> default to something else.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8460) 'yarn.cluster.max-application-priority' need to be exposed via CLI/REST

2018-06-25 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8460:
-
Summary: 'yarn.cluster.max-application-priority' need to be exposed via 
CLI/REST  (was: please add a way to fetch 
'yarn.cluster.max-application-priority' )

> 'yarn.cluster.max-application-priority' need to be exposed via CLI/REST
> ---
>
> Key: YARN-8460
> URL: https://issues.apache.org/jira/browse/YARN-8460
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Priority: Major
>
> Add a way to fetch the value of 'yarn.cluster.max-application-priority'.
> Since the property is not available by default, please add either a REST API 
> or a CLI method.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8460) please add a way to fetch 'yarn.cluster.max-application-priority'

2018-06-25 Thread Sumana Sathish (JIRA)
Sumana Sathish created YARN-8460:


 Summary: please add a way to fetch 
'yarn.cluster.max-application-priority' 
 Key: YARN-8460
 URL: https://issues.apache.org/jira/browse/YARN-8460
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Sumana Sathish


Add a way to fetch the value of 'yarn.cluster.max-application-priority'.

Since the property is not available by default, please add either a REST API or 
a CLI method.
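
For context, the only option today is reading the client-local configuration, 
which may not match the value the ResourceManager actually uses. A small 
illustrative sketch of that workaround (it only reads the local yarn-site.xml on 
the classpath):
{code:java}
import org.apache.hadoop.yarn.conf.YarnConfiguration;

/** Illustrative sketch of the current client-side workaround. */
public class MaxAppPrioritySketch {
  public static void main(String[] args) {
    YarnConfiguration conf = new YarnConfiguration();
    // 0 is just a local fallback here; the RM's effective value cannot be
    // queried via REST/CLI today, which is what this JIRA asks for.
    int maxPriority = conf.getInt("yarn.cluster.max-application-priority", 0);
    System.out.println(
        "Local value of yarn.cluster.max-application-priority: " + maxPriority);
  }
}
{code}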



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8458) Perform SLS testing and run TestCapacitySchedulerPerf on trunk

2018-06-25 Thread Chandni Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522912#comment-16522912
 ] 

Chandni Singh edited comment on YARN-8458 at 6/25/18 11:33 PM:
---

Result of running {{TestCapacitySchedulerPerf}} on branch-3.1
{code:java}
[INFO] Running 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerPerf
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 373.312 
s - in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerPerf

#ResourceTypes = 2. Avg of fastest 20: 34602.074

#ResourceTypes = 5. Avg of fastest 20: 25000.0

#ResourceTypes = 4. Avg of fastest 20: 26420.08

#ResourceTypes = 3. Avg of fastest 20: 27173.912
{code}
Result of running {{TestCapacitySchedulerPerf}} on branch-3.0
{code:java}
[INFO] Running 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerPerf
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 277.687 
s - in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerPerf

#ResourceTypes = 2. Avg of fastest 20: 35460.992

#ResourceTypes = 5. Avg of fastest 20: 28129.395

#ResourceTypes = 4. Avg of fastest 20: 29498.525

#ResourceTypes = 3. Avg of fastest 20: 31201.248
{code}


was (Author: csingh):
Result of running {{TestCapacitySchedulerPerf}} on branch-3.1
{code:java}
[INFO] Running 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerPerf
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 373.312 
s - in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerPerf
[INFO] 
{code}
Result of running {{TestCapacitySchedulerPerf}} on branch-3.0
{code:java}
[INFO] Running 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerPerf
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 277.687 
s - in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerPerf

#ResourceTypes = 2. Avg of fastest 20: 35460.992

#ResourceTypes = 5. Avg of fastest 20: 28129.395

#ResourceTypes = 4. Avg of fastest 20: 29498.525

#ResourceTypes = 3. Avg of fastest 20: 31201.248
{code}

> Perform SLS testing and run TestCapacitySchedulerPerf on trunk
> --
>
> Key: YARN-8458
> URL: https://issues.apache.org/jira/browse/YARN-8458
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: sls_snapshot_cpu_snapshot_june_25.nps, 
> sls_snapshot_memory_snapshot_june_25.nps
>
>
> Run SLS test and TestCapacitySchedulerPerf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8458) Perform SLS testing and run TestCapacitySchedulerPerf on trunk

2018-06-25 Thread Chandni Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522912#comment-16522912
 ] 

Chandni Singh edited comment on YARN-8458 at 6/25/18 11:19 PM:
---

Result of running {{TestCapacitySchedulerPerf}} on branch-3.1
{code:java}
[INFO] Running 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerPerf
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 373.312 
s - in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerPerf
[INFO] 
{code}
Result of running {{TestCapacitySchedulerPerf}} on branch-3.0
{code:java}
[INFO] Running 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerPerf
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 277.687 
s - in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerPerf

#ResourceTypes = 2. Avg of fastest 20: 35460.992

#ResourceTypes = 5. Avg of fastest 20: 28129.395

#ResourceTypes = 4. Avg of fastest 20: 29498.525

#ResourceTypes = 3. Avg of fastest 20: 31201.248
{code}


was (Author: csingh):
Result of running {{TestCapacitySchedulerPerf}} on branch-3.1
{code}
[INFO] Running 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerPerf
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 373.312 
s - in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerPerf
[INFO] 
{code}

Result of running {{TestCapacitySchedulerPerf}} on branch-3.0
{code}
[INFO] Running 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerPerf
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 277.687 
s - in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerPerf
{code}

> Perform SLS testing and run TestCapacitySchedulerPerf on trunk
> --
>
> Key: YARN-8458
> URL: https://issues.apache.org/jira/browse/YARN-8458
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: sls_snapshot_cpu_snapshot_june_25.nps, 
> sls_snapshot_memory_snapshot_june_25.nps
>
>
> Run SLS test and TestCapacitySchedulerPerf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8458) Perform SLS testing and run TestCapacitySchedulerPerf on trunk

2018-06-25 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8458:

Summary: Perform SLS testing and run TestCapacitySchedulerPerf on trunk  
(was: Perform SLS testing and run TestCapacitySchedulerPerf on branch-3.1)

> Perform SLS testing and run TestCapacitySchedulerPerf on trunk
> --
>
> Key: YARN-8458
> URL: https://issues.apache.org/jira/browse/YARN-8458
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: sls_snapshot_cpu_snapshot_june_25.nps, 
> sls_snapshot_memory_snapshot_june_25.nps
>
>
> Run SLS test and TestCapacitySchedulerPerf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8379) Add an option to allow Capacity Scheduler preemption to balance satisfied queues

2018-06-25 Thread Zian Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522930#comment-16522930
 ] 

Zian Chen commented on YARN-8379:
-

Fixed all the failed test cases and re-uploaded the patch. [~eepayne], could you 
please help review the newest patch and share your comments? Thanks!

 

[~leftnoteasy], [~sunilg], could you also share your thoughts on the latest 
patch please?

> Add an option to allow Capacity Scheduler preemption to balance satisfied 
> queues
> 
>
> Key: YARN-8379
> URL: https://issues.apache.org/jira/browse/YARN-8379
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-8379.001.patch, YARN-8379.002.patch, 
> YARN-8379.003.patch, YARN-8379.004.patch, ericpayne.confs.tgz
>
>
> The existing capacity scheduler only supports preemption for an underutilized 
> queue to reach its guaranteed resource. In addition to that, there is a 
> requirement to get a better balance between queues when all of them have 
> reached their guaranteed resource but use different amounts beyond it.
> An example: 3 queues with capacities queue_a = 30%, queue_b = 30%, queue_c = 
> 40%. At time T, queue_a is using 30% and queue_b is using 70%. Existing 
> scheduler preemption won't happen, but this is unfair to queue_a since it has 
> the same guaranteed resources as queue_b.
> Before YARN-5864, the capacity scheduler did additional preemption to balance 
> queues. We changed the logic since it could preempt too many containers 
> between queues when all queues are satisfied.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8379) Add an option to allow Capacity Scheduler preemption to balance satisfied queues

2018-06-25 Thread Zian Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zian Chen updated YARN-8379:

Attachment: YARN-8379.004.patch

> Add an option to allow Capacity Scheduler preemption to balance satisfied 
> queues
> 
>
> Key: YARN-8379
> URL: https://issues.apache.org/jira/browse/YARN-8379
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-8379.001.patch, YARN-8379.002.patch, 
> YARN-8379.003.patch, YARN-8379.004.patch, ericpayne.confs.tgz
>
>
> The existing capacity scheduler only supports preemption for an underutilized 
> queue to reach its guaranteed resource. In addition to that, there is a 
> requirement to get a better balance between queues when all of them have 
> reached their guaranteed resource but use different amounts beyond it.
> An example: 3 queues with capacities queue_a = 30%, queue_b = 30%, queue_c = 
> 40%. At time T, queue_a is using 30% and queue_b is using 70%. Existing 
> scheduler preemption won't happen, but this is unfair to queue_a since it has 
> the same guaranteed resources as queue_b.
> Before YARN-5864, the capacity scheduler did additional preemption to balance 
> queues. We changed the logic since it could preempt too many containers 
> between queues when all queues are satisfied.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8214) Change default RegistryDNS port

2018-06-25 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522918#comment-16522918
 ] 

Eric Yang commented on YARN-8214:
-

Port 5300 is used by W32.Kibuv.Worm. This port may generate false positives 
during security scans. I am OK to commit this as it is, but I'm keeping this 
open until tomorrow for others to provide their feedback.
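
Whichever default we land on, a site can already override the port explicitly. 
A minimal yarn-site.xml sketch (the property name is the RegistryDNS bind-port 
key used by recent releases - please verify it against your version; the port 
value is only an example):
{code:xml}
<!-- Example override of the RegistryDNS port; 5335 is only a placeholder. -->
<property>
  <name>hadoop.registry.dns.bind-port</name>
  <value>5335</value>
</property>
{code}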

> Change default RegistryDNS port
> ---
>
> Key: YARN-8214
> URL: https://issues.apache.org/jira/browse/YARN-8214
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-8214.1.patch, YARN-8214.2.patch
>
>
> The current default port (5353) is used by mdns, so we should change the 
> default to something else.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8458) Perform SLS testing and run TestCapacitySchedulerPerf on branch-3.1

2018-06-25 Thread Chandni Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522912#comment-16522912
 ] 

Chandni Singh commented on YARN-8458:
-

Result of running {{TestCapacitySchedulerPerf}} on branch-3.1
{code}
[INFO] Running 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerPerf
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 373.312 
s - in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerPerf
[INFO] 
{code}

Result of running {{TestCapacitySchedulerPerf}} on branch-3.0
{code}
[INFO] Running 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerPerf
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 277.687 
s - in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerPerf
{code}

> Perform SLS testing and run TestCapacitySchedulerPerf on branch-3.1
> ---
>
> Key: YARN-8458
> URL: https://issues.apache.org/jira/browse/YARN-8458
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: sls_snapshot_cpu_snapshot_june_25.nps, 
> sls_snapshot_memory_snapshot_june_25.nps
>
>
> Run SLS test and TestCapacitySchedulerPerf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8458) Perform SLS testing and run TestCapacitySchedulerPerf on branch-3.1

2018-06-25 Thread Chandni Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522911#comment-16522911
 ] 

Chandni Singh commented on YARN-8458:
-

SLS result:

Total has 441027 container allocated, 1470.09 containers allocated per second
Total has 441480 proposal accepted, 1562 rejected

> Perform SLS testing and run TestCapacitySchedulerPerf on branch-3.1
> ---
>
> Key: YARN-8458
> URL: https://issues.apache.org/jira/browse/YARN-8458
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: sls_snapshot_cpu_snapshot_june_25.nps, 
> sls_snapshot_memory_snapshot_june_25.nps
>
>
> Run SLS test and TestCapacitySchedulerPerf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8459) Capacity Scheduler should properly handle container allocation on app/node when app/node being removed by scheduler

2018-06-25 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-8459:


 Summary: Capacity Scheduler should properly handle container 
allocation on app/node when app/node being removed by scheduler
 Key: YARN-8459
 URL: https://issues.apache.org/jira/browse/YARN-8459
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Wangda Tan
Assignee: Wangda Tan


Thanks [~gopalv] for reporting this issue. 

In async mode, capacity scheduler can allocate/reserve containers on node/app 
when node/app is being removed ({{doneApplicationAttempt}}/{{removeNode}}).

This will cause some issues, for example.

a. Container for app_1 reserved on node_x.
b. At the same time, app_1 is being removed.
c. Reserve on node operation finished after app_1 removed 
({{doneApplicationAttempt}}). 

For all future runs, node_x is completely blocked by the invalid reservation. It 
keeps reporting "Trying to schedule for a finished app, please double check" for 
node_x.

We need a fix to make sure this won't happen.
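
The shape of the fix could be a re-validation step before committing any 
asynchronously proposed allocation/reservation. A purely illustrative sketch 
(not the actual CapacityScheduler code; all names are made up):
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative sketch only - not the actual CapacityScheduler change. */
public class CommitGuardSketch {
  private final Map<String, Object> liveApps = new ConcurrentHashMap<>();
  private final Map<String, Object> liveNodes = new ConcurrentHashMap<>();
  private final Object schedulerLock = new Object();

  /** Re-validate the app attempt and the node before committing a proposal. */
  public boolean tryCommit(String appAttemptId, String nodeId) {
    synchronized (schedulerLock) {
      if (!liveApps.containsKey(appAttemptId)
          || !liveNodes.containsKey(nodeId)) {
        // The app finished or the node was removed while the proposal was in
        // flight; drop it instead of leaving a dangling reservation behind.
        return false;
      }
      // ... apply the allocation/reservation here ...
      return true;
    }
  }
}
{code}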



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8459) Capacity Scheduler should properly handle container allocation on app/node when app/node being removed by scheduler

2018-06-25 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-8459:
-
Affects Version/s: 3.1.0
 Target Version/s: 3.1.1
 Priority: Blocker  (was: Major)
  Component/s: capacity scheduler

> Capacity Scheduler should properly handle container allocation on app/node 
> when app/node being removed by scheduler
> ---
>
> Key: YARN-8459
> URL: https://issues.apache.org/jira/browse/YARN-8459
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 3.1.0
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
>
> Thanks [~gopalv] for reporting this issue. 
> In async mode, capacity scheduler can allocate/reserve containers on node/app 
> when node/app is being removed ({{doneApplicationAttempt}}/{{removeNode}}).
> This will cause some issues, for example.
> a. Container for app_1 reserved on node_x.
> b. At the same time, app_1 is being removed.
> c. Reserve on node operation finished after app_1 removed 
> ({{doneApplicationAttempt}}). 
> For all future runs, node_x is completely blocked by the invalid reservation. 
> It keeps reporting "Trying to schedule for a finished app, please double 
> check" for node_x.
> We need a fix to make sure this won't happen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8453) Allocation to a queue is dishonored if one resource is at the limit

2018-06-25 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-8453:
-
Target Version/s: 3.1.1, 3.0.4
Priority: Blocker  (was: Major)

> Allocation to a queue is dishonored if one resource is at the limit
> ---
>
> Key: YARN-8453
> URL: https://issues.apache.org/jira/browse/YARN-8453
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 3.0.2
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Blocker
>
> With support for additional resource types other than CPU and memory, it is 
> possible that one such new resource has exhausted its quota on a given queue 
> while other resources such as memory/CPU are still available beyond the 
> queue's guarantee (under its max-limit). However, because the new resource is 
> exhausted, containers will still fail to get the delta of the other resources 
> (CPU and memory). 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8220) Running Tensorflow on YARN with GPU and Docker - Examples

2018-06-25 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522897#comment-16522897
 ] 

Wangda Tan commented on YARN-8220:
--

Attached ver.4 patch; removed duplicated content inside the Dockerfiles and made 
them build from base images.

> Running Tensorflow on YARN with GPU and Docker - Examples
> -
>
> Key: YARN-8220
> URL: https://issues.apache.org/jira/browse/YARN-8220
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Critical
> Attachments: YARN-8220.001.patch, YARN-8220.002.patch, 
> YARN-8220.003.patch, YARN-8220.004.patch
>
>
> Tensorflow could be run on YARN and could leverage YARN's distributed 
> features.
> This spec file will help to run Tensorflow on YARN with GPU/Docker.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8220) Running Tensorflow on YARN with GPU and Docker - Examples

2018-06-25 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-8220:
-
Attachment: YARN-8220.004.patch

> Running Tensorflow on YARN with GPU and Docker - Examples
> -
>
> Key: YARN-8220
> URL: https://issues.apache.org/jira/browse/YARN-8220
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Critical
> Attachments: YARN-8220.001.patch, YARN-8220.002.patch, 
> YARN-8220.003.patch, YARN-8220.004.patch
>
>
> Tensorflow could be run on YARN and could leverage YARN's distributed 
> features.
> This spec file will help to run Tensorflow on YARN with GPU/Docker.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8180) YARN Federation has not implemented blacklist sub-cluster for AM routing

2018-06-25 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522891#comment-16522891
 ] 

Giovanni Matteo Fumarola commented on YARN-8180:


[~abmodi] can you please update the title and the description?

> YARN Federation has not implemented blacklist sub-cluster for AM routing
> 
>
> Key: YARN-8180
> URL: https://issues.apache.org/jira/browse/YARN-8180
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: federation
>Reporter: Shen Yinjie
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-8180.001.patch
>
>
> Property "yarn.federation.blacklist-subclusters" is defined in 
> yarn-fedeartion doc,but it has not been defined and implemented in Java code.
> In FederationClientInterceptor#submitApplication()
> {code:java}
> List<SubClusterId> blacklist = new ArrayList<>();
> for (int i = 0; i < numSubmitRetries; ++i) {
> SubClusterId subClusterId = policyFacade.getHomeSubcluster(
> request.getApplicationSubmissionContext(), blacklist);
> {code}
>  
>  
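
For illustration only (the property is documented but not wired up, which is 
what this JIRA tracks), implementing it could look roughly like the fragment 
below; everything except the property name is an assumption:
{code:java}
// Hypothetical continuation of FederationClientInterceptor#submitApplication:
// seed the blacklist from configuration instead of always starting empty.
List<SubClusterId> blacklist = new ArrayList<>();
for (String id : conf.getTrimmedStringCollection(
    "yarn.federation.blacklist-subclusters")) {
  blacklist.add(SubClusterId.newInstance(id));
}
for (int i = 0; i < numSubmitRetries; ++i) {
  SubClusterId subClusterId = policyFacade.getHomeSubcluster(
      request.getApplicationSubmissionContext(), blacklist);
  // ...
}
{code}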



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8434) Nodemanager not registering to active RM in federation

2018-06-25 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522890#comment-16522890
 ] 

genericqa commented on YARN-8434:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
15s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
15s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 68m 
43s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 24m 19s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}201m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.TestFederationRMFailoverProxyProvider 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8434 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-5123) SQL based RM state store

2018-06-25 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-5123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522858#comment-16522858
 ] 

Giovanni Matteo Fumarola commented on YARN-5123:


[~lavkesh] are you currently working on this? If not, do you mind if I start 
working on it?

> SQL based RM state store
> 
>
> Key: YARN-5123
> URL: https://issues.apache.org/jira/browse/YARN-5123
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Lavkesh Lahngir
>Assignee: Lavkesh Lahngir
>Priority: Major
> Attachments: 0001-SQL-Based-RM-state-store-trunk.patch, High 
> Availability In YARN Resource Manager using SQL Based StateStore.pdf, 
> sqlstatestore.patch
>
>
> In our setup,  zookeeper based RM state store didn't work. We ended up 
> implementing our own SQL based state store. Here is a patch, if anybody else 
> wants to use it. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8220) Running Tensorflow on YARN with GPU and Docker - Examples

2018-06-25 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522855#comment-16522855
 ] 

Wangda Tan commented on YARN-8220:
--

Attached ver.3 patch with several fixes to the submit-tf-job.py helper script, and 
added tensorboard to the example launch spec. Thanks [~yanboliang] for the offline 
suggestions and help with these changes.

> Running Tensorflow on YARN with GPU and Docker - Examples
> -
>
> Key: YARN-8220
> URL: https://issues.apache.org/jira/browse/YARN-8220
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Critical
> Attachments: YARN-8220.001.patch, YARN-8220.002.patch, 
> YARN-8220.003.patch
>
>
> Tensorflow could be run on YARN and could leverage YARN's distributed 
> features.
> This spec file will help to run Tensorflow on YARN with GPU/Docker



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8220) Running Tensorflow on YARN with GPU and Docker - Examples

2018-06-25 Thread Wangda Tan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-8220:
-
Attachment: YARN-8220.003.patch

> Running Tensorflow on YARN with GPU and Docker - Examples
> -
>
> Key: YARN-8220
> URL: https://issues.apache.org/jira/browse/YARN-8220
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Critical
> Attachments: YARN-8220.001.patch, YARN-8220.002.patch, 
> YARN-8220.003.patch
>
>
> Tensorflow could be run on YARN and could leverage YARN's distributed 
> features.
> This spec file will help to run Tensorflow on YARN with GPU/Docker



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8458) Perform SLS testing and run TestCapacitySchedulerPerf on branch-3.1

2018-06-25 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8458:

Attachment: sls_snapshot_memory_snapshot_june_25.nps
sls_snapshot_cpu_snapshot_june_25.nps

> Perform SLS testing and run TestCapacitySchedulerPerf on branch-3.1
> ---
>
> Key: YARN-8458
> URL: https://issues.apache.org/jira/browse/YARN-8458
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: sls_snapshot_cpu_snapshot_june_25.nps, 
> sls_snapshot_memory_snapshot_june_25.nps
>
>
> Run SLS test and TestCapacitySchedulerPerf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8458) Perform SLS testing and run TestCapacitySchedulerPerf on branch-3.1

2018-06-25 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8458:

Description: Run SLS test and TestCapacitySchedulerPerf  (was: Run )

> Perform SLS testing and run TestCapacitySchedulerPerf on branch-3.1
> ---
>
> Key: YARN-8458
> URL: https://issues.apache.org/jira/browse/YARN-8458
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>
> Run SLS test and TestCapacitySchedulerPerf



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8458) Perform SLS testing and run TestCapacitySchedulerPerf on branch-3.1

2018-06-25 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8458:

Summary: Perform SLS testing and run TestCapacitySchedulerPerf on 
branch-3.1  (was: Perform SLS testing and run TestCapacitySchedulerPerf)

> Perform SLS testing and run TestCapacitySchedulerPerf on branch-3.1
> ---
>
> Key: YARN-8458
> URL: https://issues.apache.org/jira/browse/YARN-8458
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>
> Run 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8458) Perform SLS testing and run TestCapacitySchedulerPerf

2018-06-25 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8458:

Description: Run 

> Perform SLS testing and run TestCapacitySchedulerPerf
> -
>
> Key: YARN-8458
> URL: https://issues.apache.org/jira/browse/YARN-8458
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>
> Run 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8458) Perform SLS testing and run TestCapacitySchedulerPerf

2018-06-25 Thread Chandni Singh (JIRA)
Chandni Singh created YARN-8458:
---

 Summary: Perform SLS testing and run TestCapacitySchedulerPerf
 Key: YARN-8458
 URL: https://issues.apache.org/jira/browse/YARN-8458
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Chandni Singh
Assignee: Chandni Singh






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4606) CapacityScheduler: applications could get starved because computation of #activeUsers considers pending apps

2018-06-25 Thread Eric Payne (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-4606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522811#comment-16522811
 ] 

Eric Payne commented on YARN-4606:
--

{quote}At the same time, this patch is less "strict" in terms of updates 
(specifically on when? ) compared to approaches discussed in our earlier 
patches.
{quote}
The value for number of active apps per user used to be calculated every time 
through the scheduler loop, which was a performance problem. In order to avoid 
this heavy calculation, YARN-5889 created the {{UsersManager}}. Instead of 
doing the calculation every time through the loop, YARN-5889 only recalculates 
these values when events occur that could affect this count, such as a new 
application, an app completing, a new container request, a completed container, etc. In 
the latest POC patch, {{activeUsersWithOnlyPendingApps}} is part of this flow, 
so it will always be updated whenever anything happens that could affect this 
value.
{quote}Also, based on our earlier discussions, we need to depend on 
activeUsers.get() only in certain contexts and on the sum of activeUsers.get() and 
activeUsersWithOnlyPendingApps.get() in some other places. But the POC patch always 
depends on the latter value. I didn't understand this part.
{quote}
I think you are referencing this comment from above:
{quote}My understanding is that user limit would use activeUsers and things 
like max AM limit per user, we'd use activeUsers + activeUsersOfPendingApps
{quote}
{{LeafQueue#activateApplications}} is the only thing that calls 
{{UsersManager#getNumActiveUsers}}, which it uses to calculate the 
user-specific AM limit, so it's the one that needs both activeUsers + 
{{activeUsersWithOnlyPendingApps}}.
 {{UsersManager#computeUserLimit}} uses only activeUsers to calculate the 
headroom and user limit, which is what we decided in the comment above. Is that 
your understanding of these comments?
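
A minimal sketch of that split, using hypothetical field and method names (the real {{UsersManager}} is structured differently), would look like:
{code:java}
import java.util.concurrent.atomic.AtomicInteger;

// Sketch only: shows which count each caller would use, not the actual UsersManager code.
public class UsersManagerSketch {
  // Users with at least one activated (running) application.
  private final AtomicInteger activeUsers = new AtomicInteger(0);
  // Users whose applications are all still pending (e.g. held back by max-am-percent).
  private final AtomicInteger activeUsersWithOnlyPendingApps = new AtomicInteger(0);

  // For the per-user AM limit in LeafQueue#activateApplications:
  // pending-only users must be counted, otherwise their AMs could never be activated.
  public int getNumActiveUsersForAmLimit() {
    return activeUsers.get() + activeUsersWithOnlyPendingApps.get();
  }

  // For headroom and user-limit computation in computeUserLimit:
  // only users that can actually consume resources are counted.
  public int getNumActiveUsersForUserLimit() {
    return activeUsers.get();
  }
}
{code}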

> CapacityScheduler: applications could get starved because computation of 
> #activeUsers considers pending apps 
> -
>
> Key: YARN-4606
> URL: https://issues.apache.org/jira/browse/YARN-4606
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler
>Affects Versions: 2.8.0, 2.7.1
>Reporter: Karam Singh
>Assignee: Manikandan R
>Priority: Critical
> Attachments: YARN-4606.001.patch, YARN-4606.002.patch, 
> YARN-4606.003.patch, YARN-4606.004.patch, YARN-4606.1.poc.patch, 
> YARN-4606.POC.2.patch, YARN-4606.POC.3.patch, YARN-4606.POC.patch
>
>
> Currently, if all applications belonging to the same user in a LeafQueue are pending 
> (caused by max-am-percent, etc.), ActiveUsersManager still considers the user 
> an active user. This could lead to starvation of active applications, for 
> example:
> - App1 (belongs to user1)/app2 (belongs to user2) are active; app3 (belongs to 
> user3)/app4 (belongs to user4) are pending
> - ActiveUsersManager returns #active-users=4
> - However, only two users (user1/user2) are able to allocate new 
> resources, so the computed user-limit-resource could be lower than expected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8214) Change default RegistryDNS port

2018-06-25 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522749#comment-16522749
 ] 

genericqa commented on YARN-8214:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
2s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8214 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12929071/YARN-8214.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  

[jira] [Commented] (YARN-8451) Multiple NM heartbeat thread created when a slow NM resync with RM

2018-06-25 Thread Botong Huang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522730#comment-16522730
 ] 

Botong Huang commented on YARN-8451:


Hi [~jlowe], can you help take a look please? 

> Multiple NM heartbeat thread created when a slow NM resync with RM
> --
>
> Key: YARN-8451
> URL: https://issues.apache.org/jira/browse/YARN-8451
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-8451.v1.patch
>
>
> During an NM resync with the RM (say the RM did a master-slave switch), if the NM is 
> running slow, more than one RESYNC event may be put into the NM dispatcher by 
> the existing heartbeat thread before they are processed. As a result, 
> multiple new heartbeat threads are later created and start heartbeating to the RM 
> concurrently, each with its own responseId. If at some point one thread 
> becomes more than one step behind the others, the RM will send back a resync signal 
> in its heartbeat response, killing all containers on this NM. 
> See comments below for details on how this can happen.
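
For illustration only, a rough sketch of the "more than one step behind" check described above, with hypothetical names (the actual RM-side logic is more involved):
{code:java}
// Sketch only: why a lagging heartbeat thread eventually gets a RESYNC from the RM.
public final class HeartbeatResyncSketch {
  /** True when the responseId echoed by an NM heartbeat thread is more than one step behind. */
  public static boolean needsResync(int lastResponseIdSentByRm, int responseIdEchoedByNm) {
    return responseIdEchoedByNm + 1 < lastResponseIdSentByRm;
  }

  public static void main(String[] args) {
    // Two heartbeat threads for the same node echo different responseIds.
    System.out.println(needsResync(5, 4)); // false: exactly one step behind is tolerated
    System.out.println(needsResync(5, 3)); // true: more than one step behind -> resync, containers killed
  }
}
{code}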



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8214) Change default RegistryDNS port

2018-06-25 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-8214:
-
Attachment: YARN-8214.2.patch

> Change default RegistryDNS port
> ---
>
> Key: YARN-8214
> URL: https://issues.apache.org/jira/browse/YARN-8214
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-8214.1.patch, YARN-8214.2.patch
>
>
> The current default port (5353) is used by mdns, so we should change the 
> default to something else.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8457) Compilation is broken with -Pyarn-ui

2018-06-25 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522619#comment-16522619
 ] 

Hudson commented on YARN-8457:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14475 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14475/])
YARN-8457. Compilation is broken with -Pyarn-ui. (rohithsharmaks: rev 
4ffe68a6f70ce01a5654da8991b4cdb35ae0bf1f)
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.bowerrc


> Compilation is broken with -Pyarn-ui
> 
>
> Key: YARN-8457
> URL: https://issues.apache.org/jira/browse/YARN-8457
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: YARN-8457.patch
>
>
> {code:java}
> [INFO] --- frontend-maven-plugin:1.5:bower (bower install) @ hadoop-yarn-ui 
> ---
> [INFO] Running 'bower install' in 
> /Users/sunilgovindan/Work/hadoop/commit/sb_trunk/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/webapp
> [ERROR] bower ember-load-initializers#0.1.7          EINVRES Request to 
> https://bower.herokuapp.com/packages/ember-load-initializers failed with 
> 502{code}
> This needs to be corrected by pointing to the correct registry, which is 
> {{"registry": "https://registry.bower.io"}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8418) App local logs could leaked if log aggregation fails to initialize for the app

2018-06-25 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522614#comment-16522614
 ] 

genericqa commented on YARN-8418:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
5s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
57s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  4m 
25s{color} | {color:red} hadoop-yarn in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  3m 
53s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  3m 53s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
29s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
33s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8418 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12929047/YARN-8418.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5609fefa3704 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1ba4e62 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/21095/artifact/out/branch-compile-hadoop-yarn-project_hadoop-yarn.txt
 |
| findbugs | v3.1.0-RC1 |
| 

[jira] [Commented] (YARN-8434) Nodemanager not registering to active RM in federation

2018-06-25 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522601#comment-16522601
 ] 

Bibin A Chundatt commented on YARN-8434:


Attached a patch providing an option to define the server proxy. For backward 
compatibility, the default value is set the same for CLIENT and server.

> Nodemanager not registering to active RM in federation
> --
>
> Key: YARN-8434
> URL: https://issues.apache.org/jira/browse/YARN-8434
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Blocker
> Attachments: YARN-8434.001.patch
>
>
> FederationRMFailoverProxyProvider doesn't handle connecting to active RM. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4606) CapacityScheduler: applications could get starved because computation of #activeUsers considers pending apps

2018-06-25 Thread Manikandan R (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-4606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522599#comment-16522599
 ] 

Manikandan R commented on YARN-4606:


[~eepayne] Thanks for the patch.

At a high level, the POC is very simple from an implementation perspective and the 
changes would be minimal with this approach. At the same time, this patch is less 
"strict" in terms of updates (specifically on when?) compared to approaches 
discussed in our earlier patches. For example, in the earlier approach, 
numActiveUsersWithOnlyPendingApps would be incremented as soon as an app gets 
activated and decremented as soon as the AM container gets allocated. In 
addition, all of these things happen immediately and only after the dependent 
steps have definitely completed. In contrast, the new POC patch depends on values 
(pendingApplications, activeApplications, etc. of the User object) and on conditions 
checked before the actual work (for example, assuming the AM container would be 
allocated successfully based on the checks in LeafQueue#activateApplications), and it 
updates numActiveUsersWithOnlyPendingApps as part of the regular computeUserLimits 
flow. All of this creates a slight discomfort and leads to questions like:

What is the time frame we are seeing between accepting the app and 
updating numActiveUsersWithOnlyPendingApps? Is this time frame acceptable? 
Aren't we running a little slower in doing updates? Is there any chance the 
AM container fails to allocate? Let's say the AM container allocation goes 
through successfully; would there be any delay in allocating AM containers? 
During this delayed duration, we are considering the user an active user 
rather than treating the user as "activeUsersWithOnlyPendingApps". Is this 
acceptable? I am interested in understanding your thoughts behind this tradeoff.

Also, based on our earlier discussions, we need to depend on 
{{activeUsers.get()}} only in certain contexts and on the sum of {{activeUsers.get()}} 
and {{activeUsersWithOnlyPendingApps.get()}} in some other places. But the POC 
patch always depends on the latter value. I didn't understand this part.

On the other hand, we can avoid the {{AppAMAttemptsFailedSchedulerEvent}}-related 
changes completely with this new patch, since {{User.finishApplication()}} would be 
called anyway, even when the maximum number of AM attempts has been reached.

Please share your thoughts.

> CapacityScheduler: applications could get starved because computation of 
> #activeUsers considers pending apps 
> -
>
> Key: YARN-4606
> URL: https://issues.apache.org/jira/browse/YARN-4606
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler
>Affects Versions: 2.8.0, 2.7.1
>Reporter: Karam Singh
>Assignee: Manikandan R
>Priority: Critical
> Attachments: YARN-4606.001.patch, YARN-4606.002.patch, 
> YARN-4606.003.patch, YARN-4606.004.patch, YARN-4606.1.poc.patch, 
> YARN-4606.POC.2.patch, YARN-4606.POC.3.patch, YARN-4606.POC.patch
>
>
> Currently, if all applications belonging to the same user in a LeafQueue are pending 
> (caused by max-am-percent, etc.), ActiveUsersManager still considers the user 
> an active user. This could lead to starvation of active applications, for 
> example:
> - App1 (belongs to user1)/app2 (belongs to user2) are active; app3 (belongs to 
> user3)/app4 (belongs to user4) are pending
> - ActiveUsersManager returns #active-users=4
> - However, only two users (user1/user2) are able to allocate new 
> resources, so the computed user-limit-resource could be lower than expected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8434) Nodemanager not registering to active RM in federation

2018-06-25 Thread Bibin A Chundatt (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-8434:
---
Attachment: YARN-8434.001.patch

> Nodemanager not registering to active RM in federation
> --
>
> Key: YARN-8434
> URL: https://issues.apache.org/jira/browse/YARN-8434
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Blocker
> Attachments: YARN-8434.001.patch
>
>
> FederationRMFailoverProxyProvider doesn't handle connecting to active RM. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8457) Compilation is broken with -Pyarn-ui

2018-06-25 Thread Rohith Sharma K S (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522589#comment-16522589
 ] 

Rohith Sharma K S commented on YARN-8457:
-

+1, lgtm. I built and verified locally with the patch.

> Compilation is broken with -Pyarn-ui
> 
>
> Key: YARN-8457
> URL: https://issues.apache.org/jira/browse/YARN-8457
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8457.patch
>
>
> {code:java}
> [INFO] --- frontend-maven-plugin:1.5:bower (bower install) @ hadoop-yarn-ui 
> ---
> [INFO] Running 'bower install' in 
> /Users/sunilgovindan/Work/hadoop/commit/sb_trunk/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/webapp
> [ERROR] bower ember-load-initializers#0.1.7          EINVRES Request to 
> https://bower.herokuapp.com/packages/ember-load-initializers failed with 
> 502{code}
> This needs to be corrected by pointing to the correct registry, which is 
> {{"registry": "https://registry.bower.io"}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8457) Compilation is broken with -Pyarn-ui

2018-06-25 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522561#comment-16522561
 ] 

genericqa commented on YARN-8457:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
38m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8457 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12929045/YARN-8457.patch |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux f6377a58a992 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1ba4e62 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 301 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21094/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Compilation is broken with -Pyarn-ui
> 
>
> Key: YARN-8457
> URL: https://issues.apache.org/jira/browse/YARN-8457
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8457.patch
>
>
> {code:java}
> [INFO] --- frontend-maven-plugin:1.5:bower (bower install) @ hadoop-yarn-ui 
> ---
> [INFO] Running 'bower install' in 
> /Users/sunilgovindan/Work/hadoop/commit/sb_trunk/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/webapp
> [ERROR] bower ember-load-initializers#0.1.7          EINVRES Request to 
> https://bower.herokuapp.com/packages/ember-load-initializers failed with 
> 502{code}
> This needs to be corrected by pointing to the correct registry, which is 
> {{"registry": "https://registry.bower.io"}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6672) Add NM preemption of opportunistic containers when utilization goes high

2018-06-25 Thread JIRA


[ 
https://issues.apache.org/jira/browse/YARN-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522509#comment-16522509
 ] 

Íñigo Goiri commented on YARN-6672:
---

Thanks [~haibochen] for  [^YARN-6672-YARN-1011.02.patch].
A few comments:
* Use logger style in ContainerScheduler#228.
* Add links in {{ContainerSchedulerOverallocationPreemptionEvent}} javadoc 
description.
* Make the fields in {{SnapshotBasedOverAllocationPreemptionPolicy}} final.
* In {{SnapshotBasedOverAllocationPreemptionPolicy}}, I would always return a 
new instance of ResourceUtilization, so it should just be a matter of sanitizing 
both vcoreOverLimit and memoryOverLimit (see the sketch after this list).
* I would add a unit test for SnapshotBasedOverAllocationPreemptionPolicy with 
the 4/5 cases (both OK, bad in CPU, bad in memory, bad for both, and a couple 
negative cases).
* For the unit tests in TestContainerSchedulerWithOverAllocation, I would try 
to run the new unit tests both with and without the feature enabled. This would 
require some refactoring.
* Instead of {{2.0f/2}} in testPreemptionUponHighCPUUtilization, we should have 
some constant/extracted variable.
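
A minimal sketch of the sanitizing idea, assuming hypothetical variable names and that a negative over-limit value simply means there is nothing to preempt for that resource (the actual policy in the patch may differ):
{code:java}
import org.apache.hadoop.yarn.api.records.ResourceUtilization;

// Sketch only: clamp negative over-limit values to zero and always return a new instance.
public final class OverLimitSketch {
  public static ResourceUtilization overLimit(long memoryOverLimitMb, float vcoreOverLimit) {
    int memOver = (int) Math.max(0L, memoryOverLimitMb); // negative -> nothing to preempt
    float cpuOver = Math.max(0f, vcoreOverLimit);
    // newInstance(physical memory MB, virtual memory MB, CPU)
    return ResourceUtilization.newInstance(memOver, 0, cpuOver);
  }
}
{code}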

> Add NM preemption of opportunistic containers when utilization goes high
> 
>
> Key: YARN-6672
> URL: https://issues.apache.org/jira/browse/YARN-6672
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-6672-YARN-1011.00.patch, 
> YARN-6672-YARN-1011.01.patch, YARN-6672-YARN-1011.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8418) App local logs could leaked if log aggregation fails to initialize for the app

2018-06-25 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522495#comment-16522495
 ] 

Bibin A Chundatt commented on YARN-8418:


Attaching a patch with a test case. Could someone help with the review?

> App local logs could leaked if log aggregation fails to initialize for the app
> --
>
> Key: YARN-8418
> URL: https://issues.apache.org/jira/browse/YARN-8418
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-8418.001.patch, YARN-8418.002.patch, 
> YARN-8418.003.patch, YARN-8418.004.patch
>
>
> If log aggregation fails to init the createApp directory, container logs could get 
> leaked in the NM directory.
> For a long-running application this case is possible on NM restart after token 
> renewal, or on application submission with an invalid delegation token.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8418) App local logs could leaked if log aggregation fails to initialize for the app

2018-06-25 Thread Bibin A Chundatt (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-8418:
---
Attachment: YARN-8418.004.patch

> App local logs could leaked if log aggregation fails to initialize for the app
> --
>
> Key: YARN-8418
> URL: https://issues.apache.org/jira/browse/YARN-8418
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-8418.001.patch, YARN-8418.002.patch, 
> YARN-8418.003.patch, YARN-8418.004.patch
>
>
> If log aggregation fails to init the createApp directory, container logs could get 
> leaked in the NM directory.
> For a long-running application this case is possible on NM restart after token 
> renewal, or on application submission with an invalid delegation token.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8457) Compilation is broken with -Pyarn-ui

2018-06-25 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522484#comment-16522484
 ] 

Sunil Govindan commented on YARN-8457:
--

cc [~rohithsharma], please help review.

> Compilation is broken with -Pyarn-ui
> 
>
> Key: YARN-8457
> URL: https://issues.apache.org/jira/browse/YARN-8457
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8457.patch
>
>
> {code:java}
> [INFO] --- frontend-maven-plugin:1.5:bower (bower install) @ hadoop-yarn-ui 
> ---
> [INFO] Running 'bower install' in 
> /Users/sunilgovindan/Work/hadoop/commit/sb_trunk/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/webapp
> [ERROR] bower ember-load-initializers#0.1.7          EINVRES Request to 
> https://bower.herokuapp.com/packages/ember-load-initializers failed with 
> 502{code}
> This needs to be corrected by pointing to the correct registry, which is 
> {{"registry": "https://registry.bower.io"}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8457) Compilation is broken with -Pyarn-ui

2018-06-25 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8457:
-
Attachment: YARN-8457.patch

> Compilation is broken with -Pyarn-ui
> 
>
> Key: YARN-8457
> URL: https://issues.apache.org/jira/browse/YARN-8457
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil Govindan
>Assignee: Sunil Govindan
>Priority: Major
> Attachments: YARN-8457.patch
>
>
> {code:java}
> [INFO] --- frontend-maven-plugin:1.5:bower (bower install) @ hadoop-yarn-ui 
> ---
> [INFO] Running 'bower install' in 
> /Users/sunilgovindan/Work/hadoop/commit/sb_trunk/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/webapp
> [ERROR] bower ember-load-initializers#0.1.7          EINVRES Request to 
> https://bower.herokuapp.com/packages/ember-load-initializers failed with 
> 502{code}
> This needs to be corrected by pointing to the correct registry, which is 
> {{"registry": "https://registry.bower.io"}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8457) Compilation is broken with -Pyarn-ui

2018-06-25 Thread Sunil Govindan (JIRA)
Sunil Govindan created YARN-8457:


 Summary: Compilation is broken with -Pyarn-ui
 Key: YARN-8457
 URL: https://issues.apache.org/jira/browse/YARN-8457
 Project: Hadoop YARN
  Issue Type: Bug
  Components: webapp
Reporter: Sunil Govindan
Assignee: Sunil Govindan


{code:java}
[INFO] --- frontend-maven-plugin:1.5:bower (bower install) @ hadoop-yarn-ui ---

[INFO] Running 'bower install' in 
/Users/sunilgovindan/Work/hadoop/commit/sb_trunk/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/webapp

[ERROR] bower ember-load-initializers#0.1.7          EINVRES Request to 
https://bower.herokuapp.com/packages/ember-load-initializers failed with 
502{code}

This needs to be corrected by pointing to the correct registry, which is {{"registry": 
"https://registry.bower.io"}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8103) Add CLI interface to query node attributes

2018-06-25 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522196#comment-16522196
 ] 

genericqa commented on YARN-8103:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3409 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
 9s{color} | {color:green} YARN-3409 passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 22m  
2s{color} | {color:red} root in YARN-3409 failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
35s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
40s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
1s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
52s{color} | {color:green} YARN-3409 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 21m 
11s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 21m 11s{color} | 
{color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 21m 11s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  0s{color} | {color:orange} root: The patch generated 1 new + 514 unchanged 
- 29 fixed = 515 total (was 543) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
22s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
16s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 49s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m 22s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 36s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
33s{color} | {color:green} hadoop-yarn-server-common in the patch 

[jira] [Commented] (YARN-8270) Adding JMX Metrics for Timeline Collector and Reader

2018-06-25 Thread Sushil Ks (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522094#comment-16522094
 ] 

Sushil Ks commented on YARN-8270:
-

Thanks [~haibochen] for reviewing. I have resolved your comments in the new 
patch; kindly review it.

> Adding JMX Metrics for Timeline Collector and Reader
> 
>
> Key: YARN-8270
> URL: https://issues.apache.org/jira/browse/YARN-8270
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2, timelineserver
>Reporter: Sushil Ks
>Assignee: Sushil Ks
>Priority: Major
> Attachments: YARN-8270.001.patch
>
>
> This Jira is for emitting JMX metrics for the ATS v2 Timeline Collector and 
> Timeline Reader. For the Timeline Collector it captures success, failure, and 
> latencies for *putEntities* and *putEntitiesAsync* from 
> *TimelineCollectorWebService*; for the Timeline Reader it captures success, 
> failure, and latencies of all the APIs that fetch TimelineEntities from 
> *TimelineReaderWebServices*. This would help in monitoring and measuring 
> performance for ATSv2 at scale.
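
A minimal sketch of how such collector-side metrics could be exposed over JMX via Hadoop's metrics2 library, with hypothetical metric and class names (the actual patch may be structured differently):
{code:java}
import org.apache.hadoop.metrics2.MetricsSystem;
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;
import org.apache.hadoop.metrics2.lib.MutableRate;

// Sketch only: success/failure counters and latency rates for putEntities calls.
@Metrics(about = "Timeline collector metrics sketch", context = "yarn")
public class TimelineCollectorMetricsSketch {
  @Metric("putEntities success latency") MutableRate putEntitiesSuccessLatency;
  @Metric("putEntities failure latency") MutableRate putEntitiesFailureLatency;
  @Metric("putEntities failure count") MutableCounterLong putEntitiesFailures;

  public static TimelineCollectorMetricsSketch register() {
    MetricsSystem ms = DefaultMetricsSystem.instance();
    return ms.register("TimelineCollectorMetricsSketch",
        "Sketch of timeline collector metrics", new TimelineCollectorMetricsSketch());
  }

  public void recordPutEntitiesSuccess(long latencyMs) {
    putEntitiesSuccessLatency.add(latencyMs);
  }

  public void recordPutEntitiesFailure(long latencyMs) {
    putEntitiesFailures.incr();
    putEntitiesFailureLatency.add(latencyMs);
  }
}
{code}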



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8270) Adding JMX Metrics for Timeline Collector and Reader

2018-06-25 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16521971#comment-16521971
 ] 

genericqa commented on YARN-8270:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
5s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8270 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928997/YARN-8270.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cbbd262eb5da 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 440140c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21092/testReport/ |
| Max. process+thread count | 436 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21092/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Adding JMX Metrics for Timeline Collector and 

[jira] [Commented] (YARN-8103) Add CLI interface to query node attributes

2018-06-25 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16521952#comment-16521952
 ] 

Bibin A Chundatt commented on YARN-8103:


[~Naganarasimha]

Attached the latest patch handling checkstyle. As discussed offline, sorting will 
be skipped in this patch.

{code}
./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestClusterCLI.java:188:
 pw.println(" -lna,--list-node-attributes List cluster node-attribute");: Line 
is longer than 80 characters (found 89). [LineLength]
{code}

Skipped since that is the pattern followed for the rest of the code, for better 
readability.

> Add CLI interface to  query node attributes
> ---
>
> Key: YARN-8103
> URL: https://issues.apache.org/jira/browse/YARN-8103
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8103-YARN-3409.001.patch, 
> YARN-8103-YARN-3409.002.patch, YARN-8103-YARN-3409.003.patch, 
> YARN-8103-YARN-3409.004.patch, YARN-8103-YARN-3409.005.patch, 
> YARN-8103-YARN-3409.006.patch, YARN-8103-YARN-3409.WIP.patch
>
>
> YARN-8100 will add an API interface for querying the attributes. This adds a CLI 
> interface for querying the node attributes of each node and listing all attributes 
> in the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8103) Add CLI interface to query node attributes

2018-06-25 Thread Bibin A Chundatt (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-8103:
---
Attachment: YARN-8103-YARN-3409.006.patch

> Add CLI interface to  query node attributes
> ---
>
> Key: YARN-8103
> URL: https://issues.apache.org/jira/browse/YARN-8103
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8103-YARN-3409.001.patch, 
> YARN-8103-YARN-3409.002.patch, YARN-8103-YARN-3409.003.patch, 
> YARN-8103-YARN-3409.004.patch, YARN-8103-YARN-3409.005.patch, 
> YARN-8103-YARN-3409.006.patch, YARN-8103-YARN-3409.WIP.patch
>
>
> YARN-8100 will add the API interface for querying the attributes. This issue 
> adds a CLI interface for querying the node attributes of each node and for 
> listing all attributes in the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8270) Adding JMX Metrics for Timeline Collector and Reader

2018-06-25 Thread Sushil Ks (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushil Ks updated YARN-8270:

Attachment: (was: YARN-8270.001.patch)

> Adding JMX Metrics for Timeline Collector and Reader
> 
>
> Key: YARN-8270
> URL: https://issues.apache.org/jira/browse/YARN-8270
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2, timelineserver
>Reporter: Sushil Ks
>Assignee: Sushil Ks
>Priority: Major
>
> This Jira is for emitting JMX metrics for the ATSv2 Timeline Collector and 
> Timeline Reader. For the Timeline Collector it captures success, failure and 
> latency for *putEntities* and *putEntitiesAsync* in 
> *TimelineCollectorWebService*; for the Timeline Reader it captures success, 
> failure and latency of all the APIs that fetch TimelineEntities from 
> *TimelineReaderWebServices*. This helps in monitoring and measuring the 
> performance of ATSv2 at scale.
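
For illustration, a minimal sketch of the kind of metrics source this could 
expose, based on Hadoop's metrics2 library (the class, record and metric names 
below are hypothetical and not taken from the patch):

{code}
import org.apache.hadoop.metrics2.MetricsSystem;
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;
import org.apache.hadoop.metrics2.lib.MutableRate;

// Hypothetical metrics source; names are illustrative only.
@Metrics(about = "Timeline collector write metrics", context = "yarn")
public class TimelineCollectorMetricsSketch {

  @Metric("putEntities success count")
  MutableCounterLong putEntitiesSuccess;
  @Metric("putEntities failure count")
  MutableCounterLong putEntitiesFailure;
  @Metric("putEntities call latency")
  MutableRate putEntitiesLatency;

  public static TimelineCollectorMetricsSketch create() {
    MetricsSystem ms = DefaultMetricsSystem.instance();
    // Registering the source makes the counters visible over JMX.
    return ms.register("TimelineCollectorMetricsSketch",
        "Metrics for the ATSv2 timeline collector (sketch)",
        new TimelineCollectorMetricsSketch());
  }

  // Called from the web service after each putEntities request completes.
  public void recordPutEntities(boolean success, long elapsedMillis) {
    if (success) {
      putEntitiesSuccess.incr();
    } else {
      putEntitiesFailure.incr();
    }
    putEntitiesLatency.add(elapsedMillis);
  }
}
{code}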



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8401) Yarnui2 not working with out internet connection

2018-06-25 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16521885#comment-16521885
 ] 

Bibin A Chundatt commented on YARN-8401:


[~sunilg]

Any update required?
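
For context, the startup failure quoted below comes from Jetty's XML parser 
trying to resolve the web.xml DTD from java.sun.com while loading the UI2 
webapp, which fails on hosts without internet access. A minimal, generic sketch 
of switching off external DTD loading on a JAXP SAX parser (illustrative only; 
not necessarily the approach taken in the attached patch):

{code}
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;

// Generic illustration: stop a SAX parser from downloading external DTDs,
// so offline hosts do not hit UnknownHostException while parsing documents
// that declare a remote DOCTYPE (such as web.xml).
public class OfflineSafeSaxParser {
  public static SAXParser newParser() throws Exception {
    SAXParserFactory factory = SAXParserFactory.newInstance();
    // Xerces feature: do not load the external DTD referenced by DOCTYPE.
    factory.setFeature(
        "http://apache.org/xml/features/nonvalidating/load-external-dtd",
        false);
    return factory.newSAXParser();
  }
}
{code}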

> Yarnui2 not working with out internet connection
> 
>
> Key: YARN-8401
> URL: https://issues.apache.org/jira/browse/YARN-8401
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Blocker
> Attachments: YARN-8401.001.patch
>
>
> {code}
> 2018-06-06 21:10:58,611 WARN org.eclipse.jetty.webapp.WebAppContext: Failed 
> startup of context 
> o.e.j.w.WebAppContext@108a46d6{/ui2,file:///opt/HA/310/install/hadoop/resourcemanager/share/hadoop/yarn/webapps/ui2/,null}
> java.net.UnknownHostException: java.sun.com
> at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:589)
> at java.net.Socket.connect(Socket.java:538)
> at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
> at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
> at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
> at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
> at sun.net.www.http.HttpClient.New(HttpClient.java:308)
> at sun.net.www.http.HttpClient.New(HttpClient.java:326)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1168)
> at 
> sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1104)
> at 
> sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:998)
> at 
> sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:932)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1512)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1440)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLEntityManager.setupCurrentEntity(XMLEntityManager.java:646)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLEntityManager.startEntity(XMLEntityManager.java:1300)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLEntityManager.startDTDEntity(XMLEntityManager.java:1267)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLDTDScannerImpl.setInputSource(XMLDTDScannerImpl.java:263)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$DTDDriver.dispatch(XMLDocumentScannerImpl.java:1164)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$DTDDriver.next(XMLDocumentScannerImpl.java:1050)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(XMLDocumentScannerImpl.java:964)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:606)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:117)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510)
> at 
> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:848)
> at 
> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:777)
> at 
> com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
> at 
> com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1213)
> at 
> com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:649)
> at 
> com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl.parse(SAXParserImpl.java:333)
> at org.eclipse.jetty.xml.XmlParser.parse(XmlParser.java:255)
> at org.eclipse.jetty.webapp.Descriptor.parse(Descriptor.java:54)
> at 
> org.eclipse.jetty.webapp.WebDescriptor.parse(WebDescriptor.java:207)
> at org.eclipse.jetty.webapp.MetaData.setWebXml(MetaData.java:189)
> at 
> org.eclipse.jetty.webapp.WebXmlConfiguration.preConfigure(WebXmlConfiguration.java:60)
> at 
> org.eclipse.jetty.webapp.WebAppContext.preConfigure(WebAppContext.java:485)
> at 
> org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:521)
> at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
> at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
> at 
>