[jira] [Updated] (YARN-5830) Avoid preempting AM containers

2016-12-21 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5830:
---
Attachment: YARN-5830.001.patch

> Avoid preempting AM containers
> --
>
> Key: YARN-5830
> URL: https://issues.apache.org/jira/browse/YARN-5830
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
> Attachments: YARN-5830.001.patch
>
>
> While considering containers for preemption, avoid AM containers unless the 
> AM container is the only container the application has. 
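A minimal sketch of the intended check (isPreemptable() is a hypothetical 
helper, not the attached patch; the RMContainer/FSAppAttempt accessors are 
assumptions):
{code}
// Hypothetical helper for the FairScheduler preemption path (types:
// o.a.h.yarn.server.resourcemanager.rmcontainer.RMContainer,
// o.a.h.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt).
// Skip AM containers unless the AM is the app's last remaining container.
private boolean isPreemptable(RMContainer container, FSAppAttempt app) {
  if (container.isAMContainer()) {
    // Preempt the AM only when it is the application's only live container.
    return app.getLiveContainers().size() == 1;
  }
  return true;
}
{code}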






[jira] [Commented] (YARN-5585) [Atsv2] Reader side changes for entity prefix and support for pagination via additional filters

2016-12-21 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15769332#comment-15769332
 ] 

Rohith Sharma K S commented on YARN-5585:
-

bq. currently it's doing a column value filter; would it be better to use stop 
and start rows?
There are 2 cases:
# idPrefix is known: it is very easy to get the row when idPrefix is known; we 
just need to do a Get of the row key. 
# idPrefix is unknown: a range scan happens with additional filters, i.e. 
SingleColumnValueFilter plus a PageFilter (the PageFilter is an optimization; 
it is not in the attached patch but will be addressed per Varun's comment, and 
the patch works without it). So I use the existing multiple-entity method 
getResult() with additional filters. The getResult() method scans in the 
default mode, i.e. {{context.getEntityIdPrefix() == null}}, which scans all 
entity ids for the given entityType. This internally sets the start and stop 
rows. Is that not sufficient? (See the sketch below.)
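
A minimal sketch of case 2, assuming illustrative names for the start/stop 
rows and the column family/qualifier (the real ones live in the 
timeline-service HBase schema):
{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.PageFilter;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

// Range scan over the entity-type prefix; the SingleColumnValueFilter
// matches the requested entity id and the PageFilter stops the scan after
// the first matching row (the optimization mentioned above).
Scan scan = new Scan();
scan.setStartRow(startRowForEntityType);  // e.g. "...!YARN_CONTAINER!"
scan.setStopRow(stopRowForEntityType);    // same prefix, last byte + 1
FilterList filters = new FilterList(      // MUST_PASS_ALL by default
    new SingleColumnValueFilter(INFO_FAMILY, ID_QUALIFIER,
        CompareFilter.CompareOp.EQUAL, Bytes.toBytes(entityId)),
    new PageFilter(1));
scan.setFilter(filters);
{code}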

> [Atsv2] Reader side changes for entity prefix and support for pagination via 
> additional filters
> ---
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
>  Labels: yarn-5355-merge-blocker
> Attachments: 0001-YARN-5585.patch, YARN-5585-YARN-5355.0001.patch, 
> YARN-5585-YARN-5355.0002.patch, YARN-5585-YARN-5355.0003.patch, 
> YARN-5585-workaround.patch, YARN-5585.v0.patch
>
>
> TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, fromId, 
> so that entities can be retrieved after the fromId. 
> Current behavior: the default limit is set to 100. If there are 1000 entities, 
> then the REST call gives the first/last 100 entities. How do we retrieve the 
> next set of 100 entities, i.e. 101 to 200 OR 900 to 801?
> Example: if applications are stored in the database as app-1, app-2 ... app-10, 
> *getApps?limit=5* gives app-1 to app-5, but there is no way to retrieve the 
> next 5 apps. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS targets storing a large number of entities, it is a very common 
> use case to get the next set of entities using fromId rather than querying 
> all the entities. This is very useful for pagination in a web UI.






[jira] [Updated] (YARN-5969) FairShareComparator getResourceUsage poor performance

2016-12-21 Thread zhangshilong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangshilong updated YARN-5969:
---
Attachment: 20161222.patch

> FairShareComparator getResourceUsage poor performance
> -
>
> Key: YARN-5969
> URL: https://issues.apache.org/jira/browse/YARN-5969
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: zhangshilong
>Assignee: zhangshilong
> Attachments: 20161206.patch, 20161222.patch, apprunning_after.png, 
> apprunning_before.png, containerAllocatedDelta_before.png, 
> containerAllocated_after.png, pending_after.png, pending_before.png
>
>
> In the FairShareComparator class, the performance of the getResourceUsage() 
> function is very poor: it can be executed more than 100,000,000 times per 
> second. In our scenario, it takes 20 seconds per minute. 
> A simple solution is to reduce the number of calls to the function.
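
A minimal sketch of one way to reduce the call count (a hypothetical approach 
under stated assumptions, not necessarily the attached patch): snapshot each 
Schedulable's usage once per sort instead of recomputing it inside every 
comparison.
{code}
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Map;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.Schedulable;

// Sorting n schedulables runs O(n log n) comparisons, and each comparison
// used to call getResourceUsage() on both sides; caching reduces that to
// one call per schedulable. compareUsage() is a hypothetical comparator
// body over the precomputed values.
Map<Schedulable, Resource> usage = new IdentityHashMap<>();
for (Schedulable s : schedulables) {      // schedulables: List<Schedulable>
  usage.put(s, s.getResourceUsage());
}
schedulables.sort((a, b) -> compareUsage(usage.get(a), usage.get(b)));
{code}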






[jira] [Commented] (YARN-5969) FairShareComparator getResourceUsage poor performance

2016-12-21 Thread zhangshilong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15769268#comment-15769268
 ] 

zhangshilong commented on YARN-5969:



Thanks, Yufei Gu, for the reminder. I will improve my patch soon.

> FairShareComparator getResourceUsage poor performance
> -
>
> Key: YARN-5969
> URL: https://issues.apache.org/jira/browse/YARN-5969
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: zhangshilong
>Assignee: zhangshilong
> Attachments: 20161206.patch, apprunning_after.png, 
> apprunning_before.png, containerAllocatedDelta_before.png, 
> containerAllocated_after.png, pending_after.png, pending_before.png
>
>
> In the FairShareComparator class, the performance of the getResourceUsage() 
> function is very poor: it can be executed more than 100,000,000 times per 
> second. In our scenario, it takes 20 seconds per minute. 
> A simple solution is to reduce the number of calls to the function.






[jira] [Updated] (YARN-4994) Use MiniYARNCluster with try-with-resources in tests

2016-12-21 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-4994:

Fix Version/s: 3.0.0-alpha2

> Use MiniYARNCluster with try-with-resources in tests
> 
>
> Key: YARN-4994
> URL: https://issues.apache.org/jira/browse/YARN-4994
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
>  Labels: oct16-easy
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10287.01.patch, HDFS-10287.02.patch, 
> HDFS-10287.03.patch, YARN-4994.04.patch, YARN-4994.05.patch, 
> YARN-4994.06.patch, YARN-4994.07.patch, YARN-4994.08.patch, 
> YARN-4994.09.patch, YARN-4994.10.patch
>
>
> In tests, MiniYARNCluster is used with the following pattern: create a 
> MiniYARNCluster instance in a try block and close it in a finally block.
> [Try-with-resources|https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html]
>  has been preferred over that pattern since Java 7.
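
A minimal sketch of the proposed pattern (the constructor arguments, one NM 
and one local/log dir each, are illustrative):
{code}
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.MiniYARNCluster;

// MiniYARNCluster is a Service, hence Closeable, so try-with-resources
// closes it even when the test body throws, replacing the try/finally
// pattern described above.
try (MiniYARNCluster cluster = new MiniYARNCluster("test", 1, 1, 1)) {
  cluster.init(new YarnConfiguration());
  cluster.start();
  // ... exercise the cluster ...
}
{code}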






[jira] [Commented] (YARN-4994) Use MiniYARNCluster with try-with-resources in tests

2016-12-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15769187#comment-15769187
 ] 

Hudson commented on YARN-4994:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11028 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11028/])
YARN-4994. Use MiniYARNCluster with try-with-resources in tests. (aajisaka: rev 
ae401539eaf7745ec8690f9281726fb4cdcdbe94)
* (edit) 
hadoop-tools/hadoop-archive-logs/src/test/java/org/apache/hadoop/tools/TestHadoopArchiveLogsRunner.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMProxy.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java
* (edit) 
hadoop-tools/hadoop-archive-logs/src/test/java/org/apache/hadoop/tools/TestHadoopArchiveLogs.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestHedgingRequestRMFailoverProxyProvider.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestMiniYarnCluster.java


> Use MiniYARNCluster with try-with-resources in tests
> 
>
> Key: YARN-4994
> URL: https://issues.apache.org/jira/browse/YARN-4994
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
>  Labels: oct16-easy
> Attachments: HDFS-10287.01.patch, HDFS-10287.02.patch, 
> HDFS-10287.03.patch, YARN-4994.04.patch, YARN-4994.05.patch, 
> YARN-4994.06.patch, YARN-4994.07.patch, YARN-4994.08.patch, 
> YARN-4994.09.patch, YARN-4994.10.patch
>
>
> In tests, MiniYARNCluster is used with the following pattern: create a 
> MiniYARNCluster instance in a try block and close it in a finally block.
> [Try-with-resources|https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html]
>  has been preferred over that pattern since Java 7.






[jira] [Commented] (YARN-5903) Fix race condition in TestResourceManagerAdministrationProtocolPBClientImpl beforeclass setup method

2016-12-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15769153#comment-15769153
 ] 

Hadoop QA commented on YARN-5903:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
51s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5903 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12841788/YARN-5903.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7a9f90817049 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 736f54b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14437/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14437/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix race condition in TestResourceManagerAdministrationProtocolPBClientImpl 
> beforeclass setup method
> 
>
> Key: YARN-5903
> URL: https://issues.apache.org/jira/browse/YARN-5903
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-5903.02.patch, YARN-5903.03.patch, 
> yarn5903.001.patch
>
>
> This is 

[jira] [Commented] (YARN-4994) Use MiniYARNCluster with try-with-resources in tests

2016-12-21 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15769138#comment-15769138
 ] 

Akira Ajisaka commented on YARN-4994:
-

Committed this to trunk. I'll cherry-pick this to branch-2 and branch-2.8 after 
HDFS-11258 is fixed.

> Use MiniYARNCluster with try-with-resources in tests
> 
>
> Key: YARN-4994
> URL: https://issues.apache.org/jira/browse/YARN-4994
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
>  Labels: oct16-easy
> Attachments: HDFS-10287.01.patch, HDFS-10287.02.patch, 
> HDFS-10287.03.patch, YARN-4994.04.patch, YARN-4994.05.patch, 
> YARN-4994.06.patch, YARN-4994.07.patch, YARN-4994.08.patch, 
> YARN-4994.09.patch, YARN-4994.10.patch
>
>
> In tests, MiniYARNCluster is used with the following pattern: create a 
> MiniYARNCluster instance in a try block and close it in a finally block.
> [Try-with-resources|https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html]
>  has been preferred over that pattern since Java 7.






[jira] [Commented] (YARN-5993) Allow native services quicklinks to be exported for each component

2016-12-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15769123#comment-15769123
 ] 

Hadoop QA commented on YARN-5993:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
58s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core:
 The patch generated 5 new + 170 unchanged - 6 fixed = 175 total (was 176) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} hadoop-yarn-slider-core in the patch passed. {color} 
|
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5993 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12843950/YARN-5993-yarn-native-services.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 71973911edaa 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 27a13ae |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14438/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14438/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 |
| Console output | 

[jira] [Commented] (YARN-4994) Use MiniYARNCluster with try-with-resources in tests

2016-12-21 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15769115#comment-15769115
 ] 

Akira Ajisaka commented on YARN-4994:
-

+1, the test failures are not related to the patch.

> Use MiniYARNCluster with try-with-resources in tests
> 
>
> Key: YARN-4994
> URL: https://issues.apache.org/jira/browse/YARN-4994
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
>  Labels: oct16-easy
> Attachments: HDFS-10287.01.patch, HDFS-10287.02.patch, 
> HDFS-10287.03.patch, YARN-4994.04.patch, YARN-4994.05.patch, 
> YARN-4994.06.patch, YARN-4994.07.patch, YARN-4994.08.patch, 
> YARN-4994.09.patch, YARN-4994.10.patch
>
>
> In tests, MiniYARNCluster is used with the following pattern: create a 
> MiniYARNCluster instance in a try block and close it in a finally block.
> [Try-with-resources|https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html]
>  has been preferred over that pattern since Java 7.






[jira] [Updated] (YARN-5924) Resource Manager fails to load state with InvalidProtocolBufferException

2016-12-21 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-5924:
--
Assignee: Oleksii Dymytrov

> Resource Manager fails to load state with InvalidProtocolBufferException
> 
>
> Key: YARN-5924
> URL: https://issues.apache.org/jira/browse/YARN-5924
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Oleksii Dymytrov
>Assignee: Oleksii Dymytrov
> Attachments: YARN-5924.002.patch
>
>
> InvalidProtocolBufferException is thrown while recovering the application's 
> state if the application's data has an invalid format (or is broken) under 
> the FSRMStateRoot/RMAppRoot/application_1477986176766_0134/ directory in 
> HDFS:
> {noformat}
> com.google.protobuf.InvalidProtocolBufferException: Protocol message 
> end-group tag did not match expected tag.
>   at 
> com.google.protobuf.InvalidProtocolBufferException.invalidEndTag(InvalidProtocolBufferException.java:94)
>   at 
> com.google.protobuf.CodedInputStream.checkLastTagWas(CodedInputStream.java:124)
>   at 
> com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:143)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:176)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:188)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:193)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
>   at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$ApplicationStateDataProto.parseFrom(YarnServerResourceManagerRecoveryProtos.java:1028)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore$RMAppStateFileProcessor.processChildNode(FileSystemRMStateStore.java:966)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.processDirectoriesOfFiles(FileSystemRMStateStore.java:317)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadRMAppState(FileSystemRMStateStore.java:281)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadState(FileSystemRMStateStore.java:232)
> {noformat}
> The solution could be to catch "InvalidProtocolBufferException", log a 
> warning, and remove the application folder that contains the invalid data, so 
> that the RM restart does not fail. 
> Additionally, I've added a catch for other exceptions that can appear while 
> recovering a specific application, to avoid RM failure even if only one 
> application's state can't be loaded.
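
A minimal sketch of the described handling (appDir, childData, fs, and LOG 
are illustrative names; this is not the attached patch):
{code}
// Inside the state store's per-application load loop. On corrupt data,
// warn and remove the node instead of failing the whole RM recovery;
// other per-application failures are tolerated the same way.
try {
  ApplicationStateDataProto proto =
      ApplicationStateDataProto.parseFrom(childData);
  // ... continue normal recovery from proto ...
} catch (com.google.protobuf.InvalidProtocolBufferException e) {
  LOG.warn("Application state under " + appDir
      + " is corrupt; skipping it and removing the directory", e);
  fs.delete(appDir, true);
} catch (Exception e) {
  LOG.warn("Failed to recover application under " + appDir, e);
}
{code}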






[jira] [Commented] (YARN-4994) Use MiniYARNCluster with try-with-resources in tests

2016-12-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768909#comment-15768909
 ] 

Hadoop QA commented on YARN-4994:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  9m 23s{color} 
| {color:red} root generated 3 new + 687 unchanged - 3 fixed = 690 total (was 
690) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 34s{color} | {color:orange} root: The patch generated 1 new + 203 unchanged 
- 4 fixed = 204 total (was 207) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 39s{color} 
| {color:red} hadoop-yarn-server-tests in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m  
8s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
49s{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-archive-logs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.TestMiniYarnClusterNodeUtilization |
|   | hadoop.yarn.server.TestContainerManagerSecurity |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-4994 |
| JIRA Patch URL | 

[jira] [Comment Edited] (YARN-5585) [Atsv2] Reader side changes for entity prefix and support for pagination via additional filters

2016-12-21 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768842#comment-15768842
 ] 

Rohith Sharma K S edited comment on YARN-5585 at 12/22/16 3:01 AM:
---

Thanks [~sjlee0] for the review comments.

bq. I don't think we should set the info from the fromId to entity id prefix 
and entity id. The entity id prefix and the entity id should be used for a true 
single-entity query context. It would be confusing to "reuse" them to indicate 
the fromId. I would prefer an explicit fromId fields in the context so it's 
crystal clear what they are.
I am not sure why we need an extra fromId field in the context. These fields 
are part of the existing context and can be reused. Most importantly, 
entityIdPrefix and entityId are also used in the multiple-entity query 
context, where they can be used to set the start row of a range scan. Let's 
take an example with multiple rows: {code}
rohithsharmaks!yarn_cluster!SleepJob!12345!application_1482156550070_0001!YARN_CONTAINER!1!entityId-1
rohithsharmaks!yarn_cluster!SleepJob!12345!application_1482156550070_0001!YARN_CONTAINER!2!entityId-2
rohithsharmaks!yarn_cluster!SleepJob!12345!application_1482156550070_0001!YARN_CONTAINER!3!entityId-3
{code}
# When NO fromId is specified, the range scan starts with the range below; 
basically, it scans all rows of the given entityType.{code}
startRow: 
rohithsharmaks!yarn_cluster!SleepJob!12345!application_1482156550070_0001!YARN_CONTAINER!
stopRow: 
rohithsharmaks!yarn_cluster!SleepJob!12345!application_1482156550070_0001!YARN_CONTAINER"{code}
# When fromId=2:entity-2, the scan starts from the 2nd row.{code}
startRow: 
rohithsharmaks!yarn_cluster!SleepJob!12345!application_1482156550070_0001!YARN_CONTAINER!2!entityId-2
stopRow: 
rohithsharmaks!yarn_cluster!SleepJob!12345!application_1482156550070_0001!YARN_CONTAINER"{code}
# When fromId=2, the scan starts from the 2nd row. Note the difference from 
the 2nd point: the start row is built from the entityIdPrefix only.{code}
startRow: 
rohithsharmaks!yarn_cluster!SleepJob!12345!application_1482156550070_0001!YARN_CONTAINER!2!
stopRow: 
rohithsharmaks!yarn_cluster!SleepJob!12345!application_1482156550070_0001!YARN_CONTAINER"{code}

bq. Long story short, I think we can support (2) with Varun's suggestion:
Right, I have incorporated it in the patch. We need not scan from 0 to 
Long.MAX_VALUE; rather, we can range-scan for the given entity type with a 
filter. The scan range is like the 1st point in the example above.

bq. Finally, I know it's no longer directly used, but I think 
TimelineEntity.compareTo() needs updating. It does not use the entity id prefix 
at all, and it's using the creation time which is not very consistent with what 
we're doing. Can we update that method as part of this JIRA? 
Sure, I will handle it in this JIRA. 

bq. I am leaning slightly towards the former with the assumption that it should 
be truly rare that there are multiple rows for the same entity id (otherwise it 
would be a bug in the write path) and also for performance reasons.
Right, the reader will throw an error if it finds more than one row. 


was (Author: rohithsharma):
Thanks [~sjlee0] for the review comments.

bq. I don't think we should set the info from the fromId to entity id prefix 
and entity id. The entity id prefix and the entity id should be used for a true 
single-entity query context. It would be confusing to "reuse" them to indicate 
the fromId. I would prefer an explicit fromId fields in the context so it's 
crystal clear what they are.
I am not sure why we need an extra fromId field in the context. These fields 
are part of the existing context and can be reused. Most importantly, 
entityIdPrefix and entityId are also used in the multiple-entity query 
context, where they can be used to set the start row of a range scan. Let's 
take an example with multiple rows: {code}
rohithsharmaks!yarn_cluster!SleepJob!12345!application_1482156550070_0001!YARN_CONTAINER!1!entityId-1
rohithsharmaks!yarn_cluster!SleepJob!12345!application_1482156550070_0001!YARN_CONTAINER!2!entityId-2
rohithsharmaks!yarn_cluster!SleepJob!12345!application_1482156550070_0001!YARN_CONTAINER!3!entityId-3
{code}
## When NO fromId is specified, the range scan starts with the range below; 
basically, it scans all rows of the given entityType.{code}
startRow: 
rohithsharmaks!yarn_cluster!SleepJob!12345!application_1482156550070_0001!YARN_CONTAINER!
stopRow: 
rohithsharmaks!yarn_cluster!SleepJob!12345!application_1482156550070_0001!YARN_CONTAINER"{code}
## When fromId=2:entity-2, the scan starts from the 2nd row.{code}
startRow: 
rohithsharmaks!yarn_cluster!SleepJob!12345!application_1482156550070_0001!YARN_CONTAINER!2!entityId-2
stopRow: 
rohithsharmaks!yarn_cluster!SleepJob!12345!application_1482156550070_0001!YARN_CONTAINER"{code}
## When fromId=2, the scan starts from the 2nd row.{code}
startRow: 

[jira] [Commented] (YARN-5585) [Atsv2] Reader side changes for entity prefix and support for pagination via additional filters

2016-12-21 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768842#comment-15768842
 ] 

Rohith Sharma K S commented on YARN-5585:
-

Thanks [~sjlee0] for the review comments.

bq. I don't think we should set the info from the fromId to entity id prefix 
and entity id. The entity id prefix and the entity id should be used for a true 
single-entity query context. It would be confusing to "reuse" them to indicate 
the fromId. I would prefer an explicit fromId fields in the context so it's 
crystal clear what they are.
I am not sure why we need an extra fromId field in the context. These fields 
are part of the existing context and can be reused. Most importantly, 
entityIdPrefix and entityId are also used in the multiple-entity query 
context, where they can be used to set the start row of a range scan. Let's 
take an example with multiple rows: {code}
rohithsharmaks!yarn_cluster!SleepJob!12345!application_1482156550070_0001!YARN_CONTAINER!1!entityId-1
rohithsharmaks!yarn_cluster!SleepJob!12345!application_1482156550070_0001!YARN_CONTAINER!2!entityId-2
rohithsharmaks!yarn_cluster!SleepJob!12345!application_1482156550070_0001!YARN_CONTAINER!3!entityId-3
{code}
## When NO fromId is specified, the range scan starts with the range below; 
basically, it scans all rows of the given entityType.{code}
startRow: 
rohithsharmaks!yarn_cluster!SleepJob!12345!application_1482156550070_0001!YARN_CONTAINER!
stopRow: 
rohithsharmaks!yarn_cluster!SleepJob!12345!application_1482156550070_0001!YARN_CONTAINER"{code}
## When fromId=2:entity-2, the scan starts from the 2nd row.{code}
startRow: 
rohithsharmaks!yarn_cluster!SleepJob!12345!application_1482156550070_0001!YARN_CONTAINER!2!entityId-2
stopRow: 
rohithsharmaks!yarn_cluster!SleepJob!12345!application_1482156550070_0001!YARN_CONTAINER"{code}
## When fromId=2, the scan starts from the 2nd row.{code}
startRow: 
rohithsharmaks!yarn_cluster!SleepJob!12345!application_1482156550070_0001!YARN_CONTAINER!2!entityId-2
stopRow: 
rohithsharmaks!yarn_cluster!SleepJob!12345!application_1482156550070_0001!YARN_CONTAINER"{code}

bq. Long story short, I think we can support (2) with Varun's suggestion:
Right, I have incorporated it in the patch. We need not scan from 0 to 
Long.MAX_VALUE; rather, we can range-scan for the given entity type with a 
filter. The scan range is like the 1st point in the example above.

bq. Finally, I know it's no longer directly used, but I think 
TimelineEntity.compareTo() needs updating. It does not use the entity id prefix 
at all, and it's using the creation time which is not very consistent with what 
we're doing. Can we update that method as part of this JIRA? 
Sure, I will handle it in this JIRA. 

bq. I am leaning slightly towards the former with the assumption that it should 
be truly rare that there are multiple rows for the same entity id (otherwise it 
would be a bug in the write path) and also for performance reasons.
Right, the reader will throw an error if it finds more than one row. 
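
A minimal sketch of deriving the scan range from fromId (rowKeyPrefixFor() 
and nextRowAfterPrefix() are hypothetical helpers; the real builders live in 
the timeline-service row-key classes):
{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

// Case 1 (no fromId): scan the whole entity-type prefix.
// Cases 2 and 3: append "idPrefix!entityId" or "idPrefix!" to the prefix
// so the scan starts at the requested row.
byte[] prefix = rowKeyPrefixFor(clusterId, user, flowName, runId, appId,
    entityType);
byte[] startRow = (fromId == null)
    ? prefix
    : Bytes.add(prefix, Bytes.toBytes(fromId));
byte[] stopRow = nextRowAfterPrefix(prefix);  // prefix with last byte + 1
Scan scan = new Scan(startRow, stopRow);
{code}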



> [Atsv2] Reader side changes for entity prefix and support for pagination via 
> additional filters
> ---
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
>  Labels: yarn-5355-merge-blocker
> Attachments: 0001-YARN-5585.patch, YARN-5585-YARN-5355.0001.patch, 
> YARN-5585-YARN-5355.0002.patch, YARN-5585-YARN-5355.0003.patch, 
> YARN-5585-workaround.patch, YARN-5585.v0.patch
>
>
> TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, fromId, 
> so that entities can be retrieved after the fromId. 
> Current behavior: the default limit is set to 100. If there are 1000 entities, 
> then the REST call gives the first/last 100 entities. How do we retrieve the 
> next set of 100 entities, i.e. 101 to 200 OR 900 to 801?
> Example: if applications are stored in the database as app-1, app-2 ... app-10, 
> *getApps?limit=5* gives app-1 to app-5, but there is no way to retrieve the 
> next 5 apps. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS targets storing a large number of entities, it is a very common 
> use case to get the next set of entities using fromId rather than querying 
> all the entities. This is very useful for pagination in a web UI.




[jira] [Commented] (YARN-5216) Expose configurable preemption policy for OPPORTUNISTIC containers running on the NM

2016-12-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768808#comment-15768808
 ] 

Hadoop QA commented on YARN-5216:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
34s{color} | {color:green} YARN-5972 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m  
5s{color} | {color:green} YARN-5972 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} YARN-5972 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
53s{color} | {color:green} YARN-5972 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
57s{color} | {color:green} YARN-5972 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
25s{color} | {color:green} YARN-5972 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} YARN-5972 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 6 new + 326 unchanged - 0 fixed = 332 total (was 326) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
25s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
40s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5216 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12844326/YARN-5216-YARN-5972.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 276893ec8200 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5972 / 8752f53 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Updated] (YARN-6017) node manager physical memory leak

2016-12-21 Thread chenrongwei (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenrongwei updated YARN-6017:
--
Description: 
In our production environment, the node manager's JVM memory has been set to 
'-Xmx2048m', but we noticed that after running for a long time the process's 
actual physical memory size had reached 12 GB (we got this value from the top 
command, as follows).

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
31169 data  20   0 13.2g  12g 6092 S 16.9 13.0  49183:13 java

31169:   /usr/local/jdk/bin/java -Dproc_nodemanager -Xmx2048m 
-Dhadoop.log.dir=/home/data/programs/apache-hadoop-2.7.1/logs 
-Dyarn.log.dir=/home/data/programs/apache-hadoop-2.7.1/logs 
-Dhadoop.log.file=yarn-data-nodemanager.log 
-Dyarn.log.file=yarn-data-nodemanager.log -Dyarn.home.dir= -Dyarn.id.str=data 
-Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA 
-Djava.library.path=/home/data/programs/apache-hadoop-2.7.1/lib/native 
-Dyarn.policy.file=hadoop-policy.xml -XX:PermSize=128M -XX:MaxPermSize=256M 
-XX:+UseC
Address   Kbytes Mode  Offset   Device   Mapping
0040   4 r-x--  008:1 java
0060   4 rw---  008:1 java
00601000 10094936 rw---  000:0   [ anon ]
00077000 2228224 rw---  000:0   [ anon ]
0007f800  131072 rw---  000:0   [ anon ]
00325ee0 128 r-x--  008:1 ld-2.12.so
00325f01f000   4 r 0001f000 008:1 ld-2.12.so
00325f02   4 rw--- 0002 008:1 ld-2.12.so
00325f021000   4 rw---  000:0   [ anon ]
00325f201576 r-x--  008:1 libc-2.12.so
00325f38a0002048 - 0018a000 008:1 libc-2.12.so
00325f58a000  16 r 0018a000 008:1 libc-2.12.so
00325f58e000   4 rw--- 0018e000 008:1 libc-2.12.so
00325f58f000  20 rw---  000:0   [ anon ]
00325f60  92 r-x--  008:1 libpthread-2.12.so
00325f6170002048 - 00017000 008:1 libpthread-2.12.so
00325f817000   4 r 00017000 008:1 libpthread-2.12.so
00325f818000   4 rw--- 00018000 008:1 libpthread-2.12.so
00325f819000  16 rw---  000:0   [ anon ]
00325fa0   8 r-x--  008:1 libdl-2.12.so
00325fa020002048 - 2000 008:1 libdl-2.12.so
00325fc02000   4 r 2000 008:1 libdl-2.12.so
00325fc03000   4 rw--- 3000 008:1 libdl-2.12.so
00325fe0  28 r-x--  008:1 librt-2.12.so
00325fe070002044 - 7000 008:1 librt-2.12.so
003260006000   4 r 6000 008:1 librt-2.12.so
003260007000   4 rw--- 7000 008:1 librt-2.12.so
00326020 524 r-x--  008:1 libm-2.12.so
0032602830002044 - 00083000 008:1 libm-2.12.so
003260482000   4 r 00082000 008:1 libm-2.12.so
003260483000   4 rw--- 00083000 008:1 libm-2.12.so
00326120  88 r-x--  008:1 libresolv-2.12.so
0032612160002048 - 00016000 008:1 libresolv-2.12.so
003261416000   4 r 00016000 008:1 libresolv-2.12.so
003261417000   4 rw--- 00017000 008:1 libresolv-2.12.so

  was:
In our production environment, the node manager's JVM memory has been set to 
'-Xmx2048m', but we noticed that after running for a long time the process's 
actual physical memory size had reached 12 GB (we got this value from the top 
command, as follows).

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
31169 data  20   0 13.2g  12g 6092 S 16.9 13.0  49183:13 java



> node manager physical memory leak
> -
>
> Key: YARN-6017
> URL: https://issues.apache.org/jira/browse/YARN-6017
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.7.1
> Environment: OS:
> Linux guomai124041 2.6.32-431.el6.x86_64 #1 SMP Fri Nov 22 03:15:09 UTC 2013 
> x86_64 x86_64 x86_64 GNU/Linux
> jvm:
> java version "1.7.0_65"
> Java(TM) SE Runtime Environment (build 1.7.0_65-b17)
> Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
>Reporter: chenrongwei
>
> In our production environment, the node manager's JVM memory has been set to 
> '-Xmx2048m', but we noticed that after running for a long time the process's 
> actual physical memory size had reached 12 GB (we got this value from the top 
> command, as follows).
>   PID USER  PR  NI  VIRT  RES  SHR S %CPU 

[jira] [Commented] (YARN-5756) Add state-machine implementation for queues

2016-12-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768724#comment-15768724
 ] 

Hadoop QA commented on YARN-5756:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 56s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 4 new + 610 unchanged - 3 fixed = 614 total (was 613) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 43s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5756 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12844321/YARN-5756.8.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 5f643b35a520 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 736f54b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14434/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |

[jira] [Commented] (YARN-3866) AM-RM protocol changes to support container resizing

2016-12-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768708#comment-15768708
 ] 

Hadoop QA commented on YARN-3866:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m  9s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  3s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 13s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 35s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  4s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 54s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  7s{color} | {color:green} branch-2.8 passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 20s{color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2.8 failed with JDK v1.8.0_111. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 30s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 13s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m 13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 13s{color} | {color:red} hadoop-yarn-project_hadoop-yarn-jdk1.8.0_111 with JDK v1.8.0_111 generated 18 new + 39 unchanged - 0 fixed = 57 total (was 39) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 16s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m 16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 16s{color} | {color:red} hadoop-yarn-project_hadoop-yarn-jdk1.7.0_121 with JDK v1.7.0_121 generated 20 new + 48 unchanged - 0 fixed = 68 total (was 48) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  0m 32s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 32 new + 102 unchanged - 0 fixed = 134 total (was 102) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 19s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.8.0_111. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 27s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 26s{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 24s{color} | {color:green} hadoop-yarn-common in the patch passed with JDK

[jira] [Updated] (YARN-5216) Expose configurable preemption policy for OPPORTUNISTIC containers running on the NM

2016-12-21 Thread Hitesh Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Sharma updated YARN-5216:

Attachment: YARN-5216-YARN-5972.006.patch

> Expose configurable preemption policy for OPPORTUNISTIC containers running on 
> the NM
> 
>
> Key: YARN-5216
> URL: https://issues.apache.org/jira/browse/YARN-5216
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: distributed-scheduling
>Reporter: Arun Suresh
>Assignee: Hitesh Sharma
>  Labels: oct16-hard
> Attachments: YARN-5216-YARN-5972.001.patch, 
> YARN-5216-YARN-5972.002.patch, YARN-5216-YARN-5972.003.patch, 
> YARN-5216-YARN-5972.004.patch, YARN-5216-YARN-5972.005.patch, 
> YARN-5216-YARN-5972.006.patch, YARN5216.001.patch, yarn5216.002.patch
>
>
> Currently, the default action taken by the QueuingContainerManager, 
> introduced in YARN-2883, when a GUARANTEED Container is scheduled on an NM 
> with OPPORTUNISTIC containers using up resources, is to KILL the running 
> OPPORTUNISTIC containers.
> This JIRA proposes to expose a configurable hook to allow the NM to take a 
> different action.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5216) Expose configurable preemption policy for OPPORTUNISTIC containers running on the NM

2016-12-21 Thread Hitesh Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768666#comment-15768666
 ] 

Hitesh Sharma commented on YARN-5216:
-

Ok, fair point regarding the dispatcher. Updating the patch.

> Expose configurable preemption policy for OPPORTUNISTIC containers running on 
> the NM
> 
>
> Key: YARN-5216
> URL: https://issues.apache.org/jira/browse/YARN-5216
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: distributed-scheduling
>Reporter: Arun Suresh
>Assignee: Hitesh Sharma
>  Labels: oct16-hard
> Attachments: YARN-5216-YARN-5972.001.patch, 
> YARN-5216-YARN-5972.002.patch, YARN-5216-YARN-5972.003.patch, 
> YARN-5216-YARN-5972.004.patch, YARN-5216-YARN-5972.005.patch, 
> YARN-5216-YARN-5972.006.patch, YARN5216.001.patch, yarn5216.002.patch
>
>
> Currently, the default action taken by the QueuingContainerManager, 
> introduced in YARN-2883, when a GUARANTEED Container is scheduled on an NM 
> with OPPORTUNISTIC containers using up resources, is to KILL the running 
> OPPORTUNISTIC containers.
> This JIRA proposes to expose a configurable hook to allow the NM to take a 
> different action.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5478) [YARN-4902] Define Java API for generalized & unified scheduling-strategies.

2016-12-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768656#comment-15768656
 ] 

Hadoop QA commented on YARN-5478:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  0m 21s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 10 new + 46 unchanged - 2 fixed = 56 total (was 48) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 40m 30s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 18s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 14s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5478 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12844317/YARN-5478.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  findbugs  checkstyle  |
| uname | Linux 3fd6bddb3d7e 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 736f54b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/14433/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
|  Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/14433/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/14433/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [YARN-4902] Define Java API for generalized & unified scheduling-strategies.
> 
>
>   

[jira] [Commented] (YARN-5216) Expose configurable preemption policy for OPPORTUNISTIC containers running on the NM

2016-12-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768627#comment-15768627
 ] 

Hadoop QA commented on YARN-5216:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 11s{color} | {color:green} YARN-5972 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 40s{color} | {color:green} YARN-5972 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 51s{color} | {color:green} YARN-5972 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 43s{color} | {color:green} YARN-5972 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 54s{color} | {color:green} YARN-5972 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  6s{color} | {color:green} YARN-5972 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 20s{color} | {color:green} YARN-5972 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  0m 46s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 4 new + 207 unchanged - 0 fixed = 211 total (was 207) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 31s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 26s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 23s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 20s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5216 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12844315/YARN-5216-YARN-5972.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  findbugs  checkstyle  xml  |
| uname | Linux 2d5e9e7ca80f 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-5972 / 8752f53 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Commented] (YARN-5706) Fail to launch SLSRunner due to NPE

2016-12-21 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768603#comment-15768603
 ] 

Wangda Tan commented on YARN-5706:
--

[~kaisasak], got it, thanks for the additional notes!

> Fail to launch SLSRunner due to NPE
> ---
>
> Key: YARN-5706
> URL: https://issues.apache.org/jira/browse/YARN-5706
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-alpha2
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>  Labels: oct16-easy
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5706.01.patch, YARN-5706.02.patch, 
> YARN-5706.03.patch
>
>
> {code}
> java.lang.NullPointerException
>   at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:88)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.initMetrics(SLSCapacityScheduler.java:459)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.setConf(SLSCapacityScheduler.java:153)
>   at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
> {code}
> CLASSPATH for html resource is not configured properly.
> {code}
> DEBUG: Injecting share/hadoop/tools/sls/html into CLASSPATH
> DEBUG: Rejected CLASSPATH: share/hadoop/tools/sls/html (does not exist)
> {code}
> This issue can be reproduced by following the documentation 
> instructions.
> http://hadoop.apache.org/docs/current/hadoop-sls/SchedulerLoadSimulator.html
> {code}
> $ cd $HADOOP_ROOT/share/hadoop/tools/sls
> $ bin/slsrun.sh
>   --input-rumen=<TRACE_FILE> | --input-sls=<SLS_FILE>
>   --output-dir=<OUTPUT_DIR> [--nodes=<SLS_NODES_FILE>]
> [--track-jobs=<JOB_IDS>] [--print-simulation]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5706) Fail to launch SLSRunner due to NPE

2016-12-21 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768600#comment-15768600
 ] 

Kai Sasaki commented on YARN-5706:
--

[~leftnoteasy] Thanks!

The change was introduced in [this 
commit|https://github.com/apache/hadoop/commit/f990e9d229d3b83e2f2ce5b1921e2d3e7d318dca].
  It seems to be merged into {{trunk}} and {{release-3.0.0-alpha1-RC0}}.
So I think it isn't necessary to backport to branch-2/branch-2.8.

> Fail to launch SLSRunner due to NPE
> ---
>
> Key: YARN-5706
> URL: https://issues.apache.org/jira/browse/YARN-5706
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-alpha2
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>  Labels: oct16-easy
> Attachments: YARN-5706.01.patch, YARN-5706.02.patch, 
> YARN-5706.03.patch
>
>
> {code}
> java.lang.NullPointerException
>   at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:88)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.initMetrics(SLSCapacityScheduler.java:459)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.setConf(SLSCapacityScheduler.java:153)
>   at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
> {code}
> CLASSPATH for html resource is not configured properly.
> {code}
> DEBUG: Injecting share/hadoop/tools/sls/html into CLASSPATH
> DEBUG: Rejected CLASSPATH: share/hadoop/tools/sls/html (does not exist)
> {code}
> This issue can be reproduced by following the documentation 
> instructions.
> http://hadoop.apache.org/docs/current/hadoop-sls/SchedulerLoadSimulator.html
> {code}
> $ cd $HADOOP_ROOT/share/hadoop/tools/sls
> $ bin/slsrun.sh
>   --input-rumen=<TRACE_FILE> | --input-sls=<SLS_FILE>
>   --output-dir=<OUTPUT_DIR> [--nodes=<SLS_NODES_FILE>]
> [--track-jobs=<JOB_IDS>] [--print-simulation]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5756) Add state-machine implementation for queues

2016-12-21 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768555#comment-15768555
 ] 

Xuan Gong commented on YARN-5756:
-

Added a new testcase for this.

Please review.

> Add state-machine implementation for queues
> ---
>
> Key: YARN-5756
> URL: https://issues.apache.org/jira/browse/YARN-5756
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5756.1.patch, YARN-5756.2.patch, YARN-5756.3.patch, 
> YARN-5756.4.patch, YARN-5756.5.patch, YARN-5756.6.patch, YARN-5756.6.patch, 
> YARN-5756.7.patch, YARN-5756.8.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5756) Add state-machine implementation for queues

2016-12-21 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5756:

Attachment: YARN-5756.8.patch

> Add state-machine implementation for queues
> ---
>
> Key: YARN-5756
> URL: https://issues.apache.org/jira/browse/YARN-5756
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5756.1.patch, YARN-5756.2.patch, YARN-5756.3.patch, 
> YARN-5756.4.patch, YARN-5756.5.patch, YARN-5756.6.patch, YARN-5756.6.patch, 
> YARN-5756.7.patch, YARN-5756.8.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5478) [YARN-4902] Define Java API for generalized & unified scheduling-strategies.

2016-12-21 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5478:
-
Attachment: YARN-5478.2.patch

Attached ver.2 patch.

> [YARN-4902] Define Java API for generalized & unified scheduling-strategies.
> 
>
> Key: YARN-5478
> URL: https://issues.apache.org/jira/browse/YARN-5478
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5478.1.patch, YARN-5478.2.patch, 
> YARN-5478.preliminary-poc.1.patch, YARN-5478.preliminary-poc.2.patch
>
>
> Define Java API for application to specify generic scheduling requirements 
> described in YARN-4902 design doc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5216) Expose configurable preemption policy for OPPORTUNISTIC containers running on the NM

2016-12-21 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768540#comment-15768540
 ] 

Arun Suresh edited comment on YARN-5216 at 12/22/16 12:06 AM:
--

bq. I'm not so sure about adding anything into the Container interface as 
pause/resume is only for opportunistic containers.
I think that discussion is orthogonal. I would like to keep the code consistent 
with kill / launch etc., and I feel exposing 
"dispatcher.getEventHandler().handle(..)" is a bad idea, since:
# it limits testability, and
# it requires the caller (the CapacityScheduler in this case) to know what type 
of event and which dispatcher to use, both of which become tightly coupled to 
the caller. I understand such code already exists, but I would like to keep new 
code as clean as possible.

Given the above, the "need to enforce that only opportunistic containers can be 
paused" argument seems too weak to justify skipping the refactor.

+1 to the patch, pending the above and a good Jenkins run.
Also one minor nit:
The "use-pause-for-preemption" is an NM-level config, so you should not use 
RM_PREFIX.



was (Author: asuresh):
bq. I'm not so sure about adding anything into the Container interface as 
pause/resume is only for opportunistic containers.
I think that discussion is orthogonal. I would like to keep the code consistent 
with kill / launch etc., and I feel exposing 
"dispatcher.getEventHandler().handle(..)" is a bad idea, since:
# it limits testability, and
# it requires the caller (the CapacityScheduler in this case) to know what type 
of event and which dispatcher to use, both of which become tightly coupled to 
the caller. I understand such code already exists, but I would like to keep new 
code as clean as possible.
Given the above, the "need to enforce that only opportunistic containers can be 
paused" argument seems too weak to justify skipping the refactor.

+1 to the patch, pending the above and a good Jenkins run.
Also one minor nit:
The "use-pause-for-preemption" is an NM-level config, so you should not use 
RM_PREFIX.


> Expose configurable preemption policy for OPPORTUNISTIC containers running on 
> the NM
> 
>
> Key: YARN-5216
> URL: https://issues.apache.org/jira/browse/YARN-5216
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: distributed-scheduling
>Reporter: Arun Suresh
>Assignee: Hitesh Sharma
>  Labels: oct16-hard
> Attachments: YARN-5216-YARN-5972.001.patch, 
> YARN-5216-YARN-5972.002.patch, YARN-5216-YARN-5972.003.patch, 
> YARN-5216-YARN-5972.004.patch, YARN-5216-YARN-5972.005.patch, 
> YARN5216.001.patch, yarn5216.002.patch
>
>
> Currently, the default action taken by the QueuingContainerManager, 
> introduced in YARN-2883, when a GUARANTEED Container is scheduled on an NM 
> with OPPORTUNISTIC containers using up resources, is to KILL the running 
> OPPORTUNISTIC containers.
> This JIRA proposes to expose a configurable hook to allow the NM to take a 
> different action.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5216) Expose configurable preemption policy for OPPORTUNISTIC containers running on the NM

2016-12-21 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768540#comment-15768540
 ] 

Arun Suresh edited comment on YARN-5216 at 12/22/16 12:06 AM:
--

bq. I'm not so sure about adding anything into the Container interface as 
pause/resume is only for opportunistic containers.
I think that discussion is orthogonal. I would like to keep the code consistent 
with kill / launch etc., and I feel exposing 
"dispatcher.getEventHandler().handle(..)" is a bad idea, since:
# it limits testability, and
# it requires the caller (the CapacityScheduler in this case) to know what type 
of event and which dispatcher to use, both of which become tightly coupled to 
the caller. I understand such code already exists, but I would like to keep new 
code as clean as possible.

Given the above, the "need to enforce that only opportunistic containers can be 
paused" argument seems too weak to justify skipping the refactor.

+1 to the patch, pending the above and a good Jenkins run.
Also one minor nit:
The "use-pause-for-preemption" is an NM-level config, so you should not use 
RM_PREFIX.



was (Author: asuresh):
bq. I'm not so sure about adding anything into the Container interface as 
pause/resume is only for opportunistic containers.
I think that discussion is orthogonal. I would like to keep the code consistent 
with kill / launch etc., and I feel exposing 
"dispatcher.getEventHandler().handle(..)" is a bad idea, since:
# it limits testability, and
# it requires the caller (the CapacityScheduler in this case) to know what type 
of event and which dispatcher to use, both of which become tightly coupled to 
the caller. I understand such code already exists, but I would like to keep new 
code as clean as possible.

Given the above, the "need to enforce that only opportunistic containers can be 
paused" argument seems too weak to justify skipping the refactor.

+1 to the patch, pending the above and a good Jenkins run.
Also one minor nit:
The "use-pause-for-preemption" is an NM-level config, so you should not use 
RM_PREFIX.


> Expose configurable preemption policy for OPPORTUNISTIC containers running on 
> the NM
> 
>
> Key: YARN-5216
> URL: https://issues.apache.org/jira/browse/YARN-5216
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: distributed-scheduling
>Reporter: Arun Suresh
>Assignee: Hitesh Sharma
>  Labels: oct16-hard
> Attachments: YARN-5216-YARN-5972.001.patch, 
> YARN-5216-YARN-5972.002.patch, YARN-5216-YARN-5972.003.patch, 
> YARN-5216-YARN-5972.004.patch, YARN-5216-YARN-5972.005.patch, 
> YARN5216.001.patch, yarn5216.002.patch
>
>
> Currently, the default action taken by the QueuingContainerManager, 
> introduced in YARN-2883, when a GUARANTEED Container is scheduled on an NM 
> with OPPORTUNISTIC containers using up resources, is to KILL the running 
> OPPORTUNISTIC containers.
> This JIRA proposes to expose a configurable hook to allow the NM to take a 
> different action.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5216) Expose configurable preemption policy for OPPORTUNISTIC containers running on the NM

2016-12-21 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768540#comment-15768540
 ] 

Arun Suresh edited comment on YARN-5216 at 12/22/16 12:05 AM:
--

bq. I'm not so sure about adding anything into the Container interface as 
pause/resume is only for opportunistic containers.
I think that discussion is orthogonal. I would like to keep the code consistent 
with kill / launch etc., and I feel exposing 
"dispatcher.getEventHandler().handle(..)" is a bad idea, since:
# it limits testability, and
# it requires the caller (the CapacityScheduler in this case) to know what type 
of event and which dispatcher to use, both of which become tightly coupled to 
the caller. I understand such code already exists, but I would like to keep new 
code as clean as possible.
Given the above, the "need to enforce that only opportunistic containers can be 
paused" argument seems too weak to justify skipping the refactor.

+1 to the patch, pending the above and a good Jenkins run.
Also one minor nit:
The "use-pause-for-preemption" is an NM-level config, so you should not use 
RM_PREFIX.



was (Author: asuresh):
bq. I'm not so sure about adding anything into the Container interface as 
pause/resume is only for opportunistic containers.
I think that discussion is orthogonal. I would like to keep the code consistent 
with kill / launch etc., and I feel exposing 
"dispatcher.getEventHandler().handle(..)" is a bad idea, since:
# it limits testability, and
# it requires the caller (the CapacityScheduler in this case) to know what type 
of event and which dispatcher to use, both of which become tightly coupled to 
the caller. I understand such code already exists, but I would like to keep new 
code as clean as possible.
Given the above, the "need to enforce that only opportunistic containers can be 
paused" argument seems too weak to justify skipping the refactor.

+1 to the patch, pending the above and one minor nit:
The "use-pause-for-preemption" is an NM-level config, so you should not use 
RM_PREFIX.


> Expose configurable preemption policy for OPPORTUNISTIC containers running on 
> the NM
> 
>
> Key: YARN-5216
> URL: https://issues.apache.org/jira/browse/YARN-5216
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: distributed-scheduling
>Reporter: Arun Suresh
>Assignee: Hitesh Sharma
>  Labels: oct16-hard
> Attachments: YARN-5216-YARN-5972.001.patch, 
> YARN-5216-YARN-5972.002.patch, YARN-5216-YARN-5972.003.patch, 
> YARN-5216-YARN-5972.004.patch, YARN-5216-YARN-5972.005.patch, 
> YARN5216.001.patch, yarn5216.002.patch
>
>
> Currently, the default action taken by the QueuingContainerManager, 
> introduced in YARN-2883, when a GUARANTEED Container is scheduled on an NM 
> with OPPORTUNISTIC containers using up resources, is to KILL the running 
> OPPORTUNISTIC containers.
> This JIRA proposes to expose a configurable hook to allow the NM to take a 
> different action.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5216) Expose configurable preemption policy for OPPORTUNISTIC containers running on the NM

2016-12-21 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768540#comment-15768540
 ] 

Arun Suresh commented on YARN-5216:
---

bq. I'm not so sure about adding anything into the Container interface as 
pause/resume is only for opportunistic containers.
I think that discussion is orthogonal. I would like to keep the code consistent 
with kill / launch etc., and I feel exposing 
"dispatcher.getEventHandler().handle(..)" is a bad idea, since:
# it limits testability, and
# it requires the caller (the CapacityScheduler in this case) to know what type 
of event and which dispatcher to use, both of which become tightly coupled to 
the caller. I understand such code already exists, but I would like to keep new 
code as clean as possible.
Given the above, the "need to enforce that only opportunistic containers can be 
paused" argument seems too weak to justify skipping the refactor.

+1 to the patch, pending the above and one minor nit:
The "use-pause-for-preemption" is an NM-level config, so you should not use 
RM_PREFIX.


> Expose configurable preemption policy for OPPORTUNISTIC containers running on 
> the NM
> 
>
> Key: YARN-5216
> URL: https://issues.apache.org/jira/browse/YARN-5216
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: distributed-scheduling
>Reporter: Arun Suresh
>Assignee: Hitesh Sharma
>  Labels: oct16-hard
> Attachments: YARN-5216-YARN-5972.001.patch, 
> YARN-5216-YARN-5972.002.patch, YARN-5216-YARN-5972.003.patch, 
> YARN-5216-YARN-5972.004.patch, YARN-5216-YARN-5972.005.patch, 
> YARN5216.001.patch, yarn5216.002.patch
>
>
> Currently, the default action taken by the QueuingContainerManager, 
> introduced in YARN-2883, when a GUARANTEED Container is scheduled on an NM 
> with OPPORTUNISTIC containers using up resources, is to KILL the running 
> OPPORTUNISTIC containers.
> This JIRA proposes to expose a configurable hook to allow the NM to take a 
> different action.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4757) [Umbrella] Simplified discovery of services via DNS mechanisms

2016-12-21 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768534#comment-15768534
 ] 

Jian He commented on YARN-4757:
---

This JIRA is now a dependency of YARN-5079; we are going to merge the 
YARN-4757 branch into the yarn-native-services branch.

> [Umbrella] Simplified discovery of services via DNS mechanisms
> --
>
> Key: YARN-4757
> URL: https://issues.apache.org/jira/browse/YARN-4757
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Jonathan Maron
>  Labels: oct16-hard
> Attachments: 
> 0001-YARN-4757-Initial-code-submission-for-DNS-Service.patch, YARN-4757- 
> Simplified discovery of services via DNS mechanisms.pdf, 
> YARN-4757-YARN-4757.001.patch, YARN-4757-YARN-4757.002.patch, 
> YARN-4757-YARN-4757.003.patch, YARN-4757-YARN-4757.004.patch, 
> YARN-4757-YARN-4757.005.patch, YARN-4757.001.patch, YARN-4757.002.patch
>
>
> [See overview doc at YARN-4692, copying the sub-section (3.2.10.2) to track 
> all related efforts.]
> In addition to completing the present story of service-registry (YARN-913), 
> we also need to simplify the access to the registry entries. The existing 
> read mechanisms of the YARN Service Registry are currently limited to a 
> registry-specific (Java) API and a REST interface. In practice, this makes it 
> very difficult to wire up existing clients and services. For example, dynamic 
> configuration of dependent end-points of a service is not easy to implement 
> using the present registry-read mechanisms, *without* code changes to 
> existing services.
> A good solution to this is to expose the registry information through a more 
> generic and widely used discovery mechanism: DNS. Service Discovery via DNS 
> uses the well-known DNS interfaces to browse the network for services. 
> YARN-913 in fact talked about such a DNS-based mechanism but left it as a 
> future task. Having the registry information exposed via DNS 
> simplifies the life of services.
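
To give a flavor of what discovery via the well-known DNS interfaces looks like 
from the client side, here is a self-contained sketch using the JDK's JNDI DNS 
provider; the SRV record name is invented for illustration:

{code}
// Illustrative DNS SRV lookup using only the JDK (JNDI). The record name is
// hypothetical; a registry-backed DNS server would publish such records so
// existing clients can discover services without any code changes.
import java.util.Hashtable;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.InitialDirContext;

public class DnsDiscoveryDemo {
  public static void main(String[] args) throws Exception {
    Hashtable<String, String> env = new Hashtable<>();
    env.put("java.naming.factory.initial",
        "com.sun.jndi.dns.DnsContextFactory");
    InitialDirContext dns = new InitialDirContext(env);
    // Hypothetical record a registry DNS server might serve for a service.
    Attributes attrs = dns.getAttributes(
        "_myservice._tcp.user1.ycluster.example.com", new String[] {"SRV"});
    Attribute srv = attrs.get("SRV");
    for (int i = 0; i < srv.size(); i++) {
      // Each SRV value is "priority weight port host".
      System.out.println("endpoint: " + srv.get(i));
    }
  }
}
{code}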



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5928) Move ATSv2 HBase backend code into a new module that is only dependent at runtime by yarn servers

2016-12-21 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768517#comment-15768517
 ] 

Sangjin Lee commented on YARN-5928:
---

FYI, YARN-5976 has been merged, and it made some improvements to the pom. You 
might want to look at that.

Also, I suspect some hbase entries in {{hadoop-project/pom.xml}} can now be 
removed as some are no longer used.

> Move ATSv2 HBase backend code into a new module that is only dependent at 
> runtime by yarn servers
> -
>
> Key: YARN-5928
> URL: https://issues.apache.org/jira/browse/YARN-5928
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-5928-YARN-5355.02.patch, 
> YARN-5928-YARN-5355.03.patch, YARN-5928-YARN-5355.04.patch, 
> YARN-5928-YARN-5355.04.patch, YARN-5928.01.patch, YARN-5928.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5216) Expose configurable preemption policy for OPPORTUNISTIC containers running on the NM

2016-12-21 Thread Hitesh Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Sharma updated YARN-5216:

Attachment: YARN-5216-YARN-5972.005.patch

> Expose configurable preemption policy for OPPORTUNISTIC containers running on 
> the NM
> 
>
> Key: YARN-5216
> URL: https://issues.apache.org/jira/browse/YARN-5216
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: distributed-scheduling
>Reporter: Arun Suresh
>Assignee: Hitesh Sharma
>  Labels: oct16-hard
> Attachments: YARN-5216-YARN-5972.001.patch, 
> YARN-5216-YARN-5972.002.patch, YARN-5216-YARN-5972.003.patch, 
> YARN-5216-YARN-5972.004.patch, YARN-5216-YARN-5972.005.patch, 
> YARN5216.001.patch, yarn5216.002.patch
>
>
> Currently, the default action taken by the QueuingContainerManager, 
> introduced in YARN-2883, when a GUARANTEED Container is scheduled on an NM 
> with OPPORTUNISTIC containers using up resources, is to KILL the running 
> OPPORTUNISTIC containers.
> This JIRA proposes to expose a configurable hook to allow the NM to take a 
> different action.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5216) Expose configurable preemption policy for OPPORTUNISTIC containers running on the NM

2016-12-21 Thread Hitesh Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768498#comment-15768498
 ] 

Hitesh Sharma commented on YARN-5216:
-

Hi [~asuresh], thanks for the feedback. I have incorporated the feedback and 
improved the test case to exercise more code paths.

bq. Instead of explicitly calling "dispatcher.getEventHandler().handle(..)" 
from within ContainerScheduler, can you create a method inside Container: 
sendPauseEvent(String) and sendResumeEvent(String)

I'm not so sure about adding anything to the Container interface, as 
pause/resume is only for opportunistic containers. We can do that when support 
for it is added to guaranteed containers.
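
To make the refactor being debated concrete, here is a minimal, self-contained 
toy model (not NodeManager code; the types are invented for illustration) of 
hiding the dispatcher call behind a sendPauseEvent method on the container:

{code}
// Toy model of the suggested refactor, not actual NodeManager code.
// Hiding the dispatcher behind Container#sendPauseEvent keeps callers
// (e.g. the scheduler) from knowing the event type or dispatcher, and
// makes the interaction easy to mock in tests.
interface EventHandler<T> {
  void handle(T event);
}

class ContainerPauseEvent {
  final String containerId;
  final String diagnostic;
  ContainerPauseEvent(String containerId, String diagnostic) {
    this.containerId = containerId;
    this.diagnostic = diagnostic;
  }
}

class Container {
  private final String id;
  private final EventHandler<ContainerPauseEvent> handler;
  Container(String id, EventHandler<ContainerPauseEvent> handler) {
    this.id = id;
    this.handler = handler;
  }
  // Callers simply ask the container to pause; the event plumbing is hidden.
  void sendPauseEvent(String diagnostic) {
    handler.handle(new ContainerPauseEvent(id, diagnostic));
  }
}

public class PauseEventDemo {
  public static void main(String[] args) {
    Container c = new Container("container_01",
        e -> System.out.println("pausing " + e.containerId + ": " + e.diagnostic));
    c.sendPauseEvent("preempted by a GUARANTEED container");
  }
}
{code}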

> Expose configurable preemption policy for OPPORTUNISTIC containers running on 
> the NM
> 
>
> Key: YARN-5216
> URL: https://issues.apache.org/jira/browse/YARN-5216
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: distributed-scheduling
>Reporter: Arun Suresh
>Assignee: Hitesh Sharma
>  Labels: oct16-hard
> Attachments: YARN-5216-YARN-5972.001.patch, 
> YARN-5216-YARN-5972.002.patch, YARN-5216-YARN-5972.003.patch, 
> YARN-5216-YARN-5972.004.patch, YARN5216.001.patch, yarn5216.002.patch
>
>
> Currently, the default action taken by the QueuingContainerManager, 
> introduced in YARN-2883, when a GUARANTEED Container is scheduled on an NM 
> with OPPORTUNISTIC containers using up resources, is to KILL the running 
> OPPORTUNISTIC containers.
> This JIRA proposes to expose a configurable hook to allow the NM to take a 
> different action.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6018) Allow specifying resource capability for NMSimulators in topology file

2016-12-21 Thread Jonathan Hung (JIRA)
Jonathan Hung created YARN-6018:
---

 Summary: Allow specifying resource capability for NMSimulators in 
topology file
 Key: YARN-6018
 URL: https://issues.apache.org/jira/browse/YARN-6018
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scheduler-load-simulator
Reporter: Jonathan Hung


Right now the NMSimulator is configured to have a capability based on 
{{yarn.sls.nm.memory.mb}} and {{yarn.sls.nm.vcores}} in {{sls-runner.xml}}. 
This ticket is to provide this information in the topology file so that the 
capability can be specified per node.
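
As a rough sketch of the behavior this would enable (illustrative only; the key 
names and default values below are assumptions, not SLS code), a per-node entry 
in the topology file would override the global sls-runner.xml values:

{code}
// Illustrative only, not SLS code: resolve a node's capability from a
// (hypothetical) per-node topology entry, falling back to global defaults.
import java.util.Map;

public class NodeCapabilityExample {
  // Assumed stand-ins for the yarn.sls.nm.memory.mb / yarn.sls.nm.vcores
  // defaults read from sls-runner.xml.
  static final int DEFAULT_MEMORY_MB = 10240;
  static final int DEFAULT_VCORES = 10;

  static int[] capabilityFor(Map<String, Integer> nodeEntry) {
    int memoryMb = nodeEntry.getOrDefault("memory.mb", DEFAULT_MEMORY_MB);
    int vcores = nodeEntry.getOrDefault("vcores", DEFAULT_VCORES);
    return new int[] {memoryMb, vcores};
  }

  public static void main(String[] args) {
    // A node that overrides only memory keeps the default vcores.
    int[] cap = capabilityFor(Map.of("memory.mb", 4096));
    System.out.println(cap[0] + " MB, " + cap[1] + " vcores"); // 4096 MB, 10 vcores
  }
}
{code}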



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5647) [Security] Collector and reader side changes for loading auth filters and principals

2016-12-21 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768449#comment-15768449
 ] 

Sangjin Lee commented on YARN-5647:
---

My concern is when a user connects to a reader instance that belongs to one 
cluster but tries to read data for another cluster. For example, suppose the 
reader is started on cluster A, and user X, who is a proper user for 
cluster A, authenticates properly with the reader with his/her kerberos 
credentials and issues a REST URL for cluster B (e.g. "/clusters/clusterB/...").

It is true that it is partially an authorization issue. But it definitely looks 
strange that user X can authenticate to one cluster (cluster A) and 
essentially gain access to all clusters' data, sans authorization if you will. 
User X may not even exist in cluster B.

A proper authorization mechanism could solve this. Until then, we should note 
that this behavior exists...
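
A minimal sketch of the kind of per-cluster authorization check being discussed 
(purely illustrative; nothing like this exists in the patch, and the names are 
invented):

{code}
// Illustrative only: reject cross-cluster reads unless the authenticated user
// is known to the requested cluster. Authentication says who the user is;
// this is the missing authorization step tying the user to the cluster.
import java.util.Map;
import java.util.Set;

public class ClusterAccessCheckDemo {
  // Hypothetical mapping of cluster -> users allowed to read its data.
  static final Map<String, Set<String>> CLUSTER_USERS =
      Map.of("clusterA", Set.of("userX"), "clusterB", Set.of("userY"));

  static void checkAccess(String user, String requestedCluster) {
    if (!CLUSTER_USERS.getOrDefault(requestedCluster, Set.of()).contains(user)) {
      throw new SecurityException(
          user + " is not authorized for " + requestedCluster);
    }
  }

  public static void main(String[] args) {
    checkAccess("userX", "clusterA");   // OK: userX belongs to cluster A
    try {
      checkAccess("userX", "clusterB"); // rejected: userX unknown to cluster B
    } catch (SecurityException e) {
      System.out.println(e.getMessage());
    }
  }
}
{code}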

> [Security] Collector and reader side changes for loading auth filters and 
> principals
> 
>
> Key: YARN-5647
> URL: https://issues.apache.org/jira/browse/YARN-5647
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-5647-YARN-5355.wip.002.patch, 
> YARN-5647-YARN-5355.wip.003.patch, YARN-5647-YARN-5355.wip.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4990) Re-direction of a particular log file within in a container in NM UI does not redirect properly to Log Server ( history ) on container completion

2016-12-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768410#comment-15768410
 ] 

Hudson commented on YARN-4990:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11027 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11027/])
YARN-4990. Re-direction of a particular log file within in a container 
(junping_du: rev 736f54b727c3f0ecc8fb9a594f2281c240c89cb8)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/NMWebAppFilter.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebFilter.java


> Re-direction of a particular log file within in a container in NM UI does not 
> redirect properly to Log Server ( history ) on container completion
> -
>
> Key: YARN-4990
> URL: https://issues.apache.org/jira/browse/YARN-4990
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Hitesh Shah
>Assignee: Xuan Gong
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-4990.1.patch, YARN-4990.2.patch
>
>
> The NM does the redirection to the history server correctly. However, if the 
> user is viewing or has a link to a specific file, the redirect 
> ends up going to the top-level page for the container instead of 
> the specific file. Additionally, the start param to show logs from offset 
> 0 also goes missing.
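
For illustration, preserving the file name and start offset across the redirect 
amounts to something like the following sketch (the URL layout and names here 
are hypothetical, not the actual patch):

{code}
// Illustrative only: build a log-server redirect URL that keeps the specific
// log file and the start offset instead of dropping to the container's
// top-level log page. The path layout below is made up.
public class LogRedirectDemo {
  static String buildRedirect(String logServerBase, String node,
      String containerId, String user, String fileName, Long start) {
    StringBuilder url = new StringBuilder(logServerBase)
        .append("/applicationhistory/logs/").append(node)
        .append("/").append(containerId)
        .append("/").append(user);
    if (fileName != null) {
      url.append("/").append(fileName);    // keep the specific file
    }
    if (start != null) {
      url.append("?start=").append(start); // keep the offset, e.g. 0
    }
    return url.toString();
  }

  public static void main(String[] args) {
    System.out.println(buildRedirect("http://historyserver:8188", "nm1:8041",
        "container_e01_000001", "hadoop", "syslog", 0L));
  }
}
{code}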



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5864) YARN Capacity Scheduler - Queue Priorities

2016-12-21 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768409#comment-15768409
 ] 

Wangda Tan commented on YARN-5864:
--

+ [~jlowe], [~eepayne], [~sunilg]. 

Since this is related to preemption, could you also take a look and share your 
thoughts? 

Thanks

> YARN Capacity Scheduler - Queue Priorities
> --
>
> Key: YARN-5864
> URL: https://issues.apache.org/jira/browse/YARN-5864
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5864.poc-0.patch, 
> YARN-CapacityScheduler-Queue-Priorities-design-v1.pdf
>
>
> Currently, the Capacity Scheduler at every parent-queue level uses relative 
> used-capacities of the child-queues to decide which queue can get the next 
> available resource first.
> For example,
> - Q1 & Q2 are child queues under queueA
> - Q1 has 20% of configured capacity, 5% of used-capacity and
> - Q2 has 80% of configured capacity, 8% of used-capacity.
> In the situation, the relative used-capacities are calculated as below
> - Relative used-capacity of Q1 is 5/20 = 0.25
> - Relative used-capacity of Q2 is 8/80 = 0.10
> In the above example, per today’s Capacity Scheduler’s algorithm, Q2 is 
> selected by the scheduler first to receive next available resource.
> Simply ordering queues according to relative used-capacities sometimes causes 
> problems, because scarce resources can be assigned to less-important 
> apps first.
> # Latency sensitivity: This can be a problem with latency sensitive 
> applications where waiting till the ‘other’ queue gets full is not going to 
> cut it. The delay in scheduling directly reflects in the response times of 
> these applications.
> # Resource fragmentation for large-container apps: Today’s algorithm also 
> causes issues with applications that need very large containers. It is 
> possible that existing queues are all within their resource guarantees but 
> their current allocation distribution on each node may be such that an 
> application which needs large container simply cannot fit on those nodes.
> Services:
> # The above problem (2) gets worse with long running applications. With short 
> running apps, previous containers may eventually finish and make enough space 
> for the apps with large containers. But with long running services in the 
> cluster, the large containers’ application may never get resources on any 
> nodes even if its demands are not yet met.
> # Long running services are sometimes more picky w.r.t placement than normal 
> batch apps. For example, for a long running service in a separate queue (say 
> queue=service), during peak hours it may want to launch instances on 50% of 
> the cluster nodes. On each node, it may want to launch a large container, say 
> 200G memory per container.
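
To make the ordering rule in the example above concrete, here is a tiny 
self-contained illustration (a toy model, not CapacityScheduler code):

{code}
// Toy model of the example above, not CapacityScheduler code: Q2
// (8/80 = 0.10) sorts ahead of Q1 (5/20 = 0.25) because its relative
// used-capacity is lower, so Q2 receives the next available resource.
import java.util.Arrays;
import java.util.Comparator;

public class QueueOrderingDemo {
  static class Queue {
    final String name;
    final double configuredPct;
    final double usedPct;
    Queue(String name, double configuredPct, double usedPct) {
      this.name = name;
      this.configuredPct = configuredPct;
      this.usedPct = usedPct;
    }
    double relativeUsed() {
      return usedPct / configuredPct;
    }
  }

  public static void main(String[] args) {
    Queue[] queues = {new Queue("Q1", 20, 5), new Queue("Q2", 80, 8)};
    Arrays.sort(queues, Comparator.comparingDouble(Queue::relativeUsed));
    System.out.println(queues[0].name + " gets the next resource"); // Q2
  }
}
{code}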



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5585) [Atsv2] Reader side changes for entity prefix and support for pagination via additional filters

2016-12-21 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768283#comment-15768283
 ] 

Sangjin Lee commented on YARN-5585:
---

OK, went over the patch once just now. First off, I can also reproduce the test 
failure:
{noformat}
Running 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
Tests run: 26, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 35.248 sec <<< FAILURE! - in org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
testUIDQueryWithAndWithoutFlowContextInfo(org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage)  Time elapsed: 0.453 sec  <<< FAILURE!
java.lang.AssertionError: null
 at org.junit.Assert.fail(Assert.java:86)
 at org.junit.Assert.assertTrue(Assert.java:41)
 at org.junit.Assert.assertTrue(Assert.java:52)
 at org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage.testUIDQueryWithAndWithoutFlowContextInfo(TestTimelineReaderWebServicesHBaseStorage.java:886)
{noformat}

(TimelineReaderWebServices.java)
- l.317: super-nit: let's use the java style if: {{if (split != null)}}
- l.333-335: I don't think we should copy the info from the fromId into the entity id 
prefix and entity id. The entity id prefix and the entity id should be used for 
a true single-entity query context. It would be confusing to "reuse" them to 
indicate the fromId. I would prefer an explicit fromId field in the context so 
it's crystal clear what it is.

(GenericEntityReader.java)
- l.442-463: currently it's doing a column value filter; would it be better to 
use stop and start rows?
- l.473-502: as mentioned above, let's be explicit about the fromId

(TimelineReaderContext.java)
- see above; I would prefer not to mix real entity id prefix for single-entity 
queries and entity id prefix + entity id for fromId for multi-entity queries

Finally, I know it's no longer directly used, but I think 
{{TimelineEntity.compareTo()}} needs updating. It does not use the entity id 
prefix at all, and it uses the creation time, which is not very consistent 
with what we're doing. Can we update that method as part of this JIRA? Thanks!
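
For illustration, an ordering that consults the entity id prefix first might 
look like the following self-contained sketch (the class and field names are 
assumptions, not the actual TimelineEntity code):

{code}
// Illustrative ordering sketch: compare by id prefix first, then by id,
// rather than by creation time. Not the actual TimelineEntity implementation.
public class EntityKey implements Comparable<EntityKey> {
  final long idPrefix;
  final String id;

  EntityKey(long idPrefix, String id) {
    this.idPrefix = idPrefix;
    this.id = id;
  }

  @Override
  public int compareTo(EntityKey other) {
    int byPrefix = Long.compare(this.idPrefix, other.idPrefix);
    // Fall back to the entity id only when the prefixes tie.
    return byPrefix != 0 ? byPrefix : this.id.compareTo(other.id);
  }

  public static void main(String[] args) {
    // Prefix dominates: (1, "app-2") sorts before (2, "app-1").
    System.out.println(
        new EntityKey(1, "app-2").compareTo(new EntityKey(2, "app-1")) < 0);
  }
}
{code}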

> [Atsv2] Reader side changes for entity prefix and support for pagination via 
> additional filters
> ---
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
>  Labels: yarn-5355-merge-blocker
> Attachments: 0001-YARN-5585.patch, YARN-5585-YARN-5355.0001.patch, 
> YARN-5585-YARN-5355.0002.patch, YARN-5585-YARN-5355.0003.patch, 
> YARN-5585-workaround.patch, YARN-5585.v0.patch
>
>
> TimelineReader REST API's provides lot of filters to retrieve the 
> applications. Along with those, it would be good to add new filter i.e fromId 
> so that entities can be retrieved after the fromId. 
> Current Behavior : Default limit is set to 100. If there are 1000 entities 
> then REST call gives first/last 100 entities. How to retrieve next set of 100 
> entities i.e 101 to 200 OR 900 to 801?
> Example : If applications are stored in the database as app-1, app-2 ... app-10.
> *getApps?limit=5* gives app-1 to app-5. But to retrieve next 5 apps, there is 
> no way to achieve this. 
> So proposal is to have fromId in the filter like 
> *getApps?limit=5&fromId=app-5* which gives the list of apps from app-6 to 
> app-10. 
> Since ATS targets storing a large number of entities, it is a very common 
> use case to get the next set of entities using fromId rather than querying all 
> the entities. This is very useful for pagination in the web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5585) [Atsv2] Reader side changes for entity prefix and support for pagination via additional filters

2016-12-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768191#comment-15768191
 ] 

Hadoop QA commented on YARN-5585:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 47s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  1s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 38s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 46s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 31s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 36s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 29s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  0m 35s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 14 new + 23 unchanged - 9 fixed = 37 total (was 32) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 17s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice generated 13 new + 0 unchanged - 0 fixed = 13 total (was 0) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 55s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  5m 27s{color} | {color:red} hadoop-yarn-server-timelineservice-hbase-tests in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m  5s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5585 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-5709) Cleanup leader election configs and pluggability

2016-12-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768189#comment-15768189
 ] 

Hadoop QA commented on YARN-5709:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
55s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
41s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
57s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
21s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
31s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} branch-2.8 passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2.8 
failed with JDK v1.8.0_111. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 59s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn-jdk1.8.0_111 with JDK v1.8.0_111 
generated 2 new + 39 unchanged - 0 fixed = 41 total (was 39) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 17s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn-jdk1.7.0_121 with JDK v1.7.0_121 
generated 2 new + 48 unchanged - 0 fixed = 50 total (was 48) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 4 new + 321 unchanged - 9 fixed = 325 total (was 330) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed 
with JDK v1.8.0_111. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-yarn-api in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 35s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | 

[jira] [Commented] (YARN-4990) Re-direction of a particular log file within a container in the NM UI does not redirect properly to the Log Server (history) on container completion

2016-12-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768065#comment-15768065
 ] 

Hadoop QA commented on YARN-4990:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 3 new + 2 unchanged - 0 fixed = 5 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
56s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-4990 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12844028/YARN-4990.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2ad78b092ec5 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8b042bc |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14427/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14427/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14427/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Re-direction of a particular log file within a container in the NM UI does not 
> redirect properly to the Log Server (history) on container completion
> 

[jira] [Updated] (YARN-5864) YARN Capacity Scheduler - Queue Priorities

2016-12-21 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5864:
-
Attachment: YARN-CapacityScheduler-Queue-Priorities-design-v1.pdf

The originally proposed solution for the fragmented cluster doesn't have clear 
semantics and has some conflicts with existing features / assumptions.

So I worked with [~vinodkv] to propose a new solution: add queue priorities so 
that both allocation and preemption can benefit from it. We believe this has 
better semantics as well.

Updated the title / description and uploaded the v1 design doc.

Please feel free to let us know your comments. Thanks for the feedback from 
[~curino].

> YARN Capacity Scheduler - Queue Priorities
> --
>
> Key: YARN-5864
> URL: https://issues.apache.org/jira/browse/YARN-5864
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5864.poc-0.patch, 
> YARN-CapacityScheduler-Queue-Priorities-design-v1.pdf
>
>
> Currently, Capacity Scheduler at every parent-queue level uses relative 
> used-capacities of the child-queues to decide which queue can get the next 
> available resource first.
> For example,
> - Q1 & Q2 are child queues under queueA
> - Q1 has 20% of configured capacity, 5% of used-capacity and
> - Q2 has 80% of configured capacity, 8% of used-capacity.
> In this situation, the relative used-capacities are calculated as below:
> - Relative used-capacity of Q1 is 5/20 = 0.25
> - Relative used-capacity of Q2 is 8/80 = 0.10
> In the above example, per today’s Capacity Scheduler’s algorithm, Q2 is 
> selected by the scheduler first to receive the next available resource.
> Simply ordering queues according to relative used-capacities sometimes causes 
> problems, because scarce resources can be assigned to less-important 
> apps first.
> # Latency sensitivity: This can be a problem with latency sensitive 
> applications where waiting till the ‘other’ queue gets full is not going to 
> cut it. The delay in scheduling directly reflects in the response times of 
> these applications.
> # Resource fragmentation for large-container apps: Today’s algorithm also 
> causes issues with applications that need very large containers. It is 
> possible that existing queues are all within their resource guarantees, but 
> their current allocation distribution on each node may be such that an 
> application which needs a large container simply cannot fit on those nodes.
> Services:
> # The above problem (2) gets worse with long running applications. With short 
> running apps, previous containers may eventually finish and make enough space 
> for the apps with large containers. But with long running services in the 
> cluster, the large containers’ application may never get resources on any 
> node even though its demands are not yet met.
> # Long running services are sometimes more picky w.r.t placement than normal 
> batch apps. For example, for a long running service in a separate queue (say 
> queue=service), during peak hours it may want to launch instances on 50% of 
> the cluster nodes. On each node, it may want to launch a large container, say 
> 200G memory per container.
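As a concrete illustration of the ordering described above, a toy computation 
(plain Java, not scheduler code):

{code}
// Relative used-capacity = used / configured; the queue with the lowest
// ratio is offered the next available resource first.
double q1 = 5.0 / 20.0;            // Q1 -> 0.25
double q2 = 8.0 / 80.0;            // Q2 -> 0.10
String next = (q2 < q1) ? "Q2" : "Q1";
System.out.println(next);          // prints "Q2"
{code}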



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4990) Re-direction of a particular log file within a container in the NM UI does not redirect properly to the Log Server (history) on container completion

2016-12-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15768008#comment-15768008
 ] 

Hadoop QA commented on YARN-4990:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 3 new + 2 unchanged - 0 fixed = 5 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
50s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-4990 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12844028/YARN-4990.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c1da71772947 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8b042bc |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14426/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14426/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14426/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Re-direction of a particular log file within a container in the NM UI does not 
> redirect properly to the Log Server (history) on container completion
> 

[jira] [Updated] (YARN-5864) YARN Capacity Scheduler - Queue Priorities

2016-12-21 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5864:
-
Description: 
Currently, Capacity Scheduler at every parent-queue level uses relative 
used-capacities of the child-queues to decide which queue can get the next 
available resource first.

For example,
- Q1 & Q2 are child queues under queueA
- Q1 has 20% of configured capacity, 5% of used-capacity and
- Q2 has 80% of configured capacity, 8% of used-capacity.

In this situation, the relative used-capacities are calculated as below:
- Relative used-capacity of Q1 is 5/20 = 0.25
- Relative used-capacity of Q2 is 8/80 = 0.10

In the above example, per today’s Capacity Scheduler’s algorithm, Q2 is 
selected by the scheduler first to receive the next available resource.

Simply ordering queues according to relative used-capacities sometimes causes 
problems, because scarce resources can be assigned to less-important apps 
first.

# Latency sensitivity: This can be a problem with latency sensitive 
applications where waiting till the ‘other’ queue gets full is not going to cut 
it. The delay in scheduling directly reflects in the response times of these 
applications.
# Resource fragmentation for large-container apps: Today’s algorithm also 
causes issues with applications that need very large containers. It is possible 
that existing queues are all within their resource guarantees, but their current 
allocation distribution on each node may be such that an application which 
needs a large container simply cannot fit on those nodes.
Services:
# The above problem (2) gets worse with long running applications. With short 
running apps, previous containers may eventually finish and make enough space 
for the apps with large containers. But with long running services in the 
cluster, the large containers’ application may never get resources on any node 
even though its demands are not yet met.
# Long running services are sometimes more picky w.r.t placement than normal 
batch apps. For example, for a long running service in a separate queue (say 
queue=service), during peak hours it may want to launch instances on 50% of the 
cluster nodes. On each node, it may want to launch a large container, say 200G 
memory per container.


  was:
YARN-4390 added preemption for reserved containers. However, we found a case 
where a large container cannot be allocated even though all queues are under 
their limits.

For example, we have:
{code}
Two queues, a and b, capacity 50:50 
Two nodes: n1 and n2, each of them have 50 resource 
Now queue-a uses 10 on n1 and 10 on n2
queue-b asks for one single container with resource=45. 
{code} 

The container could be reserved on any of the hosts, but no preemption will 
happen because all queues are under their limits. 


> YARN Capacity Scheduler - Queue Priorities
> --
>
> Key: YARN-5864
> URL: https://issues.apache.org/jira/browse/YARN-5864
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5864.poc-0.patch
>
>
> Currently, Capacity Scheduler at every parent-queue level uses relative 
> used-capacities of the child-queues to decide which queue can get the next 
> available resource first.
> For example,
> - Q1 & Q2 are child queues under queueA
> - Q1 has 20% of configured capacity, 5% of used-capacity and
> - Q2 has 80% of configured capacity, 8% of used-capacity.
> In this situation, the relative used-capacities are calculated as below:
> - Relative used-capacity of Q1 is 5/20 = 0.25
> - Relative used-capacity of Q2 is 8/80 = 0.10
> In the above example, per today’s Capacity Scheduler’s algorithm, Q2 is 
> selected by the scheduler first to receive the next available resource.
> Simply ordering queues according to relative used-capacities sometimes causes 
> problems, because scarce resources can be assigned to less-important 
> apps first.
> # Latency sensitivity: This can be a problem with latency sensitive 
> applications where waiting till the ‘other’ queue gets full is not going to 
> cut it. The delay in scheduling directly reflects in the response times of 
> these applications.
> # Resource fragmentation for large-container apps: Today’s algorithm also 
> causes issues with applications that need very large containers. It is 
> possible that existing queues are all within their resource guarantees, but 
> their current allocation distribution on each node may be such that an 
> application which needs a large container simply cannot fit on those nodes.
> Services:
> # The above problem (2) gets worse with long running applications. With short 
> running apps, previous containers may eventually finish and make enough space 
> for the apps with large containers. But with long running services in the 
> cluster, the large 

[jira] [Updated] (YARN-5864) YARN Capacity Scheduler - Queue Priorities

2016-12-21 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5864:
-
Summary: YARN Capacity Scheduler - Queue Priorities  (was: Capacity 
Scheduler Queue Priority)

> YARN Capacity Scheduler - Queue Priorities
> --
>
> Key: YARN-5864
> URL: https://issues.apache.org/jira/browse/YARN-5864
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5864.poc-0.patch
>
>
> YARN-4390 added preemption for reserved containers. However, we found a case 
> where a large container cannot be allocated even though all queues are under 
> their limits.
> For example, we have:
> {code}
> Two queues, a and b, capacity 50:50 
> Two nodes: n1 and n2, each of them has 50 resource 
> Now queue-a uses 10 on n1 and 10 on n2
> queue-b asks for a single container with resource=45. 
> {code} 
> The container could be reserved on any of the hosts, but no preemption will 
> happen because all queues are under their limits. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5864) Capacity Scheduler Queue Priority

2016-12-21 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5864:
-
Summary: Capacity Scheduler Queue Priority  (was: Capacity Scheduler 
preemption for fragmented cluster)

> Capacity Scheduler Queue Priority
> -
>
> Key: YARN-5864
> URL: https://issues.apache.org/jira/browse/YARN-5864
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5864.poc-0.patch
>
>
> YARN-4390 added preemption for reserved containers. However, we found a case 
> where a large container cannot be allocated even though all queues are under 
> their limits.
> For example, we have:
> {code}
> Two queues, a and b, capacity 50:50 
> Two nodes: n1 and n2, each of them has 50 resource 
> Now queue-a uses 10 on n1 and 10 on n2
> queue-b asks for a single container with resource=45. 
> {code} 
> The container could be reserved on any of the hosts, but no preemption will 
> happen because all queues are under their limits. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5976) Update hbase version to 1.2

2016-12-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767981#comment-15767981
 ] 

Hadoop QA commented on YARN-5976:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  8m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
36s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
38s{color} | {color:green} root: The patch generated 0 new + 0 unchanged - 1 
fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  8m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m  9s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
38s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}129m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5976 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12844160/YARN-5976.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux f78963a54bcc 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 

[jira] [Commented] (YARN-5709) Cleanup leader election configs and pluggability

2016-12-21 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767971#comment-15767971
 ] 

Junping Du commented on YARN-5709:
--

I noticed the same issue in other JIRAs and filed an infra ticket, INFRA-13141, 
but haven't gotten any response yet.

> Cleanup leader election configs and pluggability
> 
>
> Key: YARN-5709
> URL: https://issues.apache.org/jira/browse/YARN-5709
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: yarn-5709-branch-2.8.patch, yarn-5709-wip.2.patch, 
> yarn-5709.1.patch, yarn-5709.2.patch, yarn-5709.3.patch, yarn-5709.4.patch
>
>
> While reviewing YARN-5677 and YARN-5694, I noticed we could make the 
> curator-based election code cleaner. It is nicer to get this fixed in 2.8 
> before we ship it, but this can be done at a later time as well. 
> # By EmbeddedElector, we meant it was running as part of the RM daemon. Since 
> the Curator-based elector is also running embedded, I feel the code should be 
> checking for {{!curatorBased}} instead of {{isEmbeddedElector}}
> # {{LeaderElectorService}} should probably be named 
> {{CuratorBasedEmbeddedElectorService}} or some such.
> # The code that initializes the elector should be at the same place 
> irrespective of whether it is curator-based or not. 
> # We seem to be caching the CuratorFramework instance in the RM. It makes more 
> sense for it to be in RMContext. If others are okay with it, we might even be 
> better off having a {{RMContext#getCurator()}} method to lazily create the 
> curator framework and then cache it. 
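A minimal sketch of what item 4 could look like (illustrative only; the field 
and configuration names are assumptions, not committed code):

{code}
// Hypothetical lazily-created, cached CuratorFramework accessor on RMContext.
// zkHostPort and retryPolicy are assumed to come from the RM configuration.
private CuratorFramework curator;

public synchronized CuratorFramework getCurator() {
  if (curator == null) {
    curator = CuratorFrameworkFactory.newClient(zkHostPort, retryPolicy);
    curator.start();
  }
  return curator;
}
{code}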



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5585) [Atsv2] Reader side changes for entity prefix and support for pagination via additional filters

2016-12-21 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767944#comment-15767944
 ] 

Sangjin Lee commented on YARN-5585:
---

Sorry for chiming in late on the discussion. I haven't reviewed the patch yet, 
but just to state my opinion,

I'm fine with passing {{fromId}} with the prefix and id concatenated with a 
colon (":") for multi-entity queries. I'm also OK with using only the prefix 
portion for such queries although I don't expect this to be an important use 
case.

As for specifying only the entity id for {{fromId}}, I don't know that 
this is important at all. Pagination requests would be coming mostly from 
non-human clients (e.g. UI, scripted REST clients, etc.), and as such they 
always have both pieces of information. It would be strange for them not to 
provide the id prefix. I am comfortable with just throwing an exception if the 
id prefix is missing in {{fromId}}.

For queries by entity id (i.e. single entity queries), as noted there are 
really 2 distinct use cases: (1) queries with both id prefix and entity id 
(which would be mostly coming from non-human clients), and (2) queries with 
only entity id. (1) is not ambiguous at all.

(2) can be further divided into 2 cases: (2-1) there was no id prefix written 
to the storage (i.e. default prefix = 0), and (2-2) the client (most likely 
human) simply does not know the id prefix.

Long story short, I think we can support (2) with Varun's suggestion:
{quote}
I am wondering whether we can utilize setting the start and stop row in the 
Scan for this, the reason being we know idprefix can range from 0 to the max 
value of long. 
Thus, our start row can be cluster!user!flow!runid!appid!entitytype!0!entityid 
and, as the stop row is not inclusive, we can call 
TimelineStorageUtils#calculateTheClosestNextRowKeyForPrefix for 
cluster!user!flow!runid!appid!entitytype!LONG_MAX!entityid. This would mean 
that typically only one row will be scanned. We can anyway break out of the 
loop as soon as the first row is found (which will be true for almost all the 
cases). We can use a PageFilter of 1 to keep the Scan, and the result retrieved 
via it, small. Thoughts?
{quote}

If the id prefix was not specified, we could do this range scan. The only point 
to clarify then is whether to stop at the first result or to detect the case 
where there are multiple rows and return an error. I am leaning slightly towards 
the former, on the assumption that it should be truly rare for there to be 
multiple rows for the same entity id (otherwise it would be a bug in the write 
path), and also for performance reasons.
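A minimal sketch of that scan against the HBase 1.x client API; 
buildEntityRowKey() is a hypothetical helper standing in for the real row-key 
encoding, and 'table' and 'context' are assumed to be in scope:

{code}
// Range scan over all id prefixes for one entity id; stop at the first row.
// buildEntityRowKey(context, idPrefix, entityId) is hypothetical shorthand
// for encoding cluster!user!flow!runid!appid!entitytype!idprefix!entityid.
byte[] startRow = buildEntityRowKey(context, 0L, entityId);
byte[] stopRow = TimelineStorageUtils.calculateTheClosestNextRowKeyForPrefix(
    buildEntityRowKey(context, Long.MAX_VALUE, entityId));

Scan scan = new Scan();
scan.setStartRow(startRow);        // inclusive
scan.setStopRow(stopRow);          // exclusive
scan.setFilter(new PageFilter(1)); // at most one row comes back

try (ResultScanner scanner = table.getScanner(scan)) {
  Result first = scanner.next();   // take the first (almost always only) row
  // ... materialize the TimelineEntity from 'first' here
}
{code}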

For those cases where there was no id prefix (i.e. default) written, clients 
should still set the id prefix (to 0) so that it becomes the first use case (1).

I'll go over the patch and post my feedback today. Thanks.

> [Atsv2] Reader side changes for entity prefix and support for pagination via 
> additional filters
> ---
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
>  Labels: yarn-5355-merge-blocker
> Attachments: 0001-YARN-5585.patch, YARN-5585-YARN-5355.0001.patch, 
> YARN-5585-YARN-5355.0002.patch, YARN-5585-YARN-5355.0003.patch, 
> YARN-5585-workaround.patch, YARN-5585.v0.patch
>
>
> TimelineReader REST APIs provide a lot of filters to retrieve the 
> applications. Along with those, it would be good to add a new filter, i.e. fromId, 
> so that entities can be retrieved after the fromId. 
> Current Behavior : Default limit is set to 100. If there are 1000 entities 
> then the REST call gives the first/last 100 entities. How to retrieve the next 
> set of 100 entities, i.e. 101 to 200 OR 900 to 801?
> Example : If applications are stored in a database, app-1 app-2 ... app-10.
> *getApps?limit=5* gives app-1 to app-5. But to retrieve the next 5 apps, there is 
> no way to achieve this. 
> So the proposal is to have fromId in the filter like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS is targeting storage of a large number of entities, it is a very 
> common use case to get the next set of entities using fromId rather than 
> querying all the entities. This is very useful for pagination in the web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5969) FairShareComparator getResourceUsage poor performance

2016-12-21 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767926#comment-15767926
 ] 

Yufei Gu commented on YARN-5969:


Thanks [~zsl2007]. I like the figures, especially the one about the number of 
allocated containers per minute. Based on your figures, it improved a lot at 
this scale!

The patch looks good to me generally. Minor nits:
1. Use meaningful variable names instead of "u1"/"u2". It is OK to leave 
"s1"/"s2" since they are already there.
2. It would be nice to put a comment explaining why you do this before the 
following code:
{code}
 Resource u1 = s1.getResourceUsage();
 Resource u2 = s2.getResourceUsage();
{code}
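For reference, a sketch of the caching pattern being suggested (illustrative 
only, not the actual patch; the variable names follow nit 1 above):

{code}
// Fetch each Schedulable's resource usage once per compare() call;
// getResourceUsage() is too expensive to invoke repeatedly at the
// comparator's call frequency.
@Override
public int compare(Schedulable s1, Schedulable s2) {
  Resource resourceUsage1 = s1.getResourceUsage(); // cached for this call
  Resource resourceUsage2 = s2.getResourceUsage();
  // ... the fair-share ratio and minShare checks below reuse
  // resourceUsage1/resourceUsage2 instead of calling getResourceUsage() again.
  return 0; // placeholder: the real comparator logic is elided
}
{code}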

> FairShareComparator getResourceUsage poor performance
> -
>
> Key: YARN-5969
> URL: https://issues.apache.org/jira/browse/YARN-5969
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: zhangshilong
>Assignee: zhangshilong
> Attachments: 20161206.patch, apprunning_after.png, 
> apprunning_before.png, containerAllocatedDelta_before.png, 
> containerAllocated_after.png, pending_after.png, pending_before.png
>
>
> In the FairShareComparator class, the performance of the getResourceUsage() 
> function is very poor: it can be executed more than 100,000,000 times per second.
> In our scenario, it takes 20 seconds per minute.  
> A simple solution is to reduce the call count of the function.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5647) [Security] Collector and reader side changes for loading auth filters and principals

2016-12-21 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767884#comment-15767884
 ] 

Varun Saxena edited comment on YARN-5647 at 12/21/16 7:11 PM:
--

bq. Something to think about and answer, however: how do we ensure we don't get 
mixed up with the timeline state store from v.1 on the same node? Is it not a 
concern because in v.1 the state store would be on the RM machine?
Different DB names should ward that off. I was planning to consider these 
changes (i.e. recovery-related changes) in another JIRA. For this JIRA, we can 
assume that it's a fresh setup.

bq. there is the scenario of using a reader instance to access other clusters' 
data than the one the reader belongs to. It's not clear how to 
implement/enforce authentication for that. 
Not sure if I understood it correctly. Do you mean: how do we identify whether 
a request originating from cluster1 (say) is trying to access data of 
cluster2 (say)?
It will be hard to identify what is cluster 1 and what is cluster 2 effectively 
unless a list of hosts belonging to each is maintained, which I don't think is 
a feasible solution.
I think this entirely depends on the way deployment is done and whether 
isolation amongst clusters is required. Frankly, this issue is more about 
authorization, and can be achieved by having different users based on cluster, 
apps, or some other criteria. For instance, if we do not want users of one 
cluster to access data of the other, we can probably have a different set of 
users for each. It's not only about clusters; you may not want to have access 
across applications as well.   
Now, we can define ACLs at the entity level and check against them while 
returning results from the reader. And Kerberos authentication for each user 
would anyways be done.
In ATSv1 we used to have multiple entities published within the scope of a 
specific domain. We can probably extend the same idea when we implement 
authorization. This anyways would require more thought and discussion and can 
be done when we decide on the design for authorization.
Also, it may not be necessary for a reader to belong to a specific cluster (if 
the ownership of each cluster is different). If it's that big a concern, you can 
choose to have separate reader(s) for each cluster.


was (Author: varun_saxena):
bq. Something to think about and answer, however: how do we ensure we don't get 
mixed up with the timeline state store from v.1 on the same node? Is it not a 
concern because in v.1 the state store would be on the RM machine?
Different DB names should ward that off. I was planning to consider these 
changes(i.e. recovery related changes) in another JIRA. For this JIRA, we can 
assume that its a fresh setup.

bq. there is the scenario of using a reader instance to access other clusters' 
data than the one the reader belongs to. It's not clear how to 
implement/enforce authentication for that. 
Not sure if I understood it correctly. Do you mean how do we identify if a 
request is originating from cluster1(say) and trying to access data of 
cluster2(say) ?
It will be hard to identify what is cluster 1 and what is cluster 2 effectively 
unless a list of hosts belonging to each is maintained, which I dont think is a 
feasible solution.
I think this entirely depends on the way deployment and whether isolation 
amongst clusters is required. Frankly this issue is more about authorization. 
And can be achieved by having different users based on cluster or apps or some 
other criteria. For instance, if we do not want users of one cluster to access 
data of other we can probably have different set of users for both. Its not 
only about clusters, you may not want to have access across applications as 
well.   
Now, we can define ACLs' at entity level and check against them while returning 
results from reader. And kerberos authentication for each user would anyways be 
done.
In ATSv1 we used to have multiple entities published within the scope of a 
specific domain. We can probably extend the same idea when we implement 
authorization. This anyways would require more thought and discussion and can 
be done when we decide on design for authorization.
Also, it may not be necessary for a reader to belong to a specific cluster(if 
ownership for each cluster is different). If its that big a concern, you can 
choose to have separate reader(s) for each cluster.

> [Security] Collector and reader side changes for loading auth filters and 
> principals
> 
>
> Key: YARN-5647
> URL: https://issues.apache.org/jira/browse/YARN-5647
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: 

[jira] [Commented] (YARN-5647) [Security] Collector and reader side changes for loading auth filters and principals

2016-12-21 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767884#comment-15767884
 ] 

Varun Saxena commented on YARN-5647:


bq. Something to think about and answer, however: how do we ensure we don't get 
mixed up with the timeline state store from v.1 on the same node? Is it not a 
concern because in v.1 the state store would be on the RM machine?
Different DB names should ward that off. I was planning to consider these 
changes (i.e. recovery-related changes) in another JIRA. For this JIRA, we can 
assume that it's a fresh setup.

bq. there is the scenario of using a reader instance to access other clusters' 
data than the one the reader belongs to. It's not clear how to 
implement/enforce authentication for that. 
Not sure if I understood it correctly. Do you mean: how do we identify whether 
a request originating from cluster1 (say) is trying to access data of 
cluster2 (say)?
It will be hard to identify what is cluster 1 and what is cluster 2 effectively 
unless a list of hosts belonging to each is maintained, which I don't think is 
a feasible solution.
I think this entirely depends on the way deployment is done and whether 
isolation amongst clusters is required. Frankly, this issue is more about 
authorization, and can be achieved by having different users based on cluster, 
apps, or some other criteria. For instance, if we do not want users of one 
cluster to access data of the other, we can probably have a different set of 
users for each. It's not only about clusters; you may not want to have access 
across applications as well.   
Now, we can define ACLs at the entity level and check against them while 
returning results from the reader. And Kerberos authentication for each user 
would anyways be done.
In ATSv1 we used to have multiple entities published within the scope of a 
specific domain. We can probably extend the same idea when we implement 
authorization. This anyways would require more thought and discussion, and can 
be done when we decide on the design for authorization.
Also, it may not be necessary for a reader to belong to a specific cluster (if 
the ownership of each cluster is different). If it's that big a concern, you can 
choose to have separate reader(s) for each cluster.
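To make the entity-level ACL idea concrete, a purely hypothetical sketch of a 
check in the reader's result path (aclManager, checkAccess, and getDomainId are 
illustrative stand-ins, not existing APIs; the actual authorization design is 
still to be decided):

{code}
// Hypothetical entity-level ACL filtering while returning reader results,
// extending the ATSv1 domain idea mentioned above.
List<TimelineEntity> visible = new ArrayList<>();
for (TimelineEntity entity : fetchedEntities) {
  if (aclManager.checkAccess(callerUgi, getDomainId(entity))) {
    visible.add(entity);
  }
}
return visible;
{code}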

> [Security] Collector and reader side changes for loading auth filters and 
> principals
> 
>
> Key: YARN-5647
> URL: https://issues.apache.org/jira/browse/YARN-5647
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-5647-YARN-5355.wip.002.patch, 
> YARN-5647-YARN-5355.wip.003.patch, YARN-5647-YARN-5355.wip.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5906) Update AppSchedulingInfo to use SchedulingPlacementSet

2016-12-21 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767842#comment-15767842
 ] 

Wangda Tan commented on YARN-5906:
--

[~sunilg], 

Thanks for comments,

bq. I was more or less thinking of moving a few common methods to an abstract class.
I would prefer to do this once we have more SchedulingPlacementSet 
implementations. 

bq. My doubt is, when we need to provide an iterator over a set of nodes which 
are arranged in some order (as per a configured policy), this code may become 
trickier.
What I want is to implement different sort mechanisms inside different 
SchedulingPlacementSets, so I'm not sure we need to add a pluggable iterator 
interface inside SchedulingPlacementSet; I would also prefer to do this when we 
add more SchedulingPlacementSet implementations.

Any other comments?

> Update AppSchedulingInfo to use SchedulingPlacementSet
> --
>
> Key: YARN-5906
> URL: https://issues.apache.org/jira/browse/YARN-5906
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5906.1.patch, YARN-5906.2.patch, YARN-5906.3.patch, 
> YARN-5906.4.patch
>
>
> Currently AppSchedulingInfo simply stores resource requests, and the scheduler 
> makes decisions according to the stored requests. For example, CS/FS use 
> slightly different approaches to get pending resource requests and make delay 
> scheduling decisions. 
> There are several benefits to moving the pending resource request data 
> structure to SchedulingPlacementSet:
> 1) Delay scheduling logic should be agnostic to the scheduler; for example, CS 
> supports count-based delay and FS supports both count-based and time-based 
> delay. Ideally a scheduler should be able to choose which delay scheduling 
> policy to use.
> 2) In addition to 1), YARN-4902 has a proposal to support pluggable delay 
> scheduling behavior beyond locality-based (host->rack->offswitch), which 
> requires more flexibility.
> 3) To make YARN-4902 real, instead of directly adding the new resource request 
> API to the client, we can make the scheduler use it internally to 
> make sure it is well defined. And AppSchedulingInfo/SchedulingPlacementSet 
> will be the perfect place to isolate which ResourceRequest implementation to 
> use.
> 4) Different scheduling requirements need different behavior when checking the 
> ResourceRequest table.
> This JIRA is the first of several refactorings; it moves all 
> ResourceRequest data structures and logic to SchedulingPlacementSet. We need 
> follow-up changes to make it better structured:
> - Make delay scheduling to be a plugin of SchedulingPlacementSet
> - After YARN-4902 get committed, change SchedulingPlacementSet to use 
> YARN-4902 internally.
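As a rough illustration of this direction, a hypothetical shape for the 
interface (method names and signatures are assumptions for illustration, not 
the committed API):

{code}
// Hypothetical sketch: pending requests and delay-scheduling policy live
// behind SchedulingPlacementSet instead of directly inside AppSchedulingInfo.
interface SchedulingPlacementSet<N extends SchedulerNode> {
  // Nodes to try for this app, ordered by the placement/delay policy.
  Iterator<N> getPreferredNodeIterator(PlacementSet<N> clusterNodes);

  // Replace/update the pending ResourceRequests for this placement set.
  boolean updateResourceRequests(List<ResourceRequest> requests);

  // Look up a pending request by scheduler key and resource name.
  ResourceRequest getResourceRequest(SchedulerRequestKey schedulerKey,
      String resourceName);
}
{code}

Keeping the node ordering behind an iterator is what would let each 
SchedulingPlacementSet implement its own sort/delay policy, per the comment 
above.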



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5976) Update hbase version to 1.2

2016-12-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767718#comment-15767718
 ] 

Hudson commented on YARN-5976:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11025 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11025/])
YARN-5976. Update hbase version to 1.2. Contributed by Vrushali C. (sjlee: rev 
8b042bc1e6ae5e18d435d6a184dec1811cc3a513)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/TimelineSchemaCreator.java
* (edit) hadoop-project/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/PhoenixOfflineAggregationWriterImpl.java
* (edit) LICENSE.txt
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/pom.xml
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestPhoenixOfflineAggregationWriterImpl.java


> Update hbase version to 1.2
> ---
>
> Key: YARN-5976
> URL: https://issues.apache.org/jira/browse/YARN-5976
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5976-YARN-5355.004.patch, YARN-5976.001.wip.patch, 
> YARN-5976.002.wip.patch, YARN-5976.004.patch
>
>
> I believe Phoenix now works with HBase 1.2. We should now upgrade the timeline 
> service to use HBase 1.2. 
> And also update the documentation in timelineservice to reflect that the HBase 
> mode with all daemons in a single JVM but writing to HDFS is supported. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5976) Update hbase version to 1.2

2016-12-21 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767700#comment-15767700
 ] 

Sangjin Lee edited comment on YARN-5976 at 12/21/16 6:06 PM:
-

Committed it to trunk, YARN-5355 and YARN-5355-branch-2.

Thanks [~vrushalic] for your contributions, and others for your reviews!


was (Author: sjlee0):
Committed it to trunk, YARN-5355 and YARN-5355-branch-2.

> Update hbase version to 1.2
> ---
>
> Key: YARN-5976
> URL: https://issues.apache.org/jira/browse/YARN-5976
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5976-YARN-5355.004.patch, YARN-5976.001.wip.patch, 
> YARN-5976.002.wip.patch, YARN-5976.004.patch
>
>
> I believe Phoenix now works with HBase 1.2, so we should upgrade the timeline 
> service to use HBase 1.2. 
> We should also update the timelineservice documentation to reflect that running 
> all HBase daemons in a single JVM while writing to HDFS is supported. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5976) Update hbase version to 1.2

2016-12-21 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767647#comment-15767647
 ] 

Sangjin Lee commented on YARN-5976:
---

The CI result is not really applicable. The root build failure was caused by 
the unrelated YARN UI module. The deprecation warnings logged are the same 
between trunk and the patched build, and are unrelated. The test failure is 
unrelated, and the ASF license problems are false positives.

I did a local build to confirm all of this and verified that the dependencies are clean.

+1. I'm going to commit this shortly.

> Update hbase version to 1.2
> ---
>
> Key: YARN-5976
> URL: https://issues.apache.org/jira/browse/YARN-5976
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-5355-merge-blocker
> Fix For: YARN-5355
>
> Attachments: YARN-5976-YARN-5355.004.patch, YARN-5976.001.wip.patch, 
> YARN-5976.002.wip.patch, YARN-5976.004.patch
>
>
> I believe Phoenix now works with HBase 1.2, so we should upgrade the timeline 
> service to use HBase 1.2. 
> We should also update the timelineservice documentation to reflect that running 
> all HBase daemons in a single JVM while writing to HDFS is supported. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5903) Fix race condition in TestResourceManagerAdministrationProtocolPBClientImpl beforeclass setup method

2016-12-21 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767416#comment-15767416
 ] 

Varun Saxena commented on YARN-5903:


+1 pending Jenkins.
Will commit it later today unless there are further comments.

> Fix race condition in TestResourceManagerAdministrationProtocolPBClientImpl 
> beforeclass setup method
> 
>
> Key: YARN-5903
> URL: https://issues.apache.org/jira/browse/YARN-5903
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-5903.02.patch, YARN-5903.03.patch, 
> yarn5903.001.patch
>
>
> This is essentially the same race condition as in YARN-5901, that is, 
> resourcemanager.getServiceState() == STATE.STARTED does not guarantee that the 
> resource manager is fully started.
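
A generic fix for this class of race is to poll for a stronger readiness condition before the tests proceed. A minimal sketch of such a helper (illustrative only, not the actual patch; Hadoop's test utilities offer a similar GenericTestUtils.waitFor):

{code:java}
import java.util.function.BooleanSupplier;

public final class WaitUtil {
  /**
   * Polls the condition until it holds or the timeout expires. Checking an
   * observable effect (e.g. an RPC actually being answered) is stronger than
   * only checking getServiceState() == STARTED.
   */
  public static void waitFor(BooleanSupplier condition, long timeoutMs, long intervalMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!condition.getAsBoolean()) {
      if (System.currentTimeMillis() > deadline) {
        throw new IllegalStateException("Timed out waiting for condition");
      }
      Thread.sleep(intervalMs);
    }
  }
}
{code}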



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5585) [Atsv2] Reader side changes for entity prefix and support for pagination via additional filters

2016-12-21 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767264#comment-15767264
 ] 

Varun Saxena commented on YARN-5585:


bq. Yes, it is required. When entity is retrieved, UID is constructed using 
entity details. 
Sorry, I had missed the call to encodeUID after setting the prefix. You are 
correct. This is fine.

> [Atsv2] Reader side changes for entity prefix and support for pagination via 
> additional filters
> ---
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
>  Labels: yarn-5355-merge-blocker
> Attachments: 0001-YARN-5585.patch, YARN-5585-YARN-5355.0001.patch, 
> YARN-5585-YARN-5355.0002.patch, YARN-5585-YARN-5355.0003.patch, 
> YARN-5585-workaround.patch, YARN-5585.v0.patch
>
>
> The TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, fromId, 
> so that entities can be retrieved after the given fromId. 
> Current behavior: the default limit is set to 100. If there are 1000 entities, 
> the REST call gives the first/last 100 entities. How does one retrieve the next 
> set of 100 entities, i.e. 101 to 200 OR 900 to 801?
> Example: if the stored applications are app-1, app-2, ... app-10, 
> *getApps?limit=5* gives app-1 to app-5, but there is no way to retrieve the 
> next 5 apps. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS targets storing a large number of entities, it is a very common 
> use case to get the next set of entities using fromId rather than querying all 
> the entities. This is very useful for pagination in a web UI.
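
For illustration, this is how a client could page through results with the proposed filter. A hedged sketch: the reader host, port, and path are placeholders, and fromId itself is only a proposal at this point.

{code:java}
public class FromIdPaging {
  /** Builds the reader query for the next page, following the proposed fromId semantics. */
  static String nextPageUrl(String baseUrl, int limit, String lastSeenId) {
    StringBuilder sb = new StringBuilder(baseUrl).append("?limit=").append(limit);
    if (lastSeenId != null) {
      // Resume after the last entity of the previous page.
      sb.append("&fromId=").append(lastSeenId);
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    String base = "http://reader-host:8188/ws/v2/timeline/apps"; // placeholder endpoint
    System.out.println(nextPageUrl(base, 5, null));    // first page: app-1 .. app-5
    System.out.println(nextPageUrl(base, 5, "app-5")); // next page: app-6 .. app-10
  }
}
{code}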



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5585) [Atsv2] Reader side changes for entity prefix and support for pagination via additional filters

2016-12-21 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15767212#comment-15767212
 ] 

Varun Saxena commented on YARN-5585:


bq. This would also require changing the TimelineEntity equals method to compare 
idprefix as well. 
Probably not. In fact, if we do not compare the id prefix, equals will help us 
determine whether a duplicate entity (with the same entity id and type) is being 
added while adding it to the LinkedHashSet. We can then throw an exception (if we 
agree upon it) when a duplicate entity is added. This can be done in the 
TimelineEntityReader class. My intention was to keep it consistent with the get 
entity call, but this has the disadvantage of not returning anything if even one 
entity is a duplicate. 
Another option would be to wrap the list of timeline entities inside another 
class, which can additionally contain a list of error entities that are 
duplicates, to alert the user. This could be returned in the HTTP response of the 
get entities call.
We can see what the majority opinion is on this, though. Li, your thoughts on this?
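
To make the identity contract concrete, here is a simplified sketch (an illustrative entity class, not the real TimelineEntity) showing how an equals that ignores the id prefix lets a LinkedHashSet flag duplicates on insertion:

{code:java}
import java.util.LinkedHashSet;
import java.util.Objects;
import java.util.Set;

/** Simplified entity: identity is (type, id); idPrefix deliberately excluded. */
class Entity {
  final String type;
  final String id;
  final long idPrefix;

  Entity(String type, String id, long idPrefix) {
    this.type = type;
    this.id = id;
    this.idPrefix = idPrefix;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof Entity)) return false;
    Entity e = (Entity) o;
    return type.equals(e.type) && id.equals(e.id); // idPrefix not compared
  }

  @Override
  public int hashCode() {
    return Objects.hash(type, id);
  }

  public static void main(String[] args) {
    Set<Entity> entities = new LinkedHashSet<>();
    entities.add(new Entity("YARN_CONTAINER", "c_1", 10L));
    boolean added = entities.add(new Entity("YARN_CONTAINER", "c_1", 20L));
    // added == false: the duplicate (same type and id, different prefix) is
    // detected here, and the reader could throw or record it as an error entity.
    System.out.println("duplicate detected: " + !added);
  }
}
{code}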

bq. We can have one wrapper for the method createHBaseSingleColValueFilter that 
takes a column as input.
Yeah, we can probably do so.

bq. Shall we avoid using those constants? We can set an enum to represent each 
part of the tuple list.
Which constants are we talking about here? Delimiters?



> [Atsv2] Reader side changes for entity prefix and support for pagination via 
> additional filters
> ---
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
>  Labels: yarn-5355-merge-blocker
> Attachments: 0001-YARN-5585.patch, YARN-5585-YARN-5355.0001.patch, 
> YARN-5585-YARN-5355.0002.patch, YARN-5585-YARN-5355.0003.patch, 
> YARN-5585-workaround.patch, YARN-5585.v0.patch
>
>
> The TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, fromId, 
> so that entities can be retrieved after the given fromId. 
> Current behavior: the default limit is set to 100. If there are 1000 entities, 
> the REST call gives the first/last 100 entities. How does one retrieve the next 
> set of 100 entities, i.e. 101 to 200 OR 900 to 801?
> Example: if the stored applications are app-1, app-2, ... app-10, 
> *getApps?limit=5* gives app-1 to app-5, but there is no way to retrieve the 
> next 5 apps. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS targets storing a large number of entities, it is a very common 
> use case to get the next set of entities using fromId rather than querying all 
> the entities. This is very useful for pagination in a web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5924) Resource Manager fails to load state with InvalidProtocolBufferException

2016-12-21 Thread Oleksii Dymytrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksii Dymytrov updated YARN-5924:
---
Attachment: YARN-5924.002.patch

> Resource Manager fails to load state with InvalidProtocolBufferException
> 
>
> Key: YARN-5924
> URL: https://issues.apache.org/jira/browse/YARN-5924
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Oleksii Dymytrov
> Attachments: YARN-5924.002.patch
>
>
> InvalidProtocolBufferException is thrown during recovering of the 
> application's state if application's data has invalid format (or is broken) 
> under FSRMStateRoot/RMAppRoot/application_1477986176766_0134/ directory in 
> HDFS:
> {noformat}
> com.google.protobuf.InvalidProtocolBufferException: Protocol message 
> end-group tag did not match expected tag.
>   at 
> com.google.protobuf.InvalidProtocolBufferException.invalidEndTag(InvalidProtocolBufferException.java:94)
>   at 
> com.google.protobuf.CodedInputStream.checkLastTagWas(CodedInputStream.java:124)
>   at 
> com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:143)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:176)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:188)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:193)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
>   at 
> org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$ApplicationStateDataProto.parseFrom(YarnServerResourceManagerRecoveryProtos.java:1028)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore$RMAppStateFileProcessor.processChildNode(FileSystemRMStateStore.java:966)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.processDirectoriesOfFiles(FileSystemRMStateStore.java:317)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadRMAppState(FileSystemRMStateStore.java:281)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadState(FileSystemRMStateStore.java:232)
> {noformat}
> The solution can be to catch "InvalidProtocolBufferException", log a warning, 
> and remove the application's folder that contains the invalid data to prevent 
> the RM restart failure. 
> Additionally, I've added a catch for other exceptions that can appear while 
> recovering a specific application, to avoid RM failure even if only one 
> application's state can't be loaded.
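
A minimal sketch of the proposed tolerant recovery loop, using plain java.nio stand-ins for the state store's parsing and layout (the actual patch works inside FileSystemRMStateStore against HDFS; all names here are illustrative):

{code:java}
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

/** Sketch: skip (and optionally remove) corrupt per-application state nodes. */
public class RecoverySketch {
  /** Stand-in for the protobuf parse; throws on corrupt data. */
  static String parseAppState(Path appDir) throws IOException {
    byte[] bytes = Files.readAllBytes(appDir.resolve("appstate"));
    if (bytes.length == 0) {
      throw new IOException("truncated state");
    }
    return new String(bytes);
  }

  public static void main(String[] args) throws IOException {
    Map<Path, String> recovered = new HashMap<>();
    // Assumes a local "rmstate" directory with one subdirectory per application.
    try (DirectoryStream<Path> apps = Files.newDirectoryStream(Paths.get("rmstate"))) {
      for (Path appDir : apps) {
        try {
          recovered.put(appDir, parseAppState(appDir));
        } catch (IOException e) {
          // Warn and drop only this application's state instead of failing startup.
          System.err.println("Skipping corrupt app state " + appDir + ": " + e.getMessage());
        }
      }
    }
    System.out.println("Recovered " + recovered.size() + " applications");
  }
}
{code}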



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4994) Use MiniYARNCluster with try-with-resources in tests

2016-12-21 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated YARN-4994:
---
Attachment: YARN-4994.10.patch

Thanks [~ajisakaa],

I agree with both points you raised, so I fixed them and uploaded a new patch.

> Use MiniYARNCluster with try-with-resources in tests
> 
>
> Key: YARN-4994
> URL: https://issues.apache.org/jira/browse/YARN-4994
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
>  Labels: oct16-easy
> Attachments: HDFS-10287.01.patch, HDFS-10287.02.patch, 
> HDFS-10287.03.patch, YARN-4994.04.patch, YARN-4994.05.patch, 
> YARN-4994.06.patch, YARN-4994.07.patch, YARN-4994.08.patch, 
> YARN-4994.09.patch, YARN-4994.10.patch
>
>
> In tests, MiniYARNCluster is used with the following pattern:
> create a MiniYARNCluster instance in a try block and close it in the finally 
> block.
> [Try-with-resources|https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html]
>  has been preferred over the pattern above since Java 7.
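
For reference, the two patterns look like this inside a test (a sketch; it relies on Hadoop's Service interface extending Closeable, so close() stops the cluster, and the constructor arguments are illustrative):

{code:java}
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.MiniYARNCluster;

// Before: create in a try block, close in the finally block.
MiniYARNCluster cluster = new MiniYARNCluster("test", 1, 1, 1);
try {
  cluster.init(new YarnConfiguration());
  cluster.start();
  // ... test body ...
} finally {
  cluster.stop();
}

// After: try-with-resources stops the cluster even if the test body throws.
try (MiniYARNCluster cluster2 = new MiniYARNCluster("test", 1, 1, 1)) {
  cluster2.init(new YarnConfiguration());
  cluster2.start();
  // ... test body ...
}
{code}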



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5866) [YARN-3368] Fix few issues reported by jshint in new YARN UI

2016-12-21 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15766727#comment-15766727
 ] 

Sunil G commented on YARN-5866:
---

+1 to the latest patch.
I also tested it and all pages seem fine.

Will commit tomorrow if there are no objections.

> [YARN-3368] Fix few issues reported by jshint in new YARN UI
> 
>
> Key: YARN-5866
> URL: https://issues.apache.org/jira/browse/YARN-5866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-5866.001.patch, YARN-5866.002.patch, 
> YARN-5866.003.patch, YARN-5866.004.patch, YARN-5866.005.patch, 
> YARN-5866.006.patch
>
>
> There are a few minor issues reported by jshint (a JavaScript lint tool).
> This jira is to track and fix those issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-3659) Federation Router (hiding multiple RMs for ApplicationClientProtocol)

2016-12-21 Thread luhuichun (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15764528#comment-15764528
 ] 

luhuichun edited comment on YARN-3659 at 12/21/16 10:15 AM:


[~giovanni.fumarola] Hi Giovanni, I'd like to contribute and work on this, if 
you haven't started working on it yet. Thank you.


was (Author: luhuichun):
[~giovanni.fumarola] Hi Giovanni, we have recently been focusing on YARN 
federation; can we take some patches to work on? We are really interested in 
this feature. 

> Federation Router (hiding multiple RMs for ApplicationClientProtocol)
> -
>
> Key: YARN-3659
> URL: https://issues.apache.org/jira/browse/YARN-3659
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-3659.pdf
>
>
> This JIRA tracks the design/implementation of the layer for routing 
> ApplicationClientProtocol requests to the appropriate
> RM(s) in a federated YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5411) Create a proxy chain for ApplicationClientProtocol in the Router

2016-12-21 Thread luhuichun (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15764796#comment-15764796
 ] 

luhuichun edited comment on YARN-5411 at 12/21/16 10:14 AM:


[~giovanni.fumarola] Hi Giovanni, I'd like to contribute and work on this, if 
you haven't started working on it yet. Thank you.


was (Author: luhuichun):
[~giovanni.fumarola] Hi Giovanni, we have recently been focusing on YARN 
federation; can we take some patches to work on? We are really interested in 
this feature.

> Create a proxy chain for ApplicationClientProtocol in the Router
> 
>
> Key: YARN-5411
> URL: https://issues.apache.org/jira/browse/YARN-5411
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
>
> As detailed in the proposal in the umbrella JIRA, we are introducing a new 
> component that routes client requests to the appropriate ResourceManager(s). 
> This JIRA tracks the creation of a proxy for ApplicationClientProtocol in the 
> Router. This provides a placeholder for:
> 1) throttling mis-behaving clients (YARN-1546)
> 2) masking the access to multiple RMs (YARN-3659)
> We are planning to follow the interceptor pattern, as we did in YARN-2884, to 
> generalize the approach and have only dynamic coupling for Federation.
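
A minimal sketch of the interceptor pattern referred to above (all names are illustrative, not the eventual Router classes; the real chain would implement ApplicationClientProtocol):

{code:java}
/** Each interceptor handles a request and may delegate to the next in the chain. */
interface ClientRequestInterceptor {
  void setNextInterceptor(ClientRequestInterceptor next);
  String submitApplication(String appContext); // simplified stand-in for the protocol method
}

class ThrottlingInterceptor implements ClientRequestInterceptor {
  private ClientRequestInterceptor next;

  public void setNextInterceptor(ClientRequestInterceptor next) {
    this.next = next;
  }

  public String submitApplication(String appContext) {
    // e.g. reject or delay mis-behaving clients here (YARN-1546) ...
    return next.submitApplication(appContext);
  }
}

class FederationInterceptor implements ClientRequestInterceptor {
  public void setNextInterceptor(ClientRequestInterceptor next) { /* end of chain */ }

  public String submitApplication(String appContext) {
    // pick the right RM for this request (YARN-3659) and forward to it ...
    return "routed: " + appContext;
  }
}

public class ChainSketch {
  public static void main(String[] args) {
    ClientRequestInterceptor chain = new ThrottlingInterceptor();
    chain.setNextInterceptor(new FederationInterceptor());
    System.out.println(chain.submitApplication("app-42"));
  }
}
{code}

New behavior can then be added or removed by rewiring the chain, without touching the individual interceptors.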



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5866) [YARN-3368] Fix few issues reported by jshint in new YARN UI

2016-12-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1572#comment-1572
 ] 

Hadoop QA commented on YARN-5866:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  1m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5866 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12844215/YARN-5866.006.patch |
| Optional Tests |  asflicense  |
| uname | Linux eb71f2a00699 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f6e2521 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14411/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Fix few issues reported by jshint in new YARN UI
> 
>
> Key: YARN-5866
> URL: https://issues.apache.org/jira/browse/YARN-5866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-5866.001.patch, YARN-5866.002.patch, 
> YARN-5866.003.patch, YARN-5866.004.patch, YARN-5866.005.patch, 
> YARN-5866.006.patch
>
>
> There are a few minor issues reported by jshint (a JavaScript lint tool).
> This jira is to track and fix those issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5148) [YARN-3368] Add page to new YARN UI to view server side configurations/logs/JVM-metrics

2016-12-21 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15766634#comment-15766634
 ] 

Sunil G commented on YARN-5148:
---

[~Sreenath], could you please share some thoughts on having a read-only view for 
pretty JSON printing?

> [YARN-3368] Add page to new YARN UI to view server side 
> configurations/logs/JVM-metrics
> ---
>
> Key: YARN-5148
> URL: https://issues.apache.org/jira/browse/YARN-5148
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp, yarn-ui-v2
>Reporter: Wangda Tan
>Assignee: Kai Sasaki
>  Labels: oct16-medium
> Attachments: Screen Shot 2016-09-11 at 23.28.31.png, Screen Shot 
> 2016-09-13 at 22.27.00.png, UsingStringifyPrint.png, 
> YARN-5148-YARN-3368.01.patch, YARN-5148-YARN-3368.02.patch, 
> YARN-5148-YARN-3368.03.patch, YARN-5148-YARN-3368.04.patch, 
> YARN-5148-YARN-3368.05.patch, YARN-5148-YARN-3368.06.patch, 
> YARN-5148.07.patch, yarn-conf.png, yarn-tools.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5866) [YARN-3368] Fix few issues reported by jshint in new YARN UI

2016-12-21 Thread Akhil PB (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15766624#comment-15766624
 ] 

Akhil PB commented on YARN-5866:


[~sunilg] Attached the latest patch, which resolves the conflict issues.

> [YARN-3368] Fix few issues reported by jshint in new YARN UI
> 
>
> Key: YARN-5866
> URL: https://issues.apache.org/jira/browse/YARN-5866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-5866.001.patch, YARN-5866.002.patch, 
> YARN-5866.003.patch, YARN-5866.004.patch, YARN-5866.005.patch, 
> YARN-5866.006.patch
>
>
> There are a few minor issues reported by jshint (a JavaScript lint tool).
> This jira is to track and fix those issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5866) [YARN-3368] Fix few issues reported by jshint in new YARN UI

2016-12-21 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-5866:
---
Attachment: YARN-5866.006.patch

> [YARN-3368] Fix few issues reported by jshint in new YARN UI
> 
>
> Key: YARN-5866
> URL: https://issues.apache.org/jira/browse/YARN-5866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-5866.001.patch, YARN-5866.002.patch, 
> YARN-5866.003.patch, YARN-5866.004.patch, YARN-5866.005.patch, 
> YARN-5866.006.patch
>
>
> There are a few minor issues reported by jshint (a JavaScript lint tool).
> This jira is to track and fix those issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6017) node manager physical memory leak

2016-12-21 Thread chenrongwei (JIRA)
chenrongwei created YARN-6017:
-

 Summary: node manager physical memory leak
 Key: YARN-6017
 URL: https://issues.apache.org/jira/browse/YARN-6017
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Affects Versions: 2.7.1
 Environment: OS:
Linux guomai124041 2.6.32-431.el6.x86_64 #1 SMP Fri Nov 22 03:15:09 UTC 2013 
x86_64 x86_64 x86_64 GNU/Linux
jvm:
java version "1.7.0_65"
Java(TM) SE Runtime Environment (build 1.7.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
Reporter: chenrongwei


In our production environment, the NodeManager's JVM heap has been set to 
'-Xmx2048m', but we noticed that after running for a long time, the process's 
actual physical memory had reached 12g (we got this value from the top command, 
as follows).

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
31169 data  20   0 13.2g  12g 6092 S 16.9 13.0  49183:13 java




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5995) Add RMStateStore metrics to monitor all RMStateStoreEventTypeTransition performance

2016-12-21 Thread zhangyubiao (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15766538#comment-15766538
 ] 

zhangyubiao commented on YARN-5995:
---

[~sunilg], OK, I will go ahead with this in the coming days.

> Add RMStateStore metrics to monitor all RMStateStoreEventTypeTransition 
> performance
> ---
>
> Key: YARN-5995
> URL: https://issues.apache.org/jira/browse/YARN-5995
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: metrics, resourcemanager
>Affects Versions: 2.7.1
> Environment: CentOS7.2 Hadoop-2.7.1 
>Reporter: zhangyubiao
>  Labels: patch
> Attachments: YARN-5995.0001.patch, YARN-5995.0002.patch, 
> YARN-5995.patch
>
>
> Add RMStateStore metrics to monitor all RMStateStoreEventTypeTransition 
> performance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5866) [YARN-3368] Fix few issues reported by jshint in new YARN UI

2016-12-21 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15766531#comment-15766531
 ] 

Sunil G commented on YARN-5866:
---

[~akhilpb] patch_5 is not applying cleanly to trunk. Could you please rebase 
and attach a new patch?

> [YARN-3368] Fix few issues reported by jshint in new YARN UI
> 
>
> Key: YARN-5866
> URL: https://issues.apache.org/jira/browse/YARN-5866
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-5866.001.patch, YARN-5866.002.patch, 
> YARN-5866.003.patch, YARN-5866.004.patch, YARN-5866.005.patch
>
>
> There are a few minor issues reported by jshint (a JavaScript lint tool).
> This jira is to track and fix those issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5969) FairShareComparator getResourceUsage poor performance

2016-12-21 Thread zhangshilong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15766471#comment-15766471
 ] 

zhangshilong edited comment on YARN-5969 at 12/21/16 8:34 AM:
--

The ContainerAllocated picture shows container allocations per minute. 
After the patch, container allocations per minute improve by about 50%.
Obviously, the 500 apps finish faster after the patch.


was (Author: zsl2007):
The ContainerAllocated picture shows container allocations per minute. 
After the patch, container allocations per minute improve by about 50%.

> FairShareComparator getResourceUsage poor performance
> -
>
> Key: YARN-5969
> URL: https://issues.apache.org/jira/browse/YARN-5969
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: zhangshilong
>Assignee: zhangshilong
> Attachments: 20161206.patch, apprunning_after.png, 
> apprunning_before.png, containerAllocatedDelta_before.png, 
> containerAllocated_after.png, pending_after.png, pending_before.png
>
>
> In the FairShareComparator class, the performance of the getResourceUsage() 
> function is very poor: it can be executed more than 100,000,000 times per second.
> In our scenario, it takes 20 seconds per minute.  
> A simple solution is to reduce the number of calls to the function.
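
To illustrate the "reduce call counts" idea: the comparator can read each schedulable's usage once per comparison instead of re-calling getResourceUsage() for every term of the fair-share formula. A sketch with assumed, simplified types, not the actual FairScheduler code:

{code:java}
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

/** Minimal stand-in for a FairScheduler Schedulable. */
class Sched {
  final String name;
  final long weight;
  private final long usage; // in the real scheduler this is an expensive aggregate

  Sched(String name, long weight, long usage) {
    this.name = name;
    this.weight = weight;
    this.usage = usage;
  }

  long getResourceUsage() {
    return usage;
  }
}

public class FairShareSketch {
  public static void main(String[] args) {
    List<Sched> queues = Arrays.asList(
        new Sched("q1", 1, 80), new Sched("q2", 2, 50), new Sched("q3", 1, 20));

    // Read the expensive usage once per element per comparison instead of
    // re-calling getResourceUsage() for every term of the comparison.
    Comparator<Sched> byUsageOverWeight = (s1, s2) -> {
      long u1 = s1.getResourceUsage();
      long u2 = s2.getResourceUsage();
      return Double.compare((double) u1 / s1.weight, (double) u2 / s2.weight);
    };

    queues.sort(byUsageOverWeight);
    queues.forEach(s -> System.out.println(s.name));
  }
}
{code}

Going one step further, usages could be snapshotted into a map before the sort so each is computed exactly once per sort pass.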



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5969) FairShareComparator getResourceUsage poor performance

2016-12-21 Thread zhangshilong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15766471#comment-15766471
 ] 

zhangshilong commented on YARN-5969:


The ContainerAllocated picture shows container allocations per minute. 
After the patch, container allocations per minute improve by about 50%.

> FairShareComparator getResourceUsage poor performance
> -
>
> Key: YARN-5969
> URL: https://issues.apache.org/jira/browse/YARN-5969
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: zhangshilong
>Assignee: zhangshilong
> Attachments: 20161206.patch, apprunning_after.png, 
> apprunning_before.png, containerAllocatedDelta_before.png, 
> containerAllocated_after.png, pending_after.png, pending_before.png
>
>
> In the FairShareComparator class, the performance of the getResourceUsage() 
> function is very poor: it can be executed more than 100,000,000 times per second.
> In our scenario, it takes 20 seconds per minute.  
> A simple solution is to reduce the number of calls to the function.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5969) FairShareComparator getResourceUsage poor performance

2016-12-21 Thread zhangshilong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangshilong updated YARN-5969:
---
Attachment: containerAllocated_after.png
apprunning_after.png

> FairShareComparator getResourceUsage poor performance
> -
>
> Key: YARN-5969
> URL: https://issues.apache.org/jira/browse/YARN-5969
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: zhangshilong
>Assignee: zhangshilong
> Attachments: 20161206.patch, apprunning_after.png, 
> apprunning_before.png, containerAllocatedDelta_before.png, 
> containerAllocated_after.png, pending_after.png, pending_before.png
>
>
> In the FairShareComparator class, the performance of the getResourceUsage() 
> function is very poor: it can be executed more than 100,000,000 times per second.
> In our scenario, it takes 20 seconds per minute.  
> A simple solution is to reduce the number of calls to the function.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5969) FairShareComparator getResourceUsage poor performance

2016-12-21 Thread zhangshilong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangshilong updated YARN-5969:
---
Attachment: pending_before.png
pending_after.png
containerAllocatedDelta_before.png
apprunning_before.png

> FairShareComparator getResourceUsage poor performance
> -
>
> Key: YARN-5969
> URL: https://issues.apache.org/jira/browse/YARN-5969
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: zhangshilong
>Assignee: zhangshilong
> Attachments: 20161206.patch, apprunning_before.png, 
> containerAllocatedDelta_before.png, pending_after.png, pending_before.png
>
>
> In the FairShareComparator class, the performance of the getResourceUsage() 
> function is very poor: it can be executed more than 100,000,000 times per second.
> In our scenario, it takes 20 seconds per minute.  
> A simple solution is to reduce the number of calls to the function.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5969) FairShareComparator getResourceUsage poor performance

2016-12-21 Thread zhangshilong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15766437#comment-15766437
 ] 

zhangshilong commented on YARN-5969:


Test case: 500 apps, 3000 NM nodes.
Queues:
parent queue number: 100
leaf queue number per parent queue: 5
The 500 apps were submitted to 155 leaf queues, so a queue contains 4 apps on 
average. All apps are MapReduce jobs; one job contains 325 mappers and 44 
reducers, and every mapper/reducer just sleeps for 20 seconds.




> FairShareComparator getResourceUsage poor performance
> -
>
> Key: YARN-5969
> URL: https://issues.apache.org/jira/browse/YARN-5969
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: zhangshilong
>Assignee: zhangshilong
> Attachments: 20161206.patch
>
>
> in FairShareComparator class, the performance of function getResourceUsage()  
> is very poor. It will be executed above 100,000,000 times per second.
> In our scene, It  takes 20 seconds per minute.  
> A simple solution is to reduce call counts  of the function.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5995) Add RMStateStore metrics to monitor all RMStateStoreEventTypeTransition performance

2016-12-21 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15766430#comment-15766430
 ] 

Sunil G commented on YARN-5995:
---

bq. I think we can focus on writes first; reads only happen on RM startup.
Agree. Yes, we could have a few sub-tickets under this if needed. 

bq.total number of failed ops may be useful
Yes. We can add that too.

> Add RMStateStore metrics to monitor all RMStateStoreEventTypeTransition 
> performance
> ---
>
> Key: YARN-5995
> URL: https://issues.apache.org/jira/browse/YARN-5995
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: metrics, resourcemanager
>Affects Versions: 2.7.1
> Environment: CentOS7.2 Hadoop-2.7.1 
>Reporter: zhangyubiao
>  Labels: patch
> Attachments: YARN-5995.0001.patch, YARN-5995.0002.patch, 
> YARN-5995.patch
>
>
> Add RMStateStore metrics to monitor all RMStateStoreEventTypeTransition 
> performance



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5906) Update AppSchedulingInfo to use SchedulingPlacementSet

2016-12-21 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15766405#comment-15766405
 ] 

Sunil G commented on YARN-5906:
---

bq. Were you suggesting to move some implementations to an abstract base class 
to share between other implementations, or were you suggesting to move some 
private methods to the SchedulingPlacementSet interface?
I was more or less thinking of moving a few common methods to an abstract class.

bq. Not sure if I understand the question, different SchedulingPlacementSet 
could have different ordering of preferred node. Could you elaborate?
{{LocalitySchedulingPlacementSet#getPreferredNodeIterator}} currently tries to 
get a single node via the {{PlacementSetUtils.getSingleNode(clusterPlacementSet)}} 
call. Going forward, if we need different policies to select nodes for 
different allocations, I think we will be invoking {{getSingleNode}}, 
{{getMultiNode}}, etc. My doubt is that when we need to provide an iterator over 
a set of nodes arranged in some order (as per the configured policy), this code 
may become trickier. Could we do an interface model here, so that different node 
ordering policies could be injected? 
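
For example, an interface along these lines could be injected (purely illustrative; NodeOrderingPolicy and the class names are assumptions, not existing classes):

{code:java}
import java.util.Arrays;
import java.util.Collection;
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.stream.Collectors;

/** Pluggable policy deciding the order in which candidate nodes are offered. */
interface NodeOrderingPolicy<N> {
  Iterator<N> getPreferredNodeIterator(Collection<N> candidateNodes);
}

/** Example policy: order candidates by a caller-supplied comparator. */
class ComparatorOrderingPolicy<N> implements NodeOrderingPolicy<N> {
  private final Comparator<N> comparator;

  ComparatorOrderingPolicy(Comparator<N> comparator) {
    this.comparator = comparator;
  }

  @Override
  public Iterator<N> getPreferredNodeIterator(Collection<N> candidateNodes) {
    List<N> sorted =
        candidateNodes.stream().sorted(comparator).collect(Collectors.toList());
    return sorted.iterator();
  }
}

public class OrderingSketch {
  public static void main(String[] args) {
    // e.g. prefer "bigger" nodes (here just longer names, for demonstration)
    NodeOrderingPolicy<String> policy =
        new ComparatorOrderingPolicy<>(Comparator.comparingInt(String::length).reversed());
    policy.getPreferredNodeIterator(Arrays.asList("n1", "node-10", "n2"))
        .forEachRemaining(System.out::println);
  }
}
{code}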

> Update AppSchedulingInfo to use SchedulingPlacementSet
> --
>
> Key: YARN-5906
> URL: https://issues.apache.org/jira/browse/YARN-5906
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5906.1.patch, YARN-5906.2.patch, YARN-5906.3.patch, 
> YARN-5906.4.patch
>
>
> Currently AppSchedulingInfo simply stores resource requests, and the scheduler 
> makes decisions according to the stored requests. For example, CS/FS use 
> slightly different approaches to get pending resource requests and make delay 
> scheduling decisions. 
> There are several benefits to moving the pending resource request data 
> structure to SchedulingPlacementSet:
> 1) Delay scheduling logic should be agnostic to the scheduler; for example, CS 
> supports count-based delay and FS supports both count-based and time-based 
> delay. Ideally the scheduler should be able to choose which delay scheduling 
> policy to use.
> 2) In addition to 1), YARN-4902 has a proposal to support pluggable delay 
> scheduling behavior beyond locality-based (host->rack->offswitch), 
> which requires more flexibility.
> 3) To make YARN-4902 become real, instead of directly adding the new 
> resource request API to the client, we can have the scheduler use it internally 
> to make sure it is well defined. And AppSchedulingInfo/SchedulingPlacementSet 
> will be the perfect place to isolate which ResourceRequest implementation to 
> use.
> 4) Different scheduling requirements need different behaviors when checking 
> the ResourceRequest table.
> This JIRA is the 1st of several refactoring patches. It moves all 
> ResourceRequest data structures and logic to SchedulingPlacementSet. We need the 
> following changes to make it better structured:
> - Make delay scheduling a plugin of SchedulingPlacementSet
> - After YARN-4902 gets committed, change SchedulingPlacementSet to use 
> YARN-4902 internally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org