[jira] [Updated] (YARN-5433) Audit dependencies for Category-X

2016-10-25 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-5433:
--
Attachment: YARN-5433.03.patch

Posted patch v.3.

Included the full MPL v.1.1.

I analyzed a few more entries that were not previously analyzed by 
HADOOP-12893. Things like glassfish jasper (jsp) and servlet-api are actually 
redundant, as they are Oracle/Sun API specs that were included in Jetty (we 
already captured jsp and servlet-api under CDDL 1.0). So there are no new 
entries.

> Audit dependencies for Category-X
> -
>
> Key: YARN-5433
> URL: https://issues.apache.org/jira/browse/YARN-5433
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sean Busbey
>Assignee: Sangjin Lee
>Priority: Blocker
> Attachments: YARN-5433.01.patch, YARN-5433.02.patch, 
> YARN-5433.03.patch
>
>
> Recently Phoenix found some category-X dependencies in its build 
> (PHOENIX-3084, PHOENIX-3091), which also surfaced some problems in HBase 
> (HBASE-16260).
> Since the Timeline Server work brought in both of these as dependencies, we 
> should make sure we don't have any cat-X dependencies either. From what I've 
> seen in those projects, our choice of HBase version shouldn't be impacted, but 
> our Phoenix one is.
> Grepping our current dependency list for the timeline server component shows 
> some LGPL:
> {code}
> ...
> [INFO]net.sourceforge.findbugs:annotations:jar:1.3.2:compile
> ...
> {code}
> I haven't checked the rest of the dependencies that have changed since 
> HADOOP-12893 went in, so ATM I've filed this against YARN since that's where 
> this one example came in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5772) Replace old Hadoop logo with new one

2016-10-25 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15607514#comment-15607514
 ] 

Sunil G commented on YARN-5772:
---

Thanks [~ajisakaa]. I will commit the same.

> Replace old Hadoop logo with new one
> 
>
> Key: YARN-5772
> URL: https://issues.apache.org/jira/browse/YARN-5772
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Affects Versions: YARN-3368
>Reporter: Akira Ajisaka
>Assignee: Akhil PB
> Attachments: YARN-5772-YARN-3368.0001.patch, ui2-with-newlogo.png
>
>
> YARN-5161 added Apache Hadoop logo in the UI but the logo is old.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5773) RM recovery too slow due to LeafQueue#activateApplication()

2016-10-25 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15607493#comment-15607493
 ] 

Bibin A Chundatt commented on YARN-5773:


{quote}
we could only invoke activateApplications once after recovering all apps
{quote}
Is there any issue with the current handling, which is based on cluster 
resource? {{activateApplications()}} gets invoked in the following cases:
# Application finish
# Queue reinitialization (refresh, scheduler service start)
# Attempt add
# Node add, cluster resource update, etc.
I felt that handling recovery based on cluster resource can cover all of these 
cases (along with the log change for the case that is too costly); otherwise 
we would have to handle each case separately. We already do resource-based 
handling in many places, e.g. {{assignContainers}} based on pending resources, 
right? A rough sketch of the guard follows below.

Regarding recovery time, I am not sure the scheduler side can tell whether 
recovery is complete for all apps, since it is event based. As Rohith 
mentioned earlier, app recovery from the store and app recovery on the 
scheduler side are different.
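
As a hypothetical illustration of the cluster-resource guard idea above (names 
are made up; this is not the actual {{LeafQueue}} code):
{code}
// Hypothetical sketch: skip the costly activation scan while the cluster
// has no registered resources (e.g. during recovery, before NMs register).
public class ClusterResourceGuard {
  private long clusterMemoryMb = 0;  // updated on node add / resource update

  synchronized void activateApplicationsIfPossible() {
    if (clusterMemoryMb <= 0) {
      // The AM-limit check would reject everything anyway, so return early.
      return;
    }
    // AM-limit checks and activation of pending applications would run here.
  }
}
{code}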


> RM recovery too slow due to LeafQueue#activateApplication()
> ---
>
> Key: YARN-5773
> URL: https://issues.apache.org/jira/browse/YARN-5773
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-5773.0001.patch, YARN-5773.0002.patch, 
> YARN-5773.003.patch
>
>
> # Submit 10K applications to the default queue.
> # All applications are in the ACCEPTED state.
> # Now restart the ResourceManager.
> For each application recovered, {{LeafQueue#activateApplications()}} is 
> invoked, resulting in the AM limit check being done even before any 
> NodeManagers are registered.
> The total iteration count for N applications is about {{N(N+1)/2}}; for 
> {{10K}} applications that is roughly 50 million iterations, causing the RM 
> to take more than 10 minutes to become active.
> Since NM resources are not yet added during recovery, we should skip 
> {{activateApplications()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5433) Audit dependencies for Category-X

2016-10-25 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15607487#comment-15607487
 ] 

Sangjin Lee commented on YARN-5433:
---

Some/most of the ones that I marked as "already analyzed" are the ones that 
were resolved in your original spreadsheet. I think a few more may need to be 
analyzed as part of this, and I'll do that.

Regarding including the Mozilla Public License, this is what we're looking at: 
https://www.mozilla.org/en-US/MPL/1.1/

I'm just not sure whether this needs to be included in its entirety or whether 
only a certain section is relevant...

> Audit dependencies for Category-X
> -
>
> Key: YARN-5433
> URL: https://issues.apache.org/jira/browse/YARN-5433
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sean Busbey
>Assignee: Sangjin Lee
>Priority: Blocker
> Attachments: YARN-5433.01.patch, YARN-5433.02.patch
>
>
> Recently Phoenix found some category-X dependencies in its build 
> (PHOENIX-3084, PHOENIX-3091), which also surfaced some problems in HBase 
> (HBASE-16260).
> Since the Timeline Server work brought in both of these as dependencies, we 
> should make sure we don't have any cat-X dependencies either. From what I've 
> seen in those projects, our choice of HBase version shouldn't be impacted, but 
> our Phoenix one is.
> Grepping our current dependency list for the timeline server component shows 
> some LGPL:
> {code}
> ...
> [INFO]net.sourceforge.findbugs:annotations:jar:1.3.2:compile
> ...
> {code}
> I haven't checked the rest of the dependencies that have changed since 
> HADOOP-12893 went in, so ATM I've filed this against YARN since that's where 
> this one example came in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5433) Audit dependencies for Category-X

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15607477#comment-15607477
 ] 

Hadoop QA commented on YARN-5433:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 32s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 49s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 19s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 20s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 20s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 47s {color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 21s 
{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 152m 38s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
|   | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 |
| Timed out junit tests | org.apache.hadoop.hdfs.TestFileChecksum |
|   | org.apache.hadoop.hdfs.TestDFSUtil |
|   | org.apache.hadoop.hdfs.TestWriteConfigurationToDFS |
|   | org.apache.hadoop.hdfs.TestDFSStripedInputStream |
|   | org.apache.hadoop.hdfs.server.balancer.TestBalancer |
|   | org.apache.hadoop.hdfs.TestDecommissionWithStriped |
|   | org.apache.hadoop.cli.TestAclCLIWithPosixAclInheritance |
|   | org.apache.hadoop.hdfs.TestGetFileChecksum |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835233/YARN-5433.02.patch |
| JIRA Issue | YARN-5433 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 4fad9a7fa3be 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d88dca8 |
| Default Java | 1.8.0_101 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13515/artifact/patchprocess/patch-unit-root.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/13515/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13515/testReport/ |
| asflicense | 

[jira] [Commented] (YARN-5780) [YARN native service] Allowing YARN native services to post data to timeline service V.2

2016-10-25 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15607406#comment-15607406
 ] 

Vrushali C commented on YARN-5780:
--

[~gtCarrera9] let me see if I can take a stab at this. I will reach out to you 
offline and see what the scope is and what can be done here. 

> [YARN native service] Allowing YARN native services to post data to timeline 
> service V.2
> 
>
> Key: YARN-5780
> URL: https://issues.apache.org/jira/browse/YARN-5780
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Li Lu
>Assignee: Vrushali C
> Attachments: YARN-5780.poc.patch
>
>
> The basic end-to-end workflow of timeline service v.2 has been merged into 
> trunk. In YARN native services, we would like to post some service-specific 
> data to timeline v.2. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5780) [YARN native service] Allowing YARN native services to post data to timeline service V.2

2016-10-25 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C reassigned YARN-5780:


Assignee: Vrushali C

> [YARN native service] Allowing YARN native services to post data to timeline 
> service V.2
> 
>
> Key: YARN-5780
> URL: https://issues.apache.org/jira/browse/YARN-5780
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Li Lu
>Assignee: Vrushali C
> Attachments: YARN-5780.poc.patch
>
>
> The basic end-to-end workflow of timeline service v.2 has been merged into 
> trunk. In YARN native services, we would like to post some service-specific 
> data to timeline v.2. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5433) Audit dependencies for Category-X

2016-10-25 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15607384#comment-15607384
 ] 

Xiao Chen commented on YARN-5433:
-

Thanks [~sjlee0] for the new rev!

For MPL, since it's new, we need to put the whole MPL license text in there. 
The script can't handle it, because we need to find the official license text 
and wrap it at 80 characters. I used http://appincredible.com/online/word-wrap/ 
to do the wrap in HADOOP-12893.
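
If an offline alternative helps, here is a minimal greedy word-wrap sketch in 
Java ({{Wrap80}} and its input/output file arguments are made-up names, and it 
assumes plain single-spaced text):
{code}
import java.nio.file.*;
import java.util.*;

public class Wrap80 {
  public static void main(String[] args) throws Exception {
    List<String> out = new ArrayList<>();
    for (String line : Files.readAllLines(Paths.get(args[0]))) {
      StringBuilder cur = new StringBuilder();
      for (String word : line.split(" ")) {
        // Start a new output line once appending would exceed 80 columns.
        if (cur.length() > 0 && cur.length() + 1 + word.length() > 80) {
          out.add(cur.toString());
          cur.setLength(0);
        }
        if (cur.length() > 0) {
          cur.append(' ');
        }
        cur.append(word);
      }
      out.add(cur.toString());
    }
    Files.write(Paths.get(args[1]), out);
  }
}
{code}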

Also, it seems the spreadsheet here has some {{already analyzed? == Y?}} 
entries (namely the glassfish ones). What's the plan on those? I think we 
should have the spreadsheet all {{Done==Y}} before resolving the jira. :)

> Audit dependencies for Category-X
> -
>
> Key: YARN-5433
> URL: https://issues.apache.org/jira/browse/YARN-5433
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sean Busbey
>Assignee: Sangjin Lee
>Priority: Blocker
> Attachments: YARN-5433.01.patch, YARN-5433.02.patch
>
>
> Recently Phoenix found some category-X dependencies in its build 
> (PHOENIX-3084, PHOENIX-3091), which also surfaced some problems in HBase 
> (HBASE-16260).
> Since the Timeline Server work brought in both of these as dependencies, we 
> should make sure we don't have any cat-X dependencies either. From what I've 
> seen in those projects, our choice of HBase version shouldn't be impacted, but 
> our Phoenix one is.
> Grepping our current dependency list for the timeline server component shows 
> some LGPL:
> {code}
> ...
> [INFO]net.sourceforge.findbugs:annotations:jar:1.3.2:compile
> ...
> {code}
> I haven't checked the rest of the dependencies that have changed since 
> HADOOP-12893 went in, so ATM I've filed this against YARN since that's where 
> this one example came in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5770) Performance improvement of native-services REST API service

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15607341#comment-15607341
 ] 

Hadoop QA commented on YARN-5770:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
57s {color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 9s 
{color} | {color:red} hadoop-yarn-services-api in yarn-native-services failed. 
{color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 53s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 in yarn-native-services has 314 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 27s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api
 in yarn-native-services has 5 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 26s 
{color} | {color:red} hadoop-yarn-slider-core in yarn-native-services failed. 
{color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 25s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications: 
The patch generated 3 new + 492 unchanged - 7 fixed = 495 total (was 499) 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 7s 
{color} | {color:red} hadoop-yarn-services-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
59s {color} | {color:green} hadoop-yarn-slider-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
32s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api
 generated 0 new + 1 unchanged - 4 fixed = 1 total (was 5) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 23s 
{color} | {color:red} hadoop-yarn-slider-core in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 19s {color} 
| {color:red} hadoop-yarn-slider-core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s 
{color} | {color:green} hadoop-yarn-services-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 17s 
{color} | {color:red} The patch generated 11 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 47s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
slider.core.registry.docstore.TestPublishedConfigurationOutputter |
\\
\\
|| Subsystem || 

[jira] [Commented] (YARN-4014) Support user cli interface in for Application Priority

2016-10-25 Thread stefanlee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15607338#comment-15607338
 ] 

stefanlee commented on YARN-4014:
-

[~rohithsharma] [~jianhe] thanks for sharing this jira. I have added this code 
to Hadoop 2.4.0, but when I run {{mvn package}} on the project, it fails with 
errors like the following:
{code}
[ERROR] /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/UpdateApplicationPriorityResponsePBImpl.java:[76,10] error: cannot find symbol
[ERROR] symbol:   method setApplicationPriority(PriorityProto)
[ERROR] location: variable builder of type Builder
[ERROR] /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/UpdateApplicationPriorityResponsePBImpl.java:[88,10] error: cannot find symbol
[ERROR] symbol:   method hasApplicationPriority()
[ERROR] location: variable p of type UpdateApplicationPriorityResponseProtoOrBuilder
[ERROR] /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/UpdateApplicationPriorityResponsePBImpl.java:[92,32] error: cannot find symbol
[ERROR] symbol:   method getApplicationPriority()
[ERROR] location: variable p of type UpdateApplicationPriorityResponseProtoOrBuilder
[ERROR] /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/UpdateApplicationPriorityResponsePBImpl.java:[100,13] error: cannot find symbol
{code}
But I have modified ApplicationClientProtocol.java, 
applicationclient_protocol.proto, etc., and imported the related classes at 
the beginning:
{code}
import org.apache.hadoop.yarn.proto.YarnServiceProtos.UpdateApplicationPriorityResponseProto;
import org.apache.hadoop.yarn.proto.YarnServiceProtos.UpdateApplicationPriorityResponseProtoOrBuilder;
{code}
How can I solve this problem?

> Support user cli interface in for Application Priority
> --
>
> Key: YARN-4014
> URL: https://issues.apache.org/jira/browse/YARN-4014
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client, resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: 0001-YARN-4014-V1.patch, 0001-YARN-4014.patch, 
> 0002-YARN-4014.patch, 0003-YARN-4014.patch, 0004-YARN-4014.patch, 
> 0004-YARN-4014.patch
>
>
> Track the changes for user-RM client protocol i.e ApplicationClientProtocol 
> changes and discussions in this jira.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5770) Performance improvement of native-services REST API service

2016-10-25 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-5770:

Attachment: YARN-5770-yarn-native-services.004.patch

Oops, I had an unnecessary public modifier in the interface. Removed it and 
uploaded the 004 patch.

> Performance improvement of native-services REST API service
> ---
>
> Key: YARN-5770
> URL: https://issues.apache.org/jira/browse/YARN-5770
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
> Attachments: YARN-5770-yarn-native-services.003.patch, 
> YARN-5770-yarn-native-services.004.patch, 
> YARN-5770-yarn-native-services.phase1.001.patch, 
> YARN-5770-yarn-native-services.phase1.002.patch
>
>
> Make enhancements and bug-fixes to eliminate frequent full GC of the REST API 
> Service. Dependent on a few Slider fixes, like SLIDER-1168, as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4757) [Umbrella] Simplified discovery of services via DNS mechanisms

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15607259#comment-15607259
 ] 

Hadoop QA commented on YARN-4757:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s {color} 
| {color:red} YARN-4757 does not apply to YARN-4757. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808955/YARN-4757-YARN-4757.005.patch
 |
| JIRA Issue | YARN-4757 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13516/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> [Umbrella] Simplified discovery of services via DNS mechanisms
> --
>
> Key: YARN-4757
> URL: https://issues.apache.org/jira/browse/YARN-4757
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Jonathan Maron
> Attachments: 
> 0001-YARN-4757-Initial-code-submission-for-DNS-Service.patch, YARN-4757- 
> Simplified discovery of services via DNS mechanisms.pdf, 
> YARN-4757-YARN-4757.001.patch, YARN-4757-YARN-4757.002.patch, 
> YARN-4757-YARN-4757.003.patch, YARN-4757-YARN-4757.004.patch, 
> YARN-4757-YARN-4757.005.patch, YARN-4757.001.patch, YARN-4757.002.patch
>
>
> [See overview doc at YARN-4692, copying the sub-section (3.2.10.2) to track 
> all related efforts.]
> In addition to completing the present story of service-registry (YARN-913), 
> we also need to simplify the access to the registry entries. The existing 
> read mechanisms of the YARN Service Registry are currently limited to a 
> registry-specific (java) API and a REST interface. In practice, this makes it 
> very difficult to wire up existing clients and services. For example, dynamic 
> configuration of dependent end-points of a service is not easy to implement 
> using the present registry-read mechanisms, *without* code changes to 
> existing services.
> A good solution to this is to expose the registry information through a more 
> generic and widely used discovery mechanism: DNS. Service Discovery via DNS 
> uses the well-known DNS interfaces to browse the network for services. 
> YARN-913 in fact talked about such a DNS-based mechanism but left it as a 
> future task. Having the registry information exposed via DNS simplifies the 
> life of services.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5680) Add 2 new fields in Slider status output - image-name and is-privileged-container

2016-10-25 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-5680:

Fix Version/s: yarn-native-services

> Add 2 new fields in Slider status output - image-name and 
> is-privileged-container
> -
>
> Key: YARN-5680
> URL: https://issues.apache.org/jira/browse/YARN-5680
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-5680-yarn-native-services.001.patch
>
>
> We need to add 2 new fields in Slider status output for docker provider - 
> image-name and is-privileged-container. The native services REST API needs to 
> expose these 2 attribute values to the end-users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-10-25 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606182#comment-15606182
 ] 

Sunil G edited comment on YARN-5545 at 10/26/16 2:58 AM:
-

Extremely sorry for the comment; I posted it to the wrong JIRA. Please 
disregard the comment below.

.
Currently we are trying to invoke activateApplications while recovering each 
application. Yes, as of now nodes are getting registered later in the flow. 
But for the scheduler, we need not consider such timing cases from the 
RMAppManager/RM end. That said, it is important to separate two issues out 
here
..


was (Author: sunilg):
Currently we are trying to invoke {{activateApplications}} while recovering 
each application. Yes, as of now nodes are getting registered later in the 
flow. But for the scheduler, we need not consider such timing cases from the 
RMAppManager/RM end. That said, it is important to separate two issues out 
here:
- The recovery call flow for each app in the scheduler should not invoke 
{{activateApplications}} every time.
- {{activateApplications}} itself could be improved by considering AM head 
room. But that could be done in another ticket, as this one focuses on fixing 
the recovery call flow.

To address issue 1, we could invoke {{activateApplications}} only once after 
recovering all apps. By this, we can remove the timing dependency on the RM 
end for recovery. With this change, even if there is a change in the RM 
recovery model, the scheduler would complete its recovery flow without causing 
any performance issue or waiting for ResourceTrackerService to register nodes. 
Thanks [~leftnoteasy].

Thoughts?

> App submit failure on queue with label when default queue partition capacity 
> is zero
> 
>
> Key: YARN-5545
> URL: https://issues.apache.org/jira/browse/YARN-5545
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5545.0001.patch, YARN-5545.0002.patch, 
> YARN-5545.0003.patch, YARN-5545.004.patch, capacity-scheduler.xml
>
>
> Configure capacity scheduler 
> yarn.scheduler.capacity.root.default.capacity=0
> yarn.scheduler.capacity.root.queue1.accessible-node-labels.labelx.capacity=50
> yarn.scheduler.capacity.root.default.accessible-node-labels.labelx.capacity=50
> Submit application as below
> ./yarn jar 
> ../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha2-SNAPSHOT-tests.jar
>  sleep -Dmapreduce.job.node-label-expression=labelx 
> -Dmapreduce.job.queuename=default -m 1 -r 1 -mt 1000 -rt 1
> {noformat}
> 2016-08-21 18:21:31,375 INFO mapreduce.JobSubmitter: Cleaning up the staging 
> area /tmp/hadoop-yarn/staging/root/.staging/job_1471670113386_0001
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed 
> to submit application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:316)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:255)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1344)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1790)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1341)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1362)
>   at org.apache.hadoop.mapreduce.SleepJob.run(SleepJob.java:273)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.mapreduce.SleepJob.main(SleepJob.java:194)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>   at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>   at 
> org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
>   at 
> org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> 

[jira] [Commented] (YARN-5680) Add 2 new fields in Slider status output - image-name and is-privileged-container

2016-10-25 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15607222#comment-15607222
 ] 

Gour Saha commented on YARN-5680:
-

Thanks [~billie.rinaldi]. The patch looks good. I ran the tests and the 2 tests 
that fail are not related to this patch. They are getting fixed in YARN-5690. 

+1 for the patch.


> Add 2 new fields in Slider status output - image-name and 
> is-privileged-container
> -
>
> Key: YARN-5680
> URL: https://issues.apache.org/jira/browse/YARN-5680
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Billie Rinaldi
> Attachments: YARN-5680-yarn-native-services.001.patch
>
>
> We need to add 2 new fields in Slider status output for docker provider - 
> image-name and is-privileged-container. The native services REST API needs to 
> expose these 2 attribute values to the end-users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5773) RM recovery too slow due to LeafQueue#activateApplication()

2016-10-25 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15607225#comment-15607225
 ] 

Sunil G commented on YARN-5773:
---

Currently we are trying to invoke activateApplications while recovering each 
application. Yes, as of now nodes are getting registered later in the flow. 
But for the scheduler, we need not consider such timing cases from the 
RMAppManager/RM end. That said, it is important to separate two issues out 
here:
# The recovery call flow for each app in the scheduler should not invoke 
activateApplications every time.
# activateApplications itself could be improved by considering AM head room. 
But that could be done in another ticket, as this one focuses on fixing the 
recovery call flow.
To address issue 1, we could invoke activateApplications only once after 
recovering all apps; a rough sketch follows below. By this, we remove the 
timing dependency on the RM end for recovery. With this change, even if the RM 
recovery model changes, the scheduler would complete its recovery flow without 
causing any performance issue or waiting for ResourceTrackerService to 
register nodes. Thanks [~leftnoteasy] for the thoughts.
Thoughts?
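
A hypothetical sketch of that ordering (illustrative names only, not the 
actual scheduler code):
{code}
// Hypothetical sketch: defer per-app activation until recovery completes.
public class RecoveryAwareQueue {
  private boolean recovering = true;

  synchronized void addApplicationAttempt(String applicationId) {
    // Bookkeeping for the submitted/recovered attempt would happen here.
    if (!recovering) {
      activateApplications();  // normal path: activate eagerly
    }
  }

  synchronized void recoveryCompleted() {
    recovering = false;
    activateApplications();  // one pass instead of N(N+1)/2 total work
  }

  private void activateApplications() {
    // AM-limit checks and activation of pending applications would run here.
  }
}
{code}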

> RM recovery too slow due to LeafQueue#activateApplication()
> ---
>
> Key: YARN-5773
> URL: https://issues.apache.org/jira/browse/YARN-5773
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-5773.0001.patch, YARN-5773.0002.patch, 
> YARN-5773.003.patch
>
>
> # Submit 10K applications to the default queue.
> # All applications are in the ACCEPTED state.
> # Now restart the ResourceManager.
> For each application recovered, {{LeafQueue#activateApplications()}} is 
> invoked, resulting in the AM limit check being done even before any 
> NodeManagers are registered.
> The total iteration count for N applications is about {{N(N+1)/2}}; for 
> {{10K}} applications that is roughly 50 million iterations, causing the RM 
> to take more than 10 minutes to become active.
> Since NM resources are not yet added during recovery, we should skip 
> {{activateApplications()}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5770) Performance improvement of native-services REST API service

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15607223#comment-15607223
 ] 

Hadoop QA commented on YARN-5770:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 48s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
56s {color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 10s 
{color} | {color:red} hadoop-yarn-services-api in yarn-native-services failed. 
{color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 54s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 in yarn-native-services has 314 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 26s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api
 in yarn-native-services has 5 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 26s 
{color} | {color:red} hadoop-yarn-slider-core in yarn-native-services failed. 
{color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 24s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications: 
The patch generated 4 new + 492 unchanged - 7 fixed = 496 total (was 499) 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 7s 
{color} | {color:red} hadoop-yarn-services-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
58s {color} | {color:green} hadoop-yarn-slider-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
32s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api
 generated 0 new + 1 unchanged - 4 fixed = 1 total (was 5) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 24s 
{color} | {color:red} hadoop-yarn-slider-core in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 16s {color} 
| {color:red} hadoop-yarn-slider-core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s 
{color} | {color:green} hadoop-yarn-services-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 17s 
{color} | {color:red} The patch generated 11 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 28s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
slider.core.registry.docstore.TestPublishedConfigurationOutputter |
\\
\\
|| Subsystem || 

[jira] [Commented] (YARN-5770) Performance improvement of native-services REST API service

2016-10-25 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15607160#comment-15607160
 ] 

Gour Saha commented on YARN-5770:
-

[~billie.rinaldi] thank you for reviewing the patch. I uploaded a 003 patch 
incorporating your comments. Note, I removed the phase1 keyword from the patch 
file name, since I will file new sub-tasks for subsequent phases of performance 
improvement.

> Performance improvement of native-services REST API service
> ---
>
> Key: YARN-5770
> URL: https://issues.apache.org/jira/browse/YARN-5770
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
> Attachments: YARN-5770-yarn-native-services.003.patch, 
> YARN-5770-yarn-native-services.phase1.001.patch, 
> YARN-5770-yarn-native-services.phase1.002.patch
>
>
> Make enhancements and bug-fixes to eliminate frequent full GC of the REST API 
> Service. Dependent on a few Slider fixes, like SLIDER-1168, as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5770) Performance improvement of native-services REST API service

2016-10-25 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-5770:

Attachment: YARN-5770-yarn-native-services.003.patch

> Performance improvement of native-services REST API service
> ---
>
> Key: YARN-5770
> URL: https://issues.apache.org/jira/browse/YARN-5770
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
> Attachments: YARN-5770-yarn-native-services.003.patch, 
> YARN-5770-yarn-native-services.phase1.001.patch, 
> YARN-5770-yarn-native-services.phase1.002.patch
>
>
> Make enhancements and bug-fixes to eliminate frequent full GC of the REST API 
> Service. Dependent on a few Slider fixes, like SLIDER-1168, as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5752) TestLocalResourcesTrackerImpl#testLocalResourceCache times out

2016-10-25 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15607028#comment-15607028
 ] 

Miklos Szegedi commented on YARN-5752:
--

Thank you, [~ebadger] for the fix! You might want to consider changing the 
timeout value on testHierarchicalLocalCacheDirectories to 1, too. I checked 
and the runtime is comparable to testLocalResourceCache.
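
For reference, a minimal JUnit 4 sketch of raising such a per-test timeout 
(the 10000 ms value and the standalone class are assumptions for illustration, 
not taken from the actual patch):
{code}
import org.junit.Test;

public class LocalResourcesTrackerTimeoutSketch {
  // The real test body lives in TestLocalResourcesTrackerImpl; the raised
  // timeout annotation is the only point of this sketch.
  @Test(timeout = 10000)
  public void testHierarchicalLocalCacheDirectories() {
    // test body elided in this sketch
  }
}
{code}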

> TestLocalResourcesTrackerImpl#testLocalResourceCache times out
> --
>
> Key: YARN-5752
> URL: https://issues.apache.org/jira/browse/YARN-5752
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: YARN-5752.001.patch
>
>
> {noformat}
> java.lang.Exception: test timed out after 1000 milliseconds
>   at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:133)
>   at java.io.OutputStreamWriter.write(OutputStreamWriter.java:220)
>   at java.io.Writer.write(Writer.java:157)
>   at org.apache.log4j.helpers.QuietWriter.write(QuietWriter.java:48)
>   at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:310)
>   at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
>   at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>   at 
> org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>   at org.apache.log4j.Category.callAppenders(Category.java:206)
>   at org.apache.log4j.Category.forcedLog(Category.java:391)
>   at org.apache.log4j.Category.log(Category.java:856)
>   at 
> org.apache.commons.logging.impl.Log4JLogger.info(Log4JLogger.java:176)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.register(AsyncDispatcher.java:209)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestLocalResourcesTrackerImpl.testLocalResourceCache(TestLocalResourcesTrackerImpl.java:258)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5433) Audit dependencies for Category-X

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15607017#comment-15607017
 ] 

Hadoop QA commented on YARN-5433:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
0s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 18s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 27s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 105m 14s 
{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 24s 
{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 177m 10s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestFsDatasetCache |
| Timed out junit tests | org.apache.hadoop.hdfs.protocol.TestAnnotations |
|   | org.apache.hadoop.hdfs.TestFileChecksum |
|   | org.apache.hadoop.cli.TestAclCLI |
|   | org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete |
|   | org.apache.hadoop.hdfs.TestWriteConfigurationToDFS |
|   | org.apache.hadoop.hdfs.server.balancer.TestBalancer |
|   | org.apache.hadoop.hdfs.TestFileConcurrentReader |
|   | org.apache.hadoop.hdfs.TestDecommissionWithStriped |
|   | org.apache.hadoop.hdfs.TestDistributedFileSystem |
|   | org.apache.hadoop.hdfs.protocol.TestLocatedBlock |
|   | org.apache.hadoop.hdfs.TestEncryptedTransfer |
|   | org.apache.hadoop.hdfs.TestParallelUnixDomainRead |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835214/YARN-5433.01.patch |
| JIRA Issue | YARN-5433 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux ae86d3b00f82 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 86c735b |
| Default Java | 1.8.0_101 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13512/artifact/patchprocess/patch-unit-root.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/13512/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 

[jira] [Commented] (YARN-5765) LinuxContainerExecutor creates appcache and its subdirectories with wrong group owner.

2016-10-25 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15607003#comment-15607003
 ] 

Naganarasimha G R commented on YARN-5765:
-

bq. I think doing "perm = perm | S_ISGID" will set the setGID bit regardless of 
the original permission,
Agreed, the less risky option is to go with option 1, i.e. {{umask(0027);}} 
before calling *mkdir*. That said, the doubt I have is whether setGID is 
required only for the user's appcache folder or recursively for all of its 
children. If it is done recursively, is there any folder we create that does 
not require it?

In the meantime, do we need to raise a new jira for 3.0.0-alpha2, so that a 
release note gets added specifically for it, which is not required for 2.8 
and 2.9?

> LinuxContainerExecutor creates appcache and its subdirectories with wrong 
> group owner.
> --
>
> Key: YARN-5765
> URL: https://issues.apache.org/jira/browse/YARN-5765
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
>
> LinuxContainerExecutor creates usercache/\{userId\}/appcache/\{appId\} with 
> wrong group owner, causing Log aggregation and ShuffleHandler to fail because 
> node manager process does not have permission to read the files under the 
> directory.
> This can be easily reproduced by enabling LCE and submitting a MR example 
> job. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4126) RM should not issue delegation tokens in unsecure mode

2016-10-25 Thread Venkat Ranganathan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606996#comment-15606996
 ] 

Venkat Ranganathan commented on YARN-4126:
--

[~jianhe] IIRC, the issue is that the delegation tokens, if present, are used 
and will cause jobs to fail after the token expires, as there are no renewers. 
We had a customer complaining about it, and the solution was to either break 
the job into multiple steps or use a secure cluster. Honestly, I don't know if 
any other app relies on getting a delegation token from the RM in an unsecure 
cluster; maybe this is isolated to fixing Oozie alone to not request an RM 
delegation token in an unsecure cluster (Oozie has several conditionals for 
handling secure/unsecure).

If we do issue tokens in an unsecure cluster, we should make sure the token is 
nothing but a dummy token and goes unused.
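
As a minimal sketch of that gating (not the actual {{ClientRMService}} code, 
though {{UserGroupInformation.isSecurityEnabled()}} is the standard Hadoop 
check):
{code}
import org.apache.hadoop.security.UserGroupInformation;

public class DelegationTokenGate {
  // Only issue an RM delegation token when security is enabled: in an
  // unsecure cluster there is no renewer, so the token would simply expire
  // and fail long-running jobs.
  static boolean shouldIssueDelegationToken() {
    return UserGroupInformation.isSecurityEnabled();
  }
}
{code}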

> RM should not issue delegation tokens in unsecure mode
> --
>
> Key: YARN-4126
> URL: https://issues.apache.org/jira/browse/YARN-4126
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Bibin A Chundatt
> Fix For: 3.0.0-alpha1
>
> Attachments: 0001-YARN-4126.patch, 0002-YARN-4126.patch, 
> 0003-YARN-4126.patch, 0004-YARN-4126.patch, 0005-YARN-4126.patch, 
> 0006-YARN-4126.patch
>
>
> ClientRMService#getDelegationToken is currently  returning a delegation token 
> in insecure mode. We should not return the token if it's in insecure mode. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4126) RM should not issue delegation tokens in unsecure mode

2016-10-25 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606964#comment-15606964
 ] 

Jian He commented on YARN-4126:
---

bq.  what was the motivation for removing the delegation tokens and throwing 
the exception
I had seen an issue where the RM issued a delegation token in an unsecure 
cluster and that caused problems; however, I couldn't find the exact issue in 
my database. [~venkatnrangan] and I decided not to issue the token. The 
original code itself already throws the exception if not allowed; there was no 
intention to change that part of the logic.

[~daryn], 
bq. The general contract for servers is to return null when tokens are not 
applicable. I'd rather see this reverted from trunk and never integrated
The original code also does not return null; it returns a real token. Are you 
suggesting we should change the code to return null in 3.x?



> RM should not issue delegation tokens in unsecure mode
> --
>
> Key: YARN-4126
> URL: https://issues.apache.org/jira/browse/YARN-4126
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Bibin A Chundatt
> Fix For: 3.0.0-alpha1
>
> Attachments: 0001-YARN-4126.patch, 0002-YARN-4126.patch, 
> 0003-YARN-4126.patch, 0004-YARN-4126.patch, 0005-YARN-4126.patch, 
> 0006-YARN-4126.patch
>
>
> ClientRMService#getDelegationToken is currently  returning a delegation token 
> in insecure mode. We should not return the token if it's in insecure mode. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5433) Audit dependencies for Category-X

2016-10-25 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-5433:
--
Attachment: YARN-5433.02.patch

Posted patch v.2.

Generated the output using the script provided by [~xiaochen].

> Audit dependencies for Category-X
> -
>
> Key: YARN-5433
> URL: https://issues.apache.org/jira/browse/YARN-5433
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sean Busbey
>Assignee: Sangjin Lee
>Priority: Blocker
> Attachments: YARN-5433.01.patch, YARN-5433.02.patch
>
>
> Recently phoenix has found some category-x dependencies in their build 
> (PHOENIX-3084, PHOENIX-3091), which also showed some problems in HBase 
> (HBASE-16260).
> Since the Timeline Server work brought in both of these as dependencies, we 
> should make sure we don't have any cat-x dependencies either. From what I've 
> seen in those projects, our choice of HBase version shouldn't be impacted but 
> our Phoenix one is.
> Grepping our current dependency list for the timeline server component shows 
> some LGPL:
> {code}
> ...
> [INFO]net.sourceforge.findbugs:annotations:jar:1.3.2:compile
> ...
> {code}
> I haven't checked the rest of the dependencies that have changed since 
> HADOOP-12893 went in, so ATM I've filed this against YARN since that's where 
> this one example came in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5774) MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set yarn.scheduler.minimum-allocation-mb to 0.

2016-10-25 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606943#comment-15606943
 ] 

Miklos Szegedi commented on YARN-5774:
--

Hi [~yufeigu]!

I am wondering if the normalize methods of DefaultResourceCalculator and 
DominantResourceCalculator would be a better place for this check, since they 
cover all other possible callers. Also, it might make sense to throw if min > 
max.
{code}
if (Resources.equals(incrementResource, Resources.none())) {
  throw new RuntimeException("Increment resource cannot be zero!");
}
{code}
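Something along these lines, as a sketch of where the guard could live (not 
the actual calculator code; the {{getMemorySize()}} comparison is only an 
illustration of the min > max check):
{code}
// guard at the top of normalize(...) so every caller is covered,
// not just the RMAppManager path
static void validate(Resource min, Resource max, Resource increment) {
  if (Resources.equals(increment, Resources.none())) {
    throw new IllegalArgumentException("Increment resource cannot be zero!");
  }
  if (min.getMemorySize() > max.getMemorySize()) {
    throw new IllegalArgumentException("Minimum cannot exceed maximum!");
  }
}
{code}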

> MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set 
> yarn.scheduler.minimum-allocation-mb to 0.
> 
>
> Key: YARN-5774
> URL: https://issues.apache.org/jira/browse/YARN-5774
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5774.001.patch
>
>
> MR jobs get stuck in ACCEPTED status without any progress in Fair Scheduler 
> because there is no resource request for the AM. This happens when you 
> configure {{yarn.scheduler.minimum-allocation-mb}} to zero.
> The problem is in code used by both Capacity Scheduler and Fair Scheduler: 
> {{scheduler.increment-allocation-mb}} is a concept in FS but not in CS, so 
> the common code in RMAppManager passes 
> {{yarn.scheduler.minimum-allocation-mb}} as the increment when it tries to 
> normalize the resource requests.
> {code}
>  SchedulerUtils.normalizeRequest(amReq, scheduler.getResourceCalculator(),
>   scheduler.getClusterResource(),
>   scheduler.getMinimumResourceCapability(),
>   scheduler.getMaximumResourceCapability(),
>   scheduler.getMinimumResourceCapability());  --> incrementResource 
> should be passed here.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5433) Audit dependencies for Category-X

2016-10-25 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606872#comment-15606872
 ] 

Sangjin Lee commented on YARN-5433:
---

Thanks for the pointer [~xiaochen]!

I played with the python script and generated some content. I'll use the 
output to update the patch.

> Audit dependencies for Category-X
> -
>
> Key: YARN-5433
> URL: https://issues.apache.org/jira/browse/YARN-5433
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sean Busbey
>Assignee: Sangjin Lee
>Priority: Blocker
> Attachments: YARN-5433.01.patch
>
>
> Recently phoenix has found some category-x dependencies in their build 
> (PHOENIX-3084, PHOENIX-3091), which also showed some problems in HBase 
> (HBASE-16260).
> Since the Timeline Server work brought in both of these as dependencies, we 
> should make sure we don't have any cat-x dependencies either. From what I've 
> seen in those projects, our choice of HBase version shouldn't be impacted but 
> our Phoenix one is.
> Grepping our current dependency list for the timeline server component shows 
> some LGPL:
> {code}
> ...
> [INFO]net.sourceforge.findbugs:annotations:jar:1.3.2:compile
> ...
> {code}
> I haven't checked the rest of the dependencies that have changed since 
> HADOOP-12893 went in, so ATM I've filed this against YARN since that's where 
> this one example came in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4907) Make all MockRM#waitForState consistent.

2016-10-25 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606864#comment-15606864
 ] 

Yufei Gu commented on YARN-4907:


Thanks [~miklos.szeg...@cloudera.com] for the review. I will upload a new patch 
including the case in BaseContainerManagerTest.waitForNMContainerState.

> Make all MockRM#waitForState consistent. 
> -
>
> Key: YARN-4907
> URL: https://issues.apache.org/jira/browse/YARN-4907
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-4907.001.patch
>
>
> There are some inconsistencies among these {{waitForState}} in {{MockRM}}:
> 1. Some {{waitForState}} return a boolean while others don't.  
> 2. Some {{waitForState}} don't have a timeout, they can wait for ever. 
> 3. Some {{waitForState}} use LOG.info and others use {{System.out.println}} 
> to print messages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5734) OrgQueue for easy CapacityScheduler queue configuration management

2016-10-25 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606834#comment-15606834
 ] 

Zhe Zhang commented on YARN-5734:
-

Since there is some overlap between this JIRA's objectives and those of 
YARN-5724, we plan to have a meetup to better discuss these 2 projects. Thanks 
[~wangda] and [~xgong] for proposing this. Please join in-person or remotely if 
you are interested.

*When*: Wednesday 10/26 2~4pm
*Where*: LinkedIn HQ, 950 West Maude Avenue, Sunnyvale, CA. (If you do plan to 
attend in-person, please email z...@apache.org)
*Confcall*: https://bluejeans.com/654904000 

We will post notes after the meetup.

> OrgQueue for easy CapacityScheduler queue configuration management
> --
>
> Key: YARN-5734
> URL: https://issues.apache.org/jira/browse/YARN-5734
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Min Shen
>Assignee: Min Shen
> Attachments: OrgQueue_Design_v0.pdf
>
>
> The current xml based configuration mechanism in CapacityScheduler makes it 
> very inconvenient to apply any changes to the queue configurations. We saw 2 
> main drawbacks in the file based configuration mechanism:
> # This makes it very inconvenient to automate queue configuration updates. 
> For example, in our cluster setup, we leverage the queue mapping feature from 
> YARN-2411 to route users to their dedicated organization queues. It could be 
> extremely cumbersome to keep updating the config file to manage the very 
> dynamic mapping between users to organizations.
> # Even if a user has admin permission on one specific queue, that user is 
> unable to make any queue configuration changes to resize the subqueues, 
> change queue ACLs, or create new queues. All these operations need to be 
> performed in a centralized manner by the cluster administrators.
> With these current limitations, we realized the need of a more flexible 
> configuration mechanism that allows queue configurations to be stored and 
> managed more dynamically. We developed the feature internally at LinkedIn 
> which introduces the concept of MutableConfigurationProvider. What it 
> essentially does is provide a set of configuration mutation APIs that allow 
> queue configurations to be updated externally via REST. 
> When performing the queue configuration changes, the queue ACLs will be 
> honored, which means only queue administrators can make configuration changes 
> to a given queue. MutableConfigurationProvider is implemented as a pluggable 
> interface, and we have one implementation of this interface which is based on 
> Derby embedded database.
> This feature has been deployed on LinkedIn's Hadoop cluster for a year now, 
> and has gone through several iterations of gathering feedback from users 
> and improving accordingly. With this feature, cluster administrators are able 
> to automate many of the queue configuration management tasks, such as setting 
> queue capacities to adjust cluster resources between queues based on 
> established resource consumption patterns, or updating the user-to-queue 
> mappings. We have attached our design documentation to this ticket 
> and would like to receive feedback from the community regarding how to best 
> integrate it with the latest version of YARN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5587) Add support for resource profiles

2016-10-25 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606760#comment-15606760
 ] 

Arun Suresh commented on YARN-5587:
---

Hmmm... Actually, I think we have to deal with one other issue in the 
{{AMRMClient}}:
Once a Container is returned by the RM, the AMRMClient uses the 
{{getMatchingRequests()}} API to retrieve the matching ContainerRequests and 
remove them.

This implies that we need an equivalence relationship from a *Resource* to a 
*ProfileCapability*, else the matching won't work. Thus I think we might have 
to normalize all "named" ProfileCapabilities to either a plain Resource or a 
NONE ProfileCapability (with the override being the complete resource); see 
the sketch after this comment.

Either that, or we have to ensure that these new requests are tagged with 
unique 'allocationRequestIds', for which we already have a separate 
{{getMatchingRequests}} that matches ContainerRequests to Containers only based 
on the allocationRequestId.
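To make the equivalence idea concrete, a sketch of the normalization step (the 
{{ProfileCapability}} accessors and the profiles map are assumptions, since 
the class is still taking shape on the branch):
{code}
// collapse a named profile to a concrete Resource so that
// getMatchingRequests() can keep matching on Resource alone
Resource effectiveCapability(ProfileCapability pc,
    Map<String, Resource> registeredProfiles) {
  Resource base = registeredProfiles.getOrDefault(
      pc.getProfileName(), Resources.none());
  // an explicit override wins over the named profile's values
  return pc.getProfileCapabilityOverride() != null
      ? pc.getProfileCapabilityOverride() : base;
}
{code}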



> Add support for resource profiles
> -
>
> Key: YARN-5587
> URL: https://issues.apache.org/jira/browse/YARN-5587
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-5587-YARN-3926.001.patch, 
> YARN-5587-YARN-3926.002.patch, YARN-5587-YARN-3926.003.patch, 
> YARN-5587-YARN-3926.004.patch, YARN-5587-YARN-3926.005.patch
>
>
> Add support for resource profiles on the RM side to allow users to use 
> shorthands to specify resource requirements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4126) RM should not issue delegation tokens in unsecure mode

2016-10-25 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606758#comment-15606758
 ] 

Andrew Wang commented on YARN-4126:
---

One little request, if we do revert this, let's please do so under a new JIRA 
since this was released in 3.0.0-alpha1. This way the changelog for alpha1 -> 
alpha2 will show the revert.

> RM should not issue delegation tokens in unsecure mode
> --
>
> Key: YARN-4126
> URL: https://issues.apache.org/jira/browse/YARN-4126
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Bibin A Chundatt
> Fix For: 3.0.0-alpha1
>
> Attachments: 0001-YARN-4126.patch, 0002-YARN-4126.patch, 
> 0003-YARN-4126.patch, 0004-YARN-4126.patch, 0005-YARN-4126.patch, 
> 0006-YARN-4126.patch
>
>
> ClientRMService#getDelegationToken is currently  returning a delegation token 
> in insecure mode. We should not return the token if it's in insecure mode. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4457) Cleanup unchecked types for EventHandler

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606752#comment-15606752
 ] 

Hadoop QA commented on YARN-4457:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 6s {color} 
| {color:red} YARN-4457 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804512/YARN-4457.005.patch |
| JIRA Issue | YARN-4457 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13513/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Cleanup unchecked types for EventHandler
> 
>
> Key: YARN-4457
> URL: https://issues.apache.org/jira/browse/YARN-4457
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: YARN-4457.001.patch, YARN-4457.002.patch, 
> YARN-4457.003.patch, YARN-4457.004.patch, YARN-4457.005.patch
>
>
> The EventHandler class is often used in an untyped context resulting in a 
> bunch of warnings about unchecked usage.  The culprit is the 
> {{Dispatcher.getHandler()}} method.  Fixing the typing on the method to 
> return {{EventHandler<Event>}} instead of the raw {{EventHandler}} clears up 
> the errors and doesn't introduce any incompatible changes.  In the case that 
> some code does:
> {code}
> EventHandler h = dispatcher.getHandler();
> {code}
> it will still work and will issue a compiler warning about raw types.  There 
> are, however, no instances of this issue in the current source base.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4457) Cleanup unchecked types for EventHandler

2016-10-25 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606745#comment-15606745
 ] 

Miklos Szegedi commented on YARN-4457:
--

+1 (non-binding). I verified and it looks good to me. Thank you, [~templedf]!

> Cleanup unchecked types for EventHandler
> 
>
> Key: YARN-4457
> URL: https://issues.apache.org/jira/browse/YARN-4457
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: YARN-4457.001.patch, YARN-4457.002.patch, 
> YARN-4457.003.patch, YARN-4457.004.patch, YARN-4457.005.patch
>
>
> The EventHandler class is often used in an untyped context resulting in a 
> bunch of warnings about unchecked usage.  The culprit is the 
> {{Dispatcher.getHandler()}} method.  Fixing the typing on the method to 
> return {{EventHandler<Event>}} instead of the raw {{EventHandler}} clears up 
> the errors and doesn't introduce any incompatible changes.  In the case that 
> some code does:
> {code}
> EventHandler h = dispatcher.getHandler();
> {code}
> it will still work and will issue a compiler warning about raw types.  There 
> are, however, no instances of this issue in the current source base.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5587) Add support for resource profiles

2016-10-25 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606739#comment-15606739
 ] 

Arun Suresh commented on YARN-5587:
---

I feel we should try to keep this simple. Given that we have already introduced 
a {{ProfileCapability}} class that takes a profile name and a possible 
Capability override, I propose:
# We create a special {{ProfileCapability}} with profile name *NONE*, where 
the user only specifies the capability (Resource). This would be equivalent to 
our old {{Resource}}.
# We replace the capability (Resource) argument in both the 
{{ContainerRequest}} and {{ResourceRequest}} with {{ProfileCapability}}. Maybe 
for backward compatibility we have one constructor that takes a Resource and 
one that takes a ProfileCapability, and convert a Resource to 
ProfileCapability(NONE, Resource) in the AMRMClient; see the sketch below.
# In the rest of the {{AMRMClient}} codebase, including the 
{{RemoteRequestsTable}}, we just use {{ProfileCapability}}.
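A rough sketch of what point 2 could look like in {{ContainerRequest}} 
(constructor shapes and {{ProfileCapability.newInstance}} are assumptions for 
illustration, not the patch):
{code}
// old-style callers keep compiling: a plain Resource is wrapped as profile NONE
public ContainerRequest(Resource capability, String[] nodes, String[] racks,
    Priority priority) {
  this(ProfileCapability.newInstance("NONE", capability), nodes, racks,
      priority);
}

// new-style callers pass the profile (with optional override) directly
public ContainerRequest(ProfileCapability capability, String[] nodes,
    String[] racks, Priority priority) {
  this.capability = capability;
  this.nodes = nodes;
  this.racks = racks;
  this.priority = priority;
}
{code}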


> Add support for resource profiles
> -
>
> Key: YARN-5587
> URL: https://issues.apache.org/jira/browse/YARN-5587
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-5587-YARN-3926.001.patch, 
> YARN-5587-YARN-3926.002.patch, YARN-5587-YARN-3926.003.patch, 
> YARN-5587-YARN-3926.004.patch, YARN-5587-YARN-3926.005.patch
>
>
> Add support for resource profiles on the RM side to allow users to use 
> shorthands to specify resource requirements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4907) Make all MockRM#waitForState consistent.

2016-10-25 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606716#comment-15606716
 ] 

Miklos Szegedi commented on YARN-4907:
--

+1 (non-binding) I verified and it looks good to me. Thank you, [~yufeigu]!

You might want to change the wait logic in 
{{BaseContainerManagerTest.waitForNMContainerState}} as well; for instance, 
something along the lines of the sketch below.
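One consistent shape the helpers could converge on (a sketch assuming a 
bounded wait that returns a boolean is the target, and that MockRM's LOG is in 
scope; not code from the patch):
{code}
boolean waitForState(RMApp app, RMAppState expected, long timeoutMs)
    throws InterruptedException {
  long deadline = System.currentTimeMillis() + timeoutMs;
  while (app.getState() != expected) {
    if (System.currentTimeMillis() > deadline) {
      LOG.info("Timed out waiting for " + expected + ", app is in "
          + app.getState());
      return false;    // bounded wait: never spins forever
    }
    Thread.sleep(100); // poll interval
  }
  return true;
}
{code}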

> Make all MockRM#waitForState consistent. 
> -
>
> Key: YARN-4907
> URL: https://issues.apache.org/jira/browse/YARN-4907
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-4907.001.patch
>
>
> There are some inconsistencies among these {{waitForState}} in {{MockRM}}:
> 1. Some {{waitForState}} return a boolean while others don't.  
> 2. Some {{waitForState}} don't have a timeout, they can wait for ever. 
> 3. Some {{waitForState}} use LOG.info and others use {{System.out.println}} 
> to print messages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5433) Audit dependencies for Category-X

2016-10-25 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606711#comment-15606711
 ] 

Xiao Chen commented on YARN-5433:
-

Thanks [~sjlee0] for working on this (and the ping).

For HADOOP-12893, the 
[spreadsheet|https://docs.google.com/spreadsheets/d/1HL2b4PSdQMZDVJmum1GIKrteFr2oainApTLiJTPnfd4/edit#gid=1060066002]
 has a 'generate.py' tab, containing the code used to generate the LICENSE 
(which licenses to skip, etc.). Probably overkill here, but that's more 
accurate than my memory. :)

So:
- No need to add NOTICE for BSD here. See 
https://issues.apache.org/jira/browse/HADOOP-12893?focusedCommentId=15284739=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15284739
 for details.
- MPL and CDDL are OK. We should add them to LICENSE.txt under the 
corresponding license. There's already {{servlet-api 2.5}} in there under 
CDDL 1.0, so we can skip that.
- Didn't check all, but if there are any dependencies with multiple licenses, 
we can choose the most friendly one.

> Audit dependencies for Category-X
> -
>
> Key: YARN-5433
> URL: https://issues.apache.org/jira/browse/YARN-5433
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sean Busbey
>Assignee: Sangjin Lee
>Priority: Blocker
> Attachments: YARN-5433.01.patch
>
>
> Recently phoenix has found some category-x dependencies in their build 
> (PHOENIX-3084, PHOENIX-3091), which also showed some problems in HBase 
> (HBASE-16260).
> Since the Timeline Server work brought in both of these as dependencies, we 
> should make sure we don't have any cat-x dependencies either. From what I've 
> seen in those projects, our choice of HBase version shouldn't be impacted but 
> our Phoenix one is.
> Grepping our current dependency list for the timeline server component shows 
> some LGPL:
> {code}
> ...
> [INFO]net.sourceforge.findbugs:annotations:jar:1.3.2:compile
> ...
> {code}
> I haven't checked the rest of the dependencies that have changed since 
> HADOOP-12893 went in, so ATM I've filed this against YARN since that's where 
> this one example came in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4734) Merge branch:YARN-3368 to trunk

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606709#comment-15606709
 ] 

Hadoop QA commented on YARN-4734:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 6s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 49s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 10s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
47s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
15s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-assemblies hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 57s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
13s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 34 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 9s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-assemblies hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 36s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 56s {color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 25s 
{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 171m 27s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |

[jira] [Updated] (YARN-5433) Audit dependencies for Category-X

2016-10-25 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-5433:
--
Attachment: YARN-5433.01.patch

Attaching the first version of the patch.

- excluded findbugs annotations from the dependencies
- added notices for the new 3-clause BSD binaries (ANTLR, StringTemplate, and 
Sqlline)

Any feedback on this patch or the analysis in the linked spreadsheet is 
welcome. Thanks!

> Audit dependencies for Category-X
> -
>
> Key: YARN-5433
> URL: https://issues.apache.org/jira/browse/YARN-5433
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sean Busbey
>Assignee: Sangjin Lee
>Priority: Blocker
> Attachments: YARN-5433.01.patch
>
>
> Recently phoenix has found some category-x dependencies in their build 
> (PHOENIX-3084, PHOENIX-3091), which also showed some problems in HBase 
> (HBASE-16260).
> Since the Timeline Server work brought in both of these as dependencies, we 
> should make sure we don't have any cat-x dependencies either. From what I've 
> seen in those projects, our choice of HBase version shouldn't be impacted but 
> our Phoenix one is.
> Grepping our current dependency list for the timeline server component shows 
> some LGPL:
> {code}
> ...
> [INFO]net.sourceforge.findbugs:annotations:jar:1.3.2:compile
> ...
> {code}
> I haven't checked the rest of the dependencies that have changed since 
> HADOOP-12893 went in, so ATM I've filed this against YARN since that's where 
> this one example came in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4126) RM should not issue delegation tokens in unsecure mode

2016-10-25 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606628#comment-15606628
 ] 

Robert Kanter commented on YARN-4126:
-

That's a good point [~daryn].  
[~jianhe], what was the motivation for removing the delegation tokens and 
throwing the exception?  The JIRA description doesn't actually say.

> RM should not issue delegation tokens in unsecure mode
> --
>
> Key: YARN-4126
> URL: https://issues.apache.org/jira/browse/YARN-4126
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Bibin A Chundatt
> Fix For: 3.0.0-alpha1
>
> Attachments: 0001-YARN-4126.patch, 0002-YARN-4126.patch, 
> 0003-YARN-4126.patch, 0004-YARN-4126.patch, 0005-YARN-4126.patch, 
> 0006-YARN-4126.patch
>
>
> ClientRMService#getDelegationToken is currently  returning a delegation token 
> in insecure mode. We should not return the token if it's in insecure mode. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5767) Fix the order that resources are cleaned up from the local Public/Private caches

2016-10-25 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606624#comment-15606624
 ] 

Miklos Szegedi commented on YARN-5767:
--

+1 (non-binding) Thank you, [~ctrezzo]!

I have a few optional comments:

testLRUAcrossTrackers:
The test does not check whether the right resource was deleted from each list. 
You might want to use {{resources.getLocalRsrc().containsKey}} here, just like 
in testPositiveRefCount (see the sketch below).

LocalCacheCleanerStats:
It would be useful for future debugging if {{toStringDetailed()}} printed out 
the actual resource paths, not just the size per user.
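For concreteness, the kind of assertion meant above might look like this (a 
sketch only; {{deletedKey}} and {{retainedKey}} are made-up placeholders for 
the test's resource keys):
{code}
// verify the cleaner removed exactly the expected entry from each tracker
assertFalse("evicted resource should be gone",
    resources.getLocalRsrc().containsKey(deletedKey));
assertTrue("recently used resource should survive",
    resources.getLocalRsrc().containsKey(retainedKey));
{code}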


> Fix the order that resources are cleaned up from the local Public/Private 
> caches
> 
>
> Key: YARN-5767
> URL: https://issues.apache.org/jira/browse/YARN-5767
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.6.0, 2.7.0, 3.0.0-alpha1
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Attachments: YARN-5767-trunk-v1.patch, YARN-5767-trunk-v2.patch, 
> YARN-5767-trunk-v3.patch
>
>
> If you look at {{ResourceLocalizationService#handleCacheCleanup}}, you can 
> see that public resources are added to the {{ResourceRetentionSet}} first 
> followed by private resources:
> {code:java}
> private void handleCacheCleanup(LocalizationEvent event) {
>   ResourceRetentionSet retain =
> new ResourceRetentionSet(delService, cacheTargetSize);
>   retain.addResources(publicRsrc);
>   if (LOG.isDebugEnabled()) {
> LOG.debug("Resource cleanup (public) " + retain);
>   }
>   for (LocalResourcesTracker t : privateRsrc.values()) {
> retain.addResources(t);
> if (LOG.isDebugEnabled()) {
>   LOG.debug("Resource cleanup " + t.getUser() + ":" + retain);
> }
>   }
>   //TODO Check if appRsrcs should also be added to the retention set.
> }
> {code}
> Unfortunately, if we look at {{ResourceRetentionSet#addResources}} we see 
> that this means public resources are deleted first until the target cache 
> size is met:
> {code:java}
> public void addResources(LocalResourcesTracker newTracker) {
>   for (LocalizedResource resource : newTracker) {
> currentSize += resource.getSize();
> if (resource.getRefCount() > 0) {
>   // always retain resources in use
>   continue;
> }
> retain.put(resource, newTracker);
>   }
> for (Iterator<Map.Entry<LocalizedResource, LocalResourcesTracker>> i =
>  retain.entrySet().iterator();
>currentSize - delSize > targetSize && i.hasNext();) {
> Map.Entry<LocalizedResource, LocalResourcesTracker> rsrc = i.next();
> LocalizedResource resource = rsrc.getKey();
> LocalResourcesTracker tracker = rsrc.getValue();
> if (tracker.remove(resource, delService)) {
>   delSize += resource.getSize();
>   i.remove();
> }
>   }
> }
> {code}
> The result of this is that resources in the private cache are only deleted in 
> the cases where:
> # The cache size is larger than the target cache size and the public cache is 
> empty.
> # The cache size is larger than the target cache size and everything in the 
> public cache is being used by a running container.
> For clusters that primarily use the public cache (i.e. make use of the shared 
> cache), this means that the most commonly used resources can be deleted 
> before old resources in the private cache. Furthermore, the private cache can 
> continue to grow over time causing more and more churn in the public cache.
> Additionally, the same problem exists within the private cache. Since 
> resources are added to the retention set on a user by user basis, resources 
> will get cleaned up one user at a time in the order that privateRsrc.values() 
> returns the LocalResourcesTracker. So if user1 has 10MB in their cache and 
> user2 has 100MB in their cache and the target size of the cache is 50MB, 
> user1 could potentially have their entire cache removed before anything is 
> deleted from the user2 cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4126) RM should not issue delegation tokens in unsecure mode

2016-10-25 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606587#comment-15606587
 ] 

Daryn Sharp commented on YARN-4126:
---

The general contract for servers is to return null when tokens are not 
applicable. This violates that contract and throws an exception. How is a 
generalized client supposed to pre-meditate fetching a token? And how is it 
supposed to handle a generic IOE?

I'd rather see this reverted from trunk and never integrated. We've 
historically had lots of problems with all the security-enabled conditionals, 
which is why one of my multi-year-old endeavors is to have tokens always 
enabled and gut the security conditionals. I've always admired the fact that 
yarn unconditionally used them... This is a step backwards.
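For what it's worth, the client-side pattern the null contract enables looks 
roughly like this (a sketch; the fetch call is generic, not a specific API):
{code}
Token<?> token = client.getDelegationToken(renewer);
if (token != null) {
  credentials.addToken(token.getService(), token);
}
// with an exception instead of null, every call site needs a try/catch and a
// way to tell "tokens not applicable" apart from a genuine failure
{code}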


> RM should not issue delegation tokens in unsecure mode
> --
>
> Key: YARN-4126
> URL: https://issues.apache.org/jira/browse/YARN-4126
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Bibin A Chundatt
> Fix For: 3.0.0-alpha1
>
> Attachments: 0001-YARN-4126.patch, 0002-YARN-4126.patch, 
> 0003-YARN-4126.patch, 0004-YARN-4126.patch, 0005-YARN-4126.patch, 
> 0006-YARN-4126.patch
>
>
> ClientRMService#getDelegationToken is currently  returning a delegation token 
> in insecure mode. We should not return the token if it's in insecure mode. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5716) Add global scheduler interface definition and update CapacityScheduler to use it.

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606560#comment-15606560
 ] 

Hadoop QA commented on YARN-5716:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 13 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
0s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
47s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 43s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 16s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 59s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 142 
new + 1467 unchanged - 164 fixed = 1609 total (was 1631) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s 
{color} | {color:green} hadoop-yarn-project_hadoop-yarn generated 0 new + 6484 
unchanged - 10 fixed = 6484 total (was 6494) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 928 unchanged - 10 fixed = 928 total (was 938) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 54s {color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 54s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 143m 46s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.TestContainerManagerSecurity |
|   | hadoop.yarn.server.TestMiniYarnClusterNodeUtilization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA 

[jira] [Updated] (YARN-5780) [YARN native service] Allowing YARN native services to post data to timeline service V.2

2016-10-25 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5780:

Attachment: YARN-5780.poc.patch

POC patch to launch a timeline client and post timeline data. This patch 
applies to the latest slider develop branch. There was some nontrivial work to 
make this patch work with Hadoop trunk (which has the timeline v.2 code), but 
I didn't include those changes since this feature should be targeted to the 
native service branch of YARN; by then those obstacles should have been 
removed.
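For reference, the trunk-side posting path looks roughly like this (a sketch 
based on my understanding of the timeline v.2 client API at the time of the 
merge; the entity type and id are made up):
{code}
TimelineClient client = TimelineClient.createTimelineClient(appId);
client.init(conf);
client.start();

TimelineEntity entity = new TimelineEntity();
entity.setType("SLIDER_COMPONENT");  // hypothetical entity type
entity.setId("component_0001");      // hypothetical entity id
client.putEntities(entity);          // posts to the per-app collector
{code}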

> [YARN native service] Allowing YARN native services to post data to timeline 
> service V.2
> 
>
> Key: YARN-5780
> URL: https://issues.apache.org/jira/browse/YARN-5780
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Li Lu
> Attachments: YARN-5780.poc.patch
>
>
> The basic end-to-end workflow of timeline service v.2 has been merged into 
> trunk. In YARN native services, we would like to post some service-specific 
> data to timeline v.2. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5780) [YARN native service] Allowing YARN native services to post data to timeline service V.2

2016-10-25 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5780:

Assignee: (was: Li Lu)

> [YARN native service] Allowing YARN native services to post data to timeline 
> service V.2
> 
>
> Key: YARN-5780
> URL: https://issues.apache.org/jira/browse/YARN-5780
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Li Lu
>
> The basic end-to-end workflow of timeline service v.2 has been merged into 
> trunk. In YARN native services, we would like to post some service-specific 
> data to timeline v.2. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5765) LinuxContainerExecutor creates appcache and its subdirectories with wrong group owner.

2016-10-25 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5765:
--
Target Version/s: 2.8.0, 3.0.0-alpha2  (was: 2.8.0)

> LinuxContainerExecutor creates appcache and its subdirectories with wrong 
> group owner.
> --
>
> Key: YARN-5765
> URL: https://issues.apache.org/jira/browse/YARN-5765
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
>
> LinuxContainerExecutor creates usercache/\{userId\}/appcache/\{appId\} with 
> wrong group owner, causing log aggregation and the ShuffleHandler to fail 
> because the node manager process does not have permission to read the files 
> under the directory.
> This can be easily reproduced by enabling LCE and submitting a MR example 
> job. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5780) [YARN native service] Allowing YARN native services to post data to timeline service V.2

2016-10-25 Thread Li Lu (JIRA)
Li Lu created YARN-5780:
---

 Summary: [YARN native service] Allowing YARN native services to 
post data to timeline service V.2
 Key: YARN-5780
 URL: https://issues.apache.org/jira/browse/YARN-5780
 Project: Hadoop YARN
  Issue Type: New Feature
Reporter: Li Lu
Assignee: Li Lu


The basic end-to-end workflow of timeline service v.2 has been merged into 
trunk. In YARN native services, we would like to post some service-specific 
data to timeline v.2. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5680) Add 2 new fields in Slider status output - image-name and is-privileged-container

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606496#comment-15606496
 ] 

Hadoop QA commented on YARN-5680:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
28s {color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s 
{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 58s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 in yarn-native-services has 314 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 28s 
{color} | {color:red} hadoop-yarn-slider-core in yarn-native-services failed. 
{color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 24s 
{color} | {color:red} hadoop-yarn-slider-core in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 29s {color} 
| {color:red} hadoop-yarn-slider-core in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 17s 
{color} | {color:red} The patch generated 11 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 15s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
slider.core.registry.docstore.TestPublishedConfigurationOutputter |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835198/YARN-5680-yarn-native-services.001.patch
 |
| JIRA Issue | YARN-5680 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3527fbe0de1b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 201b0b8 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/13511/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core-warnings.html
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/13511/artifact/patchprocess/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core.txt
 |
| javadoc | 

[jira] [Commented] (YARN-5388) Deprecate and remove DockerContainerExecutor

2016-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606468#comment-15606468
 ] 

Hudson commented on YARN-5388:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10677 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10677/])
YARN-5388. Deprecate and remove DockerContainerExecutor. (Daniel Templeton 
via kasha) (kasha: rev de6faae97c0937dcd969386b12283d60c22dcb02)
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestDockerContainerExecutor.java
* (edit) hadoop-project/src/site/site.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/DockerContainerExecutor.md.vm
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestDockerContainerExecutorWithMocks.java
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DockerContainerExecutor.java


> Deprecate and remove DockerContainerExecutor
> 
>
> Key: YARN-5388
> URL: https://issues.apache.org/jira/browse/YARN-5388
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-5388.001.patch, YARN-5388.002.patch, 
> YARN-5388.003.patch, YARN-5388.branch-2.001.patch, 
> YARN-5388.branch-2.002.patch, YARN-5388.branch-2.003.patch
>
>
> Because the {{DockerContainerExecutor}} overrides the {{writeLaunchEnv()}} 
> method, it must also have the wildcard processing logic from 
> YARN-4958/YARN-5373 added to it.  Without it, the use of -libjars will fail 
> unless wildcarding is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5680) Add 2 new fields in Slider status output - image-name and is-privileged-container

2016-10-25 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-5680:
-
Attachment: YARN-5680-yarn-native-services.001.patch

> Add 2 new fields in Slider status output - image-name and 
> is-privileged-container
> -
>
> Key: YARN-5680
> URL: https://issues.apache.org/jira/browse/YARN-5680
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Billie Rinaldi
> Attachments: YARN-5680-yarn-native-services.001.patch
>
>
> We need to add 2 new fields in Slider status output for docker provider - 
> image-name and is-privileged-container. The native services REST API needs to 
> expose these 2 attribute values to the end-users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4330) MiniYARNCluster is showing multiple Failed to instantiate default resource calculator warning messages.

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606444#comment-15606444
 ] 

Hadoop QA commented on YARN-4330:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} YARN-4330 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12771077/YARN-4330.01.patch |
| JIRA Issue | YARN-4330 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13510/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> MiniYARNCluster is showing multiple  Failed to instantiate default resource 
> calculator warning messages.
> 
>
> Key: YARN-4330
> URL: https://issues.apache.org/jira/browse/YARN-4330
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test, yarn
>Affects Versions: 2.8.0
> Environment: OSX, JUnit
>Reporter: Steve Loughran
>Assignee: Varun Saxena
>Priority: Blocker
> Attachments: YARN-4330.01.patch
>
>
> Whenever I try to start a MiniYARNCluster on Branch-2 (commit #0b61cca), I 
> see multiple stack traces warning me that a resource calculator plugin could 
> not be created
> {code}
> (ResourceCalculatorPlugin.java:getResourceCalculatorPlugin(184)) - 
> java.lang.UnsupportedOperationException: Could not determine OS: Failed to 
> instantiate default resource calculator.
> java.lang.UnsupportedOperationException: Could not determine OS
> {code}
> This is a minicluster. It doesn't need resource calculation. It certainly 
> doesn't need test logs being cluttered with even more stack traces which will 
> only generate false alarms about tests failing. 
> There needs to be a way to turn this off, and the minicluster should have it 
> that way by default.
> Being ruthless and marking as a blocker, because its a fairly major 
> regression for anyone testing with the minicluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4330) MiniYARNCluster is showing multiple Failed to instantiate default resource calculator warning messages.

2016-10-25 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606445#comment-15606445
 ] 

Eric Badger commented on YARN-4330:
---

I applied the most recent patch to branch-2.8 and ran 
{{TestMRTimelineEventHandling}}, which uses the MiniYARNCluster (I'm running 
MacOS Sierra). 2 of the 3 tests in the class consistently fail with the 
stacktraces shown below. I didn't dig into the tests any further, but I 
imagine this is reproducible on other machines. 

{noformat}
Running org.apache.hadoop.mapred.TestMRTimelineEventHandling
Tests run: 3, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 93.809 sec <<< 
FAILURE! - in org.apache.hadoop.mapred.TestMRTimelineEventHandling
testMRTimelineEventHandling(org.apache.hadoop.mapred.TestMRTimelineEventHandling)
  Time elapsed: 35.821 sec  <<< FAILURE!
java.lang.AssertionError: expected:<2> but was:<3>
at 
org.apache.hadoop.mapred.TestMRTimelineEventHandling.testMRTimelineEventHandling(TestMRTimelineEventHandling.java:105)

testMapreduceJobTimelineServiceEnabled(org.apache.hadoop.mapred.TestMRTimelineEventHandling)
  Time elapsed: 32.703 sec  <<< FAILURE!
java.lang.AssertionError: expected:<2> but was:<3>
at 
org.apache.hadoop.mapred.TestMRTimelineEventHandling.testMapreduceJobTimelineServiceEnabled(TestMRTimelineEventHandling.java:162)
{noformat}

> MiniYARNCluster is showing multiple  Failed to instantiate default resource 
> calculator warning messages.
> 
>
> Key: YARN-4330
> URL: https://issues.apache.org/jira/browse/YARN-4330
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test, yarn
>Affects Versions: 2.8.0
> Environment: OSX, JUnit
>Reporter: Steve Loughran
>Assignee: Varun Saxena
>Priority: Blocker
> Attachments: YARN-4330.01.patch
>
>
> Whenever I try to start a MiniYARNCluster on Branch-2 (commit #0b61cca), I 
> see multiple stack traces warning me that a resource calculator plugin could 
> not be created
> {code}
> (ResourceCalculatorPlugin.java:getResourceCalculatorPlugin(184)) - 
> java.lang.UnsupportedOperationException: Could not determine OS: Failed to 
> instantiate default resource calculator.
> java.lang.UnsupportedOperationException: Could not determine OS
> {code}
> This is a minicluster. It doesn't need resource calculation. It certainly 
> doesn't need test logs being cluttered with even more stack traces which will 
> only generate false alarms about tests failing. 
> There needs to be a way to turn this off, and the minicluster should have it 
> that way by default.
> Being ruthless and marking as a blocker, because its a fairly major 
> regression for anyone testing with the minicluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4330) MiniYARNCluster is showing multiple Failed to instantiate default resource calculator warning messages.

2016-10-25 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606417#comment-15606417
 ] 

Eric Badger commented on YARN-4330:
---

[~ste...@apache.org], there's talk on the mailing list of releasing 2.8. Is 
this ready to go in? If not, should we set the target version to 2.8.0? 

cc [~ajisakaa] for 2.8 tracking purposes

> MiniYARNCluster is showing multiple  Failed to instantiate default resource 
> calculator warning messages.
> 
>
> Key: YARN-4330
> URL: https://issues.apache.org/jira/browse/YARN-4330
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test, yarn
>Affects Versions: 2.8.0
> Environment: OSX, JUnit
>Reporter: Steve Loughran
>Assignee: Varun Saxena
>Priority: Blocker
> Attachments: YARN-4330.01.patch
>
>
> Whenever I try to start a MiniYARNCluster on Branch-2 (commit #0b61cca), I 
> see multiple stack traces warning me that a resource calculator plugin could 
> not be created
> {code}
> (ResourceCalculatorPlugin.java:getResourceCalculatorPlugin(184)) - 
> java.lang.UnsupportedOperationException: Could not determine OS: Failed to 
> instantiate default resource calculator.
> java.lang.UnsupportedOperationException: Could not determine OS
> {code}
> This is a minicluster. It doesn't need resource calculation. It certainly 
> doesn't need test logs being cluttered with even more stack traces which will 
> only generate false alarms about tests failing. 
> There needs to be a way to turn this off, and the minicluster should have it 
> that way by default.
> Being ruthless and marking as a blocker, because its a fairly major 
> regression for anyone testing with the minicluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5694) ZKRMStateStore should always start its verification thread to prevent accidental state store corruption

2016-10-25 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606382#comment-15606382
 ] 

Daniel Templeton commented on YARN-5694:


The branch-2.7 patch is out of date.  I stopped updating it until we settle on 
the trunk patch.

> ZKRMStateStore should always start its verification thread to prevent 
> accidental state store corruption
> ---
>
> Key: YARN-5694
> URL: https://issues.apache.org/jira/browse/YARN-5694
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-5694.001.patch, YARN-5694.002.patch, 
> YARN-5694.003.patch, YARN-5694.004.patch, YARN-5694.004.patch, 
> YARN-5694.005.patch, YARN-5694.006.patch, YARN-5694.branch-2.7.001.patch, 
> YARN-5694.branch-2.7.002.patch
>
>
> There are two cases.  In branch-2.7, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is always started, even when 
> using embedded or Curator failover.  In branch-2.8, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is only started when HA is 
> disabled, which makes no sense.  Based on the JIRA that introduced that 
> change (YARN-4559), I believe the intent was to start it only when embedded 
> failover is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5694) ZKRMStateStore should always start its verification thread to prevent accidental state store corruption

2016-10-25 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-5694:
---
Attachment: YARN-5694.006.patch

Here's a patch to add the null check back.  I don't think we need to rename the 
{{handleTransitionToStandby()}} method, because the exit happens only if we 
can't transition to standby, i.e. we're not in HA mode.  To my way of thinking, 
that doesn't change the purpose of the method.

> ZKRMStateStore should always start its verification thread to prevent 
> accidental state store corruption
> ---
>
> Key: YARN-5694
> URL: https://issues.apache.org/jira/browse/YARN-5694
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-5694.001.patch, YARN-5694.002.patch, 
> YARN-5694.003.patch, YARN-5694.004.patch, YARN-5694.004.patch, 
> YARN-5694.005.patch, YARN-5694.006.patch, YARN-5694.branch-2.7.001.patch, 
> YARN-5694.branch-2.7.002.patch
>
>
> There are two cases.  In branch-2.7, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is always started, even when 
> using embedded or Curator failover.  In branch-2.8, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is only started when HA is 
> disabled, which makes no sense.  Based on the JIRA that introduced that 
> change (YARN-4559), I believe the intent was to start it only when embedded 
> failover is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5388) Deprecate and remove DockerContainerExecutor

2016-10-25 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5388:
---
Summary: Deprecate and remove DockerContainerExecutor  (was: MAPREDUCE-6719 
requires changes to DockerContainerExecutor)

> Deprecate and remove DockerContainerExecutor
> 
>
> Key: YARN-5388
> URL: https://issues.apache.org/jira/browse/YARN-5388
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-5388.001.patch, YARN-5388.002.patch, 
> YARN-5388.003.patch, YARN-5388.branch-2.001.patch, 
> YARN-5388.branch-2.002.patch, YARN-5388.branch-2.003.patch
>
>
> Because the {{DockerContainerExecutor}} overrides the {{writeLaunchEnv()}} 
> method, it must also have the wildcard processing logic from 
> YARN-4958/YARN-5373 added to it.  Without it, the use of -libjars will fail 
> unless wildcarding is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5388) MAPREDUCE-6719 requires changes to DockerContainerExecutor

2016-10-25 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606374#comment-15606374
 ] 

Karthik Kambatla commented on YARN-5388:


My bad. I missed the parts of the code containing the fix among the rest of 
the unrelated improvements. +1. Checking both in.

(For the unrelated code readability etc. improvements, I always wonder if we 
should do them in a separate JIRA to avoid confusing folks looking at pulling 
this patch in later.)

> MAPREDUCE-6719 requires changes to DockerContainerExecutor
> --
>
> Key: YARN-5388
> URL: https://issues.apache.org/jira/browse/YARN-5388
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-5388.001.patch, YARN-5388.002.patch, 
> YARN-5388.003.patch, YARN-5388.branch-2.001.patch, 
> YARN-5388.branch-2.002.patch, YARN-5388.branch-2.003.patch
>
>
> Because the {{DockerContainerExecutor}} overrides the {{writeLaunchEnv()}} 
> method, it must also have the wildcard processing logic from 
> YARN-4958/YARN-5373 added to it.  Without it, the use of -libjars will fail 
> unless wildcarding is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5770) Performance improvement of native-services REST API service

2016-10-25 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606354#comment-15606354
 ] 

Gour Saha commented on YARN-5770:
-

Makes sense. Let me add it to SliderClientAPI so that when we split it, it 
does not fall through the cracks. I will upload a new patch with this change.

> Performance improvement of native-services REST API service
> ---
>
> Key: YARN-5770
> URL: https://issues.apache.org/jira/browse/YARN-5770
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
> Attachments: YARN-5770-yarn-native-services.phase1.001.patch, 
> YARN-5770-yarn-native-services.phase1.002.patch
>
>
> Make enhancements and bug-fixes to eliminate frequent full GC of the REST API 
> Service. Dependent on few Slider fixes like SLIDER-1168 as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5433) Audit dependencies for Category-X

2016-10-25 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606346#comment-15606346
 ] 

Sangjin Lee commented on YARN-5433:
---

I have done a more comprehensive analysis, and am building a spreadsheet 
similar to HADOOP-12893: 
https://docs.google.com/spreadsheets/d/1D6aDHOUbQmF3SDtVj-3n4GhWlu6V8t4jD4z8HQivsq8/edit?usp=sharing

So far almost all of the newly added dependencies appear to be ASLv.2, BSD, and 
MIT. It seems only ANTLR and sqlline may need to be added to NOTICES.txt, but 
I'm not 100% certain.

cc [~xiaochen] to see if he has feedback. Thanks!

> Audit dependencies for Category-X
> -
>
> Key: YARN-5433
> URL: https://issues.apache.org/jira/browse/YARN-5433
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sean Busbey
>Assignee: Sangjin Lee
>Priority: Blocker
>
> Recently phoenix has found some category-x dependencies in their build 
> (PHOENIX-3084, PHOENIX-3091), which also showed some problems in HBase 
> (HBASE-16260).
> Since the Timeline Server work brought in both of these as dependencies, we 
> should make sure we don't have any cat-x dependencies either. From what I've 
> seen in those projects, our choice of HBase version shouldn't be impacted but 
> our Phoenix one is.
> Greping our current dependency list for the timeline server component shows 
> some LGPL:
> {code}
> ...
> [INFO]net.sourceforge.findbugs:annotations:jar:1.3.2:compile
> ...
> {code}
> I haven't checked the rest of the dependencies that have changed since 
> HADOOP-12893 went in, so ATM I've filed this against YARN since that's where 
> this one example came in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5770) Performance improvement of native-services REST API service

2016-10-25 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606339#comment-15606339
 ] 

Billie Rinaldi commented on YARN-5770:
--

[~gsaha], I built with the patch, ran unit tests, and tried starting services 
using the API. The unit test failure is due to a different issue and will be 
fixed in YARN-5690. One suggestion I have is that the new actionStatus method 
should probably be added to the SliderClientAPI interface. Maybe in the future 
we could separate this into two client interfaces, one designed for CLI and one 
for Java clients.
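
A hypothetical sketch of that suggestion (the signature and javadoc are 
illustrative, not the actual Slider interface):

{code}
import java.io.IOException;
import org.apache.hadoop.yarn.exceptions.YarnException;

// Sketch only: expose the new status call on the client interface so it
// survives a future split into CLI and Java-client interfaces.
public interface SliderClientAPI {
  // ...existing actions elided...

  /** Returns the status of the given application instance. */
  String actionStatus(String clusterName) throws YarnException, IOException;
}
{code}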

> Performance improvement of native-services REST API service
> ---
>
> Key: YARN-5770
> URL: https://issues.apache.org/jira/browse/YARN-5770
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
> Attachments: YARN-5770-yarn-native-services.phase1.001.patch, 
> YARN-5770-yarn-native-services.phase1.002.patch
>
>
> Make enhancements and bug-fixes to eliminate frequent full GC of the REST API 
> Service. Dependent on few Slider fixes like SLIDER-1168 as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-10-25 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606332#comment-15606332
 ] 

Vrushali C commented on YARN-5739:
--

So if we need only some entity types, then it's even easier I think. The row 
key already contains the entity type after the app id. The entity table row key 
is
{code} userId ! clusterId ! flowName ! flowRunId ! appId ! entityType ! entityId {code}

So, if we know which entity types we are looking for, we can just add these to 
the scan's {{WhileMatchFilter}} on the row key prefix, and that should do this 
"jumping" for us. 

> Provide timeline reader API to list available timeline entity types for one 
> application
> ---
>
> Key: YARN-5739
> URL: https://issues.apache.org/jira/browse/YARN-5739
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Li Lu
>Assignee: Li Lu
>
> Right now we only show a part of available timeline entity data in the new 
> YARN UI. However, some data (especially library specific data) are not 
> possible to be queried out by the web UI. It will be appealing for the UI to 
> provide an "entity browser" for each YARN application. Actually, simply 
> dumping out available timeline entities (with proper pagination, of course) 
> would be pretty helpful for UI users. 
> On timeline side, we're not far away from this goal. Right now I believe the 
> only thing missing is to list all available entity types within one 
> application. The challenge here is that we're not storing this data for each 
> application, but given this kind of call is relatively rare (compare to 
> writes and updates) we can perform some scanning during the read time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5765) LinuxContainerExecutor creates appcache and its subdirectories with wrong group owner.

2016-10-25 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606322#comment-15606322
 ] 

Haibo Chen commented on YARN-5765:
--

I think doing "perm = perm | S_ISGID" will set the setGID bit regardless of the 
original permission, which may not be something we want.
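
To illustrate the point with plain bit arithmetic (Java here for brevity; the 
code in question is native C, and the octal constant is the one from 
sys/stat.h):

{code}
public class SetgidBitSketch {
  static final int S_ISGID = 02000; // setgid bit, octal, as in sys/stat.h

  public static void main(String[] args) {
    int original = 0750;                     // rwxr-x---, setgid NOT set
    int unconditional = original | S_ISGID;  // 02750: bit now set regardless
    // Preserving the original intent needs a conditional instead, e.g.
    // copying the bit from a (hypothetical) parent mode only when set there:
    int parentMode = 0755;                   // parent without setgid
    int preserved = original | (parentMode & S_ISGID); // stays 0750 here
    System.out.printf("%o %o %o%n", original, unconditional, preserved);
  }
}
{code}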

> LinuxContainerExecutor creates appcache and its subdirectories with wrong 
> group owner.
> --
>
> Key: YARN-5765
> URL: https://issues.apache.org/jira/browse/YARN-5765
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
>
> LinuxContainerExecutor creates usercache/\{userId\}/appcache/\{appId\} with 
> wrong group owner, causing Log aggregation and ShuffleHandler to fail because 
> node manager process does not have permission to read the files under the 
> directory.
> This can be easily reproduced by enabling LCE and submitting a MR example 
> job. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-10-25 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606313#comment-15606313
 ] 

Li Lu commented on YARN-5739:
-

Thanks [~vrushalic]! Yes. One concern we had was that the scan may need to go 
through every record in an app, which is potentially a big scan. However, the 
number of different entity types may be limited, so we can construct a few 
scans that "jump" between entity types (see the sketch below). I don't think 
we need to provide a separate filter, since that may introduce some overhead 
in deployments. 
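
A rough sketch of that jumping enumeration, assuming a textual row key with 
"!" separators and a hypothetical {{parseType}} decoder (both simplifications 
of the real schema encoding): read only the first row at the current position 
to learn one entity type, then restart the scan just past that type's key 
range.

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class JumpScanSketch {

  static List<String> listEntityTypes(Table entityTable, byte[] appPrefix)
      throws IOException {
    List<String> types = new ArrayList<>();
    byte[] start = appPrefix;
    while (true) {
      Scan scan = new Scan();
      scan.setStartRow(start); // HBase 1.x API; 2.x would use withStartRow()
      try (ResultScanner rs = entityTable.getScanner(scan)) {
        Result first = rs.next();            // only the first row is needed
        if (first == null || !Bytes.startsWith(first.getRow(), appPrefix)) {
          break;                             // past this application's rows
        }
        String type = parseType(first.getRow(), appPrefix);
        types.add(type);
        // Jump past every row of this type: appPrefix + type + "!" + \uffff
        start = Bytes.add(appPrefix, Bytes.toBytes(type + "!\uffff"));
      }
    }
    return types;
  }

  // Hypothetical decoder: the segment up to the next "!" separator.
  static String parseType(byte[] row, byte[] appPrefix) {
    String rest = Bytes.toString(row, appPrefix.length,
        row.length - appPrefix.length);
    int sep = rest.indexOf('!');
    return sep < 0 ? rest : rest.substring(0, sep);
  }
}
{code}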

> Provide timeline reader API to list available timeline entity types for one 
> application
> ---
>
> Key: YARN-5739
> URL: https://issues.apache.org/jira/browse/YARN-5739
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Li Lu
>Assignee: Li Lu
>
> Right now we only show a part of available timeline entity data in the new 
> YARN UI. However, some data (especially library specific data) are not 
> possible to be queried out by the web UI. It will be appealing for the UI to 
> provide an "entity browser" for each YARN application. Actually, simply 
> dumping out available timeline entities (with proper pagination, of course) 
> would be pretty helpful for UI users. 
> On timeline side, we're not far away from this goal. Right now I believe the 
> only thing missing is to list all available entity types within one 
> application. The challenge here is that we're not storing this data for each 
> application, but given this kind of call is relatively rare (compare to 
> writes and updates) we can perform some scanning during the read time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4734) Merge branch:YARN-3368 to trunk

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606304#comment-15606304
 ] 

Hadoop QA commented on YARN-4734:
-

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-YARN-Build/13508/console in case of 
problems.


> Merge branch:YARN-3368 to trunk
> ---
>
> Key: YARN-4734
> URL: https://issues.apache.org/jira/browse/YARN-4734
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4734.1.patch, YARN-4734.10-NOT_READY.patch, 
> YARN-4734.11-NOT_READY.patch, YARN-4734.12-NOT_READY.patch, 
> YARN-4734.13.patch, YARN-4734.14.patch, YARN-4734.15.patch, 
> YARN-4734.16.patch, YARN-4734.17.patch, YARN-4734.2.patch, YARN-4734.3.patch, 
> YARN-4734.4.patch, YARN-4734.5.patch, YARN-4734.6.patch, YARN-4734.7.patch, 
> YARN-4734.8.patch, YARN-4734.9-NOT_READY.patch
>
>
> YARN-2928 branch is planned to merge back to trunk shortly, it depends on 
> changes of YARN-3368. This JIRA is to track the merging task.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5777) TestLogsCLI#testFetchApplictionLogsAsAnotherUser fails

2016-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606302#comment-15606302
 ] 

Hudson commented on YARN-5777:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10676 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10676/])
YARN-5777. TestLogsCLI#testFetchApplictionLogsAsAnotherUser fails. (xiao: rev 
c88c1dc50c0ec4521bc93f39726248026e68063a)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/LogCLIHelpers.java


> TestLogsCLI#testFetchApplictionLogsAsAnotherUser fails
> --
>
> Key: YARN-5777
> URL: https://issues.apache.org/jira/browse/YARN-5777
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5777.01.patch
>
>
> {noformat}
> Running org.apache.hadoop.yarn.client.cli.TestLogsCLI
> Tests run: 14, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 5.876 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.client.cli.TestLogsCLI
> testFetchApplictionLogsAsAnotherUser(org.apache.hadoop.yarn.client.cli.TestLogsCLI)
>   Time elapsed: 0.199 sec  <<< ERROR!
> java.io.IOException: Invalid directory or I/O error occurred for dir: 
> /Users/aajisaka/git/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/logs/priority/logs/application_1477371285256_1000
> at org.apache.hadoop.fs.FileUtil.list(FileUtil.java:1148)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:469)
> at 
> org.apache.hadoop.fs.DelegateToFileSystem.listStatus(DelegateToFileSystem.java:169)
> at org.apache.hadoop.fs.ChecksumFs.listStatus(ChecksumFs.java:519)
> at 
> org.apache.hadoop.fs.AbstractFileSystem$1.<init>(AbstractFileSystem.java:890)
> at 
> org.apache.hadoop.fs.AbstractFileSystem.listStatusIterator(AbstractFileSystem.java:888)
> at org.apache.hadoop.fs.FileContext$22.next(FileContext.java:1492)
> at org.apache.hadoop.fs.FileContext$22.next(FileContext.java:1487)
> at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
> at org.apache.hadoop.fs.FileContext.listStatus(FileContext.java:1494)
> at 
> org.apache.hadoop.yarn.logaggregation.LogCLIHelpers.getRemoteNodeFileDir(LogCLIHelpers.java:592)
> at 
> org.apache.hadoop.yarn.logaggregation.LogCLIHelpers.dumpAllContainersLogs(LogCLIHelpers.java:348)
> at 
> org.apache.hadoop.yarn.client.cli.LogsCLI.fetchApplicationLogs(LogsCLI.java:971)
> at 
> org.apache.hadoop.yarn.client.cli.LogsCLI.runCommand(LogsCLI.java:299)
> at org.apache.hadoop.yarn.client.cli.LogsCLI.run(LogsCLI.java:106)
> at 
> org.apache.hadoop.yarn.client.cli.TestLogsCLI.testFetchApplictionLogsAsAnotherUser(TestLogsCLI.java:868)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4734) Merge branch:YARN-3368 to trunk

2016-10-25 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4734:
-
Attachment: YARN-4734.17.patch

Oops, I used the wrong way to generate the patch; attaching the correct one (ver.17).

> Merge branch:YARN-3368 to trunk
> ---
>
> Key: YARN-4734
> URL: https://issues.apache.org/jira/browse/YARN-4734
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4734.1.patch, YARN-4734.10-NOT_READY.patch, 
> YARN-4734.11-NOT_READY.patch, YARN-4734.12-NOT_READY.patch, 
> YARN-4734.13.patch, YARN-4734.14.patch, YARN-4734.15.patch, 
> YARN-4734.16.patch, YARN-4734.17.patch, YARN-4734.2.patch, YARN-4734.3.patch, 
> YARN-4734.4.patch, YARN-4734.5.patch, YARN-4734.6.patch, YARN-4734.7.patch, 
> YARN-4734.8.patch, YARN-4734.9-NOT_READY.patch
>
>
> YARN-2928 branch is planned to merge back to trunk shortly, it depends on 
> changes of YARN-3368. This JIRA is to track the merging task.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4734) Merge branch:YARN-3368 to trunk

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606279#comment-15606279
 ] 

Hadoop QA commented on YARN-4734:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s {color} 
| {color:red} YARN-4734 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835184/YARN-4734.16.patch |
| JIRA Issue | YARN-4734 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13507/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Merge branch:YARN-3368 to trunk
> ---
>
> Key: YARN-4734
> URL: https://issues.apache.org/jira/browse/YARN-4734
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4734.1.patch, YARN-4734.10-NOT_READY.patch, 
> YARN-4734.11-NOT_READY.patch, YARN-4734.12-NOT_READY.patch, 
> YARN-4734.13.patch, YARN-4734.14.patch, YARN-4734.15.patch, 
> YARN-4734.16.patch, YARN-4734.2.patch, YARN-4734.3.patch, YARN-4734.4.patch, 
> YARN-4734.5.patch, YARN-4734.6.patch, YARN-4734.7.patch, YARN-4734.8.patch, 
> YARN-4734.9-NOT_READY.patch
>
>
> YARN-2928 branch is planned to merge back to trunk shortly, it depends on 
> changes of YARN-3368. This JIRA is to track the merging task.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4734) Merge branch:YARN-3368 to trunk

2016-10-25 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4734:
-
Attachment: YARN-4734.16.patch

Attached ver.16, which includes notes on the security environment in the doc.

> Merge branch:YARN-3368 to trunk
> ---
>
> Key: YARN-4734
> URL: https://issues.apache.org/jira/browse/YARN-4734
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4734.1.patch, YARN-4734.10-NOT_READY.patch, 
> YARN-4734.11-NOT_READY.patch, YARN-4734.12-NOT_READY.patch, 
> YARN-4734.13.patch, YARN-4734.14.patch, YARN-4734.15.patch, 
> YARN-4734.16.patch, YARN-4734.2.patch, YARN-4734.3.patch, YARN-4734.4.patch, 
> YARN-4734.5.patch, YARN-4734.6.patch, YARN-4734.7.patch, YARN-4734.8.patch, 
> YARN-4734.9-NOT_READY.patch
>
>
> YARN-2928 branch is planned to merge back to trunk shortly, it depends on 
> changes of YARN-3368. This JIRA is to track the merging task.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5779) [YARN-3368] Document limits/notes of the new YARN UI

2016-10-25 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5779:
-
Attachment: YARN-5779-YARN-3368.01.patch

Attached ver.1 patch, [~sunilg] plz review.

> [YARN-3368] Document limits/notes of the new YARN UI
> 
>
> Key: YARN-5779
> URL: https://issues.apache.org/jira/browse/YARN-5779
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5779-YARN-3368.01.patch
>
>
> For example, we don't make sure it's able to run on security enabled 
> environment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5777) TestLogsCLI#testFetchApplictionLogsAsAnotherUser fails

2016-10-25 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606203#comment-15606203
 ] 

Xiao Chen commented on YARN-5777:
-

Thanks [~ajisakaa] for the catch. The analysis makes sense, +1. Committing this.

> TestLogsCLI#testFetchApplictionLogsAsAnotherUser fails
> --
>
> Key: YARN-5777
> URL: https://issues.apache.org/jira/browse/YARN-5777
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: YARN-5777.01.patch
>
>
> {noformat}
> Running org.apache.hadoop.yarn.client.cli.TestLogsCLI
> Tests run: 14, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 5.876 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.client.cli.TestLogsCLI
> testFetchApplictionLogsAsAnotherUser(org.apache.hadoop.yarn.client.cli.TestLogsCLI)
>   Time elapsed: 0.199 sec  <<< ERROR!
> java.io.IOException: Invalid directory or I/O error occurred for dir: 
> /Users/aajisaka/git/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/logs/priority/logs/application_1477371285256_1000
> at org.apache.hadoop.fs.FileUtil.list(FileUtil.java:1148)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:469)
> at 
> org.apache.hadoop.fs.DelegateToFileSystem.listStatus(DelegateToFileSystem.java:169)
> at org.apache.hadoop.fs.ChecksumFs.listStatus(ChecksumFs.java:519)
> at 
> org.apache.hadoop.fs.AbstractFileSystem$1.<init>(AbstractFileSystem.java:890)
> at 
> org.apache.hadoop.fs.AbstractFileSystem.listStatusIterator(AbstractFileSystem.java:888)
> at org.apache.hadoop.fs.FileContext$22.next(FileContext.java:1492)
> at org.apache.hadoop.fs.FileContext$22.next(FileContext.java:1487)
> at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
> at org.apache.hadoop.fs.FileContext.listStatus(FileContext.java:1494)
> at 
> org.apache.hadoop.yarn.logaggregation.LogCLIHelpers.getRemoteNodeFileDir(LogCLIHelpers.java:592)
> at 
> org.apache.hadoop.yarn.logaggregation.LogCLIHelpers.dumpAllContainersLogs(LogCLIHelpers.java:348)
> at 
> org.apache.hadoop.yarn.client.cli.LogsCLI.fetchApplicationLogs(LogsCLI.java:971)
> at 
> org.apache.hadoop.yarn.client.cli.LogsCLI.runCommand(LogsCLI.java:299)
> at org.apache.hadoop.yarn.client.cli.LogsCLI.run(LogsCLI.java:106)
> at 
> org.apache.hadoop.yarn.client.cli.TestLogsCLI.testFetchApplictionLogsAsAnotherUser(TestLogsCLI.java:868)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5716) Add global scheduler interface definition and update CapacityScheduler to use it.

2016-10-25 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5716:
-
Attachment: YARN-5716.009.patch

Thanks [~sunilg] for comments:

For 1), yes, it looks like we're doing the UL check twice; however, the second 
time we only check one application instead of thousands of apps. So the time 
spent on the UL check for accept/apply is negligible in my performance tests.

For 2), we will not check allocation-from-reserved-container, because of the 
outer if (...) check.

For 3), Jian asked the same question, see my answer 
https://issues.apache.org/jira/browse/YARN-5716?focusedCommentId=15593292=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15593292:
bq. No we cannot do this merge, because it is possible that in the previous 
reservedContainer != null if, we reserve a new container so the check is not 
valid.

For 4):
bq. a reference for resourceRequests is kept and under failure updated back to 
schedulingInfo object. I do not feel its a clean implementation... 
Actually we clone the ResourceRequest, and we need to check and make sure the 
ResourceRequest has not changed and is still required by the apps.

bq. Method is very lengthy. We could take out logic like increase request etc. 
So it ll be more easier.
Done

bq. readLock is mostly needed for increased request and for appSchedulingInfo. 
I think some optimizations here could be done separately in another ticket as 
improvement.
Yeah I agree, I would prefer to keep it as-is unless we find any performance 
issues.

bq. preferring equals instead of if (fromReservedContainer != 
reservedContainerOnNode)
That was intentional: I want to make sure the two reserved containers point to 
the same instance.

Please check ver.9 patch

> Add global scheduler interface definition and update CapacityScheduler to use 
> it.
> -
>
> Key: YARN-5716
> URL: https://issues.apache.org/jira/browse/YARN-5716
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5716.001.patch, YARN-5716.002.patch, 
> YARN-5716.003.patch, YARN-5716.004.patch, YARN-5716.005.patch, 
> YARN-5716.006.patch, YARN-5716.007.patch, YARN-5716.008.patch, 
> YARN-5716.009.patch
>
>
> Target of this JIRA:
> - Definition of interfaces / objects which will be used by global scheduling, 
> this will be shared by different schedulers.
> - Modify CapacityScheduler to use it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3649) Allow configurable prefix for hbase table names (like prod, exp, test etc)

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606192#comment-15606192
 ] 

Hadoop QA commented on YARN-3649:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 55s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
48s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 19s 
{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
39s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 54s 
{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
4s {color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
50s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s 
{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
37s {color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch 
generated 0 new + 209 unchanged - 1 fixed = 209 total (was 210) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 14s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 46s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 10s 
{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 7s 
{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 34s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker 

[jira] [Commented] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-10-25 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606182#comment-15606182
 ] 

Sunil G commented on YARN-5545:
---

Currently we are trying to invoke {{activateApplications}} while recovering 
each application. Yes, as of now nodes get registered later in the flow, but 
the scheduler should not have to consider such timing cases from the 
RMAppManager/RM end. That being said, it's important to separate two issues 
out here:
- The recovery call flow for each app in the scheduler should not invoke 
{{activateApplications}} every time.
- {{activateApplications}} itself could be improved by considering AM 
headroom. But that could be done in another ticket, as this one focuses on 
fixing the recovery call flow.

To address issue 1, we could invoke {{activateApplications}} only once, after 
recovering all apps (see the sketch below). By this, we remove the timing 
dependency on the RM end for recovery. With this change, even if the RM 
recovery model changes, the scheduler would still complete its recovery flow 
without causing any performance issues or waiting for ResourceTrackerService 
to register nodes. Thanks [~leftnoteasy].

Thoughts?
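
For illustration, a hypothetical sketch of the proposed flow; the names are 
made up and this is not the actual {{LeafQueue}} code:

{code}
import java.util.ArrayList;
import java.util.List;

public class LeafQueueRecoverySketch {
  private boolean recoveryInProgress = true;
  private final List<String> submittedApps = new ArrayList<>();

  void recoverApplication(String appId) {
    submittedApps.add(appId);
    // Recovery path: skip the activation pass for every recovered app.
    if (!recoveryInProgress) {
      activateApplications();
    }
  }

  void recoveryCompleted() {
    recoveryInProgress = false;
    activateApplications(); // one pass over all recovered apps
  }

  private void activateApplications() {
    // AM-headroom checks elided; improving them is the separate issue 2.
  }
}
{code}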

> App submit failure on queue with label when default queue partition capacity 
> is zero
> 
>
> Key: YARN-5545
> URL: https://issues.apache.org/jira/browse/YARN-5545
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5545.0001.patch, YARN-5545.0002.patch, 
> YARN-5545.0003.patch, YARN-5545.004.patch, capacity-scheduler.xml
>
>
> Configure capacity scheduler 
> yarn.scheduler.capacity.root.default.capacity=0
> yarn.scheduler.capacity.root.queue1.accessible-node-labels.labelx.capacity=50
> yarn.scheduler.capacity.root.default.accessible-node-labels.labelx.capacity=50
> Submit application as below
> ./yarn jar 
> ../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha2-SNAPSHOT-tests.jar
>  sleep -Dmapreduce.job.node-label-expression=labelx 
> -Dmapreduce.job.queuename=default -m 1 -r 1 -mt 1000 -rt 1
> {noformat}
> 2016-08-21 18:21:31,375 INFO mapreduce.JobSubmitter: Cleaning up the staging 
> area /tmp/hadoop-yarn/staging/root/.staging/job_1471670113386_0001
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed 
> to submit application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:316)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:255)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1344)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1790)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1341)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1362)
>   at org.apache.hadoop.mapreduce.SleepJob.run(SleepJob.java:273)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.mapreduce.SleepJob.main(SleepJob.java:194)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>   at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>   at 
> org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
>   at 
> org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit 
> application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 

[jira] [Commented] (YARN-5694) ZKRMStateStore should always start its verification thread to prevent accidental state store corruption

2016-10-25 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606155#comment-15606155
 ] 

Karthik Kambatla commented on YARN-5694:


I have minor comments (nits) on the trunk patch:
# ResourceManager: Since the method is now not just transitioning to standby, 
it should likely be named differently. How about {{handleFatalFailure()}}?
# ZKRMStateStore: In {{closeInternal}}, I would still prefer the null check on 
{{verifyActiveStatusThread}} (see the sketch below). 

On the branch-2.7 patch, does it also include the YARN-5677 patch? 
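
For nit #2, a sketch of the guard I have in mind (illustrative, not the 
actual {{ZKRMStateStore}} code):

{code}
public class CloseInternalSketch {
  private Thread verifyActiveStatusThread; // may never have been started

  void closeInternal() throws InterruptedException {
    // Guard against the thread being null, e.g. when the failover
    // configuration meant it was never launched.
    if (verifyActiveStatusThread != null) {
      verifyActiveStatusThread.interrupt();
      verifyActiveStatusThread.join();
    }
  }
}
{code}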

> ZKRMStateStore should always start its verification thread to prevent 
> accidental state store corruption
> ---
>
> Key: YARN-5694
> URL: https://issues.apache.org/jira/browse/YARN-5694
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-5694.001.patch, YARN-5694.002.patch, 
> YARN-5694.003.patch, YARN-5694.004.patch, YARN-5694.004.patch, 
> YARN-5694.005.patch, YARN-5694.branch-2.7.001.patch, 
> YARN-5694.branch-2.7.002.patch
>
>
> There are two cases.  In branch-2.7, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is always started, even when 
> using embedded or Curator failover.  In branch-2.8, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is only started when HA is 
> disabled, which makes no sense.  Based on the JIRA that introduced that 
> change (YARN-4559), I believe the intent was to start it only when embedded 
> failover is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5777) TestLogsCLI#testFetchApplictionLogsAsAnotherUser fails

2016-10-25 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606081#comment-15606081
 ] 

John Zhuge commented on YARN-5777:
--

+1 (non-binding). The change looks good to me. Thanks [~ajisakaa] for the 
investigation and fix.

> TestLogsCLI#testFetchApplictionLogsAsAnotherUser fails
> --
>
> Key: YARN-5777
> URL: https://issues.apache.org/jira/browse/YARN-5777
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: YARN-5777.01.patch
>
>
> {noformat}
> Running org.apache.hadoop.yarn.client.cli.TestLogsCLI
> Tests run: 14, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 5.876 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.client.cli.TestLogsCLI
> testFetchApplictionLogsAsAnotherUser(org.apache.hadoop.yarn.client.cli.TestLogsCLI)
>   Time elapsed: 0.199 sec  <<< ERROR!
> java.io.IOException: Invalid directory or I/O error occurred for dir: 
> /Users/aajisaka/git/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/target/logs/priority/logs/application_1477371285256_1000
> at org.apache.hadoop.fs.FileUtil.list(FileUtil.java:1148)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:469)
> at 
> org.apache.hadoop.fs.DelegateToFileSystem.listStatus(DelegateToFileSystem.java:169)
> at org.apache.hadoop.fs.ChecksumFs.listStatus(ChecksumFs.java:519)
> at 
> org.apache.hadoop.fs.AbstractFileSystem$1.<init>(AbstractFileSystem.java:890)
> at 
> org.apache.hadoop.fs.AbstractFileSystem.listStatusIterator(AbstractFileSystem.java:888)
> at org.apache.hadoop.fs.FileContext$22.next(FileContext.java:1492)
> at org.apache.hadoop.fs.FileContext$22.next(FileContext.java:1487)
> at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
> at org.apache.hadoop.fs.FileContext.listStatus(FileContext.java:1494)
> at 
> org.apache.hadoop.yarn.logaggregation.LogCLIHelpers.getRemoteNodeFileDir(LogCLIHelpers.java:592)
> at 
> org.apache.hadoop.yarn.logaggregation.LogCLIHelpers.dumpAllContainersLogs(LogCLIHelpers.java:348)
> at 
> org.apache.hadoop.yarn.client.cli.LogsCLI.fetchApplicationLogs(LogsCLI.java:971)
> at 
> org.apache.hadoop.yarn.client.cli.LogsCLI.runCommand(LogsCLI.java:299)
> at org.apache.hadoop.yarn.client.cli.LogsCLI.run(LogsCLI.java:106)
> at 
> org.apache.hadoop.yarn.client.cli.TestLogsCLI.testFetchApplictionLogsAsAnotherUser(TestLogsCLI.java:868)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5548) Random test failure TestRMRestart#testFinishedAppRemovalAfterRMRestart

2016-10-25 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606080#comment-15606080
 ] 

Bibin A Chundatt commented on YARN-5548:


If there is a delay in getting YARN-5375 in, we can push this JIRA. We 
recently observed a failure of the same test case for YARN-5545.

> Random test failure TestRMRestart#testFinishedAppRemovalAfterRMRestart
> --
>
> Key: YARN-5548
> URL: https://issues.apache.org/jira/browse/YARN-5548
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5548.0001.patch, YARN-5548.0002.patch, 
> YARN-5548.0003.patch
>
>
> https://builds.apache.org/job/PreCommit-YARN-Build/12850/testReport/org.apache.hadoop.yarn.server.resourcemanager/TestRMRestart/testFinishedAppRemovalAfterRMRestart/
> {noformat}
> Error Message
> Stacktrace
> java.lang.AssertionError: expected null, but was:<application_submission_context { application_id { id: 1 cluster_timestamp: 
> 1471885197388 } application_name: "" queue: "default" priority { priority: 0 
> } am_container_spec { } cancel_tokens_when_complete: true maxAppAttempts: 2 
> resource { memory: 1024 virtual_cores: 1 } applicationType: "YARN" 
> keep_containers_across_application_attempts: false 
> attempt_failures_validity_interval: 0 am_container_resource_request { 
> priority { priority: 0 } resource_name: "*" capability { memory: 1024 
> virtual_cores: 1 } num_containers: 0 relax_locality: true 
> node_label_expression: "" execution_type_request { execution_type: GUARANTEED 
> enforce_execution_type: false } } } user: "jenkins" start_time: 1471885197417 
> application_state: RMAPP_FINISHED finish_time: 1471885197478>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:664)
>   at org.junit.Assert.assertNull(Assert.java:646)
>   at org.junit.Assert.assertNull(Assert.java:656)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testFinishedAppRemovalAfterRMRestart(TestRMRestart.java:1656)
> {noformat}






[jira] [Updated] (YARN-3649) Allow configurable prefix for hbase table names (like prod, exp, test etc)

2016-10-25 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-3649:
-
Attachment: YARN-3649-YARN-5355.005.patch

Uploading v5, which addresses Varun's suggestions.

Diff between this patch and the previous one:
{noformat}

$ diff YARN-3649-YARN-5355.004.patch YARN-3649-YARN-5355.005.patch
2c2
< index 3bb73f5..9e95d68 100644
---
> index 3bb73f5..925f626 100644
28c28
< +  TIMELINE_SERVICE_PREFIX + "schema.prefix";
---
> +  TIMELINE_SERVICE_PREFIX + "hbase-schema.prefix";
32a33,55
> diff --git 
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
>  
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
> index ed220c0..f1fd85b 100644
> --- 
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
> +++ 
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
> @@ -2258,6 +2258,18 @@
>  25920
>
>
> +  
> +
> +The value of this parameter sets the prefix for all tables that are part 
> of
> +timeline service in the hbase storage schema. It can be set to "dev."
> +or "staging." if it is to be used for development or staging instances.
> +This way the data in production tables stays in a separate set of tables
> +prefixed by "prod.".
> +
> +yarn.timeline-service.hbase-schema.prefix
> +prod.
> +  
> +
>
>
>
$

{noformat}
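
For illustration, a minimal sketch of how a reader or writer could resolve the 
concrete table name from this property (the helper class below is hypothetical, 
not part of the patch; only the key and default come from the yarn-default.xml 
hunk above):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;

public final class SchemaPrefixUtil {
  // Key and default taken from the yarn-default.xml hunk above.
  static final String PREFIX_KEY = "yarn.timeline-service.hbase-schema.prefix";
  static final String DEFAULT_PREFIX = "prod.";

  // Prepend the configured prefix to a base table name, e.g.
  // "timelineservice.entity" -> "prod.timelineservice.entity".
  static TableName resolve(Configuration conf, String baseTableName) {
    String prefix = conf.get(PREFIX_KEY, DEFAULT_PREFIX);
    return TableName.valueOf(prefix + baseTableName);
  }
}
{code}
A dev instance would then only need to set the property to "dev." and every 
table it touches stays separate from the "prod."-prefixed ones.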


> Allow configurable prefix for hbase table names (like prod, exp, test etc)
> --
>
> Key: YARN-3649
> URL: https://issues.apache.org/jira/browse/YARN-3649
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: YARN-5355
> Attachments: YARN-3649-YARN-2928.01.patch, 
> YARN-3649-YARN-5355.002.patch, YARN-3649-YARN-5355.003.patch, 
> YARN-3649-YARN-5355.004.patch, YARN-3649-YARN-5355.005.patch, 
> YARN-3649-YARN-5355.01.patch
>
>
> As per [~jrottinghuis]'s suggestion in YARN-3411, it would be a good idea to 
> have a configurable prefix for hbase table names.  
> This way we can easily run staging, test, production, and other setups 
> in the same HBase instance without having to override every single table in 
> the config.
> One could simply override the default prefix and you're off and running.
> For the prefix, potential candidates are "tst", "prod", "exp", etc. One can 
> then still override an individual table name if needed, but managing one whole 
> setup will be easier.






[jira] [Updated] (YARN-5775) Convert enums in swagger definition to uppercase

2016-10-25 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-5775:
-
Summary: Convert enums in swagger definition to uppercase  (was: Bug fixes 
in swagger definition)

> Convert enums in swagger definition to uppercase
> 
>
> Key: YARN-5775
> URL: https://issues.apache.org/jira/browse/YARN-5775
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
> Attachments: YARN-5775-yarn-native-services.001.patch
>
>
> All enums have been listed in lowercase. Need to convert all of them to 
> uppercase.
> For e.g. ContainerState:
> {noformat}
> enum:
>   - init
>   - ready
> {noformat}
> needs to be changed to -
> {noformat}
> enum:
>   - INIT
>   - READY
> {noformat}






[jira] [Commented] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-10-25 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606036#comment-15606036
 ] 

Vrushali C commented on YARN-5739:
--

Actually, this is not that hard to do. It requires making two queries:
- first, get the flow from the AppToFlow table (given the cluster and appId, 
look up the flow run row key in the AppToFlow table);
- second, query the entity table with the row key
{code}
userId!clusterId!flowName!flowRunId!appId
{code}

All the entities that belong to this appId are part of the scan for this row 
key prefix.
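
A minimal sketch of the second step, assuming a plain '!'-separated string row 
key as written above (the actual schema encodes some components, e.g. the flow 
run id, so the table name and key layout here are illustrative only):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public final class EntityPrefixScan {
  static void listEntityRows(Configuration conf, String userId,
      String clusterId, String flowName, long flowRunId, String appId)
      throws Exception {
    byte[] prefix = Bytes.toBytes(userId + "!" + clusterId + "!" + flowName
        + "!" + flowRunId + "!" + appId + "!");
    // Prefix scan: only rows starting with the app's row key prefix.
    Scan scan = new Scan().setRowPrefixFilter(prefix);
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("timelineservice.entity"));
         ResultScanner scanner = table.getScanner(scan)) {
      for (Result r : scanner) {
        // Every row under this prefix belongs to the given application; the
        // entity type can be parsed out of the remaining row key suffix.
        System.out.println(Bytes.toString(r.getRow()));
      }
    }
  }
}
{code}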



> Provide timeline reader API to list available timeline entity types for one 
> application
> ---
>
> Key: YARN-5739
> URL: https://issues.apache.org/jira/browse/YARN-5739
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Li Lu
>Assignee: Li Lu
>
> Right now we only show a part of the available timeline entity data in the new 
> YARN UI. However, some data (especially library-specific data) cannot be 
> queried through the web UI. It would be appealing for the UI to 
> provide an "entity browser" for each YARN application. Actually, simply 
> dumping out available timeline entities (with proper pagination, of course) 
> would be pretty helpful for UI users. 
> On the timeline side, we're not far from this goal. Right now I believe the 
> only thing missing is to list all available entity types within one 
> application. The challenge here is that we're not storing this data for each 
> application, but given that this kind of call is relatively rare (compared to 
> writes and updates) we can perform some scanning at read time. 






[jira] [Commented] (YARN-4173) Ensure the final values for metrics/events are emitted/stored at APP completion time

2016-10-25 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606017#comment-15606017
 ] 

Vrushali C commented on YARN-4173:
--

In YARN-5747, the final aggregation is being done explicitly, so we need to 
think further about what else we need to do here. I believe what needs to be 
confirmed is that these values are tagged as FINAL when they are written. 

> Ensure the final values for metrics/events are emitted/stored at APP 
> completion time
> 
>
> Key: YARN-4173
> URL: https://issues.apache.org/jira/browse/YARN-4173
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: YARN-5355
>
> When an application is finishing, the final values of metrics/events need to 
> be written to the backend as final values from the all AM/RM/NM processes for 
> that app.
> For the flow run table (YARN-3901), we need to know which values are the 
> final ones for metrics so that they can be tagged accordingly.






[jira] [Commented] (YARN-5366) Add support for toggling the removal of completed and failed docker containers

2016-10-25 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606015#comment-15606015
 ] 

Allen Wittenauer commented on YARN-5366:


I think you misunderstood what I was pointing out.  If the yarn user is part of 
the docker group, the docker command has enough access to the docker daemon 
that c-e is no longer needed to exec docker.  Given that there are currently 
zero protections around how YARN invokes docker, this doesn't change the 
security profile at all.



> Add support for toggling the removal of completed and failed docker containers
> --
>
> Key: YARN-5366
> URL: https://issues.apache.org/jira/browse/YARN-5366
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5366.001.patch, YARN-5366.002.patch, 
> YARN-5366.003.patch, YARN-5366.004.patch, YARN-5366.005.patch, 
> YARN-5366.006.patch
>
>
> Currently, completed and failed docker containers are removed by 
> container-executor. Add a job level environment variable to 
> DockerLinuxContainerRuntime to allow the user to toggle whether they want the 
> container deleted or not and remove the logic from container-executor.






[jira] [Commented] (YARN-5428) Allow for specifying the docker client configuration directory

2016-10-25 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606009#comment-15606009
 ] 

Allen Wittenauer commented on YARN-5428:


Nope.  Given that probably no one is going to use YARN as a replacement for k8s 
or Mesos or whatever, it doesn't really matter.

> Allow for specifying the docker client configuration directory
> --
>
> Key: YARN-5428
> URL: https://issues.apache.org/jira/browse/YARN-5428
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5428.001.patch, YARN-5428.002.patch, 
> YARN-5428.003.patch, YARN-5428.004.patch, 
> YARN-5428Allowforspecifyingthedockerclientconfigurationdirectory.pdf
>
>
> The docker client allows for specifying a configuration directory that 
> contains the docker client's configuration. It is common to store "docker 
> login" credentials in this config, to avoid the need to docker login on each 
> cluster member. 
> By default the docker client config is $HOME/.docker/config.json on Linux. 
> However, this does not work with the current container executor user 
> switching and it may also be desirable to centralize this configuration 
> beyond the single user's home directory.
> Note that the command line arg is for the configuration directory NOT the 
> configuration file.
> This change will be needed to allow YARN to automatically pull images at 
> localization time or within container executor.






[jira] [Commented] (YARN-3649) Allow configurable prefix for hbase table names (like prod, exp, test etc)

2016-10-25 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15606006#comment-15606006
 ] 

Vrushali C commented on YARN-3649:
--

Thanks Varun, making the changes, will upload a patch shortly.

> Allow configurable prefix for hbase table names (like prod, exp, test etc)
> --
>
> Key: YARN-3649
> URL: https://issues.apache.org/jira/browse/YARN-3649
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: YARN-5355
> Attachments: YARN-3649-YARN-2928.01.patch, 
> YARN-3649-YARN-5355.002.patch, YARN-3649-YARN-5355.003.patch, 
> YARN-3649-YARN-5355.004.patch, YARN-3649-YARN-5355.01.patch
>
>
> As per [~jrottinghuis]'s suggestion in YARN-3411, it would be a good idea to 
> have a configurable prefix for hbase table names.  
> This way we can easily run staging, test, production, and other setups 
> in the same HBase instance without having to override every single table in 
> the config.
> One could simply override the default prefix and you're off and running.
> For the prefix, potential candidates are "tst", "prod", "exp", etc. One can 
> then still override an individual table name if needed, but managing one whole 
> setup will be easier.






[jira] [Created] (YARN-5779) [YARN-3368] Document limits/notes of the new YARN UI

2016-10-25 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-5779:


 Summary: [YARN-3368] Document limits/notes of the new YARN UI
 Key: YARN-5779
 URL: https://issues.apache.org/jira/browse/YARN-5779
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Wangda Tan
Assignee: Wangda Tan


For example, we have not verified that it is able to run in a security-enabled 
environment.






[jira] [Commented] (YARN-4734) Merge branch:YARN-3368 to trunk

2016-10-25 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605982#comment-15605982
 ] 

Wangda Tan commented on YARN-4734:
--

[~aw], thanks. I think we can move the AltKerberos discussions to YARN-4006; it 
is not closely related to this work. 

Anything else for the merge? I plan to send the vote thread before Thursday, so 
just let me know if you have any comments before then. Thanks.

> Merge branch:YARN-3368 to trunk
> ---
>
> Key: YARN-4734
> URL: https://issues.apache.org/jira/browse/YARN-4734
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4734.1.patch, YARN-4734.10-NOT_READY.patch, 
> YARN-4734.11-NOT_READY.patch, YARN-4734.12-NOT_READY.patch, 
> YARN-4734.13.patch, YARN-4734.14.patch, YARN-4734.15.patch, 
> YARN-4734.2.patch, YARN-4734.3.patch, YARN-4734.4.patch, YARN-4734.5.patch, 
> YARN-4734.6.patch, YARN-4734.7.patch, YARN-4734.8.patch, 
> YARN-4734.9-NOT_READY.patch
>
>
> YARN-2928 branch is planned to merge back to trunk shortly, it depends on 
> changes of YARN-3368. This JIRA is to track the merging task.






[jira] [Commented] (YARN-4743) ResourceManager crash because TimSort

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605908#comment-15605908
 ] 

Hadoop QA commented on YARN-4743:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 30s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 55s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835157/YARN-4743-v4.patch |
| JIRA Issue | YARN-4743 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f2990e7beca8 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dbd2057 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13503/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13503/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> ResourceManager crash because TimSort
> -
>
> Key: YARN-4743
> URL: https://issues.apache.org/jira/browse/YARN-4743
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha2
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-4743-v1.patch, YARN-4743-v2.patch, 
> YARN-4743-v3.patch, 

[jira] [Commented] (YARN-5770) Performance improvement of native-services REST API service

2016-10-25 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605889#comment-15605889
 ] 

Billie Rinaldi commented on YARN-5770:
--

This patch looks good to me. I will test it out locally before completing my 
review.

> Performance improvement of native-services REST API service
> ---
>
> Key: YARN-5770
> URL: https://issues.apache.org/jira/browse/YARN-5770
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
> Attachments: YARN-5770-yarn-native-services.phase1.001.patch, 
> YARN-5770-yarn-native-services.phase1.002.patch
>
>
> Make enhancements and bug-fixes to eliminate frequent full GC of the REST API 
> Service. Dependent on a few Slider fixes such as SLIDER-1168 as well.






[jira] [Commented] (YARN-5778) Add .keep file for yarn native services AM web app

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605887#comment-15605887
 ] 

Hadoop QA commented on YARN-5778:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
32s {color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s 
{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 29s 
{color} | {color:red} hadoop-yarn-slider-core in yarn-native-services failed. 
{color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 29s 
{color} | {color:red} hadoop-yarn-slider-core in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 30s {color} 
| {color:red} hadoop-yarn-slider-core in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 17s 
{color} | {color:red} The patch generated 11 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 47s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
slider.core.registry.docstore.TestPublishedConfigurationOutputter |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835161/YARN-5778-yarn-native-services.001.patch
 |
| JIRA Issue | YARN-5778 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  |
| uname | Linux e878c4f22871 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 023be93 |
| Default Java | 1.8.0_101 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/13504/artifact/patchprocess/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/13504/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13504/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/13504/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13504/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/13504/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 U: 

[jira] [Commented] (YARN-5775) Bug fixes in swagger definition

2016-10-25 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605886#comment-15605886
 ] 

Billie Rinaldi commented on YARN-5775:
--

+1, straightforward patch.

> Bug fixes in swagger definition
> ---
>
> Key: YARN-5775
> URL: https://issues.apache.org/jira/browse/YARN-5775
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
> Attachments: YARN-5775-yarn-native-services.001.patch
>
>
> All enums have been listed in lowercase. Need to convert all of them to 
> uppercase.
> For e.g. ContainerState:
> {noformat}
> enum:
>   - init
>   - ready
> {noformat}
> needs to be changed to -
> {noformat}
> enum:
>   - INIT
>   - READY
> {noformat}






[jira] [Commented] (YARN-4734) Merge branch:YARN-3368 to trunk

2016-10-25 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605875#comment-15605875
 ] 

Allen Wittenauer commented on YARN-4734:


No, I saw it.  I just opted to ignore it. We had already started reworking the 
YARN web initialization, since a) its code differed from the rest of Hadoop and 
b) the rest of Hadoop actually works.  But then we got distracted with other 
stuff and put it on the shelf.  The problem itself should be easy for you folks 
to reproduce, though: you just need to stuff a class that implements 
AltKerberos into hadoop.http.authentication.type.
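
For anyone trying that repro, a minimal sketch of such a class, assuming the 
hadoop-auth AltKerberosAuthenticationHandler base (the handler below is a 
dummy for exercising the code path, not a real authentication scheme):
{code}
import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.hadoop.security.authentication.client.AuthenticationException;
import org.apache.hadoop.security.authentication.server.AltKerberosAuthenticationHandler;
import org.apache.hadoop.security.authentication.server.AuthenticationToken;

// Dummy handler: answers every non-Kerberos request with a fixed identity
// so the AltKerberos path in the web UIs gets exercised.
public class DummyAltKerberosHandler extends AltKerberosAuthenticationHandler {
  @Override
  public AuthenticationToken alternateAuthenticate(HttpServletRequest request,
      HttpServletResponse response) throws IOException, AuthenticationException {
    return new AuthenticationToken("test", "test", getType());
  }
}
{code}
Then set {{hadoop.http.authentication.type}} in core-site.xml to the fully 
qualified name of that class.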

> Merge branch:YARN-3368 to trunk
> ---
>
> Key: YARN-4734
> URL: https://issues.apache.org/jira/browse/YARN-4734
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4734.1.patch, YARN-4734.10-NOT_READY.patch, 
> YARN-4734.11-NOT_READY.patch, YARN-4734.12-NOT_READY.patch, 
> YARN-4734.13.patch, YARN-4734.14.patch, YARN-4734.15.patch, 
> YARN-4734.2.patch, YARN-4734.3.patch, YARN-4734.4.patch, YARN-4734.5.patch, 
> YARN-4734.6.patch, YARN-4734.7.patch, YARN-4734.8.patch, 
> YARN-4734.9-NOT_READY.patch
>
>
> YARN-2928 branch is planned to merge back to trunk shortly, it depends on 
> changes of YARN-3368. This JIRA is to track the merging task.






[jira] [Commented] (YARN-5778) Add .keep file for yarn native services AM web app

2016-10-25 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605868#comment-15605868
 ] 

Gour Saha commented on YARN-5778:
-

Looks good. +1 for the patch.

> Add .keep file for yarn native services AM web app
> --
>
> Key: YARN-5778
> URL: https://issues.apache.org/jira/browse/YARN-5778
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Attachments: YARN-5778-yarn-native-services.001.patch
>
>
> The empty .keep file is needed to preserve the directory structure for the 
> yarn native services AM web app. To add this file, run "git add -f 
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/main/resources/webapps/slideram/.keep"
>  before doing the git commit.






[jira] [Updated] (YARN-5778) Add .keep file for yarn native services AM web app

2016-10-25 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-5778:
-
Attachment: YARN-5778-yarn-native-services.001.patch

> Add .keep file for yarn native services AM web app
> --
>
> Key: YARN-5778
> URL: https://issues.apache.org/jira/browse/YARN-5778
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Attachments: YARN-5778-yarn-native-services.001.patch
>
>
> The empty .keep file is needed to preserve the directory structure for the 
> yarn native services AM web app. To add this file, run "git add -f 
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/main/resources/webapps/slideram/.keep"
>  before doing the git commit.






[jira] [Created] (YARN-5778) Add .keep file for yarn native services AM web app

2016-10-25 Thread Billie Rinaldi (JIRA)
Billie Rinaldi created YARN-5778:


 Summary: Add .keep file for yarn native services AM web app
 Key: YARN-5778
 URL: https://issues.apache.org/jira/browse/YARN-5778
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Billie Rinaldi
Assignee: Billie Rinaldi


The empty .keep file is needed to preserve the directory structure for the yarn 
native services AM web app. To add this file, run "git add -f 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/main/resources/webapps/slideram/.keep"
 before doing the git commit.






[jira] [Commented] (YARN-5548) Random test failure TestRMRestart#testFinishedAppRemovalAfterRMRestart

2016-10-25 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605769#comment-15605769
 ] 

Sunil G commented on YARN-5548:
---

YARN-5375 now has a state-store based solution. Hence we can ensure that any 
events fired from the RM to the state store, and the subsequent events back 
from the state store to the RM, are handled in a linear way. So we could now 
try to avoid a few waitForState calls. Given that YARN-5375 is going in in its 
current form, do we still need this new waitForState?
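
For reference, a sketch of the test pattern this would simplify, assuming 
MockRM#drainEvents ends up being invoked implicitly as YARN-5375 proposes 
(illustrative only, not the actual test code):
{code}
// With state-store events handled linearly, a single waitForState right
// after the triggering action should suffice, instead of polling several
// intermediate states in retry loops.
MockRM rm = new MockRM(conf);
rm.start();
RMApp app = rm.submitApp(1024);
rm.waitForState(app.getApplicationId(), RMAppState.ACCEPTED);
{code}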

> Random test failure TestRMRestart#testFinishedAppRemovalAfterRMRestart
> --
>
> Key: YARN-5548
> URL: https://issues.apache.org/jira/browse/YARN-5548
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5548.0001.patch, YARN-5548.0002.patch, 
> YARN-5548.0003.patch
>
>
> https://builds.apache.org/job/PreCommit-YARN-Build/12850/testReport/org.apache.hadoop.yarn.server.resourcemanager/TestRMRestart/testFinishedAppRemovalAfterRMRestart/
> {noformat}
> Error Message
> Stacktrace
> java.lang.AssertionError: expected null, but was:<application_submission_context { application_id { id: 1 cluster_timestamp: 
> 1471885197388 } application_name: "" queue: "default" priority { priority: 0 
> } am_container_spec { } cancel_tokens_when_complete: true maxAppAttempts: 2 
> resource { memory: 1024 virtual_cores: 1 } applicationType: "YARN" 
> keep_containers_across_application_attempts: false 
> attempt_failures_validity_interval: 0 am_container_resource_request { 
> priority { priority: 0 } resource_name: "*" capability { memory: 1024 
> virtual_cores: 1 } num_containers: 0 relax_locality: true 
> node_label_expression: "" execution_type_request { execution_type: GUARANTEED 
> enforce_execution_type: false } } } user: "jenkins" start_time: 1471885197417 
> application_state: RMAPP_FINISHED finish_time: 1471885197478>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:664)
>   at org.junit.Assert.assertNull(Assert.java:646)
>   at org.junit.Assert.assertNull(Assert.java:656)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testFinishedAppRemovalAfterRMRestart(TestRMRestart.java:1656)
> {noformat}






[jira] [Updated] (YARN-4743) ResourceManager crash because TimSort

2016-10-25 Thread Zephyr Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zephyr Guo updated YARN-4743:
-
Attachment: YARN-4743-v4.patch

> ResourceManager crash because TimSort
> -
>
> Key: YARN-4743
> URL: https://issues.apache.org/jira/browse/YARN-4743
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha2
>Reporter: Zephyr Guo
>Assignee: Zephyr Guo
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-4743-v1.patch, YARN-4743-v2.patch, 
> YARN-4743-v3.patch, YARN-4743-v4.patch, timsort.log
>
>
> {code}
> 2016-02-26 14:08:50,821 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type NODE_UPDATE to the scheduler
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
>  at java.util.TimSort.mergeHi(TimSort.java:868)
>  at java.util.TimSort.mergeAt(TimSort.java:485)
>  at java.util.TimSort.mergeCollapse(TimSort.java:410)
>  at java.util.TimSort.sort(TimSort.java:214)
>  at java.util.TimSort.sort(TimSort.java:173)
>  at java.util.Arrays.sort(Arrays.java:659)
>  at java.util.Collections.sort(Collections.java:217)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.assignContainer(FSLeafQueue.java:316)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSParentQueue.assignContainer(FSParentQueue.java:240)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.attemptScheduling(FairScheduler.java:1091)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.nodeUpdate(FairScheduler.java:989)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1185)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:112)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:684)
>  at java.lang.Thread.run(Thread.java:745)
> 2016-02-26 14:08:50,822 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
> {code}
> Actually, this bug was found in 2.6.0-cdh5.4.7. {{FairShareComparator}} is not 
> transitive: we get NaN when memorySize=0 and weight=0.
> {code:title=FairSharePolicy.java}
> useToWeightRatio1 = s1.getResourceUsage().getMemorySize() /
>   s1.getWeights().getWeight(ResourceType.MEMORY)
> {code}
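
For context, a standalone sketch (plain Java, not part of the patch) of why the 
zero-over-zero case breaks the contract that TimSort enforces:
{code}
double nanRatio = 0.0 / 0.0;            // memorySize=0, weight=0 -> NaN
double normal   = 1024.0 / 1.0;
System.out.println(nanRatio < normal);  // false: every comparison with NaN is false
System.out.println(nanRatio > normal);  // false as well
// A comparator built on such < / > checks answers inconsistently once NaN
// operands appear, violating transitivity and triggering
// "Comparison method violates its general contract!" inside TimSort.
{code}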






[jira] [Commented] (YARN-5716) Add global scheduler interface definition and update CapacityScheduler to use it.

2016-10-25 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605683#comment-15605683
 ] 

Sunil G commented on YARN-5716:
---

Thanks [~leftnoteasy] for the update.

A few more general comments:

1. {{computeUserLimitAndSetHeadroom}} is invoked from the *accept* call 
chain and from *assignContainers*. I think that is fine provided both give 
similar computed data. With the user-limit preemption improvement, I think that 
if the computation is done outside these scheduler call flows, such user-limit 
computation costs can be brought down.
2. In {{LeafQueue.accept}}, 
{code}
Resources.subtractFrom(usedResource,
    request.getTotalReleasedResource());
{code}
totalReleasedResource also includes reserved resources, correct? Do we need 
to decrement that? I am not very clear on this point.
3. In {{CS#allocateContainerOnSingleNode}}, *node.getReservedContainer()* is 
checked two times (the second time to handle the error case). Could we handle 
it in the first block itself?
4. In {{FiCaSchedulerApp.accept}},
  - a reference to {{resourceRequests}} is kept and, on failure, updated back 
into the schedulingInfo object. I do not feel it is a clean implementation. I 
remember this change, but I feel we need not take a reference at the app level; 
maybe we can keep it in schedulingInfo and use the same during recovery.
  - The method is very lengthy. We could factor out logic such as the increase 
request handling, which would make it easier to read.
  - {{readLock}} is mostly needed for the increase request and for 
appSchedulingInfo. I think some optimizations here could be done separately in 
another ticket as an improvement.
  - prefer *equals* over {{if (fromReservedContainer != 
reservedContainerOnNode)}} (see the sketch after this list)

Also, in this round I have checked reservation continue-scheduling and 
lazy-preemption in particular. Generally both look fine to me.
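
For the last item under 4, a null-safe sketch of the *equals*-based check 
(illustrative only):
{code}
// java.util.Objects#equals handles nulls, unlike the reference comparison:
if (!Objects.equals(fromReservedContainer, reservedContainerOnNode)) {
  // the containers differ (or exactly one of them is null)
}
{code}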

> Add global scheduler interface definition and update CapacityScheduler to use 
> it.
> -
>
> Key: YARN-5716
> URL: https://issues.apache.org/jira/browse/YARN-5716
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5716.001.patch, YARN-5716.002.patch, 
> YARN-5716.003.patch, YARN-5716.004.patch, YARN-5716.005.patch, 
> YARN-5716.006.patch, YARN-5716.007.patch, YARN-5716.008.patch
>
>
> Target of this JIRA:
> - Definition of interfaces / objects which will be used by global scheduling; 
> these will be shared by different schedulers.
> - Modify CapacityScheduler to use it.






[jira] [Commented] (YARN-5765) LinuxContainerExecutor creates appcache and its subdirectories with wrong group owner.

2016-10-25 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605561#comment-15605561
 ] 

Naganarasimha G R commented on YARN-5765:
-

Thanks [~haibochen] for the insights on the chmod command; I was not aware of 
it. At first glance, as initially suggested, it would be ideal to set 
{{umask(0027);}} in {{create_validate_dir}} before making the *mkdir* system 
call, rather than using chmod. This ensures that even if the cluster has been 
configured with a restrictive umask (e.g., umask 077), the directories still 
get the proper permissions. Another approach I can think of is the following, 
though I am not sure whether it works:
{code}
} else {
  // Explicitly set the permission after creating the directory in case
  // umask has been set to a restrictive value, e.g., 0077.
  perm = perm | S_ISGID;
  if (chmod(npath, perm) != 0) {
    int permInt = perm & (S_IRWXU | S_IRWXG | S_IRWXO);
    fprintf(LOGFILE, "Can't chmod %s to the required permission %o - %s\n",
            npath, permInt, strerror(errno));
    return -1;
  }
}
{code}
Thoughts?

> LinuxContainerExecutor creates appcache and its subdirectories with wrong 
> group owner.
> --
>
> Key: YARN-5765
> URL: https://issues.apache.org/jira/browse/YARN-5765
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
>
> LinuxContainerExecutor creates usercache/\{userId\}/appcache/\{appId\} with 
> wrong group owner, causing Log aggregation and ShuffleHandler to fail because 
> node manager process does not have permission to read the files under the 
> directory.
> This can be easily reproduced by enabling LCE and submitting a MR example 
> job. 






[jira] [Commented] (YARN-5148) [YARN-3368] Add page to new YARN UI to view server side configurations/logs/JVM-metrics

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605535#comment-15605535
 ] 

Hadoop QA commented on YARN-5148:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch 91 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 52s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5a4801a |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835141/YARN-5148-YARN-3368.05.patch
 |
| JIRA Issue | YARN-5148 |
| Optional Tests |  asflicense  |
| uname | Linux 726ce6f07f61 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3368 / 9690f29 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/13502/artifact/patchprocess/whitespace-tabs.txt
 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13502/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Add page to new YARN UI to view server side 
> configurations/logs/JVM-metrics
> ---
>
> Key: YARN-5148
> URL: https://issues.apache.org/jira/browse/YARN-5148
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Kai Sasaki
> Attachments: Screen Shot 2016-09-11 at 23.28.31.png, Screen Shot 
> 2016-09-13 at 22.27.00.png, YARN-5148-YARN-3368.01.patch, 
> YARN-5148-YARN-3368.02.patch, YARN-5148-YARN-3368.03.patch, 
> YARN-5148-YARN-3368.04.patch, YARN-5148-YARN-3368.05.patch, yarn-conf.png, 
> yarn-tools.png
>
>







[jira] [Commented] (YARN-4463) Container launch failure when yarn.nodemanager.log-dirs directory path contains space

2016-10-25 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605532#comment-15605532
 ] 

Bibin A Chundatt commented on YARN-4463:


[~sunilg]/[~rohithsharma]
Another option could be to skip all paths containing spaces, for both 
nm-local-dir and nm-log-dir.
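
A minimal sketch of that option, assuming the filtering happens where the dirs 
are read from configuration (the helper class and method names are 
illustrative):
{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Hypothetical helper: drop any configured NM log dir whose path contains
// whitespace instead of letting the container launch fail later.
public final class DirFilter {
  static List<String> usableLogDirs(Configuration conf) {
    List<String> usable = new ArrayList<>();
    for (String dir : conf.getTrimmedStrings(YarnConfiguration.NM_LOG_DIRS)) {
      if (dir.matches(".*\\s.*")) {
        continue; // skip paths containing spaces or tabs
      }
      usable.add(dir);
    }
    return usable;
  }
}
{code}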

> Container launch failure when yarn.nodemanager.log-dirs directory path 
> contains space
> -
>
> Key: YARN-4463
> URL: https://issues.apache.org/jira/browse/YARN-4463
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>
> If the container log directory path contains a space, container-launch fails.
> Even with DEBUG logs enabled, the only log we are able to get is 
> {noformat}
> Container id: container_e32_1450233925719_0009_01_22
> Exit code: 1
> Stack trace: ExitCodeException exitCode=1:
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:912)
> at org.apache.hadoop.util.Shell.run(Shell.java:823)
> at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1102)
> at 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:225)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:304)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:84)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> We should make container-launch support NM log directory paths containing spaces.






[jira] [Updated] (YARN-5148) [YARN-3368] Add page to new YARN UI to view server side configurations/logs/JVM-metrics

2016-10-25 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated YARN-5148:
-
Attachment: YARN-5148-YARN-3368.05.patch

> [YARN-3368] Add page to new YARN UI to view server side 
> configurations/logs/JVM-metrics
> ---
>
> Key: YARN-5148
> URL: https://issues.apache.org/jira/browse/YARN-5148
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Kai Sasaki
> Attachments: Screen Shot 2016-09-11 at 23.28.31.png, Screen Shot 
> 2016-09-13 at 22.27.00.png, YARN-5148-YARN-3368.01.patch, 
> YARN-5148-YARN-3368.02.patch, YARN-5148-YARN-3368.03.patch, 
> YARN-5148-YARN-3368.04.patch, YARN-5148-YARN-3368.05.patch, yarn-conf.png, 
> yarn-tools.png
>
>







[jira] [Commented] (YARN-5375) invoke MockRM#drainEvents implicitly in MockRM methods to reduce test failures

2016-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15605453#comment-15605453
 ] 

Hadoop QA commented on YARN-5375:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 13s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 40s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 4 
new + 414 unchanged - 3 fixed = 418 total (was 417) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 37s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 17s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 39s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835111/YARN-5375.08.patch |
| JIRA Issue | YARN-5375 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 827e9d9d63c5 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dbd2057 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13501/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13501/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: hadoop-yarn-project/hadoop-yarn |
| Console output | 
