[jira] [Commented] (YARN-5380) NMTimelinePublisher should use getMemorySize instead of getMemory

2016-07-14 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378791#comment-15378791
 ] 

Karthik Kambatla commented on YARN-5380:


I am experiencing technical difficulties checking this in.

[~leftnoteasy], [~jianhe] - can either of you take care of this? 

> NMTimelinePublisher should use getMemorySize instead of getMemory
> -
>
> Key: YARN-5380
> URL: https://issues.apache.org/jira/browse/YARN-5380
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Karthik Kambatla
>Assignee: Vrushali C
>  Labels: newbie
> Attachments: YARN-5380.01.patch
>
>
> NMTimelinePublisher should use getMemorySize instead of getMemory, because 
> the latter is deprecated in favor of the former. 
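
For context, a minimal sketch of the API difference the description refers to; the 
wrapper class and method below are illustrative, not the actual YARN-5380 patch:

{code}
import org.apache.hadoop.yarn.api.records.Resource;

public class MemorySizeExample {
  // Resource#getMemory() is deprecated and returns an int;
  // Resource#getMemorySize() is its replacement and returns a long,
  // so large memory values no longer risk overflow.
  static long publishedMemoryMB(Resource resource) {
    return resource.getMemorySize();
  }
}
{code}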



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5383) Fix findbugs for nodemanager & checkstyle warnings in nodemanager.ContainerExecutor

2016-07-14 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378777#comment-15378777
 ] 

Varun Saxena commented on YARN-5383:


LGTM. Will commit it later today.

> Fix findbugs for nodemanager & checkstyle warnings in 
> nodemanager.ContainerExecutor
> ---
>
> Key: YARN-5383
> URL: https://issues.apache.org/jira/browse/YARN-5383
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: YARN-5383.01.patch
>
>
> Nodemanager build shows a findbugs warning
> {code}
> Performance Warnings
> Code  Warning
> WMI   
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(OutputStream,
>  Map, Map, List, Path, String) makes inefficient use of keySet iterator 
> instead of entrySet iterator
> Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
> In class org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor
> In method 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(OutputStream,
>  Map, Map, List, Path, String)
> At ContainerExecutor.java:[line 330]
> Details
> WMI_WRONG_MAP_ITERATOR: Inefficient use of keySet iterator instead of 
> entrySet iterator
> This method accesses the value of a Map entry, using a key that was retrieved 
> from a keySet iterator. It is more efficient to use an iterator on the 
> entrySet of the map, to avoid the Map.get(key) lookup.
> {code}
> There are also several checkstyle errors in the same class 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutorContainerExecutor
> {code}
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[308]
>  (indentation) Indentation: 'ContainerLaunch' have incorrect indentation 
> level 6, expected level should be 8.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[319:29]
>  (whitespace) WhitespaceAfter: ',' is not followed by whitespace.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[474:52]
>  (coding) HiddenField: 'conf' hides a field.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[497:52]
>  (coding) HiddenField: 'conf' hides a field.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[522:52]
>  (coding) HiddenField: 'conf' hides a field.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[529]
>  (sizes) LineLength: Line is longer than 80 characters (found 81).
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[571:21]
>  (coding) HiddenField: 'conf' hides a field.
> {code}
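
For reference, a minimal sketch of the keySet-vs-entrySet pattern the findbugs 
explanation above describes; the method and variable names are illustrative and 
this is not the actual writeLaunchEnv code:

{code}
import java.util.Map;

public class EntrySetExample {
  // Pattern findbugs flags as WMI_WRONG_MAP_ITERATOR: every iteration
  // performs an extra Map.get(key) lookup.
  static void withKeySet(Map<String, String> env, StringBuilder out) {
    for (String key : env.keySet()) {
      out.append(key).append('=').append(env.get(key)).append('\n');
    }
  }

  // The usual fix: iterate over entrySet so key and value arrive together.
  static void withEntrySet(Map<String, String> env, StringBuilder out) {
    for (Map.Entry<String, String> entry : env.entrySet()) {
      out.append(entry.getKey()).append('=')
         .append(entry.getValue()).append('\n');
    }
  }
}
{code}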



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5079) [Umbrella] Native YARN framework layer for services and beyond

2016-07-14 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378764#comment-15378764
 ] 

Vinod Kumar Vavilapalli commented on YARN-5079:
---

Reigniting this. Started an email thread titled _"(DISCUSS) YARN-5079 : Native 
YARN framework layer for services and Apache Slider"_ in the YARN dev mailing 
list and cross-posted to the Slider dev lists.

> [Umbrella] Native YARN framework layer for services and beyond
> --
>
> Key: YARN-5079
> URL: https://issues.apache.org/jira/browse/YARN-5079
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>
> (See overview doc at YARN-4692, modifying and copy-pasting some of the 
> relevant pieces and sub-section 3.3.1 to track the specific sub-item.)
> (This is a companion to YARN-4793 in our effort to simplify the entire story, 
> but focusing on APIs)
> So far, YARN by design has restricted itself to having a very low-level API 
> that can support any type of application. Frameworks like Apache Hadoop 
> MapReduce, Apache Tez, Apache Spark, Apache REEF, Apache Twill, Apache Helix 
> and others ended up exposing higher level APIs that end-users can directly 
> leverage to build their applications on top of YARN. On the services side, 
> Apache Slider has done something similar.
> With our current attention on making services first-class and simplified, 
> it's time to take a fresh look at how we can make Apache Hadoop YARN support 
> services well out of the box. Beyond the functionality that I outlined in the 
> previous sections in the doc on how NodeManagers can be enhanced to help 
> services, the biggest missing piece is the framework itself. There is a lot 
> of very important functionality that a services' framework can own together 
> with YARN in executing services end-to-end.
> In this JIRA I propose we look at having a native Apache Hadoop framework for 
> running services natively on YARN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5287) LinuxContainerExecutor fails to set proper permission

2016-07-14 Thread Ying Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378730#comment-15378730
 ] 

Ying Zhang edited comment on YARN-5287 at 7/15/16 1:44 AM:
---

My fault. Got it.


was (Author: ying zhang):
My fault. Note that.

> LinuxContainerExecutor fails to set proper permission
> -
>
> Key: YARN-5287
> URL: https://issues.apache.org/jira/browse/YARN-5287
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.2
>Reporter: Ying Zhang
>Assignee: Ying Zhang
>Priority: Minor
> Attachments: YARN-5287-tmp.patch, YARN-5287.003.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> LinuxContainerExecutor fails to set the proper permissions on the local 
> directories (i.e., /hadoop/yarn/local/usercache/... by default) if the cluster 
> has been configured with a restrictive umask, e.g. umask 077. The job fails 
> with the following error:
> Path /hadoop/yarn/local/usercache/ambari-qa/appcache/application_ has 
> permission 700 but needs permission 750
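
To illustrate the failure mode, a hedged sketch (not the actual YARN-5287 fix) of 
why setting the mode explicitly after creating a directory sidesteps the process 
umask; the class and method names here are hypothetical:

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class UmaskExample {
  // Under umask 077 a plain mkdir yields mode 700. Explicitly setting
  // the permissions afterwards guarantees the expected 750 regardless
  // of the umask the NodeManager process inherited.
  static void createAppCacheDir(String dir) throws IOException {
    Path path = Paths.get(dir);
    Files.createDirectories(path);
    Set<PosixFilePermission> rwxrx =
        PosixFilePermissions.fromString("rwxr-x---");
    Files.setPosixFilePermissions(path, rwxrx);
  }
}
{code}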



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5287) LinuxContainerExecutor fails to set proper permission

2016-07-14 Thread Ying Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378730#comment-15378730
 ] 

Ying Zhang commented on YARN-5287:
--

My fault. Note that.

> LinuxContainerExecutor fails to set proper permission
> -
>
> Key: YARN-5287
> URL: https://issues.apache.org/jira/browse/YARN-5287
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.2
>Reporter: Ying Zhang
>Assignee: Ying Zhang
>Priority: Minor
> Attachments: YARN-5287-tmp.patch, YARN-5287.003.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> LinuxContainerExecutor fails to set the proper permissions on the local 
> directories (i.e., /hadoop/yarn/local/usercache/... by default) if the cluster 
> has been configured with a restrictive umask, e.g. umask 077. The job fails 
> with the following error:
> Path /hadoop/yarn/local/usercache/ambari-qa/appcache/application_ has 
> permission 700 but needs permission 750



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5181) ClusterNodeTracker: add method to get list of nodes matching a specific resourceName

2016-07-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378686#comment-15378686
 ] 

Hadoop QA commented on YARN-5181:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 29s {color} 
| {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 2 unchanged - 1 fixed = 3 total (was 3) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 17s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 1 unchanged - 0 fixed = 3 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 18s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 2 new + 989 unchanged - 0 fixed = 991 total (was 989) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 36m 51s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 31s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818063/yarn-5181-2.patch |
| JIRA Issue | YARN-5181 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 09da718a1a04 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e549a9a |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/12336/artifact/patchprocess/diff-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12336/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/12336/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results 

[jira] [Commented] (YARN-5380) NMTimelinePublisher should use getMemorySize instead of getMemory

2016-07-14 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378668#comment-15378668
 ] 

Karthik Kambatla commented on YARN-5380:


+1. Checking this in. 

> NMTimelinePublisher should use getMemorySize instead of getMemory
> -
>
> Key: YARN-5380
> URL: https://issues.apache.org/jira/browse/YARN-5380
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Karthik Kambatla
>Assignee: Vrushali C
>  Labels: newbie
> Attachments: YARN-5380.01.patch
>
>
> NMTimelinePublisher should use getMemorySize instead of getMemory, because 
> the latter is deprecated in favor of the former. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5181) ClusterNodeTracker: add method to get list of nodes matching a specific resourceName

2016-07-14 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5181:
---
Attachment: yarn-5181-2.patch

Thanks for the review, [~asuresh]. Updated patch incorporates your suggestions. 

> ClusterNodeTracker: add method to get list of nodes matching a specific 
> resourceName
> 
>
> Key: YARN-5181
> URL: https://issues.apache.org/jira/browse/YARN-5181
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: yarn-5181-1.patch, yarn-5181-2.patch
>
>
> ClusterNodeTracker should have a method to return the list of nodes matching 
> a particular resourceName. This is so we can identify which nodes a 
> particular ResourceRequest is interested in, which in turn is useful for 
> YARN-5139 (global scheduler) and YARN-4752 (FairScheduler preemption 
> overhaul). 
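
As a rough illustration of the proposed semantics (the real ClusterNodeTracker API 
and node types may differ), resourceName follows ResourceRequest conventions: "*" 
for ANY, a rack name, or a host name:

{code}
import java.util.ArrayList;
import java.util.List;

public class NodeFilterSketch {
  // Hypothetical node abstraction; the real tracker works with scheduler nodes.
  interface TrackedNode {
    String getHostName();
    String getRackName();
  }

  private final List<TrackedNode> nodes = new ArrayList<>();

  // Returns all nodes if resourceName is ANY ("*"), otherwise the nodes
  // whose host name or rack name matches.
  List<TrackedNode> getNodesByResourceName(String resourceName) {
    List<TrackedNode> matching = new ArrayList<>();
    for (TrackedNode node : nodes) {
      if ("*".equals(resourceName)
          || resourceName.equals(node.getHostName())
          || resourceName.equals(node.getRackName())) {
        matching.add(node);
      }
    }
    return matching;
  }
}
{code}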



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5383) Fix findbugs for nodemanager & checkstyle warnings in nodemanager.ContainerExecutor

2016-07-14 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378637#comment-15378637
 ] 

Vrushali C commented on YARN-5383:
--

The findbugs warning shown in red is for trunk, which is exactly what this 
patch fixes.

> Fix findbugs for nodemanager & checkstyle warnings in 
> nodemanager.ContainerExecutor
> ---
>
> Key: YARN-5383
> URL: https://issues.apache.org/jira/browse/YARN-5383
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: YARN-5383.01.patch
>
>
> Nodemanager build shows a findbugs warning
> {code}
> Performance Warnings
> Code  Warning
> WMI   
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(OutputStream,
>  Map, Map, List, Path, String) makes inefficient use of keySet iterator 
> instead of entrySet iterator
> Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
> In class org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor
> In method 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(OutputStream,
>  Map, Map, List, Path, String)
> At ContainerExecutor.java:[line 330]
> Details
> WMI_WRONG_MAP_ITERATOR: Inefficient use of keySet iterator instead of 
> entrySet iterator
> This method accesses the value of a Map entry, using a key that was retrieved 
> from a keySet iterator. It is more efficient to use an iterator on the 
> entrySet of the map, to avoid the Map.get(key) lookup.
> {code}
> There are also several checkstyle errors in the same class 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutorContainerExecutor
> {code}
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[308]
>  (indentation) Indentation: 'ContainerLaunch' have incorrect indentation 
> level 6, expected level should be 8.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[319:29]
>  (whitespace) WhitespaceAfter: ',' is not followed by whitespace.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[474:52]
>  (coding) HiddenField: 'conf' hides a field.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[497:52]
>  (coding) HiddenField: 'conf' hides a field.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[522:52]
>  (coding) HiddenField: 'conf' hides a field.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[529]
>  (sizes) LineLength: Line is longer than 80 characters (found 81).
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[571:21]
>  (coding) HiddenField: 'conf' hides a field.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5386) Add a priority-aware Replanner

2016-07-14 Thread Sean Po (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Po updated YARN-5386:
--
Description: YARN-5211 proposes adding support for generalized priorities 
for reservations in the YARN ReservationSystem. This JIRA is a sub-task to 
track the addition of a priority-aware re-planner to accomplish it. Please 
refer to the design doc in the parent JIRA for details.

> Add a priority-aware Replanner
> --
>
> Key: YARN-5386
> URL: https://issues.apache.org/jira/browse/YARN-5386
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, fairscheduler, resourcemanager
>Reporter: Sean Po
>Assignee: Sean Po
>
> YARN-5211 proposes adding support for generalized priorities for reservations 
> in the YARN ReservationSystem. This JIRA is a sub-task to track the addition 
> of a priority-aware re-planner to accomplish it. Please refer to the design 
> doc in the parent JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5384) Expose priority in ReservationSystem submission APIs

2016-07-14 Thread Sean Po (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Po updated YARN-5384:
--
Description: 
YARN-5211 proposes adding support for generalized priorities for reservations 
in the YARN ReservationSystem. This JIRA is a sub-task to track the changes 
needed in ApplicationClientProtocol to accomplish it. Please refer to the 
design doc in the parent JIRA for details.


> Expose priority in ReservationSystem submission APIs
> 
>
> Key: YARN-5384
> URL: https://issues.apache.org/jira/browse/YARN-5384
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, fairscheduler, resourcemanager
>Reporter: Sean Po
>Assignee: Sean Po
>
> YARN-5211 proposes adding support for generalized priorities for reservations 
> in the YARN ReservationSystem. This JIRA is a sub-task to track the changes 
> needed in ApplicationClientProtocol to accomplish it. Please refer to the 
> design doc in the parent JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5385) Add a PriorityAgent in ReservationSystem

2016-07-14 Thread Sean Po (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Po updated YARN-5385:
--
Description: 
YARN-5211 proposes adding support for generalized priorities for reservations 
in the YARN ReservationSystem. This JIRA is a sub-task to track the addition of 
a priority agent to accomplish it. Please refer to the design doc in the parent 
JIRA for details.


> Add a PriorityAgent in ReservationSystem 
> -
>
> Key: YARN-5385
> URL: https://issues.apache.org/jira/browse/YARN-5385
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, fairscheduler, resourcemanager
>Reporter: Sean Po
>Assignee: Sean Po
>
> YARN-5211 proposes adding support for generalized priorities for reservations 
> in the YARN ReservationSystem. This JIRA is a sub-task to track the addition 
> of a priority agent to accomplish it. Please refer to the design doc in the 
> parent JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5386) Add a priority-aware Replanner

2016-07-14 Thread Sean Po (JIRA)
Sean Po created YARN-5386:
-

 Summary: Add a priority-aware Replanner
 Key: YARN-5386
 URL: https://issues.apache.org/jira/browse/YARN-5386
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: capacity scheduler, fairscheduler, resourcemanager
Reporter: Sean Po
Assignee: Sean Po






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5385) Add a PriorityAgent in ReservationSystem

2016-07-14 Thread Sean Po (JIRA)
Sean Po created YARN-5385:
-

 Summary: Add a PriorityAgent in ReservationSystem 
 Key: YARN-5385
 URL: https://issues.apache.org/jira/browse/YARN-5385
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: capacity scheduler, fairscheduler, resourcemanager
Reporter: Sean Po
Assignee: Sean Po






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5383) Fix findbugs for nodemanager & checkstyle warnings in nodemanager.ContainerExecutor

2016-07-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378625#comment-15378625
 ] 

Hadoop QA commented on YARN-5383:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 41s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 0 new + 0 unchanged - 7 fixed = 0 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
47s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 4s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 6s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818058/YARN-5383.01.patch |
| JIRA Issue | YARN-5383 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 97a24b319b6c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e549a9a |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/12335/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12335/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 

[jira] [Updated] (YARN-5384) Expose priority in ReservationSystem submission APIs

2016-07-14 Thread Sean Po (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Po updated YARN-5384:
--
Issue Type: Sub-task  (was: Task)
Parent: YARN-5211

> Expose priority in ReservationSystem submission APIs
> 
>
> Key: YARN-5384
> URL: https://issues.apache.org/jira/browse/YARN-5384
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, fairscheduler, resourcemanager
>Reporter: Sean Po
>Assignee: Sean Po
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5327) API changes required to support recurring reservations in the YARN ReservationSystem

2016-07-14 Thread Sean Po (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Po updated YARN-5327:
--
Parent Issue: YARN-5326  (was: YARN-5211)

> API changes required to support recurring reservations in the YARN 
> ReservationSystem
> 
>
> Key: YARN-5327
> URL: https://issues.apache.org/jira/browse/YARN-5327
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Sangeetha Abdu Jyothi
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to track the changes needed 
> in ApplicationClientProtocol to accomplish it. Please refer to the design doc 
> in the parent JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5327) API changes required to support recurring reservations in the YARN ReservationSystem

2016-07-14 Thread Sean Po (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Po updated YARN-5327:
--
Parent Issue: YARN-5211  (was: YARN-5326)

> API changes required to support recurring reservations in the YARN 
> ReservationSystem
> 
>
> Key: YARN-5327
> URL: https://issues.apache.org/jira/browse/YARN-5327
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Sangeetha Abdu Jyothi
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to track the changes needed 
> in ApplicationClientProtocol to accomplish it. Please refer to the design doc 
> in the parent JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5384) Expose priority in ReservationSystem submission APIs

2016-07-14 Thread Sean Po (JIRA)
Sean Po created YARN-5384:
-

 Summary: Expose priority in ReservationSystem submission APIs
 Key: YARN-5384
 URL: https://issues.apache.org/jira/browse/YARN-5384
 Project: Hadoop YARN
  Issue Type: Task
  Components: capacity scheduler, fairscheduler, resourcemanager
Reporter: Sean Po
Assignee: Sean Po






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5211) Supporting "priorities" in the ReservationSystem

2016-07-14 Thread Sean Po (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Po updated YARN-5211:
--
Description: YARN-1051 introduced a ReservationSystem that enables the YARN 
RM to handle time explicitly, i.e. users can now "reserve" capacity ahead of 
time which is predictably allocated to them. Currently, the ReservationSystem 
has an implicit FIFO priority. This JIRA tracks the effort to generalize 
this to arbitrary priorities. This is non-trivial as the greedy nature of our 
ReservationAgents might need to be revisited if not enough space is found for 
late-arriving but higher-priority reservations.   (was: The ReservationSystem 
currently has an implicit FIFO priority. This JIRA tracks effort to generalize 
this to arbitrary priority. This is non-trivial as the greedy nature of our 
ReservationAgents might need to be revisited if not enough space if found for 
late-arriving but higher priority reservations. )

> Supporting "priorities" in the ReservationSystem
> 
>
> Key: YARN-5211
> URL: https://issues.apache.org/jira/browse/YARN-5211
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Carlo Curino
>Assignee: Sean Po
>
> YARN-1051 introduced a ReservationSystem that enables the YARN RM to handle 
> time explicitly, i.e. users can now "reserve" capacity ahead of time which is 
> predictably allocated to them. Currently, the ReservationSystem has 
> an implicit FIFO priority. This JIRA tracks the effort to generalize this to 
> arbitrary priorities. This is non-trivial as the greedy nature of our 
> ReservationAgents might need to be revisited if not enough space is found for 
> late-arriving but higher-priority reservations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5182) MockNodes.newNodes creates one more node per rack than requested

2016-07-14 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378614#comment-15378614
 ] 

Karthik Kambatla commented on YARN-5182:


Didn't realize this got in. Thanks [~varun_saxena] for review/commit.

> MockNodes.newNodes creates one more node per rack than requested
> 
>
> Key: YARN-5182
> URL: https://issues.apache.org/jira/browse/YARN-5182
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Fix For: 2.9.0
>
> Attachments: yarn-5182-1.patch
>
>
> MockNodes.newNodes creates one more node per rack than requested.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5382) RM does not audit log kill request for active applications

2016-07-14 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378606#comment-15378606
 ] 

Vrushali C commented on YARN-5382:
--

I think I can add this; I will upload a patch shortly.

> RM does not audit log kill request for active applications
> --
>
> Key: YARN-5382
> URL: https://issues.apache.org/jira/browse/YARN-5382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Vrushali C
>
> ClientRMService will audit a kill request but only if it either fails to 
> issue the kill or if the kill is sent to an already finished application.  It 
> does not create a log entry when the application is active, which is arguably 
> the most important case to audit.
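
A hedged sketch of the kind of audit call that appears to be missing for the 
active-application path; the RMAuditLogger overload and constant chosen here are 
best-effort assumptions, not taken from the eventual patch:

{code}
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger;
import org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger.AuditConstants;

public class KillAuditSketch {
  // Record the kill request for an application that is still running,
  // mirroring the audit entries already written for the failure and
  // already-finished cases in ClientRMService.
  static void auditActiveKill(UserGroupInformation callerUGI,
      ApplicationId appId) {
    RMAuditLogger.logSuccess(callerUGI.getShortUserName(),
        AuditConstants.KILL_APP_REQUEST, "ClientRMService", appId);
  }
}
{code}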



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5382) RM does not audit log kill request for active applications

2016-07-14 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C reassigned YARN-5382:


Assignee: Vrushali C

> RM does not audit log kill request for active applications
> --
>
> Key: YARN-5382
> URL: https://issues.apache.org/jira/browse/YARN-5382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Vrushali C
>
> ClientRMService will audit a kill request but only if it either fails to 
> issue the kill or if the kill is sent to an already finished application.  It 
> does not create a log entry when the application is active, which is arguably 
> the most important case to audit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5383) Fix findbugs for nodemanager & checkstyle warnings in nodemanager.ContainerExecutor

2016-07-14 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5383:
-
Component/s: nodemanager

> Fix findbugs for nodemanager & checkstyle warnings in 
> nodemanager.ContainerExecutor
> ---
>
> Key: YARN-5383
> URL: https://issues.apache.org/jira/browse/YARN-5383
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: YARN-5383.01.patch
>
>
> Nodemanager build shows a findbugs warning
> {code}
> Performance Warnings
> Code  Warning
> WMI   
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(OutputStream,
>  Map, Map, List, Path, String) makes inefficient use of keySet iterator 
> instead of entrySet iterator
> Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
> In class org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor
> In method 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(OutputStream,
>  Map, Map, List, Path, String)
> At ContainerExecutor.java:[line 330]
> Details
> WMI_WRONG_MAP_ITERATOR: Inefficient use of keySet iterator instead of 
> entrySet iterator
> This method accesses the value of a Map entry, using a key that was retrieved 
> from a keySet iterator. It is more efficient to use an iterator on the 
> entrySet of the map, to avoid the Map.get(key) lookup.
> {code}
> There are also several checkstyle errors in the same class 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutorContainerExecutor
> {code}
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[308]
>  (indentation) Indentation: 'ContainerLaunch' have incorrect indentation 
> level 6, expected level should be 8.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[319:29]
>  (whitespace) WhitespaceAfter: ',' is not followed by whitespace.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[474:52]
>  (coding) HiddenField: 'conf' hides a field.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[497:52]
>  (coding) HiddenField: 'conf' hides a field.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[522:52]
>  (coding) HiddenField: 'conf' hides a field.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[529]
>  (sizes) LineLength: Line is longer than 80 characters (found 81).
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[571:21]
>  (coding) HiddenField: 'conf' hides a field.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5383) Fix findbugs for nodemanager & checkstyle warnings in nodemanager.ContainerExecutor

2016-07-14 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5383:
-
Attachment: YARN-5383.01.patch

Uploading patch v1. No extra tests added, since the fixes are for findbugs- and 
checkstyle-reported issues.

> Fix findbugs for nodemanager & checkstyle warnings in 
> nodemanager.ContainerExecutor
> ---
>
> Key: YARN-5383
> URL: https://issues.apache.org/jira/browse/YARN-5383
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: YARN-5383.01.patch
>
>
> Nodemanager build shows a findbugs warning
> {code}
> Performance Warnings
> Code  Warning
> WMI   
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(OutputStream,
>  Map, Map, List, Path, String) makes inefficient use of keySet iterator 
> instead of entrySet iterator
> Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
> In class org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor
> In method 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(OutputStream,
>  Map, Map, List, Path, String)
> At ContainerExecutor.java:[line 330]
> Details
> WMI_WRONG_MAP_ITERATOR: Inefficient use of keySet iterator instead of 
> entrySet iterator
> This method accesses the value of a Map entry, using a key that was retrieved 
> from a keySet iterator. It is more efficient to use an iterator on the 
> entrySet of the map, to avoid the Map.get(key) lookup.
> {code}
> There are also several checkstyle errors in the same class 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutorContainerExecutor
> {code}
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[308]
>  (indentation) Indentation: 'ContainerLaunch' have incorrect indentation 
> level 6, expected level should be 8.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[319:29]
>  (whitespace) WhitespaceAfter: ',' is not followed by whitespace.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[474:52]
>  (coding) HiddenField: 'conf' hides a field.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[497:52]
>  (coding) HiddenField: 'conf' hides a field.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[522:52]
>  (coding) HiddenField: 'conf' hides a field.
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[529]
>  (sizes) LineLength: Line is longer than 80 characters (found 81).
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[571:21]
>  (coding) HiddenField: 'conf' hides a field.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5383) Fix findbugs for nodemanager & checkstyle warnings in nodemanager.ContainerExecutor

2016-07-14 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5383:
-
Description: 
Nodemanager build shows a findbugs warning

{code}
Performance Warnings

CodeWarning
WMI 
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(OutputStream,
 Map, Map, List, Path, String) makes inefficient use of keySet iterator instead 
of entrySet iterator
Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
In class org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor
In method 
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(OutputStream,
 Map, Map, List, Path, String)
At ContainerExecutor.java:[line 330]

Details

WMI_WRONG_MAP_ITERATOR: Inefficient use of keySet iterator instead of entrySet 
iterator

This method accesses the value of a Map entry, using a key that was retrieved 
from a keySet iterator. It is more efficient to use an iterator on the entrySet 
of the map, to avoid the Map.get(key) lookup.
{code}


There are also several checkstyle errors in the same class 
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutorContainerExecutor

{code}
[ERROR] 
src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[308]
 (indentation) Indentation: 'ContainerLaunch' have incorrect indentation level 
6, expected level should be 8.
[ERROR] 
src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[319:29]
 (whitespace) WhitespaceAfter: ',' is not followed by whitespace.
[ERROR] 
src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[474:52]
 (coding) HiddenField: 'conf' hides a field.
[ERROR] 
src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[497:52]
 (coding) HiddenField: 'conf' hides a field.
[ERROR] 
src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[522:52]
 (coding) HiddenField: 'conf' hides a field.
[ERROR] 
src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[529]
 (sizes) LineLength: Line is longer than 80 characters (found 81).
[ERROR] 
src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[571:21]
 (coding) HiddenField: 'conf' hides a field.
{code}

  was:

Nodemanager build shows a findbugs warning

{code}
Performance Warnings

CodeWarning
WMI 
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(OutputStream,
 Map, Map, List, Path, String) makes inefficient use of keySet iterator instead 
of entrySet iterator
Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
In class org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor
In method 
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(OutputStream,
 Map, Map, List, Path, String)
At ContainerExecutor.java:[line 330]

Details

WMI_WRONG_MAP_ITERATOR: Inefficient use of keySet iterator instead of entrySet 
iterator

This method accesses the value of a Map entry, using a key that was retrieved 
from a keySet iterator. It is more efficient to use an iterator on the entrySet 
of the map, to avoid the Map.get(key) lookup.
{code}




> Fix findbugs for nodemanager & checkstyle warnings in 
> nodemanager.ContainerExecutor
> ---
>
> Key: YARN-5383
> URL: https://issues.apache.org/jira/browse/YARN-5383
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Vrushali C
>Assignee: Vrushali C
>
> Nodemanager build shows a findbugs warning
> {code}
> Performance Warnings
> Code  Warning
> WMI   
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(OutputStream,
>  Map, Map, List, Path, String) makes inefficient use of keySet iterator 
> instead of entrySet iterator
> Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
> In class org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor
> In method 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(OutputStream,
>  Map, Map, List, Path, String)
> At ContainerExecutor.java:[line 330]
> Details
> WMI_WRONG_MAP_ITERATOR: Inefficient use of keySet iterator instead of 
> entrySet iterator
> This method accesses the value of a Map entry, using a key that was retrieved 
> from a keySet iterator. It is more efficient to use an iterator on the 
> entrySet of the map, to avoid the Map.get(key) lookup.
> {code}
> There are also several checkstyle errors in the same class 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutorContainerExecutor
> {code}
> [ERROR] 
> src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java[308]
>  (indentation) Indentation: 'ContainerLaunch' have incorrect indentation 
> level 6, expected level should be 8.
> [ERROR] 
> 

[jira] [Updated] (YARN-5383) Fix findbugs for nodemanager & checkstyle warnings in nodemanager.ContainerExecutor

2016-07-14 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5383:
-
Summary: Fix findbugs for nodemanager & checkstyle warnings in 
nodemanager.ContainerExecutor  (was: Fix findbugs for nodemanager )

> Fix findbugs for nodemanager & checkstyle warnings in 
> nodemanager.ContainerExecutor
> ---
>
> Key: YARN-5383
> URL: https://issues.apache.org/jira/browse/YARN-5383
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Vrushali C
>Assignee: Vrushali C
>
> Nodemanager build shows a findbugs warning
> {code}
> Performance Warnings
> Code  Warning
> WMI   
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(OutputStream,
>  Map, Map, List, Path, String) makes inefficient use of keySet iterator 
> instead of entrySet iterator
> Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
> In class org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor
> In method 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(OutputStream,
>  Map, Map, List, Path, String)
> At ContainerExecutor.java:[line 330]
> Details
> WMI_WRONG_MAP_ITERATOR: Inefficient use of keySet iterator instead of 
> entrySet iterator
> This method accesses the value of a Map entry, using a key that was retrieved 
> from a keySet iterator. It is more efficient to use an iterator on the 
> entrySet of the map, to avoid the Map.get(key) lookup.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5380) NMTimelinePublisher should use getMemorySize instead of getMemory

2016-07-14 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378586#comment-15378586
 ] 

Vrushali C commented on YARN-5380:
--


The findbugs warning is unrelated to this patch. I filed JIRA YARN-5383 to fix 
that findbugs warning. 

> NMTimelinePublisher should use getMemorySize instead of getMemory
> -
>
> Key: YARN-5380
> URL: https://issues.apache.org/jira/browse/YARN-5380
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Karthik Kambatla
>Assignee: Vrushali C
>  Labels: newbie
> Attachments: YARN-5380.01.patch
>
>
> NMTimelinePublisher should use getMemorySize instead of getMemory, because 
> the latter is deprecated in favor of the former. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5383) Fix findbugs for nodemanager

2016-07-14 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5383:
-
Affects Version/s: 3.0.0-alpha1

> Fix findbugs for nodemanager 
> -
>
> Key: YARN-5383
> URL: https://issues.apache.org/jira/browse/YARN-5383
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Vrushali C
>Assignee: Vrushali C
>
> Nodemanager build shows a findbugs warning
> {code}
> Performance Warnings
> Code  Warning
> WMI   
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(OutputStream,
>  Map, Map, List, Path, String) makes inefficient use of keySet iterator 
> instead of entrySet iterator
> Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
> In class org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor
> In method 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(OutputStream,
>  Map, Map, List, Path, String)
> At ContainerExecutor.java:[line 330]
> Details
> WMI_WRONG_MAP_ITERATOR: Inefficient use of keySet iterator instead of 
> entrySet iterator
> This method accesses the value of a Map entry, using a key that was retrieved 
> from a keySet iterator. It is more efficient to use an iterator on the 
> entrySet of the map, to avoid the Map.get(key) lookup.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5383) Fix findbugs for nodemanager

2016-07-14 Thread Vrushali C (JIRA)
Vrushali C created YARN-5383:


 Summary: Fix findbugs for nodemanager 
 Key: YARN-5383
 URL: https://issues.apache.org/jira/browse/YARN-5383
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Vrushali C
Assignee: Vrushali C



Nodemanager build shows a findbugs warning

{code}
Performance Warnings

CodeWarning
WMI 
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(OutputStream,
 Map, Map, List, Path, String) makes inefficient use of keySet iterator instead 
of entrySet iterator
Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
In class org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor
In method 
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.writeLaunchEnv(OutputStream,
 Map, Map, List, Path, String)
At ContainerExecutor.java:[line 330]

Details

WMI_WRONG_MAP_ITERATOR: Inefficient use of keySet iterator instead of entrySet 
iterator

This method accesses the value of a Map entry, using a key that was retrieved 
from a keySet iterator. It is more efficient to use an iterator on the entrySet 
of the map, to avoid the Map.get(key) lookup.
{code}





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5380) NMTimelinePublisher should use getMemorySize instead of getMemory

2016-07-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378573#comment-15378573
 ] 

Hadoop QA commented on YARN-5380:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 30s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 40s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 0 new + 17 unchanged - 1 fixed = 17 total (was 18) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 4s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 5s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818044/YARN-5380.01.patch |
| JIRA Issue | YARN-5380 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 921408c350cf 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e549a9a |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/12334/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12334/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12334/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.

[jira] [Created] (YARN-5382) RM does not audit log kill request for active applications

2016-07-14 Thread Jason Lowe (JIRA)
Jason Lowe created YARN-5382:


 Summary: RM does not audit log kill request for active applications
 Key: YARN-5382
 URL: https://issues.apache.org/jira/browse/YARN-5382
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.7.2
Reporter: Jason Lowe


ClientRMService will audit a kill request but only if it either fails to issue 
the kill or if the kill is sent to an already finished application.  It does 
not create a log entry when the application is active, which is arguably the 
most important case to audit.
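
A minimal sketch of the gap described above, using invented names rather than the 
actual ClientRMService/RMAuditLogger code: only the failed and already-finished 
branches produce an audit entry, so the active-application branch would need an 
explicit audit call of its own.

{code:java}
// Hypothetical sketch of the control flow described in this issue; class and
// method names are illustrative, not the real ClientRMService implementation.
public class KillAuditSketch {

  enum AppState { ACTIVE, FINISHED }

  static void auditLog(String user, String operation, String appId, String outcome) {
    System.out.printf("AUDIT user=%s op=%s app=%s outcome=%s%n",
        user, operation, appId, outcome);
  }

  static void forceKillApplication(String user, String appId, AppState state,
      boolean aclCheckPasses) {
    if (!aclCheckPasses) {
      // Failure to issue the kill is audited today.
      auditLog(user, "Kill Application Request", appId, "FAILURE");
      return;
    }
    if (state == AppState.FINISHED) {
      // A kill sent to an already-finished application is audited today.
      auditLog(user, "Kill Application Request", appId, "ALREADY_FINISHED");
      return;
    }
    // Active application: the kill event is dispatched, but in the behaviour
    // described here no audit entry is written for this branch.
    // dispatcher.handle(killEventFor(appId));
  }

  public static void main(String[] args) {
    forceKillApplication("alice", "application_1468319707096_0001",
        AppState.ACTIVE, true);
  }
}
{code}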



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5379) TestHBaseTimelineStorage. testWriteApplicationToHBase() fails intermittently

2016-07-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378529#comment-15378529
 ] 

Hudson commented on YARN-5379:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10102 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10102/])
YARN-5379. TestHBaseTimelineStorage. testWriteApplicationToHBase() fails 
(sjlee: rev e549a9af3177f6ee83477cde8bd7d0ed72d6ecec)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorage.java


> TestHBaseTimelineStorage. testWriteApplicationToHBase() fails intermittently
> 
>
> Key: YARN-5379
> URL: https://issues.apache.org/jira/browse/YARN-5379
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test, timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sangjin Lee
>Assignee: Vrushali C
>Priority: Minor
> Fix For: 3.0.0-alpha1
>
> Attachments: YARN-5379-YARN-5355.01.patch, YARN-5379.01.patch
>
>
> The {{TestHBaseTimelineStorage. testWriteApplicationToHBase()}} test seems to 
> fail intermittently:
> {noformat}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorage.testWriteApplicationToHBase(TestHBaseTimelineStorage.java:817)
> {noformat}
> The stdout output:
> {noformat}
> 2016-07-13 00:15:48,883 INFO  [main] zookeeper.RecoverableZooKeeper 
> (RecoverableZooKeeper.java:(120)) - Process 
> identifier=hconnection-0x2b7962a2 connecting to ZooKeeper 
> ensemble=localhost:53474
> 2016-07-13 00:15:48,883 INFO  [main] zookeeper.ZooKeeper 
> (ZooKeeper.java:(438)) - Initiating client connection, 
> connectString=localhost:53474 sessionTimeout=9 
> watcher=hconnection-0x2b7962a20x0, quorum=localhost:53474, baseZNode=/hbase
> 2016-07-13 00:15:48,886 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(975)) - Opening socket 
> connection to server localhost/127.0.0.1:53474. Will not attempt to 
> authenticate using SASL (unknown error)
> 2016-07-13 00:15:48,887 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(852)) - Socket 
> connection established to localhost/127.0.0.1:53474, initiating session
> 2016-07-13 00:15:48,887 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.NIOServerCnxnFactory (NIOServerCnxnFactory.java:run(197)) - Accepted 
> socket connection from /127.0.0.1:38097
> 2016-07-13 00:15:48,887 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.ZooKeeperServer (ZooKeeperServer.java:processConnectRequest(868)) - 
> Client attempting to establish new session at /127.0.0.1:38097
> 2016-07-13 00:15:48,896 INFO  [SyncThread:0] server.ZooKeeperServer 
> (ZooKeeperServer.java:finishSessionInit(617)) - Established session 
> 0x155e19baa520025 with negotiated timeout 4 for client /127.0.0.1:38097
> 2016-07-13 00:15:48,896 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1235)) - Session 
> establishment complete on server localhost/127.0.0.1:53474, sessionid = 
> 0x155e19baa520025, negotiated timeout = 4
> 2016-07-13 00:15:48,911 INFO  [main] zookeeper.RecoverableZooKeeper 
> (RecoverableZooKeeper.java:(120)) - Process 
> identifier=hconnection-0x32130e61 connecting to ZooKeeper 
> ensemble=localhost:53474
> 2016-07-13 00:15:48,912 INFO  [main] zookeeper.ZooKeeper 
> (ZooKeeper.java:(438)) - Initiating client connection, 
> connectString=localhost:53474 sessionTimeout=9 
> watcher=hconnection-0x32130e610x0, quorum=localhost:53474, baseZNode=/hbase
> 2016-07-13 00:15:48,917 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(975)) - Opening socket 
> connection to server localhost/127.0.0.1:53474. Will not attempt to 
> authenticate using SASL (unknown error)
> 2016-07-13 00:15:48,918 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.NIOServerCnxnFactory (NIOServerCnxnFactory.java:run(197)) - Accepted 
> socket connection from /127.0.0.1:38098
> 2016-07-13 00:15:48,921 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(852)) - Socket 
> connection established to localhost/127.0.0.1:53474, initiating session
> 2016-07-13 00:15:48,921 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.ZooKeeperServer (ZooKeeperServer.java:processConnectRequest(868)) - 
> Client attempting to establish new session at /127.0.0.1:38098
> 2016-07-13 

[jira] [Updated] (YARN-5380) NMTimelinePublisher should use getMemorySize instead of getMemory

2016-07-14 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5380:
-
Attachment: YARN-5380.01.patch

Uploading patch v1

> NMTimelinePublisher should use getMemorySize instead of getMemory
> -
>
> Key: YARN-5380
> URL: https://issues.apache.org/jira/browse/YARN-5380
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Karthik Kambatla
>Assignee: Vrushali C
>  Labels: newbie
> Attachments: YARN-5380.01.patch
>
>
> NMTimelinePublisher should use getMemorySize instead of getMemory, because 
> the latter is deprecated in favor of the former. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5195) RM intermittently crashed with NPE while handling APP_ATTEMPT_REMOVED event when async-scheduling enabled in CapacityScheduler

2016-07-14 Thread sandflee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sandflee reassigned YARN-5195:
--

Assignee: sandflee

> RM intermittently crashed with NPE while handling APP_ATTEMPT_REMOVED event 
> when async-scheduling enabled in CapacityScheduler
> --
>
> Key: YARN-5195
> URL: https://issues.apache.org/jira/browse/YARN-5195
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Karam Singh
>Assignee: sandflee
>
> While running gridmix experiments, we once came across an incident where the RM 
> went down with the following exception:
> {noformat}
> 2016-05-28 15:45:24,459 [ResourceManager Event Processor] FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type APP_ATTEMPT_REMOVED to the scheduler
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.completedContainer(LeafQueue.java:1282)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.completedContainerInternal(CapacityScheduler.java:1469)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.completedContainer(AbstractYarnScheduler.java:497)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.doneApplicationAttempt(CapacityScheduler.java:860)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1319)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:127)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:704)
> at java.lang.Thread.run(Thread.java:745)
> 2016-05-28 15:45:24,460 [ApplicationMasterLauncher #49] INFO 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning 
> master appattempt_1464449118385_0006_01
> 2016-05-28 15:45:24,460 [ResourceManager Event Processor] INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5356) NodeManager should communicate physical resource capability to ResourceManager

2016-07-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378498#comment-15378498
 ] 

Hadoop QA commented on YARN-5356:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 32s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 30s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 41s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 16s 
{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 15s 
{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 17s 
{color} | {color:red} hadoop-yarn-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 17s {color} | 
{color:red} hadoop-yarn-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 17s {color} 
| {color:red} hadoop-yarn-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 27s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The 
patch generated 1 new + 43 unchanged - 0 fixed = 44 total (was 43) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 17s 
{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 16s 
{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 11s 
{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 11s 
{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 16s {color} 
| {color:red} hadoop-yarn-server-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 15s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 55s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817559/YARN-5356.000.patch |
| JIRA Issue | YARN-5356 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  

[jira] [Commented] (YARN-2965) Enhance Node Managers to monitor and report the resource usage on machines

2016-07-14 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378465#comment-15378465
 ] 

Inigo Goiri commented on YARN-2965:
---

The failed unit test seems unrelated and the checkstyle issues are related to 
private accesses. Anybody available for review?

> Enhance Node Managers to monitor and report the resource usage on machines
> --
>
> Key: YARN-2965
> URL: https://issues.apache.org/jira/browse/YARN-2965
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Robert Grandl
>Assignee: Inigo Goiri
> Attachments: YARN-2965.000.patch, YARN-2965.001.patch, 
> YARN-2965.002.patch, ddoc_RT.docx
>
>
> This JIRA is about augmenting Node Managers to monitor the resource usage on 
> the machine, aggregate these reports, and expose them to the RM. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5380) NMTimelinePublisher should use getMemorySize instead of getMemory

2016-07-14 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378439#comment-15378439
 ] 

Vrushali C commented on YARN-5380:
--

Taking this up, will post a patch shortly

> NMTimelinePublisher should use getMemorySize instead of getMemory
> -
>
> Key: YARN-5380
> URL: https://issues.apache.org/jira/browse/YARN-5380
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Karthik Kambatla
>Assignee: Vrushali C
>  Labels: newbie
>
> NMTimelinePublisher should use getMemorySize instead of getMemory, because 
> the latter is deprecated in favor of the former. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5380) NMTimelinePublisher should use getMemorySize instead of getMemory

2016-07-14 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C reassigned YARN-5380:


Assignee: Vrushali C

> NMTimelinePublisher should use getMemorySize instead of getMemory
> -
>
> Key: YARN-5380
> URL: https://issues.apache.org/jira/browse/YARN-5380
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Karthik Kambatla
>Assignee: Vrushali C
>  Labels: newbie
>
> NMTimelinePublisher should use getMemorySize instead of getMemory, because 
> the latter is deprecated in favor of the former. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5361) Obtaining logs for completed container says 'file belongs to a running container ' at the end

2016-07-14 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378428#comment-15378428
 ] 

Junping Du commented on YARN-5361:
--

v3 patch LGTM. Will commit it tomorrow if there are no further comments.

> Obtaining logs for completed container says 'file belongs to a running 
> container ' at the end
> -
>
> Key: YARN-5361
> URL: https://issues.apache.org/jira/browse/YARN-5361
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sumana Sathish
>Assignee: Xuan Gong
>Priority: Critical
> Attachments: YARN-5361.1.patch, YARN-5361.2.patch, YARN-5361.3.patch
>
>
> Obtaining logs via the yarn CLI for a completed container of a still-running 
> application says "This log file belongs to a running container 
> (container_e32_1468319707096_0001_01_04) and so may not be complete", 
> which is not correct.
> {code}
> LogType:stdout
> Log Upload Time:Tue Jul 12 10:38:14 + 2016
> Log Contents:
> End of LogType:stdout. This log file belongs to a running container 
> (container_e32_1468319707096_0001_01_04) and so may not be complete.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5379) TestHBaseTimelineStorage. testWriteApplicationToHBase() fails intermittently

2016-07-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378415#comment-15378415
 ] 

Hadoop QA commented on YARN-5379:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 36s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
1s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 47s 
{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 41s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818024/YARN-5379.01.patch |
| JIRA Issue | YARN-5379 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ecde1d0a6815 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6cf0175 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12332/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12332/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> TestHBaseTimelineStorage. testWriteApplicationToHBase() fails intermittently
> 
>
> Key: YARN-5379
> URL: https://issues.apache.org/jira/browse/YARN-5379
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test, timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sangjin Lee
>Assignee: Vrushali C
>

[jira] [Updated] (YARN-5379) TestHBaseTimelineStorage. testWriteApplicationToHBase() fails intermittently

2016-07-14 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-5379:
--
Attachment: YARN-5379.01.patch

Thanks [~vrushalic]! I am +1 on the patch.

FYI, the jenkins run was done against the YARN-5355 branch as opposed to the 
trunk. I'm renaming the patch so that it runs against trunk. As soon as that comes back 
clean, I'll commit it.

> TestHBaseTimelineStorage. testWriteApplicationToHBase() fails intermittently
> 
>
> Key: YARN-5379
> URL: https://issues.apache.org/jira/browse/YARN-5379
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test, timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sangjin Lee
>Assignee: Vrushali C
>Priority: Minor
> Attachments: YARN-5379-YARN-5355.01.patch, YARN-5379.01.patch
>
>
> The {{TestHBaseTimelineStorage. testWriteApplicationToHBase()}} test seems to 
> fail intermittently:
> {noformat}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorage.testWriteApplicationToHBase(TestHBaseTimelineStorage.java:817)
> {noformat}
> The stdout output:
> {noformat}
> 2016-07-13 00:15:48,883 INFO  [main] zookeeper.RecoverableZooKeeper 
> (RecoverableZooKeeper.java:(120)) - Process 
> identifier=hconnection-0x2b7962a2 connecting to ZooKeeper 
> ensemble=localhost:53474
> 2016-07-13 00:15:48,883 INFO  [main] zookeeper.ZooKeeper 
> (ZooKeeper.java:(438)) - Initiating client connection, 
> connectString=localhost:53474 sessionTimeout=9 
> watcher=hconnection-0x2b7962a20x0, quorum=localhost:53474, baseZNode=/hbase
> 2016-07-13 00:15:48,886 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(975)) - Opening socket 
> connection to server localhost/127.0.0.1:53474. Will not attempt to 
> authenticate using SASL (unknown error)
> 2016-07-13 00:15:48,887 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(852)) - Socket 
> connection established to localhost/127.0.0.1:53474, initiating session
> 2016-07-13 00:15:48,887 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.NIOServerCnxnFactory (NIOServerCnxnFactory.java:run(197)) - Accepted 
> socket connection from /127.0.0.1:38097
> 2016-07-13 00:15:48,887 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.ZooKeeperServer (ZooKeeperServer.java:processConnectRequest(868)) - 
> Client attempting to establish new session at /127.0.0.1:38097
> 2016-07-13 00:15:48,896 INFO  [SyncThread:0] server.ZooKeeperServer 
> (ZooKeeperServer.java:finishSessionInit(617)) - Established session 
> 0x155e19baa520025 with negotiated timeout 4 for client /127.0.0.1:38097
> 2016-07-13 00:15:48,896 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1235)) - Session 
> establishment complete on server localhost/127.0.0.1:53474, sessionid = 
> 0x155e19baa520025, negotiated timeout = 4
> 2016-07-13 00:15:48,911 INFO  [main] zookeeper.RecoverableZooKeeper 
> (RecoverableZooKeeper.java:(120)) - Process 
> identifier=hconnection-0x32130e61 connecting to ZooKeeper 
> ensemble=localhost:53474
> 2016-07-13 00:15:48,912 INFO  [main] zookeeper.ZooKeeper 
> (ZooKeeper.java:(438)) - Initiating client connection, 
> connectString=localhost:53474 sessionTimeout=9 
> watcher=hconnection-0x32130e610x0, quorum=localhost:53474, baseZNode=/hbase
> 2016-07-13 00:15:48,917 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(975)) - Opening socket 
> connection to server localhost/127.0.0.1:53474. Will not attempt to 
> authenticate using SASL (unknown error)
> 2016-07-13 00:15:48,918 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.NIOServerCnxnFactory (NIOServerCnxnFactory.java:run(197)) - Accepted 
> socket connection from /127.0.0.1:38098
> 2016-07-13 00:15:48,921 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(852)) - Socket 
> connection established to localhost/127.0.0.1:53474, initiating session
> 2016-07-13 00:15:48,921 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.ZooKeeperServer (ZooKeeperServer.java:processConnectRequest(868)) - 
> Client attempting to establish new session at /127.0.0.1:38098
> 2016-07-13 00:15:48,929 INFO  [SyncThread:0] server.ZooKeeperServer 
> (ZooKeeperServer.java:finishSessionInit(617)) - Established session 
> 0x155e19baa520026 with negotiated timeout 4 for client /127.0.0.1:38098
> 2016-07-13 00:15:48,929 INFO  

[jira] [Updated] (YARN-5373) NPE listing wildcard directory in containerLaunch

2016-07-14 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-5373:
-
Description: 
YARN-4958 added support for wildcards in file localization. It introduces an NPE 
at 
{code:java}
for (File wildLink : directory.listFiles()) {
sb.symlink(new Path(wildLink.toString()), new Path(wildLink.getName()));
}
{code}
When directory.listFiles() returns null (which only happens in a secure cluster), 
the NPE causes the container to fail to launch.
Hive and Oozie jobs fail as a result.

  was:
YARN-4958 added support for wildcards in file localization. It introduces an NPE 
at 
{code:java}
for (File wildLink : directory.listFiles()) {
sb.symlink(new Path(wildLink.toString()), new Path(wildLink.getName()));
}
{code}
When directory.listFiles() returns null (which only happens in a secure cluster), 
the NPE causes the container to fail to launch.


> NPE listing wildcard directory in containerLaunch
> -
>
> Key: YARN-5373
> URL: https://issues.apache.org/jira/browse/YARN-5373
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
>
> YARN-4958 added support for wildcards in file localization. It introduces an 
> NPE at 
> {code:java}
> for (File wildLink : directory.listFiles()) {
> sb.symlink(new Path(wildLink.toString()), new Path(wildLink.getName()));
> }
> {code}
> When directory.listFiles() returns null (which only happens in a secure 
> cluster), the NPE causes the container to fail to launch.
> Hive and Oozie jobs fail as a result.
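
A self-contained sketch of the defensive check being discussed, assuming only that 
File.listFiles() returns null when the directory cannot be read; the class below is 
illustrative and not the actual ContainerLaunch code:

{code:java}
import java.io.File;

public class WildcardListingSketch {
  public static void main(String[] args) {
    File directory = new File(args.length > 0 ? args[0] : ".");

    // File.listFiles() returns null (not an empty array) when the directory
    // cannot be read, for example when the NM user lacks permission on it,
    // so iterating the result directly can throw a NullPointerException.
    File[] children = directory.listFiles();
    if (children == null) {
      System.err.println("Unable to list " + directory
          + "; skipping wildcard expansion");
      return;
    }
    for (File wildLink : children) {
      // In the real code a symlink is created per file; here we just print.
      System.out.println("would symlink " + wildLink.getName());
    }
  }
}
{code}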



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5373) NPE listing wildcard directory in containerLaunch

2016-07-14 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378287#comment-15378287
 ] 

Haibo Chen commented on YARN-5373:
--

Created a separate jira to fix this issue for Windows, as I don't have 
access to a Windows OS.

> NPE listing wildcard directory in containerLaunch
> -
>
> Key: YARN-5373
> URL: https://issues.apache.org/jira/browse/YARN-5373
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
>
> YARN-4958 added support for wildcards in file localization. It introduces an 
> NPE at 
> {code:java}
> for (File wildLink : directory.listFiles()) {
> sb.symlink(new Path(wildLink.toString()), new Path(wildLink.getName()));
> }
> {code}
> When directory.listFiles() returns null (which only happens in a secure 
> cluster), the NPE causes the container to fail to launch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5381) fix NPE listing wildcard directory on Windows

2016-07-14 Thread Haibo Chen (JIRA)
Haibo Chen created YARN-5381:


 Summary: fix NPE listing wildcard directory on Windows
 Key: YARN-5381
 URL: https://issues.apache.org/jira/browse/YARN-5381
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Haibo Chen
Priority: Blocker


An NPE can be thrown when a wildcard is used in the libjar option and the cluster 
is secure. The root cause is that the NM can be running as a user that does not 
have access to resource files that are downloaded by remote users. YARN-5373 only 
fixes the issue on Linux. This jira implements the fix for Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5047) Refactor nodeUpdate across schedulers

2016-07-14 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378279#comment-15378279
 ] 

Karthik Kambatla commented on YARN-5047:


Sorry for missing this in my previous review. AbstractYarnScheduler#getNode can 
return {{N}} instead of {{SchedulerNode}}. That way, you wouldn't need to 
typecast in FifoScheduler. Otherwise, the patch looks good to me. 
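
To make the suggestion concrete with invented names (not the real 
AbstractYarnScheduler/FifoScheduler signatures): if the base class declares getNode 
in terms of the node type parameter N, subclasses get their concrete node type back 
without a cast.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only; the real classes are AbstractYarnScheduler, FifoScheduler
// and SchedulerNode, which carry far more state and type parameters than this.
class BaseNode { }

class FifoNode extends BaseNode {
  void fifoSpecificOperation() {
    System.out.println("fifo-specific work");
  }
}

abstract class BaseScheduler<N extends BaseNode> {
  private final Map<String, N> nodes = new ConcurrentHashMap<>();

  void addNode(String nodeId, N node) {
    nodes.put(nodeId, node);
  }

  // Returning N rather than the raw BaseNode means subclasses get their own
  // concrete node type back and do not need to downcast the result.
  protected N getNode(String nodeId) {
    return nodes.get(nodeId);
  }
}

class FifoStyleScheduler extends BaseScheduler<FifoNode> {
  void onNodeUpdate(String nodeId) {
    FifoNode node = getNode(nodeId);   // no (FifoNode) cast needed here
    if (node != null) {
      node.fifoSpecificOperation();
    }
  }
}

public class GetNodeGenericsSketch {
  public static void main(String[] args) {
    FifoStyleScheduler scheduler = new FifoStyleScheduler();
    scheduler.addNode("node-1", new FifoNode());
    scheduler.onNodeUpdate("node-1");
  }
}
{code}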

[~leftnoteasy] - mind taking a look at this when you get a chance? I'd like 
to get this in soon to unblock further de-dup work. 

> Refactor nodeUpdate across schedulers
> -
>
> Key: YARN-5047
> URL: https://issues.apache.org/jira/browse/YARN-5047
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, scheduler
>Affects Versions: 3.0.0-alpha1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: YARN-5047.001.patch, YARN-5047.002.patch, 
> YARN-5047.003.patch, YARN-5047.004.patch, YARN-5047.005.patch
>
>
> FairScheduler#nodeUpdate() and CapacityScheduler#nodeUpdate() have a lot of 
> commonality in their code.  See about refactoring the common parts into 
> AbstractYARNScheduler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5373) NPE listing wildcard directory in containerLaunch

2016-07-14 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-5373:
-
Priority: Blocker  (was: Critical)

> NPE listing wildcard directory in containerLaunch
> -
>
> Key: YARN-5373
> URL: https://issues.apache.org/jira/browse/YARN-5373
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
>
> YARN-4958 added support for wildcards in file localization. It introduces an 
> NPE at 
> {code:java}
> for (File wildLink : directory.listFiles()) {
> sb.symlink(new Path(wildLink.toString()), new Path(wildLink.getName()));
> }
> {code}
> When directory.listFiles() returns null (which only happens in a secure 
> cluster), the NPE causes the container to fail to launch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5380) NMTimelinePublisher should use getMemorySize instead of getMemory

2016-07-14 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created YARN-5380:
--

 Summary: NMTimelinePublisher should use getMemorySize instead of 
getMemory
 Key: YARN-5380
 URL: https://issues.apache.org/jira/browse/YARN-5380
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: timelineserver
Affects Versions: 3.0.0-alpha1
Reporter: Karthik Kambatla


NMTimelinePublisher should use getMemorySize instead of getMemory, because the 
latter is deprecated in favor of the former. 
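
A rough sketch of the requested change, using stand-in classes; the only assumption 
taken from this thread is that getMemory() is deprecated in favor of 
getMemorySize():

{code:java}
// Stand-in classes to sketch the requested change; they are not the real
// org.apache.hadoop.yarn.api.records.Resource or NMTimelinePublisher.
class ResourceSketch {
  private final long memoryMB;

  ResourceSketch(long memoryMB) {
    this.memoryMB = memoryMB;
  }

  /** Older accessor, assumed deprecated as described in this issue. */
  @Deprecated
  int getMemory() {
    return (int) memoryMB;
  }

  /** Newer accessor that the publisher should call instead. */
  long getMemorySize() {
    return memoryMB;
  }
}

public class PublisherSketch {
  public static void main(String[] args) {
    ResourceSketch resource = new ResourceSketch(4096);

    // Before: long memoryMB = resource.getMemory();   // deprecated accessor
    long memoryMB = resource.getMemorySize();          // what this issue asks for
    System.out.println("allocated MB = " + memoryMB);
  }
}
{code}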



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5195) RM intermittently crashed with NPE while handling APP_ATTEMPT_REMOVED event when async-scheduling enabled in CapacityScheduler

2016-07-14 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5195:
-
Assignee: (was: Wangda Tan)

> RM intermittently crashed with NPE while handling APP_ATTEMPT_REMOVED event 
> when async-scheduling enabled in CapacityScheduler
> --
>
> Key: YARN-5195
> URL: https://issues.apache.org/jira/browse/YARN-5195
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Karam Singh
>
> While running gridmix experiments, we once came across an incident where the RM 
> went down with the following exception:
> {noformat}
> 2016-05-28 15:45:24,459 [ResourceManager Event Processor] FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type APP_ATTEMPT_REMOVED to the scheduler
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.completedContainer(LeafQueue.java:1282)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.completedContainerInternal(CapacityScheduler.java:1469)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.completedContainer(AbstractYarnScheduler.java:497)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.doneApplicationAttempt(CapacityScheduler.java:860)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1319)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:127)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:704)
> at java.lang.Thread.run(Thread.java:745)
> 2016-05-28 15:45:24,460 [ApplicationMasterLauncher #49] INFO 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning 
> master appattempt_1464449118385_0006_01
> 2016-05-28 15:45:24,460 [ResourceManager Event Processor] INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5270) Solve miscellaneous issues caused by YARN-4844

2016-07-14 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378230#comment-15378230
 ] 

Karthik Kambatla commented on YARN-5270:


Just got around to looking at the patch. Looks good. Thanks for fixing this, 
folks. 

NMTimelinePublisher is the only remaining usage of either getMemory or 
setMemory. It likely came in via the recent merge of ATS. Will file a follow-up 
for it. 

> Solve miscellaneous issues caused by YARN-4844
> --
>
> Key: YARN-5270
> URL: https://issues.apache.org/jira/browse/YARN-5270
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: YARN-5270-branch-2.001.patch, 
> YARN-5270-branch-2.002.patch, YARN-5270-branch-2.003.patch, 
> YARN-5270-branch-2.004.patch, YARN-5270-branch-2.8.001.patch, 
> YARN-5270-branch-2.8.002.patch, YARN-5270-branch-2.8.003.patch, 
> YARN-5270-branch-2.8.004.patch, YARN-5270.003.patch, YARN-5270.004.patch
>
>
> Such as javac warnings reported by YARN-5077 and type conversion issues in 
> the Resources class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5195) RM intermittently crashed with NPE while handling APP_ATTEMPT_REMOVED event when async-scheduling enabled in CapacityScheduler

2016-07-14 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378233#comment-15378233
 ] 

Wangda Tan commented on YARN-5195:
--

I don't have the bandwidth to do this now; please feel free to pick it up if you 
have time.

> RM intermittently crashed with NPE while handling APP_ATTEMPT_REMOVED event 
> when async-scheduling enabled in CapacityScheduler
> --
>
> Key: YARN-5195
> URL: https://issues.apache.org/jira/browse/YARN-5195
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Karam Singh
>
> While running gridmix experiments, we once came across an incident where the RM 
> went down with the following exception:
> {noformat}
> 2016-05-28 15:45:24,459 [ResourceManager Event Processor] FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type APP_ATTEMPT_REMOVED to the scheduler
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.completedContainer(LeafQueue.java:1282)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.completedContainerInternal(CapacityScheduler.java:1469)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.completedContainer(AbstractYarnScheduler.java:497)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.doneApplicationAttempt(CapacityScheduler.java:860)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1319)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:127)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:704)
> at java.lang.Thread.run(Thread.java:745)
> 2016-05-28 15:45:24,460 [ApplicationMasterLauncher #49] INFO 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning 
> master appattempt_1464449118385_0006_01
> 2016-05-28 15:45:24,460 [ResourceManager Event Processor] INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5378) Accomodate app-id->cluster mapping

2016-07-14 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378224#comment-15378224
 ] 

Joep Rottinghuis commented on YARN-5378:


Yeah, either flowName!cluster1@dc2:foo@daily_hive_report or 
cluster1@dc2!flowName:foo@daily_hive_report, except that our current model 
doesn't really allow for that prefix. From the key layout perspective, the 
cluster would be better.
On the other hand, how many clusters will we have? We can still either provide 
the cluster in the query and pull a few columns back, or not and pull them all 
back. Given the volume, I don't think the column names will make much difference.

> Accomodate app-id->cluster mapping
> --
>
> Key: YARN-5378
> URL: https://issues.apache.org/jira/browse/YARN-5378
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Joep Rottinghuis
>Assignee: Joep Rottinghuis
>
> In discussion with [~sjlee0], [~vrushalic], [~subru], and [~curino] a 
> use-case came up to be able to map from application-id to cluster-id in 
> the context of federation for Yarn.
> What happens is that a "random" cluster in the federation is asked to 
> generate an app-id and then potentially a different cluster can be the "home" 
> cluster for the AM. Furthermore, tasks can then run in yet other clusters.
> In order to be able to pull up the logical home cluster on which the 
> application ran, there needs to be a mapping from application-id to 
> cluster-id. This mapping is available in the federated Yarn case only during 
> the active life of the application.
> A similar situation is common in our larger production environment. Somebody 
> will complain about a slow job, some failure or whatever. If we're lucky we 
> have an application-id. When we ask the user which cluster they ran on, 
> they'll typically answer with the machine from where they launched the job 
> (many users are unaware of the underlying physical clusters). This leaves us 
> to spelunk through various RM UIs to find a matching epoch in the 
> application ID. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-5298) Mount usercache and NM filecache directories into Docker container

2016-07-14 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev resolved YARN-5298.
-
Resolution: Fixed

Fixed the resolution. Sorry about that [~sidharta-s]!

> Mount usercache and NM filecache directories into Docker container
> --
>
> Key: YARN-5298
> URL: https://issues.apache.org/jira/browse/YARN-5298
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Varun Vasudev
>Assignee: Sidharta Seethana
> Fix For: 2.9.0
>
> Attachments: YARN-5298.001.patch, YARN-5298.002.patch
>
>
> Currently, we don't mount the usercache and the NM filecache directories into 
> the Docker container. This can lead to issues with containers that rely on 
> public and application scope resources.
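
Purely as an illustration of what mounting the usercache and NM filecache 
directories amounts to (hypothetical paths and a hand-built command line, not the 
actual Docker container-runtime code in the NM), the container launch command gains 
bind-mount arguments along these lines:

{code:java}
import java.util.ArrayList;
import java.util.List;

public class DockerMountSketch {
  public static void main(String[] args) {
    // Hypothetical NM-local directories; real paths come from
    // yarn.nodemanager.local-dirs on the node.
    String userCache = "/data/yarn/local/usercache/alice/appcache/application_123";
    String fileCache = "/data/yarn/local/filecache";

    List<String> cmd = new ArrayList<>();
    cmd.add("docker");
    cmd.add("run");
    // Bind-mount the usercache and NM filecache so localized public and
    // application-scope resources are visible inside the container.
    cmd.add("-v"); cmd.add(userCache + ":" + userCache);
    cmd.add("-v"); cmd.add(fileCache + ":" + fileCache + ":ro");
    cmd.add("hadoop-app-image");
    cmd.add("/bin/bash"); cmd.add("-c"); cmd.add("./launch_container.sh");

    System.out.println(String.join(" ", cmd));
  }
}
{code}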



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-5298) Mount usercache and NM filecache directories into Docker container

2016-07-14 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev reopened YARN-5298:
-

Re-opening to fix the resolution.

> Mount usercache and NM filecache directories into Docker container
> --
>
> Key: YARN-5298
> URL: https://issues.apache.org/jira/browse/YARN-5298
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Varun Vasudev
>Assignee: Sidharta Seethana
> Fix For: 2.9.0
>
> Attachments: YARN-5298.001.patch, YARN-5298.002.patch
>
>
> Currently, we don't mount the usercache and the NM filecache directories into 
> the Docker container. This can lead to issues with containers that rely on 
> public and application scope resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5298) Mount usercache and NM filecache directories into Docker container

2016-07-14 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378155#comment-15378155
 ] 

Sidharta Seethana commented on YARN-5298:
-

hi [~vvasudev], the resolution field is set to "Cannot Reproduce" - is this 
intentional?


> Mount usercache and NM filecache directories into Docker container
> --
>
> Key: YARN-5298
> URL: https://issues.apache.org/jira/browse/YARN-5298
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Varun Vasudev
>Assignee: Sidharta Seethana
> Fix For: 2.9.0
>
> Attachments: YARN-5298.001.patch, YARN-5298.002.patch
>
>
> Currently, we don't mount the usercache and the NM filecache directories into 
> the Docker container. This can lead to issues with containers that rely on 
> public and application scope resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5361) Obtaining logs for completed container says 'file belongs to a running container ' at the end

2016-07-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378113#comment-15378113
 ] 

Hadoop QA commented on YARN-5361:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 39s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 48s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 42s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 1 
new + 3 unchanged - 1 fixed = 4 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 31s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 40s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 23s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.impl.TestYarnClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817977/YARN-5361.3.patch |
| JIRA Issue | YARN-5361 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a05184c42925 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6cf0175 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12330/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
| unit | 

[jira] [Commented] (YARN-5379) TestHBaseTimelineStorage. testWriteApplicationToHBase() fails intermittently

2016-07-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378107#comment-15378107
 ] 

Hadoop QA commented on YARN-5379:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
58s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
37s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 2s 
{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 25s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817979/YARN-5379-YARN-5355.01.patch
 |
| JIRA Issue | YARN-5379 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4f8e27bdc890 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5355 / 0fd3980 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12331/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12331/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> TestHBaseTimelineStorage. testWriteApplicationToHBase() fails intermittently
> 
>
> Key: YARN-5379
> URL: https://issues.apache.org/jira/browse/YARN-5379
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test, timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sangjin Lee
>   

[jira] [Commented] (YARN-1994) Expose YARN/MR endpoints on multiple interfaces

2016-07-14 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15378062#comment-15378062
 ] 

Arpit Agarwal commented on YARN-1994:
-

[~kasha], I also think that 0.0.0.0 is a better default. For example, we made 
wildcard bind the default (HDFS-10363) in Ozone.

Changing the behavior in 2.x feels risky as it may introduce a security risk 
for existing deployments. We can certainly change it in 3.0 and document it.
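
As a concrete reference for the discussion, here is a minimal sketch of how a 
deployment can opt into wildcard binding today with the bind-host properties this 
JIRA introduced, while still advertising a resolvable hostname to clients. The 
snippet is illustrative only (the class name and property values are hypothetical, 
not part of any patch here):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class WildcardBindSketch {
  public static void main(String[] args) {
    Configuration conf = new YarnConfiguration();

    // Clients keep connecting to the advertised hostname from the address keys...
    conf.set("yarn.resourcemanager.hostname", "rm.example.com");

    // ...while the bind-host keys ask the server sockets to listen on all
    // interfaces of a multihomed machine.
    conf.set("yarn.resourcemanager.bind-host", "0.0.0.0");
    conf.set("yarn.nodemanager.bind-host", "0.0.0.0");
    conf.set("yarn.timeline-service.bind-host", "0.0.0.0");

    System.out.println("RM bind-host = "
        + conf.get("yarn.resourcemanager.bind-host"));
  }
}
{code}
The open question above is only whether 0.0.0.0 should become the shipped default 
rather than an explicit override like this.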

> Expose YARN/MR endpoints on multiple interfaces
> ---
>
> Key: YARN-1994
> URL: https://issues.apache.org/jira/browse/YARN-1994
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, resourcemanager, webapp
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>Assignee: Craig Welch
> Fix For: 2.6.0
>
> Attachments: YARN-1994.0.patch, YARN-1994.1.patch, 
> YARN-1994.11.patch, YARN-1994.11.patch, YARN-1994.12.patch, 
> YARN-1994.13.patch, YARN-1994.14.patch, YARN-1994.15-branch2.patch, 
> YARN-1994.15.patch, YARN-1994.2.patch, YARN-1994.3.patch, YARN-1994.4.patch, 
> YARN-1994.5.patch, YARN-1994.6.patch, YARN-1994.7.patch
>
>
> YARN and MapReduce daemons currently do not support specifying a wildcard 
> address for the server endpoints. This prevents the endpoints from being 
> accessible from all interfaces on a multihomed machine.
> Note that if we do specify INADDR_ANY for any of the options, it will break 
> clients as they will attempt to connect to 0.0.0.0. We need a solution that 
> allows specifying a hostname or IP-address for clients while requesting 
> wildcard bind for the servers.
> (List of endpoints is in a comment below)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5379) TestHBaseTimelineStorage. testWriteApplicationToHBase() fails intermittently

2016-07-14 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5379:
-
Attachment: YARN-5379-YARN-5355.01.patch

Uploading patch v1.

The timestamp variable "ts" was being reset at line 616 (I think this was added 
as part of YARN-3816). I have fixed it so that the timestamp value is no longer reset.
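
For readers not looking at the test source, a minimal sketch of the general bug 
class (illustrative names only, not the actual TestHBaseTimelineStorage code): when 
a shared timestamp variable is reassigned between recording a value and asserting 
on it, whether the assertion passes depends on timing rather than on the code under 
test.
{code}
import static org.junit.Assert.assertTrue;

import java.util.HashMap;
import java.util.Map;

import org.junit.Test;

public class TimestampResetSketch {

  @Test
  public void timestampShouldNotBeReset() {
    long ts = System.currentTimeMillis();
    Map<Long, Number> written = new HashMap<>();
    written.put(ts, 100L);               // value recorded under the original timestamp

    // BUG (shown commented out): resetting ts between the write and the check
    // makes the lookup key match the write key only when the two clock reads
    // happen to coincide, so the outcome becomes timing-dependent.
    // ts = System.currentTimeMillis();

    assertTrue(written.containsKey(ts)); // reliable only while ts is left untouched
  }
}
{code}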


> TestHBaseTimelineStorage. testWriteApplicationToHBase() fails intermittently
> 
>
> Key: YARN-5379
> URL: https://issues.apache.org/jira/browse/YARN-5379
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test, timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sangjin Lee
>Assignee: Vrushali C
>Priority: Minor
> Attachments: YARN-5379-YARN-5355.01.patch
>
>
> The {{TestHBaseTimelineStorage. testWriteApplicationToHBase()}} test seems to 
> fail intermittently:
> {noformat}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorage.testWriteApplicationToHBase(TestHBaseTimelineStorage.java:817)
> {noformat}
> The stdout output:
> {noformat}
> 2016-07-13 00:15:48,883 INFO  [main] zookeeper.RecoverableZooKeeper 
> (RecoverableZooKeeper.java:(120)) - Process 
> identifier=hconnection-0x2b7962a2 connecting to ZooKeeper 
> ensemble=localhost:53474
> 2016-07-13 00:15:48,883 INFO  [main] zookeeper.ZooKeeper 
> (ZooKeeper.java:(438)) - Initiating client connection, 
> connectString=localhost:53474 sessionTimeout=9 
> watcher=hconnection-0x2b7962a20x0, quorum=localhost:53474, baseZNode=/hbase
> 2016-07-13 00:15:48,886 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(975)) - Opening socket 
> connection to server localhost/127.0.0.1:53474. Will not attempt to 
> authenticate using SASL (unknown error)
> 2016-07-13 00:15:48,887 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(852)) - Socket 
> connection established to localhost/127.0.0.1:53474, initiating session
> 2016-07-13 00:15:48,887 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.NIOServerCnxnFactory (NIOServerCnxnFactory.java:run(197)) - Accepted 
> socket connection from /127.0.0.1:38097
> 2016-07-13 00:15:48,887 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.ZooKeeperServer (ZooKeeperServer.java:processConnectRequest(868)) - 
> Client attempting to establish new session at /127.0.0.1:38097
> 2016-07-13 00:15:48,896 INFO  [SyncThread:0] server.ZooKeeperServer 
> (ZooKeeperServer.java:finishSessionInit(617)) - Established session 
> 0x155e19baa520025 with negotiated timeout 4 for client /127.0.0.1:38097
> 2016-07-13 00:15:48,896 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1235)) - Session 
> establishment complete on server localhost/127.0.0.1:53474, sessionid = 
> 0x155e19baa520025, negotiated timeout = 4
> 2016-07-13 00:15:48,911 INFO  [main] zookeeper.RecoverableZooKeeper 
> (RecoverableZooKeeper.java:(120)) - Process 
> identifier=hconnection-0x32130e61 connecting to ZooKeeper 
> ensemble=localhost:53474
> 2016-07-13 00:15:48,912 INFO  [main] zookeeper.ZooKeeper 
> (ZooKeeper.java:(438)) - Initiating client connection, 
> connectString=localhost:53474 sessionTimeout=9 
> watcher=hconnection-0x32130e610x0, quorum=localhost:53474, baseZNode=/hbase
> 2016-07-13 00:15:48,917 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(975)) - Opening socket 
> connection to server localhost/127.0.0.1:53474. Will not attempt to 
> authenticate using SASL (unknown error)
> 2016-07-13 00:15:48,918 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.NIOServerCnxnFactory (NIOServerCnxnFactory.java:run(197)) - Accepted 
> socket connection from /127.0.0.1:38098
> 2016-07-13 00:15:48,921 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(852)) - Socket 
> connection established to localhost/127.0.0.1:53474, initiating session
> 2016-07-13 00:15:48,921 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.ZooKeeperServer (ZooKeeperServer.java:processConnectRequest(868)) - 
> Client attempting to establish new session at /127.0.0.1:38098
> 2016-07-13 00:15:48,929 INFO  [SyncThread:0] server.ZooKeeperServer 
> (ZooKeeperServer.java:finishSessionInit(617)) - Established session 
> 0x155e19baa520026 with negotiated timeout 4 for client /127.0.0.1:38098
> 2016-07-13 00:15:48,929 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn 

[jira] [Commented] (YARN-5159) Wrong Javadoc tag in MiniYarnCluster

2016-07-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377888#comment-15377888
 ] 

Hudson commented on YARN-5159:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10100 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10100/])
YARN-5159. Wrong Javadoc tag in MiniYarnCluster. Contributed by Andras 
(aajisaka: rev 6cf017558a3d06240b95d1b56c953591ece97c92)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/MiniYARNCluster.java


> Wrong Javadoc tag in MiniYarnCluster
> 
>
> Key: YARN-5159
> URL: https://issues.apache.org/jira/browse/YARN-5159
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Fix For: 2.8.0
>
> Attachments: YARN-5159.01.patch, YARN-5159.02.patch, 
> YARN-5159.03.patch
>
>
> {@YarnConfiguration.RM_SCHEDULER_INCLUDE_PORT_IN_NODE_NAME} is wrong. Should 
> be changed to 
>  {@value YarnConfiguration#RM_SCHEDULER_INCLUDE_PORT_IN_NODE_NAME}
> Edit:
> I noted that due to java 8 javadoc restrictions the javadoc:test-javadoc goal 
> fails on hadoop-yarn-server-tests project.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-1994) Expose YARN/MR endpoints on multiple interfaces

2016-07-14 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377882#comment-15377882
 ] 

Karthik Kambatla commented on YARN-1994:


[~arpitagarwal] - what are your thoughts on setting the default value for the RM 
and ATS bind-hosts to 0.0.0.0? Listening on all interfaces seems to be what most 
users would want. If we think changing this would be incompatible in 2.x, how 
about 3.0? 

> Expose YARN/MR endpoints on multiple interfaces
> ---
>
> Key: YARN-1994
> URL: https://issues.apache.org/jira/browse/YARN-1994
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, resourcemanager, webapp
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>Assignee: Craig Welch
> Fix For: 2.6.0
>
> Attachments: YARN-1994.0.patch, YARN-1994.1.patch, 
> YARN-1994.11.patch, YARN-1994.11.patch, YARN-1994.12.patch, 
> YARN-1994.13.patch, YARN-1994.14.patch, YARN-1994.15-branch2.patch, 
> YARN-1994.15.patch, YARN-1994.2.patch, YARN-1994.3.patch, YARN-1994.4.patch, 
> YARN-1994.5.patch, YARN-1994.6.patch, YARN-1994.7.patch
>
>
> YARN and MapReduce daemons currently do not support specifying a wildcard 
> address for the server endpoints. This prevents the endpoints from being 
> accessible from all interfaces on a multihomed machine.
> Note that if we do specify INADDR_ANY for any of the options, it will break 
> clients as they will attempt to connect to 0.0.0.0. We need a solution that 
> allows specifying a hostname or IP-address for clients while requesting 
> wildcard bind for the servers.
> (List of endpoints is in a comment below)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5361) Obtaining logs for completed container says 'file belongs to a running container ' at the end

2016-07-14 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5361:

Attachment: YARN-5361.3.patch

> Obtaining logs for completed container says 'file belongs to a running 
> container ' at the end
> -
>
> Key: YARN-5361
> URL: https://issues.apache.org/jira/browse/YARN-5361
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sumana Sathish
>Assignee: Xuan Gong
>Priority: Critical
> Attachments: YARN-5361.1.patch, YARN-5361.2.patch, YARN-5361.3.patch
>
>
> Obtaining logs via yarn CLI for completed container but running application 
> says "This log file belongs to a running container 
> (container_e32_1468319707096_0001_01_04) and so may not be complete" 
> which is not correct.
> {code}
> LogType:stdout
> Log Upload Time:Tue Jul 12 10:38:14 + 2016
> Log Contents:
> End of LogType:stdout. This log file belongs to a running container 
> (container_e32_1468319707096_0001_01_04) and so may not be complete.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5379) TestHBaseTimelineStorage. testWriteApplicationToHBase() fails intermittently

2016-07-14 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C reassigned YARN-5379:


Assignee: Vrushali C

> TestHBaseTimelineStorage. testWriteApplicationToHBase() fails intermittently
> 
>
> Key: YARN-5379
> URL: https://issues.apache.org/jira/browse/YARN-5379
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test, timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sangjin Lee
>Assignee: Vrushali C
>Priority: Minor
>
> The {{TestHBaseTimelineStorage. testWriteApplicationToHBase()}} test seems to 
> fail intermittently:
> {noformat}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorage.testWriteApplicationToHBase(TestHBaseTimelineStorage.java:817)
> {noformat}
> The stdout output:
> {noformat}
> 2016-07-13 00:15:48,883 INFO  [main] zookeeper.RecoverableZooKeeper 
> (RecoverableZooKeeper.java:(120)) - Process 
> identifier=hconnection-0x2b7962a2 connecting to ZooKeeper 
> ensemble=localhost:53474
> 2016-07-13 00:15:48,883 INFO  [main] zookeeper.ZooKeeper 
> (ZooKeeper.java:(438)) - Initiating client connection, 
> connectString=localhost:53474 sessionTimeout=9 
> watcher=hconnection-0x2b7962a20x0, quorum=localhost:53474, baseZNode=/hbase
> 2016-07-13 00:15:48,886 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(975)) - Opening socket 
> connection to server localhost/127.0.0.1:53474. Will not attempt to 
> authenticate using SASL (unknown error)
> 2016-07-13 00:15:48,887 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(852)) - Socket 
> connection established to localhost/127.0.0.1:53474, initiating session
> 2016-07-13 00:15:48,887 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.NIOServerCnxnFactory (NIOServerCnxnFactory.java:run(197)) - Accepted 
> socket connection from /127.0.0.1:38097
> 2016-07-13 00:15:48,887 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.ZooKeeperServer (ZooKeeperServer.java:processConnectRequest(868)) - 
> Client attempting to establish new session at /127.0.0.1:38097
> 2016-07-13 00:15:48,896 INFO  [SyncThread:0] server.ZooKeeperServer 
> (ZooKeeperServer.java:finishSessionInit(617)) - Established session 
> 0x155e19baa520025 with negotiated timeout 4 for client /127.0.0.1:38097
> 2016-07-13 00:15:48,896 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1235)) - Session 
> establishment complete on server localhost/127.0.0.1:53474, sessionid = 
> 0x155e19baa520025, negotiated timeout = 4
> 2016-07-13 00:15:48,911 INFO  [main] zookeeper.RecoverableZooKeeper 
> (RecoverableZooKeeper.java:(120)) - Process 
> identifier=hconnection-0x32130e61 connecting to ZooKeeper 
> ensemble=localhost:53474
> 2016-07-13 00:15:48,912 INFO  [main] zookeeper.ZooKeeper 
> (ZooKeeper.java:(438)) - Initiating client connection, 
> connectString=localhost:53474 sessionTimeout=9 
> watcher=hconnection-0x32130e610x0, quorum=localhost:53474, baseZNode=/hbase
> 2016-07-13 00:15:48,917 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(975)) - Opening socket 
> connection to server localhost/127.0.0.1:53474. Will not attempt to 
> authenticate using SASL (unknown error)
> 2016-07-13 00:15:48,918 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.NIOServerCnxnFactory (NIOServerCnxnFactory.java:run(197)) - Accepted 
> socket connection from /127.0.0.1:38098
> 2016-07-13 00:15:48,921 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(852)) - Socket 
> connection established to localhost/127.0.0.1:53474, initiating session
> 2016-07-13 00:15:48,921 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.ZooKeeperServer (ZooKeeperServer.java:processConnectRequest(868)) - 
> Client attempting to establish new session at /127.0.0.1:38098
> 2016-07-13 00:15:48,929 INFO  [SyncThread:0] server.ZooKeeperServer 
> (ZooKeeperServer.java:finishSessionInit(617)) - Established session 
> 0x155e19baa520026 with negotiated timeout 4 for client /127.0.0.1:38098
> 2016-07-13 00:15:48,929 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1235)) - Session 
> establishment complete on server localhost/127.0.0.1:53474, sessionid = 
> 0x155e19baa520026, negotiated timeout = 4
> 2016-07-13 00:15:48,938 INFO  [main] storage.HBaseTimelineWriterImpl 
> 

[jira] [Commented] (YARN-5363) For AM containers, or for containers of running-apps, "yarn logs" incorrectly only (tries to) shows syslog file-type by default

2016-07-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377356#comment-15377356
 ] 

Hudson commented on YARN-5363:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10099 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10099/])
YARN-5363. For AM containers, or for containers of running-apps, "yarn (xgong: 
rev 429347289c7787364e654334cd84115ae40bb22d)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestLogsCLI.java


> For AM containers, or for containers of running-apps, "yarn logs" incorrectly 
> only (tries to) shows syslog file-type by default
> ---
>
> Key: YARN-5363
> URL: https://issues.apache.org/jira/browse/YARN-5363
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: log-aggregation
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
> Fix For: 2.9.0
>
> Attachments: YARN-5363-2016-07-12.txt, YARN-5363-2016-07-13.1.txt, 
> YARN-5363-2016-07-13.txt
>
>
> For e.g, for a running application, the following happens:
> {code}
> # yarn logs -applicationId application_1467838922593_0001
> 16/07/06 22:07:05 INFO impl.TimelineClientImpl: Timeline service address: 
> http://:8188/ws/v1/timeline/
> 16/07/06 22:07:06 INFO client.RMProxy: Connecting to ResourceManager at 
> /:8050
> 16/07/06 22:07:07 INFO impl.TimelineClientImpl: Timeline service address: 
> http://l:8188/ws/v1/timeline/
> 16/07/06 22:07:07 INFO client.RMProxy: Connecting to ResourceManager at 
> /:8050
> Can not find any log file matching the pattern: [syslog] for the container: 
> container_e03_1467838922593_0001_01_01 within the application: 
> application_1467838922593_0001
> Can not find any log file matching the pattern: [syslog] for the container: 
> container_e03_1467838922593_0001_01_02 within the application: 
> application_1467838922593_0001
> Can not find any log file matching the pattern: [syslog] for the container: 
> container_e03_1467838922593_0001_01_03 within the application: 
> application_1467838922593_0001
> Can not find any log file matching the pattern: [syslog] for the container: 
> container_e03_1467838922593_0001_01_04 within the application: 
> application_1467838922593_0001
> Can not find any log file matching the pattern: [syslog] for the container: 
> container_e03_1467838922593_0001_01_05 within the application: 
> application_1467838922593_0001
> Can not find any log file matching the pattern: [syslog] for the container: 
> container_e03_1467838922593_0001_01_06 within the application: 
> application_1467838922593_0001
> Can not find any log file matching the pattern: [syslog] for the container: 
> container_e03_1467838922593_0001_01_07 within the application: 
> application_1467838922593_0001
> Can not find the logs for the application: application_1467838922593_0001 
> with the appOwner: 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5159) Wrong Javadoc tag in MiniYarnCluster

2016-07-14 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377351#comment-15377351
 ] 

Akira Ajisaka commented on YARN-5159:
-

Thank you for the detailed information! You are right. I checked that the mvn 
command passed, but I didn't actually look at the generated HTML file. +1, 
checking this in.
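
For anyone skimming the thread, a small self-contained sketch of the corrected tag 
(illustrative only, not the actual MiniYARNCluster source):
{code}
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class JavadocValueTagSketch {

  // Broken form (not a real inline tag, so javadoc under Java 8 doclint rejects it):
  //   {@YarnConfiguration.RM_SCHEDULER_INCLUDE_PORT_IN_NODE_NAME}

  /**
   * Corrected form; this inline tag expands to the constant's value:
   * {@value YarnConfiguration#RM_SCHEDULER_INCLUDE_PORT_IN_NODE_NAME}
   */
  public static final String EXAMPLE =
      YarnConfiguration.RM_SCHEDULER_INCLUDE_PORT_IN_NODE_NAME;
}
{code}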

> Wrong Javadoc tag in MiniYarnCluster
> 
>
> Key: YARN-5159
> URL: https://issues.apache.org/jira/browse/YARN-5159
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Fix For: 2.8.0
>
> Attachments: YARN-5159.01.patch, YARN-5159.02.patch, 
> YARN-5159.03.patch
>
>
> {@YarnConfiguration.RM_SCHEDULER_INCLUDE_PORT_IN_NODE_NAME} is wrong. Should 
> be changed to 
>  {@value YarnConfiguration#RM_SCHEDULER_INCLUDE_PORT_IN_NODE_NAME}
> Edit:
> I noted that due to java 8 javadoc restrictions the javadoc:test-javadoc goal 
> fails on hadoop-yarn-server-tests project.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5379) TestHBaseTimelineStorage. testWriteApplicationToHBase() fails intermittently

2016-07-14 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377349#comment-15377349
 ] 

Vrushali C commented on YARN-5379:
--

I see the problem, let me put up a patch. 

> TestHBaseTimelineStorage. testWriteApplicationToHBase() fails intermittently
> 
>
> Key: YARN-5379
> URL: https://issues.apache.org/jira/browse/YARN-5379
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test, timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sangjin Lee
>Assignee: Vrushali C
>Priority: Minor
>
> The {{TestHBaseTimelineStorage. testWriteApplicationToHBase()}} test seems to 
> fail intermittently:
> {noformat}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorage.testWriteApplicationToHBase(TestHBaseTimelineStorage.java:817)
> {noformat}
> The stdout output:
> {noformat}
> 2016-07-13 00:15:48,883 INFO  [main] zookeeper.RecoverableZooKeeper 
> (RecoverableZooKeeper.java:(120)) - Process 
> identifier=hconnection-0x2b7962a2 connecting to ZooKeeper 
> ensemble=localhost:53474
> 2016-07-13 00:15:48,883 INFO  [main] zookeeper.ZooKeeper 
> (ZooKeeper.java:(438)) - Initiating client connection, 
> connectString=localhost:53474 sessionTimeout=9 
> watcher=hconnection-0x2b7962a20x0, quorum=localhost:53474, baseZNode=/hbase
> 2016-07-13 00:15:48,886 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(975)) - Opening socket 
> connection to server localhost/127.0.0.1:53474. Will not attempt to 
> authenticate using SASL (unknown error)
> 2016-07-13 00:15:48,887 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(852)) - Socket 
> connection established to localhost/127.0.0.1:53474, initiating session
> 2016-07-13 00:15:48,887 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.NIOServerCnxnFactory (NIOServerCnxnFactory.java:run(197)) - Accepted 
> socket connection from /127.0.0.1:38097
> 2016-07-13 00:15:48,887 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.ZooKeeperServer (ZooKeeperServer.java:processConnectRequest(868)) - 
> Client attempting to establish new session at /127.0.0.1:38097
> 2016-07-13 00:15:48,896 INFO  [SyncThread:0] server.ZooKeeperServer 
> (ZooKeeperServer.java:finishSessionInit(617)) - Established session 
> 0x155e19baa520025 with negotiated timeout 4 for client /127.0.0.1:38097
> 2016-07-13 00:15:48,896 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1235)) - Session 
> establishment complete on server localhost/127.0.0.1:53474, sessionid = 
> 0x155e19baa520025, negotiated timeout = 4
> 2016-07-13 00:15:48,911 INFO  [main] zookeeper.RecoverableZooKeeper 
> (RecoverableZooKeeper.java:(120)) - Process 
> identifier=hconnection-0x32130e61 connecting to ZooKeeper 
> ensemble=localhost:53474
> 2016-07-13 00:15:48,912 INFO  [main] zookeeper.ZooKeeper 
> (ZooKeeper.java:(438)) - Initiating client connection, 
> connectString=localhost:53474 sessionTimeout=9 
> watcher=hconnection-0x32130e610x0, quorum=localhost:53474, baseZNode=/hbase
> 2016-07-13 00:15:48,917 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(975)) - Opening socket 
> connection to server localhost/127.0.0.1:53474. Will not attempt to 
> authenticate using SASL (unknown error)
> 2016-07-13 00:15:48,918 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.NIOServerCnxnFactory (NIOServerCnxnFactory.java:run(197)) - Accepted 
> socket connection from /127.0.0.1:38098
> 2016-07-13 00:15:48,921 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(852)) - Socket 
> connection established to localhost/127.0.0.1:53474, initiating session
> 2016-07-13 00:15:48,921 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.ZooKeeperServer (ZooKeeperServer.java:processConnectRequest(868)) - 
> Client attempting to establish new session at /127.0.0.1:38098
> 2016-07-13 00:15:48,929 INFO  [SyncThread:0] server.ZooKeeperServer 
> (ZooKeeperServer.java:finishSessionInit(617)) - Established session 
> 0x155e19baa520026 with negotiated timeout 4 for client /127.0.0.1:38098
> 2016-07-13 00:15:48,929 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1235)) - Session 
> establishment complete on server localhost/127.0.0.1:53474, sessionid = 
> 0x155e19baa520026, negotiated timeout = 4
> 2016-07-13 00:15:48,938 INFO  [main] 

[jira] [Commented] (YARN-5363) For AM containers, or for containers of running-apps, "yarn logs" incorrectly only (tries to) shows syslog file-type by default

2016-07-14 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377319#comment-15377319
 ] 

Xuan Gong commented on YARN-5363:
-

Committed to trunk/branch-2. Thanks, Vinod.

> For AM containers, or for containers of running-apps, "yarn logs" incorrectly 
> only (tries to) shows syslog file-type by default
> ---
>
> Key: YARN-5363
> URL: https://issues.apache.org/jira/browse/YARN-5363
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: log-aggregation
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
> Fix For: 2.9.0
>
> Attachments: YARN-5363-2016-07-12.txt, YARN-5363-2016-07-13.1.txt, 
> YARN-5363-2016-07-13.txt
>
>
> For e.g, for a running application, the following happens:
> {code}
> # yarn logs -applicationId application_1467838922593_0001
> 16/07/06 22:07:05 INFO impl.TimelineClientImpl: Timeline service address: 
> http://:8188/ws/v1/timeline/
> 16/07/06 22:07:06 INFO client.RMProxy: Connecting to ResourceManager at 
> /:8050
> 16/07/06 22:07:07 INFO impl.TimelineClientImpl: Timeline service address: 
> http://l:8188/ws/v1/timeline/
> 16/07/06 22:07:07 INFO client.RMProxy: Connecting to ResourceManager at 
> /:8050
> Can not find any log file matching the pattern: [syslog] for the container: 
> container_e03_1467838922593_0001_01_01 within the application: 
> application_1467838922593_0001
> Can not find any log file matching the pattern: [syslog] for the container: 
> container_e03_1467838922593_0001_01_02 within the application: 
> application_1467838922593_0001
> Can not find any log file matching the pattern: [syslog] for the container: 
> container_e03_1467838922593_0001_01_03 within the application: 
> application_1467838922593_0001
> Can not find any log file matching the pattern: [syslog] for the container: 
> container_e03_1467838922593_0001_01_04 within the application: 
> application_1467838922593_0001
> Can not find any log file matching the pattern: [syslog] for the container: 
> container_e03_1467838922593_0001_01_05 within the application: 
> application_1467838922593_0001
> Can not find any log file matching the pattern: [syslog] for the container: 
> container_e03_1467838922593_0001_01_06 within the application: 
> application_1467838922593_0001
> Can not find any log file matching the pattern: [syslog] for the container: 
> container_e03_1467838922593_0001_01_07 within the application: 
> application_1467838922593_0001
> Can not find the logs for the application: application_1467838922593_0001 
> with the appOwner: 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5043) TestAMRestart.testRMAppAttemptFailuresValidityInterval random fail

2016-07-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377276#comment-15377276
 ] 

Hadoop QA commented on YARN-5043:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 36m 24s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 41s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817962/YARN-5043.01.patch |
| JIRA Issue | YARN-5043 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a5ad000f7046 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 54bf14f |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12329/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12329/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> TestAMRestart.testRMAppAttemptFailuresValidityInterval random fail
> --
>
> Key: YARN-5043
> URL: https://issues.apache.org/jira/browse/YARN-5043
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: sandflee
>Assignee: Jun Gong
> Attachments: TestAMRestart-output.txt, YARN-5043.01.patch
>
>
> {noformat}
> Test set: 
> 

[jira] [Commented] (YARN-5363) For AM containers, or for containers of running-apps, "yarn logs" incorrectly only (tries to) shows syslog file-type by default

2016-07-14 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377275#comment-15377275
 ] 

Xuan Gong commented on YARN-5363:
-

+1 lgtm. Will commit it shortly.

> For AM containers, or for containers of running-apps, "yarn logs" incorrectly 
> only (tries to) shows syslog file-type by default
> ---
>
> Key: YARN-5363
> URL: https://issues.apache.org/jira/browse/YARN-5363
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: log-aggregation
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
> Attachments: YARN-5363-2016-07-12.txt, YARN-5363-2016-07-13.1.txt, 
> YARN-5363-2016-07-13.txt
>
>
> For e.g, for a running application, the following happens:
> {code}
> # yarn logs -applicationId application_1467838922593_0001
> 16/07/06 22:07:05 INFO impl.TimelineClientImpl: Timeline service address: 
> http://:8188/ws/v1/timeline/
> 16/07/06 22:07:06 INFO client.RMProxy: Connecting to ResourceManager at 
> /:8050
> 16/07/06 22:07:07 INFO impl.TimelineClientImpl: Timeline service address: 
> http://l:8188/ws/v1/timeline/
> 16/07/06 22:07:07 INFO client.RMProxy: Connecting to ResourceManager at 
> /:8050
> Can not find any log file matching the pattern: [syslog] for the container: 
> container_e03_1467838922593_0001_01_01 within the application: 
> application_1467838922593_0001
> Can not find any log file matching the pattern: [syslog] for the container: 
> container_e03_1467838922593_0001_01_02 within the application: 
> application_1467838922593_0001
> Can not find any log file matching the pattern: [syslog] for the container: 
> container_e03_1467838922593_0001_01_03 within the application: 
> application_1467838922593_0001
> Can not find any log file matching the pattern: [syslog] for the container: 
> container_e03_1467838922593_0001_01_04 within the application: 
> application_1467838922593_0001
> Can not find any log file matching the pattern: [syslog] for the container: 
> container_e03_1467838922593_0001_01_05 within the application: 
> application_1467838922593_0001
> Can not find any log file matching the pattern: [syslog] for the container: 
> container_e03_1467838922593_0001_01_06 within the application: 
> application_1467838922593_0001
> Can not find any log file matching the pattern: [syslog] for the container: 
> container_e03_1467838922593_0001_01_07 within the application: 
> application_1467838922593_0001
> Can not find the logs for the application: application_1467838922593_0001 
> with the appOwner: 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5355) YARN Timeline Service v.2: alpha 2

2016-07-14 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377263#comment-15377263
 ] 

Sangjin Lee commented on YARN-5355:
---

It has been committed to {{YARN-5355-branch-2}}. Thanks Vrushali for the 
backport!

> YARN Timeline Service v.2: alpha 2
> --
>
> Key: YARN-5355
> URL: https://issues.apache.org/jira/browse/YARN-5355
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
> Attachments: Timeline Service v2_ Ideas for Next Steps.pdf, 
> YARN-5355-branch-2.01.patch
>
>
> This is an umbrella JIRA for the alpha 2 milestone for YARN Timeline Service 
> v.2.
> This is developed on feature branches: {{YARN-5355}} for the trunk-based 
> development and {{YARN-5355-branch-2}} to maintain backports to branch-2. Any 
> subtask work on this JIRA will be committed to those 2 branches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5355) YARN Timeline Service v.2: alpha 2

2016-07-14 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377252#comment-15377252
 ] 

Sangjin Lee commented on YARN-5355:
---

Thanks [~vrushalic] for creating the branch-2 patch! Vrushali and I tested it, 
and it looks good. I'll commit it to our branch-2 feature branch 
({{YARN-5355-branch-2}}) shortly.

> YARN Timeline Service v.2: alpha 2
> --
>
> Key: YARN-5355
> URL: https://issues.apache.org/jira/browse/YARN-5355
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
> Attachments: Timeline Service v2_ Ideas for Next Steps.pdf, 
> YARN-5355-branch-2.01.patch
>
>
> This is an umbrella JIRA for the alpha 2 milestone for YARN Timeline Service 
> v.2.
> This is developed on feature branches: {{YARN-5355}} for the trunk-based 
> development and {{YARN-5355-branch-2}} to maintain backports to branch-2. Any 
> subtask work on this JIRA will be committed to those 2 branches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5379) TestHBaseTimelineStorage. testWriteApplicationToHBase() fails intermittently

2016-07-14 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377239#comment-15377239
 ] 

Sangjin Lee edited comment on YARN-5379 at 7/14/16 4:34 PM:


I saw it once yesterday with YARN-5364. Then this morning I saw another with 
the trunk build report: 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/94/

I don't believe this is related to tests running concurrently. Hopefully this 
can be reproduced by running it repeatedly.


was (Author: sjlee0):
I saw it once yesterday with YARN-5364. Then this morning I saw another with 
the trunk build report: 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/94/

I don't believe this is related to tests running concurrently.

> TestHBaseTimelineStorage. testWriteApplicationToHBase() fails intermittently
> 
>
> Key: YARN-5379
> URL: https://issues.apache.org/jira/browse/YARN-5379
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test, timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sangjin Lee
>Priority: Minor
>
> The {{TestHBaseTimelineStorage. testWriteApplicationToHBase()}} test seems to 
> fail intermittently:
> {noformat}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorage.testWriteApplicationToHBase(TestHBaseTimelineStorage.java:817)
> {noformat}
> The stdout output:
> {noformat}
> 2016-07-13 00:15:48,883 INFO  [main] zookeeper.RecoverableZooKeeper 
> (RecoverableZooKeeper.java:(120)) - Process 
> identifier=hconnection-0x2b7962a2 connecting to ZooKeeper 
> ensemble=localhost:53474
> 2016-07-13 00:15:48,883 INFO  [main] zookeeper.ZooKeeper 
> (ZooKeeper.java:(438)) - Initiating client connection, 
> connectString=localhost:53474 sessionTimeout=9 
> watcher=hconnection-0x2b7962a20x0, quorum=localhost:53474, baseZNode=/hbase
> 2016-07-13 00:15:48,886 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(975)) - Opening socket 
> connection to server localhost/127.0.0.1:53474. Will not attempt to 
> authenticate using SASL (unknown error)
> 2016-07-13 00:15:48,887 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(852)) - Socket 
> connection established to localhost/127.0.0.1:53474, initiating session
> 2016-07-13 00:15:48,887 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.NIOServerCnxnFactory (NIOServerCnxnFactory.java:run(197)) - Accepted 
> socket connection from /127.0.0.1:38097
> 2016-07-13 00:15:48,887 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.ZooKeeperServer (ZooKeeperServer.java:processConnectRequest(868)) - 
> Client attempting to establish new session at /127.0.0.1:38097
> 2016-07-13 00:15:48,896 INFO  [SyncThread:0] server.ZooKeeperServer 
> (ZooKeeperServer.java:finishSessionInit(617)) - Established session 
> 0x155e19baa520025 with negotiated timeout 4 for client /127.0.0.1:38097
> 2016-07-13 00:15:48,896 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1235)) - Session 
> establishment complete on server localhost/127.0.0.1:53474, sessionid = 
> 0x155e19baa520025, negotiated timeout = 4
> 2016-07-13 00:15:48,911 INFO  [main] zookeeper.RecoverableZooKeeper 
> (RecoverableZooKeeper.java:(120)) - Process 
> identifier=hconnection-0x32130e61 connecting to ZooKeeper 
> ensemble=localhost:53474
> 2016-07-13 00:15:48,912 INFO  [main] zookeeper.ZooKeeper 
> (ZooKeeper.java:(438)) - Initiating client connection, 
> connectString=localhost:53474 sessionTimeout=9 
> watcher=hconnection-0x32130e610x0, quorum=localhost:53474, baseZNode=/hbase
> 2016-07-13 00:15:48,917 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(975)) - Opening socket 
> connection to server localhost/127.0.0.1:53474. Will not attempt to 
> authenticate using SASL (unknown error)
> 2016-07-13 00:15:48,918 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.NIOServerCnxnFactory (NIOServerCnxnFactory.java:run(197)) - Accepted 
> socket connection from /127.0.0.1:38098
> 2016-07-13 00:15:48,921 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(852)) - Socket 
> connection established to localhost/127.0.0.1:53474, initiating session
> 2016-07-13 00:15:48,921 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.ZooKeeperServer (ZooKeeperServer.java:processConnectRequest(868)) - 
> Client attempting to establish new session at /127.0.0.1:38098

[jira] [Commented] (YARN-5379) TestHBaseTimelineStorage. testWriteApplicationToHBase() fails intermittently

2016-07-14 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377239#comment-15377239
 ] 

Sangjin Lee commented on YARN-5379:
---

I saw it once yesterday with YARN-5364. Then this morning I saw another with 
the trunk build report: 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/94/

I don't believe this is related to tests running concurrently.

> TestHBaseTimelineStorage. testWriteApplicationToHBase() fails intermittently
> 
>
> Key: YARN-5379
> URL: https://issues.apache.org/jira/browse/YARN-5379
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test, timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sangjin Lee
>Priority: Minor
>
> The {{TestHBaseTimelineStorage. testWriteApplicationToHBase()}} test seems to 
> fail intermittently:
> {noformat}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorage.testWriteApplicationToHBase(TestHBaseTimelineStorage.java:817)
> {noformat}
> The stdout output:
> {noformat}
> 2016-07-13 00:15:48,883 INFO  [main] zookeeper.RecoverableZooKeeper 
> (RecoverableZooKeeper.java:(120)) - Process 
> identifier=hconnection-0x2b7962a2 connecting to ZooKeeper 
> ensemble=localhost:53474
> 2016-07-13 00:15:48,883 INFO  [main] zookeeper.ZooKeeper 
> (ZooKeeper.java:(438)) - Initiating client connection, 
> connectString=localhost:53474 sessionTimeout=9 
> watcher=hconnection-0x2b7962a20x0, quorum=localhost:53474, baseZNode=/hbase
> 2016-07-13 00:15:48,886 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(975)) - Opening socket 
> connection to server localhost/127.0.0.1:53474. Will not attempt to 
> authenticate using SASL (unknown error)
> 2016-07-13 00:15:48,887 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(852)) - Socket 
> connection established to localhost/127.0.0.1:53474, initiating session
> 2016-07-13 00:15:48,887 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.NIOServerCnxnFactory (NIOServerCnxnFactory.java:run(197)) - Accepted 
> socket connection from /127.0.0.1:38097
> 2016-07-13 00:15:48,887 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.ZooKeeperServer (ZooKeeperServer.java:processConnectRequest(868)) - 
> Client attempting to establish new session at /127.0.0.1:38097
> 2016-07-13 00:15:48,896 INFO  [SyncThread:0] server.ZooKeeperServer 
> (ZooKeeperServer.java:finishSessionInit(617)) - Established session 
> 0x155e19baa520025 with negotiated timeout 4 for client /127.0.0.1:38097
> 2016-07-13 00:15:48,896 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1235)) - Session 
> establishment complete on server localhost/127.0.0.1:53474, sessionid = 
> 0x155e19baa520025, negotiated timeout = 4
> 2016-07-13 00:15:48,911 INFO  [main] zookeeper.RecoverableZooKeeper 
> (RecoverableZooKeeper.java:(120)) - Process 
> identifier=hconnection-0x32130e61 connecting to ZooKeeper 
> ensemble=localhost:53474
> 2016-07-13 00:15:48,912 INFO  [main] zookeeper.ZooKeeper 
> (ZooKeeper.java:(438)) - Initiating client connection, 
> connectString=localhost:53474 sessionTimeout=9 
> watcher=hconnection-0x32130e610x0, quorum=localhost:53474, baseZNode=/hbase
> 2016-07-13 00:15:48,917 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(975)) - Opening socket 
> connection to server localhost/127.0.0.1:53474. Will not attempt to 
> authenticate using SASL (unknown error)
> 2016-07-13 00:15:48,918 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.NIOServerCnxnFactory (NIOServerCnxnFactory.java:run(197)) - Accepted 
> socket connection from /127.0.0.1:38098
> 2016-07-13 00:15:48,921 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(852)) - Socket 
> connection established to localhost/127.0.0.1:53474, initiating session
> 2016-07-13 00:15:48,921 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
> server.ZooKeeperServer (ZooKeeperServer.java:processConnectRequest(868)) - 
> Client attempting to establish new session at /127.0.0.1:38098
> 2016-07-13 00:15:48,929 INFO  [SyncThread:0] server.ZooKeeperServer 
> (ZooKeeperServer.java:finishSessionInit(617)) - Established session 
> 0x155e19baa520026 with negotiated timeout 4 for client /127.0.0.1:38098
> 2016-07-13 00:15:48,929 INFO  [main-SendThread(localhost:53474)] 
> zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1235)) - Session 
> 

[jira] [Updated] (YARN-5379) TestHBaseTimelineStorage. testWriteApplicationToHBase() fails intermittently

2016-07-14 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-5379:
--
Description: 
The {{TestHBaseTimelineStorage. testWriteApplicationToHBase()}} test seems to 
fail intermittently:
{noformat}
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorage.testWriteApplicationToHBase(TestHBaseTimelineStorage.java:817)
{noformat}

The stdout output:
{noformat}
2016-07-13 00:15:48,883 INFO  [main] zookeeper.RecoverableZooKeeper 
(RecoverableZooKeeper.java:(120)) - Process 
identifier=hconnection-0x2b7962a2 connecting to ZooKeeper 
ensemble=localhost:53474
2016-07-13 00:15:48,883 INFO  [main] zookeeper.ZooKeeper 
(ZooKeeper.java:(438)) - Initiating client connection, 
connectString=localhost:53474 sessionTimeout=9 
watcher=hconnection-0x2b7962a20x0, quorum=localhost:53474, baseZNode=/hbase
2016-07-13 00:15:48,886 INFO  [main-SendThread(localhost:53474)] 
zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(975)) - Opening socket 
connection to server localhost/127.0.0.1:53474. Will not attempt to 
authenticate using SASL (unknown error)
2016-07-13 00:15:48,887 INFO  [main-SendThread(localhost:53474)] 
zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(852)) - Socket connection 
established to localhost/127.0.0.1:53474, initiating session
2016-07-13 00:15:48,887 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
server.NIOServerCnxnFactory (NIOServerCnxnFactory.java:run(197)) - Accepted 
socket connection from /127.0.0.1:38097
2016-07-13 00:15:48,887 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
server.ZooKeeperServer (ZooKeeperServer.java:processConnectRequest(868)) - 
Client attempting to establish new session at /127.0.0.1:38097
2016-07-13 00:15:48,896 INFO  [SyncThread:0] server.ZooKeeperServer 
(ZooKeeperServer.java:finishSessionInit(617)) - Established session 
0x155e19baa520025 with negotiated timeout 4 for client /127.0.0.1:38097
2016-07-13 00:15:48,896 INFO  [main-SendThread(localhost:53474)] 
zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1235)) - Session 
establishment complete on server localhost/127.0.0.1:53474, sessionid = 
0x155e19baa520025, negotiated timeout = 4
2016-07-13 00:15:48,911 INFO  [main] zookeeper.RecoverableZooKeeper 
(RecoverableZooKeeper.java:(120)) - Process 
identifier=hconnection-0x32130e61 connecting to ZooKeeper 
ensemble=localhost:53474
2016-07-13 00:15:48,912 INFO  [main] zookeeper.ZooKeeper 
(ZooKeeper.java:(438)) - Initiating client connection, 
connectString=localhost:53474 sessionTimeout=9 
watcher=hconnection-0x32130e610x0, quorum=localhost:53474, baseZNode=/hbase
2016-07-13 00:15:48,917 INFO  [main-SendThread(localhost:53474)] 
zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(975)) - Opening socket 
connection to server localhost/127.0.0.1:53474. Will not attempt to 
authenticate using SASL (unknown error)
2016-07-13 00:15:48,918 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
server.NIOServerCnxnFactory (NIOServerCnxnFactory.java:run(197)) - Accepted 
socket connection from /127.0.0.1:38098
2016-07-13 00:15:48,921 INFO  [main-SendThread(localhost:53474)] 
zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(852)) - Socket connection 
established to localhost/127.0.0.1:53474, initiating session
2016-07-13 00:15:48,921 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
server.ZooKeeperServer (ZooKeeperServer.java:processConnectRequest(868)) - 
Client attempting to establish new session at /127.0.0.1:38098
2016-07-13 00:15:48,929 INFO  [SyncThread:0] server.ZooKeeperServer 
(ZooKeeperServer.java:finishSessionInit(617)) - Established session 
0x155e19baa520026 with negotiated timeout 4 for client /127.0.0.1:38098
2016-07-13 00:15:48,929 INFO  [main-SendThread(localhost:53474)] 
zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1235)) - Session 
establishment complete on server localhost/127.0.0.1:53474, sessionid = 
0x155e19baa520026, negotiated timeout = 4
2016-07-13 00:15:48,938 INFO  [main] storage.HBaseTimelineWriterImpl 
(HBaseTimelineWriterImpl.java:serviceStop(541)) - closing the entity table
2016-07-13 00:15:48,938 INFO  [main] storage.HBaseTimelineWriterImpl 
(HBaseTimelineWriterImpl.java:serviceStop(546)) - closing the app_flow table
2016-07-13 00:15:48,938 INFO  [main] storage.HBaseTimelineWriterImpl 
(HBaseTimelineWriterImpl.java:serviceStop(551)) - closing the application table
2016-07-13 00:15:48,941 INFO  
[RpcServer.reader=1,bindAddress=2588a1932efe,port=37493] hbase.Server 
(RpcServer.java:processConnectionHeader(1678)) - Connection from 172.17.0.3 
port: 35467 with version info: version: "1.1.3" url: 
"git://diocles.local/Volumes/hbase-1.1.3RC1/hbase" revision: 

[jira] [Created] (YARN-5379) TestHBaseTimelineStorage. testWriteApplicationToHBase() fails intermittently

2016-07-14 Thread Sangjin Lee (JIRA)
Sangjin Lee created YARN-5379:
-

 Summary: TestHBaseTimelineStorage. testWriteApplicationToHBase() 
fails intermittently
 Key: YARN-5379
 URL: https://issues.apache.org/jira/browse/YARN-5379
 Project: Hadoop YARN
  Issue Type: Bug
  Components: test, timelineserver
Affects Versions: 3.0.0-alpha1
Reporter: Sangjin Lee
Priority: Minor


The {{TestHBaseTimelineStorage. testWriteApplicationToHBase()}} test seems to 
fail intermittently:
{noformat}
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorage.testWriteApplicationToHBase(TestHBaseTimelineStorage.java:817)
{noformat}

The stdout output:
{noformat}
2016-07-13 00:15:48,883 INFO  [main] zookeeper.RecoverableZooKeeper 
(RecoverableZooKeeper.java:(120)) - Process 
identifier=hconnection-0x2b7962a2 connecting to ZooKeeper 
ensemble=localhost:53474
2016-07-13 00:15:48,883 INFO  [main] zookeeper.ZooKeeper 
(ZooKeeper.java:(438)) - Initiating client connection, 
connectString=localhost:53474 sessionTimeout=9 
watcher=hconnection-0x2b7962a20x0, quorum=localhost:53474, baseZNode=/hbase
2016-07-13 00:15:48,886 INFO  [main-SendThread(localhost:53474)] 
zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(975)) - Opening socket 
connection to server localhost/127.0.0.1:53474. Will not attempt to 
authenticate using SASL (unknown error)
2016-07-13 00:15:48,887 INFO  [main-SendThread(localhost:53474)] 
zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(852)) - Socket connection 
established to localhost/127.0.0.1:53474, initiating session
2016-07-13 00:15:48,887 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
server.NIOServerCnxnFactory (NIOServerCnxnFactory.java:run(197)) - Accepted 
socket connection from /127.0.0.1:38097
2016-07-13 00:15:48,887 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
server.ZooKeeperServer (ZooKeeperServer.java:processConnectRequest(868)) - 
Client attempting to establish new session at /127.0.0.1:38097
2016-07-13 00:15:48,896 INFO  [SyncThread:0] server.ZooKeeperServer 
(ZooKeeperServer.java:finishSessionInit(617)) - Established session 
0x155e19baa520025 with negotiated timeout 4 for client /127.0.0.1:38097
2016-07-13 00:15:48,896 INFO  [main-SendThread(localhost:53474)] 
zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1235)) - Session 
establishment complete on server localhost/127.0.0.1:53474, sessionid = 
0x155e19baa520025, negotiated timeout = 4
2016-07-13 00:15:48,911 INFO  [main] zookeeper.RecoverableZooKeeper 
(RecoverableZooKeeper.java:(120)) - Process 
identifier=hconnection-0x32130e61 connecting to ZooKeeper 
ensemble=localhost:53474
2016-07-13 00:15:48,912 INFO  [main] zookeeper.ZooKeeper 
(ZooKeeper.java:(438)) - Initiating client connection, 
connectString=localhost:53474 sessionTimeout=9 
watcher=hconnection-0x32130e610x0, quorum=localhost:53474, baseZNode=/hbase
2016-07-13 00:15:48,917 INFO  [main-SendThread(localhost:53474)] 
zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(975)) - Opening socket 
connection to server localhost/127.0.0.1:53474. Will not attempt to 
authenticate using SASL (unknown error)
2016-07-13 00:15:48,918 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
server.NIOServerCnxnFactory (NIOServerCnxnFactory.java:run(197)) - Accepted 
socket connection from /127.0.0.1:38098
2016-07-13 00:15:48,921 INFO  [main-SendThread(localhost:53474)] 
zookeeper.ClientCnxn (ClientCnxn.java:primeConnection(852)) - Socket connection 
established to localhost/127.0.0.1:53474, initiating session
2016-07-13 00:15:48,921 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:53474] 
server.ZooKeeperServer (ZooKeeperServer.java:processConnectRequest(868)) - 
Client attempting to establish new session at /127.0.0.1:38098
2016-07-13 00:15:48,929 INFO  [SyncThread:0] server.ZooKeeperServer 
(ZooKeeperServer.java:finishSessionInit(617)) - Established session 
0x155e19baa520026 with negotiated timeout 4 for client /127.0.0.1:38098
2016-07-13 00:15:48,929 INFO  [main-SendThread(localhost:53474)] 
zookeeper.ClientCnxn (ClientCnxn.java:onConnected(1235)) - Session 
establishment complete on server localhost/127.0.0.1:53474, sessionid = 
0x155e19baa520026, negotiated timeout = 4
2016-07-13 00:15:48,938 INFO  [main] storage.HBaseTimelineWriterImpl 
(HBaseTimelineWriterImpl.java:serviceStop(541)) - closing the entity table
2016-07-13 00:15:48,938 INFO  [main] storage.HBaseTimelineWriterImpl 
(HBaseTimelineWriterImpl.java:serviceStop(546)) - closing the app_flow table
2016-07-13 00:15:48,938 INFO  [main] storage.HBaseTimelineWriterImpl 
(HBaseTimelineWriterImpl.java:serviceStop(551)) - closing the application table
2016-07-13 00:15:48,941 INFO  

[jira] [Commented] (YARN-5043) TestAMRestart.testRMAppAttemptFailuresValidityInterval random fail

2016-07-14 Thread Jun Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377184#comment-15377184
 ] 

Jun Gong commented on YARN-5043:


Attaching a patch to fix the problem and remove the unnecessary sleeps.
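
For context, the usual way to remove such sleeps is to poll for the expected state instead of waiting a fixed time. A minimal sketch of that pattern is below; it assumes the Guava-based {{GenericTestUtils.waitFor}} helper from the Hadoop test utilities, and the helper name and timeouts are illustrative rather than taken from the attached patch.

{code}
import java.util.concurrent.TimeoutException;

import com.google.common.base.Supplier;
import org.apache.hadoop.test.GenericTestUtils;
import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;

/** Illustrative helper: poll until the app reaches the expected attempt count. */
static void waitForAttemptCount(final RMApp app, final int expected)
    throws TimeoutException, InterruptedException {
  GenericTestUtils.waitFor(new Supplier<Boolean>() {
    @Override
    public Boolean get() {
      // Passes as soon as the condition holds instead of sleeping a fixed
      // amount and hoping the attempt count has settled.
      return app.getAppAttempts().size() == expected;
    }
  }, 100 /* check every 100 ms */, 10000 /* give up after 10 s */);
}
{code}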

> TestAMRestart.testRMAppAttemptFailuresValidityInterval random fail
> --
>
> Key: YARN-5043
> URL: https://issues.apache.org/jira/browse/YARN-5043
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: sandflee
>Assignee: Jun Gong
> Attachments: TestAMRestart-output.txt, YARN-5043.01.patch
>
>
> {noformat}
> Test set: 
> org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 31.558 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart
> testRMAppAttemptFailuresValidityInterval(org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart)
>   Time elapsed: 31.509 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<2> but was:<3>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:555)
> at org.junit.Assert.assertEquals(Assert.java:542)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart.testRMAppAttemptFailuresValidityInterval(TestAMRestart.java:913)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5321) [YARN-3368] Add resource usage for application by node managers

2016-07-14 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377182#comment-15377182
 ] 

Sunil G commented on YARN-5321:
---

Hi [~Sreenath], could you please check the updated patch?

> [YARN-3368] Add resource usage for application by node managers
> ---
>
> Key: YARN-5321
> URL: https://issues.apache.org/jira/browse/YARN-5321
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5321-YARN-3368-0001.patch, 
> YARN-5321-YARN-3368.0002.patch, YARN-5321-YARN-3368.003.patch, 
> YARN-5321-YARN-3368.004.patch, YARN-5321-YARN-3368.005.patch, sample-1.png
>
>
> With this, user can understand distribution of resources allocated to this 
> application.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5043) TestAMRestart.testRMAppAttemptFailuresValidityInterval random fail

2016-07-14 Thread Jun Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Gong updated YARN-5043:
---
Attachment: YARN-5043.01.patch

> TestAMRestart.testRMAppAttemptFailuresValidityInterval random fail
> --
>
> Key: YARN-5043
> URL: https://issues.apache.org/jira/browse/YARN-5043
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: sandflee
>Assignee: Jun Gong
> Attachments: TestAMRestart-output.txt, YARN-5043.01.patch
>
>
> {noformat}
> Test set: 
> org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 31.558 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart
> testRMAppAttemptFailuresValidityInterval(org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart)
>   Time elapsed: 31.509 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<2> but was:<3>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:555)
> at org.junit.Assert.assertEquals(Assert.java:542)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart.testRMAppAttemptFailuresValidityInterval(TestAMRestart.java:913)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5321) [YARN-3368] Add resource usage for application by node managers

2016-07-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377175#comment-15377175
 ] 

Hadoop QA commented on YARN-5321:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 34s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 1m 21s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:6d3a5f5 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817961/YARN-5321-YARN-3368.005.patch
 |
| JIRA Issue | YARN-5321 |
| Optional Tests |  asflicense  |
| uname | Linux c1c9347934d6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3368 / 8ec70d3 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12328/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Add resource usage for application by node managers
> ---
>
> Key: YARN-5321
> URL: https://issues.apache.org/jira/browse/YARN-5321
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5321-YARN-3368-0001.patch, 
> YARN-5321-YARN-3368.0002.patch, YARN-5321-YARN-3368.003.patch, 
> YARN-5321-YARN-3368.004.patch, YARN-5321-YARN-3368.005.patch, sample-1.png
>
>
> With this, user can understand distribution of resources allocated to this 
> application.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5321) [YARN-3368] Add resource usage for application by node managers

2016-07-14 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5321:
--
Assignee: Wangda Tan  (was: Sunil G)

> [YARN-3368] Add resource usage for application by node managers
> ---
>
> Key: YARN-5321
> URL: https://issues.apache.org/jira/browse/YARN-5321
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5321-YARN-3368-0001.patch, 
> YARN-5321-YARN-3368.0002.patch, YARN-5321-YARN-3368.003.patch, 
> YARN-5321-YARN-3368.004.patch, YARN-5321-YARN-3368.005.patch, sample-1.png
>
>
> With this, user can understand distribution of resources allocated to this 
> application.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5321) [YARN-3368] Add resource usage for application by node managers

2016-07-14 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5321:
--
Attachment: YARN-5321-YARN-3368.005.patch

Thanks [~Sreenath] for the review. Uploading a new patch after cleaning up the 
unwanted code. Also fixed a few minor alignment issues. Some border/view tweaks 
can be done in a follow-up improvement JIRA.

> [YARN-3368] Add resource usage for application by node managers
> ---
>
> Key: YARN-5321
> URL: https://issues.apache.org/jira/browse/YARN-5321
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Sunil G
> Attachments: YARN-5321-YARN-3368-0001.patch, 
> YARN-5321-YARN-3368.0002.patch, YARN-5321-YARN-3368.003.patch, 
> YARN-5321-YARN-3368.004.patch, YARN-5321-YARN-3368.005.patch, sample-1.png
>
>
> With this, user can understand distribution of resources allocated to this 
> application.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5321) [YARN-3368] Add resource usage for application by node managers

2016-07-14 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G reassigned YARN-5321:
-

Assignee: Sunil G  (was: Wangda Tan)

> [YARN-3368] Add resource usage for application by node managers
> ---
>
> Key: YARN-5321
> URL: https://issues.apache.org/jira/browse/YARN-5321
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Sunil G
> Attachments: YARN-5321-YARN-3368-0001.patch, 
> YARN-5321-YARN-3368.0002.patch, YARN-5321-YARN-3368.003.patch, 
> YARN-5321-YARN-3368.004.patch, sample-1.png
>
>
> With this, user can understand distribution of resources allocated to this 
> application.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5342) Improve non-exclusive node partition resource allocation in Capacity Scheduler

2016-07-14 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377092#comment-15377092
 ] 

Sunil G commented on YARN-5342:
---

I feel the current approach is simple and transparent for solving container 
allocation on non-exclusive labels.

A few alternatives could have been:
1. We could still allocate no_label containers to a non-exclusive partition (or 
avoid resetting the scheduling opportunity) provided:
- non-exclusive partition pending + no_label container resource demand (for one 
request) < total free (available) resources in the non-exclusive partition.
- It is possible that the complete *pending* resource for a non-exclusive 
partition cannot all be allocated due to user-limit/factor, 
am-resource-percentage, etc. If we can compute an effective pending value, we 
could add it to the equation and do more allocation in the non-exclusive 
partition.

2. Another idea is to over-commit when partition-specific demand arrives for a 
non-exclusive partition, and preempt other containers if needed. This is very 
aggressive, so I doubt it will be acceptable.

But these alternatives are not very transparent or easy to explain to the user 
as a whitebox operation. So we could discuss and continue this in a new ticket 
once the current patch goes in; I could raise another ticket as an improvement 
task. Thoughts, [~leftnoteasy] / [~Naganarasimha Garla]?
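
For reference, a minimal sketch of the reset rule proposed in the description 
below; the names are illustrative and not the actual CapacityScheduler 
identifiers:

{code}
/**
 * Illustrative only: reset the missed-opportunity counter only when doing so
 * can actually lead to further allocations, instead of on every allocation.
 */
static boolean shouldResetMissedOpportunity(long pendingOnNonExclusivePartition,
    boolean allocatedFromDefaultPartition) {
  // Old behaviour: reset whenever any container was allocated, which throttles
  // a non-exclusive partition to roughly one allocation per heartbeat interval.
  // Proposed behaviour:
  return pendingOnNonExclusivePartition > 0 || allocatedFromDefaultPartition;
}
{code}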

> Improve non-exclusive node partition resource allocation in Capacity Scheduler
> --
>
> Key: YARN-5342
> URL: https://issues.apache.org/jira/browse/YARN-5342
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5342.1.patch
>
>
> In the previous implementation, one non-exclusive container allocation is 
> possible when the missed-opportunity >= #cluster-nodes. And 
> missed-opportunity will be reset when container allocated to any node.
> This will slow down the frequency of container allocation on non-exclusive 
> node partition: *When a non-exclusive partition=x has idle resource, we can 
> only allocate one container for this app in every 
> X=nodemanagers.heartbeat-interval secs for the whole cluster.*
> In this JIRA, I propose a fix to reset missed-opportunity only if we have >0 
> pending resource for the non-exclusive partition OR we get allocation from 
> the default partition.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5287) LinuxContainerExecutor fails to set proper permission

2016-07-14 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377005#comment-15377005
 ] 

Naganarasimha G R commented on YARN-5287:
-

Hi [~Ying Zhang],
Please don't delete the previous artifacts; they are helpful for tracking.

> LinuxContainerExecutor fails to set proper permission
> -
>
> Key: YARN-5287
> URL: https://issues.apache.org/jira/browse/YARN-5287
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.2
>Reporter: Ying Zhang
>Assignee: Ying Zhang
>Priority: Minor
> Attachments: YARN-5287-tmp.patch, YARN-5287.003.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> LinuxContainerExecutor fails to set the proper permissions on the local 
> directories(i.e., /hadoop/yarn/local/usercache/... by default) if the cluster 
> has been configured with a restrictive umask, e.g.: umask 077. Job failed due 
> to the following reason:
> Path /hadoop/yarn/local/usercache/ambari-qa/appcache/application_ has 
> permission 700 but needs permission 750
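
As a general illustration of the direction such a fix usually takes (set the 
directory permission explicitly instead of relying on the process umask), a 
small sketch follows; the names are made up and this is not the content of the 
attached patches:

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

/** Illustrative only: create a local app dir with an explicit 750 mode. */
static void createAppDir(FileContext lfs, Path appDir) throws IOException {
  FsPermission appDirPerm = new FsPermission((short) 0750);
  lfs.mkdir(appDir, appDirPerm, true);
  // Setting the permission explicitly afterwards guarantees 750 regardless of
  // any umask applied while the directory was created.
  lfs.setPermission(appDir, appDirPerm);
}
{code}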



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5298) Mount usercache and NM filecache directories into Docker container

2016-07-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377001#comment-15377001
 ] 

Hudson commented on YARN-5298:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10098 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10098/])
YARN-5298. Mount usercache and NM filecache directories into Docker (vvasudev: 
rev 58e18508018081b5b5aa7c12cc5af386146cd26b)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/LinuxContainerRuntimeConstants.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/TestDockerContainerRuntime.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationConstants.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/executor/ContainerStartContext.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestLinuxContainerExecutorWithMocks.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java


> Mount usercache and NM filecache directories into Docker container
> --
>
> Key: YARN-5298
> URL: https://issues.apache.org/jira/browse/YARN-5298
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Varun Vasudev
>Assignee: Sidharta Seethana
> Fix For: 2.9.0
>
> Attachments: YARN-5298.001.patch, YARN-5298.002.patch
>
>
> Currently, we don't mount the usercache and the NM filecache directories into 
> the Docker container. This can lead to issues with containers that rely on 
> public and application scope resources.
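
The shape of the change is to add bind mounts for the NM local directories, 
presumably when the docker run command is assembled. A rough sketch of what 
those mounts look like is below; the method and argument names are made up for 
illustration and are not the DockerRunCommand API, and whether a mount is 
read-only is an assumption of the sketch:

{code}
import java.util.ArrayList;
import java.util.List;

/** Illustrative only: append -v bind mounts for usercache and filecache. */
static List<String> addLocalDirMounts(List<String> dockerRunArgs,
    List<String> nmLocalDirs, String user) {
  List<String> args = new ArrayList<String>(dockerRunArgs);
  for (String localDir : nmLocalDirs) {
    // usercache/<user> holds application-scope resources, filecache holds
    // public resources; both must be visible inside the container.
    args.add("-v");
    args.add(localDir + "/usercache/" + user + ":"
        + localDir + "/usercache/" + user);
    args.add("-v");
    args.add(localDir + "/filecache:" + localDir + "/filecache:ro");
  }
  return args;
}
{code}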



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5303) Clean up ContainerExecutor JavaDoc

2016-07-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377000#comment-15377000
 ] 

Hudson commented on YARN-5303:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10098 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10098/])
YARN-5303. Clean up ContainerExecutor JavaDoc. Contributed by Daniel (vvasudev: 
rev 54bf14f80bcb2cafd1d30b77f2e02cd40b9515d9)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DockerContainerExecutor.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/WindowsSecureContainerExecutor.java


> Clean up ContainerExecutor JavaDoc
> --
>
> Key: YARN-5303
> URL: https://issues.apache.org/jira/browse/YARN-5303
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: YARN-5303.001.patch
>
>
> The {{ContainerExecutor}} class needs a lot of JavaDoc cleanup and could use 
> some other TLC as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4759) Fix signal handling for docker containers

2016-07-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15377002#comment-15377002
 ] 

Hudson commented on YARN-4759:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10098 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10098/])
YARN-4759. Fix signal handling for docker containers. Contributed by (vvasudev: 
rev e5e558b0a34968eaffdd243ce605ef26346c5e85)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerStopCommandTest.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/TestDockerContainerRuntime.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerStopCommand.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c


> Fix signal handling for docker containers
> -
>
> Key: YARN-4759
> URL: https://issues.apache.org/jira/browse/YARN-4759
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Shane Kumpf
> Fix For: 2.9.0
>
> Attachments: YARN-4759.001.patch, YARN-4759.002.patch, 
> YARN-4759.003.patch
>
>
> The current signal handling (in the DockerContainerRuntime) needs to be 
> revisited for docker containers. For example, container reacquisition on NM 
> restart might not work, depending on which user the process in the container 
> runs as. 
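
For readers unfamiliar with the issue, the underlying point appears to be that 
signalling the container process directly from the NM side may fail when the 
in-container process runs as a different user, whereas routing the signal 
through the docker CLI does not depend on that. A minimal sketch, with made-up 
names, of what such a command looks like:

{code}
import java.util.Arrays;
import java.util.List;

/** Illustrative only: build a "docker kill --signal=<SIG> <container>" command. */
static List<String> buildDockerSignalCommand(String containerName, String signal) {
  return Arrays.asList("docker", "kill", "--signal=" + signal, containerName);
}
{code}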



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4759) Fix signal handling for docker containers

2016-07-14 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376959#comment-15376959
 ] 

Varun Vasudev commented on YARN-4759:
-

+1 for the latest patch. Committing this.

> Fix signal handling for docker containers
> -
>
> Key: YARN-4759
> URL: https://issues.apache.org/jira/browse/YARN-4759
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Shane Kumpf
> Attachments: YARN-4759.001.patch, YARN-4759.002.patch, 
> YARN-4759.003.patch
>
>
> The current signal handling (in the DockerContainerRuntime) needs to be 
> revisited for docker containers. For example, container reacquisition on NM 
> restart might not work, depending on which user the process in the container 
> runs as. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5069) TestFifoScheduler.testResourceOverCommit race condition

2016-07-14 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated YARN-5069:
---
Component/s: test

> TestFifoScheduler.testResourceOverCommit race condition
> ---
>
> Key: YARN-5069
> URL: https://issues.apache.org/jira/browse/YARN-5069
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: test
>Reporter: Eric Badger
>Assignee: Eric Badger
> Fix For: 2.7.3
>
> Attachments: YARN-5069-b2.7.001.patch, YARN-5069.001.patch
>
>
> There is a race condition between updating the node resources and the node 
> report becoming available. If the update takes too long, the report will be 
> set to null and we will get an NPE when checking the report's available 
> resources. 
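
A minimal sketch of how such a race is typically closed in the test (poll until 
the node report is available again instead of reading it once); the helper name 
and timeout are illustrative, not taken from the attached patches:

{code}
import org.apache.hadoop.yarn.api.records.NodeId;
import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNodeReport;

/** Illustrative only: wait for the scheduler node report to reappear. */
static SchedulerNodeReport waitForNodeReport(ResourceManager rm, NodeId nodeId,
    long timeoutMs) throws InterruptedException {
  long deadline = System.currentTimeMillis() + timeoutMs;
  SchedulerNodeReport report = rm.getResourceScheduler().getNodeReport(nodeId);
  while (report == null && System.currentTimeMillis() < deadline) {
    Thread.sleep(50);
    report = rm.getResourceScheduler().getNodeReport(nodeId);
  }
  // Caller should still assert non-null before dereferencing.
  return report;
}
{code}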



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5069) TestFifoScheduler.testResourceOverCommit race condition

2016-07-14 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated YARN-5069:
---
Fix Version/s: 2.8.0

> TestFifoScheduler.testResourceOverCommit race condition
> ---
>
> Key: YARN-5069
> URL: https://issues.apache.org/jira/browse/YARN-5069
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: test
>Reporter: Eric Badger
>Assignee: Eric Badger
> Fix For: 2.8.0, 2.7.3
>
> Attachments: YARN-5069-b2.7.001.patch, YARN-5069.001.patch
>
>
> There is a race condition between updating the node resources and the node 
> report becoming available. If the update takes too long, the report will be 
> set to null and we will get an NPE when checking the report's available 
> resources. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5159) Wrong Javadoc tag in MiniYarnCluster

2016-07-14 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376755#comment-15376755
 ] 

Andras Bokor edited comment on YARN-5159 at 7/14/16 1:29 PM:
-

[~ajisakaa]
That is interesting.
I tried again on both Mac and CentOS.
My steps:
{code}
cd hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests
git apply ../../../../YARN-5159.03.patch
mvn javadoc:test-javadoc
open target/site/testapidocs/index.html{code}

In index.html the api says:
{{then the property "yarn.scheduler.include-port-in-node-name" should be set 
true in the configuration used to initialize the minicluster.}}

If I remove the package name, the javadoc will be
{{@value YarnConfiguration#RM_SCHEDULER_INCLUDE_PORT_IN_NODE_NAME}}
then the result is
{{then the property should be set true in the configuration used to initialize 
the minicluster.}}

Am I doing something wrong?
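
For completeness, a minimal standalone class showing the working form (the 
class name is made up for illustration; the constant is the real one in 
YarnConfiguration):

{code}
import org.apache.hadoop.yarn.conf.YarnConfiguration;

/**
 * Javadoc replaces the class-qualified reference
 * {@value YarnConfiguration#RM_SCHEDULER_INCLUDE_PORT_IN_NODE_NAME}
 * with the constant's string value in the generated HTML, provided the
 * referenced class resolves on the javadoc path. A reference that javadoc
 * cannot resolve is not expanded, which matches the rendering described above.
 */
public class ValueTagExample {
}
{code}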


was (Author: boky01):
[~ajisakaa]
It is interesting
I tried again on either Mac or CentOS.
My steps:
{code}
cd hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests
git apply ../../../../YARN-5159.03.patch
mvn javadoc:test-javadoc
open target/site/testapidocs/index.html{code}

In index.html the api says:
{{then the property "yarn.scheduler.include-port-in-node-name" should be set 
true in the configuration used to initialize the minicluster.}}

If I remove the package name, so the javadoc is
{{@value YarnConfiguration#RM_SCHEDULER_INCLUDE_PORT_IN_NODE_NAME}}
than the result is
{{then the property should be set true in the configuration used to initialize 
the minicluster.}}

Do I do something wrong?

> Wrong Javadoc tag in MiniYarnCluster
> 
>
> Key: YARN-5159
> URL: https://issues.apache.org/jira/browse/YARN-5159
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Fix For: 2.8.0
>
> Attachments: YARN-5159.01.patch, YARN-5159.02.patch, 
> YARN-5159.03.patch
>
>
> {@YarnConfiguration.RM_SCHEDULER_INCLUDE_PORT_IN_NODE_NAME} is wrong. Should 
> be changed to 
>  {@value YarnConfiguration#RM_SCHEDULER_INCLUDE_PORT_IN_NODE_NAME}
> Edit:
> I noted that due to java 8 javadoc restrictions the javadoc:test-javadoc goal 
> fails on hadoop-yarn-server-tests project.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4759) Fix signal handling for docker containers

2016-07-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376898#comment-15376898
 ] 

Hadoop QA commented on YARN-4759:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
6s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 3 new + 18 unchanged - 0 fixed = 21 total (was 18) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 6s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 40s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817941/YARN-4759.003.patch |
| JIRA Issue | YARN-4759 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 937eafa1114e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / be26c1b |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12327/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12327/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12327/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Fix signal handling for docker containers
> 

[jira] [Updated] (YARN-5300) Exclude generated federation protobuf sources from YARN Javadoc/findbugs build

2016-07-14 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated YARN-5300:
---
Fix Version/s: (was: 2.x)
   YARN-2915

> Exclude generated federation protobuf sources from YARN Javadoc/findbugs build
> --
>
> Key: YARN-5300
> URL: https://issues.apache.org/jira/browse/YARN-5300
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>Priority: Minor
> Fix For: YARN-2915
>
> Attachments: YARN-5300-v1.patch, YARN-5300-v2.patch
>
>
> This JIRA is the equivalent of YARN-5132 for generated federation protobuf 
> sources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4759) Fix signal handling for docker containers

2016-07-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376863#comment-15376863
 ] 

Hadoop QA commented on YARN-4759:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
3s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 3 new + 18 unchanged - 0 fixed = 21 total (was 18) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 59s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 34s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.TestContainerManagerRegression |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817935/YARN-4759.003.patch |
| JIRA Issue | YARN-4759 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux d4a5d4345b79 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / be26c1b |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12326/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12326/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/12326/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 

[jira] [Commented] (YARN-4759) Fix signal handling for docker containers

2016-07-14 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376852#comment-15376852
 ] 

Shane Kumpf commented on YARN-4759:
---

The Jenkins slave failed again. Will reattach the patch to rerun the job.

{code}


  maven site verification: trunk




cd 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
mvn -Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-trunk-patch-1 -Ptest-patch 
clean site site:stage > 
/testptch/hadoop/patchprocess/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 2>&1
Slave went offline during the build
ERROR: Connection was broken: java.io.IOException: Sorry, this connection is 
closed.
at 
com.trilead.ssh2.transport.TransportManager.ensureConnected(TransportManager.java:587)
at 
com.trilead.ssh2.transport.TransportManager.sendMessage(TransportManager.java:660)
at com.trilead.ssh2.channel.Channel.freeupWindow(Channel.java:407)
at com.trilead.ssh2.channel.Channel.freeupWindow(Channel.java:347)
at 
com.trilead.ssh2.channel.ChannelManager.getChannelData(ChannelManager.java:943)
at 
com.trilead.ssh2.channel.ChannelInputStream.read(ChannelInputStream.java:58)
at 
com.trilead.ssh2.channel.ChannelInputStream.read(ChannelInputStream.java:79)
at 
hudson.remoting.FlightRecorderInputStream.read(FlightRecorderInputStream.java:82)
at 
hudson.remoting.ChunkedInputStream.readHeader(ChunkedInputStream.java:72)
at 
hudson.remoting.ChunkedInputStream.readUntilBreak(ChunkedInputStream.java:103)
at 
hudson.remoting.ChunkedCommandTransport.readBlock(ChunkedCommandTransport.java:39)
at 
hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
at 
hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:48)
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113)
at java.net.SocketOutputStream.write(SocketOutputStream.java:159)
at 
com.trilead.ssh2.crypto.cipher.CipherOutputStream.flush(CipherOutputStream.java:75)
at 
com.trilead.ssh2.transport.TransportConnection.sendMessage(TransportConnection.java:193)
at 
com.trilead.ssh2.transport.TransportConnection.sendMessage(TransportConnection.java:107)
at 
com.trilead.ssh2.transport.TransportManager.sendMessage(TransportManager.java:677)
at 
com.trilead.ssh2.transport.TransportManager$AsynchronousWorker.run(TransportManager.java:115)
{code}

> Fix signal handling for docker containers
> -
>
> Key: YARN-4759
> URL: https://issues.apache.org/jira/browse/YARN-4759
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Shane Kumpf
> Attachments: YARN-4759.001.patch, YARN-4759.002.patch, 
> YARN-4759.003.patch
>
>
> The current signal handling (in the DockerContainerRuntime) needs to be 
> revisited for docker containers. For example, container reacquisition on NM 
> restart might not work, depending on which user the process in the container 
> runs as. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4759) Fix signal handling for docker containers

2016-07-14 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-4759:
--
Attachment: YARN-4759.003.patch

> Fix signal handling for docker containers
> -
>
> Key: YARN-4759
> URL: https://issues.apache.org/jira/browse/YARN-4759
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Shane Kumpf
> Attachments: YARN-4759.001.patch, YARN-4759.002.patch, 
> YARN-4759.003.patch
>
>
> The current signal handling (in the DockerContainerRuntime) needs to be 
> revisited for docker containers. For example, container reacquisition on NM 
> restart might not work, depending on which user the process in the container 
> runs as. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4759) Fix signal handling for docker containers

2016-07-14 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-4759:
--
Attachment: (was: YARN-4759.003.patch)

> Fix signal handling for docker containers
> -
>
> Key: YARN-4759
> URL: https://issues.apache.org/jira/browse/YARN-4759
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Shane Kumpf
> Attachments: YARN-4759.001.patch, YARN-4759.002.patch, 
> YARN-4759.003.patch
>
>
> The current signal handling (in the DockerContainerRuntime) needs to be 
> revisited for docker containers. For example, container reacquisition on NM 
> restart might not work, depending on which user the process in the container 
> runs as. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4759) Fix signal handling for docker containers

2016-07-14 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-4759:
--
Attachment: (was: YARN-4759.003.patch)

> Fix signal handling for docker containers
> -
>
> Key: YARN-4759
> URL: https://issues.apache.org/jira/browse/YARN-4759
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Shane Kumpf
> Attachments: YARN-4759.001.patch, YARN-4759.002.patch, 
> YARN-4759.003.patch
>
>
> The current signal handling (in the DockerContainerRuntime) needs to be 
> revisited for docker containers. For example, container reacquisition on NM 
> restart might not work, depending on which user the process in the container 
> runs as. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4759) Fix signal handling for docker containers

2016-07-14 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-4759:
--
Attachment: YARN-4759.003.patch

> Fix signal handling for docker containers
> -
>
> Key: YARN-4759
> URL: https://issues.apache.org/jira/browse/YARN-4759
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Shane Kumpf
> Attachments: YARN-4759.001.patch, YARN-4759.002.patch, 
> YARN-4759.003.patch, YARN-4759.003.patch
>
>
> The current signal handling (in the DockerContainerRuntime) needs to be 
> revisited for docker containers. For example, container reacquisition on NM 
> restart might not work, depending on which user the process in the container 
> runs as. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4759) Fix signal handling for docker containers

2016-07-14 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15376830#comment-15376830
 ] 

Shane Kumpf commented on YARN-4759:
---

It appears the Jenkins slave failed during the precommit job. I'm also seeing 
only the two previously called-out checkstyle issues when running locally, not 
the three shown above. Reattaching the same patch to rerun the Jenkins job.

> Fix signal handling for docker containers
> -
>
> Key: YARN-4759
> URL: https://issues.apache.org/jira/browse/YARN-4759
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Shane Kumpf
> Attachments: YARN-4759.001.patch, YARN-4759.002.patch, 
> YARN-4759.003.patch
>
>
> The current signal handling (in the DockerContainerRuntime) needs to be 
> revisited for docker containers. For example, container reacquisition on NM 
> restart might not work, depending on which user the process in the container 
> runs as. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


