[jira] [Updated] (YARN-3191) Log object should be initialized with its own class

2015-02-12 Thread Rohith (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith updated YARN-3191:
-
Attachment: 0001-YARN-3191.patch

 Log object should be initialized with its own class
 ---

 Key: YARN-3191
 URL: https://issues.apache.org/jira/browse/YARN-3191
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.6.0
Reporter: Rohith
Assignee: Rohith
Priority: Trivial
 Attachments: 0001-YARN-3191.patch


 In the ContainerImpl and ApplicationImpl classes, the Log object is 
 initialized with the interface name, so logging happens under the interface 
 class name.
 {{private static final Log LOG = LogFactory.getLog(Container.class);}} 
 {{private static final Log LOG = LogFactory.getLog(Application.class);}}
 It should be:
 {{private static final Log LOG = LogFactory.getLog(ContainerImpl.class);}} 
 {{private static final Log LOG = LogFactory.getLog(ApplicationImpl.class);}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3147) Clean up RM web proxy code

2015-02-12 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318754#comment-14318754
 ] 

Xuan Gong commented on YARN-3147:
-

That makes sense. Thanks for the explanation, [~ste...@apache.org].
+1 for the patch. Will commit it.

 Clean up RM web proxy code 
 ---

 Key: YARN-3147
 URL: https://issues.apache.org/jira/browse/YARN-3147
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: webapp
Affects Versions: 2.6.0
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: YARN-3147-001.patch, YARN-3147-002.patch


 YARN-2084 covers fixing up the RM proxy & filter for REST support.
 Before doing that, prepare for it by cleaning up the codebase: factoring out 
 the redirect logic into a single method, some minor reformatting, move to 
 SLF4J and Java7 code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3191) Log object should be initialized with its own class

2015-02-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318845#comment-14318845
 ] 

Hadoop QA commented on YARN-3191:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12698492/0001-YARN-3191.patch
  against trunk revision 83be450.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:

  
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.TestContainerMetrics

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/6619//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6619//console

This message is automatically generated.

 Log object should be initialized with its own class
 ---

 Key: YARN-3191
 URL: https://issues.apache.org/jira/browse/YARN-3191
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.6.0
Reporter: Rohith
Assignee: Rohith
Priority: Trivial
 Attachments: 0001-YARN-3191.patch


 In the ContainerImpl and ApplicationImpl classes, the Log object is 
 initialized with the interface name, so logging happens under the interface 
 class name.
 {{private static final Log LOG = LogFactory.getLog(Container.class);}} 
 {{private static final Log LOG = LogFactory.getLog(Application.class);}}
 It should be:
 {{private static final Log LOG = LogFactory.getLog(ContainerImpl.class);}} 
 {{private static final Log LOG = LogFactory.getLog(ApplicationImpl.class);}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2994) Document work-preserving RM restart

2015-02-12 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318842#comment-14318842
 ] 

Jian He commented on YARN-2994:
---

Since work-preserving recovery is enabled by default and recommended for users, 
I removed the max-attempt config which is not needed for configuring 
work-preserving recovery.

 Document work-preserving RM restart
 ---

 Key: YARN-2994
 URL: https://issues.apache.org/jira/browse/YARN-2994
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-2994.1.patch, YARN-2994.2.patch, YARN-2994.3.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2847) Linux native container executor segfaults if default banned user detected

2015-02-12 Thread Olaf Flebbe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318885#comment-14318885
 ] 

Olaf Flebbe commented on YARN-2847:
---

Oops, the fix is not correct.


 Linux native container executor segfaults if default banned user detected
 -

 Key: YARN-2847
 URL: https://issues.apache.org/jira/browse/YARN-2847
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.5.0, 2.4.1, 2.6.0
Reporter: Jason Lowe
Assignee: Chang Li
 Attachments: YARN-2487.04.trunk.patch, yarn2847.patch, 
 yarn2847.patch, yarn2847notest.patch


 The check_user function in container-executor.c can cause a segmentation 
 fault if banned.users is not provided but the user is detected as one of the 
 default users.  In that scenario it will call free_values on a NULL pointer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2847) Linux native container executor segfaults if default banned user detected

2015-02-12 Thread Olaf Flebbe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Olaf Flebbe updated YARN-2847:
--
Attachment: YARN-2487.05.trunk.patch

The patch from YARN-3180 does not use the uninitialized variable; please review.

 Linux native container executor segfaults if default banned user detected
 -

 Key: YARN-2847
 URL: https://issues.apache.org/jira/browse/YARN-2847
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.5.0, 2.4.1, 2.6.0
Reporter: Jason Lowe
Assignee: Chang Li
 Attachments: YARN-2487.04.trunk.patch, YARN-2487.05.trunk.patch, 
 yarn2847.patch, yarn2847.patch, yarn2847notest.patch


 The check_user function in container-executor.c can cause a segmentation 
 fault if banned.users is not provided but the user is detected as one of the 
 default users.  In that scenario it will call free_values on a NULL pointer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-3191) Log object should be initialized with its own class

2015-02-12 Thread Rohith (JIRA)
Rohith created YARN-3191:


 Summary: Log object should be initialized with its own class
 Key: YARN-3191
 URL: https://issues.apache.org/jira/browse/YARN-3191
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.6.0
Reporter: Rohith
Assignee: Rohith
Priority: Trivial


In the ContainerImpl and ApplicationImpl classes, the Log object is 
initialized with the interface name, so logging happens under the interface 
class name.
{{private static final Log LOG = LogFactory.getLog(Container.class);}} 
{{private static final Log LOG = LogFactory.getLog(Application.class);}}

It should be:
{{private static final Log LOG = LogFactory.getLog(ContainerImpl.class);}} 
{{private static final Log LOG = LogFactory.getLog(ApplicationImpl.class);}}
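For context, a minimal standalone illustration of why this matters (a sketch, 
not the attached patch): commons-logging names the logger after the supplied 
class, and that name is what appears in the log output and what per-class 
log-level overrides match.

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

interface Container {}

class ContainerImpl implements Container {
  // Wrong: the logger category is "Container", so messages from this class
  // are attributed to the interface name.
  private static final Log BAD_LOG = LogFactory.getLog(Container.class);

  // Right: the logger category is "ContainerImpl", the class that actually
  // emits the messages.
  private static final Log LOG = LogFactory.getLog(ContainerImpl.class);
}
{code}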



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3183) Some classes define hashcode() but not equals()

2015-02-12 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318719#comment-14318719
 ] 

Robert Kanter commented on YARN-3183:
-

That's a good point, though I'm not sure what we'd return for the hashcode 
otherwise.  It probably makes sense to have the constructor, where the variable 
used for hashCode and equals is set, do the null check and throw an 
{{IllegalArgumentException}} or something like that?
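
For illustration, a sketch of that approach (the class is taken from the list 
below, but the field name and key type are assumptions):

{code}
import org.apache.hadoop.yarn.api.records.ApplicationId;

// Sketch only: validate in the constructor so hashCode/equals never see a
// null key.
public class WritingApplicationStartEvent {
  private final ApplicationId appId;

  public WritingApplicationStartEvent(ApplicationId appId) {
    if (appId == null) {
      throw new IllegalArgumentException("appId must not be null");
    }
    this.appId = appId;
  }

  @Override
  public int hashCode() {
    return appId.hashCode(); // safe: the constructor guarantees non-null
  }

  @Override
  public boolean equals(Object obj) {
    return obj instanceof WritingApplicationStartEvent
        && appId.equals(((WritingApplicationStartEvent) obj).appId);
  }
}
{code}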

 Some classes define hashcode() but not equals()
 ---

 Key: YARN-3183
 URL: https://issues.apache.org/jira/browse/YARN-3183
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Minor
 Attachments: YARN-3183.patch


 These files all define {{hashCode}}, but don't define {{equals}}:
 {noformat}
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ahs/WritingApplicationAttemptFinishEvent.java
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ahs/WritingApplicationAttemptStartEvent.java
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ahs/WritingApplicationFinishEvent.java
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ahs/WritingApplicationStartEvent.java
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ahs/WritingContainerFinishEvent.java
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ahs/WritingContainerStartEvent.java
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/AppAttemptFinishedEvent.java
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/AppAttemptRegisteredEvent.java
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/ApplicationCreatedEvent.java
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/ApplicationFinishedEvent.java
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/ContainerCreatedEvent.java
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/ContainerFinishedEvent.java
 {noformat}
 This one unnecessarily defines {{equals}}:
 {noformat}
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceRetentionSet.java
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3158) Correct log messages in ResourceTrackerService

2015-02-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318770#comment-14318770
 ] 

Hadoop QA commented on YARN-3158:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12698479/YARN-3158.patch
  against trunk revision 9b0ba59.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart
  
org.apache.hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart

  The following test timeouts occurred in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRMRPCNodeUpdates

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/6614//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6614//console

This message is automatically generated.

 Correct log messages in ResourceTrackerService
 --

 Key: YARN-3158
 URL: https://issues.apache.org/jira/browse/YARN-3158
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Devaraj K
Assignee: Varun Saxena
  Labels: newbie
 Attachments: YARN-3158.patch


 There is a space missing after the container id in the below message.
 {code:xml}
 2015-02-07 08:26:12,641 ERROR 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: 
 Received finished container : container_1423277052568_0001_01_01for 
 unknown application application_1423277052568_0001 Skipping.
 {code}
 Again, there is a space missing before the application id.
 {code:xml}
 LOG.debug("Ignoring container completion status for unmanaged AM"
     + rmApp.getApplicationId());
 {code}
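 For clarity, a sketch of the corrected statements (variable names are 
 placeholders; the attached patch may differ):
 {code}
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;

 // Illustrative only: containerId/appId stand in for the real objects in
 // ResourceTrackerService.
 class LogMessageSketch {
   private static final Log LOG = LogFactory.getLog(LogMessageSketch.class);

   static void logExamples(String containerId, String appId) {
     // Space added after the container id:
     LOG.error("Received finished container : " + containerId
         + " for unknown application " + appId + " Skipping.");
     // Space added before the application id:
     LOG.debug("Ignoring container completion status for unmanaged AM " + appId);
   }
 }
 {code}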



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3192) Empty handler for exception: java.lang.InterruptedException #WebAppProxy.java and #/ResourceManager.java

2015-02-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318773#comment-14318773
 ] 

Hadoop QA commented on YARN-3192:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12698491/YARN-3192.patch
  against trunk revision 83be450.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6620//console

This message is automatically generated.

 Empty handler for exception: java.lang.InterruptedException #WebAppProxy.java 
 and #/ResourceManager.java
 

 Key: YARN-3192
 URL: https://issues.apache.org/jira/browse/YARN-3192
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: YARN-3192.patch


 The InterruptedException is completely ignored. As a result, any events 
 causing this interrupt will be lost.
  File: org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
 {code}
 try {
   event = eventQueue.take();
 } catch (InterruptedException e) {
   LOG.error("Returning, interrupted : " + e);
   return; // TODO: Kill RM.
 }
 {code}
 File: org/apache/hadoop/yarn/server/webproxy/WebAppProxy.java
 {code}
 public void join() {
   if (proxyServer != null) {
     try {
       proxyServer.join();
     } catch (InterruptedException e) {
     }
   }
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3188) yarn application --list should list all the applications ( Not only submitted,accepted and running)

2015-02-12 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318775#comment-14318775
 ] 

Xuan Gong commented on YARN-3188:
-

bq. The main purpose is users are interested in seeing all their outstanding 
applications by default.

Yes, this is the reason.
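
For reference: the filter can already be widened explicitly with 
{{./yarn application -list -appStates ALL}} (assuming the 2.6 CLI options); 
this JIRA is only about what the default should be.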

 yarn application --list should list all the applications ( Not only 
 submitted,accepted and running)
 ---

 Key: YARN-3188
 URL: https://issues.apache.org/jira/browse/YARN-3188
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications, client
Reporter: Anushri
Assignee: Anushri
Priority: Minor

 By default, yarn application --list should list all the applications, since 
 we are not giving the -appstate option.
 Currently it gives the following:
 {noformat}
 [hdfs@host194 bin]$ ./yarn application -list
 15/02/12 19:33:02 INFO client.RMProxy: Connecting to ResourceManager at 
 /0.0.0.0:8032
 15/02/12 19:33:03 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 Total number of applications (application-types: [] and states: [SUBMITTED, 
 ACCEPTED, RUNNING]):1
 Application-Id  Application-NameApplication-Type  
 User   Queue   State Final-State  
ProgressTracking-URL
 application_1422888408992_15010  grep-search   MAPREDUCE  
 hdfs defaultACCEPTED   UNDEFINED  
  0% N/A
 {noformat}
 *Can somebody please assign this issue to me?* 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3164) rmadmin command usage prints incorrect command name

2015-02-12 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318791#comment-14318791
 ] 

Xuan Gong commented on YARN-3164:
-

Patch looks good overall.

But is this new function really necessary?
{code}
+  protected void setErrOut(PrintStream errOut) {
+    this.errOut = errOut;
+  }
{code}

If this is only for testing purposes, we probably do not need it.
Please take a look at TestRMAdminCli.testHelp(). 
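
For reference, one common way to capture stderr in a test without adding a 
setter (a sketch; this may or may not be how TestRMAdminCli.testHelp does it):

{code}
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

class StdErrCaptureSketch {
  static String captureStdErr(Runnable command) {
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    PrintStream original = System.err;
    System.setErr(new PrintStream(buffer));
    try {
      command.run();           // e.g. run the CLI path that prints usage
    } finally {
      System.setErr(original); // always restore the real stderr
    }
    return buffer.toString();
  }
}
{code}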

 rmadmin command usage prints incorrect command name
 ---

 Key: YARN-3164
 URL: https://issues.apache.org/jira/browse/YARN-3164
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.6.0
Reporter: Bibin A Chundatt
Assignee: Bibin A Chundatt
Priority: Minor
 Attachments: YARN-3164.1.patch, YARN-3164.2.patch


 /hadoop/bin{color:red} ./yarn rmadmin -transitionToActive {color}
 transitionToActive: incorrect number of arguments
 Usage:{color:red}  HAAdmin  {color} [-transitionToActive serviceId 
 [--forceactive]]
 {color:red} ./yarn HAAdmin {color} 
 Error: Could not find or load main class HAAdmin
 Expected: it should be rmadmin.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3124) Capacity Scheduler LeafQueue/ParentQueue should use QueueCapacities to track capacities-by-label

2015-02-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318812#comment-14318812
 ] 

Hadoop QA commented on YARN-3124:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12698222/YARN-3124.5.patch
  against trunk revision 9b0ba59.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.TestSchedulerUtils
  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler
  
org.apache.hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter
  
org.apache.hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore
  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestAllocationFileLoaderService

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/6616//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6616//console

This message is automatically generated.

 Capacity Scheduler LeafQueue/ParentQueue should use QueueCapacities to track 
 capacities-by-label
 

 Key: YARN-3124
 URL: https://issues.apache.org/jira/browse/YARN-3124
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, client, resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-3124.1.patch, YARN-3124.2.patch, YARN-3124.3.patch, 
 YARN-3124.4.patch, YARN-3124.5.patch


 After YARN-3098, capacities-by-label (including 
 used-capacity/maximum-capacity/absolute-maximum-capacity, etc.) should be 
 tracked in QueueCapacities.
 This patch targets making all capacities-by-label in the CS queues tracked 
 by QueueCapacities.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2994) Document work-preserving RM restart

2015-02-12 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318837#comment-14318837
 ] 

Jian He commented on YARN-2994:
---

Thanks [~djp], [~ozawa]! I incorporated your comments.
Also added a section for the leveldb-based state store.

 Document work-preserving RM restart
 ---

 Key: YARN-2994
 URL: https://issues.apache.org/jira/browse/YARN-2994
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-2994.1.patch, YARN-2994.2.patch, YARN-2994.3.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3031) [Storage abstraction] Create backing storage write interface for ATS writers

2015-02-12 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-3031:
-
Attachment: Sequence_diagram_write_interaction.2.png

 [Storage abstraction] Create backing storage write interface for ATS writers
 

 Key: YARN-3031
 URL: https://issues.apache.org/jira/browse/YARN-3031
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Sangjin Lee
Assignee: Vrushali C
 Attachments: Sequence_diagram_write_interaction.2.png, 
 Sequence_diagram_write_interaction.png, YARN-3031.01.patch


 Per the design in YARN-2928, come up with the interface for the ATS writer to 
 write to various backing storages. The interface should capture the right 
 level of abstraction so that all backing storage implementations can 
 implement it efficiently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3158) Correct log messages in ResourceTrackerService

2015-02-12 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318714#comment-14318714
 ] 

Brahma Reddy Battula commented on YARN-3158:


LGTM, +1.

 Correct log messages in ResourceTrackerService
 --

 Key: YARN-3158
 URL: https://issues.apache.org/jira/browse/YARN-3158
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Devaraj K
Assignee: Varun Saxena
  Labels: newbie
 Attachments: YARN-3158.patch


 There is a space missing after the container id in the below message.
 {code:xml}
 2015-02-07 08:26:12,641 ERROR 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: 
 Received finished container : container_1423277052568_0001_01_01for 
 unknown application application_1423277052568_0001 Skipping.
 {code}
 Again, there is a space missing before the application id.
 {code:xml}
 LOG.debug("Ignoring container completion status for unmanaged AM"
     + rmApp.getApplicationId());
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3191) Log object should be initialized with its own class

2015-02-12 Thread Rohith (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318708#comment-14318708
 ] 

Rohith commented on YARN-3191:
--

Attached the straightforward patch. Kindly review.

 Log object should be initialized with its own class
 ---

 Key: YARN-3191
 URL: https://issues.apache.org/jira/browse/YARN-3191
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.6.0
Reporter: Rohith
Assignee: Rohith
Priority: Trivial
 Attachments: 0001-YARN-3191.patch


 In the ContainerImpl and ApplicationImpl classes, the Log object is 
 initialized with the interface name, so logging happens under the interface 
 class name.
 {{private static final Log LOG = LogFactory.getLog(Container.class);}} 
 {{private static final Log LOG = LogFactory.getLog(Application.class);}}
 It should be:
 {{private static final Log LOG = LogFactory.getLog(ContainerImpl.class);}} 
 {{private static final Log LOG = LogFactory.getLog(ApplicationImpl.class);}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3074) Nodemanager dies when localizer runner tries to write to a full disk

2015-02-12 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318756#comment-14318756
 ] 

Varun Saxena commented on YARN-3074:


Thanks [~eepayne] for the review.
Thanks [~jlowe] for the review and commit.

 Nodemanager dies when localizer runner tries to write to a full disk
 

 Key: YARN-3074
 URL: https://issues.apache.org/jira/browse/YARN-3074
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.5.0
Reporter: Jason Lowe
Assignee: Varun Saxena
 Fix For: 2.7.0

 Attachments: YARN-3074.001.patch, YARN-3074.002.patch, 
 YARN-3074.03.patch


 When a LocalizerRunner tries to write to a full disk it can bring down the 
 nodemanager process.  Instead of failing the whole process we should fail 
 only the container and make a best attempt to keep going.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3124) Capacity Scheduler LeafQueue/ParentQueue should use QueueCapacities to track capacities-by-label

2015-02-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318861#comment-14318861
 ] 

Hadoop QA commented on YARN-3124:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12698222/YARN-3124.5.patch
  against trunk revision 9b0ba59.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/6617//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6617//console

This message is automatically generated.

 Capacity Scheduler LeafQueue/ParentQueue should use QueueCapacities to track 
 capacities-by-label
 

 Key: YARN-3124
 URL: https://issues.apache.org/jira/browse/YARN-3124
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, client, resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-3124.1.patch, YARN-3124.2.patch, YARN-3124.3.patch, 
 YARN-3124.4.patch, YARN-3124.5.patch


 After YARN-3098, capacities-by-label (including 
 used-capacity/maximum-capacity/absolute-maximum-capacity, etc.) should be 
 tracked in QueueCapacities.
 This patch targets making all capacities-by-label in the CS queues tracked 
 by QueueCapacities.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (YARN-3180) container-executor gets SEGV for default banned user

2015-02-12 Thread Olaf Flebbe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Olaf Flebbe resolved YARN-3180.
---
Resolution: Duplicate

 container-executor gets SEGV for default banned user
 

 Key: YARN-3180
 URL: https://issues.apache.org/jira/browse/YARN-3180
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.4.1, 2.6.1
Reporter: Olaf Flebbe
 Attachments: 
 0001-YARN-3180-container-executor-gets-SEGV-for-default-b.patch


 container-executor dumps core if container-executor.cfg:
 * does not contain a banned.users statement, so the default takes effect
 * the banned user id is above min.user.id
 * the user is contained in the default banned.users list
 And yes, this did happen to me.
 Patch and test appended (relative to git trunk).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3031) [Storage abstraction] Create backing storage write interface for ATS writers

2015-02-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318758#comment-14318758
 ] 

Hadoop QA commented on YARN-3031:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12698488/Sequence_diagram_write_interaction.2.png
  against trunk revision 9e33c99.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6618//console

This message is automatically generated.

 [Storage abstraction] Create backing storage write interface for ATS writers
 

 Key: YARN-3031
 URL: https://issues.apache.org/jira/browse/YARN-3031
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Sangjin Lee
Assignee: Vrushali C
 Attachments: Sequence_diagram_write_interaction.2.png, 
 Sequence_diagram_write_interaction.png, YARN-3031.01.patch


 Per the design in YARN-2928, come up with the interface for the ATS writer to 
 write to various backing storages. The interface should capture the right 
 level of abstraction so that all backing storage implementations can 
 implement it efficiently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3147) Clean up RM web proxy code

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318784#comment-14318784
 ] 

Hudson commented on YARN-3147:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7092 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7092/])
YARN-3147. Clean up RM web proxy code. Contributed by Steve Loughran (xgong: 
rev 83be450acc7fc9bb9f7bbd006e7b0804bf10279c)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/test/java/org/apache/hadoop/yarn/server/webproxy/amfilter/TestAmFilter.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/WebAppProxyServlet.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/amfilter/AmIpFilter.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/WebAppProxy.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/WebAppProxyServer.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/amfilter/AmFilterInitializer.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/test/java/org/apache/hadoop/yarn/server/webproxy/TestWebAppProxyServlet.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/ProxyUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/ProxyUriUtils.java


 Clean up RM web proxy code 
 ---

 Key: YARN-3147
 URL: https://issues.apache.org/jira/browse/YARN-3147
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: webapp
Affects Versions: 2.6.0
Reporter: Steve Loughran
Assignee: Steve Loughran
 Fix For: 2.7.0

 Attachments: YARN-3147-001.patch, YARN-3147-002.patch


 YARN-2084 covers fixing up the RM proxy & filter for REST support.
 Before doing that, prepare for it by cleaning up the codebase: factoring out 
 the redirect logic into a single method, some minor reformatting, move to 
 SLF4J and Java7 code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3021) YARN's delegation-token handling disallows certain trust setups to operate properly over DistCp

2015-02-12 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318891#comment-14318891
 ] 

Vinod Kumar Vavilapalli commented on YARN-3021:
---

bq. Seems regardless of this jira, we could do a renewer address match as a 
validation step. Right?
+1. Not as a validation, but to see if this RM should attempt renewal or not.

bq. In this case, actually looks like the renewer would be cluster A's yarn, 
based on TokenCache@obtainTokensForNamenodesInternal and 
Master.getMasterPrincipal.
bq. So it looks like that even if we check, the renewer would match in this 
case. Please correct me if I'm wrong.
To make it work, we will still have to change the applications (MR etc). App 
changes are needed irrespective of the approach.

bq. I'd be willing to accept that approach, but for one small worry: Any app 
sending in a token with a bad renewer set could get through with such a change, 
whereas previously it'd be rejected outright. Not that it'd be harmful (as it 
is ignored), but it could still be seen as a behaviour change, no?
This is what you originally wanted :) [In the 1.x JobTracker the same call is 
present, but it is done asynchronously and once the renewal attempt failed we 
simply ceased to schedule any further attempts of renewals, rather than fail 
the job immediately.]
I think the problem is that the RM doesn't have enough knowledge to know what 
is a valid third-party renewer (one that is not this RM itself) and what is an 
invalid renewer. Even the app cannot really be sure.

Overall I think automatic token renewal has always been an auxiliary 
service provided by YARN's RM. If you want to make use of that service as an 
application, you need to get a token with the right token-service ('me') and 
pass it to 'me' to renew it correctly. If either of those conditions isn't 
met, I'll not give you that service.

Implicitly we also had automatic token validation as an auxiliary feature. 
But given the history I know, this was never our intention. The question is 
whether we continue supporting this implicit aux feature or drop it. And given 
my earlier point that the RM cannot know either way, this implicit feature was 
always broken. I'm wary of adding this new API (I know I started with that 
proposal :) )

 YARN's delegation-token handling disallows certain trust setups to operate 
 properly over DistCp
 ---

 Key: YARN-3021
 URL: https://issues.apache.org/jira/browse/YARN-3021
 Project: Hadoop YARN
  Issue Type: Bug
  Components: security
Affects Versions: 2.3.0
Reporter: Harsh J
 Attachments: YARN-3021.001.patch, YARN-3021.002.patch, 
 YARN-3021.003.patch, YARN-3021.patch


 Consider this scenario of 3 realms: A, B and COMMON, where A trusts COMMON, 
 and B trusts COMMON (one way trusts both), and both A and B run HDFS + YARN 
 clusters.
 Now if one logs in with a COMMON credential, and runs a job on A's YARN that 
 needs to access B's HDFS (such as a DistCp), the operation fails in the RM, 
 as it attempts a renewDelegationToken(…) synchronously during application 
 submission (to validate the managed token before it adds it to a scheduler 
 for automatic renewal). The call obviously fails because the B realm will not trust 
 A's credentials (here, the RM's principal is the renewer).
 In the 1.x JobTracker the same call is present, but it is done asynchronously 
 and once the renewal attempt failed we simply ceased to schedule any further 
 attempts of renewals, rather than fail the job immediately.
 We should change the logic such that we attempt the renewal but go easy on 
 the failure and skip the scheduling alone, rather than bubble back an error 
 to the client, failing the app submission. This way the old behaviour is 
 retained.
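 A sketch of that proposed logic (the types and method names here are 
 illustrative stand-ins, not the actual DelegationTokenRenewer API):
 {code}
 import java.io.IOException;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;

 class RenewalSketch {
   interface Token {}
   interface Renewer {
     void renew(Token t) throws IOException; // one-time validation attempt
     void scheduleRenewal(Token t);          // periodic auto-renewal
   }

   private static final Log LOG = LogFactory.getLog(RenewalSketch.class);

   static void handleToken(Renewer renewer, Token token) {
     try {
       renewer.renew(token);           // attempt the renewal, as today
       renewer.scheduleRenewal(token); // schedule renewals only on success
     } catch (IOException e) {
       // Renewal failed (e.g. the remote realm does not trust this RM's
       // principal): skip scheduling instead of failing the submission.
       LOG.warn("Cannot renew token; skipping automatic renewal", e);
     }
   }
 }
 {code}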



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-3192) Empty handler for exception: java.lang.InterruptedException #WebAppProxy.java and #/ResourceManager.java

2015-02-12 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created YARN-3192:
--

 Summary: Empty handler for exception: 
java.lang.InterruptedException #WebAppProxy.java and #/ResourceManager.java
 Key: YARN-3192
 URL: https://issues.apache.org/jira/browse/YARN-3192
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


The InterruptedException is completely ignored. As a result, any events causing 
this interrupt will be lost.

 File: org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java

{code}
try {
  event = eventQueue.take();
} catch (InterruptedException e) {
  LOG.error("Returning, interrupted : " + e);
  return; // TODO: Kill RM.
}
{code}

File: org/apache/hadoop/yarn/server/webproxy/WebAppProxy.java

{code}
public void join() {
  if (proxyServer != null) {
    try {
      proxyServer.join();
    } catch (InterruptedException e) {
    }
  }
}
{code}
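
A conventional fix (a sketch with stand-in types, not the attached patch) is to 
log the exception and restore the interrupt status so callers can still observe 
the interruption:

{code}
class JoinSketch {
  private static final org.apache.commons.logging.Log LOG =
      org.apache.commons.logging.LogFactory.getLog(JoinSketch.class);

  private Thread proxyServer; // stand-in for the real proxy server handle

  public void join() {
    if (proxyServer != null) {
      try {
        proxyServer.join();
      } catch (InterruptedException e) {
        // Log instead of ignoring, and restore the flag so the calling
        // thread can react to the interruption.
        LOG.warn("Interrupted while waiting for the proxy server", e);
        Thread.currentThread().interrupt();
      }
    }
  }
}
{code}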



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3192) Empty handler for exception: java.lang.InterruptedException #WebAppProxy.java and #/ResourceManager.java

2015-02-12 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated YARN-3192:
---
Attachment: YARN-3192.patch

 Empty handler for exception: java.lang.InterruptedException #WebAppProxy.java 
 and #/ResourceManager.java
 

 Key: YARN-3192
 URL: https://issues.apache.org/jira/browse/YARN-3192
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: YARN-3192.patch


 The InterruptedException is completely ignored. As a result, any events 
 causing this interrupt will be lost.
  File: org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
 {code}
 try {
   event = eventQueue.take();
 } catch (InterruptedException e) {
   LOG.error("Returning, interrupted : " + e);
   return; // TODO: Kill RM.
 }
 {code}
 File: org/apache/hadoop/yarn/server/webproxy/WebAppProxy.java
 {code}
 public void join() {
   if (proxyServer != null) {
     try {
       proxyServer.join();
     } catch (InterruptedException e) {
     }
   }
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3033) [Aggregator wireup] Implement NM starting the ATS writer companion

2015-02-12 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318833#comment-14318833
 ] 

Li Lu commented on YARN-3033:
-

Hi [~sjlee0] and [~devaraj.k], I've already started thinking about this; would 
you mind if I take this JIRA over? Thanks!

 [Aggregator wireup] Implement NM starting the ATS writer companion
 --

 Key: YARN-3033
 URL: https://issues.apache.org/jira/browse/YARN-3033
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Sangjin Lee
Assignee: Devaraj K

 Per design in YARN-2928, implement node managers starting the ATS writer 
 companion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2079) Recover NonAggregatingLogHandler state upon nodemanager restart

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318886#comment-14318886
 ] 

Hudson commented on YARN-2079:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7093 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7093/])
YARN-2079. Recover NonAggregatingLogHandler state upon nodemanager restart. 
(Contributed by Jason Lowe) (junping_du: rev 
04f5ef18f7877ce30b12b1a3c1e851c420531b72)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/recovery/NMStateStoreService.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/recovery/NMMemoryStateStoreService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/recovery/NMLeveldbStateStoreService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/recovery/TestNMLeveldbStateStoreService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/recovery/NMNullStateStoreService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/loghandler/TestNonAggregatingLogHandler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/proto/yarn_server_nodemanager_recovery.proto
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/loghandler/NonAggregatingLogHandler.java


 Recover NonAggregatingLogHandler state upon nodemanager restart
 ---

 Key: YARN-2079
 URL: https://issues.apache.org/jira/browse/YARN-2079
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 2.4.0
Reporter: Jason Lowe
Assignee: Jason Lowe
 Fix For: 2.7.0

 Attachments: YARN-2079.002.patch, YARN-2079.003.patch, YARN-2079.patch


 The state of NonAggregatingLogHandler needs to be persisted so logs are 
 properly deleted across a nodemanager restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2079) Recover NonAggregatingLogHandler state upon nodemanager restart

2015-02-12 Thread Rohith (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318647#comment-14318647
 ] 

Rohith commented on YARN-2079:
--

Thanks [~jlowe] for your explanation. That sounds good to me.
[~djp] Good catch! :-)

 Recover NonAggregatingLogHandler state upon nodemanager restart
 ---

 Key: YARN-2079
 URL: https://issues.apache.org/jira/browse/YARN-2079
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 2.4.0
Reporter: Jason Lowe
Assignee: Jason Lowe
 Attachments: YARN-2079.002.patch, YARN-2079.003.patch, YARN-2079.patch


 The state of NonAggregatingLogHandler needs to be persisted so logs are 
 properly deleted across a nodemanager restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3076) YarnClient implementation to retrieve label to node mapping

2015-02-12 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318645#comment-14318645
 ] 

Wangda Tan commented on YARN-3076:
--

[~varun_saxena], could you take a look at the test failures? After YARN-2694, 
the test cases need to be updated. Thanks.

 YarnClient implementation to retrieve label to node mapping
 ---

 Key: YARN-3076
 URL: https://issues.apache.org/jira/browse/YARN-3076
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Affects Versions: 2.7.0
Reporter: Varun Saxena
Assignee: Varun Saxena
 Attachments: YARN-3076.001.patch, YARN-3076.002.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2079) Recover NonAggregatingLogHandler state upon nodemanager restart

2015-02-12 Thread Rohith (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318745#comment-14318745
 ] 

Rohith commented on YARN-2079:
--

Patch looks good to me, +1.

 Recover NonAggregatingLogHandler state upon nodemanager restart
 ---

 Key: YARN-2079
 URL: https://issues.apache.org/jira/browse/YARN-2079
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 2.4.0
Reporter: Jason Lowe
Assignee: Jason Lowe
 Attachments: YARN-2079.002.patch, YARN-2079.003.patch, YARN-2079.patch


 The state of NonAggregatingLogHandler needs to be persisted so logs are 
 properly deleted across a nodemanager restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3147) Clean up RM web proxy code

2015-02-12 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318763#comment-14318763
 ] 

Xuan Gong commented on YARN-3147:
-

Committed into trunk/branch-2. Thanks, Steve

 Clean up RM web proxy code 
 ---

 Key: YARN-3147
 URL: https://issues.apache.org/jira/browse/YARN-3147
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: webapp
Affects Versions: 2.6.0
Reporter: Steve Loughran
Assignee: Steve Loughran
 Fix For: 2.7.0

 Attachments: YARN-3147-001.patch, YARN-3147-002.patch


 YARN-2084 covers fixing up the RM proxy & filter for REST support.
 Before doing that, prepare for it by cleaning up the codebase: factoring out 
 the redirect logic into a single method, some minor reformatting, move to 
 SLF4J and Java7 code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2796) deprecate sbin/yarn-daemon.sh

2015-02-12 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated YARN-2796:
---
Summary: deprecate sbin/yarn-daemon.sh  (was: deprecate sbin/*.sh)

 deprecate sbin/yarn-daemon.sh
 -

 Key: YARN-2796
 URL: https://issues.apache.org/jira/browse/YARN-2796
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scripts
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: YARN-2796-00.patch


 We should mark all yarn sbin/*.sh commands (except for start and stop) as 
 deprecated in trunk so that they may be removed in a future release.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3177) Fix the order of the parameters in YarnConfiguration

2015-02-12 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318874#comment-14318874
 ] 

Chris Douglas commented on YARN-3177:
-

[~brahmareddy] moving code for readability is completely reasonable.

In this particular instance, {{YarnConfiguration}} is a set of fields... 
Javadoc orders them and devs will look up the symbol directly. Those two cover 
basically all the users of the class; it's almost never read. Restructuring it 
offers a low payoff, compared to maintaining the history of when and why that 
field was added to {{YarnConfiguration}}. Of course that history is still 
available, but restructuring adds another lookup for developers, which is the 
more common case.

 Fix the order of the parameters in YarnConfiguration
 

 Key: YARN-3177
 URL: https://issues.apache.org/jira/browse/YARN-3177
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
Priority: Minor
 Attachments: YARN-3177.patch


  *1. Keep the process principal and keytab in one place (the NM and RM 
 entries are not placed in order)* 
 {code} 
 public static final String RM_AM_MAX_ATTEMPTS =
     RM_PREFIX + "am.max-attempts";
 public static final int DEFAULT_RM_AM_MAX_ATTEMPTS = 2;

 /** The keytab for the resource manager.*/
 public static final String RM_KEYTAB = 
     RM_PREFIX + "keytab";

 /**The kerberos principal to be used for spnego filter for RM.*/
 public static final String RM_WEBAPP_SPNEGO_USER_NAME_KEY =
     RM_PREFIX + "webapp.spnego-principal";

 /**The kerberos keytab to be used for spnego filter for RM.*/
 public static final String RM_WEBAPP_SPNEGO_KEYTAB_FILE_KEY =
     RM_PREFIX + "webapp.spnego-keytab-file";
 {code}
  *2. The RM webapp address and port are not in order* 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3179) Update use of Iterator to Iterable

2015-02-12 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated YARN-3179:
-
Attachment: YARN-3179.002.patch

Update based on Devaraj K's feedback

 Update use of Iterator to Iterable
 --

 Key: YARN-3179
 URL: https://issues.apache.org/jira/browse/YARN-3179
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
 Attachments: YARN-3179.001.patch, YARN-3179.002.patch


 Found these using the IntelliJ Findbugs-IDEA plugin, which uses findbugs3.
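 For context, the typical shape of such a change (an illustrative sketch, not 
 taken from the patch):
 {code}
 import java.util.Arrays;
 import java.util.Iterator;
 import java.util.List;

 class IterableSketch {
   public static void main(String[] args) {
     List<String> items = Arrays.asList("a", "b", "c");

     // Before: explicit Iterator plumbing.
     Iterator<String> it = items.iterator();
     while (it.hasNext()) {
       System.out.println(it.next());
     }

     // After: the collection is Iterable, so the enhanced for loop suffices.
     for (String item : items) {
       System.out.println(item);
     }
   }
 }
 {code}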



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2079) Recover NonAggregatingLogHandler state upon nodemanager restart

2015-02-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318683#comment-14318683
 ] 

Hadoop QA commented on YARN-2079:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12698482/YARN-2079.003.patch
  against trunk revision 9b0ba59.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/6615//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6615//console

This message is automatically generated.

 Recover NonAggregatingLogHandler state upon nodemanager restart
 ---

 Key: YARN-2079
 URL: https://issues.apache.org/jira/browse/YARN-2079
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 2.4.0
Reporter: Jason Lowe
Assignee: Jason Lowe
 Attachments: YARN-2079.002.patch, YARN-2079.003.patch, YARN-2079.patch


 The state of NonAggregatingLogHandler needs to be persisted so logs are 
 properly deleted across a nodemanager restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2079) Recover NonAggregatingLogHandler state upon nodemanager restart

2015-02-12 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318741#comment-14318741
 ] 

Junping Du commented on YARN-2079:
--

Thanks [~jlowe] for addressing my comments in 003 patch.
bq. ScheduledThreadPoolExecutor already treats negative delays as delays of 
zero, so I didn't bother to replicate that logic.
Makes sense. We can just keep it as it is now. However, it could be slightly 
better if we logged the negative value to note that deletion got delayed 
because of the NM restart, wouldn't it? Anyway, I think this is only a nit and 
we can fix it later.

003 patch looks pretty good to me. [~rohithsharma], do you have additional 
comments here? If not, I will go ahead to commit this.
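
For the record, a tiny demo of that JDK behavior (a sketch; a delay that is 
already in the past just makes the task immediately eligible):

{code}
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class NegativeDelayDemo {
  public static void main(String[] args) throws InterruptedException {
    ScheduledThreadPoolExecutor pool = new ScheduledThreadPoolExecutor(1);
    // A negative delay is not an error; the task runs right away, which is
    // why no extra clamping logic is needed on recovery.
    pool.schedule(new Runnable() {
      @Override
      public void run() {
        System.out.println("ran immediately");
      }
    }, -5, TimeUnit.SECONDS);
    pool.shutdown();
    pool.awaitTermination(10, TimeUnit.SECONDS);
  }
}
{code}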

 Recover NonAggregatingLogHandler state upon nodemanager restart
 ---

 Key: YARN-2079
 URL: https://issues.apache.org/jira/browse/YARN-2079
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 2.4.0
Reporter: Jason Lowe
Assignee: Jason Lowe
 Attachments: YARN-2079.002.patch, YARN-2079.003.patch, YARN-2079.patch


 The state of NonAggregatingLogHandler needs to be persisted so logs are 
 properly deleted across a nodemanager restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3191) Log object should be initialized with its own class

2015-02-12 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318738#comment-14318738
 ] 

Brahma Reddy Battula commented on YARN-3191:


Patch LGTM, +1. Thanks for reporting and providing the patch.

 Log object should be initialized with its own class
 ---

 Key: YARN-3191
 URL: https://issues.apache.org/jira/browse/YARN-3191
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.6.0
Reporter: Rohith
Assignee: Rohith
Priority: Trivial
 Attachments: 0001-YARN-3191.patch


 In the ContainerImpl and ApplicationImpl classes, the Log object is initialized 
 with the interface name. This causes logging to appear under the interface class.
 {{private static final Log LOG = LogFactory.getLog(Container.class);}} 
 {{private static final Log LOG = LogFactory.getLog(Application.class);}}
 It should be 
 {{private static final Log LOG = LogFactory.getLog(ContainerImpl.class);}} 
 {{private static final Log LOG = LogFactory.getLog(ApplicationImpl.class);}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3021) YARN's delegation-token handling disallows certain trust setups to operate properly over DistCp

2015-02-12 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318760#comment-14318760
 ] 

Yongjun Zhang commented on YARN-3021:
-

Hi [~vinodkv] and [~jianhe],

Would you please comment on [~qwertymaniac]'s comment above?

Thanks a lot.


 YARN's delegation-token handling disallows certain trust setups to operate 
 properly over DistCp
 ---

 Key: YARN-3021
 URL: https://issues.apache.org/jira/browse/YARN-3021
 Project: Hadoop YARN
  Issue Type: Bug
  Components: security
Affects Versions: 2.3.0
Reporter: Harsh J
 Attachments: YARN-3021.001.patch, YARN-3021.002.patch, 
 YARN-3021.003.patch, YARN-3021.patch


 Consider this scenario of 3 realms: A, B and COMMON, where A trusts COMMON, 
 and B trusts COMMON (one way trusts both), and both A and B run HDFS + YARN 
 clusters.
 Now if one logs in with a COMMON credential, and runs a job on A's YARN that 
 needs to access B's HDFS (such as a DistCp), the operation fails in the RM, 
 as it attempts a renewDelegationToken(…) synchronously during application 
 submission (to validate the managed token before it adds it to a scheduler 
 for automatic renewal). The call obviously fails because the B realm will not 
 trust A's credentials (here, the RM's principal is the renewer).
 In the 1.x JobTracker the same call is present, but it is done asynchronously, 
 and once the renewal attempt failed we simply ceased to schedule any further 
 renewal attempts, rather than failing the job immediately.
 We should change the logic such that we attempt the renewal but go easy on 
 the failure and skip only the scheduling, rather than bubbling an error back 
 to the client and failing the app submission. This way the old behaviour is 
 retained.
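
A minimal sketch of that proposal, for illustration only: {{scheduleRenewal}} 
and the surrounding class are hypothetical stand-ins, not the RM's actual 
DelegationTokenRenewer code.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.token.Token;

public class TokenRenewSketch {
  void handleToken(Token<?> token, Configuration conf) {
    try {
      token.renew(conf);       // one-off validation of the managed token
      scheduleRenewal(token);  // hypothetical helper: add to the renewal timer
    } catch (Exception e) {
      // e.g. realm B refuses the RM's (realm A) principal as renewer:
      // log and skip automatic renewal instead of bubbling the error
      // back to the client and failing the submission.
      System.err.println("Cannot renew token " + token
          + "; skipping scheduled renewal: " + e);
    }
  }

  void scheduleRenewal(Token<?> token) {
    // placeholder for the scheduler hand-off
  }
}
{code}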



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3183) Some classes define hashcode() but not equals()

2015-02-12 Thread Rohith (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318776#comment-14318776
 ] 

Rohith commented on YARN-3183:
--

I'd suggest returning 0 in hashCode() if the field is null.
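
A minimal sketch of that convention on a hypothetical class (not the YARN-3183 
patch itself), defining {{equals}} consistently with the null-safe {{hashCode}}:

{code}
class SomeEvent {
  private final String id;  // illustrative single field

  SomeEvent(String id) {
    this.id = id;
  }

  @Override
  public int hashCode() {
    return id == null ? 0 : id.hashCode();  // a null field contributes 0
  }

  @Override
  public boolean equals(Object obj) {
    if (this == obj) {
      return true;
    }
    if (!(obj instanceof SomeEvent)) {
      return false;
    }
    SomeEvent other = (SomeEvent) obj;
    return id == null ? other.id == null : id.equals(other.id);
  }
}
{code}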

 Some classes define hashcode() but not equals()
 ---

 Key: YARN-3183
 URL: https://issues.apache.org/jira/browse/YARN-3183
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Minor
 Attachments: YARN-3183.patch


 These files all define {{hashCode}}, but don't define {{equals}}:
 {noformat}
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ahs/WritingApplicationAttemptFinishEvent.java
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ahs/WritingApplicationAttemptStartEvent.java
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ahs/WritingApplicationFinishEvent.java
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ahs/WritingApplicationStartEvent.java
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ahs/WritingContainerFinishEvent.java
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ahs/WritingContainerStartEvent.java
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/AppAttemptFinishedEvent.java
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/AppAttemptRegisteredEvent.java
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/ApplicationCreatedEvent.java
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/ApplicationFinishedEvent.java
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/ContainerCreatedEvent.java
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/ContainerFinishedEvent.java
 {noformat}
 This one unnecessarily defines {{equals}}:
 {noformat}
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceRetentionSet.java
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3179) Update use of Iterator to Iterable

2015-02-12 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318663#comment-14318663
 ] 

Ray Chiang commented on YARN-3179:
--

Got it. Thanks!

 Update use of Iterator to Iterable
 --

 Key: YARN-3179
 URL: https://issues.apache.org/jira/browse/YARN-3179
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
 Attachments: YARN-3179.001.patch, YARN-3179.002.patch


 Found these using the IntelliJ Findbugs-IDEA plugin, which uses findbugs3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3031) [Storage abstraction] Create backing storage write interface for ATS writers

2015-02-12 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318680#comment-14318680
 ] 

Vrushali C commented on YARN-3031:
--

Hi [~djp],
Thanks for looking at this! Yes, the calls from the client (like the AM) to write 
entity info could be sync (blocking) or async (non-blocking), but the storage 
writer API (TimelineServiceWriter) calls are always synchronous (blocking). 
The Base Aggregator Service would provide both write options (sync and 
async) in its API. That would be part of YARN-3167, I think.

I have modified the sequence diagram to say that calls from the AM to the Base 
Aggregator Service can be sync or async.

thanks
Vrushali
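
To make the sync/async split concrete, a hedged sketch under stated 
assumptions: the interface and method names below are illustrative only, not 
the YARN-3031 API.

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Storage writer: its calls are always blocking.
interface TimelineWriterSketch {
  void write(String entity) throws Exception;
}

// Aggregator: offers both a sync and an async write path on top.
class AggregatorSketch {
  private final TimelineWriterSketch writer;
  private final ExecutorService pool = Executors.newSingleThreadExecutor();

  AggregatorSketch(TimelineWriterSketch writer) {
    this.writer = writer;
  }

  // Sync path: block the caller until the storage write returns.
  void writeSync(String entity) throws Exception {
    writer.write(entity);
  }

  // Async path: queue the (still blocking) storage write on a worker thread.
  Future<?> writeAsync(final String entity) {
    return pool.submit(new Runnable() {
      @Override
      public void run() {
        try {
          writer.write(entity);
        } catch (Exception e) {
          e.printStackTrace();  // real code would report this to the caller
        }
      }
    });
  }
}
{code}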





 [Storage abstraction] Create backing storage write interface for ATS writers
 

 Key: YARN-3031
 URL: https://issues.apache.org/jira/browse/YARN-3031
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Sangjin Lee
Assignee: Vrushali C
 Attachments: Sequence_diagram_write_interaction.png, 
 YARN-3031.01.patch


 Per design in YARN-2928, come up with the interface for the ATS writer to 
 write to various backing storages. The interface should be created to capture 
 the right level of abstractions so that it will enable all backing storage 
 implementations to implement it efficiently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (YARN-3035) [Storage implementation] Create a test-only backing storage implementation for ATS writes

2015-02-12 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee reassigned YARN-3035:
-

Assignee: Sangjin Lee  (was: Devaraj K)

 [Storage implementation] Create a test-only backing storage implementation 
 for ATS writes
 -

 Key: YARN-3035
 URL: https://issues.apache.org/jira/browse/YARN-3035
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Sangjin Lee
Assignee: Sangjin Lee

 Per design in YARN-2928, create a test-only bare bone backing storage 
 implementation for ATS writes.
 We could consider something like a no-op or in-memory storage strictly for 
 development and testing purposes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2847) Linux native container executor segfaults if default banned user detected

2015-02-12 Thread Olaf Flebbe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Olaf Flebbe updated YARN-2847:
--
Attachment: YARN-2487.04.trunk.patch

Merged test from YARN-3810, cleaned up whitespace errors. Ready for review.

 Linux native container executor segfaults if default banned user detected
 -

 Key: YARN-2847
 URL: https://issues.apache.org/jira/browse/YARN-2847
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.5.0
Reporter: Jason Lowe
Assignee: Chang Li
 Attachments: YARN-2487.04.trunk.patch, yarn2847.patch, 
 yarn2847.patch, yarn2847notest.patch


 The check_user function in container-executor.c can cause a segmentation 
 fault if banned.users is not provided but the user is detected as one of the 
 default users.  In that scenario it will call free_values on a NULL pointer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3187) Documentation of Capacity Scheduler Queue mapping based on user or group

2015-02-12 Thread Gururaj Shetty (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gururaj Shetty updated YARN-3187:
-
Attachment: YARN-3187.1.patch

 Documentation of Capacity Scheduler Queue mapping based on user or group
 

 Key: YARN-3187
 URL: https://issues.apache.org/jira/browse/YARN-3187
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: capacityscheduler, documentation
Affects Versions: 2.6.0
Reporter: Naganarasimha G R
Assignee: Gururaj Shetty
  Labels: documentation
 Fix For: 2.6.0

 Attachments: YARN-3187.1.patch


 YARN-2411 exposes a very useful feature {{support simple user and group 
 mappings to queues}}, but it's not captured in the documentation. So in this 
 JIRA we plan to document this feature.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3188) yarn application --list should list all the applications ( Not only submitted,accepted and running)

2015-02-12 Thread Anushri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anushri updated YARN-3188:
--
Description: 
By default, yarn application --list should list all the applications, since we 
are not giving the -appstate option.

Currently it gives the following:
{noformat}
[hdfs@host194 bin]$ ./yarn application -list
15/02/12 19:33:02 INFO client.RMProxy: Connecting to ResourceManager at 
/0.0.0.0:8032
15/02/12 19:33:03 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Total number of applications (application-types: [] and states: [SUBMITTED, 
ACCEPTED, RUNNING]):1
Application-Id  Application-NameApplication-Type
  User   Queue   State Final-State  
   ProgressTracking-URL
application_1422888408992_15010  grep-search   MAPREDUCE
  hdfs defaultACCEPTED   UNDEFINED  
 0% N/A
[
{noformat}


*Can somebody please assign this issue to me..?* 

  was:
By default yarn application --list should list all the applications since we 
are not giving -appstate option.

{noformat}
[hdfs@host194 bin]$ ./yarn application -list
15/02/12 19:33:02 INFO client.RMProxy: Connecting to ResourceManager at 
/0.0.0.0:8032
15/02/12 19:33:03 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Total number of applications (application-types: [] and states: [SUBMITTED, 
ACCEPTED, RUNNING]):1
Application-Id  Application-NameApplication-Type
  User   Queue   State Final-State  
   ProgressTracking-URL
application_1422888408992_15010  grep-search   MAPREDUCE
  hdfs defaultACCEPTED   UNDEFINED  
 0% N/A
[
{noformat}


*Can somebody please assign this issue to me..?* 


 yarn application --list should list all the applications ( Not only 
 submitted,accepted and running)
 ---

 Key: YARN-3188
 URL: https://issues.apache.org/jira/browse/YARN-3188
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications, client
Reporter: Anushri
Priority: Minor

 By default yarn application --list should list all the applications since we 
 are not giving -appstate option.
 Currently it is giving like following..
 {noformat}
 [hdfs@host194 bin]$ ./yarn application -list
 15/02/12 19:33:02 INFO client.RMProxy: Connecting to ResourceManager at 
 /0.0.0.0:8032
 15/02/12 19:33:03 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 Total number of applications (application-types: [] and states: [SUBMITTED, 
 ACCEPTED, RUNNING]):1
 Application-Id  Application-NameApplication-Type  
 User   Queue   State Final-State  
ProgressTracking-URL
 application_1422888408992_15010  grep-search   MAPREDUCE  
 hdfs defaultACCEPTED   UNDEFINED  
  0% N/A
 [
 {noformat}
 *Can somebody please assign this issue to me..?* 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-913) Umbrella: Add a way to register long-lived services in a YARN cluster

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318068#comment-14318068
 ] 

Hudson commented on YARN-913:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #102 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/102/])
YARN-2616 [YARN-913] Add CLI client to the registry to list, view and 
manipulate entries. (Akshay Radia via stevel) (stevel: rev 
362565cf5a8cbc1e7e66847649c29666d79f6938)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/cli/RegistryCli.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/cli/TestRegistryCli.java
YARN-2683. [YARN-913] registry config options: document and move to 
core-default. (stevel) (stevel: rev c3da2db48fd18c41096fe5d6d4650978fb31ae24)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/registry/registry-security.md
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/registry/index.md
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/registry/using-the-yarn-service-registry.md
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/registry/registry-configuration.md
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/registry/yarn-registry.md


 Umbrella: Add a way to register long-lived services in a YARN cluster
 -

 Key: YARN-913
 URL: https://issues.apache.org/jira/browse/YARN-913
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: api, resourcemanager
Affects Versions: 2.5.0, 2.4.1
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: 2014-09-03_Proposed_YARN_Service_Registry.pdf, 
 2014-09-08_YARN_Service_Registry.pdf, RegistrationServiceDetails.txt, 
 YARN-913-001.patch, YARN-913-002.patch, YARN-913-003.patch, 
 YARN-913-003.patch, YARN-913-004.patch, YARN-913-006.patch, 
 YARN-913-007.patch, YARN-913-008.patch, YARN-913-009.patch, 
 YARN-913-010.patch, YARN-913-011.patch, YARN-913-012.patch, 
 YARN-913-013.patch, YARN-913-014.patch, YARN-913-015.patch, 
 YARN-913-016.patch, YARN-913-017.patch, YARN-913-018.patch, 
 YARN-913-019.patch, YARN-913-020.patch, YARN-913-021.patch, yarnregistry.pdf, 
 yarnregistry.pdf, yarnregistry.pdf, yarnregistry.tla


 In a YARN cluster you can't predict where services will come up -or on what 
 ports. The services need to work those things out as they come up and then 
 publish them somewhere.
 Applications need to be able to find the service instance they are to bond to 
 -and not any others in the cluster.
 Some kind of service registry -in the RM, in ZK, could do this. If the RM 
 held the write access to the ZK nodes, it would be more secure than having 
 apps register with ZK themselves.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-3189) Yarn application usage command should not give -appstate and -apptype

2015-02-12 Thread Anushri (JIRA)
Anushri created YARN-3189:
-

 Summary: Yarn application usage command should not give -appstate 
and -apptype
 Key: YARN-3189
 URL: https://issues.apache.org/jira/browse/YARN-3189
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Anushri
Priority: Minor


The yarn application usage command should not show -appstate and -apptype, since 
these two are applicable only to the --list command.


 *Can somebody please assign this issue to me* 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3188) yarn application --list should list all the applications ( Not only submitted,accepted and running)

2015-02-12 Thread Rohith (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14317990#comment-14317990
 ] 

Rohith commented on YARN-3188:
--

I believe this behaviour was made intentional in YARN-1074: {{yarn application 
-list}} lists applications in the SUBMITTED, ACCEPTED and RUNNING states. The 
main purpose is that users are interested in seeing all their outstanding 
applications by default.
[~xgong], please give your opinion.
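
For context, both behaviours can be expressed through the public YarnClient 
API; a small sketch using standard Hadoop 2.6 calls (cluster wiring details 
elided):

{code}
import java.util.EnumSet;
import java.util.List;

import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.api.records.YarnApplicationState;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ListAppsSketch {
  public static void main(String[] args) throws Exception {
    YarnClient client = YarnClient.createYarnClient();
    client.init(new YarnConfiguration());
    client.start();

    // What the CLI does today by default: only "outstanding" applications.
    List<ApplicationReport> active = client.getApplications(
        EnumSet.of(YarnApplicationState.SUBMITTED,
            YarnApplicationState.ACCEPTED,
            YarnApplicationState.RUNNING));

    // What the reporter expects from a bare -list: every state.
    List<ApplicationReport> all = client.getApplications(
        EnumSet.allOf(YarnApplicationState.class));

    System.out.println("active=" + active.size() + ", all=" + all.size());
    client.stop();
  }
}
{code}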

 yarn application --list should list all the applications ( Not only 
 submitted,accepted and running)
 ---

 Key: YARN-3188
 URL: https://issues.apache.org/jira/browse/YARN-3188
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications, client
Reporter: Anushri
Assignee: Anushri
Priority: Minor

 By default yarn application --list should list all the applications since we 
 are not giving -appstate option.
 Currently it is giving like following..
 {noformat}
 [hdfs@host194 bin]$ ./yarn application -list
 15/02/12 19:33:02 INFO client.RMProxy: Connecting to ResourceManager at 
 /0.0.0.0:8032
 15/02/12 19:33:03 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 Total number of applications (application-types: [] and states: [SUBMITTED, 
 ACCEPTED, RUNNING]):1
 Application-Id  Application-NameApplication-Type  
 User   Queue   State Final-State  
ProgressTracking-URL
 application_1422888408992_15010  grep-search   MAPREDUCE  
 hdfs defaultACCEPTED   UNDEFINED  
  0% N/A
 [
 {noformat}
 *Can somebody please assign this issue to me..?* 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3157) Refactor the exception handling in ConverterUtils#to*Id

2015-02-12 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-3157:
---
Attachment: YARN-3157.2.patch

Uploading the same patch again. For container, app attempt, and application, the 
same issue happens under conditions similar to those mentioned in the defect. 
Please review.

 Refactor the exception handling in ConverterUtils#to*Id
 ---

 Key: YARN-3157
 URL: https://issues.apache.org/jira/browse/YARN-3157
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.6.0
Reporter: Bibin A Chundatt
Assignee: Bibin A Chundatt
Priority: Minor
 Attachments: YARN-3157.1.patch, YARN-3157.2.patch, YARN-3157.2.patch, 
 YARN-3157.patch, YARN-3157.patch


 yarn.cmd application -kill application_123
 Wrong format given for application id or attempt. The exception will be thrown 
 to the console without any info
 {quote}
 15/02/07 22:18:01 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where
 Exception in thread main java.util.NoSuchElementException
 at 
 com.google.common.base.AbstractIterator.next(AbstractIterator.java:75)
 at 
 org.apache.hadoop.yarn.util.ConverterUtils.toApplicationId(ConverterUtils.java:146)
 at 
 org.apache.hadoop.yarn.util.ConverterUtils.toApplicationId(ConverterUtils.java:205)
 at 
 org.apache.hadoop.yarn.client.cli.ApplicationCLI.killApplication(ApplicationCLI.java:383)
 at 
 org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:219)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
 {quote}
 Need to add catch block for java.util.NoSuchElementException also
 {color:red}./yarn container -status container_e20_1423221031460_0003_01{color}
 Exception in thread main java.util.NoSuchElementException
 at 
 com.google.common.base.AbstractIterator.next(AbstractIterator.java:75)
 at 
 org.apache.hadoop.yarn.api.records.ContainerId.fromString(ContainerId.java:227)
 at 
 org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:178)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3157) Refactor the exception handling in ConverterUtils#to*Id

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318085#comment-14318085
 ] 

Tsuyoshi OZAWA commented on YARN-3157:
--

[~bibinchundatt] 

{quote}
For container, app attempt, and application, the same issue happens under 
conditions similar to those mentioned in the defect.
{quote}

Yes, I've agreed with you in this point.

{code}
   public static ContainerId toContainerId(String containerIdStr) {
-    return ContainerId.fromString(containerIdStr);
+    try {
+      return ContainerId.fromString(containerIdStr);
+    } catch (NoSuchElementException e) {
+      throw new IllegalArgumentException("Invalid ContainerId: "
+          + containerIdStr, e);
+    }
   }
{code}

My point is only about ConverterUtils#toContainerId: I think we should catch 
NoSuchElementException and raise IllegalArgumentException in 
ContainerId.fromString instead of ConverterUtils#toContainerId, for consistency. 
Does this make sense?
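
In sketch form, that suggestion would look like the following abbreviated 
fragments ({{parse}} is a hypothetical stand-in for the real parsing logic in 
ContainerId):

{code}
// In ContainerId: translate the parser's NoSuchElementException once,
// so every caller gets an IllegalArgumentException consistently.
public static ContainerId fromString(String containerIdStr) {
  try {
    return parse(containerIdStr);  // hypothetical stand-in for the parsing
  } catch (NoSuchElementException e) {
    throw new IllegalArgumentException(
        "Invalid ContainerId: " + containerIdStr, e);
  }
}

// ConverterUtils#toContainerId then stays a thin delegate:
public static ContainerId toContainerId(String containerIdStr) {
  return ContainerId.fromString(containerIdStr);
}
{code}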

 Refactor the exception handling in ConverterUtils#to*Id
 ---

 Key: YARN-3157
 URL: https://issues.apache.org/jira/browse/YARN-3157
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.6.0
Reporter: Bibin A Chundatt
Assignee: Bibin A Chundatt
Priority: Minor
 Attachments: YARN-3157.1.patch, YARN-3157.2.patch, YARN-3157.2.patch, 
 YARN-3157.patch, YARN-3157.patch


 yarn.cmd application -kill application_123
 Wrong format given for application id or attempt. The exception will be thrown 
 to the console without any info
 {quote}
 15/02/07 22:18:01 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where
 Exception in thread main java.util.NoSuchElementException
 at 
 com.google.common.base.AbstractIterator.next(AbstractIterator.java:75)
 at 
 org.apache.hadoop.yarn.util.ConverterUtils.toApplicationId(ConverterUtils.java:146)
 at 
 org.apache.hadoop.yarn.util.ConverterUtils.toApplicationId(ConverterUtils.java:205)
 at 
 org.apache.hadoop.yarn.client.cli.ApplicationCLI.killApplication(ApplicationCLI.java:383)
 at 
 org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:219)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
 {quote}
 Need to add catch block for java.util.NoSuchElementException also
 {color:red}./yarn container -status container_e20_1423221031460_0003_01{color}
 Exception in thread main java.util.NoSuchElementException
 at 
 com.google.common.base.AbstractIterator.next(AbstractIterator.java:75)
 at 
 org.apache.hadoop.yarn.api.records.ContainerId.fromString(ContainerId.java:227)
 at 
 org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:178)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3189) Yarn application usage command should not give -appstate and -apptype

2015-02-12 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-3189:
---
Assignee: Anushri

 Yarn application usage command should not give -appstate and -apptype
 -

 Key: YARN-3189
 URL: https://issues.apache.org/jira/browse/YARN-3189
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Anushri
Assignee: Anushri
Priority: Minor

 Yarn application usage command should not give -appstate and -apptype since 
 these two are applicable to --list command..
  *Can somebody please assign this issue to me* 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3164) rmadmin command usage prints incorrect command name

2015-02-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318031#comment-14318031
 ] 

Hadoop QA commented on YARN-3164:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12698376/YARN-3164.2.patch
  against trunk revision 4cbaa74.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/6610//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6610//console

This message is automatically generated.

 rmadmin command usage prints incorrect command name
 ---

 Key: YARN-3164
 URL: https://issues.apache.org/jira/browse/YARN-3164
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.6.0
Reporter: Bibin A Chundatt
Assignee: Bibin A Chundatt
Priority: Minor
 Attachments: YARN-3164.1.patch, YARN-3164.2.patch


 /hadoop/bin{color:red} ./yarn rmadmin -transitionToActive {color}
 transitionToActive: incorrect number of arguments
 Usage:{color:red}  HAAdmin  {color} [-transitionToActive <serviceId> 
 [--forceactive]]
 {color:red} ./yarn HAAdmin {color} 
 Error: Could not find or load main class HAAdmin
 Expected: it should be rmadmin



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3187) Documentation of Capacity Scheduler Queue mapping based on user or group

2015-02-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318055#comment-14318055
 ] 

Hadoop QA commented on YARN-3187:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12698394/YARN-3187.1.patch
  against trunk revision 46c7577.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/6611//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6611//console

This message is automatically generated.

 Documentation of Capacity Scheduler Queue mapping based on user or group
 

 Key: YARN-3187
 URL: https://issues.apache.org/jira/browse/YARN-3187
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: capacityscheduler, documentation
Affects Versions: 2.6.0
Reporter: Naganarasimha G R
Assignee: Gururaj Shetty
  Labels: documentation
 Fix For: 2.6.0

 Attachments: YARN-3187.1.patch


 YARN-2411 exposes a very useful feature {{support simple user and group 
 mappings to queues}}, but it's not captured in the documentation. So in this 
 JIRA we plan to document this feature.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-3188) yarn application --list should list all the applications ( Not only submitted,accepted and running)

2015-02-12 Thread Anushri (JIRA)
Anushri created YARN-3188:
-

 Summary: yarn application --list should list all the applications 
( Not only submitted,accepted and running)
 Key: YARN-3188
 URL: https://issues.apache.org/jira/browse/YARN-3188
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications, client
Reporter: Anushri
Priority: Minor


By default yarn application --list should list all the applications since we 
are not giving -appstate option.

{noformat}
[hdfs@host194 bin]$ ./yarn application -list
15/02/12 19:33:02 INFO client.RMProxy: Connecting to ResourceManager at 
/0.0.0.0:8032
15/02/12 19:33:03 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Total number of applications (application-types: [] and states: [SUBMITTED, 
ACCEPTED, RUNNING]):1
Application-Id  Application-NameApplication-Type
  User   Queue   State Final-State  
   ProgressTracking-URL
application_1422888408992_15010  grep-search   MAPREDUCE
  hdfs defaultACCEPTED   UNDEFINED  
 0% N/A
[
{noformat}


*Can somebody please assign this issue to me..?* 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (YARN-3188) yarn application --list should list all the applications ( Not only submitted,accepted and running)

2015-02-12 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K reassigned YARN-3188:
---

Assignee: Anushri

Thanks for your interest in contributing. 

I have added you as a contributor and assigned the JIRA to you.

 yarn application --list should list all the applications ( Not only 
 submitted,accepted and running)
 ---

 Key: YARN-3188
 URL: https://issues.apache.org/jira/browse/YARN-3188
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications, client
Reporter: Anushri
Assignee: Anushri
Priority: Minor

 By default yarn application --list should list all the applications since we 
 are not giving -appstate option.
 Currently it is giving like following..
 {noformat}
 [hdfs@host194 bin]$ ./yarn application -list
 15/02/12 19:33:02 INFO client.RMProxy: Connecting to ResourceManager at 
 /0.0.0.0:8032
 15/02/12 19:33:03 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 Total number of applications (application-types: [] and states: [SUBMITTED, 
 ACCEPTED, RUNNING]):1
 Application-Id  Application-NameApplication-Type  
 User   Queue   State Final-State  
ProgressTracking-URL
 application_1422888408992_15010  grep-search   MAPREDUCE  
 hdfs defaultACCEPTED   UNDEFINED  
  0% N/A
 [
 {noformat}
 *Can somebody please assign this issue to me..?* 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1237) Description for yarn.nodemanager.aux-services in yarn-default.xml is misleading

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318069#comment-14318069
 ] 

Hudson commented on YARN-1237:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #102 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/102/])
YARN-1237. Description for yarn.nodemanager.aux-services in yarn-default.xml is 
misleading. Contributed by Brahma Reddy Battula. (ozawa: rev 
b3bcbaf277ec389ec048a4b5cd59b2e90781a30b)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml


 Description for yarn.nodemanager.aux-services in yarn-default.xml is 
 misleading
 ---

 Key: YARN-1237
 URL: https://issues.apache.org/jira/browse/YARN-1237
 Project: Hadoop YARN
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.6.0
Reporter: Hitesh Shah
Assignee: Brahma Reddy Battula
Priority: Minor
 Fix For: 2.7.0

 Attachments: YARN-1237.patch


 Description states:
 "the valid service name should only contain a-zA-Z0-9_ and can not start with 
 numbers"
 It seems to indicate only one service is supported. If multiple services are 
 allowed, it does not indicate how they should be specified i.e. 
 comma-separated or space-separated? If the service name cannot contain 
 spaces, does this imply that space-separated lists are also permitted?
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3151) On Failover tracking url wrong in application cli for KILLED application

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318066#comment-14318066
 ] 

Hudson commented on YARN-3151:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #102 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/102/])
YARN-3151. On Failover tracking url wrong in application cli for KILLED (xgong: 
rev 65c69e296edad48e50ef36e47803625ea46b51e1)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/TestRMAppAttemptTransitions.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java


 On Failover tracking url wrong in application cli for KILLED application
 

 Key: YARN-3151
 URL: https://issues.apache.org/jira/browse/YARN-3151
 Project: Hadoop YARN
  Issue Type: Bug
  Components: client, resourcemanager
Affects Versions: 2.6.0
 Environment: 2 RM HA 
Reporter: Bibin A Chundatt
Assignee: Rohith
Priority: Minor
 Fix For: 2.7.0

 Attachments: 0001-YARN-3151.patch, 0002-YARN-3151.patch, 
 0002-YARN-3151.patch


 Run an application and kill the same after starting
 Check {color:red} ./yarn application -list -appStates KILLED {color}
 (empty line)
 {quote}
 Application-Id Tracking-URL
 application_1423219262738_0001  
 http://IP:PORT/cluster/app/application_1423219262738_0001
 {quote}
 Shutdown the active RM1
 Check the same command {color:red} ./yarn application -list -appStates KILLED 
 {color} after RM2 is active
 {quote}
 Application-Id Tracking-URL
 application_1423219262738_0001  null
 {quote}
 Tracking url for the application is shown as null.
 Expected: the same url as before failover should be shown.
 ApplicationReport.getOriginalTrackingUrl() is null after failover.
 org.apache.hadoop.yarn.client.cli.ApplicationCLI
 listApplications(Set<String> appTypes,
   EnumSet<YarnApplicationState> appStates)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2683) registry config options: document and move to core-default

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318065#comment-14318065
 ] 

Hudson commented on YARN-2683:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #102 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/102/])
YARN-2683. [YARN-913] registry config options: document and move to 
core-default. (stevel) (stevel: rev c3da2db48fd18c41096fe5d6d4650978fb31ae24)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/registry/using-the-yarn-service-registry.md
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/registry/registry-configuration.md
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/registry/registry-security.md
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/registry/yarn-registry.md
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/registry/index.md
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* hadoop-yarn-project/CHANGES.txt


 registry config options: document and move to core-default
 --

 Key: YARN-2683
 URL: https://issues.apache.org/jira/browse/YARN-2683
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, resourcemanager
Affects Versions: 2.6.0
Reporter: Steve Loughran
Assignee: Steve Loughran
 Fix For: 2.7.0

 Attachments: HADOOP-10530-005.patch, YARN-2683-001.patch, 
 YARN-2683-002.patch, YARN-2683-003.patch, YARN-2683-006.patch

   Original Estimate: 1h
  Time Spent: 1h
  Remaining Estimate: 0.5h

 Add to {{yarn-site}} a page on registry configuration parameters



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2616) Add CLI client to the registry to list, view and manipulate entries

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318079#comment-14318079
 ] 

Hudson commented on YARN-2616:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #102 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/102/])
YARN-2616 [YARN-913] Add CLI client to the registry to list, view and 
manipulate entries. (Akshay Radia via stevel) (stevel: rev 
362565cf5a8cbc1e7e66847649c29666d79f6938)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/cli/RegistryCli.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/cli/TestRegistryCli.java


 Add CLI client to the registry to list, view and manipulate entries
 ---

 Key: YARN-2616
 URL: https://issues.apache.org/jira/browse/YARN-2616
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Affects Versions: 2.6.0
Reporter: Steve Loughran
Assignee: Akshay Radia
 Fix For: 2.7.0

 Attachments: YARN-2616-003.patch, YARN-2616-008.patch, 
 YARN-2616-008.patch, yarn-2616-v1.patch, yarn-2616-v2.patch, 
 yarn-2616-v4.patch, yarn-2616-v5.patch, yarn-2616-v6.patch, yarn-2616-v7.patch


 registry needs a CLI interface



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3160) Non-atomic operation on nodeUpdateQueue in RMNodeImpl

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318076#comment-14318076
 ] 

Hudson commented on YARN-3160:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #102 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/102/])
YARN-3160. Fix non-atomic operation on nodeUpdateQueue in RMNodeImpl. 
(Contributed by Chengbing Liu) (junping_du: rev 
c541a374d88ffed6ee71b0e5d556939ccd2c5159)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
* hadoop-yarn-project/CHANGES.txt


 Non-atomic operation on nodeUpdateQueue in RMNodeImpl
 -

 Key: YARN-3160
 URL: https://issues.apache.org/jira/browse/YARN-3160
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.7.0
Reporter: Chengbing Liu
Assignee: Chengbing Liu
 Fix For: 2.7.0

 Attachments: YARN-3160.2.patch, YARN-3160.patch


 {code:title=RMNodeImpl.java|borderStyle=solid}
 while(nodeUpdateQueue.peek() != null){
   latestContainerInfoList.add(nodeUpdateQueue.poll());
 }
 {code}
 The above code carries a potential risk of adding a null value to 
 {{latestContainerInfoList}}: another thread may drain the queue between the 
 {{peek()}} and the {{poll()}}. Since {{ConcurrentLinkedQueue}} implements a 
 wait-free algorithm, we can directly poll the queue first and then check 
 whether the polled value is null.
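
In sketch form, the poll-first pattern described above (a fragment mirroring 
the snippet, with the element type as declared in RMNodeImpl):

{code}
UpdatedContainerInfo containerInfo;
while ((containerInfo = nodeUpdateQueue.poll()) != null) {
  // poll() already removed the element; the null check guarantees nothing
  // null is added even if another thread drained the queue concurrently.
  latestContainerInfoList.add(containerInfo);
}
{code}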



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3157) Refactor the exception handling in ConverterUtils#to*Id

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318062#comment-14318062
 ] 

Hudson commented on YARN-3157:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #102 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/102/])
YARN-3157. Refactor the exception handling in ConverterUtils#to*Id. Contributed 
by Bibin A Chundatt. (ozawa: rev 95a41bf35d8ba0a1ec087f456914231103d98fb9)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ConverterUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestConverterUtils.java
Revert YARN-3157. Refactor the exception handling in ConverterUtils#to*Id. 
Contributed by Bibin A Chundatt. (ozawa: rev 
4cbaa74f623ac8ee2c5b7308ac33a807a33e17f7)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ConverterUtils.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestConverterUtils.java


 Refactor the exception handling in ConverterUtils#to*Id
 ---

 Key: YARN-3157
 URL: https://issues.apache.org/jira/browse/YARN-3157
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.6.0
Reporter: Bibin A Chundatt
Assignee: Bibin A Chundatt
Priority: Minor
 Attachments: YARN-3157.1.patch, YARN-3157.2.patch, YARN-3157.2.patch, 
 YARN-3157.patch, YARN-3157.patch


 yarn.cmd application -kill application_123
 Wrong format given for application id or attempt. The exception will be thrown 
 to the console without any info
 {quote}
 15/02/07 22:18:01 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where
 Exception in thread main java.util.NoSuchElementException
 at 
 com.google.common.base.AbstractIterator.next(AbstractIterator.java:75)
 at 
 org.apache.hadoop.yarn.util.ConverterUtils.toApplicationId(ConverterUtils.java:146)
 at 
 org.apache.hadoop.yarn.util.ConverterUtils.toApplicationId(ConverterUtils.java:205)
 at 
 org.apache.hadoop.yarn.client.cli.ApplicationCLI.killApplication(ApplicationCLI.java:383)
 at 
 org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:219)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
 {quote}
 Need to add catch block for java.util.NoSuchElementException also
 {color:red}./yarn container -status container_e20_1423221031460_0003_01{color}
 Exception in thread main java.util.NoSuchElementException
 at 
 com.google.common.base.AbstractIterator.next(AbstractIterator.java:75)
 at 
 org.apache.hadoop.yarn.api.records.ContainerId.fromString(ContainerId.java:227)
 at 
 org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:178)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3074) Nodemanager dies when localizer runner tries to write to a full disk

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318067#comment-14318067
 ] 

Hudson commented on YARN-3074:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #102 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/102/])
YARN-3074. Nodemanager dies when localizer runner tries to write to a full 
disk. Contributed by Varun Saxena (jlowe: rev 
b379972ab39551d4b57436a54c0098a63742c7e1)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
* hadoop-yarn-project/CHANGES.txt


 Nodemanager dies when localizer runner tries to write to a full disk
 

 Key: YARN-3074
 URL: https://issues.apache.org/jira/browse/YARN-3074
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.5.0
Reporter: Jason Lowe
Assignee: Varun Saxena
 Fix For: 2.7.0

 Attachments: YARN-3074.001.patch, YARN-3074.002.patch, 
 YARN-3074.03.patch


 When a LocalizerRunner tries to write to a full disk it can bring down the 
 nodemanager process.  Instead of failing the whole process we should fail 
 only the container and make a best attempt to keep going.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3147) Clean up RM web proxy code

2015-02-12 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14317968#comment-14317968
 ] 

Steve Loughran commented on YARN-3147:
--

{{serviceStart()}} throws Exception, which gets caught and wrapped in the outer 
{{Service.start()}} operation; this catch/wrap/rethrow is entirely needless. In 
fact, since it doesn't propagate the nested exception's {{toString()}} value into 
the new string, it actually makes things worse.

It's there because when I went through all those start() operations in 
YARN-117, I didn't try to remove all such wrap operations; too much to change, 
and it would have made the patch bigger and more complex.

Now that we're cleaning up one little module, we can cull it.

Now, if we were to add more diagnostics to the exception, that would be 
different. At the very least, though, it should read 
{{"Proxy Server Failed to login " + ie}}
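
A hedged illustration of the three variants discussed, as fragments 
({{doLogin}} is a hypothetical stand-in for the login call):

{code}
// 1. The needless wrap, which even drops the cause's toString():
try {
  doLogin();
} catch (IOException ie) {
  throw new YarnRuntimeException("Proxy Server Failed to login");
}

// 2. Simpler: serviceStart() already throws Exception, so just let the
//    IOException propagate and let Service.start() do the single wrap:
doLogin();

// 3. If extra diagnostics are wanted, at least carry the cause along:
try {
  doLogin();
} catch (IOException ie) {
  throw new YarnRuntimeException("Proxy Server Failed to login " + ie, ie);
}
{code}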

 Clean up RM web proxy code 
 ---

 Key: YARN-3147
 URL: https://issues.apache.org/jira/browse/YARN-3147
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: webapp
Affects Versions: 2.6.0
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: YARN-3147-001.patch, YARN-3147-002.patch


 YARN-2084 covers fixing up the RM proxy  filter for REST support.
 Before doing that, prepare for it by cleaning up the codebase: factoring out 
 the redirect logic into a single method, some minor reformatting, move to 
 SLF4J and Java7 code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3076) YarnClient implementation to retrieve label to node mapping

2015-02-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14317992#comment-14317992
 ] 

Hadoop QA commented on YARN-3076:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12698321/YARN-3076.002.patch
  against trunk revision 89a5449.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  org.apache.hadoop.conf.TestJobConf
  
org.apache.hadoop.yarn.server.resourcemanager.TestClientRMService

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/6609//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6609//console

This message is automatically generated.

 YarnClient implementation to retrieve label to node mapping
 ---

 Key: YARN-3076
 URL: https://issues.apache.org/jira/browse/YARN-3076
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Affects Versions: 2.7.0
Reporter: Varun Saxena
Assignee: Varun Saxena
 Attachments: YARN-3076.001.patch, YARN-3076.002.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3157) Refactor the exception handling in ConverterUtils#to*Id

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14317995#comment-14317995
 ] 

Hudson commented on YARN-3157:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7085 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7085/])
YARN-3157. Refactor the exception handling in ConverterUtils#to*Id. Contributed 
by Bibin A Chundatt. (ozawa: rev 95a41bf35d8ba0a1ec087f456914231103d98fb9)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ConverterUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestConverterUtils.java
* hadoop-yarn-project/CHANGES.txt
Revert YARN-3157. Refactor the exception handling in ConverterUtils#to*Id. 
Contributed by Bibin A Chundatt. (ozawa: rev 
4cbaa74f623ac8ee2c5b7308ac33a807a33e17f7)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestConverterUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ConverterUtils.java
* hadoop-yarn-project/CHANGES.txt


 Refactor the exception handling in ConverterUtils#to*Id
 ---

 Key: YARN-3157
 URL: https://issues.apache.org/jira/browse/YARN-3157
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.6.0
Reporter: Bibin A Chundatt
Assignee: Bibin A Chundatt
Priority: Minor
 Attachments: YARN-3157.1.patch, YARN-3157.2.patch, YARN-3157.patch, 
 YARN-3157.patch


 yarn.cmd application -kill application_123
 Wrong format given for application id or attempt. The exception will be thrown 
 to the console without any info
 {quote}
 15/02/07 22:18:01 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where
 Exception in thread main java.util.NoSuchElementException
 at 
 com.google.common.base.AbstractIterator.next(AbstractIterator.java:75)
 at 
 org.apache.hadoop.yarn.util.ConverterUtils.toApplicationId(ConverterUtils.java:146)
 at 
 org.apache.hadoop.yarn.util.ConverterUtils.toApplicationId(ConverterUtils.java:205)
 at 
 org.apache.hadoop.yarn.client.cli.ApplicationCLI.killApplication(ApplicationCLI.java:383)
 at 
 org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:219)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
 {quote}
 Need to add catch block for java.util.NoSuchElementException also
 {color:red}./yarn container -status container_e20_1423221031460_0003_01{color}
 Exception in thread main java.util.NoSuchElementException
 at 
 com.google.common.base.AbstractIterator.next(AbstractIterator.java:75)
 at 
 org.apache.hadoop.yarn.api.records.ContainerId.fromString(ContainerId.java:227)
 at 
 org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:178)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3188) yarn application --list should list all the applications ( Not only submitted,accepted and running)

2015-02-12 Thread Anushri (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318028#comment-14318028
 ] 

Anushri commented on YARN-3188:
---

In my opinion, at first view, when a user runs the yarn application -list 
command, they will expect all the applications to be listed. Moreover, the 
description of the list command doesn't specify the default filtering.

Also, we already have options to filter the list, so with the current default, 
at the view level we end up filtering for objects that are not present in the 
list at all.

 yarn application --list should list all the applications ( Not only 
 submitted,accepted and running)
 ---

 Key: YARN-3188
 URL: https://issues.apache.org/jira/browse/YARN-3188
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications, client
Reporter: Anushri
Assignee: Anushri
Priority: Minor

 By default yarn application --list should list all the applications since we 
 are not giving -appstate option.
 Currently it is giving like following..
 {noformat}
 [hdfs@host194 bin]$ ./yarn application -list
 15/02/12 19:33:02 INFO client.RMProxy: Connecting to ResourceManager at 
 /0.0.0.0:8032
 15/02/12 19:33:03 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 Total number of applications (application-types: [] and states: [SUBMITTED, 
 ACCEPTED, RUNNING]):1
 Application-Id  Application-NameApplication-Type  
 User   Queue   State Final-State  
ProgressTracking-URL
 application_1422888408992_15010  grep-search   MAPREDUCE  
 hdfs defaultACCEPTED   UNDEFINED  
  0% N/A
 [
 {noformat}
 *Can somebody please assign this issue to me..?* 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1580) Documentation error regarding container-allocation.expiry-interval-ms

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318038#comment-14318038
 ] 

Hudson commented on YARN-1580:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7086 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7086/])
YARN-1580. Documentation error regarding 
container-allocation.expiry-interval-ms (Brahma Reddy Battula via junping_du) 
(junping_du: rev 46c7577b9843766b8cc3e81eae1100d4c194286a)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* hadoop-yarn-project/CHANGES.txt


 Documentation error regarding container-allocation.expiry-interval-ms
 ---

 Key: YARN-1580
 URL: https://issues.apache.org/jira/browse/YARN-1580
 Project: Hadoop YARN
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0
 Environment: CentOS 6.4
Reporter: German Florez-Larrahondo
Assignee: Brahma Reddy Battula
Priority: Trivial
 Fix For: 2.7.0

 Attachments: YARN-1580.patch


 While trying to control settings related to expiration of tokens for long 
 running jobs, based on the documentation ( 
 http://hadoop.apache.org/docs/r2.2.0/hadoop-yarn/hadoop-yarn-common/yarn-default.xml)
  I attempted to increase values for 
 yarn.rm.container-allocation.expiry-interval-ms without luck. Looking at code 
 like YarnConfiguration.java, I noticed that in recent versions all these kinds 
 of settings now have the prefix yarn.resourcemanager.rm as opposed to 
 yarn.rm. So for this specific case the setting of interest is 
 yarn.resourcemanager.rm.container-allocation.expiry-interval-ms.
 I suppose there are other documentation errors similar to this.
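 For reference, a corrected snippet for yarn-site.xml based on the key named 
 above (the value shown is only an example):
 {code:xml}
 <property>
   <name>yarn.resourcemanager.rm.container-allocation.expiry-interval-ms</name>
   <value>600000</value>
 </property>
 {code}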



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3188) yarn application --list should list all the applications ( Not only submitted,accepted and running)

2015-02-12 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318033#comment-14318033
 ] 

Junping Du commented on YARN-3188:
--

I agree with Rohith. I think we do this intentionally because hundreds or 
thousands of applications could be listed if we included completed or killed 
ones, but users are typically more interested in the active ones. Users can use 
--appStates if they want to list finished applications. 
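For illustration, the CLI behaviour being discussed (assuming the -appStates 
filter of the current client):
{noformat}
# default: only SUBMITTED, ACCEPTED and RUNNING applications are listed
yarn application -list

# include every state, finished and killed applications too
yarn application -list -appStates ALL
{noformat}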

 yarn application --list should list all the applications ( Not only 
 submitted,accepted and running)
 ---

 Key: YARN-3188
 URL: https://issues.apache.org/jira/browse/YARN-3188
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications, client
Reporter: Anushri
Assignee: Anushri
Priority: Minor

 By default yarn application --list should list all the applications since we 
 are not giving -appstate option.
 Currently it gives the following:
 {noformat}
 [hdfs@host194 bin]$ ./yarn application -list
 15/02/12 19:33:02 INFO client.RMProxy: Connecting to ResourceManager at 
 /0.0.0.0:8032
 15/02/12 19:33:03 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 Total number of applications (application-types: [] and states: [SUBMITTED, 
 ACCEPTED, RUNNING]):1
 Application-Id  Application-NameApplication-Type  
 User   Queue   State Final-State  
ProgressTracking-URL
 application_1422888408992_15010  grep-search   MAPREDUCE  
 hdfs defaultACCEPTED   UNDEFINED  
  0% N/A
 [
 {noformat}
 *Can somebody please assign this issue to me..?* 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2994) Document work-preserving RM restart

2015-02-12 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-2994:
--
Attachment: YARN-2994.4.patch

fixed some typos

 Document work-preserving RM restart
 ---

 Key: YARN-2994
 URL: https://issues.apache.org/jira/browse/YARN-2994
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-2994.1.patch, YARN-2994.2.patch, YARN-2994.3.patch, 
 YARN-2994.4.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3171) Sort by application id doesn't work in ATS web ui

2015-02-12 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14319117#comment-14319117
 ] 

Naganarasimha G R commented on YARN-3171:
-

Hi [~zjshen],
YARN-2766 is about fixing the sorting in test cases, and with the latest trunk 
code I can still see the issue. The code in the AHS web page seems to be the 
same as in the RM apps page, yet sorting fails only for this column ... will 
analyze more and report back on the fix for the issue.

 Sort by application id doesn't work in ATS web ui
 -

 Key: YARN-3171
 URL: https://issues.apache.org/jira/browse/YARN-3171
 Project: Hadoop YARN
  Issue Type: Bug
  Components: timelineserver
Affects Versions: 2.6.0
Reporter: Jeff Zhang
Assignee: Naganarasimha G R
Priority: Minor
 Attachments: ats_webui.png


 The order doesn't change when I click the column header



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2986) Support hierarchical and unified scheduler configuration

2015-02-12 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14319147#comment-14319147
 ] 

Wangda Tan commented on YARN-2986:
--

Thanks [~vinodkv] for reporting this; a huge +1 for the proposal.

*In addition to Vinod's suggestion, I think we should define the scheduler conf 
standard first. My proposal is:*

{code}
<scheduler>
   <global-option1>value</global-option1>
   <global-option2>value</global-option2>
   <global-option3>value</global-option3>

   <queue name="root">
      <state>running</state>

      <queue-option1>value</queue-option1>
      <queue-option2>value</queue-option2>
      <queue-option3>value</queue-option3>

      <queue name="a">
         <state>running</state>

         <queue name="a1">
            ...
         </queue>
      </queue>

      <queue name="b">
         ...
      </queue>
   </queue>
</scheduler>
{code}

There are several things to highlight in the above example:
1) There's no grouping xml node; instead of
{code}
<children>
    <queue name="1"/>
    <queue name="2"/>
</children>
{code}

or 
{code}
<scheduler-custom-configuration>
  <!-- Scheduler specific global configuration -->
  <!-- For e.g.
    <maximum-applications>1</maximum-applications>
    <resource-calculator>DominantResourceCalculator</resource-calculator>
  -->
</scheduler-custom-configuration>
{code}

I would suggest
{code}
<queue name="1"/>
<queue name="2"/>
{code}

2) Element vs. Attribute
(For definitions of element/attribute, see 
http://www.w3schools.com/xml/xml_attributes.asp)
I think we shouldn't mix elements and attributes in the config file; attributes 
should be reserved for a limited set of properties. IMO, name/type should be 
the only properties that need to be put as attributes.
With this, admins will not hesitate about which property goes into an attribute 
and which goes into an element.

*To implement this, I think there are two steps:*

1) A parser that makes it easy to get/set values in the scheduler conf.
- Hide the complexities of a generic XML parser
- Have basic functionality to handle value inheritance, etc.
For each level ({{<key>...</key>}}), the data structure could be as simple as:
{code}
SchedulerConfNode {
    String getName();
    String getType();
    String get(String key, bool inherit=false, default=null);
    List<Node> getChildren(String key);
}
{code}

2) A common implementation for the existing Hadoop schedulers
Since some fields are common to the fair/capacity/fifo schedulers, we can 
extend {{SchedulerConfNode}} to {{BaseSchedulerConfNode}}, which has more 
methods like:
{code}
BaseSchedulerConfNode : SchedulerConfNode {
    Resource getMinimumAllocation();
    Resource getMaximumAllocation();
    List<String> getSubmitAcls();
    // ...
}
{code}
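To make the intended usage concrete, a hypothetical sketch against the proposed 
API (the SchedulerConfParser entry point is an assumption, not part of the 
proposal):
{code}
// Load the unified scheduler conf and read queue options
SchedulerConfNode root = SchedulerConfParser.load("scheduler-conf.xml");

// get(key, inherit, default): with inherit=true a queue falls back to its
// parent's value when it doesn't define the key itself
String state   = root.get("state", false, "running");
String maxApps = root.get("maximum-applications", true, "10000");
{code}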

Thoughts?

 Support hierarchical and unified scheduler configuration
 

 Key: YARN-2986
 URL: https://issues.apache.org/jira/browse/YARN-2986
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli

 Today's scheduler configuration is fragmented and non-intuitive, and needs to 
 be improved. Details in comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2847) Linux native container executor segfaults if default banned user detected

2015-02-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14319027#comment-14319027
 ] 

Hadoop QA commented on YARN-2847:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12698521/YARN-2487.05.trunk.patch
  against trunk revision 58cb9f5.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:red}-1 eclipse:eclipse{color}.  The patch failed to build with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/6623//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6623//console

This message is automatically generated.

 Linux native container executor segfaults if default banned user detected
 -

 Key: YARN-2847
 URL: https://issues.apache.org/jira/browse/YARN-2847
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.5.0, 2.4.1, 2.6.0
Reporter: Jason Lowe
Assignee: Chang Li
 Attachments: YARN-2487.04.trunk.patch, YARN-2487.05.trunk.patch, 
 yarn2847.patch, yarn2847.patch, yarn2847notest.patch


 The check_user function in container-executor.c can cause a segmentation 
 fault if banned.users is not provided but the user is detected as one of the 
 default users.  In that scenario it will call free_values on a NULL pointer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3127) Apphistory url crashes when RM switches with ATS enabled

2015-02-12 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-3127:

Attachment: YARN-3127.20150213-1.patch

Attaching an initial patch that avoids sending events to the System Metrics 
Publisher during RM application recovery from the state store.
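A minimal sketch of the idea (method names are hypothetical; the attached patch 
is the real change):
{code}
// Replayed events should not reach the timeline store during recovery,
// otherwise the AHS sees applications whose attempts were never published
if (!isAppRecovering) {
  rmContext.getSystemMetricsPublisher().appCreated(app, startTime);
}
{code}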

 Apphistory url crashes when RM switches with ATS enabled
 

 Key: YARN-3127
 URL: https://issues.apache.org/jira/browse/YARN-3127
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, timelineserver
Affects Versions: 2.6.0
 Environment: RM HA with ATS
Reporter: Bibin A Chundatt
Assignee: Naganarasimha G R
 Attachments: YARN-3127.20150213-1.patch


 1.Start RM with HA and ATS configured and run some yarn applications
 2.Once applications have finished successfully, start the timeline server
 3.Now failover HA from active to standby
 4.Access timeline server URL IP:PORT/applicationhistory
 Result: Application history URL fails with below info
 {quote}
 2015-02-03 20:28:09,511 ERROR org.apache.hadoop.yarn.webapp.View: Failed to 
 read the applications.
 java.lang.reflect.UndeclaredThrowableException
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1643)
   at 
 org.apache.hadoop.yarn.server.webapp.AppsBlock.render(AppsBlock.java:80)
   at 
 org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:67)
   at 
 org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:77)
   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
   at 
 org.apache.hadoop.yarn.webapp.view.HtmlPage$Page.subView(HtmlPage.java:49)
   ...
 Caused by: 
 org.apache.hadoop.yarn.exceptions.ApplicationAttemptNotFoundException: The 
 entity for application attempt appattempt_1422972608379_0001_01 doesn't 
 exist in the timeline store
   at 
 org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryManagerOnTimelineStore.getApplicationAttempt(ApplicationHistoryManagerOnTimelineStore.java:151)
   at 
 org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryManagerOnTimelineStore.generateApplicationReport(ApplicationHistoryManagerOnTimelineStore.java:499)
   at 
 org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryManagerOnTimelineStore.getAllApplications(ApplicationHistoryManagerOnTimelineStore.java:108)
   at 
 org.apache.hadoop.yarn.server.webapp.AppsBlock$1.run(AppsBlock.java:84)
   at 
 org.apache.hadoop.yarn.server.webapp.AppsBlock$1.run(AppsBlock.java:81)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
   ... 51 more
 2015-02-03 20:28:09,512 ERROR org.apache.hadoop.yarn.webapp.Dispatcher: error 
 handling URI: /applicationhistory
 org.apache.hadoop.yarn.webapp.WebAppException: Error rendering block: 
 nestLevel=6 expected 5
   at 
 org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
   at 
 org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:77)
 {quote}
 Behaviour of AHS with a file based history store:
   -Apphistory url is working 
   -No attempt entries are shown for each application.
   
 Based on initial analysis, when the RM switches, application attempts from the 
 state store are not replayed but only applications are.
 So when the /applicationhistory url is accessed it tries all the attempt ids 
 and fails



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2994) Document work-preserving RM restart

2015-02-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14318946#comment-14318946
 ] 

Hadoop QA commented on YARN-2994:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12698513/YARN-2994.3.patch
  against trunk revision 58cb9f5.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/6621//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6621//console

This message is automatically generated.

 Document work-preserving RM restart
 ---

 Key: YARN-2994
 URL: https://issues.apache.org/jira/browse/YARN-2994
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-2994.1.patch, YARN-2994.2.patch, YARN-2994.3.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3021) YARN's delegation-token handling disallows certain trust setups to operate properly over DistCp

2015-02-12 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14319024#comment-14319024
 ] 

Yongjun Zhang commented on YARN-3021:
-

Hi [~vinodkv],

Thanks a lot for your comment!

{quote}
The question is whether we continue supporting this implicit aux feature or 
drop it. And given my earlier point that RM cannot know either ways, this 
implicit feature was always broken. 
{quote}
Agree. What about using the patch of this jira to disable/enable this implicit 
feature (as it currently does), and creating a new jira to address the broken 
implicit feature when it is enabled?

Thanks.



 YARN's delegation-token handling disallows certain trust setups to operate 
 properly over DistCp
 ---

 Key: YARN-3021
 URL: https://issues.apache.org/jira/browse/YARN-3021
 Project: Hadoop YARN
  Issue Type: Bug
  Components: security
Affects Versions: 2.3.0
Reporter: Harsh J
 Attachments: YARN-3021.001.patch, YARN-3021.002.patch, 
 YARN-3021.003.patch, YARN-3021.patch


 Consider this scenario of 3 realms: A, B and COMMON, where A trusts COMMON, 
 and B trusts COMMON (one way trusts both), and both A and B run HDFS + YARN 
 clusters.
 Now if one logs in with a COMMON credential, and runs a job on A's YARN that 
 needs to access B's HDFS (such as a DistCp), the operation fails in the RM, 
 as it attempts a renewDelegationToken(…) synchronously during application 
 submission (to validate the managed token before it adds it to a scheduler 
 for automatic renewal). The call obviously fails cause B realm will not trust 
 A's credentials (here, the RM's principal is the renewer).
 In the 1.x JobTracker the same call is present, but it is done asynchronously 
 and once the renewal attempt failed we simply ceased to schedule any further 
 attempts of renewals, rather than fail the job immediately.
 We should change the logic such that we attempt the renewal but go easy on 
 the failure and skip the scheduling alone, rather than bubble back an error 
 to the client, failing the app submission. This way the old behaviour is 
 retained.
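 A rough sketch of the proposed tolerance (the surrounding names are assumed; 
 the real change would live in the RM's DelegationTokenRenewer):
 {code}
 try {
   // Validate/renew the token as today...
   long expiryTime = token.renew(conf);
   // ...and keep it scheduled for automatic renewal
   scheduleRenewal(token, expiryTime);
 } catch (IOException | InterruptedException e) {
   // ...but on failure only skip the automatic renewal instead of failing
   // the app submission, restoring the old JobTracker behaviour
   LOG.warn("Cannot renew token " + token + ", skipping renewal", e);
 }
 {code}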



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3181) FairScheduler: Fix up outdated findbugs issues

2015-02-12 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14319038#comment-14319038
 ] 

Karthik Kambatla commented on YARN-3181:


The Jenkins job looks busted - this patch has no tests and no NM changes. I 
ran all the fair-scheduler tests locally with no new failures. Checking this in. 

 FairScheduler: Fix up outdated findbugs issues
 --

 Key: YARN-3181
 URL: https://issues.apache.org/jira/browse/YARN-3181
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: yarn-3181-1.patch


 In FairScheduler, we have excluded some findbugs-reported errors. Some of 
 them aren't applicable anymore, and there are a few that can be easily fixed 
 without needing an exclusion. It would be nice to fix them. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3181) FairScheduler: Fix up outdated findbugs issues

2015-02-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14319013#comment-14319013
 ] 

Hadoop QA commented on YARN-3181:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12698227/yarn-3181-1.patch
  against trunk revision 58cb9f5.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/6622//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6622//console

This message is automatically generated.

 FairScheduler: Fix up outdated findbugs issues
 --

 Key: YARN-3181
 URL: https://issues.apache.org/jira/browse/YARN-3181
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: yarn-3181-1.patch


 In FairScheduler, we have excluded some findbugs-reported errors. Some of 
 them aren't applicable anymore, and there are a few that can be easily fixed 
 without needing an exclusion. It would be nice to fix them. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2796) deprecate sbin/yarn-daemon.sh

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14319091#comment-14319091
 ] 

Hudson commented on YARN-2796:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #7094 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7094/])
YARN-2796. deprecate sbin/yarn-daemon.sh (aw) (aw: rev 
f1070230d146518f42334ac01b8ec32daf18ac0b)
* hadoop-yarn-project/hadoop-yarn/bin/yarn-daemon.sh
* hadoop-yarn-project/CHANGES.txt


 deprecate sbin/yarn-daemon.sh
 -

 Key: YARN-2796
 URL: https://issues.apache.org/jira/browse/YARN-2796
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scripts
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: YARN-2796-00.patch


 We should mark all yarn sbin/*.sh commands (except for start and 
 stop) as deprecated in trunk so that they may be removed in a future release.
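 For context, the replacement form in trunk would look roughly like this 
 (assuming the unified --daemon option from the shell script rewrite):
 {noformat}
 # deprecated
 sbin/yarn-daemon.sh start resourcemanager

 # preferred
 bin/yarn --daemon start resourcemanager
 {noformat}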



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3181) FairScheduler: Fix up outdated findbugs issues

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14319090#comment-14319090
 ] 

Hudson commented on YARN-3181:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #7094 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7094/])
YARN-3181. FairScheduler: Fix up outdated findbugs issues. (kasha) (kasha: rev 
c2b185def846f5577a130003a533b9c377b58fab)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
* hadoop-yarn-project/CHANGES.txt
* hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSOpDurations.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java


 FairScheduler: Fix up outdated findbugs issues
 --

 Key: YARN-3181
 URL: https://issues.apache.org/jira/browse/YARN-3181
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: yarn-3181-1.patch


 In FairScheduler, we have excluded some findbugs-reported errors. Some of 
 them aren't applicable anymore, and there are a few that can be easily fixed 
 without needing an exclusion. It would be nice to fix them. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (YARN-3192) Empty handler for exception: java.lang.InterruptedException #WebAppProxy.java and #/ResourceManager.java

2015-02-12 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas resolved YARN-3192.
-
Resolution: Not a Problem

Calling {{System.exit(-1)}} is not an acceptable way to shut down the RM. 
Please review the surrounding code.

I'm going to close this, until we can tie a bug to this code. Graceful shutdown 
is difficult to effect, and this issue's scope is too narrow to contribute to 
it.

[~brahmareddy], many of the JIRAs you're filing appear to be detected by 
automated tools. If the interrupt handling here can cause hangs, HA bugs, 
inconsistent replies to users, etc. then please file reports on the 
consequences, citing this as the source.
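For what it's worth, the conventional alternative to both the empty catch and 
System.exit is to restore the interrupt status; a sketch, not a decided fix:
{code}
public void join() {
  if (proxyServer != null) {
    try {
      proxyServer.join();
    } catch (InterruptedException e) {
      // Preserve the interrupt so callers up the stack can observe it
      Thread.currentThread().interrupt();
    }
  }
}
{code}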

 Empty handler for exception: java.lang.InterruptedException #WebAppProxy.java 
 and #/ResourceManager.java
 

 Key: YARN-3192
 URL: https://issues.apache.org/jira/browse/YARN-3192
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: YARN-3192.patch


 The InterruptedException is completely ignored. As a result, any events 
 causing this interrupt will be lost.
  File: org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
 {code}
 try {
   event = eventQueue.take();
 } catch (InterruptedException e) {
   LOG.error("Returning, interrupted : " + e);
   return; // TODO: Kill RM.
 }
 {code}
 File: org/apache/hadoop/yarn/server/webproxy/WebAppProxy.java
 {code}
 public void join() {
   if (proxyServer != null) {
     try {
       proxyServer.join();
     } catch (InterruptedException e) {
     }
   }
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3127) Apphistory url crashes when RM switches with ATS enabled

2015-02-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14319221#comment-14319221
 ] 

Hadoop QA commented on YARN-3127:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12698538/YARN-3127.20150213-1.patch
  against trunk revision 6f5290b.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 5 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/6624//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/6624//artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6624//console

This message is automatically generated.

 Apphistory url crashes when RM switches with ATS enabled
 

 Key: YARN-3127
 URL: https://issues.apache.org/jira/browse/YARN-3127
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, timelineserver
Affects Versions: 2.6.0
 Environment: RM HA with ATS
Reporter: Bibin A Chundatt
Assignee: Naganarasimha G R
 Attachments: YARN-3127.20150213-1.patch


 1.Start RM with HA and ATS configured and run some yarn applications
 2.Once applications have finished successfully, start the timeline server
 3.Now failover HA from active to standby
 4.Access timeline server URL IP:PORT/applicationhistory
 Result: Application history URL fails with below info
 {quote}
 2015-02-03 20:28:09,511 ERROR org.apache.hadoop.yarn.webapp.View: Failed to 
 read the applications.
 java.lang.reflect.UndeclaredThrowableException
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1643)
   at 
 org.apache.hadoop.yarn.server.webapp.AppsBlock.render(AppsBlock.java:80)
   at 
 org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:67)
   at 
 org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:77)
   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
   at 
 org.apache.hadoop.yarn.webapp.view.HtmlPage$Page.subView(HtmlPage.java:49)
   ...
 Caused by: 
 org.apache.hadoop.yarn.exceptions.ApplicationAttemptNotFoundException: The 
 entity for application attempt appattempt_1422972608379_0001_01 doesn't 
 exist in the timeline store
   at 
 org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryManagerOnTimelineStore.getApplicationAttempt(ApplicationHistoryManagerOnTimelineStore.java:151)
   at 
 org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryManagerOnTimelineStore.generateApplicationReport(ApplicationHistoryManagerOnTimelineStore.java:499)
   at 
 org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryManagerOnTimelineStore.getAllApplications(ApplicationHistoryManagerOnTimelineStore.java:108)
   at 
 org.apache.hadoop.yarn.server.webapp.AppsBlock$1.run(AppsBlock.java:84)
   at 
 org.apache.hadoop.yarn.server.webapp.AppsBlock$1.run(AppsBlock.java:81)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
   ... 51 more
 2015-02-03 20:28:09,512 ERROR org.apache.hadoop.yarn.webapp.Dispatcher: error 
 handling URI: /applicationhistory
 org.apache.hadoop.yarn.webapp.WebAppException: Error rendering block: 
 nestLevel=6 expected 5
   at 
 org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
   at 
 org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:77)
 {quote}
 Behaviour of AHS with a file based history store:
   -Apphistory url is working 
   -No attempt entries are shown for each application.
   
 Based on initial analysis, when the RM switches, application attempts from the 
 state store are not replayed but 

[jira] [Updated] (YARN-3041) [Data Model] create the ATS entity/event API

2015-02-12 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen updated YARN-3041:
--
Attachment: YARN-3041.2.patch
Data_model_proposal_v2.pdf

Based on the online/offline discussion so far around the timeline data model, 
I've updated the proposal doc and attached it here. Different from the first 
proposal,

* we're going to treat Cluster, Flow, FlowRun, Application, ApplicationAttempt 
and Container as the first-class entities. There's a parent-child relationship 
among them. FlowRun could be nested.

* In addition, we also define User and Queue to support aggregation from 
these two aspects.

* Moreover, a metric will host either a single value or a time series.

I created and attached a patch, which translates the data model into Java 
objects. It may still need to be adjusted according to JAXB requirements.
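To give a feel for the shape of such objects, a hypothetical sketch (the 
attached patch defines the real classes and JAXB bindings):
{code}
@XmlRootElement
public class TimelineEntity {
  private String id;
  private String type;                  // Cluster, Flow, FlowRun, App, ...
  private String parentId;              // models the parent-child relationship
  private Set<TimelineMetric> metrics;  // see below
  // getters/setters omitted
}

public class TimelineMetric {
  private String name;
  // timestamp -> value: one entry models a single value, several entries
  // model a time series
  private Map<Long, Number> values;
}
{code}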


 [Data Model] create the ATS entity/event API
 

 Key: YARN-3041
 URL: https://issues.apache.org/jira/browse/YARN-3041
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Sangjin Lee
Assignee: Zhijie Shen
 Attachments: Data_model_proposal_v2.pdf, YARN-3041.2.patch, 
 YARN-3041.preliminary.001.patch


 Per design in YARN-2928, create the ATS entity and events API.
 Also, as part of this JIRA, create YARN system entities (e.g. cluster, user, 
 flow, flow run, YARN app, ...).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3179) Update use of Iterator to Iterable

2015-02-12 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14319385#comment-14319385
 ] 

Xuan Gong commented on YARN-3179:
-

+1, LGTM. Will commit

 Update use of Iterator to Iterable
 --

 Key: YARN-3179
 URL: https://issues.apache.org/jira/browse/YARN-3179
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
 Attachments: YARN-3179.001.patch, YARN-3179.002.patch


 Found these using the IntelliJ Findbugs-IDEA plugin, which uses findbugs3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3191) Log object should be initialized with its own class

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14319446#comment-14319446
 ] 

Hudson commented on YARN-3191:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7099 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7099/])
YARN-3191. Log object should be initialized with its own class. Contributed by 
Rohith. (aajisaka: rev 6a49e58cb81e2d0971166a11a79adc2e1a5aae2a)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java


 Log object should be initialized with its own class
 ---

 Key: YARN-3191
 URL: https://issues.apache.org/jira/browse/YARN-3191
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.6.0
Reporter: Rohith
Assignee: Rohith
Priority: Trivial
 Fix For: 2.7.0

 Attachments: 0001-YARN-3191.patch


 In ContainerImpl and ApplicationImpl class, Log object is initialized with 
 interface name. This causes in logging happen with interface class.
 {{private static final Log LOG = LogFactory.getLog(Container.class);}} 
 {{private static final Log LOG = LogFactory.getLog(Application.class);}}
 it should be 
 {{private static final Log LOG = LogFactory.getLog(ContainerImpl.class);}} 
 {{private static final Log LOG = LogFactory.getLog(ApplicationImpl.class);}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (YARN-3025) Provide API for retrieving blacklisted nodes

2015-02-12 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned YARN-3025:


Assignee: Ted Yu

 Provide API for retrieving blacklisted nodes
 

 Key: YARN-3025
 URL: https://issues.apache.org/jira/browse/YARN-3025
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Ted Yu

 We have the following method which updates blacklist:
 {code}
   public synchronized void updateBlacklist(List<String> blacklistAdditions,
       List<String> blacklistRemovals) {
 {code}
 Upon AM failover, there should be an API which returns the blacklisted nodes 
 so that the new AM can make consistent decisions.
 The new API can be:
 {code}
   public synchronized List<String> getBlacklistedNodes()
 {code}
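 A hypothetical recovery flow for the failed-over AM, assuming the proposed 
 getter (updateBlacklist is the existing call):
 {code}
 // The new attempt re-learns the previous attempt's blacklist...
 List<String> blacklisted = amRMClient.getBlacklistedNodes();
 // ...and re-applies it so scheduling decisions stay consistent
 amRMClient.updateBlacklist(blacklisted, Collections.<String>emptyList());
 {code}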



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3104) RM generates new AMRM tokens every heartbeat between rolling and activation

2015-02-12 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14319268#comment-14319268
 ] 

Jian He commented on YARN-3104:
---

Makes sense to me. Committing this. [~jlowe], would you like to open a jira to 
track this issue?

 RM generates new AMRM tokens every heartbeat between rolling and activation
 ---

 Key: YARN-3104
 URL: https://issues.apache.org/jira/browse/YARN-3104
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.6.0
Reporter: Jason Lowe
Assignee: Jason Lowe
 Attachments: YARN-3104.001.patch, YARN-3104.002.patch, 
 YARN-3104.003.patch


 When the RM rolls a new AMRM secret, it conveys this to the AMs when it 
 notices they are still connected with the old key.  However neither the RM 
 nor the AM explicitly close the connection or otherwise try to reconnect with 
 the new secret.  Therefore the RM keeps thinking the AM doesn't have the new 
 token on every heartbeat and keeps sending new tokens for the period between 
 the key roll and the key activation.  Once activated the RM no longer squawks 
 in its logs about needing to generate a new token every heartbeat (i.e.: 
 second) for every app, but the apps can still be using the old token.  The 
 token is only checked upon connection to the RM.  The apps don't reconnect 
 when sent a new token, and the RM doesn't force them to reconnect by closing 
 the connection.
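 For reference, the AM-side token refresh would conceptually look like this 
 (a sketch; the committed fix may handle it differently):
 {code}
 // On each allocate response, pick up a rolled AMRM token if one was sent
 if (response.getAMRMToken() != null) {
   Token<AMRMTokenIdentifier> token =
       ConverterUtils.convertFromYarn(response.getAMRMToken(), rmAddress);
   // Adding it to the current UGI makes the next (re)connect use the new key
   UserGroupInformation.getCurrentUser().addToken(token);
 }
 {code}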



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3179) Update use of Iterator to Iterable

2015-02-12 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14319389#comment-14319389
 ] 

Xuan Gong commented on YARN-3179:
-

Committed into trunk/branch-2. Thanks, Ray!

 Update use of Iterator to Iterable
 --

 Key: YARN-3179
 URL: https://issues.apache.org/jira/browse/YARN-3179
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
 Fix For: 2.7.0

 Attachments: YARN-3179.001.patch, YARN-3179.002.patch


 Found these using the IntelliJ Findbugs-IDEA plugin, which uses findbugs3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3191) Log object should be initialized with its own class

2015-02-12 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14319413#comment-14319413
 ] 

Akira AJISAKA commented on YARN-3191:
-

LGTM +1, the test failure looks unrelated to the patch.

 Log object should be initialized with its own class
 ---

 Key: YARN-3191
 URL: https://issues.apache.org/jira/browse/YARN-3191
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.6.0
Reporter: Rohith
Assignee: Rohith
Priority: Trivial
 Attachments: 0001-YARN-3191.patch


 In ContainerImpl and ApplicationImpl class, Log object is initialized with 
 interface name. This causes in logging happen with interface class.
 {{private static final Log LOG = LogFactory.getLog(Container.class);}} 
 {{private static final Log LOG = LogFactory.getLog(Application.class);}}
 it should be 
 {{private static final Log LOG = LogFactory.getLog(ContainerImpl.class);}} 
 {{private static final Log LOG = LogFactory.getLog(ApplicationImpl.class);}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3179) Update use of Iterator to Iterable

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14319412#comment-14319412
 ] 

Hudson commented on YARN-3179:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #7097 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7097/])
YARN-3179. Update use of Iterator to Iterable in RMAdminCLI and (xgong: rev 
2586915bb3178d26ad692f93d53aaffbb55d9ed9)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/CommonNodeLabelsManager.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/RMAdminCLI.java


 Update use of Iterator to Iterable
 --

 Key: YARN-3179
 URL: https://issues.apache.org/jira/browse/YARN-3179
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
 Fix For: 2.7.0

 Attachments: YARN-3179.001.patch, YARN-3179.002.patch


 Found these using the IntelliJ Findbugs-IDEA plugin, which uses findbugs3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (YARN-3041) [Data Model] create the ATS entity/event API

2015-02-12 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen reassigned YARN-3041:
-

Assignee: Zhijie Shen  (was: Robert Kanter)

 [Data Model] create the ATS entity/event API
 

 Key: YARN-3041
 URL: https://issues.apache.org/jira/browse/YARN-3041
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Sangjin Lee
Assignee: Zhijie Shen
 Attachments: YARN-3041.preliminary.001.patch


 Per design in YARN-2928, create the ATS entity and events API.
 Also, as part of this JIRA, create YARN system entities (e.g. cluster, user, 
 flow, flow run, YARN app, ...).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3132) RMNodeLabelsManager should remove node from node-to-label mapping when node becomes deactivated

2015-02-12 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-3132:
--
Description: 
Using an example to explain:
1) Admin specify host1 has label=x
2) node=host1:123 registered
3) Get node-to-label mapping, return host1/host1:123
4) node=host1:123 unregistered
5) Get node-to-label mapping, still returns host1:123

Probably we should remove host1:123 when it becomes deactivated and no directly 
label assigned to it (directly assign means admin specify host1:123 has x 
instead of host1 has x).

  was:
Using an example to explain:
1) Admin specify host1 has label=x
2) node=host1:123 registered
3) Get node-to-label mapping, return host1/host1:123
4) node=host1:123 unregistered
5) Get node-to-label mapping, still returns host1:123

Probably we should remove host1:123 when it becomes deactivated and no directly 
label assigned to it (directly assign means admin specify host1:123 has x 
instead of host1 has 123).
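A rough sketch of the cleanup being proposed (all names hypothetical; the 
attached patch is the real change):
{code}
// On node deactivation, drop the host:port entry from the node-to-labels
// mapping unless a label was assigned directly to that host:port
void onNodeDeactivated(NodeId nodeId) {
  if (getDirectLabelsOf(nodeId).isEmpty()) {
    nodeToLabels.remove(nodeId);
  }
}
{code}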


 RMNodeLabelsManager should remove node from node-to-label mapping when node 
 becomes deactivated
 ---

 Key: YARN-3132
 URL: https://issues.apache.org/jira/browse/YARN-3132
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, client, resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-3132.1.patch


 Using an example to explain:
 1) Admin specify host1 has label=x
 2) node=host1:123 registered
 3) Get node-to-label mapping, return host1/host1:123
 4) node=host1:123 unregistered
 5) Get node-to-label mapping, still returns host1:123
 Probably we should remove host1:123 when it becomes deactivated and no 
 directly label assigned to it (directly assign means admin specify host1:123 
 has x instead of host1 has x).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3191) Log object should be initialized with its own class

2015-02-12 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated YARN-3191:

Fix Version/s: 2.7.0

 Log object should be initialized with its own class
 ---

 Key: YARN-3191
 URL: https://issues.apache.org/jira/browse/YARN-3191
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.6.0
Reporter: Rohith
Assignee: Rohith
Priority: Trivial
 Fix For: 2.7.0

 Attachments: 0001-YARN-3191.patch


 In ContainerImpl and ApplicationImpl class, Log object is initialized with 
 interface name. This causes in logging happen with interface class.
 {{private static final Log LOG = LogFactory.getLog(Container.class);}} 
 {{private static final Log LOG = LogFactory.getLog(Application.class);}}
 it should be 
 {{private static final Log LOG = LogFactory.getLog(ContainerImpl.class);}} 
 {{private static final Log LOG = LogFactory.getLog(ApplicationImpl.class);}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3041) [Data Model] create the ATS entity/event API

2015-02-12 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14319230#comment-14319230
 ] 

Zhijie Shen commented on YARN-3041:
---

Take it over, Thanks! - Zhijie

 [Data Model] create the ATS entity/event API
 

 Key: YARN-3041
 URL: https://issues.apache.org/jira/browse/YARN-3041
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Sangjin Lee
Assignee: Zhijie Shen
 Attachments: YARN-3041.preliminary.001.patch


 Per design in YARN-2928, create the ATS entity and events API.
 Also, as part of this JIRA, create YARN system entities (e.g. cluster, user, 
 flow, flow run, YARN app, ...).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3104) RM generates new AMRM tokens every heartbeat between rolling and activation

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14319289#comment-14319289
 ] 

Hudson commented on YARN-3104:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #7095 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7095/])
YARN-3104. Fixed RM to not generate new AMRM tokens on every heartbeat between 
rolling and activation. Contributed by Jason Lowe (jianhe: rev 
18297e09727e4af95140084760ae1267e8fe51c4)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockAM.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ApplicationMasterService.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestAMRMTokens.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java


 RM generates new AMRM tokens every heartbeat between rolling and activation
 ---

 Key: YARN-3104
 URL: https://issues.apache.org/jira/browse/YARN-3104
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.6.0
Reporter: Jason Lowe
Assignee: Jason Lowe
 Fix For: 2.7.0

 Attachments: YARN-3104.001.patch, YARN-3104.002.patch, 
 YARN-3104.003.patch


 When the RM rolls a new AMRM secret, it conveys this to the AMs when it 
 notices they are still connected with the old key.  However neither the RM 
 nor the AM explicitly close the connection or otherwise try to reconnect with 
 the new secret.  Therefore the RM keeps thinking the AM doesn't have the new 
 token on every heartbeat and keeps sending new tokens for the period between 
 the key roll and the key activation.  Once activated the RM no longer squawks 
 in its logs about needing to generate a new token every heartbeat (i.e.: 
 second) for every app, but the apps can still be using the old token.  The 
 token is only checked upon connection to the RM.  The apps don't reconnect 
 when sent a new token, and the RM doesn't force them to reconnect by closing 
 the connection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2868) Add metric for initial container launch time to FairScheduler

2015-02-12 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14319300#comment-14319300
 ] 

Ray Chiang commented on YARN-2868:
--

[~leftnoteasy] just following up to be sure.  Are you okay with the latest 
patch uploaded?

 Add metric for initial container launch time to FairScheduler
 -

 Key: YARN-2868
 URL: https://issues.apache.org/jira/browse/YARN-2868
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Ray Chiang
Assignee: Ray Chiang
  Labels: metrics, supportability
 Attachments: YARN-2868-01.patch, YARN-2868.002.patch, 
 YARN-2868.003.patch, YARN-2868.004.patch, YARN-2868.005.patch, 
 YARN-2868.006.patch, YARN-2868.007.patch, YARN-2868.008.patch


 Add a metric to measure the latency between starting container allocation 
 and first container actually allocated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3132) RMNodeLabelsManager should remove node from node-to-label mapping when node becomes deactivated

2015-02-12 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-3132:
-
Attachment: YARN-3132.1.patch

Attached initial patch

 RMNodeLabelsManager should remove node from node-to-label mapping when node 
 becomes deactivated
 ---

 Key: YARN-3132
 URL: https://issues.apache.org/jira/browse/YARN-3132
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, client, resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-3132.1.patch


 Using an example to explain:
 1) Admin specify host1 has label=x
 2) node=host1:123 registered
 3) Get node-to-label mapping, return host1/host1:123
 4) node=host1:123 unregistered
 5) Get node-to-label mapping, still returns host1:123
 Probably we should remove host1:123 when it becomes deactivated and no 
 directly label assigned to it (directly assign means admin specify host1:123 
 has x instead of host1 has 123).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3158) Correct log messages in ResourceTrackerService

2015-02-12 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14319325#comment-14319325
 ] 

Xuan Gong commented on YARN-3158:
-

Committed into trunk/branch-2. Thanks, varun

 Correct log messages in ResourceTrackerService
 --

 Key: YARN-3158
 URL: https://issues.apache.org/jira/browse/YARN-3158
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Devaraj K
Assignee: Varun Saxena
  Labels: newbie
 Fix For: 2.7.0

 Attachments: YARN-3158.patch


 There is a space missing after the container id in the below message.
 {code:xml}
 2015-02-07 08:26:12,641 ERROR 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: 
 Received finished container : container_1423277052568_0001_01_01for 
 unknown application application_1423277052568_0001 Skipping.
 {code}
 Again, there is a space missing before the application id.
 {code:xml}
 LOG.debug("Ignoring container completion status for unmanaged AM "
     + rmApp.getApplicationId());
 {code}
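 The corrected forms would simply add the missing spaces (variable names 
 assumed):
 {code}
 LOG.error("Received finished container : " + containerId
     + " for unknown application " + applicationId + " Skipping.");
 LOG.debug("Ignoring container completion status for unmanaged AM "
     + rmApp.getApplicationId());
 {code}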



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3158) Correct log messages in ResourceTrackerService

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14319368#comment-14319368
 ] 

Hudson commented on YARN-3158:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7096 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7096/])
YARN-3158. Correct log messages in ResourceTrackerService. Contributed (xgong: 
rev 99f6bd4f7ab1c5cac57362690c686139e73251d9)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceTrackerService.java
* hadoop-yarn-project/CHANGES.txt


 Correct log messages in ResourceTrackerService
 --

 Key: YARN-3158
 URL: https://issues.apache.org/jira/browse/YARN-3158
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Devaraj K
Assignee: Varun Saxena
  Labels: newbie
 Fix For: 2.7.0

 Attachments: YARN-3158.patch


 There is a space missing after the container id in the below message.
 {code:xml}
 2015-02-07 08:26:12,641 ERROR 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: 
 Received finished container : container_1423277052568_0001_01_01for 
 unknown application application_1423277052568_0001 Skipping.
 {code}
 Again, there is a space missing before the application id.
 {code:xml}
 LOG.debug("Ignoring container completion status for unmanaged AM "
     + rmApp.getApplicationId());
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (YARN-3043) [Data Model] Create ATS configuration, metadata, etc. as part of entities

2015-02-12 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen resolved YARN-3043.
---
Resolution: Duplicate

Let's make the all-inclusive data model definition in YARN-3041.

 [Data Model] Create ATS configuration, metadata, etc. as part of entities
 -

 Key: YARN-3043
 URL: https://issues.apache.org/jira/browse/YARN-3043
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Sangjin Lee
Assignee: Varun Saxena

 Per design in YARN-2928, create APIs for configuration, metadata, etc. and 
 integrate them into entities.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (YARN-3042) [Data Model] Create ATS metrics API

2015-02-12 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen resolved YARN-3042.
---
Resolution: Duplicate

Let's make the all-inclusive data model definition in YARN-3041.

 [Data Model] Create ATS metrics API
 ---

 Key: YARN-3042
 URL: https://issues.apache.org/jira/browse/YARN-3042
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Sangjin Lee
Assignee: Siddharth Wagle

 Per design in YARN-2928, create the ATS metrics API and integrate it into the 
 entities.
 The concept may be based on the existing hadoop metrics, but we want to make 
 sure we have something that would satisfy all ATS use cases.
 It also needs to capture whether a metric should be aggregated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

