[jira] [Commented] (YARN-345) Many InvalidStateTransitonException errors for ApplicationImpl in Node Manager

2013-03-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13593300#comment-13593300
 ] 

Hudson commented on YARN-345:
-

Integrated in Hadoop-Yarn-trunk #146 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/146/])
YARN-345. Many InvalidStateTransitonException errors for ApplicationImpl in 
Node Manager. Contributed by Robert Parker (Revision 1452548)

 Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1452548
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationEventType.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/LogAggregationService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java


 Many InvalidStateTransitonException errors for ApplicationImpl in Node Manager
 --

 Key: YARN-345
 URL: https://issues.apache.org/jira/browse/YARN-345
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.0.2-alpha, 2.0.1-alpha, 0.23.5
Reporter: Devaraj K
Assignee: Robert Parker
Priority: Critical
 Fix For: 0.23.7, 2.0.4-beta

 Attachments: YARN-345.patch, YARN-354v2.patch


 {code:xml}
 org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
 FINISH_APPLICATION at FINISHED
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:398)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:58)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:520)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:512)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)
   at java.lang.Thread.run(Thread.java:662)
 {code}
 {code:xml}
 2013-01-17 04:03:46,726 WARN 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
  Can't handle this event at current state
 org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
 FINISH_APPLICATION at APPLICATION_RESOURCES_CLEANINGUP
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:398)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:58)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:520)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:512)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)
   at java.lang.Thread.run(Thread.java:662)
 {code}
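 The two traces above show FINISH_APPLICATION arriving after the application 
 has already reached FINISHED or moved on to resource cleanup. As a simplified, 
 self-contained illustration (plain Java, not YARN's StateMachineFactory), a 
 table-driven state machine accepts only events with registered transitions and 
 throws on everything else, so a late finish event needs an explicit, possibly 
 no-op, transition:
 {code:java}
 // Simplified illustration, not YARN code: events without a registered
 // transition throw, mirroring the warnings above.
 import java.util.EnumMap;
 import java.util.Map;

 public class AppStateMachineSketch {
   enum State { RUNNING, FINISHED }
   enum Event { FINISH_APPLICATION, CONTAINER_DONE }

   private final Map<State, Map<Event, State>> transitions =
       new EnumMap<State, Map<Event, State>>(State.class);
   private State current = State.RUNNING;

   AppStateMachineSketch() {
     transitions.put(State.RUNNING, new EnumMap<Event, State>(Event.class));
     transitions.put(State.FINISHED, new EnumMap<Event, State>(Event.class));
     transitions.get(State.RUNNING).put(Event.FINISH_APPLICATION, State.FINISHED);
     // Tolerating a duplicate finish event amounts to registering a
     // self-loop for it in the terminal state:
     transitions.get(State.FINISHED).put(Event.FINISH_APPLICATION, State.FINISHED);
   }

   void handle(Event event) {
     State next = transitions.get(current).get(event);
     if (next == null) {
       // Analogous to the InvalidStateTransitonException in the traces above.
       throw new IllegalStateException("Invalid event: " + event + " at " + current);
     }
     current = next;
   }
 }
 {code}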
 {code:xml}
 2013-01-17 00:01:11,006 WARN 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
  Can't handle 

[jira] [Commented] (YARN-448) Remove unnecessary hflush from log aggregation

2013-03-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13593302#comment-13593302
 ] 

Hudson commented on YARN-448:
-

Integrated in Hadoop-Yarn-trunk #146 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/146/])
YARN-448. Remove unnecessary hflush from log aggregation (Kihwal Lee via 
bobby) (Revision 1452475)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1452475
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat.java


 Remove unnecessary hflush from log aggregation
 --

 Key: YARN-448
 URL: https://issues.apache.org/jira/browse/YARN-448
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 0.23.7, 2.0.4-beta
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 3.0.0, 0.23.7, 2.0.4-beta

 Attachments: yarn-448.patch.txt


 AggregatedLogFormat#writeVersion() calls hflush() after writing the version. 
 Calling hflush() does not seem to be necessary, and it can add a lot of load 
 to HDFS in a big, busy cluster.
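 A minimal sketch of the pattern being removed, assuming a writeVersion-style 
 helper over an HDFS output stream (not the actual AggregatedLogFormat source):
 {code:java}
 // Hedged sketch, not the actual AggregatedLogFormat code: hflush() after
 // the version header forces an extra pipeline round trip per log file.
 import java.io.IOException;
 import org.apache.hadoop.fs.FSDataOutputStream;

 public class VersionWriterSketch {
   private static final int VERSION = 1; // hypothetical version constant

   // Before: the few version bytes are pushed to the DataNodes on their own.
   static void writeVersionWithFlush(FSDataOutputStream out) throws IOException {
     out.writeInt(VERSION);
     out.hflush(); // the call the patch removes
   }

   // After: the version is buffered and flushed with the rest of the
   // aggregated log data when the stream is eventually closed.
   static void writeVersion(FSDataOutputStream out) throws IOException {
     out.writeInt(VERSION);
   }
 }
 {code}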

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-425) coverage fix for yarn api

2013-03-05 Thread Aleksey Gorshkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Gorshkov updated YARN-425:
--

Attachment: YARN-425-trunk-a.patch

update patch

 coverage fix for yarn api
 -

 Key: YARN-425
 URL: https://issues.apache.org/jira/browse/YARN-425
 Project: Hadoop YARN
  Issue Type: Test
Affects Versions: 3.0.0, 0.23.7, 2.0.4-beta
Reporter: Aleksey Gorshkov
 Attachments: YARN-425-branch-0.23.patch, YARN-425-branch-2.patch, 
 YARN-425-trunk-a.patch, YARN-425-trunk.patch


 coverage fix for yarn api
 patch YARN-425-trunk.patch for trunk
 patch YARN-425-branch-2.patch for branch-2
 patch YARN-425-branch-0.23.patch for branch-0.23

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-425) coverage fix for yarn api

2013-03-05 Thread Aleksey Gorshkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Gorshkov updated YARN-425:
--

Description: 
coverage fix for yarn api
patch YARN-425-trunk-a.patch for trunk
patch YARN-425-branch-2.patch for branch-2
patch YARN-425-branch-0.23.patch for branch-0.23

  was:
coverage fix for yarn api
patch YARN-425-trunk.patch for trunk
patch YARN-425-branch-2.patch for branch-2
patch YARN-425-branch-0.23.patch for branch-0.23


 coverage fix for yarn api
 -

 Key: YARN-425
 URL: https://issues.apache.org/jira/browse/YARN-425
 Project: Hadoop YARN
  Issue Type: Test
Affects Versions: 3.0.0, 0.23.7, 2.0.4-beta
Reporter: Aleksey Gorshkov
 Attachments: YARN-425-branch-0.23.patch, YARN-425-branch-2.patch, 
 YARN-425-trunk-a.patch, YARN-425-trunk.patch


 coverage fix for yarn api
 patch YARN-425-trunk-a.patch for trunk
 patch YARN-425-branch-2.patch for branch-2
 patch YARN-425-branch-0.23.patch for branch-0.23

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-425) coverage fix for yarn api

2013-03-05 Thread Aleksey Gorshkov (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1359#comment-1359
 ] 

Aleksey Gorshkov commented on YARN-425:
---

update patch


 coverage fix for yarn api
 -

 Key: YARN-425
 URL: https://issues.apache.org/jira/browse/YARN-425
 Project: Hadoop YARN
  Issue Type: Test
Affects Versions: 3.0.0, 0.23.7, 2.0.4-beta
Reporter: Aleksey Gorshkov
 Attachments: YARN-425-branch-0.23.patch, YARN-425-branch-2.patch, 
 YARN-425-trunk-a.patch, YARN-425-trunk.patch


 coverage fix for yarn api
 patch YARN-425-trunk-a.patch for trunk
 patch YARN-425-branch-2.patch for branch-2
 patch YARN-425-branch-0.23.patch for branch-0.23

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-425) coverage fix for yarn api

2013-03-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13593346#comment-13593346
 ] 

Hadoop QA commented on YARN-425:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12572073/YARN-425-trunk-a.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 tests included appear to have a timeout.{color}

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 eclipse:eclipse{color}.  The patch failed to build with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/469//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/469//console

This message is automatically generated.

 coverage fix for yarn api
 -

 Key: YARN-425
 URL: https://issues.apache.org/jira/browse/YARN-425
 Project: Hadoop YARN
  Issue Type: Test
Affects Versions: 3.0.0, 0.23.7, 2.0.4-beta
Reporter: Aleksey Gorshkov
 Attachments: YARN-425-branch-0.23.patch, YARN-425-branch-2.patch, 
 YARN-425-trunk-a.patch, YARN-425-trunk.patch


 coverage fix for yarn api
 patch YARN-425-trunk-a.patch for trunk
 patch YARN-425-branch-2.patch for branch-2
 patch YARN-425-branch-0.23.patch for branch-0.23

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-448) Remove unnecessary hflush from log aggregation

2013-03-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13593366#comment-13593366
 ] 

Hudson commented on YARN-448:
-

Integrated in Hadoop-Hdfs-0.23-Build #544 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/544/])
svn merge -c 1452475 FIXES: YARN-448. Remove unnecessary hflush from log 
aggregation (Kihwal Lee via bobby) (Revision 1452478)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1452478
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat.java


 Remove unnecessary hflush from log aggregation
 --

 Key: YARN-448
 URL: https://issues.apache.org/jira/browse/YARN-448
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 0.23.7, 2.0.4-beta
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 3.0.0, 0.23.7, 2.0.4-beta

 Attachments: yarn-448.patch.txt


 AggregatedLogFormat#writeVersion() calls hflush() after writing the version. 
 Calling hflush() does not seem to be necessary, and it can add a lot of load 
 to HDFS in a big, busy cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-345) Many InvalidStateTransitonException errors for ApplicationImpl in Node Manager

2013-03-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13593365#comment-13593365
 ] 

Hudson commented on YARN-345:
-

Integrated in Hadoop-Hdfs-0.23-Build #544 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/544/])
svn merge -c 1452548 FIXES: YARN-345. Many InvalidStateTransitonException 
errors for ApplicationImpl in Node Manager. Contributed by Robert Parker 
(Revision 1452555)

 Result = FAILURE
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1452555
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationEventType.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/LogAggregationService.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java


 Many InvalidStateTransitonException errors for ApplicationImpl in Node Manager
 --

 Key: YARN-345
 URL: https://issues.apache.org/jira/browse/YARN-345
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.0.2-alpha, 2.0.1-alpha, 0.23.5
Reporter: Devaraj K
Assignee: Robert Parker
Priority: Critical
 Fix For: 0.23.7, 2.0.4-beta

 Attachments: YARN-345.patch, YARN-354v2.patch


 {code:xml}
 org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
 FINISH_APPLICATION at FINISHED
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:398)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:58)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:520)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:512)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)
   at java.lang.Thread.run(Thread.java:662)
 {code}
 {code:xml}
 2013-01-17 04:03:46,726 WARN 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
  Can't handle this event at current state
 org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
 FINISH_APPLICATION at APPLICATION_RESOURCES_CLEANINGUP
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:398)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:58)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:520)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:512)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)
   at java.lang.Thread.run(Thread.java:662)
 {code}
 {code:xml}
 2013-01-17 

[jira] [Commented] (YARN-345) Many InvalidStateTransitonException errors for ApplicationImpl in Node Manager

2013-03-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13593377#comment-13593377
 ] 

Hudson commented on YARN-345:
-

Integrated in Hadoop-Hdfs-trunk #1335 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1335/])
YARN-345. Many InvalidStateTransitonException errors for ApplicationImpl in 
Node Manager. Contributed by Robert Parker (Revision 1452548)

 Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1452548
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationEventType.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/LogAggregationService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java


 Many InvalidStateTransitonException errors for ApplicationImpl in Node Manager
 --

 Key: YARN-345
 URL: https://issues.apache.org/jira/browse/YARN-345
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.0.2-alpha, 2.0.1-alpha, 0.23.5
Reporter: Devaraj K
Assignee: Robert Parker
Priority: Critical
 Fix For: 0.23.7, 2.0.4-beta

 Attachments: YARN-345.patch, YARN-354v2.patch


 {code:xml}
 org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
 FINISH_APPLICATION at FINISHED
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:398)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:58)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:520)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:512)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)
   at java.lang.Thread.run(Thread.java:662)
 {code}
 {code:xml}
 2013-01-17 04:03:46,726 WARN 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
  Can't handle this event at current state
 org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
 FINISH_APPLICATION at APPLICATION_RESOURCES_CLEANINGUP
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:398)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:58)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:520)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:512)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)
   at java.lang.Thread.run(Thread.java:662)
 {code}
 {code:xml}
 2013-01-17 00:01:11,006 WARN 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application:
  Can't handle 

[jira] [Commented] (YARN-227) Application expiration difficult to debug for end-users

2013-03-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13593465#comment-13593465
 ] 

Hadoop QA commented on YARN-227:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12572100/YARN-227.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

  {color:red}-1 one of tests included doesn't have a timeout.{color}

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/470//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/470//console

This message is automatically generated.

 Application expiration difficult to debug for end-users
 ---

 Key: YARN-227
 URL: https://issues.apache.org/jira/browse/YARN-227
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 0.23.3, 2.0.1-alpha
Reporter: Jason Lowe
Assignee: Jason Lowe
  Labels: usability
 Attachments: YARN-227-branch-0.23.patch, YARN-227-branch-0.23.patch, 
 YARN-227.patch, YARN-227.patch


 When an AM attempt expires the AMLivelinessMonitor in the RM will kill the 
 job and mark it as failed.  However there are no diagnostic messages set for 
 the application indicating that the application failed because of expiration. 
  Even if the AM logs are examined, it's often not obvious that the 
 application was externally killed.  The only evidence of what happened to the 
 application is currently in the RM logs, and those are often not accessible 
 by users.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-378) ApplicationMaster retry times should be set by Client

2013-03-05 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13593599#comment-13593599
 ] 

Zhijie Shen commented on YARN-378:
--

Thanks, Robert! I've also realized that the RM cannot read job.xml. There are 
two places where max-retries is used: one in MRAppMaster and the other in 
RMAppImpl. In the first place, the AM can read job.xml directly to get the 
application-specific configuration. In the second place, the RM has to get the 
setting through the ApplicationSubmissionContext, so I added the setter/getter 
for NumMaxRetries.

The MR client can either use -Dyarn.resourcemanager.am.max-retries or parse 
mapred-site.xml to get the configuration, and set it in 
ApplicationSubmissionContext.

 ApplicationMaster retry times should be set by Client
 -

 Key: YARN-378
 URL: https://issues.apache.org/jira/browse/YARN-378
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client, resourcemanager
 Environment: suse
Reporter: xieguiming
Assignee: Zhijie Shen
  Labels: usability

 We should support different ApplicationMaster retry counts for different 
 clients or users. That is to say, yarn.resourcemanager.am.max-retries should 
 be settable by the client. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-417) Add a poller that allows the AM to receive notifications when it is assigned containers

2013-03-05 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13593689#comment-13593689
 ] 

Bikas Saha commented on YARN-417:
-

Please do use join and make sure all the threads are complete and get cleaned 
up. It may be convenient to call stop() in the callback, but I don't think we 
should encourage that. It might be fine now, but it's not future proof. It's 
not uncommon for APIs to require users to be well behaved. As far as such use 
cases are concerned, since we have made this class an AbstractService, the 
common use case would be to init and start the service in the beginning and 
then stop it at the end, similar to other services.
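
A schematic sketch of that common use case, assuming a heartbeat thread inside 
a YARN AbstractService (the class and thread names here are illustrative, not 
the API of the attached patches):
{code:java}
// Illustrative sketch of the init/start/stop lifecycle described above;
// not the AMRMClientAsync API under review.
import org.apache.hadoop.yarn.service.AbstractService;

public class HeartbeatServiceSketch extends AbstractService {
  private Thread heartbeatThread;

  public HeartbeatServiceSketch() {
    super("HeartbeatServiceSketch");
  }

  @Override
  public void start() {
    heartbeatThread = new Thread(new Runnable() {
      public void run() {
        while (!Thread.currentThread().isInterrupted()) {
          // heartbeat to the RM, dispatch callbacks to the AM ...
        }
      }
    }, "Heartbeat");
    heartbeatThread.start();
    super.start();
  }

  @Override
  public void stop() {
    heartbeatThread.interrupt();
    try {
      // As requested above: join so the thread is fully cleaned up
      // before the service reports itself stopped.
      heartbeatThread.join();
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    super.stop();
  }
}
{code}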

 Add a poller that allows the AM to receive notifications when it is assigned 
 containers
 ---

 Key: YARN-417
 URL: https://issues.apache.org/jira/browse/YARN-417
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, applications
Affects Versions: 2.0.3-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: AMRMClientAsync-1.java, AMRMClientAsync.java, 
 YARN-417-1.patch, YARN-417-2.patch, YARN-417-3.patch, YARN-417.patch, 
 YarnAppMaster.java, YarnAppMasterListener.java


 Writing AMs would be easier for some if they did not have to handle 
 heartbeating to the RM on their own.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-378) ApplicationMaster retry times should be set by Client

2013-03-05 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13593736#comment-13593736
 ] 

Robert Joseph Evans commented on YARN-378:
--

I don't really want the client config to be called 
yarn.resourcemanager.am.max-retries.  That is a YARN ResourceManager config, 
and it is intended to be used by the RM, not by the MapReduce client.  I would 
much rather have a mapreduce.am.max-retries that the MR client reads and uses 
to populate the ApplicationSubmissionContext.
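
A hypothetical sketch of that client-side flow; the config key follows 
Robert's suggestion, and the setter name follows Zhijie's earlier comment 
about adding a getter/setter for NumMaxRetries (the committed API may differ):
{code:java}
// Hypothetical sketch of the proposal above; "mapreduce.am.max-retries"
// and setNumMaxRetries(...) are names from this discussion, not a
// released API.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;

public class AmMaxRetriesSketch {
  static void applyMaxRetries(Configuration conf,
      ApplicationSubmissionContext ctx) {
    // MR-specific key read by the MR client; other frameworks would
    // source the value from their own configuration instead.
    int maxRetries = conf.getInt("mapreduce.am.max-retries", 1);
    ctx.setNumMaxRetries(maxRetries); // assumed setter added by the patch
  }
}
{code}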

 ApplicationMaster retry times should be set by Client
 -

 Key: YARN-378
 URL: https://issues.apache.org/jira/browse/YARN-378
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client, resourcemanager
 Environment: suse
Reporter: xieguiming
Assignee: Zhijie Shen
  Labels: usability

 We should support different ApplicationMaster retry counts for different 
 clients or users. That is to say, yarn.resourcemanager.am.max-retries should 
 be settable by the client. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-378) ApplicationMaster retry times should be set by Client

2013-03-05 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13593811#comment-13593811
 ] 

Zhijie Shen commented on YARN-378:
--

Sorry, it's a typo in my previous comment. In fact, I wanted to say 
-Dyarn.application.am.max-retries.

I'd like to use yarn.application.am.max-retries as the name of the 
application-specific configuration. IMHO, mapreduce.am.max-retries will not 
be suitable if the submitted application is not a mapreduce job, since YARN 
is ultimately a management system for various computation frameworks (e.g., 
Apache Giraph). I'd rather have the configuration name be independent of 
mapreduce, as it is not only for mapreduce.

However, if the yarn prefix is confusing, what do you think about 
application.am.max-retries?

 ApplicationMaster retry times should be set by Client
 -

 Key: YARN-378
 URL: https://issues.apache.org/jira/browse/YARN-378
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client, resourcemanager
 Environment: suse
Reporter: xieguiming
Assignee: Zhijie Shen
  Labels: usability

 We should support different ApplicationMaster retry counts for different 
 clients or users. That is to say, yarn.resourcemanager.am.max-retries should 
 be settable by the client. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-378) ApplicationMaster retry times should be set by Client

2013-03-05 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13593858#comment-13593858
 ] 

Zhijie Shen commented on YARN-378:
--

I've just had an offline discussion with Hitesh. Please ignore the previous 
comment. We agree to use mapreduce.am.max-retries. Thanks!

 ApplicationMaster retry times should be set by Client
 -

 Key: YARN-378
 URL: https://issues.apache.org/jira/browse/YARN-378
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client, resourcemanager
 Environment: suse
Reporter: xieguiming
Assignee: Zhijie Shen
  Labels: usability

 We should support different ApplicationMaster retry counts for different 
 clients or users. That is to say, yarn.resourcemanager.am.max-retries should 
 be settable by the client. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-378) ApplicationMaster retry times should be set by Client

2013-03-05 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13593863#comment-13593863
 ] 

Robert Joseph Evans commented on YARN-378:
--

But the config *is* specific to mapreduce.  Every other application client will 
have to provide its own way of putting that value into the container launch 
context.  It could be through a hadoop config or it could be through something 
else entirely.

I am in the process of porting Storm to run on top of YARN.  I don't see us 
ever using a Hadoop Configuration in the client, except the default one to be 
able to access HDFS.  Storm has its own configuration object, so for better 
integration with Storm I would set up a Storm conf for that. In reality, I 
would probably just never set it, because I never want the application to go 
down entirely, and leaving it unset is how I would get the maximum number of 
retries allowed by the cluster.

I can see other applications that already exist and are being ported to run on 
YARN, like OpenMPI, wanting to set that config in a way that is consistent with 
their current configuration and not in a Hadoop-specific way.

 ApplicationMaster retry times should be set by Client
 -

 Key: YARN-378
 URL: https://issues.apache.org/jira/browse/YARN-378
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client, resourcemanager
 Environment: suse
Reporter: xieguiming
Assignee: Zhijie Shen
  Labels: usability

 We should support different ApplicationMaster retry counts for different 
 clients or users. That is to say, yarn.resourcemanager.am.max-retries should 
 be settable by the client. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-237) Refreshing the RM page forgets how many rows I had in my Datatables

2013-03-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13594078#comment-13594078
 ] 

Hadoop QA commented on YARN-237:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12572187/YARN-237.v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/471//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/471//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-mapreduce-client-hs.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/471//console

This message is automatically generated.

 Refreshing the RM page forgets how many rows I had in my Datatables
 ---

 Key: YARN-237
 URL: https://issues.apache.org/jira/browse/YARN-237
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.0.2-alpha, 0.23.4, 3.0.0
Reporter: Ravi Prakash
Assignee: jian he
  Labels: usability
 Attachments: YARN-237.patch, YARN-237.v2.patch


 If I choose 100 rows and then refresh the page, DataTables goes back to 
 showing me 20 rows.
 This user preference should be stored in a cookie.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-227) Application expiration difficult to debug for end-users

2013-03-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13594086#comment-13594086
 ] 

Hudson commented on YARN-227:
-

Integrated in Hadoop-trunk-Commit #3420 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3420/])
YARN-227. Application expiration difficult to debug for end-users (Jason 
Lowe via jeagles) (Revision 1453080)

 Result = SUCCESS
jeagles : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1453080
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/TestRMAppAttemptTransitions.java


 Application expiration difficult to debug for end-users
 ---

 Key: YARN-227
 URL: https://issues.apache.org/jira/browse/YARN-227
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 0.23.3, 2.0.1-alpha
Reporter: Jason Lowe
Assignee: Jason Lowe
  Labels: usability
 Attachments: YARN-227-branch-0.23.patch, YARN-227-branch-0.23.patch, 
 YARN-227.patch, YARN-227.patch


 When an AM attempt expires the AMLivelinessMonitor in the RM will kill the 
 job and mark it as failed.  However there are no diagnostic messages set for 
 the application indicating that the application failed because of expiration. 
  Even if the AM logs are examined, it's often not obvious that the 
 application was externally killed.  The only evidence of what happened to the 
 application is currently in the RM logs, and those are often not accessible 
 by users.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-378) ApplicationMaster retry times should be set by Client

2013-03-05 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen updated YARN-378:
-

Attachment: YARN-378_1.patch

Here's the patch. In addition to the aforementioned changes, I've updated 
TestMRAppMaster and TestAppManager to verify the two spots where max-retries 
is used.

 ApplicationMaster retry times should be set by Client
 -

 Key: YARN-378
 URL: https://issues.apache.org/jira/browse/YARN-378
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client, resourcemanager
 Environment: suse
Reporter: xieguiming
Assignee: Zhijie Shen
  Labels: usability
 Attachments: YARN-378_1.patch


 We should support different ApplicationMaster retry counts for different 
 clients or users. That is to say, yarn.resourcemanager.am.max-retries should 
 be settable by the client. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-227) Application expiration difficult to debug for end-users

2013-03-05 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13594092#comment-13594092
 ] 

Jonathan Eagles commented on YARN-227:
--

+1. Thanks so much for this patch, Jason.

 Application expiration difficult to debug for end-users
 ---

 Key: YARN-227
 URL: https://issues.apache.org/jira/browse/YARN-227
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 0.23.3, 2.0.1-alpha
Reporter: Jason Lowe
Assignee: Jason Lowe
  Labels: usability
 Attachments: YARN-227-branch-0.23.patch, YARN-227-branch-0.23.patch, 
 YARN-227.patch, YARN-227.patch


 When an AM attempt expires the AMLivelinessMonitor in the RM will kill the 
 job and mark it as failed.  However there are no diagnostic messages set for 
 the application indicating that the application failed because of expiration. 
  Even if the AM logs are examined, it's often not obvious that the 
 application was externally killed.  The only evidence of what happened to the 
 application is currently in the RM logs, and those are often not accessible 
 by users.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-450) Define value for * in the scheduling protocol

2013-03-05 Thread Bikas Saha (JIRA)
Bikas Saha created YARN-450:
---

 Summary: Define value for * in the scheduling protocol
 Key: YARN-450
 URL: https://issues.apache.org/jira/browse/YARN-450
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha
Assignee: Zhijie Shen


The ResourceRequest has a string field to specify node/rack locations. For the 
cross-rack/cluster-wide location (i.e., when there is no locality constraint), 
the * string is used everywhere. However, it's not defined anywhere, and each 
piece of code either defines a local constant or uses the string literal. 
Defining * in the protocol and removing the other local references from the 
code base would be good.
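
A minimal sketch of what defining it could look like; the class and field 
names below are assumptions, not the committed change:
{code:java}
// Hedged sketch: a single shared definition for the wildcard location,
// replacing scattered local constants and "*" literals. Names assumed.
public interface SchedulingProtocolConstants {
  /** Wildcard resource location: no node/rack locality constraint. */
  String ANY = "*";
}
{code}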

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-451) Add more metrics to RM page

2013-03-05 Thread Lohit Vijayarenu (JIRA)
Lohit Vijayarenu created YARN-451:
-

 Summary: Add more metrics to RM page
 Key: YARN-451
 URL: https://issues.apache.org/jira/browse/YARN-451
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.0.3-alpha
Reporter: Lohit Vijayarenu
Priority: Minor


The ResourceManager web UI shows the list of RUNNING applications, but it does 
not tell which applications are requesting more resources than others. With a 
cluster running hundreds of applications at once, it would be useful to have 
some kind of metric to distinguish high-resource-usage applications from 
low-resource-usage ones. At a minimum, showing the number of containers would 
be a good option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-429) capacity-scheduler config missing from yarn-test artifact

2013-03-05 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13594247#comment-13594247
 ] 

Andrew Purtell commented on YARN-429:
-

Thanks for looking into this. +1 on patch.

 capacity-scheduler config missing from yarn-test artifact
 -

 Key: YARN-429
 URL: https://issues.apache.org/jira/browse/YARN-429
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.0.3-alpha
Reporter: Siddharth Seth
Assignee: Siddharth Seth
Priority: Blocker
 Attachments: YARN-429.txt


 MiniYARNCluster and MiniMRCluster are unusable by downstream projects with 
 the 2.0.3-alpha release, since the capacity-scheduler configuration is 
 missing from the test artifact.
 hadoop-yarn-server-tests-3.0.0-SNAPSHOT-tests.jar should include the default 
 capacity-scheduler configuration. Also, this doesn't need to be part of the 
 default classpath, and it should be moved out of the top-level directory in 
 the dist package.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-417) Add a poller that allows the AM to receive notifications when it is assigned containers

2013-03-05 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13594316#comment-13594316
 ] 

Bikas Saha commented on YARN-417:
-

The available resources should be the first thing notified to the client, 
because they can affect how it allocates the new containers.

 Add a poller that allows the AM to receive notifications when it is assigned 
 containers
 ---

 Key: YARN-417
 URL: https://issues.apache.org/jira/browse/YARN-417
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, applications
Affects Versions: 2.0.3-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: AMRMClientAsync-1.java, AMRMClientAsync.java, 
 YARN-417-1.patch, YARN-417-2.patch, YARN-417-3.patch, YARN-417.patch, 
 YarnAppMaster.java, YarnAppMasterListener.java


 Writing AMs would be easier for some if they did not have to handle 
 heartbeating to the RM on their own.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-443) allow OS scheduling priority of NM to be different than the containers it launches

2013-03-05 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13594319#comment-13594319
 ] 

Thomas Graves commented on YARN-443:


I chose to do this in the container-executor itself, and the config goes into 
container-executor.cfg.  This made the change more straightforward in that 
we didn't have to change the args passed to container-executor.  Having the 
config in container-executor.cfg also allows you to change it without 
restarting the NM.

The new config is process.sched.priority.  It can be set to anything that 
setpriority takes, including negative values, since container-executor has root 
permissions.  If the config is left out, the existing behavior of priority 0 is 
retained.
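
A hedged sketch of what the container-executor.cfg entry could look like; the 
key name comes from the comment above, while the surrounding entries and the 
value are illustrative:
{code}
# Illustrative container-executor.cfg; only process.sched.priority is
# the new key described above, and the value 10 is an assumption.
yarn.nodemanager.linux-container-executor.group=hadoop
min.user.id=1000
process.sched.priority=10
{code}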

 allow OS scheduling priority of NM to be different than the containers it 
 launches
 --

 Key: YARN-443
 URL: https://issues.apache.org/jira/browse/YARN-443
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Affects Versions: 2.0.3-alpha, 0.23.6
Reporter: Thomas Graves
Assignee: Thomas Graves
 Attachments: YARN-443.patch


 It would be nice if we could have the nodemanager run at a different OS 
 scheduling priority than the containers, so that you can still communicate 
 with the nodemanager if the containers are out of control.
 On Linux we could launch the nodemanager at a higher priority, but then all 
 the containers it launches would also be at that higher priority, so we need 
 a way for the container executor to launch them at a lower priority.
 I'm not sure how this applies to Windows, if at all.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-443) allow OS scheduling priority of NM to be different than the containers it launches

2013-03-05 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated YARN-443:
---

Attachment: YARN-443.patch

 allow OS scheduling priority of NM to be different than the containers it 
 launches
 --

 Key: YARN-443
 URL: https://issues.apache.org/jira/browse/YARN-443
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Affects Versions: 2.0.3-alpha, 0.23.6
Reporter: Thomas Graves
Assignee: Thomas Graves
 Attachments: YARN-443.patch


 It would be nice if we could have the nodemanager run at a different OS 
 scheduling priority than the containers, so that you can still communicate 
 with the nodemanager if the containers are out of control.
 On Linux we could launch the nodemanager at a higher priority, but then all 
 the containers it launches would also be at that higher priority, so we need 
 a way for the container executor to launch them at a lower priority.
 I'm not sure how this applies to Windows, if at all.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-449) MRAppMaster classpath not set properly for unit tests in downstream projects

2013-03-05 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13594332#comment-13594332
 ] 

Siddharth Seth commented on YARN-449:
-

Ted, good catch on the isMiniYARNCluster property not being set. I assumed that 
would be in place.

The MiniCluster, as part of its startup process, sets parameters like the RM 
address in the configuration. 
This is then available via createJobConf in MiniMRCluster, or getConfig in 
MiniMRClientCluster. Instead of selectively copying out parameters, downstream 
projects should really be using the configuration objects returned by these 
APIs to submit jobs. That would allow things to keep working if parameters were 
changed.

Looked at the Pig code, and that's exactly what it is doing, so the tests 
passing is expected.
I'm not sure if Hive unit tests will work. In the test command you pasted, I 
believe TestCliDriver needs to be replaced with TestMinimrCliDriver to actually 
get it to use the MiniMRCluster.

IAC, does it make sense for HBase to make use of the config objects returned 
by getConfig so that similar changes in the future don't break unit tests?
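
A minimal sketch of the recommended pattern, assuming the classic MiniMRCluster 
API:
{code:java}
// Minimal sketch of the pattern recommended above: take the job conf
// from the mini cluster rather than hand-copying parameters into a
// separately built Configuration.
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MiniMRCluster;

public class MiniClusterConfSketch {
  static JobConf jobConfFor(MiniMRCluster mrCluster) {
    // createJobConf() already carries the RM address and the other
    // parameters the cluster chose at startup.
    return mrCluster.createJobConf();
  }
}
{code}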

 MRAppMaster classpath not set properly for unit tests in downstream projects
 

 Key: YARN-449
 URL: https://issues.apache.org/jira/browse/YARN-449
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Siddharth Seth
Priority: Blocker
 Attachments: hbase-TestHFileOutputFormat-wip.txt, 
 hbase-TestingUtility-wip.txt


 Post YARN-429, unit tests for HBase continue to fail since the classpath for 
 the MRAppMaster is not being set correctly.
 Reverting YARN-129 may fix this, but I'm not sure that's the correct 
 solution. My guess is, as Alexandro pointed out in YARN-129, maven 
 classloader magic is messing up java.class.path.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-449) MRAppMaster classpath not set properly for unit tests in downstream projects

2013-03-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13594377#comment-13594377
 ] 

Ted Yu commented on YARN-449:
-

I tried the following change:
{code}
Index: 
hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
===
--- hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java 
(revision 1453107)
+++ hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java 
(working copy)
@@ -1578,6 +1578,7 @@
 mrCluster = new MiniMRCluster(servers,
   FS_URI != null ? FS_URI : FileSystem.get(conf).getUri().toString(), 1,
   null, null, new JobConf(this.conf));
+this.conf = mrCluster.createJobConf();
 JobConf jobConf = MapreduceTestingShim.getJobConf(mrCluster);
 if (jobConf == null) {
   jobConf = mrCluster.createJobConf();
{code}
With this change, mapreduce.TestTableMapReduce#testMultiRegionTable hangs when 
running against hadoop 1.0.

 MRAppMaster classpath not set properly for unit tests in downstream projects
 

 Key: YARN-449
 URL: https://issues.apache.org/jira/browse/YARN-449
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Siddharth Seth
Priority: Blocker
 Attachments: hbase-TestHFileOutputFormat-wip.txt, 
 hbase-TestingUtility-wip.txt


 Post YARN-429, unit tests for HBase continue to fail since the classpath for 
 the MRAppMaster is not being set correctly.
 Reverting YARN-129 may fix this, but I'm not sure that's the correct 
 solution. My guess is, as Alexandro pointed out in YARN-129, maven 
 classloader magic is messing up java.class.path.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira