[jira] [Commented] (YARN-355) RM app submission jams under load

2013-02-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13573425#comment-13573425
 ] 

Hudson commented on YARN-355:
-

Integrated in Hadoop-Hdfs-0.23-Build #518 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/518/])
YARN-355. Fixes a bug where RM app submission could jam under load. 
Contributed by Daryn Sharp. (Revision 1443136)

 Result = SUCCESS
sseth : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1443136
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/RMDelegationTokenRenewer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/RMDelegationTokenIdentifier.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestClientRMTokens.java


 RM app submission jams under load
 -

 Key: YARN-355
 URL: https://issues.apache.org/jira/browse/YARN-355
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.0.0-alpha, 0.23.6
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
 Fix For: 2.0.3-alpha, 0.23.7

 Attachments: YARN-355.branch-23.patch, YARN-355.branch-23.patch, 
 YARN-355.branch-23.patch, YARN-355.patch, YARN-355.patch, YARN-355.patch


 The RM performs a loopback connection to itself to renew its own tokens.  If 
 app submissions consume all RPC handlers for {{ClientRMProtocol}}, then app 
 submissions block because the RM cannot loop back to itself to perform the renewal.
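The failure mode in the description can be sketched with a plain ExecutorService (illustrative Java only, not the actual RM handler code; all names are made up): a bounded handler pool whose tasks block on work submitted back into the same pool cannot make progress.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class LoopbackDeadlockDemo {
    // A pool of one thread stands in for the RM's bounded set of RPC
    // handlers, all occupied by app submissions.
    public static String simulate() {
        ExecutorService handlers = Executors.newFixedThreadPool(1);
        try {
            Future<String> submission = handlers.submit(() -> {
                // The submission handler "loops back" by scheduling the
                // token renewal on the same exhausted pool and waiting.
                Future<String> renewal = handlers.submit(() -> "renewed");
                try {
                    return renewal.get(500, TimeUnit.MILLISECONDS);
                } catch (TimeoutException e) {
                    return "deadlocked"; // renewal never ran: no free handler
                }
            });
            return submission.get();
        } catch (Exception e) {
            return "error";
        } finally {
            handlers.shutdownNow();
        }
    }

    public static void main(String[] args) {
        System.out.println(simulate()); // prints "deadlocked"
    }
}
```

With more handler threads the renewal would eventually run, which is why the problem only shows up once submissions saturate every handler.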

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-150) AppRejectedTransition does not unregister app from master service and scheduler

2013-02-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13573426#comment-13573426
 ] 

Hudson commented on YARN-150:
-

Integrated in Hadoop-Hdfs-0.23-Build #518 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/518/])
YARN-150. AppRejectedTransition does not unregister app from master service 
and scheduler (Bikas Shah via tgraves) (Revision 1443143)

 Result = SUCCESS
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1443143
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/TestRMAppAttemptTransitions.java


 AppRejectedTransition does not unregister app from master service and 
 scheduler
 ---

 Key: YARN-150
 URL: https://issues.apache.org/jira/browse/YARN-150
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 0.23.3, 3.0.0, 2.0.0-alpha
Reporter: Bikas Saha
Assignee: Bikas Saha
 Fix For: 2.0.3-alpha, 0.23.7

 Attachments: MAPREDUCE-4436.1.patch


 AttemptStartedTransition() adds the app to the ApplicationMasterService and 
 scheduler. When the scheduler rejects the app, AppRejectedTransition() 
 forgets to unregister it from the ApplicationMasterService.
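The register/unregister asymmetry described above can be sketched as follows (illustrative Java; the class and method names are made up, not the real RMAppAttemptImpl transitions):

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch; not the real RMAppAttemptImpl transition code.
public class MasterServiceSketch {
    private final Set<String> registered = new HashSet<>();

    // Mirrors AttemptStartedTransition: register with the master service.
    public void attemptStarted(String appAttemptId) {
        registered.add(appAttemptId);
    }

    // Mirrors AppRejectedTransition after the fix: the unregister step
    // that the bug omitted, so rejected attempts no longer leak.
    public void appRejected(String appAttemptId) {
        registered.remove(appAttemptId);
    }

    public boolean isRegistered(String appAttemptId) {
        return registered.contains(appAttemptId);
    }
}
```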

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-83) Change package of YarnClient to include apache

2013-02-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-83?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13573430#comment-13573430
 ] 

Hudson commented on YARN-83:


Integrated in Hadoop-Hdfs-0.23-Build #518 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/518/])
YARN-83. missed checking in pom.xml in original checkin (Revision 1443123)
YARN-83. Change package of YarnClient to include apache (Bikas Saha via 
tgraves). This also includes YARN-29. (Revision 1443118)

 Result = SUCCESS
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1443123
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-project/pom.xml

tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1443118
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/pom.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/dev-support
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/dev-support/findbugs-exclude.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/YarnClient.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/YarnClientImpl.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/NodeCLI.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/YarnCLI.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestYarnClient.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java
* /hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/pom.xml


 Change package of YarnClient to include apache
 --

 Key: YARN-83
 URL: 

[jira] [Commented] (YARN-40) Provide support for missing yarn commands

2013-02-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-40?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13573433#comment-13573433
 ] 

Hudson commented on YARN-40:


Integrated in Hadoop-Hdfs-0.23-Build #518 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/518/])
YARN-40. Provide support for missing yarn commands (Devaraj K via tgraves) 
(Revision 1443119)

 Result = SUCCESS
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1443119
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-yarn-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/bin/yarn
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/YarnCommands.apt.vm


 Provide support for missing yarn commands
 -

 Key: YARN-40
 URL: https://issues.apache.org/jira/browse/YARN-40
 Project: Hadoop YARN
  Issue Type: Bug
  Components: client
Affects Versions: 2.0.0-alpha
Reporter: Devaraj K
Assignee: Devaraj K
 Fix For: 2.0.3-alpha, 0.23.7

 Attachments: MAPREDUCE-4155-1.patch, MAPREDUCE-4155.patch, 
 YARN-40-1.patch, YARN-40-20120917.1.txt, YARN-40-20120917.txt, 
 YARN-40-20120924.txt, YARN-40-20121008.txt, YARN-40.patch


 1. status app-id
 2. kill app-id (issue already tracked as MAPREDUCE-3793)
 3. list-apps [all]
 4. nodes-report

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-355) RM app submission jams under load

2013-02-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13573447#comment-13573447
 ] 

Hudson commented on YARN-355:
-

Integrated in Hadoop-Hdfs-trunk #1309 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1309/])
YARN-355. Fixes a bug where RM app submission could jam under load. 
Contributed by Daryn Sharp. (Revision 1443131)

 Result = FAILURE
sseth : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1443131
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/YarnClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/security/RMDelegationTokenRenewer.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/resources
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/RMDelegationTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestClientRMTokens.java


 RM app submission jams under load
 -

 Key: YARN-355
 URL: https://issues.apache.org/jira/browse/YARN-355
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.0.0-alpha, 0.23.6
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
 Fix For: 2.0.3-alpha, 0.23.7

 Attachments: YARN-355.branch-23.patch, YARN-355.branch-23.patch, 
 YARN-355.branch-23.patch, YARN-355.patch, YARN-355.patch, YARN-355.patch


 The RM performs a loopback connection to itself to renew its own tokens.  If 
 app submissions consume all RPC handlers for {{ClientRMProtocol}}, then app 
 submissions block because the RM cannot loop back to itself to perform the renewal.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-3) Add support for CPU isolation/monitoring of containers

2013-02-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13573455#comment-13573455
 ] 

Hudson commented on YARN-3:
---

Integrated in Hadoop-Hdfs-trunk #1309 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1309/])
YARN-355. Fixes a bug where RM app submission could jam under load. 
Contributed by Daryn Sharp. (Revision 1443131)
YARN-357. App submission should not be synchronized (daryn) (Revision 1443016)
YARN-3. Merged to branch-2. (Revision 1443011)

 Result = FAILURE
sseth : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1443131
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/YarnClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/security/RMDelegationTokenRenewer.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/resources
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/RMDelegationTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestClientRMTokens.java

daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1443016
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestClientRMService.java

acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1443011
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt


 Add support for CPU isolation/monitoring of containers
 --

 Key: YARN-3
 URL: https://issues.apache.org/jira/browse/YARN-3
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun C Murthy
Assignee: Andrew Ferguson
 Fix For: 2.0.3-alpha

 Attachments: mapreduce-4334-design-doc.txt, 
 mapreduce-4334-design-doc-v2.txt, MAPREDUCE-4334-executor-v1.patch, 
 MAPREDUCE-4334-executor-v2.patch, MAPREDUCE-4334-executor-v3.patch, 
 MAPREDUCE-4334-executor-v4.patch, MAPREDUCE-4334-pre1.patch, 
 MAPREDUCE-4334-pre2.patch, MAPREDUCE-4334-pre2-with_cpu.patch, 
 MAPREDUCE-4334-pre3.patch, MAPREDUCE-4334-pre3-with_cpu.patch, 
 MAPREDUCE-4334-v1.patch, MAPREDUCE-4334-v2.patch, YARN-3-lce_only-v1.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-355) RM app submission jams under load

2013-02-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13573499#comment-13573499
 ] 

Hudson commented on YARN-355:
-

Integrated in Hadoop-Mapreduce-trunk #1337 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1337/])
YARN-355. Fixes a bug where RM app submission could jam under load. 
Contributed by Daryn Sharp. (Revision 1443131)

 Result = SUCCESS
sseth : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1443131
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/YarnClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/security/RMDelegationTokenRenewer.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/resources
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/RMDelegationTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestClientRMTokens.java


 RM app submission jams under load
 -

 Key: YARN-355
 URL: https://issues.apache.org/jira/browse/YARN-355
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.0.0-alpha, 0.23.6
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
 Fix For: 2.0.3-alpha, 0.23.7

 Attachments: YARN-355.branch-23.patch, YARN-355.branch-23.patch, 
 YARN-355.branch-23.patch, YARN-355.patch, YARN-355.patch, YARN-355.patch


 The RM performs a loopback connection to itself to renew its own tokens.  If 
 app submissions consume all RPC handlers for {{ClientRMProtocol}}, then app 
 submissions block because the RM cannot loop back to itself to perform the renewal.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-357) App submission should not be synchronized

2013-02-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13573503#comment-13573503
 ] 

Hudson commented on YARN-357:
-

Integrated in Hadoop-Mapreduce-trunk #1337 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1337/])
YARN-357. App submission should not be synchronized (daryn) (Revision 
1443016)

 Result = SUCCESS
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1443016
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestClientRMService.java


 App submission should not be synchronized
 -

 Key: YARN-357
 URL: https://issues.apache.org/jira/browse/YARN-357
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 0.23.3, 3.0.0, 2.0.0-alpha
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha, 0.23.7

 Attachments: YARN-357.branch-23.patch, YARN-357.patch, 
 YARN-357.patch, YARN-357.txt


 MAPREDUCE-2953 fixed a race condition with querying of app status by making 
 {{RMClientService#submitApplication}} synchronously invoke 
 {{RMAppManager#submitApplication}}. However, the {{synchronized}} keyword was 
 also added to {{RMAppManager#submitApplication}} with the comment:
 bq. I made the submitApplication synchronized to keep it consistent with the 
 other routines in RMAppManager although I do not believe it needs it since 
 the rmapp datastructure is already a concurrentMap and I don't see anything 
 else that would be an issue.
 It's been observed that app submission latency is being unnecessarily 
 impacted.
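The reasoning in the quoted comment can be sketched as follows (illustrative Java, not the actual RMAppManager code; names are made up): a ConcurrentMap already makes the insert atomic, so a method-level lock only serializes unrelated submissions.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative sketch; not the actual RMAppManager. With a ConcurrentMap,
// putIfAbsent gives atomic duplicate detection on its own, so a
// method-level "synchronized" would add latency without adding safety.
public class AppTableSketch {
    private final ConcurrentMap<String, String> apps = new ConcurrentHashMap<>();

    // Returns true when this call won the race to register the app;
    // a concurrent duplicate submission sees false.
    public boolean submitApplication(String appId, String app) {
        return apps.putIfAbsent(appId, app) == null;
    }
}
```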

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-3) Add support for CPU isolation/monitoring of containers

2013-02-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13573507#comment-13573507
 ] 

Hudson commented on YARN-3:
---

Integrated in Hadoop-Mapreduce-trunk #1337 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1337/])
YARN-355. Fixes a bug where RM app submission could jam under load. 
Contributed by Daryn Sharp. (Revision 1443131)
YARN-357. App submission should not be synchronized (daryn) (Revision 1443016)
YARN-3. Merged to branch-2. (Revision 1443011)

 Result = SUCCESS
sseth : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1443131
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/YarnClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/security/RMDelegationTokenRenewer.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/resources
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/RMDelegationTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestClientRMTokens.java

daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1443016
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestClientRMService.java

acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1443011
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt


 Add support for CPU isolation/monitoring of containers
 --

 Key: YARN-3
 URL: https://issues.apache.org/jira/browse/YARN-3
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun C Murthy
Assignee: Andrew Ferguson
 Fix For: 2.0.3-alpha

 Attachments: mapreduce-4334-design-doc.txt, 
 mapreduce-4334-design-doc-v2.txt, MAPREDUCE-4334-executor-v1.patch, 
 MAPREDUCE-4334-executor-v2.patch, MAPREDUCE-4334-executor-v3.patch, 
 MAPREDUCE-4334-executor-v4.patch, MAPREDUCE-4334-pre1.patch, 
 MAPREDUCE-4334-pre2.patch, MAPREDUCE-4334-pre2-with_cpu.patch, 
 MAPREDUCE-4334-pre3.patch, MAPREDUCE-4334-pre3-with_cpu.patch, 
 MAPREDUCE-4334-v1.patch, MAPREDUCE-4334-v2.patch, YARN-3-lce_only-v1.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-142) Change YARN APIs to throw IOException

2013-02-07 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13573516#comment-13573516
 ] 

Tom White commented on YARN-142:


 I think it'd be useful to have the APIs throw IOException and 
 YarnRemoteException, with IOException indicating errors from the RPC layer 
 and YarnException indicating errors from YARN itself.

I see the latest patch has

{noformat}throws 
UnknownApplicationException,YarnRemoteException,IOException{noformat}

even though UnknownApplicationException is a subclass of YarnRemoteException, 
and YarnRemoteException is a subclass of IOException. It would be simpler to 
make the method signature

{noformat}throws IOException{noformat}

and draw attention to the different subclasses in the javadoc if needed.
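A minimal sketch of the hierarchy point (the exception classes below are illustrative stand-ins, not the real YARN types): since the narrower types extend IOException, declaring only {{throws IOException}} still lets callers catch the specific subclasses.

```java
import java.io.IOException;

// Illustrative stand-ins for the hierarchy under discussion (not the
// real YARN classes): UnknownApplicationException extends
// YarnRemoteException, which extends IOException.
class YarnRemoteExceptionSketch extends IOException {}
class UnknownApplicationExceptionSketch extends YarnRemoteExceptionSketch {}

public class ApiSignatureSketch {
    // Declaring only IOException covers all three types...
    public static void getApplicationReport(boolean known) throws IOException {
        if (!known) {
            throw new UnknownApplicationExceptionSketch();
        }
    }

    // ...while callers can still catch the narrower subclasses.
    public static String classify() {
        try {
            getApplicationReport(false);
            return "no exception";
        } catch (UnknownApplicationExceptionSketch e) {
            return "specific subclass";
        } catch (IOException e) {
            return "generic IOException";
        }
    }
}
```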

 Change YARN APIs to throw IOException
 -

 Key: YARN-142
 URL: https://issues.apache.org/jira/browse/YARN-142
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 0.23.3, 2.0.0-alpha
Reporter: Siddharth Seth
Assignee: Xuan Gong
Priority: Critical
 Attachments: YARN-142.1.patch, YARN-142.2.patch, YARN-142.3.patch, 
 YARN-142.4.patch


 Ref: MAPREDUCE-4067
 All YARN APIs currently throw YarnRemoteException.
 1) This cannot be extended in its current form.
 2) The RPC layer can throw IOExceptions. These end up showing up as 
 UndeclaredThrowableExceptions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-236) RM should point tracking URL to RM web page when app fails to start

2013-02-07 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13573546#comment-13573546
 ] 

Jason Lowe commented on YARN-236:
-

Unfortunately YARN-165 only handled the case where the AM crashes after making 
it to the RUNNING state.  If the AM crashes before it registers with the RM, 
that fix doesn't apply.  A quick way to see this in action is to either set 
{{yarn.app.mapreduce.am.command-opts}} to some garbage string or run a 
wordcount job with {{mapreduce.jobtracker.split.metainfo.maxsize}} set to 1.

 RM should point tracking URL to RM web page when app fails to start
 ---

 Key: YARN-236
 URL: https://issues.apache.org/jira/browse/YARN-236
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 0.23.4
Reporter: Jason Lowe
Assignee: Jason Lowe
 Attachments: YARN-236.patch


 Similar to YARN-165, the RM should redirect the tracking URL to the specific 
 app page on the RM web UI when the application fails to start.  For example, 
 if the AM completely fails to start due to a bad AM config or a bad job config 
 (like an invalid queue name), then the user gets the unhelpful "The requested 
 application exited before setting a tracking URL" message.
 Usually the diagnostic string on the RM app page has something useful, so we 
 might as well point there.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-388) testContainerKillOnMemoryOverflow is failing

2013-02-07 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-388:


Attachment: YARN-388.patch

Patch to make the message pattern being checked a bit more lenient about the 
memory-usage format.
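The idea can be sketched with a case-insensitive pattern (illustrative Java only; the real test's pattern and the NM's exact message format may differ):

```java
import java.util.regex.Pattern;

// Illustrative sketch of the "lenient pattern" idea; not the actual
// test code from the patch.
public class MemoryMessageSketch {
    // CASE_INSENSITIVE accepts "mb", "Mb", "MB", etc., so a change in
    // humanReadableInt()'s capitalization no longer breaks the match.
    private static final Pattern USAGE = Pattern.compile(
            "\\d+(\\.\\d+)?\\s*[KMGT]?B of \\d+(\\.\\d+)?\\s*[KMGT]?B",
            Pattern.CASE_INSENSITIVE);

    public static boolean matches(String message) {
        return USAGE.matcher(message).find();
    }
}
```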

 testContainerKillOnMemoryOverflow is failing
 

 Key: YARN-388
 URL: https://issues.apache.org/jira/browse/YARN-388
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jason Lowe
 Attachments: YARN-388.patch


 testContainerKillOnMemoryOverflow is failing after HADOOP-9252 since 
 humanReadableInt() is now returning megabytes as "Mb" instead of "mb".

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-365) Each NM heartbeat should not generate an event for the Scheduler

2013-02-07 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13573592#comment-13573592
 ] 

Thomas Graves commented on YARN-365:


Sorry Sid, I missed your comments and a few important points yesterday in my 
quick review.

By aggregating I meant that the information in a heartbeat is aggregated with 
all previous heartbeats for that single node and then handled all at once, in a 
single pass, by the scheduler before it tries to do any allocations. Really it's 
the same as your comment (which I missed yesterday) that the scheduler should 
really be pulling everything available in the node being processed.

I was originally thinking along the lines of keeping a single list each for 
completed and launched containers that it would just append to, rather than 
keeping a queue of the individual completed and launched lists (one per 
heartbeat). But as long as the scheduler handles all the updates in the queue 
before it tries to schedule, you get the same effect.  I'll review the current 
patch in more detail.

A few comments on the current patch:
- we don't need to add an update to the queue if there were no changes
- I don't think the current patch is handling all the updates in a single 
scheduler pass.
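The drain-then-schedule idea can be sketched as follows (illustrative Java; names are made up and this is not the actual scheduler code):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative sketch; not the actual scheduler. Heartbeats enqueue node
// updates, and the scheduler drains every pending update for the node in
// one pass before it attempts any allocations.
public class NodeUpdateBufferSketch {
    private final Queue<String> updates = new ArrayDeque<>();

    // Called once per NM heartbeat with that heartbeat's changes.
    public synchronized void onHeartbeat(String update) {
        updates.add(update);
    }

    // Drain-then-schedule: apply all queued launched/completed container
    // updates, returning how many were handled; only afterwards would the
    // scheduler try to allocate on this node.
    public synchronized int processAllBeforeScheduling() {
        int handled = 0;
        while (updates.poll() != null) {
            handled++;
        }
        return handled;
    }
}
```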



 Each NM heartbeat should not generate an event for the Scheduler
 -

 Key: YARN-365
 URL: https://issues.apache.org/jira/browse/YARN-365
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager, scheduler
Affects Versions: 0.23.5
Reporter: Siddharth Seth
Assignee: Xuan Gong
 Attachments: Prototype2.txt, Prototype3.txt, YARN-365.1.patch, 
 YARN-365.2.patch, YARN-365.3.patch


 Follow up from YARN-275
 https://issues.apache.org/jira/secure/attachment/12567075/Prototype.txt

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-388) testContainerKillOnMemoryOverflow is failing

2013-02-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13573603#comment-13573603
 ] 

Hadoop QA commented on YARN-388:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12568422/YARN-388.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/392//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/392//console

This message is automatically generated.

 testContainerKillOnMemoryOverflow is failing
 

 Key: YARN-388
 URL: https://issues.apache.org/jira/browse/YARN-388
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Jason Lowe
Assignee: Jason Lowe
 Attachments: YARN-388.patch


 testContainerKillOnMemoryOverflow is failing after HADOOP-9252 since 
 humanReadableInt() is now returning megabytes as Mb instead of mb.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-362) Unexpected extra results when using the task attempt table search

2013-02-07 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13573686#comment-13573686
 ] 

Ravi Prakash commented on YARN-362:
---

I manually tested the patch. Looks good to me. +1 :D

 Unexpected extra results when using the task attempt table search
 -

 Key: YARN-362
 URL: https://issues.apache.org/jira/browse/YARN-362
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.0.3-alpha, 0.23.5
Reporter: Jason Lowe
Assignee: Ravi Prakash
Priority: Minor
 Attachments: MAPREDUCE-4960.patch, YARN-362.branch-0.23.patch, 
 YARN-362.patch


 When using the search box on the web UI to search for a specific task number 
 (e.g.: 0831), sometimes unexpected extra results are shown.  Using the web 
 browser's built-in search-within-page does not show any hits, so these look 
 like completely spurious results.
 It looks like the raw timestamp value for time columns, which is not shown in 
 the table, is also being searched with the search box.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-377) Fix test failure for HADOOP-9252

2013-02-07 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13573722#comment-13573722
 ] 

Jason Lowe commented on YARN-377:
-

Sorry, didn't see this when I filed YARN-388.  Any ETA on this?  If it's coming 
shortly then we can dup that JIRA to this, otherwise I would like to get the 
test failure fixed soon.  (Or I can put up a patch for fixing the 
humanReadableInt calls from the containers monitor.)

 Fix test failure for HADOOP-9252
 

 Key: YARN-377
 URL: https://issues.apache.org/jira/browse/YARN-377
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Chris Nauroth
Priority: Minor

 HADOOP-9252 slightly changes the format of some StringUtils outputs.  It may 
 cause test failures.
 Also, some methods were deprecated by HADOOP-9252.  The use of them should be 
 replaced with the new methods.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-365) Each NM heartbeat should not generate and event for the Scheduler

2013-02-07 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13573727#comment-13573727
 ] 

Xuan Gong commented on YARN-365:


Thanks for the comment. I will add the aggregation part in the next patch.

 Each NM heartbeat should not generate and event for the Scheduler
 -

 Key: YARN-365
 URL: https://issues.apache.org/jira/browse/YARN-365
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager, scheduler
Affects Versions: 0.23.5
Reporter: Siddharth Seth
Assignee: Xuan Gong
 Attachments: Prototype2.txt, Prototype3.txt, YARN-365.1.patch, 
 YARN-365.2.patch, YARN-365.3.patch


 Follow up from YARN-275
 https://issues.apache.org/jira/secure/attachment/12567075/Prototype.txt

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-388) testContainerKillOnMemoryOverflow is failing

2013-02-07 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13573813#comment-13573813
 ] 

Chris Nauroth commented on YARN-388:


+1

I applied the patch and verified that the test passes.

[~jlowe], I leave it up to you whether to commit this right now or roll it into 
YARN-377, as per the discussion there.  Thanks!


 testContainerKillOnMemoryOverflow is failing
 

 Key: YARN-388
 URL: https://issues.apache.org/jira/browse/YARN-388
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Jason Lowe
Assignee: Jason Lowe
 Attachments: YARN-388.patch


 testContainerKillOnMemoryOverflow is failing after HADOOP-9252 since 
 humanReadableInt() is now returning megabytes as Mb instead of mb.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-377) Fix test failure for HADOOP-9252

2013-02-07 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13573872#comment-13573872
 ] 

Jason Lowe commented on YARN-377:
-

If the total fix is coming real soon, then no worries let's just do it in this 
JIRA.  Thanks!

 Fix test failure for HADOOP-9252
 

 Key: YARN-377
 URL: https://issues.apache.org/jira/browse/YARN-377
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Chris Nauroth
Priority: Minor

 HADOOP-9252 slightly changes the format of some StringUtils outputs.  It may 
 cause test failures.
 Also, some methods were deprecated by HADOOP-9252.  The use of them should be 
 replaced with the new methods.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-385) ResourceRequestPBImpl's toString() is missing location and # containers

2013-02-07 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13573881#comment-13573881
 ] 

Siddharth Seth commented on YARN-385:
-

+1. Trivial change, doesn't require a unit test. Committing...

 ResourceRequestPBImpl's toString() is missing location and # containers
 ---

 Key: YARN-385
 URL: https://issues.apache.org/jira/browse/YARN-385
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: api
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-385.patch


 ResourceRequestPBImpl's toString method includes priority and resource 
 capability, but omits location and number of containers.
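The change the description asks for can be sketched like this. The field and string layout below are illustrative assumptions, not the actual YARN-385 patch: the point is simply that the string covers all four parts of a resource request, including the two that were previously omitted.

```java
// Hypothetical sketch of a complete resource-request toString(); field names
// and format are assumptions, not copied from ResourceRequestPBImpl.
class ResourceRequestToString {
  static String describe(int priority, String hostName,
                         int memoryMb, int numContainers) {
    return "{Priority: " + priority
        + ", Host: " + hostName                 // location, previously missing
        + ", Capability: memory=" + memoryMb
        + ", #Containers: " + numContainers     // count, previously missing
        + "}";
  }
}
```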

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-383) AMRMClientImpl should handle null rmClient in stop()

2013-02-07 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13573880#comment-13573880
 ] 

Siddharth Seth commented on YARN-383:
-

+1. Trivial change, doesn't require a unit test. Committing...

 AMRMClientImpl should handle null rmClient in stop()
 

 Key: YARN-383
 URL: https://issues.apache.org/jira/browse/YARN-383
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah
Priority: Minor
 Attachments: YARN-383.1.patch, YARN-383.2.patch, YARN-383.3.patch


 2013-02-06 09:31:33,813 INFO  [Thread-2] service.CompositeService 
 (CompositeService.java:stop(101)) - Error stopping 
 org.apache.hadoop.yarn.client.AMRMClientImpl
 org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy since it 
 is null
 at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:605)
 at 
 org.apache.hadoop.yarn.client.AMRMClientImpl.stop(AMRMClientImpl.java:150)
 at 
 org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:99)
 at 
 org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:89)
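The stack trace above comes from calling RPC.stopProxy on a proxy that was never created. A hedged sketch of the guard this issue asks for follows; the surrounding class and the ProxyCloser interface are illustrative stand-ins for AMRMClientImpl and RPC, not the actual patch code.

```java
// Illustrative null-guard for stop(); names are stand-ins for the real
// AMRMClientImpl.rmClient and RPC.stopProxy from the stack trace above.
class SafeStop {
  interface ProxyCloser { void stopProxy(Object proxy); }

  /** Returns true if the proxy was closed; false (no-op) when it was null. */
  static boolean stop(Object rmClient, ProxyCloser rpc) {
    if (rmClient == null) {
      return false;  // skip stopProxy: closing a null proxy throws
    }
    rpc.stopProxy(rmClient);
    return true;
  }
}
```

With the guard, stopping a service that never started (as CompositeService does during error cleanup) becomes a harmless no-op instead of raising HadoopIllegalArgumentException.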

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-383) AMRMClientImpl should handle null rmClient in stop()

2013-02-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13573894#comment-13573894
 ] 

Hudson commented on YARN-383:
-

Integrated in Hadoop-trunk-Commit #3340 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3340/])
YARN-383. AMRMClientImpl should handle null rmClient in stop(). Contributed 
by Hitesh Shah. (Revision 1443699)

 Result = SUCCESS
sseth : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1443699
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/dev-support/findbugs-exclude.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/AMRMClientImpl.java
* /hadoop/common/trunk/hadoop-yarn-project/pom.xml


 AMRMClientImpl should handle null rmClient in stop()
 

 Key: YARN-383
 URL: https://issues.apache.org/jira/browse/YARN-383
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah
Priority: Minor
 Fix For: 2.0.4-beta

 Attachments: YARN-383.1.patch, YARN-383.2.patch, YARN-383.3.patch


 2013-02-06 09:31:33,813 INFO  [Thread-2] service.CompositeService 
 (CompositeService.java:stop(101)) - Error stopping 
 org.apache.hadoop.yarn.client.AMRMClientImpl
 org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy since it 
 is null
 at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:605)
 at 
 org.apache.hadoop.yarn.client.AMRMClientImpl.stop(AMRMClientImpl.java:150)
 at 
 org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:99)
 at 
 org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:89)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-385) ResourceRequestPBImpl's toString() is missing location and # containers

2013-02-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13573895#comment-13573895
 ] 

Hudson commented on YARN-385:
-

Integrated in Hadoop-trunk-Commit #3340 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3340/])
YARN-385. Add missing fields - location and #containers to 
ResourceRequestPBImpl's toString(). Contributed by Sandy Ryza. (Revision 
1443702)

 Result = SUCCESS
sseth : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1443702
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceRequestPBImpl.java


 ResourceRequestPBImpl's toString() is missing location and # containers
 ---

 Key: YARN-385
 URL: https://issues.apache.org/jira/browse/YARN-385
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: api
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Fix For: 2.0.4-beta

 Attachments: YARN-385.patch


 ResourceRequestPBImpl's toString method includes priority and resource 
 capability, but omits location and number of containers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-142) Change YARN APIs to throw IOException

2013-02-07 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13573901#comment-13573901
 ] 

Siddharth Seth commented on YARN-142:
-

bq. and draw attention to the different subclasses in the javadoc if needed.
While YarnRemoteException is a subclass of IOException, explicitly calling it 
out in the API isn't super useful.  Was hoping to keep exceptions generated by 
YARN separate from RPC errors. 
The RPC layer, however, does not seem to handle anything other than 
IOException and its derivatives. Changing YarnRemoteException to be 
independent of IOException would be an incompatible change at a later point, 
so maybe we should consider fixing the RPC layer to allow additional 
exceptions right now.
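The compatibility point above can be made concrete with a small sketch. The class bodies here are illustrative, not the actual YARN classes: if YarnRemoteException extends IOException, any caller catching IOException for RPC failures also swallows YARN-generated errors, so the two can only be separated by testing for the subclass first.

```java
import java.io.IOException;

// Illustrative model of the exception hierarchy under discussion; the
// classify() helper is hypothetical, showing how a caller must distinguish
// YARN errors from generic RPC IOExceptions.
class ExceptionHierarchy {
  static class YarnRemoteException extends IOException {
    YarnRemoteException(String msg) { super(msg); }
  }

  /** Classify an error the way a caller would have to today. */
  static String classify(IOException e) {
    if (e instanceof YarnRemoteException) {
      return "yarn";  // must check the subclass before the general case
    }
    return "rpc";
  }
}
```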



 Change YARN APIs to throw IOException
 -

 Key: YARN-142
 URL: https://issues.apache.org/jira/browse/YARN-142
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 0.23.3, 2.0.0-alpha
Reporter: Siddharth Seth
Assignee: Xuan Gong
Priority: Critical
 Attachments: YARN-142.1.patch, YARN-142.2.patch, YARN-142.3.patch, 
 YARN-142.4.patch


 Ref: MAPREDUCE-4067
 All YARN APIs currently throw YarnRemoteException.
 1) This cannot be extended in its current form.
 2) The RPC layer can throw IOExceptions. These end up showing up as 
 UndeclaredThrowableExceptions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-377) Fix test failure for HADOOP-9252

2013-02-07 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated YARN-377:
---

Attachment: YARN-377.1.patch

This patch updates {{ContainersMonitorImpl}} to stop using the recently 
deprecated {{StringUtils#humanReadableInt}} and updates the regex in 
{{TestContainersMonitor}} to match the new format.  Just to be safe, I made the 
regex flexible enough to match any prefix that could be returned from 
{{StringUtils#TraditionalBinaryPrefix#long2String}}, even though we're unlikely 
to see memory usage measured in the exabytes on our dev machines.  :-)
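The flexible-regex idea described above can be sketched as follows. The exact pattern in the real patch may differ; this one simply accepts any of the binary prefixes (K, M, G, T, P, E) that a human-readable byte formatter like StringUtils' TraditionalBinaryPrefix could emit, so the assertion survives magnitude changes.

```java
import java.util.regex.Pattern;

// Hedged sketch of a prefix-agnostic memory-string matcher; the pattern is
// an assumption, not the exact regex from the YARN-377 patch.
class MemoryStringMatcher {
  // e.g. "4.0 GB" or "512 B" — optional binary prefix before the "B"
  private static final Pattern MEM =
      Pattern.compile("\\d+(\\.\\d+)? [KMGTPE]?B");

  static boolean matches(String s) {
    return MEM.matcher(s).matches();  // matches() anchors the whole string
  }
}
```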

 Fix test failure for HADOOP-9252
 

 Key: YARN-377
 URL: https://issues.apache.org/jira/browse/YARN-377
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Chris Nauroth
Priority: Minor
 Attachments: YARN-377.1.patch


 HADOOP-9252 slightly changes the format of some StringUtils outputs.  It may 
 cause test failures.
 Also, some methods were deprecated by HADOOP-9252.  The use of them should be 
 replaced with the new methods.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-377) Fix test failure for HADOOP-9252

2013-02-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13574001#comment-13574001
 ] 

Hadoop QA commented on YARN-377:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12568477/YARN-377.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/393//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/393//console

This message is automatically generated.

 Fix test failure for HADOOP-9252
 

 Key: YARN-377
 URL: https://issues.apache.org/jira/browse/YARN-377
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Chris Nauroth
Priority: Minor
 Attachments: YARN-377.1.patch


 HADOOP-9252 slightly changes the format of some StringUtils outputs.  It may 
 cause test failures.
 Also, some methods were deprecated by HADOOP-9252.  The use of them should be 
 replaced with the new methods.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-377) Fix test failure for HADOOP-9252

2013-02-07 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated YARN-377:
---

Attachment: YARN-377.2.patch

Thank you, Nicholas.  Here is version 2 of the patch, switching to upper-case 
'B'.  Since we're making a commitment to upper-case output, I simplified the 
test regex to check only for upper-case letters.

 Fix test failure for HADOOP-9252
 

 Key: YARN-377
 URL: https://issues.apache.org/jira/browse/YARN-377
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Chris Nauroth
Priority: Minor
 Attachments: YARN-377.1.patch, YARN-377.2.patch


 HADOOP-9252 slightly changes the format of some StringUtils outputs.  It may 
 cause test failures.
 Also, some methods were deprecated by HADOOP-9252.  The use of them should be 
 replaced with the new methods.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-365) Each NM heartbeat should not generate and event for the Scheduler

2013-02-07 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-365:
---

Attachment: YARN-365.4.patch

1. In a single scheduler pass, it will handle all update info.
2. For every event other than a node_update event, it will do a node_update 
first in order to sync up with the status.
3. Define an AtomicInteger to track how many node_update events are in the CS 
queue.
4. The RMNode will send out a node_update event when either (a) the RMNode 
status changes, or (b) there are no node_update events in the CS queue.
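The counter-based coalescing described in points 3 and 4 can be sketched like this. The names are illustrative assumptions, not the patch's own classes: an AtomicInteger counts node_update events already queued for the scheduler, and a heartbeat only enqueues a new event when the node's status changed or nothing is pending, so the queue never grows by one event per heartbeat.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of heartbeat coalescing with an AtomicInteger; names
// are illustrative, not taken from YARN-365.4.patch.
class HeartbeatCoalescer {
  private final AtomicInteger pendingUpdates = new AtomicInteger(0);

  /** Decide whether this heartbeat should enqueue a scheduler event. */
  boolean onHeartbeat(boolean statusChanged) {
    if (statusChanged || pendingUpdates.get() == 0) {
      pendingUpdates.incrementAndGet();
      return true;   // enqueue a node_update event
    }
    return false;    // coalesce: an update is already queued for this node
  }

  /** Scheduler drains one queued node_update event. */
  void onSchedulerProcessed() {
    pendingUpdates.decrementAndGet();
  }
}
```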

 Each NM heartbeat should not generate and event for the Scheduler
 -

 Key: YARN-365
 URL: https://issues.apache.org/jira/browse/YARN-365
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager, scheduler
Affects Versions: 0.23.5
Reporter: Siddharth Seth
Assignee: Xuan Gong
 Attachments: Prototype2.txt, Prototype3.txt, YARN-365.1.patch, 
 YARN-365.2.patch, YARN-365.3.patch, YARN-365.4.patch


 Follow up from YARN-275
 https://issues.apache.org/jira/secure/attachment/12567075/Prototype.txt

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-377) Fix TestContainersMonitor for HADOOP-9252

2013-02-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13574057#comment-13574057
 ] 

Hadoop QA commented on YARN-377:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12568486/YARN-377.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/394//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/394//console

This message is automatically generated.

 Fix TestContainersMonitor for HADOOP-9252
 -

 Key: YARN-377
 URL: https://issues.apache.org/jira/browse/YARN-377
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Chris Nauroth
Priority: Minor
 Attachments: YARN-377.1.patch, YARN-377.2.patch


 HADOOP-9252 slightly changed the format of some StringUtils outputs.  It 
 caused TestContainersMonitor to fail.
 Also, some methods were deprecated by HADOOP-9252.  The use of them should be 
 replaced with the new methods.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-377) Fix TestContainersMonitor for HADOOP-9252

2013-02-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13574078#comment-13574078
 ] 

Hudson commented on YARN-377:
-

Integrated in Hadoop-trunk-Commit #3343 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3343/])
YARN-377. Use the new StringUtils methods added by HADOOP-9252 and fix 
TestContainersMonitor.  Contributed by Chris Nauroth (Revision 1443796)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1443796
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/TestContainersMonitor.java


 Fix TestContainersMonitor for HADOOP-9252
 -

 Key: YARN-377
 URL: https://issues.apache.org/jira/browse/YARN-377
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Chris Nauroth
Priority: Minor
 Fix For: 2.0.4-beta

 Attachments: YARN-377.1.patch, YARN-377.2.patch


 HADOOP-9252 slightly changed the format of some StringUtils outputs.  It 
 caused TestContainersMonitor to fail.
 Also, some methods were deprecated by HADOOP-9252.  The use of them should be 
 replaced with the new methods.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (YARN-196) Nodemanager if started before starting Resource manager is getting shutdown.But if both RM and NM are started and then after if RM is going down,NM is retrying for the RM.

2013-02-07 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong reassigned YARN-196:
--

Assignee: Xuan Gong

 Nodemanager if started before starting Resource manager is getting 
 shutdown.But if both RM and NM are started and then after if RM is going 
 down,NM is retrying for the RM.
 ---

 Key: YARN-196
 URL: https://issues.apache.org/jira/browse/YARN-196
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 3.0.0, 2.0.0-alpha
Reporter: Ramgopal N
Assignee: Xuan Gong
 Attachments: MAPREDUCE-3676.patch


 If NM is started before starting the RM ,NM is shutting down with the 
 following error
 {code}
 ERROR org.apache.hadoop.yarn.service.CompositeService: Error starting 
 services org.apache.hadoop.yarn.server.nodemanager.NodeManager
 org.apache.avro.AvroRuntimeException: 
 java.lang.reflect.UndeclaredThrowableException
   at 
 org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.start(NodeStatusUpdaterImpl.java:149)
   at 
 org.apache.hadoop.yarn.service.CompositeService.start(CompositeService.java:68)
   at 
 org.apache.hadoop.yarn.server.nodemanager.NodeManager.start(NodeManager.java:167)
   at 
 org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:242)
 Caused by: java.lang.reflect.UndeclaredThrowableException
   at 
 org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:66)
   at 
 org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:182)
   at 
 org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.start(NodeStatusUpdaterImpl.java:145)
   ... 3 more
 Caused by: com.google.protobuf.ServiceException: java.net.ConnectException: 
 Call From HOST-10-18-52-230/10.18.52.230 to HOST-10-18-52-250:8025 failed on 
 connection exception: java.net.ConnectException: Connection refused; For more 
 details see:  http://wiki.apache.org/hadoop/ConnectionRefused
   at 
 org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:131)
   at $Proxy23.registerNodeManager(Unknown Source)
   at 
 org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:59)
   ... 5 more
 Caused by: java.net.ConnectException: Call From 
 HOST-10-18-52-230/10.18.52.230 to HOST-10-18-52-250:8025 failed on connection 
 exception: java.net.ConnectException: Connection refused; For more details 
 see:  http://wiki.apache.org/hadoop/ConnectionRefused
   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:857)
   at org.apache.hadoop.ipc.Client.call(Client.java:1141)
   at org.apache.hadoop.ipc.Client.call(Client.java:1100)
   at 
 org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:128)
   ... 7 more
 Caused by: java.net.ConnectException: Connection refused
   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
   at 
 sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
   at 
 org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:659)
   at 
 org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:469)
   at 
 org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:563)
   at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:211)
   at org.apache.hadoop.ipc.Client.getConnection(Client.java:1247)
   at org.apache.hadoop.ipc.Client.call(Client.java:1117)
   ... 9 more
 2012-01-16 15:04:13,336 WARN org.apache.hadoop.yarn.event.AsyncDispatcher: 
 AsyncDispatcher thread interrupted
 java.lang.InterruptedException
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:1899)
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1934)
   at 
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:76)
   at java.lang.Thread.run(Thread.java:619)
 2012-01-16 15:04:13,337 INFO org.apache.hadoop.yarn.service.AbstractService: 
 Service:Dispatcher is stopped.
 2012-01-16 15:04:13,392 INFO org.mortbay.log: Stopped 
 SelectChannelConnector@0.0.0.0:
 2012-01-16 15:04:13,493 INFO 

[jira] [Updated] (YARN-196) Nodemanager if started before starting Resource manager is getting shutdown.But if both RM and NM are started and then after if RM is going down,NM is retrying for the RM.

2013-02-07 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-196:
---

Attachment: (was: YARN-196.1.patch)

 Nodemanager if started before starting Resource manager is getting 
 shutdown.But if both RM and NM are started and then after if RM is going 
 down,NM is retrying for the RM.
 ---

 Key: YARN-196
 URL: https://issues.apache.org/jira/browse/YARN-196
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 3.0.0, 2.0.0-alpha
Reporter: Ramgopal N
Assignee: Xuan Gong
 Attachments: MAPREDUCE-3676.patch, YARN-196.1.patch


 If the NM is started before the RM, the NM shuts down with the 
 following error:
 {code}
 ERROR org.apache.hadoop.yarn.service.CompositeService: Error starting 
 services org.apache.hadoop.yarn.server.nodemanager.NodeManager
 org.apache.avro.AvroRuntimeException: 
 java.lang.reflect.UndeclaredThrowableException
   at 
 org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.start(NodeStatusUpdaterImpl.java:149)
   at 
 org.apache.hadoop.yarn.service.CompositeService.start(CompositeService.java:68)
   at 
 org.apache.hadoop.yarn.server.nodemanager.NodeManager.start(NodeManager.java:167)
   at 
 org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:242)
 Caused by: java.lang.reflect.UndeclaredThrowableException
   at 
 org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:66)
   at 
 org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:182)
   at 
 org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.start(NodeStatusUpdaterImpl.java:145)
   ... 3 more
 Caused by: com.google.protobuf.ServiceException: java.net.ConnectException: 
 Call From HOST-10-18-52-230/10.18.52.230 to HOST-10-18-52-250:8025 failed on 
 connection exception: java.net.ConnectException: Connection refused; For more 
 details see:  http://wiki.apache.org/hadoop/ConnectionRefused
   at 
 org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:131)
   at $Proxy23.registerNodeManager(Unknown Source)
   at 
 org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:59)
   ... 5 more
 Caused by: java.net.ConnectException: Call From 
 HOST-10-18-52-230/10.18.52.230 to HOST-10-18-52-250:8025 failed on connection 
 exception: java.net.ConnectException: Connection refused; For more details 
 see:  http://wiki.apache.org/hadoop/ConnectionRefused
   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:857)
   at org.apache.hadoop.ipc.Client.call(Client.java:1141)
   at org.apache.hadoop.ipc.Client.call(Client.java:1100)
   at 
 org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:128)
   ... 7 more
 Caused by: java.net.ConnectException: Connection refused
   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
   at 
 sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
   at 
 org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:659)
   at 
 org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:469)
   at 
 org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:563)
   at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:211)
   at org.apache.hadoop.ipc.Client.getConnection(Client.java:1247)
   at org.apache.hadoop.ipc.Client.call(Client.java:1117)
   ... 9 more
 2012-01-16 15:04:13,336 WARN org.apache.hadoop.yarn.event.AsyncDispatcher: 
 AsyncDispatcher thread interrupted
 java.lang.InterruptedException
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:1899)
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1934)
   at 
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:76)
   at java.lang.Thread.run(Thread.java:619)
 2012-01-16 15:04:13,337 INFO org.apache.hadoop.yarn.service.AbstractService: 
 Service:Dispatcher is stopped.
 2012-01-16 15:04:13,392 INFO org.mortbay.log: Stopped 
 SelectChannelConnector@0.0.0.0:
 2012-01-16 15:04:13,493 INFO 
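The failure mode quoted above is the NM aborting startup on the first ConnectException during registration. The retry behavior the issue title asks for can be sketched as a bounded retry loop (all names here are hypothetical stand-ins, not the actual NodeStatusUpdaterImpl code):

```java
import java.net.ConnectException;

public class RetryRegistration {
    // Hypothetical stand-in for the RPC call that fails with "Connection refused".
    interface Registration { void register() throws ConnectException; }

    /** Retry with a fixed backoff instead of aborting on the first failure. */
    static boolean registerWithRetry(Registration rm, int maxAttempts, long backoffMs)
            throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                rm.register();
                return true;                              // registered successfully
            } catch (ConnectException e) {
                if (attempt == maxAttempts) return false; // give up; caller may shut down
                Thread.sleep(backoffMs);                  // RM may still be starting; wait and retry
            }
        }
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        final int[] calls = {0};
        // Simulated RM that refuses the first two connection attempts.
        Registration flaky = () -> {
            if (++calls[0] < 3) throw new ConnectException("Connection refused");
        };
        System.out.println(registerWithRetry(flaky, 5, 10L)); // prints "true"
    }
}
```

With such a loop the NM tolerates an RM that starts later, matching the retry behavior it already shows when the RM goes down after both have started.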

[jira] [Updated] (YARN-196) Nodemanager if started before starting Resource manager is getting shutdown.But if both RM and NM are started and then after if RM is going down,NM is retrying for the RM.

2013-02-07 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-196:
---

Attachment: YARN-196.1.patch

 Nodemanager if started before starting Resource manager is getting 
 shutdown.But if both RM and NM are started and then after if RM is going 
 down,NM is retrying for the RM.
 ---

 Key: YARN-196
 URL: https://issues.apache.org/jira/browse/YARN-196
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 3.0.0, 2.0.0-alpha
Reporter: Ramgopal N
Assignee: Xuan Gong
 Attachments: MAPREDUCE-3676.patch, YARN-196.1.patch



[jira] [Reopened] (YARN-149) ZK-based High Availability (HA) for ResourceManager (RM)

2013-02-07 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins reopened YARN-149:
--

  Assignee: (was: Bikas Saha)

 ZK-based High Availability (HA) for ResourceManager (RM)
 

 Key: YARN-149
 URL: https://issues.apache.org/jira/browse/YARN-149
 Project: Hadoop YARN
  Issue Type: New Feature
Reporter: Harsh J

 One of the goals presented on MAPREDUCE-279 was to have high availability. 
 One way that was discussed, per Mahadev/others on 
 https://issues.apache.org/jira/browse/MAPREDUCE-2648 and other places, was ZK:
 {quote}
 Am not sure, if you already know about the MR-279 branch (the next version of 
 MR framework). We've been trying to integrate ZK into the framework from the 
 beginning. As for now, we are just doing restart with ZK but soon we should 
 have a HA soln with ZK.
 {quote}
 There is now MAPREDUCE-4343 that tracks recoverability via ZK. This JIRA is 
 meant to track HA via ZK.
 Currently there isn't an HA solution for the RM, via ZK or otherwise.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-196) Nodemanager if started before starting Resource manager is getting shutdown.But if both RM and NM are started and then after if RM is going down,NM is retrying for the RM

2013-02-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13574145#comment-13574145
 ] 

Hadoop QA commented on YARN-196:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12568507/YARN-196.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:

  
org.apache.hadoop.yarn.server.nodemanager.TestNodeStatusUpdater

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/396//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/396//console

This message is automatically generated.

 Nodemanager if started before starting Resource manager is getting 
 shutdown.But if both RM and NM are started and then after if RM is going 
 down,NM is retrying for the RM.
 ---

 Key: YARN-196
 URL: https://issues.apache.org/jira/browse/YARN-196
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 3.0.0, 2.0.0-alpha
Reporter: Ramgopal N
Assignee: Xuan Gong
 Attachments: MAPREDUCE-3676.patch, YARN-196.1.patch



[jira] [Commented] (YARN-3) Add support for CPU isolation/monitoring of containers

2013-02-07 Thread Andrew Ferguson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13574187#comment-13574187
 ] 

Andrew Ferguson commented on YARN-3:


[~acmurthy] thanks for the merge Arun!

 Add support for CPU isolation/monitoring of containers
 --

 Key: YARN-3
 URL: https://issues.apache.org/jira/browse/YARN-3
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun C Murthy
Assignee: Andrew Ferguson
 Fix For: 2.0.3-alpha

 Attachments: mapreduce-4334-design-doc.txt, 
 mapreduce-4334-design-doc-v2.txt, MAPREDUCE-4334-executor-v1.patch, 
 MAPREDUCE-4334-executor-v2.patch, MAPREDUCE-4334-executor-v3.patch, 
 MAPREDUCE-4334-executor-v4.patch, MAPREDUCE-4334-pre1.patch, 
 MAPREDUCE-4334-pre2.patch, MAPREDUCE-4334-pre2-with_cpu.patch, 
 MAPREDUCE-4334-pre3.patch, MAPREDUCE-4334-pre3-with_cpu.patch, 
 MAPREDUCE-4334-v1.patch, MAPREDUCE-4334-v2.patch, YARN-3-lce_only-v1.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-249) Capacity Scheduler web page should show list of active users per queue like it used to (in 1.x)

2013-02-07 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13574231#comment-13574231
 ] 

Thomas Graves commented on YARN-249:


Thanks Ravi, a few comments and minor nits:

- Can you rename activeApplications to numActiveApplications and 
pendingApplications to numPendingApplications so they match the existing names 
in LeafQueue?
- TestRMWebServicesCapacitySched: can you add a test for the JSON output too?
- Let's not make the LeafQueue.User class public. Since we are already copying 
the info, we can make a separate class; actually it would be nice to just use 
UserInfo.

- CapacitySchedulerPage: please put {} around all the if statements, and add a 
space after if and for, before the first (.
- ResourceManager.apt.vm: since you will be in there, can you capitalize the 
descriptions of username, memory, and vCores?
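The suggestion to copy the data into a separate class rather than expose the scheduler-internal LeafQueue.User could look like this immutable snapshot (a sketch only; the actual UserInfo class in the patch may carry different fields, and only the two renamed counters are taken from the review above):

```java
/**
 * Immutable per-user snapshot for the web UI / REST layer, so the
 * scheduler's internal LeafQueue.User class can stay non-public.
 */
public final class UserInfo {
    private final String username;
    private final int numActiveApplications;
    private final int numPendingApplications;

    public UserInfo(String username, int numActiveApplications, int numPendingApplications) {
        this.username = username;
        this.numActiveApplications = numActiveApplications;
        this.numPendingApplications = numPendingApplications;
    }

    public String getUsername() { return username; }
    public int getNumActiveApplications() { return numActiveApplications; }
    public int getNumPendingApplications() { return numPendingApplications; }
}
```

Because the snapshot is copied at render time, the web layer never holds a reference into live scheduler state.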


 Capacity Scheduler web page should show list of active users per queue like 
 it used to (in 1.x)
 ---

 Key: YARN-249
 URL: https://issues.apache.org/jira/browse/YARN-249
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: capacityscheduler
Affects Versions: 2.0.2-alpha, 3.0.0, 0.23.5
Reporter: Ravi Prakash
Assignee: Ravi Prakash
  Labels: scheduler, web-ui
 Attachments: YARN-249.branch-0.23.patch, YARN-249.branch-0.23.patch, 
 YARN-249.branch-0.23.patch, YARN-249.branch-0.23.patch, 
 YARN-249.branch-0.23.patch, YARN-249.patch, YARN-249.patch, YARN-249.patch, 
 YARN-249.patch, YARN-249.patch, YARN-249.patch, YARN-249.patch, 
 YARN-249.patch, YARN-249.png


 On the jobtracker, the web ui showed the active users for each queue and how 
 much resources each of those users were using. That currently isn't being 
 displayed on the RM capacity scheduler web ui.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-196) Nodemanager if started before starting Resource manager is getting shutdown.But if both RM and NM are started and then after if RM is going down,NM is retrying for the RM.

2013-02-07 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-196:
---

Attachment: YARN-196.2.patch

Fixed the test failure; testNMShutdownForRegistrationFailure covers this update.

 Nodemanager if started before starting Resource manager is getting 
 shutdown.But if both RM and NM are started and then after if RM is going 
 down,NM is retrying for the RM.
 ---

 Key: YARN-196
 URL: https://issues.apache.org/jira/browse/YARN-196
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 3.0.0, 2.0.0-alpha
Reporter: Ramgopal N
Assignee: Xuan Gong
 Attachments: MAPREDUCE-3676.patch, YARN-196.1.patch, YARN-196.2.patch

