[jira] [Commented] (YARN-330) Flakey test: TestNodeManagerShutdown#testKillContainersOnShutdown

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553664#comment-13553664
 ] 

Hudson commented on YARN-330:
-

Integrated in Hadoop-Yarn-trunk #97 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/97/])
YARN-330. Fix flakey test: 
TestNodeManagerShutdown#testKillContainersOnShutdown. Contributed by Sandy Ryza 
(Revision 1433138)

 Result = SUCCESS
hitesh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433138
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerShutdown.java


 Flakey test: TestNodeManagerShutdown#testKillContainersOnShutdown
 -

 Key: YARN-330
 URL: https://issues.apache.org/jira/browse/YARN-330
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Reporter: Hitesh Shah
Assignee: Sandy Ryza
 Fix For: 2.0.3-alpha

 Attachments: 
 org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown-output.txt, 
 YARN-330-1.patch, YARN-330.patch


 Seems to be timing related: the container status RUNNING, as returned by 
 the ContainerManager, does not really indicate that the container task has 
 been launched, so a fixed sleep of 5 seconds is not reliable. 
 Running org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown
 Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.353 sec <<< FAILURE!
 testKillContainersOnShutdown(org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown)
   Time elapsed: 9283 sec <<< FAILURE!
 junit.framework.AssertionFailedError: Did not find sigterm message
   at junit.framework.Assert.fail(Assert.java:47)
   at junit.framework.Assert.assertTrue(Assert.java:20)
   at 
 org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown.testKillContainersOnShutdown(TestNodeManagerShutdown.java:162)
 Logs:
 2013-01-09 14:13:08,401 INFO  [AsyncDispatcher event handler] 
 container.Container (ContainerImpl.java:handle(835)) - Container 
 container_0__01_00 transitioned from NEW to LOCALIZING
 2013-01-09 14:13:08,412 INFO  [AsyncDispatcher event handler] 
 localizer.LocalizedResource (LocalizedResource.java:handle(194)) - Resource 
 file:hadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown/tmpDir/scriptFile.sh
  transitioned from INIT to DOWNLOADING
 2013-01-09 14:13:08,412 INFO  [AsyncDispatcher event handler] 
 localizer.ResourceLocalizationService 
 (ResourceLocalizationService.java:handle(521)) - Created localizer for 
 container_0__01_00
 2013-01-09 14:13:08,589 INFO  [LocalizerRunner for 
 container_0__01_00] localizer.ResourceLocalizationService 
 (ResourceLocalizationService.java:writeCredentials(895)) - Writing 
 credentials to the nmPrivate file 
 hadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown/nm0/nmPrivate/container_0__01_00.tokens.
  Credentials list:
 2013-01-09 14:13:08,628 INFO  [LocalizerRunner for 
 container_0__01_00] nodemanager.DefaultContainerExecutor 
 (DefaultContainerExecutor.java:createUserCacheDirs(373)) - Initializing user 
 nobody
 2013-01-09 14:13:08,709 INFO  [main] containermanager.ContainerManagerImpl 
 (ContainerManagerImpl.java:getContainerStatus(538)) - Returning container_id 
 {, app_attempt_id {, application_id {, id: 0, cluster_timestamp: 0, }, 
 attemptId: 1, }, }, state: C_RUNNING, diagnostics: , exit_status: -1000,
 2013-01-09 14:13:08,781 INFO  [LocalizerRunner for 
 container_0__01_00] nodemanager.DefaultContainerExecutor 
 (DefaultContainerExecutor.java:startLocalizer(99)) - Copying from 
 hadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown/nm0/nmPrivate/container_0__01_00.tokens
  to 
 hadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown/nm0/usercache/nobody/appcache/application_0_/container_0__01_00.tokens
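The flakiness described above comes from a fixed sleep racing against container launch. The usual fix for this pattern is to poll for the condition with a deadline instead of sleeping once. A minimal, self-contained sketch of that idea (the `WaitUtil` class and its names are illustrative, not taken from the actual YARN-330 patch):

```java
// Illustrative helper: poll a condition with a deadline instead of a
// single fixed Thread.sleep(). Not part of the actual YARN-330 patch.
import java.util.function.BooleanSupplier;

public class WaitUtil {
    /**
     * Polls {@code condition} every {@code intervalMs} milliseconds until
     * it returns true or {@code timeoutMs} elapses.
     *
     * @return true if the condition became true before the deadline
     */
    public static boolean waitFor(BooleanSupplier condition,
                                  long intervalMs, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(intervalMs);
        }
        // One final check at the deadline boundary.
        return condition.getAsBoolean();
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Stand-in condition that becomes true after roughly 200 ms,
        // e.g. "the container process has actually been launched".
        boolean ok = waitFor(
            () -> System.currentTimeMillis() - start > 200, 50, 5000);
        System.out.println(ok);
    }
}
```

Unlike a 5-second sleep, this returns as soon as the condition holds and only fails after the full timeout, which makes the test both faster and far less timing-sensitive.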

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-334) Maven RAT plugin is not checking all source files

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553728#comment-13553728
 ] 

Hudson commented on YARN-334:
-

Integrated in Hadoop-Hdfs-0.23-Build #495 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/495/])
YARN-334. Maven RAT plugin is not checking all source files (tgraves) 
(Revision 1432944)

 Result = FAILURE
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1432944
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/conf/yarn-site.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/META-INF/services/org.apache.hadoop.security.SecurityInfo
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.9.4/css/demo_page.css
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.css
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/resources/META-INF/services/org.apache.hadoop.security.SecurityInfo
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/resources/META-INF/services/org.apache.hadoop.security.SecurityInfo
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/resources/container-log4j.properties
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/MockApp.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/MockContainer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestLocalResourcesTrackerImpl.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/mock-container-executer-with-error
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/mock-container-executor
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/resources/META-INF/services/org.apache.hadoop.security.SecurityInfo
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/resources/capacity-scheduler.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/TestRMContainerImpl.java.orig
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer
* /hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/pom.xml
* /hadoop/common/branches/branch-0.23/hadoop-yarn-project/pom.xml


 Maven RAT plugin is not checking all source files
 -

 Key: YARN-334
 URL: https://issues.apache.org/jira/browse/YARN-334
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.0.3-alpha, 0.23.5
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical
 Fix For: 3.0.0, 2.0.3-alpha, 0.23.6

 Attachments: YARN-334-branch-0.23.patch, YARN-334-branch-0.23.patch, 
 YARN-334.patch, YARN-334.patch, YARN-334-remove.sh

[jira] [Commented] (YARN-170) NodeManager stop() gets called twice on shutdown

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553730#comment-13553730
 ] 

Hudson commented on YARN-170:
-

Integrated in Hadoop-Hdfs-0.23-Build #495 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/495/])
YARN-170. NodeManager stop() gets called twice on shutdown (Sandy Ryza via 
tgraves) (Revision 1432965)

 Result = FAILURE
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1432965
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManagerEvent.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManagerEventType.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeStatusUpdater.java


 NodeManager stop() gets called twice on shutdown
 

 Key: YARN-170
 URL: https://issues.apache.org/jira/browse/YARN-170
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Fix For: 2.0.3-alpha, 0.23.6

 Attachments: YARN-170-1.patch, YARN-170-20130107.txt, 
 YARN-170-2.patch, YARN-170-3.patch, YARN-170.patch


 The stop method in the NodeManager gets called twice when the NodeManager is 
 shut down via the shutdown hook.
 The first stop is called directly by the shutdown hook.  The second occurs 
 when the NodeStatusUpdaterImpl is stopped: the NodeManager responds to the 
 NodeStatusUpdaterImpl's stop stateChanged event by stopping itself.  This is 
 so that the NodeStatusUpdaterImpl can notify the NodeManager to stop, by 
 stopping itself in response to a request from the ResourceManager.
 This could be avoided if the NodeStatusUpdaterImpl were to stop the 
 NodeManager by calling its stop method directly.
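The re-entrant cycle the description walks through can be shown in a toy sketch: a shutdown hook stops the manager, stopping the manager stops the updater, and the updater's state-change callback stops the manager again. The classes below are hypothetical stand-ins, not the Hadoop sources:

```java
// Toy model of the double-stop: NodeManager.stop() -> updater.stop() ->
// stateChanged() callback -> NodeManager.stop() again. Hypothetical code,
// not the actual Hadoop classes.
public class DoubleStopDemo {
    static int nmStopCalls = 0;

    interface Listener { void stateChanged(); }

    static class StatusUpdater {
        Listener listener;
        boolean stopped = false;
        void stop() {
            if (stopped) return;   // updater itself is idempotent
            stopped = true;
            listener.stateChanged();  // notify that the updater has stopped
        }
    }

    static class NodeManager implements Listener {
        final StatusUpdater updater = new StatusUpdater();
        NodeManager() { updater.listener = this; }
        void stop() {
            nmStopCalls++;         // no re-entrancy guard: counted every call
            updater.stop();
        }
        public void stateChanged() {
            stop();                // react to the updater stopping by stopping
        }
    }

    public static void main(String[] args) {
        NodeManager nm = new NodeManager();
        nm.stop();                 // what the shutdown hook would do
        System.out.println(nmStopCalls);
    }
}
```

Running this prints 2: the callback re-enters `stop()` exactly once before the updater's `stopped` flag breaks the cycle. Having the updater call the manager's stop method directly (or guarding `stop()` with a flag) removes the second invocation.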



[jira] [Commented] (YARN-334) Maven RAT plugin is not checking all source files

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553752#comment-13553752
 ] 

Hudson commented on YARN-334:
-

Integrated in Hadoop-Hdfs-trunk #1286 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1286/])
YARN-334. Maven RAT plugin is not checking all source files (tgraves) 
(Revision 1432931)

 Result = FAILURE
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1432931
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/conf/yarn-site.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/META-INF/services/org.apache.hadoop.security.SecurityInfo
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.9.4/css/demo_page.css
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.css
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/resources/META-INF/services/org.apache.hadoop.security.SecurityInfo
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/config.h.cmake
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/resources/META-INF/services/org.apache.hadoop.security.SecurityInfo
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/resources/container-log4j.properties
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/MockApp.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/MockContainer.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestLocalResourcesTrackerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/mock-container-executer-with-error
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/mock-container-executor
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/conf/capacity-scheduler.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplication.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/FairSchedulerLeafQueueInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/META-INF/services/org.apache.hadoop.security.token.TokenRenewer
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/pom.xml
* /hadoop/common/trunk/hadoop-yarn-project/pom.xml



[jira] [Commented] (YARN-330) Flakey test: TestNodeManagerShutdown#testKillContainersOnShutdown

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553763#comment-13553763
 ] 

Hudson commented on YARN-330:
-

Integrated in Hadoop-Hdfs-trunk #1286 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1286/])
YARN-330. Fix flakey test: 
TestNodeManagerShutdown#testKillContainersOnShutdown. Contributed by Sandy Ryza 
(Revision 1433138)

 Result = FAILURE
hitesh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433138
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerShutdown.java





[jira] [Commented] (YARN-328) Use token request messages defined in hadoop common

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553765#comment-13553765
 ] 

Hudson commented on YARN-328:
-

Integrated in Hadoop-Hdfs-trunk #1286 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1286/])
YARN-328. Use token request messages defined in hadoop common. Contributed 
by Suresh Srinivas. (Revision 1433231)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433231
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/CancelDelegationTokenRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/CancelDelegationTokenResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetDelegationTokenRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetDelegationTokenResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RenewDelegationTokenRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RenewDelegationTokenResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/client_RM_protocol.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_service_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/impl/pb/client/ClientRMProtocolPBClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/impl/pb/service/ClientRMProtocolPBServiceImpl.java


 Use token request messages defined in hadoop common 
 

 Key: YARN-328
 URL: https://issues.apache.org/jira/browse/YARN-328
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.0.0-alpha
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: 2.0.3-alpha

 Attachments: YARN-328.patch, YARN-328.patch, YARN-328.patch, 
 YARN-328.patch


 YARN changes related to HADOOP-9192 to reuse the protobuf messages defined in 
 common.



[jira] [Commented] (YARN-330) Flakey test: TestNodeManagerShutdown#testKillContainersOnShutdown

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553796#comment-13553796
 ] 

Hudson commented on YARN-330:
-

Integrated in Hadoop-Mapreduce-trunk #1314 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1314/])
YARN-330. Fix flakey test: 
TestNodeManagerShutdown#testKillContainersOnShutdown. Contributed by Sandy Ryza 
(Revision 1433138)

 Result = FAILURE
hitesh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433138
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerShutdown.java





[jira] [Commented] (YARN-328) Use token request messages defined in hadoop common

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553798#comment-13553798
 ] 

Hudson commented on YARN-328:
-

Integrated in Hadoop-Mapreduce-trunk #1314 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1314/])
YARN-328. Use token request messages defined in hadoop common. Contributed 
by Suresh Srinivas. (Revision 1433231)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433231
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/CancelDelegationTokenRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/CancelDelegationTokenResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetDelegationTokenRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetDelegationTokenResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RenewDelegationTokenRequestPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RenewDelegationTokenResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/client_RM_protocol.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_service_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/impl/pb/client/ClientRMProtocolPBClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/impl/pb/service/ClientRMProtocolPBServiceImpl.java


 Use token request messages defined in hadoop common 
 

 Key: YARN-328
 URL: https://issues.apache.org/jira/browse/YARN-328
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.0.0-alpha
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: 2.0.3-alpha

 Attachments: YARN-328.patch, YARN-328.patch, YARN-328.patch, 
 YARN-328.patch


 YARN changes related to HADOOP-9192 to reuse the protobuf messages defined in 
 common.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-340) Rename DefaultResourceCalculator

2013-01-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553944#comment-13553944
 ] 

Hadoop QA commented on YARN-340:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12564795/yarn-340.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/349//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/349//console

This message is automatically generated.

 Rename DefaultResourceCalculator
 

 Key: YARN-340
 URL: https://issues.apache.org/jira/browse/YARN-340
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Attachments: yarn-340.txt, yarn-340.txt


 Let's rename DefaultResourceCalculator to something like 
 MemoryResourceCalculator or SingleResourceCalculator. The default resource 
 calculator is the one specified by 
 yarn.scheduler.capacity.resource-calculator in yarn-default.xml (which may 
 change).  We can do this compatibly now since YARN-2 hasn't been released 
 yet, but changing this later will be a pain if we ever make a different 
 resource calculator the default (or DefaultResourceCalculator won't actually 
 be the default, which is weird).



[jira] [Resolved] (YARN-21) Cleanup - Remove org.apache.hadoop.yarn.server.resourcemanager.resource.Resource

2013-01-15 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-21?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla resolved YARN-21.
--

Resolution: Duplicate

Arun has addressed this as part of YARN-2.

 Cleanup - Remove 
 org.apache.hadoop.yarn.server.resourcemanager.resource.Resource
 

 Key: YARN-21
 URL: https://issues.apache.org/jira/browse/YARN-21
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Minor

 org.apache.hadoop.yarn.server.resourcemanager.resource.Resources covers the 
 functionality of Resource. We should remove Resource and replace its uses 
 (just a couple of locations) with Resources.
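The pattern proposed above can be sketched as follows. This is a minimal illustrative sketch, not the actual Hadoop classes: a stateless Resources utility operating on immutable Resource values replaces per-instance arithmetic on a duplicate Resource helper class.

```java
// Hypothetical sketch (names mirror, but are not, the real
// o.a.h.yarn.server.resourcemanager.resource classes): arithmetic lives
// in a static utility rather than a second Resource-like class.
final class Resource {
    final int memoryMb;
    Resource(int memoryMb) { this.memoryMb = memoryMb; }
}

final class Resources {
    private Resources() {} // static utility; never instantiated

    static Resource add(Resource a, Resource b) {
        return new Resource(a.memoryMb + b.memoryMb);
    }

    static Resource subtract(Resource a, Resource b) {
        return new Resource(a.memoryMb - b.memoryMb);
    }
}
```

With this shape, the few call sites that used the old Resource helper switch to static calls such as Resources.add(a, b), and the redundant class can be deleted.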



[jira] [Commented] (YARN-319) Submit a job to a queue that not allowed in fairScheduler, client will hold forever.

2013-01-15 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553942#comment-13553942
 ] 

Tom White commented on YARN-319:


Is it possible to write a unit test for this?

 Submit a job to a queue that not allowed in fairScheduler, client will hold 
 forever.
 

 Key: YARN-319
 URL: https://issues.apache.org/jira/browse/YARN-319
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, scheduler
Affects Versions: 2.0.2-alpha
Reporter: shenhong
 Fix For: 2.0.3-alpha

 Attachments: YARN-319.patch


 When the RM uses the FairScheduler and a client submits a job to a queue 
 that does not allow that user to submit jobs, the client will hang forever.



[jira] [Commented] (YARN-335) Fair scheduler doesn't check whether rack needs containers before assigning to node

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553955#comment-13553955
 ] 

Hudson commented on YARN-335:
-

Integrated in Hadoop-trunk-Commit #3238 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3238/])
YARN-335. Fair scheduler doesn't check whether rack needs containers before 
assigning to node. Contributed by Sandy Ryza. (Revision 1433484)

 Result = SUCCESS
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433484
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AppSchedulable.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java


 Fair scheduler doesn't check whether rack needs containers before assigning 
 to node
 ---

 Key: YARN-335
 URL: https://issues.apache.org/jira/browse/YARN-335
 Project: Hadoop YARN
  Issue Type: Bug
  Components: scheduler
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-335.patch


 If an application has an outstanding request on a node, the fair scheduler 
 may try to place a container on that node without checking whether that 
 node's rack needs any containers.  If the rack doesn't need any containers, 
 none should be scheduled on the node.
 This can cause an NPE in the fair scheduler. 
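The guard described above can be sketched as follows. This is an illustrative sketch with hypothetical names, not the real AppSchedulable API: before placing a container on a node, check that the node's rack still has outstanding requests.

```java
import java.util.Map;

// Hypothetical sketch of the YARN-335 check (illustrative names): a
// node-local assignment is only attempted when the enclosing rack still
// needs containers. Without the rack check, a stale node-level entry can
// drive an assignment even though the rack-level demand is already zero.
final class RackAwareGuard {
    static boolean shouldAssignOnNode(String node, String rack,
                                      Map<String, Integer> outstandingByNode,
                                      Map<String, Integer> outstandingByRack) {
        // Rack check first: no rack demand means no node-local assignment.
        if (outstandingByRack.getOrDefault(rack, 0) <= 0) {
            return false;
        }
        return outstandingByNode.getOrDefault(node, 0) > 0;
    }
}
```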



[jira] [Commented] (YARN-135) ClientTokens should be per app-attempt and be unregistered on App-finish.

2013-01-15 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554089#comment-13554089
 ] 

Siddharth Seth commented on YARN-135:
-

Looks good. +1. Will commit this shortly.

 ClientTokens should be per app-attempt and be unregistered on App-finish.
 -

 Key: YARN-135
 URL: https://issues.apache.org/jira/browse/YARN-135
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: 0.23.3
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli
 Attachments: YARN-135-20120928.txt, YARN-135-20121019.1.txt, 
 YARN-135-20121019.2.txt, YARN-135-20121019.3.txt, YARN-135-20121019.4.txt, 
 YARN-135-2013007.1.txt, YARN-135-2013007.txt


 Two issues:
  - ClientTokens are per app-attempt but are created per app.
  - Apps don't get unregistered from RMClientTokenSecretManager.
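The fix for the two issues above amounts to keying token state by application attempt and cleaning it up on finish. A minimal illustrative sketch (hypothetical names, not the real ClientToAMTokenSecretManagerInRM API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of per-attempt client-token bookkeeping: master keys
// are registered per application *attempt* rather than per app, and are
// removed when the app finishes so entries do not accumulate.
final class ClientTokenRegistry {
    private final Map<String, byte[]> keysByAttempt = new HashMap<>();

    void registerAttempt(String attemptId, byte[] masterKey) {
        keysByAttempt.put(attemptId, masterKey);
    }

    // Called on app-finish: unregister so the secret manager does not leak.
    void unregisterAttempt(String attemptId) {
        keysByAttempt.remove(attemptId);
    }

    boolean hasKey(String attemptId) {
        return keysByAttempt.containsKey(attemptId);
    }
}
```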



[jira] [Commented] (YARN-135) ClientTokens should be per app-attempt and be unregistered on App-finish.

2013-01-15 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554123#comment-13554123
 ] 

Daryn Sharp commented on YARN-135:
--

It may be a good idea to not reuse the {{Token}} identifier.  It is now a class 
in common, and an interface in yarn.  {{ProtoUtils}} illustrates the issue of 
now having to qualify the entire package.  I think it will be less confusing 
and cumbersome if the interface is named {{YarnToken}}. 

 ClientTokens should be per app-attempt and be unregistered on App-finish.
 -

 Key: YARN-135
 URL: https://issues.apache.org/jira/browse/YARN-135
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: 0.23.3
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli
 Fix For: 2.0.3-alpha

 Attachments: YARN-135-20120928.txt, YARN-135-20121019.1.txt, 
 YARN-135-20121019.2.txt, YARN-135-20121019.3.txt, YARN-135-20121019.4.txt, 
 YARN-135-2013007.1.txt, YARN-135-2013007.txt


 Two issues:
  - ClientTokens are per app-attempt but are created per app.
  - Apps don't get unregistered from RMClientTokenSecretManager.



[jira] [Commented] (YARN-135) ClientTokens should be per app-attempt and be unregistered on App-finish.

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554124#comment-13554124
 ] 

Hudson commented on YARN-135:
-

Integrated in Hadoop-trunk-Commit #3241 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3241/])
YARN-135. Client tokens should be per app-attempt, and should be 
unregistered on App-finish. Contributed by Vinod Kumar Vavilapalli (Revision 
1433570)

 Result = FAILURE
sseth : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433570
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/client/MRClientService.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/api/protocolrecords/impl/pb/GetDelegationTokenResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ClientServiceDelegate.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/NotRunningJob.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationMaster.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationReport.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerToken.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/DelegationToken.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ApplicationMasterPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ApplicationReportPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerTokenPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/DelegationTokenPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/ProtoUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/BaseClientToAMTokenSecretManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/ClientToAMTokenSecretManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/client/ClientTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/BuilderUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/security/BaseContainerTokenSecretManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/amlauncher/AMLauncher.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttempt.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/ClientToAMTokenSecretManagerInRM.java
* 

[jira] [Commented] (YARN-2) Enhance CS to schedule accounting for both memory and cpu cores

2013-01-15 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554149#comment-13554149
 ] 

Eli Collins commented on YARN-2:


bq. Could we change the name of ResourceMemoryCpuComparator to something more 
like DefaultMultiResourceComparator? I think 
ResourceMemoryCpuNetworkBandwithDiskStorageGPUComparator is a bit long, but it 
is the direction we are headed in.

The problem with naming a class DefaultResource is that changing the default 
in the future is a pain. While I prefer MemoryResourceCalculator (it's 
explicit, and I don't think we'll see lots of different resource calculators) 
something like SingleResourceComparator fixes the naming issue and also won't 
have the issue of a new class for every policy. I filed YARN-340 to address 
this.

 Enhance CS to schedule accounting for both memory and cpu cores
 ---

 Key: YARN-2
 URL: https://issues.apache.org/jira/browse/YARN-2
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: capacityscheduler, scheduler
Reporter: Arun C Murthy
Assignee: Arun C Murthy
 Fix For: 2.0.3-alpha

 Attachments: MAPREDUCE-4327.patch, MAPREDUCE-4327.patch, 
 MAPREDUCE-4327.patch, MAPREDUCE-4327-v2.patch, MAPREDUCE-4327-v3.patch, 
 MAPREDUCE-4327-v4.patch, MAPREDUCE-4327-v5.patch, YARN-2-help.patch, 
 YARN-2.patch, YARN-2.patch, YARN-2.patch, YARN-2.patch, YARN-2.patch, 
 YARN-2.patch, YARN-2.patch, YARN-2.patch, YARN-2.patch, YARN-2.patch, 
 YARN-2.patch, YARN-2.patch, YARN-2.patch


 With YARN being a general purpose system, it would be useful for several 
 applications (MPI et al) to specify not just memory but also CPU (cores) for 
 their resource requirements. Thus, it would be useful to the 
 CapacityScheduler to account for both.
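The core change described above is that an allocation check must consider both dimensions, not memory alone. A minimal illustrative sketch (hypothetical names, not the CapacityScheduler's actual allocation path):

```java
// Hypothetical sketch: once a request carries memory *and* vcores, a
// container fits on a node only when both dimensions fit. Checking memory
// alone would over-commit CPU on the node.
final class MultiResourceCheck {
    static boolean fits(int reqMemMb, int reqVcores,
                        int availMemMb, int availVcores) {
        return reqMemMb <= availMemMb && reqVcores <= availVcores;
    }
}
```

For example, a request for 1 GB and 8 vcores should be rejected on a node with 4 GB free but only 4 vcores, even though the memory alone would fit.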



[jira] [Commented] (YARN-135) ClientTokens should be per app-attempt and be unregistered on App-finish.

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554206#comment-13554206
 ] 

Hudson commented on YARN-135:
-

Integrated in Hadoop-trunk-Commit #3243 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3243/])
YARN-135. Add missing files from last commit. (Revision 1433594)

 Result = SUCCESS
sseth : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433594
Files : 
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ClientToken.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Token.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ClientTokenPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/TokenPBImpl.java


 ClientTokens should be per app-attempt and be unregistered on App-finish.
 -

 Key: YARN-135
 URL: https://issues.apache.org/jira/browse/YARN-135
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: 0.23.3
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli
 Fix For: 2.0.3-alpha

 Attachments: YARN-135-20120928.txt, YARN-135-20121019.1.txt, 
 YARN-135-20121019.2.txt, YARN-135-20121019.3.txt, YARN-135-20121019.4.txt, 
 YARN-135-2013007.1.txt, YARN-135-2013007.txt


 Two issues:
  - ClientTokens are per app-attempt but are created per app.
  - Apps don't get unregistered from RMClientTokenSecretManager.



[jira] [Updated] (YARN-249) Capacity Scheduler web page should show list of active users per queue like it used to (in 1.x)

2013-01-15 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated YARN-249:
--

Attachment: YARN-249.branch-0.23.patch

 Capacity Scheduler web page should show list of active users per queue like 
 it used to (in 1.x)
 ---

 Key: YARN-249
 URL: https://issues.apache.org/jira/browse/YARN-249
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: capacityscheduler
Affects Versions: 2.0.2-alpha, 3.0.0, 0.23.5
Reporter: Ravi Prakash
Assignee: Ravi Prakash
  Labels: scheduler, web-ui
 Attachments: YARN-249.branch-0.23.patch, YARN-249.branch-0.23.patch, 
 YARN-249.branch-0.23.patch, YARN-249.patch, YARN-249.patch, YARN-249.patch, 
 YARN-249.patch, YARN-249.patch, YARN-249.patch, YARN-249.png


 On the jobtracker, the web ui showed the active users for each queue and how 
 much resources each of those users were using. That currently isn't being 
 displayed on the RM capacity scheduler web ui.



[jira] [Updated] (YARN-249) Capacity Scheduler web page should show list of active users per queue like it used to (in 1.x)

2013-01-15 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated YARN-249:
--

Attachment: YARN-249.patch

 Capacity Scheduler web page should show list of active users per queue like 
 it used to (in 1.x)
 ---

 Key: YARN-249
 URL: https://issues.apache.org/jira/browse/YARN-249
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: capacityscheduler
Affects Versions: 2.0.2-alpha, 3.0.0, 0.23.5
Reporter: Ravi Prakash
Assignee: Ravi Prakash
  Labels: scheduler, web-ui
 Attachments: YARN-249.branch-0.23.patch, YARN-249.branch-0.23.patch, 
 YARN-249.branch-0.23.patch, YARN-249.patch, YARN-249.patch, YARN-249.patch, 
 YARN-249.patch, YARN-249.patch, YARN-249.patch, YARN-249.png


 On the jobtracker, the web ui showed the active users for each queue and how 
 much resources each of those users were using. That currently isn't being 
 displayed on the RM capacity scheduler web ui.



[jira] [Resolved] (YARN-287) NodeManager logs incorrect physical/virtual memory values

2013-01-15 Thread Lohit Vijayarenu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lohit Vijayarenu resolved YARN-287.
---

Resolution: Invalid

Thanks for the explanation. Closing as invalid.

 NodeManager logs incorrect physical/virtual memory values
 -

 Key: YARN-287
 URL: https://issues.apache.org/jira/browse/YARN-287
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.0.3-alpha
Reporter: Lohit Vijayarenu
Priority: Minor

 The NodeManager does not log the correct configured physical or virtual 
 memory values when killing containers.



[jira] [Commented] (YARN-331) Fill in missing fair scheduler documentation

2013-01-15 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554615#comment-13554615
 ] 

Sandy Ryza commented on YARN-331:
-

Updated patch to add units to minSharePreemptionTimeout and 
fairSharePreemptionTimeout.

 Fill in missing fair scheduler documentation
 

 Key: YARN-331
 URL: https://issues.apache.org/jira/browse/YARN-331
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scheduler
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-331-1.patch, YARN-331.patch


 In the fair scheduler documentation, a few config options are missing:
 locality.threshold.node
 locality.threshold.rack
 max.assign
 aclSubmitApps
 minSharePreemptionTimeout



[jira] [Commented] (YARN-331) Fill in missing fair scheduler documentation

2013-01-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554635#comment-13554635
 ] 

Hadoop QA commented on YARN-331:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12565054/YARN-331-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/351//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/351//console

This message is automatically generated.

 Fill in missing fair scheduler documentation
 

 Key: YARN-331
 URL: https://issues.apache.org/jira/browse/YARN-331
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scheduler
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-331-1.patch, YARN-331.patch


 In the fair scheduler documentation, a few config options are missing:
 locality.threshold.node
 locality.threshold.rack
 max.assign
 aclSubmitApps
 minSharePreemptionTimeout



[jira] [Commented] (YARN-40) Provide support for missing yarn commands

2013-01-15 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-40?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554795#comment-13554795
 ] 

Junping Du commented on YARN-40:


Do we have a plan to backport this to branch-1?

 Provide support for missing yarn commands
 -

 Key: YARN-40
 URL: https://issues.apache.org/jira/browse/YARN-40
 Project: Hadoop YARN
  Issue Type: Bug
  Components: client
Affects Versions: 2.0.0-alpha
Reporter: Devaraj K
Assignee: Devaraj K
 Fix For: 2.0.3-alpha

 Attachments: MAPREDUCE-4155-1.patch, MAPREDUCE-4155.patch, 
 YARN-40-1.patch, YARN-40-20120917.1.txt, YARN-40-20120917.txt, 
 YARN-40-20120924.txt, YARN-40-20121008.txt, YARN-40.patch


 1. status app-id
 2. kill app-id (Already issue present with Id : MAPREDUCE-3793)
 3. list-apps [all]
 4. nodes-report
