[jira] [Updated] (YARN-860) JobHistory UI shows -ve times for reducer

2013-06-20 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated YARN-860:


Attachment: MR.png

 JobHistory UI shows -ve times for reducer
 -

 Key: YARN-860
 URL: https://issues.apache.org/jira/browse/YARN-860
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Bikas Saha
Assignee: Xuan Gong
 Attachments: MR.png


 Attached screenshot
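
 Negative elapsed times in history UIs are classically produced by computing 
 a duration against an unset (zero) finish timestamp; a hedged guess at the 
 pattern, not a confirmed root cause for this report:

 {code:java}
 // If finishTime is still 0 when the page renders, finishTime - startTime
 // goes negative; guard the unset case instead.
 long elapsed(long startTime, long finishTime) {
   if (finishTime <= 0) {
     return System.currentTimeMillis() - startTime; // still running
   }
   return finishTime - startTime;
 }
 {code}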

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-853) maximum-am-resource-percent doesn't work consistently with refreshQueues

2013-06-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689100#comment-13689100
 ] 

Hadoop QA commented on YARN-853:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588806/YARN-853-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test file.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1362//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1362//console

This message is automatically generated.

 maximum-am-resource-percent doesn't work consistently with refreshQueues
 

 Key: YARN-853
 URL: https://issues.apache.org/jira/browse/YARN-853
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler
Affects Versions: 3.0.0, 2.1.0-beta, 2.0.5-alpha
Reporter: Devaraj K
Assignee: Devaraj K
 Attachments: YARN-853-1.patch, YARN-853.patch


 If we update the yarn.scheduler.capacity.maximum-am-resource-percent / 
 yarn.scheduler.capacity.<queue-path>.maximum-am-resource-percent 
 configuration and then do a refreshQueues, it uses the new config value to 
 calculate Max Active Applications and Max Active Applications Per User. If we 
 add a new node after issuing the 'rmadmin -refreshQueues' command, it uses 
 the old maximum-am-resource-percent config value to calculate Max Active 
 Applications and Max Active Applications Per User. 
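
 The inconsistency suggests the two recomputation paths read the percentage 
 from different places. A minimal sketch of that failure mode (field and 
 method names are illustrative, not the actual CapacityScheduler code):

 {code:java}
 // Sketch only: a per-queue limit recomputed on two code paths diverges
 // when one path keeps a stale snapshot of the configured percentage.
 class LeafQueueSketch {
   private static final int MIN_ALLOC_MB = 1024;
   private final float percentAtCreation; // snapshot from queue creation
   private float currentPercent;          // updated by refreshQueues
   private int clusterMemoryMb;
   int maxActiveApplications;

   LeafQueueSketch(float percent, int clusterMemoryMb) {
     this.percentAtCreation = percent;
     this.currentPercent = percent;
     this.clusterMemoryMb = clusterMemoryMb;
     recompute(percent);
   }

   void refreshQueues(float newPercent) { // 'rmadmin -refreshQueues'
     currentPercent = newPercent;
     recompute(currentPercent);           // correct: new value applied
   }

   void nodeAdded(int nodeMemoryMb) {     // NM registers after the refresh
     clusterMemoryMb += nodeMemoryMb;
     recompute(percentAtCreation);        // BUG: stale snapshot applied
   }

   private void recompute(float percent) {
     maxActiveApplications = Math.max(1, (int) Math.ceil(
         (clusterMemoryMb / (float) MIN_ALLOC_MB) * percent));
   }
 }
 {code}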

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-852) TestAggregatedLogFormat.testContainerLogsFileAccess fails on Windows

2013-06-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689124#comment-13689124
 ] 

Hudson commented on YARN-852:
-

Integrated in Hadoop-Yarn-trunk #246 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/246/])
YARN-852. TestAggregatedLogFormat.testContainerLogsFileAccess fails on 
Windows. Contributed by Chuan Liu. (Revision 1494733)

 Result = FAILURE
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494733
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogFormat.java


 TestAggregatedLogFormat.testContainerLogsFileAccess fails on Windows
 

 Key: YARN-852
 URL: https://issues.apache.org/jira/browse/YARN-852
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: YARN-852-trunk.patch, YARN-852-trunk.patch


 The YARN unit test case fails on Windows when comparing the expected message 
 with the log message in the file. The expected message constructed in the 
 test case has two problems: 1) it uses Path.separator to concatenate path 
 strings. Path.separator is always a forward slash, which does not match the 
 backslash used in the log message. 2) On Windows, the default file owner is 
 the Administrators group if the file is created by an Administrators user. 
 The test expects the owner to be the current user.
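
 Problem 1 is easy to demonstrate in isolation (a hedged sketch; the strings 
 are illustrative, not the test's actual values):

 {code:java}
 import java.io.File;

 // On Windows, File.getPath() joins components with '\', so an expected
 // string built with a hardcoded "/" (what Path.separator amounts to)
 // never matches the logged path.
 public class SeparatorDemo {
   public static void main(String[] args) {
     String logged = new File("remoteLogs", "owner").getPath();
     String expected = "remoteLogs" + "/" + "owner";
     System.out.println(expected.equals(logged)); // true on Unix, false on Windows

     // A portable expectation uses the platform separator instead:
     String portable = "remoteLogs" + File.separator + "owner";
     System.out.println(portable.equals(logged)); // true on both
   }
 }
 {code}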

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-597) TestFSDownload fails on Windows because of dependencies on tar/gzip/jar tools

2013-06-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689117#comment-13689117
 ] 

Hudson commented on YARN-597:
-

Integrated in Hadoop-Yarn-trunk #246 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/246/])
YARN-597. Change attribution of YARN-597 from trunk to release 2.1.0-beta 
in CHANGES.txt. (cnauroth) (Revision 1494717)

 Result = FAILURE
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494717
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt


 TestFSDownload fails on Windows because of dependencies on tar/gzip/jar tools
 -

 Key: YARN-597
 URL: https://issues.apache.org/jira/browse/YARN-597
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: YARN-597.patch


 {{testDownloadArchive}}, {{testDownloadPatternJar}} and 
 {{testDownloadArchiveZip}} fail with a similar Shell ExitCodeException:
 {code}
 testDownloadArchiveZip(org.apache.hadoop.yarn.util.TestFSDownload)  Time 
 elapsed: 480 sec <<< ERROR!
 org.apache.hadoop.util.Shell$ExitCodeException: bash: line 0: cd: 
 /D:/svn/t/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/TestFSDownload:
  No such file or directory
 gzip: 1: No such file or directory
   at org.apache.hadoop.util.Shell.runCommand(Shell.java:377)
   at org.apache.hadoop.util.Shell.run(Shell.java:292)
   at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:497)
   at 
 org.apache.hadoop.yarn.util.TestFSDownload.createZipFile(TestFSDownload.java:225)
   at 
 org.apache.hadoop.yarn.util.TestFSDownload.testDownloadArchiveZip(TestFSDownload.java:503)
 {code}
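
 One way to remove the dependency on Unix tools is to build the test archives 
 in-process with java.util.zip; a sketch (the method name mirrors the test's 
 createZipFile, the rest is illustrative):

 {code:java}
 import java.io.File;
 import java.io.FileInputStream;
 import java.io.FileOutputStream;
 import java.io.IOException;
 import java.io.InputStream;
 import java.util.zip.ZipEntry;
 import java.util.zip.ZipOutputStream;

 // Creates the zip without shelling out to bash/gzip/tar, so the test
 // no longer depends on those tools being present on Windows.
 final class ZipArchives {
   static void createZipFile(File zip, File payload) throws IOException {
     try (ZipOutputStream out = new ZipOutputStream(new FileOutputStream(zip));
          InputStream in = new FileInputStream(payload)) {
       out.putNextEntry(new ZipEntry(payload.getName()));
       byte[] buf = new byte[4096];
       for (int n; (n = in.read(buf)) != -1; ) {
         out.write(buf, 0, n);
       }
       out.closeEntry();
     }
   }
 }
 {code}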

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-854) App submission fails on secure deploy

2013-06-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689123#comment-13689123
 ] 

Hudson commented on YARN-854:
-

Integrated in Hadoop-Yarn-trunk #246 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/246/])
YARN-854. Fixing YARN bugs that are failing applications in secure 
environment. Contributed by Omkar Vinit Joshi. (Revision 1494845)

 Result = FAILURE
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494845
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RegisterApplicationMasterResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ContainerLocalizer.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAMAuthorization.java


 App submission fails on secure deploy
 -

 Key: YARN-854
 URL: https://issues.apache.org/jira/browse/YARN-854
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Ramya Sunil
Assignee: Omkar Vinit Joshi
Priority: Blocker
 Fix For: 2.1.0-beta

 Attachments: YARN-854.20130619.1.patch, YARN-854.20130619.2.patch, 
 YARN-854.20130619.patch


 App submission on secure cluster fails with the following exception:
 {noformat}
 INFO mapreduce.Job: Job jobID failed with state FAILED due to: Application 
 applicationID failed 2 times due to AM Container for appattemptID exited with 
 exitCode: -1000 due to: App initialization failed (255) with output: main : 
 command provided 0
 main : user is qa_user
 javax.security.sasl.SaslException: DIGEST-MD5: digest response format 
 violation. Mismatched response. [Caused by 
 org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
 DIGEST-MD5: digest response format violation. Mismatched response.]
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
   at 
 org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
   at 
 org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:104)
   at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:65)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.main(ContainerLocalizer.java:348)
 Caused by: 
 org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
 DIGEST-MD5: digest response format violation. Mismatched response.
   at org.apache.hadoop.ipc.Client.call(Client.java:1298)
   at org.apache.hadoop.ipc.Client.call(Client.java:1250)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:204)
   at $Proxy7.heartbeat(Unknown Source)
   at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
   ... 3 more
 .Failing this attempt.. Failing the application.
 {noformat}
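
 The "Mismatched response" means the DIGEST-MD5 server computed a different 
 digest than the client, i.e. the two sides disagree on the shared secret 
 (here, the localizer's token). A self-contained reproduction with plain 
 javax.security.sasl and deliberately different secrets (all names are 
 illustrative, not YARN's localization protocol):

 {code:java}
 import javax.security.auth.callback.Callback;
 import javax.security.auth.callback.CallbackHandler;
 import javax.security.auth.callback.NameCallback;
 import javax.security.auth.callback.PasswordCallback;
 import javax.security.auth.callback.UnsupportedCallbackException;
 import javax.security.sasl.AuthorizeCallback;
 import javax.security.sasl.RealmCallback;
 import javax.security.sasl.Sasl;
 import javax.security.sasl.SaslClient;
 import javax.security.sasl.SaslServer;

 public class DigestMismatch {
   static CallbackHandler handler(final String user, final char[] secret) {
     return new CallbackHandler() {
       public void handle(Callback[] callbacks)
           throws UnsupportedCallbackException {
         for (Callback cb : callbacks) {
           if (cb instanceof NameCallback) {
             ((NameCallback) cb).setName(user);
           } else if (cb instanceof PasswordCallback) {
             ((PasswordCallback) cb).setPassword(secret);
           } else if (cb instanceof RealmCallback) {
             ((RealmCallback) cb).setText(((RealmCallback) cb).getDefaultText());
           } else if (cb instanceof AuthorizeCallback) {
             ((AuthorizeCallback) cb).setAuthorized(true);
           }
         }
       }
     };
   }

   public static void main(String[] args) throws Exception {
     // Server (NM side) and client (localizer side) hold different secrets.
     SaslServer server = Sasl.createSaslServer("DIGEST-MD5", "yarn",
         "nm.example.com", null, handler("qa_user", "nm-secret".toCharArray()));
     SaslClient client = Sasl.createSaslClient(new String[] {"DIGEST-MD5"},
         null, "yarn", "nm.example.com", null,
         handler("qa_user", "stale-secret".toCharArray()));

     byte[] challenge = server.evaluateResponse(new byte[0]); // server nonce
     byte[] response = client.evaluateChallenge(challenge);   // client digest
     server.evaluateResponse(response); // SaslException: Mismatched response.
   }
 }
 {code}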

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-861) TestContainerManager is failing

2013-06-20 Thread Devaraj K (JIRA)
Devaraj K created YARN-861:
--

 Summary: TestContainerManager is failing
 Key: YARN-861
 URL: https://issues.apache.org/jira/browse/YARN-861
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Devaraj K


https://builds.apache.org/job/Hadoop-Yarn-trunk/246/

{code:xml}
Running 
org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager
Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 19.249 sec 
<<< FAILURE!
testContainerManagerInitialization(org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager)
  Time elapsed: 0.286 sec <<< FAILURE!
junit.framework.ComparisonFailure: expected:[asf009.sp2.ygridcore.ne]t but 
was:[localhos]t
at junit.framework.Assert.assertEquals(Assert.java:85)

{code}
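
The mismatch points at the expected and actual hostnames coming from two 
different sources; a minimal illustration of the assumed cause (not the 
confirmed root cause):

{code:java}
import java.net.InetAddress;

public class HostSourceDemo {
  public static void main(String[] args) throws Exception {
    // What the assertion expects: the machine's resolved hostname,
    // e.g. "asf009.sp2.ygridcore.net" on the build slave.
    String resolved = InetAddress.getLocalHost().getHostName();

    // What the code under test can end up registering when it falls
    // back to a default bind address in its configuration.
    String configured = "localhost";

    System.out.println(resolved.equals(configured)); // false on build hosts
  }
}
{code}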

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-861) TestContainerManager is failing

2013-06-20 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated YARN-861:
---

  Component/s: nodemanager
 Priority: Critical  (was: Major)
Affects Version/s: 2.1.0-beta
   3.0.0

 TestContainerManager is failing
 ---

 Key: YARN-861
 URL: https://issues.apache.org/jira/browse/YARN-861
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Devaraj K
Priority: Critical

 https://builds.apache.org/job/Hadoop-Yarn-trunk/246/
 {code:xml}
 Running 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager
 Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 19.249 sec 
 <<< FAILURE!
 testContainerManagerInitialization(org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager)
   Time elapsed: 0.286 sec <<< FAILURE!
 junit.framework.ComparisonFailure: expected:[asf009.sp2.ygridcore.ne]t but 
 was:[localhos]t
   at junit.framework.Assert.assertEquals(Assert.java:85)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-852) TestAggregatedLogFormat.testContainerLogsFileAccess fails on Windows

2013-06-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689219#comment-13689219
 ] 

Hudson commented on YARN-852:
-

Integrated in Hadoop-Hdfs-trunk #1436 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1436/])
YARN-852. TestAggregatedLogFormat.testContainerLogsFileAccess fails on 
Windows. Contributed by Chuan Liu. (Revision 1494733)

 Result = FAILURE
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494733
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogFormat.java


 TestAggregatedLogFormat.testContainerLogsFileAccess fails on Windows
 

 Key: YARN-852
 URL: https://issues.apache.org/jira/browse/YARN-852
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: YARN-852-trunk.patch, YARN-852-trunk.patch


 The YARN unit test case fails on Windows when comparing the expected message 
 with the log message in the file. The expected message constructed in the 
 test case has two problems: 1) it uses Path.separator to concatenate path 
 strings. Path.separator is always a forward slash, which does not match the 
 backslash used in the log message. 2) On Windows, the default file owner is 
 the Administrators group if the file is created by an Administrators user. 
 The test expects the owner to be the current user.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-597) TestFSDownload fails on Windows because of dependencies on tar/gzip/jar tools

2013-06-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689212#comment-13689212
 ] 

Hudson commented on YARN-597:
-

Integrated in Hadoop-Hdfs-trunk #1436 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1436/])
YARN-597. Change attribution of YARN-597 from trunk to release 2.1.0-beta 
in CHANGES.txt. (cnauroth) (Revision 1494717)

 Result = FAILURE
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494717
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt


 TestFSDownload fails on Windows because of dependencies on tar/gzip/jar tools
 -

 Key: YARN-597
 URL: https://issues.apache.org/jira/browse/YARN-597
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: YARN-597.patch


 {{testDownloadArchive}}, {{testDownloadPatternJar}} and 
 {{testDownloadArchiveZip}} fail with a similar Shell ExitCodeException:
 {code}
 testDownloadArchiveZip(org.apache.hadoop.yarn.util.TestFSDownload)  Time 
 elapsed: 480 sec <<< ERROR!
 org.apache.hadoop.util.Shell$ExitCodeException: bash: line 0: cd: 
 /D:/svn/t/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/TestFSDownload:
  No such file or directory
 gzip: 1: No such file or directory
   at org.apache.hadoop.util.Shell.runCommand(Shell.java:377)
   at org.apache.hadoop.util.Shell.run(Shell.java:292)
   at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:497)
   at 
 org.apache.hadoop.yarn.util.TestFSDownload.createZipFile(TestFSDownload.java:225)
   at 
 org.apache.hadoop.yarn.util.TestFSDownload.testDownloadArchiveZip(TestFSDownload.java:503)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-597) TestFSDownload fails on Windows because of dependencies on tar/gzip/jar tools

2013-06-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689258#comment-13689258
 ] 

Hudson commented on YARN-597:
-

Integrated in Hadoop-Mapreduce-trunk #1463 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1463/])
YARN-597. Change attribution of YARN-597 from trunk to release 2.1.0-beta 
in CHANGES.txt. (cnauroth) (Revision 1494717)

 Result = SUCCESS
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494717
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt


 TestFSDownload fails on Windows because of dependencies on tar/gzip/jar tools
 -

 Key: YARN-597
 URL: https://issues.apache.org/jira/browse/YARN-597
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: YARN-597.patch


 {{testDownloadArchive}}, {{testDownloadPatternJar}} and 
 {{testDownloadArchiveZip}} fail with a similar Shell ExitCodeException:
 {code}
 testDownloadArchiveZip(org.apache.hadoop.yarn.util.TestFSDownload)  Time 
 elapsed: 480 sec <<< ERROR!
 org.apache.hadoop.util.Shell$ExitCodeException: bash: line 0: cd: 
 /D:/svn/t/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/TestFSDownload:
  No such file or directory
 gzip: 1: No such file or directory
   at org.apache.hadoop.util.Shell.runCommand(Shell.java:377)
   at org.apache.hadoop.util.Shell.run(Shell.java:292)
   at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:497)
   at 
 org.apache.hadoop.yarn.util.TestFSDownload.createZipFile(TestFSDownload.java:225)
   at 
 org.apache.hadoop.yarn.util.TestFSDownload.testDownloadArchiveZip(TestFSDownload.java:503)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-854) App submission fails on secure deploy

2013-06-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689264#comment-13689264
 ] 

Hudson commented on YARN-854:
-

Integrated in Hadoop-Mapreduce-trunk #1463 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1463/])
YARN-854. Fixing YARN bugs that are failing applications in secure 
environment. Contributed by Omkar Vinit Joshi. (Revision 1494845)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494845
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RegisterApplicationMasterResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ContainerLocalizer.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAMAuthorization.java


 App submission fails on secure deploy
 -

 Key: YARN-854
 URL: https://issues.apache.org/jira/browse/YARN-854
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Ramya Sunil
Assignee: Omkar Vinit Joshi
Priority: Blocker
 Fix For: 2.1.0-beta

 Attachments: YARN-854.20130619.1.patch, YARN-854.20130619.2.patch, 
 YARN-854.20130619.patch


 App submission on secure cluster fails with the following exception:
 {noformat}
 INFO mapreduce.Job: Job jobID failed with state FAILED due to: Application 
 applicationID failed 2 times due to AM Container for appattemptID exited with 
 exitCode: -1000 due to: App initialization failed (255) with output: main : 
 command provided 0
 main : user is qa_user
 javax.security.sasl.SaslException: DIGEST-MD5: digest response format 
 violation. Mismatched response. [Caused by 
 org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
 DIGEST-MD5: digest response format violation. Mismatched response.]
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
   at 
 org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
   at 
 org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:104)
   at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:65)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.main(ContainerLocalizer.java:348)
 Caused by: 
 org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
 DIGEST-MD5: digest response format violation. Mismatched response.
   at org.apache.hadoop.ipc.Client.call(Client.java:1298)
   at org.apache.hadoop.ipc.Client.call(Client.java:1250)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:204)
   at $Proxy7.heartbeat(Unknown Source)
   at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
   ... 3 more
 .Failing this attempt.. Failing the application.
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-852) TestAggregatedLogFormat.testContainerLogsFileAccess fails on Windows

2013-06-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689265#comment-13689265
 ] 

Hudson commented on YARN-852:
-

Integrated in Hadoop-Mapreduce-trunk #1463 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1463/])
YARN-852. TestAggregatedLogFormat.testContainerLogsFileAccess fails on 
Windows. Contributed by Chuan Liu. (Revision 1494733)

 Result = SUCCESS
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494733
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogFormat.java


 TestAggregatedLogFormat.testContainerLogsFileAccess fails on Windows
 

 Key: YARN-852
 URL: https://issues.apache.org/jira/browse/YARN-852
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: YARN-852-trunk.patch, YARN-852-trunk.patch


 The YARN unit test case fails on Windows when comparing the expected message 
 with the log message in the file. The expected message constructed in the 
 test case has two problems: 1) it uses Path.separator to concatenate path 
 strings. Path.separator is always a forward slash, which does not match the 
 backslash used in the log message. 2) On Windows, the default file owner is 
 the Administrators group if the file is created by an Administrators user. 
 The test expects the owner to be the current user.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter

2013-06-20 Thread Hitesh Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689322#comment-13689322
 ] 

Hitesh Shah commented on YARN-727:
--

@Xuan, the webservices aspects can be handled in a separate jira. Please go 
ahead and file one.

 ClientRMProtocol.getAllApplications should accept ApplicationType as a 
 parameter
 

 Key: YARN-727
 URL: https://issues.apache.org/jira/browse/YARN-727
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Siddharth Seth
Assignee: Xuan Gong
 Attachments: YARN-727.10.patch, YARN-727.11.patch, YARN-727.12.patch, 
 YARN-727.13.patch, YARN-727.14.patch, YARN-727.1.patch, YARN-727.2.patch, 
 YARN-727.3.patch, YARN-727.4.patch, YARN-727.5.patch, YARN-727.6.patch, 
 YARN-727.7.patch, YARN-727.8.patch, YARN-727.9.patch


 Now that an ApplicationType is registered on ApplicationSubmission, 
 getAllApplications should be able to use this string to query for a specific 
 application type.
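
 Until the protocol carries the filter, the effect can be approximated on the 
 client; a sketch (assumes only that ApplicationReport exposes the submitted 
 type):

 {code:java}
 import java.util.ArrayList;
 import java.util.List;

 import org.apache.hadoop.yarn.api.records.ApplicationReport;

 // Client-side approximation of the requested server-side filter; the
 // patch's goal is to accept this predicate in getAllApplications itself.
 public final class AppTypeFilter {
   public static List<ApplicationReport> byType(
       List<ApplicationReport> reports, String applicationType) {
     List<ApplicationReport> matched = new ArrayList<ApplicationReport>();
     for (ApplicationReport report : reports) {
       if (applicationType.equalsIgnoreCase(report.getApplicationType())) {
         matched.add(report);
       }
     }
     return matched;
   }
 }
 {code}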

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-819) ResourceManager and NodeManager mismatched version should create an error

2013-06-20 Thread Robert Parker (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Parker updated YARN-819:
---

Component/s: resourcemanager
 nodemanager

 ResourceManager and NodeManager mismatched version should create an error
 -

 Key: YARN-819
 URL: https://issues.apache.org/jira/browse/YARN-819
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager, resourcemanager
Affects Versions: 2.0.4-alpha, 0.23.8
Reporter: Robert Parker
Assignee: Robert Parker

 Our use case: during an upgrade on a large cluster, several NodeManagers may 
 not restart with the new version. Once the RM comes back up, the NodeManager 
 will re-register with the RM without issue.
 The NM should report its version to the RM. The RM should have a 
 configuration to disable the check (default), require the version to equal 
 the RM's (to prevent a config change for each release), require it to be 
 equal to or greater than the RM's (to allow NM upgrades), or finally to 
 accept an explicit version or version range.
 The RM should also have a configuration for how to treat a mismatch: 
 REJECT, or REBOOT the NM.
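
 A sketch of the proposed knob (the enum names and the version comparison are 
 illustrative; real version strings would need component-wise comparison):

 {code:java}
 // Sketch of the proposed policy, not actual YARN configuration.
 enum VersionCheck { DISABLED, EQUAL, EQUAL_OR_GREATER }
 enum MismatchAction { REJECT, REBOOT }

 final class NodeVersionPolicy {
   private final VersionCheck check;
   private final MismatchAction action;

   NodeVersionPolicy(VersionCheck check, MismatchAction action) {
     this.check = check;
     this.action = action;
   }

   /** Returns null when registration may proceed, else the action to take. */
   MismatchAction onRegister(String rmVersion, String nmVersion) {
     boolean ok;
     switch (check) {
       case EQUAL:
         ok = nmVersion.equals(rmVersion);
         break;
       case EQUAL_OR_GREATER:
         // Simplified: lexicographic; parse major.minor.patch in real code.
         ok = nmVersion.compareTo(rmVersion) >= 0;
         break;
       default: // DISABLED
         ok = true;
     }
     return ok ? null : action;
   }
 }
 {code}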

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-819) ResourceManager and NodeManager mismatched version should create an error

2013-06-20 Thread Robert Parker (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Parker updated YARN-819:
---

Target Version/s: 3.0.0, 2.0.5-alpha  (was: 3.0.0, 2.0.5-alpha, 0.23.9)

 ResourceManager and NodeManager mismatched version should create an error
 -

 Key: YARN-819
 URL: https://issues.apache.org/jira/browse/YARN-819
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager, resourcemanager
Affects Versions: 2.0.4-alpha, 0.23.8
Reporter: Robert Parker
Assignee: Robert Parker

 Our use case: during an upgrade on a large cluster, several NodeManagers may 
 not restart with the new version. Once the RM comes back up, the NodeManager 
 will re-register with the RM without issue.
 The NM should report its version to the RM. The RM should have a 
 configuration to disable the check (default), require the version to equal 
 the RM's (to prevent a config change for each release), require it to be 
 equal to or greater than the RM's (to allow NM upgrades), or finally to 
 accept an explicit version or version range.
 The RM should also have a configuration for how to treat a mismatch: 
 REJECT, or REBOOT the NM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-819) ResourceManager and NodeManager mismatched version should create an error

2013-06-20 Thread Robert Parker (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Parker updated YARN-819:
---

Affects Version/s: 2.0.4-alpha

 ResourceManager and NodeManager mismatched version should create an error
 -

 Key: YARN-819
 URL: https://issues.apache.org/jira/browse/YARN-819
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.0.4-alpha, 0.23.8
Reporter: Robert Parker
Assignee: Robert Parker

 Our use case: during an upgrade on a large cluster, several NodeManagers may 
 not restart with the new version. Once the RM comes back up, the NodeManager 
 will re-register with the RM without issue.
 The NM should report its version to the RM. The RM should have a 
 configuration to disable the check (default), require the version to equal 
 the RM's (to prevent a config change for each release), require it to be 
 equal to or greater than the RM's (to allow NM upgrades), or finally to 
 accept an explicit version or version range.
 The RM should also have a configuration for how to treat a mismatch: 
 REJECT, or REBOOT the NM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-819) ResourceManager and NodeManager should check for a minimum allowed version

2013-06-20 Thread Robert Parker (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Parker updated YARN-819:
---

Summary: ResourceManager and NodeManager should check for a minimum allowed 
version  (was: ResourceManager and NodeManager mismatched version should create 
an error)

 ResourceManager and NodeManager should check for a minimum allowed version
 --

 Key: YARN-819
 URL: https://issues.apache.org/jira/browse/YARN-819
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager, resourcemanager
Affects Versions: 2.0.4-alpha, 0.23.8
Reporter: Robert Parker
Assignee: Robert Parker

 Our use case: during an upgrade on a large cluster, several NodeManagers may 
 not restart with the new version. Once the RM comes back up, the NodeManager 
 will re-register with the RM without issue.
 The NM should report its version to the RM. The RM should have a 
 configuration to disable the check (default), require the version to equal 
 the RM's (to prevent a config change for each release), require it to be 
 equal to or greater than the RM's (to allow NM upgrades), or finally to 
 accept an explicit version or version range.
 The RM should also have a configuration for how to treat a mismatch: 
 REJECT, or REBOOT the NM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-862) ResourceManager and NodeManager versions should match on node registration or error out

2013-06-20 Thread Robert Parker (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Parker updated YARN-862:
---

Summary: ResourceManager and NodeManager versions should match on node 
registration or error out  (was: ResourceManager and NodeManager versions 
should on node registration or error out)

 ResourceManager and NodeManager versions should match on node registration or 
 error out
 ---

 Key: YARN-862
 URL: https://issues.apache.org/jira/browse/YARN-862
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager, resourcemanager
Affects Versions: 0.23.8
Reporter: Robert Parker
Assignee: Robert Parker

 For branch-0.23 the versions of the node manager and the resource manager 
 should match to complete a successful registration.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-862) ResourceManager and NodeManager versions should on node registration or error out

2013-06-20 Thread Robert Parker (JIRA)
Robert Parker created YARN-862:
--

 Summary: ResourceManager and NodeManager versions should on node 
registration or error out
 Key: YARN-862
 URL: https://issues.apache.org/jira/browse/YARN-862
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager, resourcemanager
Affects Versions: 0.23.8
Reporter: Robert Parker
Assignee: Robert Parker


For branch-0.23 the versions of the node manager and the resource manager 
should match to complete a successful registration.
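
A sketch of the strict check proposed here (illustrative names, not the 
actual patch):

{code:java}
import java.io.IOException;

// Branch-0.23 proposal: refuse registration unless versions are equal.
final class StrictVersionCheck {
  static void checkOnRegister(String rmVersion, String nmVersion)
      throws IOException {
    if (!rmVersion.equals(nmVersion)) {
      throw new IOException("NodeManager version " + nmVersion
          + " does not match ResourceManager version " + rmVersion
          + "; rejecting registration");
    }
  }
}
{code}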

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-863) Crash in YARNRunner because of NULL dagClient

2013-06-20 Thread Bikas Saha (JIRA)
Bikas Saha created YARN-863:
---

 Summary: Crash in YARNRunner because of NULL dagClient
 Key: YARN-863
 URL: https://issues.apache.org/jira/browse/YARN-863
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter

2013-06-20 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-727:
---

Attachment: YARN-727.15.patch

 ClientRMProtocol.getAllApplications should accept ApplicationType as a 
 parameter
 

 Key: YARN-727
 URL: https://issues.apache.org/jira/browse/YARN-727
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Siddharth Seth
Assignee: Xuan Gong
 Attachments: YARN-727.10.patch, YARN-727.11.patch, YARN-727.12.patch, 
 YARN-727.13.patch, YARN-727.14.patch, YARN-727.15.patch, YARN-727.1.patch, 
 YARN-727.2.patch, YARN-727.3.patch, YARN-727.4.patch, YARN-727.5.patch, 
 YARN-727.6.patch, YARN-727.7.patch, YARN-727.8.patch, YARN-727.9.patch


 Now that an ApplicationType is registered on ApplicationSubmission, 
 getAllApplications should be able to use this string to query for a specific 
 application type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-864) YARN NM leaking containers with CGroups

2013-06-20 Thread Chris Riccomini (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Riccomini updated YARN-864:
-

Summary: YARN NM leaking containers with CGroups  (was: YARN NM leaking 
containers)

 YARN NM leaking containers with CGroups
 ---

 Key: YARN-864
 URL: https://issues.apache.org/jira/browse/YARN-864
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.0.5-alpha
 Environment: YARN 2.0.5-alpha with patches applied for YARN-799 and 
 YARN-600.
Reporter: Chris Riccomini

 Hey Guys,
 I'm running YARN 2.0.5-alpha with CGroups and stateful RM turned on, and I'm 
 seeing containers getting leaked by the NMs. I'm not quite sure what's going 
 on -- has anyone seen this before? I'm concerned that maybe it's a 
 misunderstanding on my part about how YARN's lifecycle works.
 When I look in my AM logs for my app (not an MR app master), I see:
 2013-06-19 05:34:22 AppMasterTaskManager [INFO] Got an exit code of -100. 
 This means that container container_1371141151815_0008_03_02 was killed 
 by YARN, either due to being released by the application master or being 
 'lost' due to node failures etc.
 2013-06-19 05:34:22 AppMasterTaskManager [INFO] Released container 
 container_1371141151815_0008_03_02 was assigned task ID 0. Requesting a 
 new container for the task.
 The AM has been running steadily the whole time. Here's what the NM logs say:
 {noformat}
 05:34:59,783  WARN AsyncDispatcher:109 - Interrupted Exception while stopping
 java.lang.InterruptedException
 at java.lang.Object.wait(Native Method)
 at java.lang.Thread.join(Thread.java:1143)
 at java.lang.Thread.join(Thread.java:1196)
 at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.stop(AsyncDispatcher.java:107)
 at 
 org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:99)
 at 
 org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:89)
 at 
 org.apache.hadoop.yarn.server.nodemanager.NodeManager.stop(NodeManager.java:209)
 at 
 org.apache.hadoop.yarn.server.nodemanager.NodeManager.handle(NodeManager.java:336)
 at 
 org.apache.hadoop.yarn.server.nodemanager.NodeManager.handle(NodeManager.java:61)
 at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:130)
 at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:77)
 at java.lang.Thread.run(Thread.java:619)
 05:35:00,314  WARN ContainersMonitorImpl:463 - 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl
  is interrupted. Exiting.
 05:35:00,434  WARN CgroupsLCEResourcesHandler:166 - Unable to delete cgroup 
 at: /cgroup/cpu/hadoop-yarn/container_1371141151815_0006_01_001598
 05:35:00,434  WARN CgroupsLCEResourcesHandler:166 - Unable to delete cgroup 
 at: /cgroup/cpu/hadoop-yarn/container_1371141151815_0008_03_02
 05:35:00,434  WARN ContainerLaunch:247 - Failed to launch container.
 java.io.IOException: java.lang.InterruptedException
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:205)
 at org.apache.hadoop.util.Shell.run(Shell.java:129)
 at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
 at 
 org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:230)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:242)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:68)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:619)
 05:35:00,434  WARN ContainerLaunch:247 - Failed to launch container.
 java.io.IOException: java.lang.InterruptedException
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:205)
 at org.apache.hadoop.util.Shell.run(Shell.java:129)
 at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
 at 
 org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:230)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:242)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:68)
 

[jira] [Created] (YARN-864) YARN NM leaking containers

2013-06-20 Thread Chris Riccomini (JIRA)
Chris Riccomini created YARN-864:


 Summary: YARN NM leaking containers
 Key: YARN-864
 URL: https://issues.apache.org/jira/browse/YARN-864
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.0.5-alpha
 Environment: YARN 2.0.5-alpha with patches applied for YARN-799 and 
YARN-600.
Reporter: Chris Riccomini


Hey Guys,

I'm running YARN 2.0.5-alpha with CGroups and stateful RM turned on, and I'm 
seeing containers getting leaked by the NMs. I'm not quite sure what's going on 
-- has anyone seen this before? I'm concerned that maybe it's a 
misunderstanding on my part about how YARN's lifecycle works.

When I look in my AM logs for my app (not an MR app master), I see:

2013-06-19 05:34:22 AppMasterTaskManager [INFO] Got an exit code of -100. This 
means that container container_1371141151815_0008_03_02 was killed by YARN, 
either due to being released by the application master or being 'lost' due to 
node failures etc.
2013-06-19 05:34:22 AppMasterTaskManager [INFO] Released container 
container_1371141151815_0008_03_02 was assigned task ID 0. Requesting a new 
container for the task.

The AM has been running steadily the whole time. Here's what the NM logs say:

05:34:59,783  WARN AsyncDispatcher:109 - Interrupted Exception while stopping
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1143)
at java.lang.Thread.join(Thread.java:1196)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher.stop(AsyncDispatcher.java:107)
at 
org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:99)
at 
org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:89)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.stop(NodeManager.java:209)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.handle(NodeManager.java:336)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.handle(NodeManager.java:61)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:130)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:77)
at java.lang.Thread.run(Thread.java:619)
05:35:00,314  WARN ContainersMonitorImpl:463 - 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl
 is interrupted. Exiting.
05:35:00,434  WARN CgroupsLCEResourcesHandler:166 - Unable to delete cgroup at: 
/cgroup/cpu/hadoop-yarn/container_1371141151815_0006_01_001598
05:35:00,434  WARN CgroupsLCEResourcesHandler:166 - Unable to delete cgroup at: 
/cgroup/cpu/hadoop-yarn/container_1371141151815_0008_03_02
05:35:00,434  WARN ContainerLaunch:247 - Failed to launch container.
java.io.IOException: java.lang.InterruptedException
at org.apache.hadoop.util.Shell.runCommand(Shell.java:205)
at org.apache.hadoop.util.Shell.run(Shell.java:129)
at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
at 
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:230)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:242)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:68)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
05:35:00,434  WARN ContainerLaunch:247 - Failed to launch container.
java.io.IOException: java.lang.InterruptedException
at org.apache.hadoop.util.Shell.runCommand(Shell.java:205)
at org.apache.hadoop.util.Shell.run(Shell.java:129)
at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
at 
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:230)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:242)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:68)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

[jira] [Updated] (YARN-864) YARN NM leaking containers

2013-06-20 Thread Chris Riccomini (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Riccomini updated YARN-864:
-

Description: 
Hey Guys,

I'm running YARN 2.0.5-alpha with CGroups and stateful RM turned on, and I'm 
seeing containers getting leaked by the NMs. I'm not quite sure what's going on 
-- has anyone seen this before? I'm concerned that maybe it's a 
misunderstanding on my part about how YARN's lifecycle works.

When I look in my AM logs for my app (not an MR app master), I see:

2013-06-19 05:34:22 AppMasterTaskManager [INFO] Got an exit code of -100. This 
means that container container_1371141151815_0008_03_02 was killed by YARN, 
either due to being released by the application master or being 'lost' due to 
node failures etc.
2013-06-19 05:34:22 AppMasterTaskManager [INFO] Released container 
container_1371141151815_0008_03_02 was assigned task ID 0. Requesting a new 
container for the task.

The AM has been running steadily the whole time. Here's what the NM logs say:

{noformat}
05:34:59,783  WARN AsyncDispatcher:109 - Interrupted Exception while stopping
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1143)
at java.lang.Thread.join(Thread.java:1196)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher.stop(AsyncDispatcher.java:107)
at 
org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:99)
at 
org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:89)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.stop(NodeManager.java:209)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.handle(NodeManager.java:336)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.handle(NodeManager.java:61)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:130)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:77)
at java.lang.Thread.run(Thread.java:619)
05:35:00,314  WARN ContainersMonitorImpl:463 - 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl
 is interrupted. Exiting.
05:35:00,434  WARN CgroupsLCEResourcesHandler:166 - Unable to delete cgroup at: 
/cgroup/cpu/hadoop-yarn/container_1371141151815_0006_01_001598
05:35:00,434  WARN CgroupsLCEResourcesHandler:166 - Unable to delete cgroup at: 
/cgroup/cpu/hadoop-yarn/container_1371141151815_0008_03_02
05:35:00,434  WARN ContainerLaunch:247 - Failed to launch container.
java.io.IOException: java.lang.InterruptedException
at org.apache.hadoop.util.Shell.runCommand(Shell.java:205)
at org.apache.hadoop.util.Shell.run(Shell.java:129)
at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
at 
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:230)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:242)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:68)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
05:35:00,434  WARN ContainerLaunch:247 - Failed to launch container.
java.io.IOException: java.lang.InterruptedException
at org.apache.hadoop.util.Shell.runCommand(Shell.java:205)
at org.apache.hadoop.util.Shell.run(Shell.java:129)
at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
at 
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:230)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:242)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:68)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
{noformat}

And, if I look on the machine that's running 
container_1371141151815_0008_03_02, I see:

$ ps -ef | grep container_1371141151815_0008_03_02
criccomi  5365 27915 38 Jun18 ?        21:35:05 
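
The "Unable to delete cgroup" warnings are consistent with the kernel 
refusing to remove a cgroup directory while a task is still attached, which 
is also how a still-running (leaked) container process like the one above 
would look to the NM. A sketch of cleanup with a bounded retry (illustrative, 
not the actual CgroupsLCEResourcesHandler code):

{code:java}
import java.io.File;

// rmdir on a cgroup directory fails while its tasks file is non-empty,
// so deletion is retried briefly; a persistent failure here usually
// means a container process is still alive inside the cgroup.
final class CgroupCleanup {
  static boolean deleteCgroup(String cgroupPath, long timeoutMs)
      throws InterruptedException {
    File dir = new File(cgroupPath);
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      if (dir.delete()) {
        return true; // kernel accepted the rmdir: the cgroup was empty
      }
      Thread.sleep(20);
    }
    return false; // caller logs "Unable to delete cgroup at: ..."
  }
}
{code}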

[jira] [Updated] (YARN-864) YARN NM leaking containers

2013-06-20 Thread Chris Riccomini (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Riccomini updated YARN-864:
-

Description: 
Hey Guys,

I'm running YARN 2.0.5-alpha with CGroups and stateful RM turned on, and I'm 
seeing containers getting leaked by the NMs. I'm not quite sure what's going on 
-- has anyone seen this before? I'm concerned that maybe it's a 
misunderstanding on my part about how YARN's lifecycle works.

When I look in my AM logs for my app (not an MR app master), I see:

2013-06-19 05:34:22 AppMasterTaskManager [INFO] Got an exit code of -100. This 
means that container container_1371141151815_0008_03_02 was killed by YARN, 
either due to being released by the application master or being 'lost' due to 
node failures etc.
2013-06-19 05:34:22 AppMasterTaskManager [INFO] Released container 
container_1371141151815_0008_03_02 was assigned task ID 0. Requesting a new 
container for the task.

The AM has been running steadily the whole time. Here's what the NM logs say:

{noformat}
05:34:59,783  WARN AsyncDispatcher:109 - Interrupted Exception while stopping
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1143)
at java.lang.Thread.join(Thread.java:1196)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher.stop(AsyncDispatcher.java:107)
at 
org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:99)
at 
org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:89)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.stop(NodeManager.java:209)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.handle(NodeManager.java:336)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.handle(NodeManager.java:61)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:130)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:77)
at java.lang.Thread.run(Thread.java:619)
05:35:00,314  WARN ContainersMonitorImpl:463 - 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl
 is interrupted. Exiting.
05:35:00,434  WARN CgroupsLCEResourcesHandler:166 - Unable to delete cgroup at: 
/cgroup/cpu/hadoop-yarn/container_1371141151815_0006_01_001598
05:35:00,434  WARN CgroupsLCEResourcesHandler:166 - Unable to delete cgroup at: 
/cgroup/cpu/hadoop-yarn/container_1371141151815_0008_03_02
05:35:00,434  WARN ContainerLaunch:247 - Failed to launch container.
java.io.IOException: java.lang.InterruptedException
at org.apache.hadoop.util.Shell.runCommand(Shell.java:205)
at org.apache.hadoop.util.Shell.run(Shell.java:129)
at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
at 
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:230)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:242)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:68)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
05:35:00,434  WARN ContainerLaunch:247 - Failed to launch container.
java.io.IOException: java.lang.InterruptedException
at org.apache.hadoop.util.Shell.runCommand(Shell.java:205)
at org.apache.hadoop.util.Shell.run(Shell.java:129)
at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
at 
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:230)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:242)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:68)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
{noformat}

And, if I look on the machine that's running 
container_1371141151815_0008_03_02, I see:

{noformat}
$ ps -ef | grep container_1371141151815_0008_03_02
criccomi  5365 27915 38 Jun18 ?

[jira] [Commented] (YARN-862) ResourceManager and NodeManager versions should match on node registration or error out

2013-06-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689469#comment-13689469
 ] 

Hadoop QA commented on YARN-862:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12588878/YARN-862-b0.23-v1.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1364//console

This message is automatically generated.

 ResourceManager and NodeManager versions should match on node registration or 
 error out
 ---

 Key: YARN-862
 URL: https://issues.apache.org/jira/browse/YARN-862
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager, resourcemanager
Affects Versions: 0.23.8
Reporter: Robert Parker
Assignee: Robert Parker
 Attachments: YARN-862-b0.23-v1.patch


 For branch-0.23 the versions of the node manager and the resource manager 
 should match to complete a successful registration.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-864) YARN NM leaking containers with CGroups

2013-06-20 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689481#comment-13689481
 ] 

Vinod Kumar Vavilapalli commented on YARN-864:
--

In the log, NodeManager.stop() is getting called. Do you know why this is 
happening? You can check the RM logs. Things have changed a bit from 2.0.5 to 
2.1.0, so I'll have to look at the old code.

 YARN NM leaking containers with CGroups
 ---

 Key: YARN-864
 URL: https://issues.apache.org/jira/browse/YARN-864
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.0.5-alpha
 Environment: YARN 2.0.5-alpha with patches applied for YARN-799 and 
 YARN-600.
Reporter: Chris Riccomini

 Hey Guys,
 I'm running YARN 2.0.5-alpha with CGroups and stateful RM turned on, and I'm 
 seeing containers getting leaked by the NMs. I'm not quite sure what's going 
 on -- has anyone seen this before? I'm concerned that maybe it's a 
 misunderstanding on my part about how YARN's lifecycle works.
 When I look in my AM logs for my app (not an MR app master), I see:
 2013-06-19 05:34:22 AppMasterTaskManager [INFO] Got an exit code of -100. 
 This means that container container_1371141151815_0008_03_02 was killed 
 by YARN, either due to being released by the application master or being 
 'lost' due to node failures etc.
 2013-06-19 05:34:22 AppMasterTaskManager [INFO] Released container 
 container_1371141151815_0008_03_02 was assigned task ID 0. Requesting a 
 new container for the task.
 The AM has been running steadily the whole time. Here's what the NM logs say:
 {noformat}
 05:34:59,783  WARN AsyncDispatcher:109 - Interrupted Exception while stopping
 java.lang.InterruptedException
 at java.lang.Object.wait(Native Method)
 at java.lang.Thread.join(Thread.java:1143)
 at java.lang.Thread.join(Thread.java:1196)
 at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.stop(AsyncDispatcher.java:107)
 at 
 org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:99)
 at 
 org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:89)
 at 
 org.apache.hadoop.yarn.server.nodemanager.NodeManager.stop(NodeManager.java:209)
 at 
 org.apache.hadoop.yarn.server.nodemanager.NodeManager.handle(NodeManager.java:336)
 at 
 org.apache.hadoop.yarn.server.nodemanager.NodeManager.handle(NodeManager.java:61)
 at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:130)
 at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:77)
 at java.lang.Thread.run(Thread.java:619)
 05:35:00,314  WARN ContainersMonitorImpl:463 - 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl
  is interrupted. Exiting.
 05:35:00,434  WARN CgroupsLCEResourcesHandler:166 - Unable to delete cgroup 
 at: /cgroup/cpu/hadoop-yarn/container_1371141151815_0006_01_001598
 05:35:00,434  WARN CgroupsLCEResourcesHandler:166 - Unable to delete cgroup 
 at: /cgroup/cpu/hadoop-yarn/container_1371141151815_0008_03_02
 05:35:00,434  WARN ContainerLaunch:247 - Failed to launch container.
 java.io.IOException: java.lang.InterruptedException
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:205)
 at org.apache.hadoop.util.Shell.run(Shell.java:129)
 at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
 at 
 org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:230)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:242)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:68)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:619)
 05:35:00,434  WARN ContainerLaunch:247 - Failed to launch container.
 java.io.IOException: java.lang.InterruptedException
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:205)
 at org.apache.hadoop.util.Shell.run(Shell.java:129)
 at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
 at 
 org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:230)
 at 
 

[jira] [Commented] (YARN-478) fix coverage org.apache.hadoop.yarn.webapp.log

2013-06-20 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689484#comment-13689484
 ] 

Jonathan Eagles commented on YARN-478:
--

+1. Looks like some more good additions to coverage, Aleksey.

*org.apache.hadoop.yarn.server.security*
ApplicationACLsManager 0% -> 66.7%

*org.apache.hadoop.yarn.webapp.log*
AggregatedLogsBlock 0% -> 66.7%

 fix coverage org.apache.hadoop.yarn.webapp.log
 --

 Key: YARN-478
 URL: https://issues.apache.org/jira/browse/YARN-478
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
Reporter: Aleksey Gorshkov
Assignee: Aleksey Gorshkov
 Attachments: YARN-478-branch-0.23-v1.patch, YARN-478-trunk.patch, 
 YARN-478-trunk-v1.patch


 fix coverage org.apache.hadoop.yarn.webapp.log
 one patch for trunk, branch-2, branch-0.23

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-864) YARN NM leaking containers with CGroups

2013-06-20 Thread Chris Riccomini (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689491#comment-13689491
 ] 

Chris Riccomini commented on YARN-864:
--

Hey Vinod,

I have no idea why it would be called.

The ps tree shows the NM running since June 13 (this log trace is from the 
19th).

{noformat}
$ ps -ef | grep Node
app  27915 27655  2 Jun13 ?        04:01:20 
/export/apps/jdk/JDK-1_6_0_21/bin/java -Dproc_nodemanager...
{noformat}

Cheers,
Chris

 YARN NM leaking containers with CGroups
 ---

 Key: YARN-864
 URL: https://issues.apache.org/jira/browse/YARN-864
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.0.5-alpha
 Environment: YARN 2.0.5-alpha with patches applied for YARN-799 and 
 YARN-600.
Reporter: Chris Riccomini

 Hey Guys,
 I'm running YARN 2.0.5-alpha with CGroups and stateful RM turned on, and I'm 
 seeing containers getting leaked by the NMs. I'm not quite sure what's going 
 on -- has anyone seen this before? I'm concerned that maybe it's a 
 misunderstanding on my part about how YARN's lifecycle works.
 When I look in my AM logs for my app (not an MR app master), I see:
 2013-06-19 05:34:22 AppMasterTaskManager [INFO] Got an exit code of -100. 
 This means that container container_1371141151815_0008_03_02 was killed 
 by YARN, either due to being released by the application master or being 
 'lost' due to node failures etc.
 2013-06-19 05:34:22 AppMasterTaskManager [INFO] Released container 
 container_1371141151815_0008_03_02 was assigned task ID 0. Requesting a 
 new container for the task.
 The AM has been running steadily the whole time. Here's what the NM logs say:
 {noformat}
 05:34:59,783  WARN AsyncDispatcher:109 - Interrupted Exception while stopping
 java.lang.InterruptedException
 at java.lang.Object.wait(Native Method)
 at java.lang.Thread.join(Thread.java:1143)
 at java.lang.Thread.join(Thread.java:1196)
 at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.stop(AsyncDispatcher.java:107)
 at 
 org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:99)
 at 
 org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:89)
 at 
 org.apache.hadoop.yarn.server.nodemanager.NodeManager.stop(NodeManager.java:209)
 at 
 org.apache.hadoop.yarn.server.nodemanager.NodeManager.handle(NodeManager.java:336)
 at 
 org.apache.hadoop.yarn.server.nodemanager.NodeManager.handle(NodeManager.java:61)
 at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:130)
 at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:77)
 at java.lang.Thread.run(Thread.java:619)
 05:35:00,314  WARN ContainersMonitorImpl:463 - 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl
  is interrupted. Exiting.
 05:35:00,434  WARN CgroupsLCEResourcesHandler:166 - Unable to delete cgroup 
 at: /cgroup/cpu/hadoop-yarn/container_1371141151815_0006_01_001598
 05:35:00,434  WARN CgroupsLCEResourcesHandler:166 - Unable to delete cgroup 
 at: /cgroup/cpu/hadoop-yarn/container_1371141151815_0008_03_02
 05:35:00,434  WARN ContainerLaunch:247 - Failed to launch container.
 java.io.IOException: java.lang.InterruptedException
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:205)
 at org.apache.hadoop.util.Shell.run(Shell.java:129)
 at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
 at 
 org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:230)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:242)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:68)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:619)
 05:35:00,434  WARN ContainerLaunch:247 - Failed to launch container.
 java.io.IOException: java.lang.InterruptedException
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:205)
 at org.apache.hadoop.util.Shell.run(Shell.java:129)
 at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
 at 
 

[jira] [Commented] (YARN-864) YARN NM leaking containers with CGroups

2013-06-20 Thread Chris Riccomini (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689493#comment-13689493
 ] 

Chris Riccomini commented on YARN-864:
--

BTW, my NM/RM logs are at WARN level right now. I'm going to switch to 
INFO and see if there's more detail.

 YARN NM leaking containers with CGroups
 ---

 Key: YARN-864
 URL: https://issues.apache.org/jira/browse/YARN-864
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.0.5-alpha
 Environment: YARN 2.0.5-alpha with patches applied for YARN-799 and 
 YARN-600.
Reporter: Chris Riccomini

 Hey Guys,
 I'm running YARN 2.0.5-alpha with CGroups and stateful RM turned on, and I'm 
 seeing containers getting leaked by the NMs. I'm not quite sure what's going 
 on -- has anyone seen this before? I'm concerned that maybe it's a 
 misunderstanding on my part about how YARN's lifecycle works.
 When I look in my AM logs for my app (not an MR app master), I see:
 2013-06-19 05:34:22 AppMasterTaskManager [INFO] Got an exit code of -100. 
 This means that container container_1371141151815_0008_03_02 was killed 
 by YARN, either due to being released by the application master or being 
 'lost' due to node failures etc.
 2013-06-19 05:34:22 AppMasterTaskManager [INFO] Released container 
 container_1371141151815_0008_03_02 was assigned task ID 0. Requesting a 
 new container for the task.
 The AM has been running steadily the whole time. Here's what the NM logs say:
 {noformat}
 05:34:59,783  WARN AsyncDispatcher:109 - Interrupted Exception while stopping
 java.lang.InterruptedException
 at java.lang.Object.wait(Native Method)
 at java.lang.Thread.join(Thread.java:1143)
 at java.lang.Thread.join(Thread.java:1196)
 at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.stop(AsyncDispatcher.java:107)
 at 
 org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:99)
 at 
 org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:89)
 at 
 org.apache.hadoop.yarn.server.nodemanager.NodeManager.stop(NodeManager.java:209)
 at 
 org.apache.hadoop.yarn.server.nodemanager.NodeManager.handle(NodeManager.java:336)
 at 
 org.apache.hadoop.yarn.server.nodemanager.NodeManager.handle(NodeManager.java:61)
 at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:130)
 at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:77)
 at java.lang.Thread.run(Thread.java:619)
 05:35:00,314  WARN ContainersMonitorImpl:463 - 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl
  is interrupted. Exiting.
 05:35:00,434  WARN CgroupsLCEResourcesHandler:166 - Unable to delete cgroup 
 at: /cgroup/cpu/hadoop-yarn/container_1371141151815_0006_01_001598
 05:35:00,434  WARN CgroupsLCEResourcesHandler:166 - Unable to delete cgroup 
 at: /cgroup/cpu/hadoop-yarn/container_1371141151815_0008_03_02
 05:35:00,434  WARN ContainerLaunch:247 - Failed to launch container.
 java.io.IOException: java.lang.InterruptedException
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:205)
 at org.apache.hadoop.util.Shell.run(Shell.java:129)
 at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
 at 
 org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:230)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:242)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:68)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:619)
 05:35:00,434  WARN ContainerLaunch:247 - Failed to launch container.
 java.io.IOException: java.lang.InterruptedException
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:205)
 at org.apache.hadoop.util.Shell.run(Shell.java:129)
 at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
 at 
 org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:230)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:242)
 at 
 

[jira] [Commented] (YARN-862) ResourceManager and NodeManager versions should match on node registration or error out

2013-06-20 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689496#comment-13689496
 ] 

Omkar Vinit Joshi commented on YARN-862:


Won't this become a problem if we are planning to implement rolling 
upgrades? We would have to bring the whole system down before deploying even 
minor corrections, right?

 ResourceManager and NodeManager versions should match on node registration or 
 error out
 ---

 Key: YARN-862
 URL: https://issues.apache.org/jira/browse/YARN-862
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager, resourcemanager
Affects Versions: 0.23.8
Reporter: Robert Parker
Assignee: Robert Parker
 Attachments: YARN-862-b0.23-v1.patch


 For branch-0.23 the versions of the node manager and the resource manager 
 should match to complete a successful registration.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-478) fix coverage org.apache.hadoop.yarn.webapp.log

2013-06-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689531#comment-13689531
 ] 

Hudson commented on YARN-478:
-

Integrated in Hadoop-trunk-Commit #3990 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3990/])
YARN-478. fix coverage org.apache.hadoop.yarn.webapp.log (Aleksey Gorshkov 
via jeagles) (Revision 1495129)

 Result = SUCCESS
jeagles : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1495129
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogsBlock.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/log
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/log/AggregatedLogsBlockForTest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/BlockForTest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/view/HtmlBlockForTest.java


 fix coverage org.apache.hadoop.yarn.webapp.log
 --

 Key: YARN-478
 URL: https://issues.apache.org/jira/browse/YARN-478
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha
Reporter: Aleksey Gorshkov
Assignee: Aleksey Gorshkov
 Fix For: 3.0.0, 0.23.9, 2.3.0

 Attachments: YARN-478-branch-0.23-v1.patch, YARN-478-trunk.patch, 
 YARN-478-trunk-v1.patch


 fix coverage org.apache.hadoop.yarn.webapp.log
 one patch for trunk, branch-2, branch-0.23

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Moved] (YARN-865) RM webservices can't query on application Types

2013-06-20 Thread Hitesh Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah moved MAPREDUCE-5337 to YARN-865:
-

Key: YARN-865  (was: MAPREDUCE-5337)
Project: Hadoop YARN  (was: Hadoop Map/Reduce)

 RM webservices can't query on application Types
 ---

 Key: YARN-865
 URL: https://issues.apache.org/jira/browse/YARN-865
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Xuan Gong
 Attachments: MR-5337.1.patch


 The resource manager web service api to get the list of apps doesn't have a 
 query parameter for appTypes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter

2013-06-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689539#comment-13689539
 ] 

Hadoop QA commented on YARN-727:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588874/YARN-727.15.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1363//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1363//console

This message is automatically generated.

 ClientRMProtocol.getAllApplications should accept ApplicationType as a 
 parameter
 

 Key: YARN-727
 URL: https://issues.apache.org/jira/browse/YARN-727
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Siddharth Seth
Assignee: Xuan Gong
 Attachments: YARN-727.10.patch, YARN-727.11.patch, YARN-727.12.patch, 
 YARN-727.13.patch, YARN-727.14.patch, YARN-727.15.patch, YARN-727.1.patch, 
 YARN-727.2.patch, YARN-727.3.patch, YARN-727.4.patch, YARN-727.5.patch, 
 YARN-727.6.patch, YARN-727.7.patch, YARN-727.8.patch, YARN-727.9.patch


 Now that an ApplicationType is registered on ApplicationSubmission, 
 getAllApplications should be able to use this string to query for a specific 
 application type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-861) TestContainerManager is failing

2013-06-20 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-861:
-

Attachment: YARN-861.txt

Here's a quick patch following [~hitesh]'s suggestions. This works on my Mac as 
well as a Linux box. Let's see what Jenkins says...

 TestContainerManager is failing
 ---

 Key: YARN-861
 URL: https://issues.apache.org/jira/browse/YARN-861
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Devaraj K
Priority: Critical
 Attachments: YARN-861.txt


 https://builds.apache.org/job/Hadoop-Yarn-trunk/246/
 {code:xml}
 Running 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager
 Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 19.249 sec 
  <<< FAILURE!
 testContainerManagerInitialization(org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager)
   Time elapsed: 286 sec  <<< FAILURE!
 junit.framework.ComparisonFailure: expected:<[asf009.sp2.ygridcore.ne]t> but 
 was:<[localhos]t>
   at junit.framework.Assert.assertEquals(Assert.java:85)
 {code}
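
Schematically, that ComparisonFailure is a hostname assertion that only holds 
on the machine the test was written against; a minimal hypothetical 
reproduction (not the test's actual code):

{code}
import static org.junit.Assert.assertEquals;

import java.net.InetAddress;

import org.junit.Test;

public class HostnameAssertionSketch {
  @Test
  public void hostnameMatchesConfiguredAddress() throws Exception {
    // The test effectively expects the NM to report the real local hostname,
    // e.g. asf009.sp2.ygridcore.net on the Jenkins box...
    String expected = InetAddress.getLocalHost().getHostName();
    // ...while the configured address resolves to "localhost", so the
    // assertion fails off-host exactly as in the ComparisonFailure above.
    String actual = "localhost";
    assertEquals(expected, actual);
  }
}
{code}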

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-861) TestContainerManager is failing

2013-06-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689555#comment-13689555
 ] 

Hadoop QA commented on YARN-861:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588890/YARN-861.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1366//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1366//console

This message is automatically generated.

 TestContainerManager is failing
 ---

 Key: YARN-861
 URL: https://issues.apache.org/jira/browse/YARN-861
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Devaraj K
Priority: Critical
 Attachments: YARN-861.txt


 https://builds.apache.org/job/Hadoop-Yarn-trunk/246/
 {code:xml}
 Running 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager
 Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 19.249 sec 
  <<< FAILURE!
 testContainerManagerInitialization(org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager)
   Time elapsed: 286 sec  <<< FAILURE!
 junit.framework.ComparisonFailure: expected:<[asf009.sp2.ygridcore.ne]t> but 
 was:<[localhos]t>
   at junit.framework.Assert.assertEquals(Assert.java:85)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-865) RM webservices can't query on application Types

2013-06-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689557#comment-13689557
 ] 

Hadoop QA commented on YARN-865:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/1250/MR-5337.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1365//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1365//console

This message is automatically generated.

 RM webservices can't query on application Types
 ---

 Key: YARN-865
 URL: https://issues.apache.org/jira/browse/YARN-865
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Xuan Gong
 Attachments: MR-5337.1.patch


 The resource manager web service api to get the list of apps doesn't have a 
 query parameter for appTypes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (YARN-861) TestContainerManager is failing

2013-06-20 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli reassigned YARN-861:


Assignee: Vinod Kumar Vavilapalli

 TestContainerManager is failing
 ---

 Key: YARN-861
 URL: https://issues.apache.org/jira/browse/YARN-861
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Devaraj K
Assignee: Vinod Kumar Vavilapalli
Priority: Critical
 Attachments: YARN-861.txt


 https://builds.apache.org/job/Hadoop-Yarn-trunk/246/
 {code:xml}
 Running 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager
 Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 19.249 sec 
  <<< FAILURE!
 testContainerManagerInitialization(org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager)
   Time elapsed: 286 sec  <<< FAILURE!
 junit.framework.ComparisonFailure: expected:<[asf009.sp2.ygridcore.ne]t> but 
 was:<[localhos]t>
   at junit.framework.Assert.assertEquals(Assert.java:85)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-861) TestContainerManager is failing

2013-06-20 Thread Hitesh Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689568#comment-13689568
 ] 

Hitesh Shah commented on YARN-861:
--

+1. Committing shortly. 

 TestContainerManager is failing
 ---

 Key: YARN-861
 URL: https://issues.apache.org/jira/browse/YARN-861
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Devaraj K
Assignee: Vinod Kumar Vavilapalli
Priority: Critical
 Attachments: YARN-861.txt


 https://builds.apache.org/job/Hadoop-Yarn-trunk/246/
 {code:xml}
 Running 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager
 Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 19.249 sec 
  <<< FAILURE!
 testContainerManagerInitialization(org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager)
   Time elapsed: 286 sec  <<< FAILURE!
 junit.framework.ComparisonFailure: expected:<[asf009.sp2.ygridcore.ne]t> but 
 was:<[localhos]t>
   at junit.framework.Assert.assertEquals(Assert.java:85)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-851) Share NMTokens using NMTokenCache (api-based) instead of memory based approach which is used currently.

2013-06-20 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-851:
---

Attachment: YARN-851-20130619.patch

 Share NMTokens using NMTokenCache (api-based) instead of memory based 
 approach which is used currently.
 ---

 Key: YARN-851
 URL: https://issues.apache.org/jira/browse/YARN-851
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-851-20130618.patch, YARN-851-20130619.patch, 
 YARN-851-20130619.patch


 It is a follow-up ticket for YARN-694. Changing the way NMTokens are shared.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-861) TestContainerManager is failing

2013-06-20 Thread Hitesh Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689609#comment-13689609
 ] 

Hitesh Shah commented on YARN-861:
--

Thanks Vinod.

 TestContainerManager is failing
 ---

 Key: YARN-861
 URL: https://issues.apache.org/jira/browse/YARN-861
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Devaraj K
Assignee: Vinod Kumar Vavilapalli
Priority: Critical
 Fix For: 2.1.0-beta

 Attachments: YARN-861.txt


 https://builds.apache.org/job/Hadoop-Yarn-trunk/246/
 {code:xml}
 Running 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager
 Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 19.249 sec 
  <<< FAILURE!
 testContainerManagerInitialization(org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager)
   Time elapsed: 286 sec  <<< FAILURE!
 junit.framework.ComparisonFailure: expected:<[asf009.sp2.ygridcore.ne]t> but 
 was:<[localhos]t>
   at junit.framework.Assert.assertEquals(Assert.java:85)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-851) Share NMTokens using NMTokenCache (api-based) instead of memory based approach which is used currently.

2013-06-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689610#comment-13689610
 ] 

Hadoop QA commented on YARN-851:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12588902/YARN-851-20130619.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client:

  org.apache.hadoop.yarn.client.api.impl.TestNMClient

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1367//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1367//console

This message is automatically generated.

 Share NMTokens using NMTokenCache (api-based) instead of memory based 
 approach which is used currently.
 ---

 Key: YARN-851
 URL: https://issues.apache.org/jira/browse/YARN-851
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-851-20130618.patch, YARN-851-20130619.patch, 
 YARN-851-20130619.patch


 It is a follow-up ticket for YARN-694. Changing the way NMTokens are shared.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-861) TestContainerManager is failing

2013-06-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689611#comment-13689611
 ] 

Hudson commented on YARN-861:
-

Integrated in Hadoop-trunk-Commit #3991 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3991/])
YARN-861. TestContainerManager is failing. Contributed by Vinod Kumar 
Vavilapalli. (Revision 1495160)

 Result = SUCCESS
hitesh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1495160
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/BaseContainerManagerTest.java


 TestContainerManager is failing
 ---

 Key: YARN-861
 URL: https://issues.apache.org/jira/browse/YARN-861
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Devaraj K
Assignee: Vinod Kumar Vavilapalli
Priority: Critical
 Fix For: 2.1.0-beta

 Attachments: YARN-861.txt


 https://builds.apache.org/job/Hadoop-Yarn-trunk/246/
 {code:xml}
 Running 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager
 Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 19.249 sec 
  <<< FAILURE!
 testContainerManagerInitialization(org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager)
   Time elapsed: 286 sec  <<< FAILURE!
 junit.framework.ComparisonFailure: expected:<[asf009.sp2.ygridcore.ne]t> but 
 was:<[localhos]t>
   at junit.framework.Assert.assertEquals(Assert.java:85)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (YARN-827) Need to make Resource arithmetic methods public

2013-06-20 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He reassigned YARN-827:


Assignee: Jian He  (was: Zhijie Shen)

 Need to make Resource arithmetic methods public
 ---

 Key: YARN-827
 URL: https://issues.apache.org/jira/browse/YARN-827
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Bikas Saha
Assignee: Jian He
Priority: Critical
 Attachments: YARN-827.patch


 org.apache.hadoop.yarn.server.resourcemanager.resource has stuff like 
 Resources and Calculators that help compare/add resources etc. Without these, 
 users will be forced to replicate the logic, potentially incorrectly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-827) Need to make Resource arithmetic methods public

2013-06-20 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-827:
-

Attachment: YARN-827.1.patch

Marked all as Private, and also modified capacity-scheduler.xml to point to the 
moved DefaultResourceCalculator.
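
For reference, the capacity-scheduler.xml entry in question would look 
roughly like the following, assuming the calculator now lives under 
org.apache.hadoop.yarn.util.resource as the patch moves it (an illustrative 
snippet, not the committed change):

{code:xml}
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value>
</property>
{code}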

 Need to make Resource arithmetic methods public
 ---

 Key: YARN-827
 URL: https://issues.apache.org/jira/browse/YARN-827
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Bikas Saha
Assignee: Jian He
Priority: Critical
 Attachments: YARN-827.1.patch, YARN-827.patch


 org.apache.hadoop.yarn.server.resourcemanager.resource has stuff like 
 Resources and Calculators that help compare/add resources etc. Without these, 
 users will be forced to replicate the logic, potentially incorrectly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-791) Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API

2013-06-20 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689639#comment-13689639
 ] 

Vinod Kumar Vavilapalli commented on YARN-791:
--

Okay, I don't see myself going back on the default RUNNING state. But to be 
clear that we are all on the same page, here's the summary:
 - Have both the RPC and the web-service take in multiple states.
-- RPC: a set of states
-- web-service: a comma-separated list of states
 - Default for both the RPC and the web-service is RUNNING only.
-- RPC:
   --- In proto, I'm not sure we can have default values. If possible, we can 
give a default RUNNING state. If not, we can just document it.
   --- In the Java API, we can have two GetClusterNodesRequest.newInstance() 
methods: one that doesn't take any states but defaults to RUNNING, and one 
that explicitly takes a set of states. Both should be clear in the javadoc (a 
sketch of the two factory methods follows below).
-- In the web-service, we can just document that the default is RUNNING.
 - For getting all the nodes, we can either accept all states or invent a new 
state '*' which returns everything.
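
A minimal sketch of the two proposed factory methods, using a stand-in 
NodeState enum so it compiles on its own; the names mirror this discussion, 
not any committed patch:

{code}
import java.util.EnumSet;

// Stand-in for org.apache.hadoop.yarn.api.records.NodeState.
enum NodeState { NEW, RUNNING, UNHEALTHY, DECOMMISSIONED, LOST, REBOOTED }

class GetClusterNodesRequestSketch {
  private final EnumSet<NodeState> states;

  private GetClusterNodesRequestSketch(EnumSet<NodeState> states) {
    this.states = states;
  }

  // No-arg factory: callers that pass no states get RUNNING nodes only.
  static GetClusterNodesRequestSketch newInstance() {
    return newInstance(EnumSet.of(NodeState.RUNNING));
  }

  // Explicit factory: callers ask for exactly the states they want,
  // e.g. EnumSet.allOf(NodeState.class) to list every node.
  static GetClusterNodesRequestSketch newInstance(EnumSet<NodeState> states) {
    return new GetClusterNodesRequestSketch(states);
  }

  EnumSet<NodeState> getNodeStates() {
    return states;
  }
}
{code}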

 Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API
 -

 Key: YARN-791
 URL: https://issues.apache.org/jira/browse/YARN-791
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-791-1.patch, YARN-791-2.patch, YARN-791-3.patch, 
 YARN-791-4.patch, YARN-791.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter

2013-06-20 Thread Hitesh Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689645#comment-13689645
 ] 

Hitesh Shah commented on YARN-727:
--

Comments:

ResourceMgrDelegate.java:
  - Set<String> appTypes = new HashSet<String>(); -- maybe use size 1, as 
there will only be one entry, or use a more appropriate set collection?

ApplicationCLI.java:
  - list_opt = new Option(LIST_CMD, true, ...) and appType_opt = new 
Option("appTypes", false, ...)
 - please follow camelCase coding conventions
 - why does listOpt take in an argument but appTypeOpt does not? Should it 
not be the other way around?
  - usage documentation
 - for app types, please clearly mention "comma-separated list of 
application types"
 - might be good to cut down the text if possible to be a bit more brief
  - appType_opt.setArgs(Option.UNLIMITED_VALUES);
 - is there a reason to set this? What happens if this is not set?
  - "--appTypes=YARN" can be replaced with "--appTypes YARN"
 - Can you also add a test to verify that comma-separated lists also work, 
i.e. something like ", YARN, ,,, FOO-YARN ,"? (A sketch of the suggested 
option wiring follows below.)
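
A hedged sketch of that option wiring under the reviewer's suggestions 
(Commons CLI; the class name, GnuParser choice, and option descriptions are 
illustrative, not the actual YARN-727 patch):

{code}
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.GnuParser;
import org.apache.commons.cli.Option;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;

public class AppTypesCliSketch {
  public static void main(String[] args) throws ParseException {
    Options opts = new Options();
    // The list option is a plain flag: it takes no argument.
    Option listOpt = new Option("list", false, "List applications");
    // The appTypes option takes one or more values and splits on commas,
    // so "-appTypes YARN,FOO" yields the two values YARN and FOO.
    Option appTypesOpt = new Option("appTypes", true,
        "Comma-separated list of application types");
    appTypesOpt.setArgs(Option.UNLIMITED_VALUES);
    appTypesOpt.setValueSeparator(',');
    opts.addOption(listOpt);
    opts.addOption(appTypesOpt);

    CommandLine cli = new GnuParser().parse(opts, args);
    if (cli.hasOption("appTypes")) {
      // Trimming tolerates sloppy input like ", YARN, ,,, FOO-YARN ,".
      for (String type : cli.getOptionValues("appTypes")) {
        if (!type.trim().isEmpty()) {
          System.out.println("appType: " + type.trim());
        }
      }
    }
  }
}
{code}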

 

 ClientRMProtocol.getAllApplications should accept ApplicationType as a 
 parameter
 

 Key: YARN-727
 URL: https://issues.apache.org/jira/browse/YARN-727
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Siddharth Seth
Assignee: Xuan Gong
 Attachments: YARN-727.10.patch, YARN-727.11.patch, YARN-727.12.patch, 
 YARN-727.13.patch, YARN-727.14.patch, YARN-727.15.patch, YARN-727.1.patch, 
 YARN-727.2.patch, YARN-727.3.patch, YARN-727.4.patch, YARN-727.5.patch, 
 YARN-727.6.patch, YARN-727.7.patch, YARN-727.8.patch, YARN-727.9.patch


 Now that an ApplicationType is registered on ApplicationSubmission, 
 getAllApplications should be able to use this string to query for a specific 
 application type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-827) Need to make Resource arithmetic methods public

2013-06-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689679#comment-13689679
 ] 

Hadoop QA commented on YARN-827:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588913/YARN-827.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 22 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1368//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/1368//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-common.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1368//console

This message is automatically generated.

 Need to make Resource arithmetic methods public
 ---

 Key: YARN-827
 URL: https://issues.apache.org/jira/browse/YARN-827
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Bikas Saha
Assignee: Jian He
Priority: Critical
 Attachments: YARN-827.1.patch, YARN-827.patch


 org.apache.hadoop.yarn.server.resourcemanager.resource has stuff like 
 Resources and Calculators that help compare/add resources etc. Without these, 
 users will be forced to replicate the logic, potentially incorrectly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-866) Add test for class ResourceWeights

2013-06-20 Thread Wei Yan (JIRA)
Wei Yan created YARN-866:


 Summary: Add test for class ResourceWeights
 Key: YARN-866
 URL: https://issues.apache.org/jira/browse/YARN-866
 Project: Hadoop YARN
  Issue Type: Test
Affects Versions: 2.1.0-beta
Reporter: Wei Yan
Assignee: Wei Yan


Add test case for the class ResourceWeights

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-866) Add test for class ResourceWeights

2013-06-20 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated YARN-866:
-

Attachment: Yarn-866.patch

Patch for testing ResourceWeights.

 Add test for class ResourceWeights
 --

 Key: YARN-866
 URL: https://issues.apache.org/jira/browse/YARN-866
 Project: Hadoop YARN
  Issue Type: Test
Affects Versions: 2.1.0-beta
Reporter: Wei Yan
Assignee: Wei Yan
 Attachments: Yarn-866.patch


 Add test case for the class ResourceWeights

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-866) Add test for class ResourceWeights

2013-06-20 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689712#comment-13689712
 ] 

Sandy Ryza commented on YARN-866:
-

Wei, the patch looks good.  My only nit is that two spaces should be used 
instead of tabs.

 Add test for class ResourceWeights
 --

 Key: YARN-866
 URL: https://issues.apache.org/jira/browse/YARN-866
 Project: Hadoop YARN
  Issue Type: Test
Affects Versions: 2.1.0-beta
Reporter: Wei Yan
Assignee: Wei Yan
 Attachments: Yarn-866.patch


 Add test case for the class ResourceWeights

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-827) Need to make Resource arithmetic methods public

2013-06-20 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-827:
-

Attachment: YARN-827.2.patch

The Findbugs -1 is because of an unused field in DefaultResourceCalculator.

 Need to make Resource arithmetic methods public
 ---

 Key: YARN-827
 URL: https://issues.apache.org/jira/browse/YARN-827
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Bikas Saha
Assignee: Jian He
Priority: Critical
 Attachments: YARN-827.1.patch, YARN-827.2.patch, YARN-827.patch


 org.apache.hadoop.yarn.server.resourcemanager.resource has stuff like 
 Resources and Calculators that help compare/add resources etc. Without these, 
 users will be forced to replicate the logic, potentially incorrectly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-866) Add test for class ResourceWeights

2013-06-20 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated YARN-866:
-

Attachment: Yarn-866.patch

Thanks, Sandy. Attached the new patch.

 Add test for class ResourceWeights
 --

 Key: YARN-866
 URL: https://issues.apache.org/jira/browse/YARN-866
 Project: Hadoop YARN
  Issue Type: Test
Affects Versions: 2.1.0-beta
Reporter: Wei Yan
Assignee: Wei Yan
 Attachments: Yarn-866.patch


 Add test case for the class ResourceWeights

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-866) Add test for class ResourceWeights

2013-06-20 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated YARN-866:
-

Attachment: (was: Yarn-866.patch)

 Add test for class ResourceWeights
 --

 Key: YARN-866
 URL: https://issues.apache.org/jira/browse/YARN-866
 Project: Hadoop YARN
  Issue Type: Test
Affects Versions: 2.1.0-beta
Reporter: Wei Yan
Assignee: Wei Yan
 Attachments: Yarn-866.patch


 Add test case for the class ResourceWeights

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-866) Add test for class ResourceWeights

2013-06-20 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated YARN-866:
-

Attachment: Yarn-866.patch

 Add test for class ResourceWeights
 --

 Key: YARN-866
 URL: https://issues.apache.org/jira/browse/YARN-866
 Project: Hadoop YARN
  Issue Type: Test
Affects Versions: 2.1.0-beta
Reporter: Wei Yan
Assignee: Wei Yan
 Attachments: Yarn-866.patch, Yarn-866.patch


 Add test case for the class ResourceWeights

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-866) Add test for class ResourceWeights

2013-06-20 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689755#comment-13689755
 ] 

Sandy Ryza commented on YARN-866:
-

+1

 Add test for class ResourceWeights
 --

 Key: YARN-866
 URL: https://issues.apache.org/jira/browse/YARN-866
 Project: Hadoop YARN
  Issue Type: Test
Affects Versions: 2.1.0-beta
Reporter: Wei Yan
Assignee: Wei Yan
 Attachments: Yarn-866.patch, Yarn-866.patch


 Add test case for the class ResourceWeights

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-851) Share NMTokens using NMTokenCache (api-based) instead of memory based approach which is used currently.

2013-06-20 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-851:
---

Attachment: YARN-851-20130620.patch

 Share NMTokens using NMTokenCache (api-based) instead of memory based 
 approach which is used currently.
 ---

 Key: YARN-851
 URL: https://issues.apache.org/jira/browse/YARN-851
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-851-20130618.patch, YARN-851-20130619.patch, 
 YARN-851-20130619.patch, YARN-851-20130620.patch


 It is a follow-up ticket for YARN-694. Changing the way NMTokens are shared.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-851) Share NMTokens using NMTokenCache (api-based) instead of memory based approach which is used currently.

2013-06-20 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-851:
---

Attachment: (was: YARN-851-20130620.patch)

 Share NMTokens using NMTokenCache (api-based) instead of memory based 
 approach which is used currently.
 ---

 Key: YARN-851
 URL: https://issues.apache.org/jira/browse/YARN-851
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-851-20130618.patch, YARN-851-20130619.patch, 
 YARN-851-20130619.patch, YARN-851-20130620.patch


 It is a follow up ticket for YARN-694. Changing the way NMTokens are shared.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-827) Need to make Resource arithmetic methods public

2013-06-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689765#comment-13689765
 ] 

Hadoop QA commented on YARN-827:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588936/YARN-827.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 22 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1369//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1369//console

This message is automatically generated.

 Need to make Resource arithmetic methods public
 ---

 Key: YARN-827
 URL: https://issues.apache.org/jira/browse/YARN-827
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Bikas Saha
Assignee: Jian He
Priority: Critical
 Attachments: YARN-827.1.patch, YARN-827.2.patch, YARN-827.patch


 org.apache.hadoop.yarn.server.resourcemanager.resource has stuff like 
 Resources and Calculators that help compare/add resources etc. Without these 
 users will be forced to replicate the logic, potentially incorrectly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-866) Add test for class ResourceWeights

2013-06-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689782#comment-13689782
 ] 

Hadoop QA commented on YARN-866:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588941/Yarn-866.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1370//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1370//console

This message is automatically generated.

 Add test for class ResourceWeights
 --

 Key: YARN-866
 URL: https://issues.apache.org/jira/browse/YARN-866
 Project: Hadoop YARN
  Issue Type: Test
Affects Versions: 2.1.0-beta
Reporter: Wei Yan
Assignee: Wei Yan
 Attachments: Yarn-866.patch, Yarn-866.patch


 Add test case for the class ResourceWeights

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-851) Share NMTokens using NMTokenCache (api-based) instead of memory based approach which is used currently.

2013-06-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689788#comment-13689788
 ] 

Hadoop QA commented on YARN-851:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12588943/YARN-851-20130620.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1371//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1371//console

This message is automatically generated.

 Share NMTokens using NMTokenCache (api-based) instead of memory based 
 approach which is used currently.
 ---

 Key: YARN-851
 URL: https://issues.apache.org/jira/browse/YARN-851
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-851-20130618.patch, YARN-851-20130619.patch, 
 YARN-851-20130619.patch, YARN-851-20130620.patch


 It is a follow up ticket for YARN-694. Changing the way NMTokens are shared.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-851) Share NMTokens using NMTokenCache (api-based) instead of memory based approach which is used currently.

2013-06-20 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689792#comment-13689792
 ] 

Omkar Vinit Joshi commented on YARN-851:


Fixing Vinod's comments: marking the class and all methods as @Public and 
@Evolving. 

 Share NMTokens using NMTokenCache (api-based) instead of memory based 
 approach which is used currently.
 ---

 Key: YARN-851
 URL: https://issues.apache.org/jira/browse/YARN-851
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-851-20130618.patch, YARN-851-20130619.patch, 
 YARN-851-20130619.patch, YARN-851-20130620.patch


 It is a follow up ticket for YARN-694. Changing the way NMTokens are shared.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-851) Share NMTokens using NMTokenCache (api-based) instead of memory based approach which is used currently.

2013-06-20 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-851:
---

Attachment: YARN-851-20130620.1.patch

 Share NMTokens using NMTokenCache (api-based) instead of memory based 
 approach which is used currently.
 ---

 Key: YARN-851
 URL: https://issues.apache.org/jira/browse/YARN-851
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-851-20130618.patch, YARN-851-20130619.patch, 
 YARN-851-20130619.patch, YARN-851-20130620.1.patch, YARN-851-20130620.patch


 It is a follow up ticket for YARN-694. Changing the way NMTokens are shared.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-866) Add test for class ResourceWeights

2013-06-20 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689799#comment-13689799
 ] 

Karthik Kambatla commented on YARN-866:
---

Nit: For the asserts, it would be nice to have an associated message for better 
readability.
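
For example, a generic JUnit sketch of an assert with a message (the weight 
value here is a stand-in; the real test would call the ResourceWeights 
accessors):

{code}
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class AssertMessageExample {
  @Test
  public void testAssertWithMessage() {
    // Stand-in for something like weights.getWeight(ResourceType.MEMORY).
    float memoryWeight = 0.8f;
    // The message is printed when the assert fails, so the failure report
    // is readable without opening the test source.
    assertEquals("Weight for MEMORY should match the configured value",
        0.8f, memoryWeight, 0.00001f);
  }
}
{code}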

 Add test for class ResourceWeights
 --

 Key: YARN-866
 URL: https://issues.apache.org/jira/browse/YARN-866
 Project: Hadoop YARN
  Issue Type: Test
Affects Versions: 2.1.0-beta
Reporter: Wei Yan
Assignee: Wei Yan
 Attachments: Yarn-866.patch, Yarn-866.patch


 Add test case for the class ResourceWeights

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-851) Share NMTokens using NMTokenCache (api-based) instead of memory based approach which is used currently.

2013-06-20 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-851:
---

Attachment: YARN-851-20130620.2.patch

 Share NMTokens using NMTokenCache (api-based) instead of memory based 
 approach which is used currently.
 ---

 Key: YARN-851
 URL: https://issues.apache.org/jira/browse/YARN-851
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-851-20130618.patch, YARN-851-20130619.patch, 
 YARN-851-20130619.patch, YARN-851-20130620.1.patch, 
 YARN-851-20130620.2.patch, YARN-851-20130620.patch


 It is a follow up ticket for YARN-694. Changing the way NMTokens are shared.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter

2013-06-20 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689824#comment-13689824
 ] 

Xuan Gong commented on YARN-727:


bq. appType_opt.setArgs(Option.UNLIMITED_VALUES); Is there a reason to set this? 
What happens if this is not set?

This function sets the number of args the option can have. The default value is 
1, which means the option can have only one arg. 
In our case, if we type "yarn application -list -appTypes YARN,MAPREDUCE" and 
then call cliParser.getOptionValues("appTypes"), we get back an array containing 
the single value {YARN,MAPREDUCE}, but what we expect is an array containing the 
two values {YARN} and {MAPREDUCE}.

Also, we do not know in advance how many appTypes will be given in one command, 
so Option.UNLIMITED_VALUES is used here: the option can take as many appTypes as 
are typed in one command and will parse all of them.
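
For illustration, a minimal stand-alone sketch of this Apache Commons CLI 
behavior (option names are illustrative; note that splitting a single 
comma-joined token such as YARN,MAPREDUCE also needs a value separator, set 
explicitly here):

{code}
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.GnuParser;
import org.apache.commons.cli.Option;
import org.apache.commons.cli.Options;

public class AppTypesDemo {
  public static void main(String[] args) throws Exception {
    Options opts = new Options();
    Option appTypes = new Option("appTypes", true, "application types to filter by");
    appTypes.setArgs(Option.UNLIMITED_VALUES); // accept any number of values
    appTypes.setValueSeparator(',');           // split "YARN,MAPREDUCE" into two values
    opts.addOption(appTypes);

    CommandLine cli = new GnuParser().parse(
        opts, new String[] {"-appTypes", "YARN,MAPREDUCE"});
    for (String type : cli.getOptionValues("appTypes")) {
      System.out.println(type); // prints YARN, then MAPREDUCE
    }
  }
}
{code}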

 ClientRMProtocol.getAllApplications should accept ApplicationType as a 
 parameter
 

 Key: YARN-727
 URL: https://issues.apache.org/jira/browse/YARN-727
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Siddharth Seth
Assignee: Xuan Gong
 Attachments: YARN-727.10.patch, YARN-727.11.patch, YARN-727.12.patch, 
 YARN-727.13.patch, YARN-727.14.patch, YARN-727.15.patch, YARN-727.1.patch, 
 YARN-727.2.patch, YARN-727.3.patch, YARN-727.4.patch, YARN-727.5.patch, 
 YARN-727.6.patch, YARN-727.7.patch, YARN-727.8.patch, YARN-727.9.patch


 Now that an ApplicationType is registered on ApplicationSubmission, 
 getAllApplications should be able to use this string to query for a specific 
 application type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-851) Share NMTokens using NMTokenCache (api-based) instead of memory based approach which is used currently.

2013-06-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689836#comment-13689836
 ] 

Hadoop QA commented on YARN-851:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12588958/YARN-851-20130620.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1372//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1372//console

This message is automatically generated.

 Share NMTokens using NMTokenCache (api-based) instead of memory based 
 approach which is used currently.
 ---

 Key: YARN-851
 URL: https://issues.apache.org/jira/browse/YARN-851
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-851-20130618.patch, YARN-851-20130619.patch, 
 YARN-851-20130619.patch, YARN-851-20130620.1.patch, 
 YARN-851-20130620.2.patch, YARN-851-20130620.patch


 It is a follow up ticket for YARN-694. Changing the way NMTokens are shared.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-791) Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API

2013-06-20 Thread Hitesh Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689849#comment-13689849
 ] 

Hitesh Shah commented on YARN-791:
--

[~sandyr], [~tucu00], [~vinodkv] - sorry for the late comment on this.

I have a slightly different proposal:

  - The protobuf layer enforces no defaults.
    - It will support the full flexibility and allow users to extract whatever 
information they need.
    - Not supplying a state will return all nodes.

  - The user-facing layers are the command line and the UI.
    - The UI should not change to return all nodes but just use the APIs as 
needed to display the default behavior, i.e. running/healthy nodes only.
    - I would like to recommend that the command line be changed to return all 
nodes too (with a different option to get only healthy nodes). However, I am 
ok with the command line remaining as it is today, with additional options to 
get all nodes and better filtering support.

  - From the java api and webservices point of view:
    - I propose that these conform to the same standards as the protobuf layer.
    - Passing nothing to the api will return all nodes.
      - A helper function, as Vinod mentioned, could be added as needed to 
construct a request that returns only healthy nodes for the default behavior.
    - From the webservices point of view, we could follow the model that 
/nodes/ returns all nodes and /nodes/healthy returns healthy ones 
(nodes?states= could be used as needed to get both healthy and new nodes); see 
the sketch below.
      - This implies we do not need to introduce something like a * state, 
i.e. /nodes/*/, to address getting all nodes.

We seem to have made quite a few changes for 2.1.0. Instead of trying to 
address default behavior and handle default running vs * in various ways, does 
it make sense to change the behavior as mentioned above?
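
For illustration only, the proposed web-service semantics could look like the 
following (paths and parameter names are assumptions, not a final API):

{noformat}
GET /ws/v1/cluster/nodes                      -> all nodes (no implicit filter)
GET /ws/v1/cluster/nodes/healthy              -> healthy/running nodes only
GET /ws/v1/cluster/nodes?states=NEW,RUNNING   -> explicit state filter
{noformat}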



 Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API
 -

 Key: YARN-791
 URL: https://issues.apache.org/jira/browse/YARN-791
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-791-1.patch, YARN-791-2.patch, YARN-791-3.patch, 
 YARN-791-4.patch, YARN-791.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-791) Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API

2013-06-20 Thread Hitesh Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689852#comment-13689852
 ] 

Hitesh Shah commented on YARN-791:
--

Looks like the {noformat}*{noformat} converted text to bold in my previous 
comment. 

The last point read as:
{noformat}
This implies we do not need to introduce something like a * state i.e 
/nodes/*/ to address getting all nodes.
{noformat} 

 Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API
 -

 Key: YARN-791
 URL: https://issues.apache.org/jira/browse/YARN-791
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-791-1.patch, YARN-791-2.patch, YARN-791-3.patch, 
 YARN-791-4.patch, YARN-791.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-851) Share NMTokens using NMTokenCache (api-based) instead of memory based approach which is used currently.

2013-06-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689860#comment-13689860
 ] 

Hadoop QA commented on YARN-851:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12588963/YARN-851-20130620.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1373//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1373//console

This message is automatically generated.

 Share NMTokens using NMTokenCache (api-based) instead of memory based 
 approach which is used currently.
 ---

 Key: YARN-851
 URL: https://issues.apache.org/jira/browse/YARN-851
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-851-20130618.patch, YARN-851-20130619.patch, 
 YARN-851-20130619.patch, YARN-851-20130620.1.patch, 
 YARN-851-20130620.2.patch, YARN-851-20130620.patch


 It is a follow up ticket for YARN-694. Changing the way NMTokens are shared.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-867) Isolation of failures in aux services

2013-06-20 Thread Hitesh Shah (JIRA)
Hitesh Shah created YARN-867:


 Summary: Isolation of failures in aux services 
 Key: YARN-867
 URL: https://issues.apache.org/jira/browse/YARN-867
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Reporter: Hitesh Shah
Priority: Critical


Today, a malicious application can bring down the NM by sending bad data to a 
service. For example, sending data to the ShuffleService such that it results 
in any non-IOException will cause the NM's async dispatcher to exit, as the 
service's INIT APP event is not handled properly. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-791) Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API

2013-06-20 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689874#comment-13689874
 ] 

Alejandro Abdelnur commented on YARN-791:
-

I prefer Hitesh's suggestion that if you don't specify a filter you get 
everything; this is more intuitive than an implicit filter.

How about this?

The HTTP API would be URL[?filter=STATE+]. If the filter= param is not 
specified, it means ALL. If the filter= param is specified and it is empty or 
invalid, we return an ERROR response.
The ProtoBuffer would have a filter list; if the list is empty, it means ALL.
The Java API would have a newInstance(), which means ALL.
The Java API would have a newInstance(EnumSet<State> filter). NULL & 
EnumSet.NONE would throw an IllegalArgumentException. EnumSet.ALL is the same 
as newInstance().

The change of the param name from state to filter also seems a bit more 
correct and self-explanatory.
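
For illustration, a minimal Java sketch of the factory methods proposed above 
(the class, enum, and state names are assumptions, not the committed YARN-791 
API):

{code}
import java.util.EnumSet;

public final class GetNodesRequest {

  public enum NodeState { NEW, RUNNING, UNHEALTHY, DECOMMISSIONED, LOST }

  private final EnumSet<NodeState> filter;

  private GetNodesRequest(EnumSet<NodeState> filter) {
    this.filter = filter;
  }

  public EnumSet<NodeState> getFilter() {
    return filter;
  }

  // No argument means no filter, i.e. return all nodes.
  public static GetNodesRequest newInstance() {
    return new GetNodesRequest(EnumSet.allOf(NodeState.class));
  }

  // An explicit filter must name at least one state; passing all states is
  // equivalent to the no-arg factory.
  public static GetNodesRequest newInstance(EnumSet<NodeState> filter) {
    if (filter == null || filter.isEmpty()) {
      throw new IllegalArgumentException("filter must contain at least one state");
    }
    return new GetNodesRequest(EnumSet.copyOf(filter));
  }
}
{code}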



 Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API
 -

 Key: YARN-791
 URL: https://issues.apache.org/jira/browse/YARN-791
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-791-1.patch, YARN-791-2.patch, YARN-791-3.patch, 
 YARN-791-4.patch, YARN-791.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-738) TestClientRMTokens is failing irregularly while running all yarn tests

2013-06-20 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689884#comment-13689884
 ] 

Omkar Vinit Joshi commented on YARN-738:


Looks like it is fixed for now. Please reopen if this occurs again.

 TestClientRMTokens is failing irregularly while running all yarn tests
 --

 Key: YARN-738
 URL: https://issues.apache.org/jira/browse/YARN-738
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi

 Running org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens
 Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 16.787 sec 
  FAILURE!
 testShortCircuitRenewCancel(org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens)
   Time elapsed: 186 sec   ERROR!
 java.lang.RuntimeException: getProxy
   at 
 org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens$YarnBadRPC.getProxy(TestClientRMTokens.java:334)
   at 
 org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier$Renewer.getRmClient(RMDelegationTokenIdentifier.java:157)
   at 
 org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier$Renewer.renew(RMDelegationTokenIdentifier.java:102)
   at org.apache.hadoop.security.token.Token.renew(Token.java:372)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens.checkShortCircuitRenewCancel(TestClientRMTokens.java:306)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens.testShortCircuitRenewCancel(TestClientRMTokens.java:240)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
   at 
 org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
   at 
 org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (YARN-738) TestClientRMTokens is failing irregularly while running all yarn tests

2013-06-20 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi resolved YARN-738.


Resolution: Invalid
  Assignee: Omkar Vinit Joshi

 TestClientRMTokens is failing irregularly while running all yarn tests
 --

 Key: YARN-738
 URL: https://issues.apache.org/jira/browse/YARN-738
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi

 Running org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens
 Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 16.787 sec 
  FAILURE!
 testShortCircuitRenewCancel(org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens)
   Time elapsed: 186 sec   ERROR!
 java.lang.RuntimeException: getProxy
   at 
 org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens$YarnBadRPC.getProxy(TestClientRMTokens.java:334)
   at 
 org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier$Renewer.getRmClient(RMDelegationTokenIdentifier.java:157)
   at 
 org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier$Renewer.renew(RMDelegationTokenIdentifier.java:102)
   at org.apache.hadoop.security.token.Token.renew(Token.java:372)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens.checkShortCircuitRenewCancel(TestClientRMTokens.java:306)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens.testShortCircuitRenewCancel(TestClientRMTokens.java:240)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
   at 
 org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
   at 
 org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-868) YarnClient should set the service address in tokens returned by getRMDelegationToken()

2013-06-20 Thread Hitesh Shah (JIRA)
Hitesh Shah created YARN-868:


 Summary: YarnClient should set the service address in tokens 
returned by getRMDelegationToken()
 Key: YARN-868
 URL: https://issues.apache.org/jira/browse/YARN-868
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah


Either the client should set this information into the token or the client 
layer should expose an api that returns the service address.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-851) Share NMTokens using NMTokenCache (api-based) instead of memory based approach which is used currently.

2013-06-20 Thread Hudson (JIRA)
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Fix For: 2.1.0-beta

 Attachments: YARN-851-20130618.patch, YARN-851-20130619.patch, 
 YARN-851-20130619.patch, YARN-851-20130620.1.patch, 
 YARN-851-20130620.2.patch, YARN-851-20130620.patch


 It is a follow up ticket for YARN-694. Changing the way NMTokens are shared.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-866) Add test for class ResourceWeights

2013-06-20 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated YARN-866:
-

Attachment: YARN-866.patch

 Add test for class ResourceWeights
 --

 Key: YARN-866
 URL: https://issues.apache.org/jira/browse/YARN-866
 Project: Hadoop YARN
  Issue Type: Test
Affects Versions: 2.1.0-beta
Reporter: Wei Yan
Assignee: Wei Yan
 Attachments: Yarn-866.patch, Yarn-866.patch, YARN-866.patch


 Add test case for the class ResourceWeights

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-866) Add test for class ResourceWeights

2013-06-20 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689896#comment-13689896
 ] 

Wei Yan commented on YARN-866:
--

Thanks, Karthik. Updated the patch with assert messages.

 Add test for class ResourceWeights
 --

 Key: YARN-866
 URL: https://issues.apache.org/jira/browse/YARN-866
 Project: Hadoop YARN
  Issue Type: Test
Affects Versions: 2.1.0-beta
Reporter: Wei Yan
Assignee: Wei Yan
 Attachments: Yarn-866.patch, Yarn-866.patch, YARN-866.patch


 Add test case for the class ResourceWeights

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-866) Add test for class ResourceWeights

2013-06-20 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689898#comment-13689898
 ] 

Karthik Kambatla commented on YARN-866:
---

+1

 Add test for class ResourceWeights
 --

 Key: YARN-866
 URL: https://issues.apache.org/jira/browse/YARN-866
 Project: Hadoop YARN
  Issue Type: Test
Affects Versions: 2.1.0-beta
Reporter: Wei Yan
Assignee: Wei Yan
 Attachments: Yarn-866.patch, Yarn-866.patch, YARN-866.patch


 Add test case for the class ResourceWeights

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-869) ResourceManagerAdministrationProtocol should neither be public(yet) nor in yarn.api

2013-06-20 Thread Vinod Kumar Vavilapalli (JIRA)
Vinod Kumar Vavilapalli created YARN-869:


 Summary: ResourceManagerAdministrationProtocol should neither be 
public(yet) nor in yarn.api
 Key: YARN-869
 URL: https://issues.apache.org/jira/browse/YARN-869
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli
Priority: Blocker


This is an admin-only API that we don't yet know whether people can or should 
write new tools against. I am going to move it to yarn.server.api and make it 
@Private.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-869) ResourceManagerAdministrationProtocol should neither be public(yet) nor in yarn.api

2013-06-20 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-869:
-

Target Version/s: 2.1.0-beta

 ResourceManagerAdministrationProtocol should neither be public(yet) nor in 
 yarn.api
 ---

 Key: YARN-869
 URL: https://issues.apache.org/jira/browse/YARN-869
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli
Priority: Blocker

 This is an admin-only API that we don't yet know whether people can or should 
 write new tools against. I am going to move it to yarn.server.api and make it 
 @Private.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter

2013-06-20 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-727:
---

Attachment: YARN-727.16.patch

 ClientRMProtocol.getAllApplications should accept ApplicationType as a 
 parameter
 

 Key: YARN-727
 URL: https://issues.apache.org/jira/browse/YARN-727
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Siddharth Seth
Assignee: Xuan Gong
 Attachments: YARN-727.10.patch, YARN-727.11.patch, YARN-727.12.patch, 
 YARN-727.13.patch, YARN-727.14.patch, YARN-727.15.patch, YARN-727.16.patch, 
 YARN-727.1.patch, YARN-727.2.patch, YARN-727.3.patch, YARN-727.4.patch, 
 YARN-727.5.patch, YARN-727.6.patch, YARN-727.7.patch, YARN-727.8.patch, 
 YARN-727.9.patch


 Now that an ApplicationType is registered on ApplicationSubmission, 
 getAllApplications should be able to use this string to query for a specific 
 application type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (YARN-35) Move to per-node RM-NM secrets

2013-06-20 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-35?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi reassigned YARN-35:
-

Assignee: Omkar Vinit Joshi

 Move to per-node RM-NM secrets
 --

 Key: YARN-35
 URL: https://issues.apache.org/jira/browse/YARN-35
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Vinod Kumar Vavilapalli
Assignee: Omkar Vinit Joshi

 We should move over to per-node secrets (RM-NM shared secrets) for security's 
 sake. This is what I had in mind while designing the whole security 
 architecture, but somehow it got lost in the storm of security patches.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-870) Node manager is no longer required to store ContainerToken as it is required only during startContainer call.

2013-06-20 Thread Omkar Vinit Joshi (JIRA)
Omkar Vinit Joshi created YARN-870:
--

 Summary: Node manager is no longer required to store 
ContainerToken as it is required only during startContainer call.
 Key: YARN-870
 URL: https://issues.apache.org/jira/browse/YARN-870
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi


The container token no longer needs to be saved on the node manager side and 
should be removed from NMContainerTokenSecretManager. Earlier it was required 
for authentication, but after YARN-613 it is no longer needed for that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter

2013-06-20 Thread Hitesh Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689994#comment-13689994
 ] 

Hitesh Shah commented on YARN-727:
--

[~xgong] Thanks for being patient with the reviews. This patch mostly looks 
good - getting close to a final one.

Some comments based on offline feedback from [~vinodkv] assuming this can still 
go into branch-2.1:

  - Change GetAllApplicationsRequest to GetApplicationsRequest. 
  - Introduce a helper empty constructor: public static GetApplicationsRequest 
newInstance() (see the sketch below)
  - YarnClient should also provide a getApplicationList() in addition to 
getApplicationList(Set)
  - Why is this listOpt.setOptionalArg(true); needed? -list does not take any 
args.
  - Make appTypes in ApplicationCLI a final static field? 
  - The descriptions of -list and -appTypes duplicate information about the 
comma-separated list.
  - How about appTypeOpt.setArgName("Comma-separated application types") to 
help the usage guide?
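
For illustration, a minimal sketch of the suggested helper factory, following 
the usual YARN record pattern (an assumption based on the comment above, not 
the committed YARN-727 change):

{code}
import org.apache.hadoop.yarn.util.Records;

// Hypothetical: a no-arg factory so callers can ask for all applications
// without building a type filter first.
public abstract class GetApplicationsRequest {
  public static GetApplicationsRequest newInstance() {
    return Records.newRecord(GetApplicationsRequest.class);
  }
}
{code}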


 ClientRMProtocol.getAllApplications should accept ApplicationType as a 
 parameter
 

 Key: YARN-727
 URL: https://issues.apache.org/jira/browse/YARN-727
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Siddharth Seth
Assignee: Xuan Gong
 Attachments: YARN-727.10.patch, YARN-727.11.patch, YARN-727.12.patch, 
 YARN-727.13.patch, YARN-727.14.patch, YARN-727.15.patch, YARN-727.16.patch, 
 YARN-727.1.patch, YARN-727.2.patch, YARN-727.3.patch, YARN-727.4.patch, 
 YARN-727.5.patch, YARN-727.6.patch, YARN-727.7.patch, YARN-727.8.patch, 
 YARN-727.9.patch


 Now that an ApplicationType is registered on ApplicationSubmission, 
 getAllApplications should be able to use this string to query for a specific 
 application type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter

2013-06-20 Thread Hitesh Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689995#comment-13689995
 ] 

Hitesh Shah commented on YARN-727:
--

To clarify on my previous comment, we need @vinodkv to confirm whether 
GetAllApplicationsRequest can be changed.

 ClientRMProtocol.getAllApplications should accept ApplicationType as a 
 parameter
 

 Key: YARN-727
 URL: https://issues.apache.org/jira/browse/YARN-727
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Siddharth Seth
Assignee: Xuan Gong
 Attachments: YARN-727.10.patch, YARN-727.11.patch, YARN-727.12.patch, 
 YARN-727.13.patch, YARN-727.14.patch, YARN-727.15.patch, YARN-727.16.patch, 
 YARN-727.1.patch, YARN-727.2.patch, YARN-727.3.patch, YARN-727.4.patch, 
 YARN-727.5.patch, YARN-727.6.patch, YARN-727.7.patch, YARN-727.8.patch, 
 YARN-727.9.patch


 Now that an ApplicationType is registered on ApplicationSubmission, 
 getAllApplications should be able to use this string to query for a specific 
 application type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira