[jira] [Created] (YARN-681) Move ContainerTokenIdentifier from yarn-common to yarn-server-common

2013-05-14 Thread Zhijie Shen (JIRA)
Zhijie Shen created YARN-681:


 Summary: Move ContainerTokenIdentifier from yarn-common to 
yarn-server-common
 Key: YARN-681
 URL: https://issues.apache.org/jira/browse/YARN-681
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Zhijie Shen
Assignee: Zhijie Shen


Move ContainerTokenIdentifier from yarn-common to yarn-server-common, such that 
the client will have no way of interpreting the ByteBuffer.
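For context, a rough sketch of the kind of decoding this move is meant to keep out of 
client code: the no-arg constructor and Writable-style readFields come from the 
TokenIdentifier contract, while the package and module location at this point in the 
code's history are assumptions.

{code}
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import org.apache.hadoop.yarn.security.ContainerTokenIdentifier;

public class DecodeContainerTokenSketch {
  // Server-side components reconstruct the identifier from the raw token bytes.
  // Once the class lives in yarn-server-common, client modules no longer compile
  // against it, so the ByteBuffer they carry stays opaque to them.
  static ContainerTokenIdentifier decode(byte[] identifierBytes) throws Exception {
    ContainerTokenIdentifier id = new ContainerTokenIdentifier();
    id.readFields(new DataInputStream(new ByteArrayInputStream(identifierBytes)));
    return id;
  }
}
{code}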

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (YARN-682) Add application type to submission context for map reduce

2013-05-14 Thread Mayank Bansal (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayank Bansal resolved YARN-682.


Resolution: Invalid

Accidentally opened, closing it.

 Add application type to submission context for map reduce
 -

 Key: YARN-682
 URL: https://issues.apache.org/jira/browse/YARN-682
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Mayank Bansal
Assignee: Mayank Bansal

 This field is needed to distinguish different types of applications (app 
 master implementations). For example, we may run applications of type XYZ in 
 a cluster alongside MR and would like to filter applications by type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-563) Add application type to ApplicationReport

2013-05-14 Thread Mayank Bansal (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13656920#comment-13656920
 ] 

Mayank Bansal commented on YARN-563:


Thanks Vinod for the review.

Incorporated all your comments. Created MAPREDUCE-5246 for the mapred changes.

Attaching both the patches.

Thanks,
Mayank

 Add application type to ApplicationReport 
 --

 Key: YARN-563
 URL: https://issues.apache.org/jira/browse/YARN-563
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Thomas Weise
Assignee: Mayank Bansal
 Attachments: YARN-563-trunk-1.patch, YARN-563-trunk-2.patch


 This field is needed to distinguish different types of applications (app 
 master implementations). For example, we may run applications of type XYZ in 
 a cluster alongside MR and would like to filter applications by type.
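As an illustration of what this field enables (a hedged sketch: the getter name 
getApplicationType follows the patch title, and the client class location and 
getApplications call follow later 2.x releases):

{code}
import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.client.api.YarnClient;

public class FilterByTypeSketch {
  // yarnClient is assumed to be already created and started; "XYZ" matches the
  // example application type in the description.
  static void printXyzApps(YarnClient yarnClient) throws Exception {
    for (ApplicationReport report : yarnClient.getApplications()) {
      if ("XYZ".equals(report.getApplicationType())) {
        System.out.println(report.getApplicationId() + " is of type XYZ");
      }
    }
  }
}
{code}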

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-563) Add application type to ApplicationReport

2013-05-14 Thread Mayank Bansal (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayank Bansal updated YARN-563:
---

Attachment: YARN-563-trunk-2.patch

 Add application type to ApplicationReport 
 --

 Key: YARN-563
 URL: https://issues.apache.org/jira/browse/YARN-563
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Thomas Weise
Assignee: Mayank Bansal
 Attachments: YARN-563-trunk-1.patch, YARN-563-trunk-2.patch


 This field is needed to distinguish different types of applications (app 
 master implementations). For example, we may run applications of type XYZ in 
 a cluster alongside MR and would like to filter applications by type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-563) Add application type to ApplicationReport

2013-05-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13656923#comment-13656923
 ] 

Hadoop QA commented on YARN-563:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12583103/YARN-563-trunk-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/924//console

This message is automatically generated.

 Add application type to ApplicationReport 
 --

 Key: YARN-563
 URL: https://issues.apache.org/jira/browse/YARN-563
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Thomas Weise
Assignee: Mayank Bansal
 Attachments: YARN-563-trunk-1.patch, YARN-563-trunk-2.patch


 This field is needed to distinguish different types of applications (app 
 master implementations). For example, we may run applications of type XYZ in 
 a cluster alongside MR and would like to filter applications by type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-615) ContainerLaunchContext.containerTokens should simply be called tokens

2013-05-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13656948#comment-13656948
 ] 

Hudson commented on YARN-615:
-

Integrated in Hadoop-Yarn-trunk #209 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/209/])
YARN-615. Rename ContainerLaunchContext.containerTokens to tokens. 
Contributed by Vinod Kumar Vavilapalli. (Revision 1482199)

 Result = SUCCESS
sseth : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1482199
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskAttemptContainerRequest.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerLaunchContext.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/BuilderUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/amlauncher/AMLauncher.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAMAuthorization.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java


 ContainerLaunchContext.containerTokens should simply be called tokens
 -

 Key: YARN-615
 URL: https://issues.apache.org/jira/browse/YARN-615
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli
 Fix For: 2.0.5-beta

 Attachments: YARN-615-20130503.txt, YARN-615-20130512.txt


 ContainerToken is the name of the specific token that AMs use to launch 
 containers on NMs, so we should rename CLC.containerTokens to be simply 
 tokens.
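 A minimal sketch of what the renamed setter looks like from the submitter's side, 
 assuming the post-rename API; the credentials serialization via DataOutputBuffer is 
 the usual pattern and is not taken from this patch:

{code}
import java.nio.ByteBuffer;
import org.apache.hadoop.io.DataOutputBuffer;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.util.Records;

public class TokensSketch {
  static ContainerLaunchContext withTokens(Credentials credentials) throws Exception {
    // Serialize the credentials and attach them through the renamed,
    // general-purpose setTokens() method.
    DataOutputBuffer dob = new DataOutputBuffer();
    credentials.writeTokenStorageToStream(dob);
    ByteBuffer tokens = ByteBuffer.wrap(dob.getData(), 0, dob.getLength());

    ContainerLaunchContext clc = Records.newRecord(ContainerLaunchContext.class);
    clc.setTokens(tokens);
    return clc;
  }
}
{code}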

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-597) TestFSDownload fails on Windows because of dependencies on tar/gzip/jar tools

2013-05-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13656947#comment-13656947
 ] 

Hudson commented on YARN-597:
-

Integrated in Hadoop-Yarn-trunk #209 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/209/])
YARN-597. TestFSDownload fails on Windows due to dependencies on 
tar/gzip/jar tools. Contributed by Ivan Mitic. (Revision 1482149)

 Result = SUCCESS
acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1482149
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestFSDownload.java


 TestFSDownload fails on Windows because of dependencies on tar/gzip/jar tools
 -

 Key: YARN-597
 URL: https://issues.apache.org/jira/browse/YARN-597
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0

 Attachments: YARN-597.patch


 {{testDownloadArchive}}, {{testDownloadPatternJar}} and 
 {{testDownloadArchiveZip}} fail with the similar Shell ExitCodeException:
 {code}
 testDownloadArchiveZip(org.apache.hadoop.yarn.util.TestFSDownload)  Time 
 elapsed: 480 sec   ERROR!
 org.apache.hadoop.util.Shell$ExitCodeException: bash: line 0: cd: 
 /D:/svn/t/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/TestFSDownload:
  No such file or directory
 gzip: 1: No such file or directory
   at org.apache.hadoop.util.Shell.runCommand(Shell.java:377)
   at org.apache.hadoop.util.Shell.run(Shell.java:292)
   at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:497)
   at 
 org.apache.hadoop.yarn.util.TestFSDownload.createZipFile(TestFSDownload.java:225)
   at 
 org.apache.hadoop.yarn.util.TestFSDownload.testDownloadArchiveZip(TestFSDownload.java:503)
 {code}
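 The direction such a fix usually takes (an illustrative assumption, not the contents 
 of YARN-597.patch) is to build the test archives with java.util.zip instead of 
 shelling out, so the test no longer needs bash, gzip, or tar on the PATH:

{code}
import java.io.*;
import java.util.zip.*;

public class CreateZipSketch {
  // Mirrors the createZipFile helper named in the stack trace, but written
  // against java.util.zip so it behaves the same on Windows and Unix.
  static void createZipFile(File zipFile, File input) throws IOException {
    try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(zipFile));
         InputStream in = new FileInputStream(input)) {
      zos.putNextEntry(new ZipEntry(input.getName()));
      byte[] buf = new byte[4096];
      int n;
      while ((n = in.read(buf)) != -1) {
        zos.write(buf, 0, n);
      }
      zos.closeEntry();
    }
  }
}
{code}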

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-597) TestFSDownload fails on Windows because of dependencies on tar/gzip/jar tools

2013-05-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13657030#comment-13657030
 ] 

Hudson commented on YARN-597:
-

Integrated in Hadoop-Hdfs-trunk #1398 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1398/])
YARN-597. TestFSDownload fails on Windows due to dependencies on 
tar/gzip/jar tools. Contributed by Ivan Mitic. (Revision 1482149)

 Result = FAILURE
acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1482149
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestFSDownload.java


 TestFSDownload fails on Windows because of dependencies on tar/gzip/jar tools
 -

 Key: YARN-597
 URL: https://issues.apache.org/jira/browse/YARN-597
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0

 Attachments: YARN-597.patch


 {{testDownloadArchive}}, {{testDownloadPatternJar}} and 
 {{testDownloadArchiveZip}} fail with the similar Shell ExitCodeException:
 {code}
 testDownloadArchiveZip(org.apache.hadoop.yarn.util.TestFSDownload)  Time 
 elapsed: 480 sec   ERROR!
 org.apache.hadoop.util.Shell$ExitCodeException: bash: line 0: cd: 
 /D:/svn/t/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/TestFSDownload:
  No such file or directory
 gzip: 1: No such file or directory
   at org.apache.hadoop.util.Shell.runCommand(Shell.java:377)
   at org.apache.hadoop.util.Shell.run(Shell.java:292)
   at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:497)
   at 
 org.apache.hadoop.yarn.util.TestFSDownload.createZipFile(TestFSDownload.java:225)
   at 
 org.apache.hadoop.yarn.util.TestFSDownload.testDownloadArchiveZip(TestFSDownload.java:503)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-615) ContainerLaunchContext.containerTokens should simply be called tokens

2013-05-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13657031#comment-13657031
 ] 

Hudson commented on YARN-615:
-

Integrated in Hadoop-Hdfs-trunk #1398 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1398/])
YARN-615. Rename ContainerLaunchContext.containerTokens to tokens. 
Contributed by Vinod Kumar Vavilapalli. (Revision 1482199)

 Result = FAILURE
sseth : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1482199
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskAttemptContainerRequest.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerLaunchContext.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/BuilderUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/amlauncher/AMLauncher.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAMAuthorization.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java


 ContainerLaunchContext.containerTokens should simply be called tokens
 -

 Key: YARN-615
 URL: https://issues.apache.org/jira/browse/YARN-615
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli
 Fix For: 2.0.5-beta

 Attachments: YARN-615-20130503.txt, YARN-615-20130512.txt


 ContainerToken is the name of the specific token that AMs use to launch 
 containers on NMs, so we should rename CLC.containerTokens to be simply 
 tokens.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-615) ContainerLaunchContext.containerTokens should simply be called tokens

2013-05-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13657067#comment-13657067
 ] 

Hudson commented on YARN-615:
-

Integrated in Hadoop-Mapreduce-trunk #1425 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1425/])
YARN-615. Rename ContainerLaunchContext.containerTokens to tokens. 
Contributed by Vinod Kumar Vavilapalli. (Revision 1482199)

 Result = SUCCESS
sseth : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1482199
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskAttemptContainerRequest.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerLaunchContext.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerLaunchContextPBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/BuilderUtils.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/amlauncher/AMLauncher.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAMAuthorization.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java


 ContainerLaunchContext.containerTokens should simply be called tokens
 -

 Key: YARN-615
 URL: https://issues.apache.org/jira/browse/YARN-615
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli
 Fix For: 2.0.5-beta

 Attachments: YARN-615-20130503.txt, YARN-615-20130512.txt


 ContainerToken is the name of the specific token that AMs use to launch 
 containers on NMs, so we should rename CLC.containerTokens to be simply 
 tokens.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-597) TestFSDownload fails on Windows because of dependencies on tar/gzip/jar tools

2013-05-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13657066#comment-13657066
 ] 

Hudson commented on YARN-597:
-

Integrated in Hadoop-Mapreduce-trunk #1425 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1425/])
YARN-597. TestFSDownload fails on Windows due to dependencies on 
tar/gzip/jar tools. Contributed by Ivan Mitic. (Revision 1482149)

 Result = SUCCESS
acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1482149
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestFSDownload.java


 TestFSDownload fails on Windows because of dependencies on tar/gzip/jar tools
 -

 Key: YARN-597
 URL: https://issues.apache.org/jira/browse/YARN-597
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0

 Attachments: YARN-597.patch


 {{testDownloadArchive}}, {{testDownloadPatternJar}} and 
 {{testDownloadArchiveZip}} fail with the similar Shell ExitCodeException:
 {code}
 testDownloadArchiveZip(org.apache.hadoop.yarn.util.TestFSDownload)  Time 
 elapsed: 480 sec   ERROR!
 org.apache.hadoop.util.Shell$ExitCodeException: bash: line 0: cd: 
 /D:/svn/t/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/TestFSDownload:
  No such file or directory
 gzip: 1: No such file or directory
   at org.apache.hadoop.util.Shell.runCommand(Shell.java:377)
   at org.apache.hadoop.util.Shell.run(Shell.java:292)
   at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:497)
   at 
 org.apache.hadoop.yarn.util.TestFSDownload.createZipFile(TestFSDownload.java:225)
   at 
 org.apache.hadoop.yarn.util.TestFSDownload.testDownloadArchiveZip(TestFSDownload.java:503)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-683) Class MiniYARNCluster not found when starting the minicluster

2013-05-14 Thread Rémy SAISSY (JIRA)
Rémy SAISSY created YARN-683:


 Summary: Class MiniYARNCluster not found when starting the 
minicluster
 Key: YARN-683
 URL: https://issues.apache.org/jira/browse/YARN-683
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.0.4-alpha, 3.0.0
 Environment: MacOSX 10.8.3 - Java 1.6.0_45
Reporter: Rémy SAISSY


Starting the minicluster with the following command line:
bin/hadoop jar 
share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.0.4-alpha-tests.jar 
minicluster -format

Fails for MiniYARNCluster with the following error:

13/05/14 17:06:58 INFO hdfs.MiniDFSCluster: Cluster is active
13/05/14 17:06:58 INFO mapreduce.MiniHadoopClusterManager: Started 
MiniDFSCluster -- namenode on port 55205
java.lang.NoClassDefFoundError: org/apache/hadoop/yarn/server/MiniYARNCluster
at 
org.apache.hadoop.mapreduce.MiniHadoopClusterManager.start(MiniHadoopClusterManager.java:170)
at 
org.apache.hadoop.mapreduce.MiniHadoopClusterManager.run(MiniHadoopClusterManager.java:129)
at 
org.apache.hadoop.mapreduce.MiniHadoopClusterManager.main(MiniHadoopClusterManager.java:314)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)
at 
org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:115)
at 
org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.lang.ClassNotFoundException: 
org.apache.hadoop.yarn.server.MiniYARNCluster
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
... 16 more
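For reference, MiniYARNCluster lives in the hadoop-yarn-server-tests module, which the 
minicluster driver apparently cannot see on its classpath here; the class is normally 
driven directly from test code, as in this minimal sketch (constructor arguments follow 
the 2.x test API: name, number of NodeManagers, local dirs, log dirs):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.MiniYARNCluster;

public class MiniYarnClusterSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new YarnConfiguration();
    // One NodeManager, one local dir, one log dir -- enough for a smoke test.
    MiniYARNCluster cluster = new MiniYARNCluster("test", 1, 1, 1);
    cluster.init(conf);
    cluster.start();
    // ... submit test applications against cluster.getConfig() ...
    cluster.stop();
  }
}
{code}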



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (YARN-669) ContainerTokens sent from the RM to NM via the AM should be a byte field

2013-05-14 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth resolved YARN-669.
-

Resolution: Invalid

 ContainerTokens sent from the RM to NM via the AM should be a byte field
 

 Key: YARN-669
 URL: https://issues.apache.org/jira/browse/YARN-669
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.0.4-alpha
Reporter: Siddharth Seth
Assignee: Vinod Kumar Vavilapalli
Priority: Critical

 AMs should not try to read any information from this Token, since this token 
 is used as an authorization mechanism. Converting it to a byte field also 
 allows changes to the token.
 This could be considered part of the API jira - YARN-386.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-638) Restore RMDelegationTokens after RM Restart

2013-05-14 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-638:
-

Attachment: YARN-638.9.patch

Fixed the HDFS test failure.

 Restore RMDelegationTokens after RM Restart
 ---

 Key: YARN-638
 URL: https://issues.apache.org/jira/browse/YARN-638
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-638.1.patch, YARN-638.2.patch, YARN-638.3.patch, 
 YARN-638.4.patch, YARN-638.5.patch, YARN-638.6.patch, YARN-638.7.patch, 
 YARN-638.8.patch, YARN-638.9.patch


 This was missed in YARN-581. After RM restart, RMDelegationTokens need to be 
 added both to the DelegationTokenRenewer (addressed in YARN-581) and to the 
 delegationTokenSecretManager.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-379) yarn [node,application] command print logger info messages

2013-05-14 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13657320#comment-13657320
 ] 

Ravi Prakash commented on YARN-379:
---

Hi Abhishek,

I recant. I like the patch's approach better. But we already have 
YARN_CLIENT_OPTS for this. Do you see a reason why we shouldn't add the NoOpLog 
to YARN_CLIENT_OPTS?

Thanks for your contribution. Let's get it in.
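For readers unfamiliar with the suggestion: "adding the NoOpLog" amounts to selecting 
commons-logging's no-op implementation via a system property, which YARN_CLIENT_OPTS 
could carry as a -D flag. A hedged one-class illustration:

{code}
public class QuietClientSketch {
  public static void main(String[] args) {
    // Equivalent to passing
    //   -Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.NoOpLog
    // via YARN_CLIENT_OPTS: commons-logging then routes everything to a logger
    // that discards messages.
    System.setProperty("org.apache.commons.logging.Log",
        "org.apache.commons.logging.impl.NoOpLog");
    // ... create and use the YARN client here; no AbstractService INFO lines appear.
  }
}
{code}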

 yarn [node,application] command print logger info messages
 --

 Key: YARN-379
 URL: https://issues.apache.org/jira/browse/YARN-379
 Project: Hadoop YARN
  Issue Type: Bug
  Components: client
Affects Versions: 2.0.3-alpha
Reporter: Thomas Graves
Assignee: Abhishek Kapoor
  Labels: usability
 Attachments: YARN-379.patch


 Running the yarn node and yarn application commands results in annoying 
 INFO log messages being printed:
 $ yarn node -list
 13/02/06 02:36:50 INFO service.AbstractService: 
 Service:org.apache.hadoop.yarn.client.YarnClientImpl is inited.
 13/02/06 02:36:50 INFO service.AbstractService: 
 Service:org.apache.hadoop.yarn.client.YarnClientImpl is started.
 Total Nodes:1
  Node-IdNode-State  Node-Http-Address   
 Health-Status(isNodeHealthy)Running-Containers
 foo:8041RUNNING  foo:8042   true  
  0
 13/02/06 02:36:50 INFO service.AbstractService: 
 Service:org.apache.hadoop.yarn.client.YarnClientImpl is stopped.
 $ yarn application
 13/02/06 02:38:47 INFO service.AbstractService: 
 Service:org.apache.hadoop.yarn.client.YarnClientImpl is inited.
 13/02/06 02:38:47 INFO service.AbstractService: 
 Service:org.apache.hadoop.yarn.client.YarnClientImpl is started.
 Invalid Command Usage : 
 usage: application
  -kill arg Kills the application.
  -list   Lists all the Applications from RM.
  -status arg   Prints the status of the application.
 13/02/06 02:38:47 INFO service.AbstractService: 
 Service:org.apache.hadoop.yarn.client.YarnClientImpl is stopped.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-666) [Umbrella] Support rolling upgrades in YARN

2013-05-14 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13657337#comment-13657337
 ] 

Carlo Curino commented on YARN-666:
---

This seems like a very important problem (and a very hard one too). 

Just to toss one more idea around: I think that an HDFS-based shuffle (we are 
playing around with it, and performance is much better than expected) could 
simplify some of the problems, as we could piggyback on the datanode 
decommissioning mechanics to migrate intermediate data out of a node being 
decommissioned. 
And (a bit obvious) preemption could be a good tool to make the draining fast 
without wasting work (the administrative scenarios we mentioned during the 
conversation in YARN-45). 

 [Umbrella] Support rolling upgrades in YARN
 ---

 Key: YARN-666
 URL: https://issues.apache.org/jira/browse/YARN-666
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.0.4-alpha
Reporter: Siddharth Seth
 Attachments: YARN_Rolling_Upgrades.pdf, YARN_Rolling_Upgrades_v2.pdf


 Jira to track changes required in YARN to allow rolling upgrades, including 
 documentation and possible upgrade routes. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-642) Fix up /nodes REST API to have 1 param and be consistent with the Java API

2013-05-14 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated YARN-642:


Attachment: YARN-642-2.patch

 Fix up /nodes REST API to have 1 param and be consistent with the Java API
 --

 Key: YARN-642
 URL: https://issues.apache.org/jira/browse/YARN-642
 Project: Hadoop YARN
  Issue Type: Bug
  Components: api, resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
  Labels: incompatible
 Attachments: YARN-642-1.patch, YARN-642-2.patch, YARN-642-2.patch, 
 YARN-642.patch


 The code behind the /nodes RM REST API is unnecessarily muddled, logs the 
 same misspelled INFO message repeatedly, and does not return unhealthy nodes, 
 even when asked.
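 For orientation, the resource in question is the RM's /ws/v1/cluster/nodes endpoint; a 
 hedged sketch of calling it is below. No filter parameter is passed, since the shape of 
 that parameter is exactly what this JIRA reworks, and "rm-host:8088" is a placeholder 
 for the ResourceManager web address.

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ListNodesSketch {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://rm-host:8088/ws/v1/cluster/nodes");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestProperty("Accept", "application/json");
    try (BufferedReader in =
             new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
      for (String line; (line = in.readLine()) != null; ) {
        System.out.println(line);  // JSON listing of node reports
      }
    }
  }
}
{code}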

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-642) Fix up /nodes REST API to have 1 param and be consistent with the Java API

2013-05-14 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13657393#comment-13657393
 ] 

Sandy Ryza commented on YARN-642:
-

Latest patch should fix the failing test.

 Fix up /nodes REST API to have 1 param and be consistent with the Java API
 --

 Key: YARN-642
 URL: https://issues.apache.org/jira/browse/YARN-642
 Project: Hadoop YARN
  Issue Type: Bug
  Components: api, resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
  Labels: incompatible
 Attachments: YARN-642-1.patch, YARN-642-2.patch, YARN-642-2.patch, 
 YARN-642.patch


 The code behind the /nodes RM REST API is unnecessarily muddled, logs the 
 same misspelled INFO message repeatedly, and does not return unhealthy nodes, 
 even when asked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-563) Add application type to ApplicationReport

2013-05-14 Thread Mayank Bansal (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13657394#comment-13657394
 ] 

Mayank Bansal commented on YARN-563:


Fixing the error.

Thanks,
Mayank

 Add application type to ApplicationReport 
 --

 Key: YARN-563
 URL: https://issues.apache.org/jira/browse/YARN-563
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Thomas Weise
Assignee: Mayank Bansal
 Attachments: YARN-563-trunk-1.patch, YARN-563-trunk-2.patch, 
 YARN-563-trunk-3.patch


 This field is needed to distinguish different types of applications (app 
 master implementations). For example, we may run applications of type XYZ in 
 a cluster alongside MR and would like to filter applications by type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-563) Add application type to ApplicationReport

2013-05-14 Thread Mayank Bansal (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayank Bansal updated YARN-563:
---

Attachment: YARN-563-trunk-3.patch

 Add application type to ApplicationReport 
 --

 Key: YARN-563
 URL: https://issues.apache.org/jira/browse/YARN-563
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Thomas Weise
Assignee: Mayank Bansal
 Attachments: YARN-563-trunk-1.patch, YARN-563-trunk-2.patch, 
 YARN-563-trunk-3.patch


 This field is needed to distinguish different types of applications (app 
 master implementations). For example, we may run applications of type XYZ in 
 a cluster alongside MR and would like to filter applications by type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-642) Fix up /nodes REST API to have 1 param and be consistent with the Java API

2013-05-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13657409#comment-13657409
 ] 

Hadoop QA commented on YARN-642:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12583185/YARN-642-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/926//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/926//console

This message is automatically generated.

 Fix up /nodes REST API to have 1 param and be consistent with the Java API
 --

 Key: YARN-642
 URL: https://issues.apache.org/jira/browse/YARN-642
 Project: Hadoop YARN
  Issue Type: Bug
  Components: api, resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
  Labels: incompatible
 Attachments: YARN-642-1.patch, YARN-642-2.patch, YARN-642-2.patch, 
 YARN-642.patch


 The code behind the /nodes RM REST API is unnecessarily muddled, logs the 
 same misspelled INFO message repeatedly, and does not return unhealthy nodes, 
 even when asked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-664) throw InvalidRequestException for requests with different capabilities at the same priority

2013-05-14 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated YARN-664:


Description: Nothing stops an application from submitting a request with 
priority=1, location=*, memory=1024 and a request with priority=1, 
location=rack1, memory=2048.  However, this does not make sense under the 
request model and can cause bad things to happen in the scheduler.  It should 
be possible to detect this at AMRM heartbeat time and throw an exception.  
(was: Nothing stops an application from submitting a request with priority=1, 
location=*, memory=1024 and a request with priority=1, location=rack1, 
memory=1024.  However, this does not make sense under the request model and can 
cause bad things to happen in the scheduler.  It should be possible to detect 
this at AMRM heartbeat time and throw an exception.)

 throw InvalidRequestException for requests with different capabilities at the 
 same priority
 ---

 Key: YARN-664
 URL: https://issues.apache.org/jira/browse/YARN-664
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, scheduler
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza

 Nothing stops an application from submitting a request with priority=1, 
 location=*, memory=1024 and a request with priority=1, location=rack1, 
 memory=2048.  However, this does not make sense under the request model and 
 can cause bad things to happen in the scheduler.  It should be possible to 
 detect this at AMRM heartbeat time and throw an exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-638) Restore RMDelegationTokens after RM Restart

2013-05-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13657442#comment-13657442
 ] 

Hadoop QA commented on YARN-638:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12583167/YARN-638.9.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/925//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/925//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/925//console

This message is automatically generated.

 Restore RMDelegationTokens after RM Restart
 ---

 Key: YARN-638
 URL: https://issues.apache.org/jira/browse/YARN-638
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-638.1.patch, YARN-638.2.patch, YARN-638.3.patch, 
 YARN-638.4.patch, YARN-638.5.patch, YARN-638.6.patch, YARN-638.7.patch, 
 YARN-638.8.patch, YARN-638.9.patch


 This was missed in YARN-581. After RM restart, RMDelegationTokens need to be 
 added both to the DelegationTokenRenewer (addressed in YARN-581) and to the 
 delegationTokenSecretManager.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-664) throw InvalidRequestException for requests with different capabilities at the same priority

2013-05-14 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13657452#comment-13657452
 ] 

Sandy Ryza commented on YARN-664:
-

My mistake - the original example was valid; I've switched it to an invalid 
one.

The AMRM protocol expects a corresponding rack-level and *-level request for 
every node-level request, and a corresponding *-level request for every 
rack-level request.  The requests in the updated example now mean something 
like "if you give me a container on rack1, it should have 2048 MB, but if it's 
on any other rack, it should have 1024 MB."  I don't think the scheduler should 
support this, as it's hard to imagine when it would be necessary, and it makes 
it impossible to calculate an application's demand.

If what you're saying is that the scheduler should be able to support requests 
like:
priority=1, location=*, memory=1024
priority=1, location=rack1, memory=1024
priority=1, location=*, memory=2048
priority=1, location=rack1, memory=2048

that might make sense.  I had filed YARN-314 for this a while ago, but have 
since become less convinced of its utility.  It would require deepening the 
data structures in the scheduler, which would mean extra hash lookups for each 
request.  The philosophy I've perceived in the design has been that containers 
with different requirements should be requested at explicitly different 
priorities. [~acm], would you be able to weigh in?
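To make the expected shape concrete, a minimal sketch of a consistent ask: node-, 
rack-, and *-level requests at one priority all carry the same capability. The 
record-factory newInstance methods here follow later 2.x releases (an assumption for 
illustration), and "node1"/"rack1" are placeholders.

{code}
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class ConsistentAskSketch {
  static List<ResourceRequest> ask() {
    Priority pri = Priority.newInstance(1);
    Resource cap = Resource.newInstance(1024, 1);  // same capability at every level
    return Arrays.asList(
        ResourceRequest.newInstance(pri, "node1", cap, 1),
        ResourceRequest.newInstance(pri, "rack1", cap, 1),
        ResourceRequest.newInstance(pri, ResourceRequest.ANY, cap, 1));
  }
}
{code}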

 throw InvalidRequestException for requests with different capabilities at the 
 same priority
 ---

 Key: YARN-664
 URL: https://issues.apache.org/jira/browse/YARN-664
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, scheduler
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza

 Nothing stops an application from submitting a request with priority=1, 
 location=*, memory=1024 and a request with priority=1, location=rack1, 
 memory=2048.  However, this does not make sense under the request model and 
 can cause bad things to happen in the scheduler.  It should be possible to 
 detect this at AMRM heartbeat time and throw an exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-664) throw InvalidRequestException for requests with different capabilities at the same priority

2013-05-14 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13657453#comment-13657453
 ] 

Sandy Ryza commented on YARN-664:
-

Sorry, [~acm], meant to mention [~acmurthy], not you.

 throw InvalidRequestException for requests with different capabilities at the 
 same priority
 ---

 Key: YARN-664
 URL: https://issues.apache.org/jira/browse/YARN-664
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, scheduler
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza

 Nothing stops an application from submitting a request with priority=1, 
 location=*, memory=1024 and a request with priority=1, location=rack1, 
 memory=2048.  However, this does not make sense under the request model and 
 can cause bad things to happen in the scheduler.  It should be possible to 
 detect this at AMRM heartbeat time and throw an exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-563) Add application type to ApplicationReport

2013-05-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13657465#comment-13657465
 ] 

Hadoop QA commented on YARN-563:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12583186/YARN-563-trunk-3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 10 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/927//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/927//console

This message is automatically generated.

 Add application type to ApplicationReport 
 --

 Key: YARN-563
 URL: https://issues.apache.org/jira/browse/YARN-563
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Thomas Weise
Assignee: Mayank Bansal
 Attachments: YARN-563-trunk-1.patch, YARN-563-trunk-2.patch, 
 YARN-563-trunk-3.patch


 This field is needed to distinguish different types of applications (app 
 master implementations). For example, we may run applications of type XYZ in 
 a cluster alongside MR and would like to filter applications by type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-392) Make it possible to schedule to specific nodes without dropping locality

2013-05-14 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated YARN-392:


Attachment: YARN-392-4.patch

 Make it possible to schedule to specific nodes without dropping locality
 

 Key: YARN-392
 URL: https://issues.apache.org/jira/browse/YARN-392
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha
Assignee: Sandy Ryza
 Attachments: YARN-392-1.patch, YARN-392-2.patch, YARN-392-2.patch, 
 YARN-392-2.patch, YARN-392-3.patch, YARN-392-4.patch, YARN-392.patch


 Currently it's not possible to specify scheduling requests for specific nodes 
 and nowhere else. The RM automatically relaxes locality to rack and * and 
 assigns non-specified machines to the app.
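 A sketch of the capability being asked for, written against the relaxLocality flag 
 that eventually landed for this feature (the method names are assumptions based on 
 that later API, not taken from the attached patches): a node-specific request the RM 
 must not relax to rack or *.

{code}
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class NodeOnlyAskSketch {
  static ResourceRequest nodeOnly() {
    // Ask for a container on "node1" only; relaxLocality=false tells the
    // scheduler not to fall back to the enclosing rack or to *.
    ResourceRequest req = ResourceRequest.newInstance(
        Priority.newInstance(1), "node1", Resource.newInstance(1024, 1), 1);
    req.setRelaxLocality(false);
    return req;
  }
}
{code}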

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-392) Make it possible to schedule to specific nodes without dropping locality

2013-05-14 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13657504#comment-13657504
 ] 

Sandy Ryza commented on YARN-392:
-

Updated patch adds javadocs

 Make it possible to schedule to specific nodes without dropping locality
 

 Key: YARN-392
 URL: https://issues.apache.org/jira/browse/YARN-392
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha
Assignee: Sandy Ryza
 Attachments: YARN-392-1.patch, YARN-392-2.patch, YARN-392-2.patch, 
 YARN-392-2.patch, YARN-392-3.patch, YARN-392-4.patch, YARN-392.patch


 Currently it's not possible to specify scheduling requests for specific nodes 
 and nowhere else. The RM automatically relaxes locality to rack and * and 
 assigns non-specified machines to the app.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (YARN-350) FSSchedulerNode is always instantiated with a 0 virtual core capacity

2013-05-14 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza resolved YARN-350.
-

Resolution: Duplicate

 FSSchedulerNode is always instantiated with a 0 virtual core capacity
 -

 Key: YARN-350
 URL: https://issues.apache.org/jira/browse/YARN-350
 Project: Hadoop YARN
  Issue Type: Bug
  Components: scheduler
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza

 After YARN-2, FSSchedulerNode was not updated to initialize with the 
 underlying RMNode's CPU capacity, and thus always has 0 virtual cores.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-129) Simplify classpath construction for mini YARN tests

2013-05-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13657616#comment-13657616
 ] 

Hudson commented on YARN-129:
-

Integrated in HBase-TRUNK #4117 (See 
[https://builds.apache.org/job/HBase-TRUNK/4117/])
HBASE-8528 [hadoop2] TestMultiTableInputFormat always hadoop with YARN-129 
applied (with Gary Helmling) (Revision 1482561)

 Result = FAILURE
jmhsieh : 
Files : 
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java


 Simplify classpath construction for mini YARN tests
 ---

 Key: YARN-129
 URL: https://issues.apache.org/jira/browse/YARN-129
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tom White
Assignee: Tom White
 Fix For: 2.0.3-alpha

 Attachments: YARN-129.patch, YARN-129.patch, YARN-129.patch


 The test classpath includes a special file called 'mrapp-generated-classpath' 
 (or similar in distributed shell) that is constructed at build time, and 
 whose contents are a classpath with all the dependencies needed to run the 
 tests. When the classpath for a container (e.g. the AM) is constructed the 
 contents of mrapp-generated-classpath is read and added to the classpath, and 
 the file itself is then added to the classpath so that later when the AM 
 constructs a classpath for a task container it can propagate the test 
 classpath correctly.
 This mechanism can be drastically simplified by propagating the system 
 classpath of the current JVM (read from the java.class.path property) to a 
 launched JVM, but only if running in the context of the mini YARN cluster. 
 Any tests that use the mini YARN cluster will automatically work with this 
 change, although any that explicitly deal with mrapp-generated-classpath can 
 then be simplified further.
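 A minimal sketch of the simplification described above (an illustration of the idea, 
 not the patch itself): forward the current JVM's classpath into the launched 
 container's environment when running under the mini YARN cluster.

{code}
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.util.Records;

public class PropagateClasspathSketch {
  static ContainerLaunchContext withTestClasspath() {
    // java.class.path already contains every dependency the test JVM loaded,
    // so the launched JVM can simply reuse it.
    Map<String, String> env = new HashMap<String, String>();
    env.put("CLASSPATH", System.getProperty("java.class.path"));

    ContainerLaunchContext clc = Records.newRecord(ContainerLaunchContext.class);
    clc.setEnvironment(env);
    return clc;
  }
}
{code}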

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-535) TestUnmanagedAMLauncher can corrupt target/test-classes/yarn-site.xml during write phase, breaks later test runs

2013-05-14 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13657684#comment-13657684
 ] 

Chris Nauroth commented on YARN-535:


+1 for the patch.  Thanks for making the change in {{TestDistributedShell}} 
too.  I verified the tests on both Mac and Windows.

 TestUnmanagedAMLauncher can corrupt target/test-classes/yarn-site.xml during 
 write phase, breaks later test runs
 

 Key: YARN-535
 URL: https://issues.apache.org/jira/browse/YARN-535
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications
Affects Versions: 3.0.0
 Environment: OS/X laptop, HFS+ filesystem
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor
 Attachments: YARN-535-02.patch, YARN-535.patch


 the setup phase of {{TestUnmanagedAMLauncher}} overwrites {{yarn-site.xml}}. 
 As {{Configuration.writeXml()}} does a reread of all resources, this will 
 break if the (open-for-writing) resource is already visible as an empty file. 
 This leaves a corrupted {{target/test-classes/yarn-site.xml}}, which breaks 
 later test runs, because it is not overwritten by later incremental builds 
 due to timestamps.
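 A hedged illustration of the hazard and the usual way around it (an assumption about 
 the remedy, not the contents of the attached patch): write the generated XML somewhere 
 that is not itself a loaded classpath resource, so writeXml()'s re-read of resources 
 never sees a half-written file.

{code}
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;
import org.apache.hadoop.conf.Configuration;

public class WriteConfSketch {
  static void writeConf(Configuration conf, File scratchDir) throws Exception {
    // Writing to a scratch directory (not target/test-classes) keeps the
    // classpath copy of yarn-site.xml intact even if this write is interrupted.
    File out = new File(scratchDir, "yarn-site.xml");
    try (OutputStream os = new FileOutputStream(out)) {
      conf.writeXml(os);
    }
  }
}
{code}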

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-613) Create NM proxy per NM instead of per container

2013-05-14 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13657744#comment-13657744
 ] 

Omkar Vinit Joshi commented on YARN-613:


I am summarizing the changes we need to make for an AMNMToken per AM per NM.

The AMNMToken will remain valid while the application is alive, so the AM will 
be able to communicate with an NM as long as:
* it has received the AMNMToken and has started at least one container on the 
underlying node (NodeManager), and
* the application has not yet finished (after that, the NM will no longer 
remember the AMNMToken master key).

List of changes:
* RM side
** The RM will have something like an RMAMNMTokenSecretManager, which generates 
a token for every application per NM. Token creation happens only once per NM 
per AM; if the AM requests and gets a new container on the same NM, the token 
is not regenerated. The RM therefore maintains a map of the AMNMTokens sent per 
AM per NM.
** The RM will share the master key with the NM in its heartbeat whenever it is 
updated.

* AM side
** The AM will have to remember AMNMTokens per NM, which it receives only once 
per NM during the allocate call.
** The AM will use this token for authentication by updating the UGI while 
communicating with the NM (see the sketch after this list).

* NM side
** The NMAMNMTokenSecretManager will remember the current and previous master 
keys received as part of the heartbeat.
** It will also remember the MasterKeyId per AM (appId), so that long-running 
jobs can be supported.
** It will authenticate startContainer, getContainerStatus and stopContainer 
calls using the AMNMToken via the already-saved master key; the very first 
startContainer request for an application is validated against the 
current/previous master key.
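
A minimal sketch of the AM-side token cache mentioned above; the class and 
method names are illustrative assumptions, not the eventual YARN API:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;

// Hedged sketch: the AM keeps one NM token per node address and reuses it for
// every container on that node, adding it to the UGI before opening the proxy.
public class AmNmTokenCache {

  // One token per NM address, received at most once via the allocate call.
  private final Map<String, Token<? extends TokenIdentifier>> tokensPerNm =
      new ConcurrentHashMap<String, Token<? extends TokenIdentifier>>();

  /** Remember the token for a node if we do not have one yet. */
  public void addIfAbsent(String nmAddress,
      Token<? extends TokenIdentifier> token) {
    tokensPerNm.putIfAbsent(nmAddress, token);
  }

  /** Build a UGI carrying the NM token before talking to that NM. */
  public UserGroupInformation ugiFor(String appAttemptId, String nmAddress) {
    UserGroupInformation ugi =
        UserGroupInformation.createRemoteUser(appAttemptId);
    Token<? extends TokenIdentifier> token = tokensPerNm.get(nmAddress);
    if (token != null) {
      ugi.addToken(token);
    }
    return ugi;
  }
}
{code}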


 Create NM proxy per NM instead of per container
 ---

 Key: YARN-613
 URL: https://issues.apache.org/jira/browse/YARN-613
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha
Assignee: Omkar Vinit Joshi

 Currently a new NM proxy has to be created per container since the secure 
 authentication is using a containertoken from the container.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-681) Move ContainerTokenIdentifier from yarn-common to yarn-server-common

2013-05-14 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen updated YARN-681:
-

Attachment: YARN-681.1.patch

A couple of refactoring changes: moving ContainerTokenIdentifier, 
ContainerTokenSelector and ContainerManagerSecurityInfo from the security 
package of yarn-common to the server.security package of yarn-server-common, 
moving TestContainerLaunchRPC from yarn-common to yarn-server-common, adding 
and editing the service files under META-INF of both modules, and fixing some 
small code-format problems while moving the code.
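
For context, a hedged sketch of why the META-INF/services entries have to move 
along with the classes: SecurityInfo implementations are discovered through 
java.util.ServiceLoader, which only finds providers listed in a 
META-INF/services/org.apache.hadoop.security.SecurityInfo file on the classpath 
of the module that contains the implementation.

{code:java}
import java.util.ServiceLoader;

import org.apache.hadoop.security.SecurityInfo;

// Hedged illustration (not part of the patch): list the SecurityInfo
// providers visible via ServiceLoader. An entry that names a class which has
// moved to another module would no longer resolve from the old module.
public class ListSecurityInfos {
  public static void main(String[] args) {
    for (SecurityInfo info : ServiceLoader.load(SecurityInfo.class)) {
      System.out.println(info.getClass().getName());
    }
  }
}
{code}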

 Move ContainerTokenIdentifier from yarn-common to yarn-server-common
 

 Key: YARN-681
 URL: https://issues.apache.org/jira/browse/YARN-681
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Zhijie Shen
Assignee: Zhijie Shen
 Attachments: YARN-681.1.patch


 Move ContainerTokenIdentifier from yarn-common to yarn-server-common, such 
 that the client will have no way of interpreting the ByteBuffer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-129) Simplify classpath construction for mini YARN tests

2013-05-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13657767#comment-13657767
 ] 

Hudson commented on YARN-129:
-

Integrated in hbase-0.95 #193 (See 
[https://builds.apache.org/job/hbase-0.95/193/])
HBASE-8528 [hadoop2] TestMultiTableInputFormat always on hadoop with 
YARN-129 applied (with Gary Helmling) (Revision 1482563)

 Result = SUCCESS
jmhsieh : 
Files : 
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java


 Simplify classpath construction for mini YARN tests
 ---

 Key: YARN-129
 URL: https://issues.apache.org/jira/browse/YARN-129
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tom White
Assignee: Tom White
 Fix For: 2.0.3-alpha

 Attachments: YARN-129.patch, YARN-129.patch, YARN-129.patch


 The test classpath includes a special file called 'mrapp-generated-classpath' 
 (or similar in distributed shell) that is constructed at build time, and 
 whose contents are a classpath with all the dependencies needed to run the 
 tests. When the classpath for a container (e.g. the AM) is constructed, the 
 contents of mrapp-generated-classpath are read and added to the classpath, and 
 the file itself is then added to the classpath so that later, when the AM 
 constructs a classpath for a task container, it can propagate the test 
 classpath correctly.
 This mechanism can be drastically simplified by propagating the system 
 classpath of the current JVM (read from the java.class.path property) to a 
 launched JVM, but only when running in the context of the mini YARN cluster. 
 Any tests that use the mini YARN cluster will automatically work with this 
 change, although any that explicitly deal with mrapp-generated-classpath can 
 be simplified further.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-681) Move ContainerTokenIdentifier from yarn-common to yarn-server-common

2013-05-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13657783#comment-13657783
 ] 

Hadoop QA commented on YARN-681:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12583254/YARN-681.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/929//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/929//console

This message is automatically generated.

 Move ContainerTokenIdentifier from yarn-common to yarn-server-common
 

 Key: YARN-681
 URL: https://issues.apache.org/jira/browse/YARN-681
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Zhijie Shen
Assignee: Zhijie Shen
 Attachments: YARN-681.1.patch


 Move ContainerTokenIdentifier from yarn-common to yarn-server-common, such 
 that the client will have no way of interpreting the ByteBuffer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-684) ContainerManager.startContainer needs to only have ContainerTokenIdentifier instead of the whole Container

2013-05-14 Thread Vinod Kumar Vavilapalli (JIRA)
Vinod Kumar Vavilapalli created YARN-684:


 Summary: ContainerManager.startContainer needs to only have 
ContainerTokenIdentifier instead of the whole Container
 Key: YARN-684
 URL: https://issues.apache.org/jira/browse/YARN-684
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli


The NM only needs the token; the whole Container is unnecessary.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-366) Add a tracing async dispatcher to simplify debugging

2013-05-14 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13657842#comment-13657842
 ] 

Sandy Ryza commented on YARN-366:
-

That seems reasonable to me, Vinod.  Uploading a patch that makes 
ContainerManagerImpl implement ResourceView and makes the config a class-name 
instead of a boolean.

 Add a tracing async dispatcher to simplify debugging
 

 Key: YARN-366
 URL: https://issues.apache.org/jira/browse/YARN-366
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: nodemanager, resourcemanager
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-366-1.patch, YARN-366-2.patch, YARN-366.patch


 Exceptions thrown in YARN/MR code with asynchronous event handling do not 
 contain informative stack traces, as all handle() methods sit directly under 
 the dispatcher thread's loop.
 This makes errors very difficult to debug for those who are not intimately 
 familiar with the code, as it is difficult to see which chain of events 
 caused a particular outcome.
 I propose adding an AsyncDispatcher that instruments events with tracing 
 information.  Whenever an event is dispatched during the handling of another 
 event, the dispatcher would annotate that event with a pointer to its parent. 
  When the dispatcher catches an exception, it could reconstruct a stack 
 trace of the chain of events that led to it, and be able to log something 
 informative.
 This would be an experimental feature, off by default, unless extensive 
 testing showed that it did not have a significant performance impact.
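 A minimal sketch of the traced-event idea (the names here are illustrative 
 assumptions, not the attached patch):

{code:java}
// Hedged sketch: each event carries a reference to the event whose handling
// dispatched it, so when an exception is caught the chain of events can be
// reconstructed and logged.
public class TracedEvent {

  private final String name;
  private final TracedEvent parent;  // event being handled when this was dispatched

  public TracedEvent(String name, TracedEvent parent) {
    this.name = name;
    this.parent = parent;
  }

  /** Builds a readable trace from this event back to the root event. */
  public String eventTrace() {
    StringBuilder sb = new StringBuilder();
    for (TracedEvent e = this; e != null; e = e.parent) {
      sb.append(e.name);
      if (e.parent != null) {
        sb.append(" <- dispatched while handling ");
      }
    }
    return sb.toString();
  }
}
{code}

 When an exception is caught, logging eventTrace() of the current event would 
 show the chain of events that led to it, rather than just the dispatcher loop.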

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-366) Add a tracing async dispatcher to simplify debugging

2013-05-14 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated YARN-366:


Attachment: YARN-366-3.patch

 Add a tracing async dispatcher to simplify debugging
 

 Key: YARN-366
 URL: https://issues.apache.org/jira/browse/YARN-366
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: nodemanager, resourcemanager
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-366-1.patch, YARN-366-2.patch, YARN-366-3.patch, 
 YARN-366.patch


 Exceptions thrown in YARN/MR code with asynchronous event handling do not 
 contain informative stack traces, as all handle() methods sit directly under 
 the dispatcher thread's loop.
 This makes errors very difficult to debug for those who are not intimately 
 familiar with the code, as it is difficult to see which chain of events 
 caused a particular outcome.
 I propose adding an AsyncDispatcher that instruments events with tracing 
 information.  Whenever an event is dispatched during the handling of another 
 event, the dispatcher would annotate that event with a pointer to its parent. 
  When the dispatcher catches an exception, it could reconstruct a stack 
 trace of the chain of events that led to it, and be able to log something 
 informative.
 This would be an experimental feature, off by default, unless extensive 
 testing showed that it did not have a significant performance impact.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-129) Simplify classpath construction for mini YARN tests

2013-05-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13657857#comment-13657857
 ] 

Hudson commented on YARN-129:
-

Integrated in hbase-0.95-on-hadoop2 #99 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/99/])
HBASE-8528 [hadoop2] TestMultiTableInputFormat always on hadoop with 
YARN-129 applied (with Gary Helmling) (Revision 1482563)

 Result = FAILURE
jmhsieh : 
Files : 
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java


 Simplify classpath construction for mini YARN tests
 ---

 Key: YARN-129
 URL: https://issues.apache.org/jira/browse/YARN-129
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tom White
Assignee: Tom White
 Fix For: 2.0.3-alpha

 Attachments: YARN-129.patch, YARN-129.patch, YARN-129.patch


 The test classpath includes a special file called 'mrapp-generated-classpath' 
 (or similar in distributed shell) that is constructed at build time, and 
 whose contents are a classpath with all the dependencies needed to run the 
 tests. When the classpath for a container (e.g. the AM) is constructed, the 
 contents of mrapp-generated-classpath are read and added to the classpath, and 
 the file itself is then added to the classpath so that later, when the AM 
 constructs a classpath for a task container, it can propagate the test 
 classpath correctly.
 This mechanism can be drastically simplified by propagating the system 
 classpath of the current JVM (read from the java.class.path property) to a 
 launched JVM, but only when running in the context of the mini YARN cluster. 
 Any tests that use the mini YARN cluster will automatically work with this 
 change, although any that explicitly deal with mrapp-generated-classpath can 
 be simplified further.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-666) [Umbrella] Support rolling upgrades in YARN

2013-05-14 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13657864#comment-13657864
 ] 

Vinod Kumar Vavilapalli commented on YARN-666:
--

bq. Just to toss one more idea around: I think that an HDFS-based shuffle (we 
are playing around with it and performance are much better than expected) 
Carlo, it will be great if you share some numbers :)

 [Umbrella] Support rolling upgrades in YARN
 ---

 Key: YARN-666
 URL: https://issues.apache.org/jira/browse/YARN-666
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.0.4-alpha
Reporter: Siddharth Seth
 Attachments: YARN_Rolling_Upgrades.pdf, YARN_Rolling_Upgrades_v2.pdf


 Jira to track changes required in YARN to allow rolling upgrades, including 
 documentation and possible upgrade routes. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (YARN-670) Add an Exception to indicate 'Maintenance' for NMs and add this to the JavaDoc for appropriate protocols

2013-05-14 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli reassigned YARN-670:


Assignee: Siddharth Seth

 Add an Exception to indicate 'Maintenance' for NMs and add this to the 
 JavaDoc for appropriate protocols
 

 Key: YARN-670
 URL: https://issues.apache.org/jira/browse/YARN-670
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Siddharth Seth
Assignee: Siddharth Seth



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (YARN-671) Add an interface on the RM to move NMs into a maintenance state

2013-05-14 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli reassigned YARN-671:


Assignee: Siddharth Seth

 Add an interface on the RM to move NMs into a maintenance state
 ---

 Key: YARN-671
 URL: https://issues.apache.org/jira/browse/YARN-671
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.0.4-alpha
Reporter: Siddharth Seth
Assignee: Siddharth Seth



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-129) Simplify classpath construction for mini YARN tests

2013-05-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13657876#comment-13657876
 ] 

Hudson commented on YARN-129:
-

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #530 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/530/])
HBASE-8528 [hadoop2] TestMultiTableInputFormat always hadoop with YARN-129 
applied (with Gary Helmling) (Revision 1482561)

 Result = FAILURE
jmhsieh : 
Files : 
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java


 Simplify classpath construction for mini YARN tests
 ---

 Key: YARN-129
 URL: https://issues.apache.org/jira/browse/YARN-129
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tom White
Assignee: Tom White
 Fix For: 2.0.3-alpha

 Attachments: YARN-129.patch, YARN-129.patch, YARN-129.patch


 The test classpath includes a special file called 'mrapp-generated-classpath' 
 (or similar in distributed shell) that is constructed at build time, and 
 whose contents are a classpath with all the dependencies needed to run the 
 tests. When the classpath for a container (e.g. the AM) is constructed, the 
 contents of mrapp-generated-classpath are read and added to the classpath, and 
 the file itself is then added to the classpath so that later, when the AM 
 constructs a classpath for a task container, it can propagate the test 
 classpath correctly.
 This mechanism can be drastically simplified by propagating the system 
 classpath of the current JVM (read from the java.class.path property) to a 
 launched JVM, but only when running in the context of the mini YARN cluster. 
 Any tests that use the mini YARN cluster will automatically work with this 
 change, although any that explicitly deal with mrapp-generated-classpath can 
 be simplified further.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-628) Fix YarnException unwrapping

2013-05-14 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated YARN-628:


Attachment: YARN-628.txt.2

Updated patch to handle client side exceptions from the RPC layer.

 Fix YarnException unwrapping
 

 Key: YARN-628
 URL: https://issues.apache.org/jira/browse/YARN-628
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.0.4-alpha
Reporter: Siddharth Seth
Assignee: Siddharth Seth
 Attachments: YARN-628.txt, YARN-628.txt.2


 Unwrapping of YarnRemoteExceptions (currently in YarnRemoteExceptionPBImpl, 
 RPCUtil post YARN-625) is broken, and often ends up throwing 
 UndeclaredThrowableException. This needs to be fixed.
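 A hedged sketch of the kind of unwrapping involved (illustrative only, not 
 the attached patch):

{code:java}
import java.lang.reflect.UndeclaredThrowableException;

// Hedged sketch: exceptions thrown through a dynamic RPC proxy that are not
// declared on the proxied interface surface as UndeclaredThrowableException,
// so client code has to peel those wrappers back to the original cause
// instead of letting them escape.
public class ExceptionUnwrapper {

  /** Returns the most specific cause hidden under UndeclaredThrowableException
   *  wrappers; returns the input unchanged if it is not wrapped. */
  public static Throwable unwrap(Throwable t) {
    Throwable cause = t;
    while (cause instanceof UndeclaredThrowableException
        && cause.getCause() != null) {
      cause = cause.getCause();
    }
    return cause;
  }
}
{code}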

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-628) Fix YarnException unwrapping

2013-05-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13657913#comment-13657913
 ] 

Hadoop QA commented on YARN-628:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12583279/YARN-628.txt.2
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests:

  org.apache.hadoop.yarn.TestContainerLaunchRPC

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/930//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/930//console

This message is automatically generated.

 Fix YarnException unwrapping
 

 Key: YARN-628
 URL: https://issues.apache.org/jira/browse/YARN-628
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.0.4-alpha
Reporter: Siddharth Seth
Assignee: Siddharth Seth
 Attachments: YARN-628.txt, YARN-628.txt.2


 Unwrapping of YarnRemoteExceptions (currently in YarnRemoteExceptionPBImpl, 
 RPCUtil post YARN-625) is broken, and often ends up throwing 
 UndeclaredThrowableException. This needs to be fixed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-628) Fix YarnException unwrapping

2013-05-14 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated YARN-628:


Attachment: YARN-628.txt

Fixes the unit test failure.

 Fix YarnException unwrapping
 

 Key: YARN-628
 URL: https://issues.apache.org/jira/browse/YARN-628
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.0.4-alpha
Reporter: Siddharth Seth
Assignee: Siddharth Seth
 Attachments: YARN-628.txt, YARN-628.txt, YARN-628.txt.2


 Unwrapping of YarnRemoteExceptions (currently in YarnRemoteExceptionPBImpl, 
 RPCUtil post YARN-625) is broken, and often ends up throwing 
 UndeclaredThrowableException. This needs to be fixed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-685) Capacity Scheduler is not distributing the reducers tasks across the cluster

2013-05-14 Thread Devaraj K (JIRA)
Devaraj K created YARN-685:
--

 Summary: Capacity Scheduler is not distributing the reducers tasks 
across the cluster
 Key: YARN-685
 URL: https://issues.apache.org/jira/browse/YARN-685
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler
Affects Versions: 2.0.4-alpha
Reporter: Devaraj K


Even when the total memory required by the reducers is less than the total 
cluster memory, the Capacity Scheduler does not assign the reducers 
(approximately) uniformly across all the nodes, and this happens even though 
no other jobs or tasks are running in the cluster at the time.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-628) Fix YarnException unwrapping

2013-05-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13658087#comment-13658087
 ] 

Hadoop QA commented on YARN-628:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12583281/YARN-628.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/931//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/931//console

This message is automatically generated.

 Fix YarnException unwrapping
 

 Key: YARN-628
 URL: https://issues.apache.org/jira/browse/YARN-628
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.0.4-alpha
Reporter: Siddharth Seth
Assignee: Siddharth Seth
 Attachments: YARN-628.txt, YARN-628.txt, YARN-628.txt.2


 Unwrapping of YarnRemoteExceptions (currently in YarnRemoteExceptionPBImpl, 
 RPCUtil post YARN-625) is broken, and often ends up throwing 
 UndeclaredThrowableException. This needs to be fixed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira