[jira] [Created] (YARN-852) TestAggregatedLogFormat.testContainerLogsFileAccess fails on Windows

2013-06-19 Thread Chuan Liu (JIRA)
Chuan Liu created YARN-852:
--

 Summary: TestAggregatedLogFormat.testContainerLogsFileAccess fails 
on Windows
 Key: YARN-852
 URL: https://issues.apache.org/jira/browse/YARN-852
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor


The YARN unit test case fails on Windows when comparing the expected message
with the log message in the file. The expected message constructed in the test
case has two problems: 1) it uses Path.separator to concatenate path strings.
Path.separator is always a forward slash, which does not match the backslash
used in the log message. 2) On Windows, the default file owner is the
Administrators group if the file is created by a user in the Administrators
group. The test expects the user to be the current user.
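
For illustration only (this is not the attached patch, and the names below are
hypothetical): one way to make such a check platform-neutral is to build the
expected path with the platform separator and to read the owner back from the
file system instead of assuming the current user, e.g.:

{code}
// Hypothetical sketch, not the attached patch: build the expected log path
// with the platform separator and ask the file system for the owner, so the
// comparison also holds on Windows where the owner may be the Administrators
// group.
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ExpectedOwnerSketch {
  public static void main(String[] args) throws Exception {
    File logDir = new File(System.getProperty("java.io.tmpdir"), "aggregated-log-sketch");
    logDir.mkdirs();
    // File.separator is a backslash on Windows, unlike a hard-coded "/".
    String expectedPath = logDir.getAbsolutePath() + File.separator + "syslog";
    new File(expectedPath).createNewFile();
    // Owner as reported by the file system (may be "Administrators" on Windows).
    String owner = Files.getOwner(Paths.get(expectedPath)).getName();
    System.out.println(expectedPath + " is owned by " + owner);
  }
}
{code}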

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-852) TestAggregatedLogFormat.testContainerLogsFileAccess fails on Windows

2013-06-19 Thread Chuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuan Liu updated YARN-852:
---

Attachment: YARN-852-trunk.patch

Attaching a patch that fixes the two problems mentioned in the description.

 TestAggregatedLogFormat.testContainerLogsFileAccess fails on Windows
 

 Key: YARN-852
 URL: https://issues.apache.org/jira/browse/YARN-852
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Attachments: YARN-852-trunk.patch


 The YARN unit test case fails on Windows when comparing the expected message
 with the log message in the file. The expected message constructed in the test
 case has two problems: 1) it uses Path.separator to concatenate path strings.
 Path.separator is always a forward slash, which does not match the backslash
 used in the log message. 2) On Windows, the default file owner is the
 Administrators group if the file is created by a user in the Administrators
 group. The test expects the user to be the current user.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-553) Have YarnClient generate a directly usable ApplicationSubmissionContext

2013-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687663#comment-13687663
 ] 

Hadoop QA commented on YARN-553:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588532/yarn-553-6.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1343//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1343//console

This message is automatically generated.

 Have YarnClient generate a directly usable ApplicationSubmissionContext
 ---

 Key: YARN-553
 URL: https://issues.apache.org/jira/browse/YARN-553
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Affects Versions: 2.0.3-alpha
Reporter: Harsh J
Assignee: Karthik Kambatla
Priority: Minor
 Attachments: yarn-553-1.patch, yarn-553-2.patch, yarn-553-3.patch, 
 yarn-553-4.patch, yarn-553-5.patch, yarn-553-6.patch


 Right now, we're doing multiple steps to create a relevant 
 ApplicationSubmissionContext for a pre-received GetNewApplicationResponse.
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationId appId = newApp.getApplicationId();
 ApplicationSubmissionContext appContext = 
 Records.newRecord(ApplicationSubmissionContext.class);
 appContext.setApplicationId(appId);
 {code}
 A simplified way may be to have the GetNewApplicationResponse itself provide 
 a helper method that builds a usable ApplicationSubmissionContext for us. 
 Something like:
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationSubmissionContext appContext = 
 newApp.generateApplicationSubmissionContext();
 {code}
 [The above method can also take an arg for the container launch spec, or 
 perhaps pre-load defaults like min-resource, etc. in the returned object, 
 aside of just associating the application ID automatically.]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-553) Have YarnClient generate a directly usable ApplicationSubmissionContext

2013-06-19 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated YARN-553:
---

Attachment: yarn-553-7.patch

 Have YarnClient generate a directly usable ApplicationSubmissionContext
 ---

 Key: YARN-553
 URL: https://issues.apache.org/jira/browse/YARN-553
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Affects Versions: 2.0.3-alpha
Reporter: Harsh J
Assignee: Karthik Kambatla
Priority: Minor
 Attachments: yarn-553-1.patch, yarn-553-2.patch, yarn-553-3.patch, 
 yarn-553-4.patch, yarn-553-5.patch, yarn-553-6.patch, yarn-553-7.patch


 Right now, we're doing multiple steps to create a relevant 
 ApplicationSubmissionContext for a pre-received GetNewApplicationResponse.
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationId appId = newApp.getApplicationId();
 ApplicationSubmissionContext appContext = 
 Records.newRecord(ApplicationSubmissionContext.class);
 appContext.setApplicationId(appId);
 {code}
 A simplified way may be to have the GetNewApplicationResponse itself provide 
 a helper method that builds a usable ApplicationSubmissionContext for us. 
 Something like:
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationSubmissionContext appContext = 
 newApp.generateApplicationSubmissionContext();
 {code}
 [The above method can also take an arg for the container launch spec, or 
 perhaps pre-load defaults like min-resource, etc. in the returned object, 
 aside of just associating the application ID automatically.]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-553) Have YarnClient generate a directly usable ApplicationSubmissionContext

2013-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687677#comment-13687677
 ] 

Hadoop QA commented on YARN-553:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588532/yarn-553-6.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1344//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1344//console

This message is automatically generated.

 Have YarnClient generate a directly usable ApplicationSubmissionContext
 ---

 Key: YARN-553
 URL: https://issues.apache.org/jira/browse/YARN-553
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Affects Versions: 2.0.3-alpha
Reporter: Harsh J
Assignee: Karthik Kambatla
Priority: Minor
 Attachments: yarn-553-1.patch, yarn-553-2.patch, yarn-553-3.patch, 
 yarn-553-4.patch, yarn-553-5.patch, yarn-553-6.patch, yarn-553-7.patch


 Right now, we're doing multiple steps to create a relevant 
 ApplicationSubmissionContext for a pre-received GetNewApplicationResponse.
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationId appId = newApp.getApplicationId();
 ApplicationSubmissionContext appContext = 
 Records.newRecord(ApplicationSubmissionContext.class);
 appContext.setApplicationId(appId);
 {code}
 A simplified way may be to have the GetNewApplicationResponse itself provide 
 a helper method that builds a usable ApplicationSubmissionContext for us. 
 Something like:
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationSubmissionContext appContext = 
 newApp.generateApplicationSubmissionContext();
 {code}
 [The above method can also take an arg for the container launch spec, or 
 perhaps pre-load defaults like min-resource, etc. in the returned object, 
 aside of just associating the application ID automatically.]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-852) TestAggregatedLogFormat.testContainerLogsFileAccess fails on Windows

2013-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687702#comment-13687702
 ] 

Hadoop QA commented on YARN-852:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588544/YARN-852-trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1346//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1346//console

This message is automatically generated.

 TestAggregatedLogFormat.testContainerLogsFileAccess fails on Windows
 

 Key: YARN-852
 URL: https://issues.apache.org/jira/browse/YARN-852
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Attachments: YARN-852-trunk.patch, YARN-852-trunk.patch


 The YARN unit test case fails on Windows when comparing the expected message
 with the log message in the file. The expected message constructed in the test
 case has two problems: 1) it uses Path.separator to concatenate path strings.
 Path.separator is always a forward slash, which does not match the backslash
 used in the log message. 2) On Windows, the default file owner is the
 Administrators group if the file is created by a user in the Administrators
 group. The test expects the user to be the current user.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-553) Have YarnClient generate a directly usable ApplicationSubmissionContext

2013-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687708#comment-13687708
 ] 

Hudson commented on YARN-553:
-

Integrated in Hadoop-trunk-Commit #3979 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3979/])
YARN-553. Replaced YarnClient.getNewApplication with 
YarnClient.createApplication which provides a directly usable 
ApplicationSubmissionContext to simplify the api. Contributed by Karthik 
Kambatla. (Revision 1494476)

 Result = SUCCESS
acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494476
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ResourceMgrDelegate.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/src/main/java/org/apache/hadoop/yarn/applications/unmanagedamlauncher/UnmanagedAMLauncher.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/YarnClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/YarnClientApplication.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestNMClient.java
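
As an illustration of the committed API shape (a sketch based only on the
commit message and the files listed above, not code from the patch; it assumes
a reachable ResourceManager and default configuration):

{code}
import org.apache.hadoop.yarn.api.protocolrecords.GetNewApplicationResponse;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.client.api.YarnClientApplication;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class CreateApplicationSketch {
  public static void main(String[] args) throws Exception {
    YarnClient yarnClient = YarnClient.createYarnClient();
    yarnClient.init(new YarnConfiguration());
    yarnClient.start();

    // One call now yields both the new-application response and a submission
    // context that already carries the application id.
    YarnClientApplication app = yarnClient.createApplication();
    GetNewApplicationResponse newAppResponse = app.getNewApplicationResponse();
    ApplicationSubmissionContext appContext = app.getApplicationSubmissionContext();
    ApplicationId appId = appContext.getApplicationId();

    System.out.println("Allocated " + appId + ", max resource "
        + newAppResponse.getMaximumResourceCapability());
    yarnClient.stop();
  }
}
{code}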


 Have YarnClient generate a directly usable ApplicationSubmissionContext
 ---

 Key: YARN-553
 URL: https://issues.apache.org/jira/browse/YARN-553
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Affects Versions: 2.0.3-alpha
Reporter: Harsh J
Assignee: Karthik Kambatla
Priority: Minor
 Fix For: 2.1.0-beta

 Attachments: yarn-553-1.patch, yarn-553-2.patch, yarn-553-3.patch, 
 yarn-553-4.patch, yarn-553-5.patch, yarn-553-6.patch, yarn-553-7.patch


 Right now, we're doing multiple steps to create a relevant 
 ApplicationSubmissionContext for a pre-received GetNewApplicationResponse.
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationId appId = newApp.getApplicationId();
 ApplicationSubmissionContext appContext = 
 Records.newRecord(ApplicationSubmissionContext.class);
 appContext.setApplicationId(appId);
 {code}
 A simplified way may be to have the GetNewApplicationResponse itself provide 
 a helper method that builds a usable ApplicationSubmissionContext for us. 
 Something like:
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationSubmissionContext appContext = 
 newApp.generateApplicationSubmissionContext();
 {code}
 [The above method can also take an arg for the container launch spec, or 
 perhaps pre-load defaults like min-resource, etc. in the returned object, 
 aside of just associating the application ID automatically.]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-425) coverage fix for yarn api

2013-06-19 Thread Aleksey Gorshkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Gorshkov updated YARN-425:
--

Attachment: YARN-425-trunk-v1.patch
YARN-425-branch-2-v1.patch
YARN-425-branch-0.23-v1.patch

 coverage fix for yarn api
 -

 Key: YARN-425
 URL: https://issues.apache.org/jira/browse/YARN-425
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 3.0.0, 0.23.7, 2.1.0-beta
Reporter: Aleksey Gorshkov
Assignee: Aleksey Gorshkov
 Attachments: YARN-425-branch-0.23-d.patch, 
 YARN-425-branch-0.23.patch, YARN-425-branch-0.23-v1.patch, 
 YARN-425-branch-2-b.patch, YARN-425-branch-2-c.patch, 
 YARN-425-branch-2.patch, YARN-425-branch-2-v1.patch, YARN-425-trunk-a.patch, 
 YARN-425-trunk-b.patch, YARN-425-trunk-c.patch, YARN-425-trunk-d.patch, 
 YARN-425-trunk.patch, YARN-425-trunk-v1.patch


 coverage fix for yarn api
 patch YARN-425-trunk-a.patch for trunk
 patch YARN-425-branch-2.patch for branch-2
 patch YARN-425-branch-0.23.patch for branch-0.23

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-425) coverage fix for yarn api

2013-06-19 Thread Aleksey Gorshkov (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687715#comment-13687715
 ] 

Aleksey Gorshkov commented on YARN-425:
---

Patches have been updated:
patch YARN-425-trunk-v1.patch for trunk 
patch YARN-425-branch-2-v1.patch for branch-2
patch YARN-425-branch-0.23-v1.patch for branch-0.23

 coverage fix for yarn api
 -

 Key: YARN-425
 URL: https://issues.apache.org/jira/browse/YARN-425
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 3.0.0, 0.23.7, 2.1.0-beta
Reporter: Aleksey Gorshkov
Assignee: Aleksey Gorshkov
 Attachments: YARN-425-branch-0.23-d.patch, 
 YARN-425-branch-0.23.patch, YARN-425-branch-0.23-v1.patch, 
 YARN-425-branch-2-b.patch, YARN-425-branch-2-c.patch, 
 YARN-425-branch-2.patch, YARN-425-branch-2-v1.patch, YARN-425-trunk-a.patch, 
 YARN-425-trunk-b.patch, YARN-425-trunk-c.patch, YARN-425-trunk-d.patch, 
 YARN-425-trunk.patch, YARN-425-trunk-v1.patch


 coverage fix for yarn api
 patch YARN-425-trunk-a.patch for trunk
 patch YARN-425-branch-2.patch for branch-2
 patch YARN-425-branch-0.23.patch for branch-0.23

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-425) coverage fix for yarn api

2013-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687731#comment-13687731
 ] 

Hadoop QA commented on YARN-425:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12588555/YARN-425-trunk-v1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1348//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1348//console

This message is automatically generated.

 coverage fix for yarn api
 -

 Key: YARN-425
 URL: https://issues.apache.org/jira/browse/YARN-425
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 3.0.0, 0.23.7, 2.1.0-beta
Reporter: Aleksey Gorshkov
Assignee: Aleksey Gorshkov
 Attachments: YARN-425-branch-0.23-d.patch, 
 YARN-425-branch-0.23.patch, YARN-425-branch-0.23-v1.patch, 
 YARN-425-branch-2-b.patch, YARN-425-branch-2-c.patch, 
 YARN-425-branch-2.patch, YARN-425-branch-2-v1.patch, YARN-425-trunk-a.patch, 
 YARN-425-trunk-b.patch, YARN-425-trunk-c.patch, YARN-425-trunk-d.patch, 
 YARN-425-trunk.patch, YARN-425-trunk-v1.patch


 coverage fix for yarn api
 patch YARN-425-trunk-a.patch for trunk
 patch YARN-425-branch-2.patch for branch-2
 patch YARN-425-branch-0.23.patch for branch-0.23

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-553) Have YarnClient generate a directly usable ApplicationSubmissionContext

2013-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687736#comment-13687736
 ] 

Hadoop QA commented on YARN-553:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588549/yarn-553-7.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1345//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1345//console

This message is automatically generated.

 Have YarnClient generate a directly usable ApplicationSubmissionContext
 ---

 Key: YARN-553
 URL: https://issues.apache.org/jira/browse/YARN-553
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Affects Versions: 2.0.3-alpha
Reporter: Harsh J
Assignee: Karthik Kambatla
Priority: Minor
 Fix For: 2.1.0-beta

 Attachments: yarn-553-1.patch, yarn-553-2.patch, yarn-553-3.patch, 
 yarn-553-4.patch, yarn-553-5.patch, yarn-553-6.patch, yarn-553-7.patch


 Right now, we're doing multiple steps to create a relevant 
 ApplicationSubmissionContext for a pre-received GetNewApplicationResponse.
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationId appId = newApp.getApplicationId();
 ApplicationSubmissionContext appContext = 
 Records.newRecord(ApplicationSubmissionContext.class);
 appContext.setApplicationId(appId);
 {code}
 A simplified way may be to have the GetNewApplicationResponse itself provide 
 a helper method that builds a usable ApplicationSubmissionContext for us. 
 Something like:
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationSubmissionContext appContext = 
 newApp.generateApplicationSubmissionContext();
 {code}
 [The above method can also take an arg for the container launch spec, or 
 perhaps pre-load defaults like min-resource, etc. in the returned object, 
 aside of just associating the application ID automatically.]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-791) Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API

2013-06-19 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated YARN-791:


Attachment: YARN-791-4.patch

 Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API
 -

 Key: YARN-791
 URL: https://issues.apache.org/jira/browse/YARN-791
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-791-1.patch, YARN-791-2.patch, YARN-791-3.patch, 
 YARN-791-4.patch, YARN-791.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-791) Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API

2013-06-19 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687802#comment-13687802
 ] 

Sandy Ryza commented on YARN-791:
-

Attaching a patch that makes both the REST API and the RPC API accept multiple
node states.
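
As a sketch of the intended RPC-side usage (illustrative only; the exact
signature comes from this patch and may still change, and the setup around the
client calls is assumed):

{code}
import java.util.List;

import org.apache.hadoop.yarn.api.records.NodeReport;
import org.apache.hadoop.yarn.api.records.NodeState;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class NodeStatesSketch {
  public static void main(String[] args) throws Exception {
    YarnClient yarnClient = YarnClient.createYarnClient();
    yarnClient.init(new YarnConfiguration());
    yarnClient.start();

    // Ask the RM for nodes in more than one state in a single call.
    List<NodeReport> nodes = yarnClient.getNodeReports(NodeState.RUNNING, NodeState.UNHEALTHY);
    for (NodeReport node : nodes) {
      System.out.println(node.getNodeId() + " -> " + node.getNodeState());
    }
    yarnClient.stop();
  }
}
{code}

On the REST side the /nodes resource would analogously take a comma-separated
list of node states as a query parameter.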

 Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API
 -

 Key: YARN-791
 URL: https://issues.apache.org/jira/browse/YARN-791
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-791-1.patch, YARN-791-2.patch, YARN-791-3.patch, 
 YARN-791-4.patch, YARN-791.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-791) Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API

2013-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687838#comment-13687838
 ] 

Hadoop QA commented on YARN-791:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588564/YARN-791-4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  org.apache.hadoop.yarn.client.cli.TestYarnCLI
  org.apache.hadoop.yarn.client.api.impl.TestNMClient

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1349//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1349//console

This message is automatically generated.

 Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API
 -

 Key: YARN-791
 URL: https://issues.apache.org/jira/browse/YARN-791
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-791-1.patch, YARN-791-2.patch, YARN-791-3.patch, 
 YARN-791-4.patch, YARN-791.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-848) Nodemanager does not register with RM using the fully qualified hostname

2013-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687847#comment-13687847
 ] 

Hudson commented on YARN-848:
-

Integrated in Hadoop-Yarn-trunk #245 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/245/])
YARN-848. Fix NodeManager to register with RM using the fully qualified 
hostname. Contributed by Hitesh Shah. (Revision 1494385)

 Result = FAILURE
sseth : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494385
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManager.java


 Nodemanager does not register with RM using the fully qualified hostname
 

 Key: YARN-848
 URL: https://issues.apache.org/jira/browse/YARN-848
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah
 Fix For: 2.1.0-beta

 Attachments: YARN-848.1.patch, YARN-848.3.patch


 If the hostname is misconfigured to not be fully qualified (i.e. hostname
 returns foo and hostname -f returns foo.bar.xyz), the NM ends up registering
 with the RM using only foo. This can create problems if DNS cannot resolve
 the hostname properly.
 Furthermore, HDFS uses fully qualified hostnames, which can end up affecting
 locality matches when allocating containers based on block locations.
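
(Illustration only, not the committed fix: the short name versus the fully
qualified name as the JVM sees them on the local host.)

{code}
import java.net.InetAddress;

public class HostnameSketch {
  public static void main(String[] args) throws Exception {
    InetAddress local = InetAddress.getLocalHost();
    // Short form, roughly what `hostname` prints (e.g. foo).
    System.out.println("host name:      " + local.getHostName());
    // Fully qualified form, roughly what `hostname -f` prints (e.g. foo.bar.xyz).
    System.out.println("canonical name: " + local.getCanonicalHostName());
  }
}
{code}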

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-850) Rename getClusterAvailableResources to getAvailableResources in AMRMClients

2013-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687850#comment-13687850
 ] 

Hudson commented on YARN-850:
-

Integrated in Hadoop-Yarn-trunk #245 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/245/])
YARN-850. Rename getClusterAvailableResources to getAvailableResources in 
AMRMClients (Jian He via bikas) (Revision 1494309)

 Result = FAILURE
bikas : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494309
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/AMRMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/AMRMClientAsyncImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/AMRMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/async/impl/TestAMRMClientAsync.java


 Rename getClusterAvailableResources to getAvailableResources in AMRMClients
 ---

 Key: YARN-850
 URL: https://issues.apache.org/jira/browse/YARN-850
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Jian He
Assignee: Jian He
 Fix For: 2.1.0-beta

 Attachments: YARN-850.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-694) Start using NMTokens to authenticate all communication with NM

2013-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687851#comment-13687851
 ] 

Hudson commented on YARN-694:
-

Integrated in Hadoop-Yarn-trunk #245 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/245/])
YARN-694. Starting to use NMTokens to authenticate all communication with 
NodeManagers. Contributed by Omkar Vinit Joshi. (Revision 1494369)

 Result = FAILURE
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494369
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/AppContext.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/launcher/ContainerLauncherImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerAllocator.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MockAppContext.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestFail.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRuntimeEstimators.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncherImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistory.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/NMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/NMClientAsyncImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/ContainerManagementProtocolProxy.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/NMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/async/impl/TestNMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestNMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/ContainerManagerSecurityInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/NMTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/NMTokenSelector.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/security/BaseNMTokenSecretManager.java
* 

[jira] [Commented] (YARN-553) Have YarnClient generate a directly usable ApplicationSubmissionContext

2013-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687848#comment-13687848
 ] 

Hudson commented on YARN-553:
-

Integrated in Hadoop-Yarn-trunk #245 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/245/])
YARN-553. Replaced YarnClient.getNewApplication with 
YarnClient.createApplication which provides a directly usable 
ApplicationSubmissionContext to simplify the api. Contributed by Karthik 
Kambatla. (Revision 1494476)

 Result = FAILURE
acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494476
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ResourceMgrDelegate.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/src/main/java/org/apache/hadoop/yarn/applications/unmanagedamlauncher/UnmanagedAMLauncher.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/YarnClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/YarnClientApplication.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestNMClient.java


 Have YarnClient generate a directly usable ApplicationSubmissionContext
 ---

 Key: YARN-553
 URL: https://issues.apache.org/jira/browse/YARN-553
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Affects Versions: 2.0.3-alpha
Reporter: Harsh J
Assignee: Karthik Kambatla
Priority: Minor
 Fix For: 2.1.0-beta

 Attachments: yarn-553-1.patch, yarn-553-2.patch, yarn-553-3.patch, 
 yarn-553-4.patch, yarn-553-5.patch, yarn-553-6.patch, yarn-553-7.patch


 Right now, we're doing multiple steps to create a relevant 
 ApplicationSubmissionContext for a pre-received GetNewApplicationResponse.
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationId appId = newApp.getApplicationId();
 ApplicationSubmissionContext appContext = 
 Records.newRecord(ApplicationSubmissionContext.class);
 appContext.setApplicationId(appId);
 {code}
 A simplified way may be to have the GetNewApplicationResponse itself provide 
 a helper method that builds a usable ApplicationSubmissionContext for us. 
 Something like:
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationSubmissionContext appContext = 
 newApp.generateApplicationSubmissionContext();
 {code}
 [The above method can also take an arg for the container launch spec, or 
 perhaps pre-load defaults like min-resource, etc. in the returned object, 
 aside of just associating the application ID automatically.]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-850) Rename getClusterAvailableResources to getAvailableResources in AMRMClients

2013-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687951#comment-13687951
 ] 

Hudson commented on YARN-850:
-

Integrated in Hadoop-Hdfs-trunk #1435 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1435/])
YARN-850. Rename getClusterAvailableResources to getAvailableResources in 
AMRMClients (Jian He via bikas) (Revision 1494309)

 Result = FAILURE
bikas : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494309
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/AMRMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/AMRMClientAsyncImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/AMRMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/async/impl/TestAMRMClientAsync.java


 Rename getClusterAvailableResources to getAvailableResources in AMRMClients
 ---

 Key: YARN-850
 URL: https://issues.apache.org/jira/browse/YARN-850
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Jian He
Assignee: Jian He
 Fix For: 2.1.0-beta

 Attachments: YARN-850.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-553) Have YarnClient generate a directly usable ApplicationSubmissionContext

2013-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687949#comment-13687949
 ] 

Hudson commented on YARN-553:
-

Integrated in Hadoop-Hdfs-trunk #1435 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1435/])
YARN-553. Replaced YarnClient.getNewApplication with 
YarnClient.createApplication which provides a directly usable 
ApplicationSubmissionContext to simplify the api. Contributed by Karthik 
Kambatla. (Revision 1494476)

 Result = FAILURE
acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494476
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ResourceMgrDelegate.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/src/main/java/org/apache/hadoop/yarn/applications/unmanagedamlauncher/UnmanagedAMLauncher.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/YarnClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/YarnClientApplication.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestNMClient.java


 Have YarnClient generate a directly usable ApplicationSubmissionContext
 ---

 Key: YARN-553
 URL: https://issues.apache.org/jira/browse/YARN-553
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Affects Versions: 2.0.3-alpha
Reporter: Harsh J
Assignee: Karthik Kambatla
Priority: Minor
 Fix For: 2.1.0-beta

 Attachments: yarn-553-1.patch, yarn-553-2.patch, yarn-553-3.patch, 
 yarn-553-4.patch, yarn-553-5.patch, yarn-553-6.patch, yarn-553-7.patch


 Right now, we're doing multiple steps to create a relevant 
 ApplicationSubmissionContext for a pre-received GetNewApplicationResponse.
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationId appId = newApp.getApplicationId();
 ApplicationSubmissionContext appContext = 
 Records.newRecord(ApplicationSubmissionContext.class);
 appContext.setApplicationId(appId);
 {code}
 A simplified way may be to have the GetNewApplicationResponse itself provide 
 a helper method that builds a usable ApplicationSubmissionContext for us. 
 Something like:
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationSubmissionContext appContext = 
 newApp.generateApplicationSubmissionContext();
 {code}
 [The above method can also take an arg for the container launch spec, or 
 perhaps pre-load defaults like min-resource, etc. in the returned object, 
 aside of just associating the application ID automatically.]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-848) Nodemanager does not register with RM using the fully qualified hostname

2013-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687948#comment-13687948
 ] 

Hudson commented on YARN-848:
-

Integrated in Hadoop-Hdfs-trunk #1435 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1435/])
YARN-848. Fix NodeManager to register with RM using the fully qualified 
hostname. Contributed by Hitesh Shah. (Revision 1494385)

 Result = FAILURE
sseth : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494385
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManager.java


 Nodemanager does not register with RM using the fully qualified hostname
 

 Key: YARN-848
 URL: https://issues.apache.org/jira/browse/YARN-848
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah
 Fix For: 2.1.0-beta

 Attachments: YARN-848.1.patch, YARN-848.3.patch


 If the hostname is misconfigured to not be fully qualified (i.e. hostname
 returns foo and hostname -f returns foo.bar.xyz), the NM ends up registering
 with the RM using only foo. This can create problems if DNS cannot resolve
 the hostname properly.
 Furthermore, HDFS uses fully qualified hostnames, which can end up affecting
 locality matches when allocating containers based on block locations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-694) Start using NMTokens to authenticate all communication with NM

2013-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687952#comment-13687952
 ] 

Hudson commented on YARN-694:
-

Integrated in Hadoop-Hdfs-trunk #1435 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1435/])
YARN-694. Starting to use NMTokens to authenticate all communication with 
NodeManagers. Contributed by Omkar Vinit Joshi. (Revision 1494369)

 Result = FAILURE
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494369
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/AppContext.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/launcher/ContainerLauncherImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerAllocator.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MockAppContext.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestFail.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRuntimeEstimators.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncherImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistory.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/NMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/NMClientAsyncImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/ContainerManagementProtocolProxy.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/NMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/async/impl/TestNMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestNMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/ContainerManagerSecurityInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/NMTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/NMTokenSelector.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/security/BaseNMTokenSecretManager.java
* 

[jira] [Created] (YARN-853) maximum-am-resource-percent doesn't work consistently with refreshQueues

2013-06-19 Thread Devaraj K (JIRA)
Devaraj K created YARN-853:
--

 Summary: maximum-am-resource-percent doesn't work consistently 
with refreshQueues
 Key: YARN-853
 URL: https://issues.apache.org/jira/browse/YARN-853
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler
Affects Versions: 3.0.0, 2.1.0-beta, 2.0.5-alpha
Reporter: Devaraj K
Assignee: Devaraj K


If we update the yarn.scheduler.capacity.maximum-am-resource-percent / 
yarn.scheduler.capacity.<queue-path>.maximum-am-resource-percent configuration 
and then do a refreshQueues, the new config value is used to calculate Max 
Active Applications and Max Active Applications Per User. However, if we add a 
new node after issuing the 'rmadmin -refreshQueues' command, the old 
maximum-am-resource-percent config value is used to calculate Max Active 
Applications and Max Active Applications Per User. 
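For reference, the properties in question live in capacity-scheduler.xml; the 
queue path and values below are only illustrative:

{code}
<!-- capacity-scheduler.xml (illustrative values; "root.default" is a hypothetical queue) -->
<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <value>0.2</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.maximum-am-resource-percent</name>
  <value>0.5</value>
</property>
{code}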

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-694) Start using NMTokens to authenticate all communication with NM

2013-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687997#comment-13687997
 ] 

Hudson commented on YARN-694:
-

Integrated in Hadoop-Mapreduce-trunk #1462 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1462/])
YARN-694. Starting to use NMTokens to authenticate all communication with 
NodeManagers. Contributed by Omkar Vinit Joshi. (Revision 1494369)

 Result = FAILURE
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1494369
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/AppContext.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/launcher/ContainerLauncherImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerAllocator.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MockAppContext.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestFail.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRuntimeEstimators.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncher.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/launcher/TestContainerLauncherImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/JobHistory.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/NMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/NMClientAsyncImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/ContainerManagementProtocolProxy.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/NMClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/async/impl/TestNMClientAsync.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestNMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/ContainerManagerSecurityInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/NMTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/NMTokenSelector.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/security/BaseNMTokenSecretManager.java
* 

[jira] [Commented] (YARN-848) Nodemanager does not register with RM using the fully qualified hostname

2013-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687993#comment-13687993
 ] 

Hudson commented on YARN-848:
-

Integrated in Hadoop-Mapreduce-trunk #1462 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1462/])
YARN-848. Fix NodeManager to register with RM using the fully qualified 
hostname. Contributed by Hitesh Shah. (Revision 1494385)

 Result = FAILURE
sseth : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1494385
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManager.java


 Nodemanager does not register with RM using the fully qualified hostname
 

 Key: YARN-848
 URL: https://issues.apache.org/jira/browse/YARN-848
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah
 Fix For: 2.1.0-beta

 Attachments: YARN-848.1.patch, YARN-848.3.patch


 If the hostname is misconfigured to not be fully qualified ( i.e. hostname 
 returns foo and hostname -f returns foo.bar.xyz ), the NM ends up registering 
 with the RM using only foo. This can create problems if DNS cannot resolve 
 the hostname properly. 
 Furthermore, HDFS uses fully qualified hostnames which can end up affecting 
 locality matches when allocating containers based on block locations. 
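 For context, a minimal illustration of the short versus fully qualified name in 
 Java (assuming DNS is configured as in the example above):

 {code}
 // Illustrative only: how the two names can differ on a misconfigured host.
 InetAddress localHost = InetAddress.getLocalHost();
 String shortName = localHost.getHostName();        // may return just "foo"
 String fqdn = localHost.getCanonicalHostName();    // resolves to "foo.bar.xyz" via DNS
 {code}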

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-553) Have YarnClient generate a directly usable ApplicationSubmissionContext

2013-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687994#comment-13687994
 ] 

Hudson commented on YARN-553:
-

Integrated in Hadoop-Mapreduce-trunk #1462 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1462/])
YARN-553. Replaced YarnClient.getNewApplication with 
YarnClient.createApplication which provides a directly usable 
ApplicationSubmissionContext to simplify the api. Contributed by Karthik 
Kambatla. (Revision 1494476)

 Result = FAILURE
acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1494476
Files : 
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/ResourceMgrDelegate.java
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/src/main/java/org/apache/hadoop/yarn/applications/unmanagedamlauncher/UnmanagedAMLauncher.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/YarnClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/YarnClientApplication.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClient.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestNMClient.java


 Have YarnClient generate a directly usable ApplicationSubmissionContext
 ---

 Key: YARN-553
 URL: https://issues.apache.org/jira/browse/YARN-553
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Affects Versions: 2.0.3-alpha
Reporter: Harsh J
Assignee: Karthik Kambatla
Priority: Minor
 Fix For: 2.1.0-beta

 Attachments: yarn-553-1.patch, yarn-553-2.patch, yarn-553-3.patch, 
 yarn-553-4.patch, yarn-553-5.patch, yarn-553-6.patch, yarn-553-7.patch


 Right now, we're doing multiple steps to create a relevant 
 ApplicationSubmissionContext for a pre-received GetNewApplicationResponse.
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationId appId = newApp.getApplicationId();
 ApplicationSubmissionContext appContext = 
 Records.newRecord(ApplicationSubmissionContext.class);
 appContext.setApplicationId(appId);
 {code}
 A simplified way may be to have the GetNewApplicationResponse itself provide 
 a helper method that builds a usable ApplicationSubmissionContext for us. 
 Something like:
 {code}
 GetNewApplicationResponse newApp = yarnClient.getNewApplication();
 ApplicationSubmissionContext appContext = 
 newApp.generateApplicationSubmissionContext();
 {code}
 [The above method can also take an arg for the container launch spec, or 
 perhaps pre-load defaults like min-resource, etc. in the returned object, 
 aside from just associating the application ID automatically.]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-694) Start using NMTokens to authenticate all communication with NM

2013-06-19 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688083#comment-13688083
 ] 

Jason Lowe commented on YARN-694:
-

This broke TestContainerLauncherImpl -- it's now hanging and failing builds.  
Jenkins gave a +1 to the patch because it ignores build failures; search for 
FAIL in the console output at 
https://builds.apache.org/job/PreCommit-YARN-Build/1334//console.  As an aside, 
I sure wish someone would commit HADOOP-9583 or an equivalent fix so we can 
avoid accidentally checking in test timeouts that Jenkins should have caught.

Please reopen and fix or I can file a new JIRA if desired.  While we're at it, 
since this test in particular has failed a number of times, we should add a 
timeout argument to the @Test annotation to help avoid build failures if/when 
the test breaks in the future.
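For illustration, the per-test timeout would look like this (standard JUnit 4 
syntax; the test name and value are just placeholders):

{code}
// Placeholder test name; @Test(timeout) makes the case fail instead of hanging the build.
@Test(timeout = 30000)
public void testContainerLaunchAndComplete() throws Exception {
  // existing test body
}
{code}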

 Start using NMTokens to authenticate all communication with NM
 --

 Key: YARN-694
 URL: https://issues.apache.org/jira/browse/YARN-694
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Fix For: 2.1.0-beta

 Attachments: YARN-694-20130613.patch, YARN-694-20130617.1.patch, 
 YARN-694-20130617.2.patch, YARN-694-20130617.patch, 
 YARN-694-20130618.1.patch, YARN-694-20130618.2.patch, 
 YARN-694-20130618.3.patch, YARN-694-20130618.4.patch, 
 YARN-694-20130618.5.patch, YARN-694-20130618.patch.branch-2, 
 YARN-694-20130618.patch.yarn-694-branch-2.1-beta


 AM uses the NMToken to authenticate all the AM-NM communication.
 The NM will validate the NMToken in the following manner:
 * If the NMToken is using the current or previous master key then the NMToken 
 is valid. In this case the NM will update its cache with this key for the 
 corresponding appId.
 * If the NMToken is using the master key which is present in the NM's cache 
 for the AM's appId then it will be validated based on that.
 * If the NMToken is invalid then the NM will reject the AM's calls.
 Modifications for ContainerToken:
 * At present, RPC validates AM-NM communication based on the ContainerToken. It 
 will be replaced with the NMToken. From now on, the AM will use one NMToken per 
 NM (replacing the earlier behavior of one ContainerToken per container per NM).
 * In a secured environment, startContainer currently uses the ContainerToken 
 from the UGI (YARN-617); after this change it will use it from the payload (Container).
 * ContainerToken will exist and it will only be used to validate the AM's 
 container start request.
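 A rough sketch of the validation order described above (field and method names 
 are hypothetical, not the actual NodeManager secret-manager code):

 {code}
 // Hypothetical sketch only, not the real NodeManager code.
 boolean isNMTokenValid(NMTokenIdentifier id, ApplicationAttemptId appAttemptId) {
   int keyId = id.getKeyId();
   if (keyId == currentMasterKey.getKeyId()
       || keyId == previousMasterKey.getKeyId()) {
     appAttemptToKeyId.put(appAttemptId, keyId);  // remember the key used by this AM
     return true;
   }
   Integer cachedKeyId = appAttemptToKeyId.get(appAttemptId);
   if (cachedKeyId != null && cachedKeyId == keyId) {
     return true;   // matches the key already cached for this app
   }
   return false;    // invalid token: the NM rejects the AM call
 }
 {code}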

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-791) Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API

2013-06-19 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688182#comment-13688182
 ] 

Alejandro Abdelnur commented on YARN-791:
-

I think we should have consistency between the HTTP and Java APIs (and at the 
protobuf level if possible):

Over HTTP, if we don't specify a state we get RUNNING only, and we can specify 
multiple state values separated by commas.

Over Java, we can achieve the exact same behavior by using a var arg (State 
... states); if called with () it would return RUNNING only.

And, as the protobuf message takes a list, if the list is empty we get RUNNING.
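A sketch of what the var-arg form could look like (hypothetical signature and 
helper, not the final API):

{code}
// Hypothetical sketch; "queryNodes" stands in for whatever does the actual RM call.
public List<NodeReport> getNodeReports(NodeState... states)
    throws YarnException, IOException {
  EnumSet<NodeState> filter = (states == null || states.length == 0)
      ? EnumSet.of(NodeState.RUNNING)           // no arg: RUNNING only, matching HTTP
      : EnumSet.copyOf(Arrays.asList(states));  // explicit states, like states=a,b over HTTP
  return queryNodes(filter);
}
{code}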




 Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API
 -

 Key: YARN-791
 URL: https://issues.apache.org/jira/browse/YARN-791
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-791-1.patch, YARN-791-2.patch, YARN-791-3.patch, 
 YARN-791-4.patch, YARN-791.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-736) Add a multi-resource fair sharing metric

2013-06-19 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688189#comment-13688189
 ] 

Karthik Kambatla commented on YARN-736:
---

Thanks Sandy. Yes, it should be okay to handle #2 in a separate JIRA.

Nits:
# AppSchedulable#getMaxShare(): instead of creating a resource with max values 
every time, we should create a Resource object on the first call and return it 
from then on (see the sketch below).
# There seem to be several constructors for FakeSchedulable. Can we get rid of 
any that are not used anywhere? Ignore this if all of them are used.
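A minimal sketch of the cached max-value Resource for nit #1 (illustrative 
only; the actual patch may differ):

{code}
// Illustrative sketch: build the unbounded Resource once and reuse it on every call.
private static final Resource UNBOUNDED =
    Resources.createResource(Integer.MAX_VALUE, Integer.MAX_VALUE);

public Resource getMaxShare() {
  return UNBOUNDED;
}
{code}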

 Add a multi-resource fair sharing metric
 

 Key: YARN-736
 URL: https://issues.apache.org/jira/browse/YARN-736
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scheduler
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-736-1.patch, YARN-736-2.patch, YARN-736.patch


 Currently, at a regular interval, the fair scheduler computes a fair memory 
 share for each queue and application inside it.  This fair share is not used 
 for scheduling decisions, but is displayed in the web UI, exposed as a 
 metric, and used for preemption decisions.
 With DRF and multi-resource scheduling, assigning a memory share as the fair 
 share metric to every queue no longer makes sense.  It's not obvious what the 
 replacement should be, but probably something like fractional fairness within 
 a queue, or distance from an ideal cluster state.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter

2013-06-19 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-727:
---

Attachment: YARN-727.14.patch

1. The getApplications API accepts a set of appTypes as a parameter.
2. The command line -list --appTypes=appType1,appType2 is parsed to display 
application reports for the given appTypes.
3. We get a null value when parsing the command line -list alone, so 
ApplicationCLI.java still checks for the null condition (sketched below).
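A hedged sketch of the client-side handling described above (variable names are 
illustrative, not the exact ApplicationCLI code):

{code}
// Illustrative sketch of parsing --appTypes and guarding the bare "-list" case.
Set<String> appTypes = new HashSet<String>();
String typesStr = cliParser.getOptionValue("appTypes");  // null when only -list is given
if (typesStr != null) {
  for (String type : typesStr.split(",")) {
    if (!type.trim().isEmpty()) {
      appTypes.add(type.trim());
    }
  }
}
List<ApplicationReport> reports = client.getApplications(appTypes);
{code}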

 ClientRMProtocol.getAllApplications should accept ApplicationType as a 
 parameter
 

 Key: YARN-727
 URL: https://issues.apache.org/jira/browse/YARN-727
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Siddharth Seth
Assignee: Xuan Gong
 Attachments: YARN-727.10.patch, YARN-727.11.patch, YARN-727.12.patch, 
 YARN-727.13.patch, YARN-727.14.patch, YARN-727.1.patch, YARN-727.2.patch, 
 YARN-727.3.patch, YARN-727.4.patch, YARN-727.5.patch, YARN-727.6.patch, 
 YARN-727.7.patch, YARN-727.8.patch, YARN-727.9.patch


 Now that an ApplicationType is registered on ApplicationSubmission, 
 getAllApplications should be able to use this string to query for a specific 
 application type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-851) Share NMTokens using NMTokenCache (api-based) instead of memory based approach which is used currently.

2013-06-19 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688200#comment-13688200
 ] 

Bikas Saha commented on YARN-851:
-

If an interface method is not going to be used then we are better off not 
exposing it.

 Share NMTokens using NMTokenCache (api-based) instead of memory based 
 approach which is used currently.
 ---

 Key: YARN-851
 URL: https://issues.apache.org/jira/browse/YARN-851
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-851-20130618.patch


 It is a follow up ticket for YARN-694. Changing the way NMTokens are shared.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-851) Share NMTokens using NMTokenCache (api-based) instead of memory based approach which is used currently.

2013-06-19 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688199#comment-13688199
 ] 

Bikas Saha commented on YARN-851:
-

This loop change doesn't quite look the same as earlier. The old code is 
looping until the token identifier matches; the token would get updated when 
the map got updated. The new code will also loop until the token 
identifier matches, but the token object reference is not being updated. So if 
the identifier does not match then the loop will never exit. Which one is correct?
{code}
-
+Token token = NMTokenCache.getNMToken(containerManagerBindAddr);
+
+if (token == null) {
+  throw new InvalidToken("No NMToken sent for "
+  + containerManagerBindAddr);
+}
+
 while (proxy != null
- && !proxy.token.getIdentifier().equals(
-nmTokens.get(containerManagerBindAddr).getIdentifier())) {
+ && !proxy.token.getIdentifier().equals(token.getIdentifier())) {
{code}
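For illustration, re-reading the cache inside the loop condition would keep the 
old semantics (sketch only; whether that is the intended behavior is exactly the 
question above):

{code}
// Sketch only: refresh the token reference on every iteration, as the old map lookup did.
while (proxy != null
    && !proxy.token.getIdentifier().equals(
        NMTokenCache.getNMToken(containerManagerBindAddr).getIdentifier())) {
  // wait/retry as before
}
{code}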

Why are tokens being removed from the TokenCache in the test?
{code}
-  nodeI = receivedNMTokens.keySet().iterator();
-  while (nodeI.hasNext()) {
-nmTokens.remove(nodeI.next());
+NMTokenCache.removeNMToken(nodeID);
+receivedNMTokens.put(nodeID, token.getToken());
{code}

An overall comment on NMTokenCache is that it's mixing up static and singleton 
for its full functionality. If it has an internal private singleton object then 
that object should take care of any required synchronization etc., and not the 
static class methods. Alternatively, one can make the internal map a private 
class object and not have an internal NMTokenCache object at all. 
We might be better off using a concurrent map implementation instead of coarse 
synchronization at the NMTokenCache method level (see the sketch below).
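A minimal sketch of the singleton-plus-concurrent-map alternative (illustrative 
only, not the actual NMTokenCache):

{code}
// Illustrative sketch: let a singleton own a ConcurrentMap instead of synchronizing
// every static method on the class.
public class NMTokenCache {
  private static final NMTokenCache INSTANCE = new NMTokenCache();

  private final ConcurrentMap<String, Token> nmTokens =
      new ConcurrentHashMap<String, Token>();

  public static NMTokenCache getSingleton() {
    return INSTANCE;
  }

  public Token getToken(String nodeAddr) {
    return nmTokens.get(nodeAddr);
  }

  public void setToken(String nodeAddr, Token token) {
    nmTokens.put(nodeAddr, token);
  }

  public void removeToken(String nodeAddr) {
    nmTokens.remove(nodeAddr);
  }
}
{code}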

Are these APIs meant to be public and for users? For what scenario? There is 
also a typo in the comment ("manger").
{code}
+  /**
+   * Removes NMToken for specified node manger
+   * @param nodeAddr node address (host:port)
+   */
+  public static synchronized void removeNMToken(String nodeAddr) {
+instance.nmTokens.remove(nodeAddr);
+  }
+  
+  /**
+   * It will remove all the nm tokens from its cache
+   */
+  public static synchronized void clearCache() {
+instance.nmTokens.clear();
+  }
{code}

Is this meant to be public for users? It should probably just return a Java 
unmodifiable map wrapped around the internal map, and mention in the javadoc 
that it returns an unmodifiable map, unless it's really required to create a 
snapshot of the token cache at that point in time.
{code}
+  /**
+   * It returns all NMTokens present in cache.
+   */
+  public static synchronized Map<String, Token> getAllNMTokens() {
+    Map<String, Token> nmTokens = new HashMap<String, Token>();
+    for (String key : instance.nmTokens.keySet()) {
+      nmTokens.put(key, instance.nmTokens.get(key));
+    }
+    return nmTokens;
+  }
{code}
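For comparison, the read-only view variant would look roughly like this (sketch 
only, ignoring synchronization details, and assuming a point-in-time snapshot is 
not actually required):

{code}
// Sketch: expose a read-only view instead of copying; callers see later updates
// but cannot modify the map.
public static Map<String, Token> getAllNMTokens() {
  return Collections.unmodifiableMap(instance.nmTokens);
}
{code}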

 Share NMTokens using NMTokenCache (api-based) instead of memory based 
 approach which is used currently.
 ---

 Key: YARN-851
 URL: https://issues.apache.org/jira/browse/YARN-851
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-851-20130618.patch


 It is a follow up ticket for YARN-694. Changing the way NMTokens are shared.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-736) Add a multi-resource fair sharing metric

2013-06-19 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688204#comment-13688204
 ] 

Sandy Ryza commented on YARN-736:
-

All the FakeSchedulable constructors appear to be used.

Uploaded a patch in which AppSchedulable#getMaxShare does not create a new 
Resource each time.

 Add a multi-resource fair sharing metric
 

 Key: YARN-736
 URL: https://issues.apache.org/jira/browse/YARN-736
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scheduler
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-736-1.patch, YARN-736-2.patch, YARN-736-3.patch, 
 YARN-736.patch


 Currently, at a regular interval, the fair scheduler computes a fair memory 
 share for each queue and application inside it.  This fair share is not used 
 for scheduling decisions, but is displayed in the web UI, exposed as a 
 metric, and used for preemption decisions.
 With DRF and multi-resource scheduling, assigning a memory share as the fair 
 share metric to every queue no longer makes sense.  It's not obvious what the 
 replacement should be, but probably something like fractional fairness within 
 a queue, or distance from an ideal cluster state.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-736) Add a multi-resource fair sharing metric

2013-06-19 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated YARN-736:


Attachment: YARN-736-3.patch

 Add a multi-resource fair sharing metric
 

 Key: YARN-736
 URL: https://issues.apache.org/jira/browse/YARN-736
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scheduler
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-736-1.patch, YARN-736-2.patch, YARN-736-3.patch, 
 YARN-736.patch


 Currently, at a regular interval, the fair scheduler computes a fair memory 
 share for each queue and application inside it.  This fair share is not used 
 for scheduling decisions, but is displayed in the web UI, exposed as a 
 metric, and used for preemption decisions.
 With DRF and multi-resource scheduling, assigning a memory share as the fair 
 share metric to every queue no longer makes sense.  It's not obvious what the 
 replacement should be, but probably something like fractional fairness within 
 a queue, or distance from an ideal cluster state.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-569) CapacityScheduler: support for preemption (using a capacity monitor)

2013-06-19 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688233#comment-13688233
 ] 

Bikas Saha commented on YARN-569:
-

bq. You're saying the block updating the responseMap probably belongs just 
before the return? That makes sense, though I haven't traced it explicitly.
Yes.

 CapacityScheduler: support for preemption (using a capacity monitor)
 

 Key: YARN-569
 URL: https://issues.apache.org/jira/browse/YARN-569
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: capacityscheduler
Reporter: Carlo Curino
Assignee: Carlo Curino
 Attachments: 3queues.pdf, CapScheduler_with_preemption.pdf, 
 preemption.2.patch, YARN-569.1.patch, YARN-569.2.patch, YARN-569.3.patch, 
 YARN-569.4.patch, YARN-569.5.patch, YARN-569.6.patch, YARN-569.8.patch, 
 YARN-569.patch, YARN-569.patch


 There is a tension between the fast-paced, reactive role of the 
 CapacityScheduler, which needs to respond quickly to 
 applications' resource requests and node updates, and the more introspective, 
 time-based considerations 
 needed to observe and correct for capacity balance. To this purpose, instead 
 of hacking the delicate
 mechanisms of the CapacityScheduler directly, we opted to add support for 
 preemption by means of a Capacity Monitor,
 which can optionally be run as a separate service (much like the 
 NMLivelinessMonitor).
 The capacity monitor (similar to equivalent functionality in the fair 
 scheduler) runs at regular intervals 
 (e.g., every 3 seconds), observes the state of the assignment of resources to 
 queues from the capacity scheduler, 
 performs off-line computation to determine whether preemption is needed and how 
 best to edit the current schedule to 
 improve capacity, and generates events that produce four possible actions:
 # Container de-reservations
 # Resource-based preemptions
 # Container-based preemptions
 # Container killing
 The actions listed above are progressively more costly, and it is up to the 
 policy to use them as desired to achieve the rebalancing goals. 
 Note that due to the lag in the effect of these actions the policy should 
 operate at the macroscopic level (e.g., preempt tens of containers
 from a queue) and not trying to tightly and consistently micromanage 
 container allocations. 
 - Preemption policy  (ProportionalCapacityPreemptionPolicy): 
 - 
 Preemption policies are by design pluggable, in the following we present an 
 initial policy (ProportionalCapacityPreemptionPolicy) we have been 
 experimenting with.  The ProportionalCapacityPreemptionPolicy behaves as 
 follows:
 # it gathers from the scheduler the state of the queues, in particular, their 
 current capacity, guaranteed capacity and pending requests (*)
 # if there are pending requests from queues that are under capacity it 
 computes a new ideal balanced state (**)
 # it computes the set of preemptions needed to repair the current schedule 
 and achieve capacity balance (accounting for natural completion rates, and 
 respecting bounds on the amount of preemption we allow for each round)
 # it selects which applications to preempt from each over-capacity queue (the 
 last one in the FIFO order)
 # it removes reservations from the most recently assigned app until the amount 
 of resource to reclaim is obtained, or until no more reservations exist
 # (if not enough) it issues preemptions for containers from the same 
 applications (reverse chronological order, last assigned container first), 
 again as long as necessary or until no containers except the AM container are left
 # (if not enough) it moves on to unreserve and preempt from the next 
 application 
 # containers that have been asked to be preempted are tracked across executions. 
 If a container remains among the ones to be preempted for more than a certain 
 time, it is moved to the list of containers to be forcibly 
 killed. 
 Notes:
 (*) at the moment, in order to avoid double-counting of the requests, we only 
 look at the ANY part of pending resource requests, which means we might not 
 preempt on behalf of AMs that ask only for specific locations but not any. 
 (**) The ideal balance state is one in which each queue has at least its 
 guaranteed capacity, and the spare capacity is distributed among queues (that 
 want some) as a weighted fair share, where the weighting is based on the 
 guaranteed capacity of a queue and the computation runs to a fixed point.  
 Tunables of the ProportionalCapacityPreemptionPolicy:
 # observe-only mode (i.e., log the actions it would take, but behave as 
 read-only)
 # how frequently to run the policy
 # how long to wait between preemption and kill of a container
 # which fraction of the containers 

[jira] [Commented] (YARN-569) CapacityScheduler: support for preemption (using a capacity monitor)

2013-06-19 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688237#comment-13688237
 ] 

Bikas Saha commented on YARN-569:
-

Since configuring this involves more than one config file, as long as it's clear 
which file to change for which config, it's all good. 

 CapacityScheduler: support for preemption (using a capacity monitor)
 

 Key: YARN-569
 URL: https://issues.apache.org/jira/browse/YARN-569
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: capacityscheduler
Reporter: Carlo Curino
Assignee: Carlo Curino
 Attachments: 3queues.pdf, CapScheduler_with_preemption.pdf, 
 preemption.2.patch, YARN-569.1.patch, YARN-569.2.patch, YARN-569.3.patch, 
 YARN-569.4.patch, YARN-569.5.patch, YARN-569.6.patch, YARN-569.8.patch, 
 YARN-569.patch, YARN-569.patch


 There is a tension between the fast-paced, reactive role of the 
 CapacityScheduler, which needs to respond quickly to 
 applications' resource requests and node updates, and the more introspective, 
 time-based considerations 
 needed to observe and correct for capacity balance. To this purpose, instead 
 of hacking the delicate
 mechanisms of the CapacityScheduler directly, we opted to add support for 
 preemption by means of a Capacity Monitor,
 which can optionally be run as a separate service (much like the 
 NMLivelinessMonitor).
 The capacity monitor (similar to equivalent functionality in the fair 
 scheduler) runs at regular intervals 
 (e.g., every 3 seconds), observes the state of the assignment of resources to 
 queues from the capacity scheduler, 
 performs off-line computation to determine whether preemption is needed and how 
 best to edit the current schedule to 
 improve capacity, and generates events that produce four possible actions:
 # Container de-reservations
 # Resource-based preemptions
 # Container-based preemptions
 # Container killing
 The actions listed above are progressively more costly, and it is up to the 
 policy to use them as desired to achieve the rebalancing goals. 
 Note that due to the lag in the effect of these actions the policy should 
 operate at the macroscopic level (e.g., preempt tens of containers
 from a queue) and not trying to tightly and consistently micromanage 
 container allocations. 
 - Preemption policy  (ProportionalCapacityPreemptionPolicy): 
 - 
 Preemption policies are by design pluggable, in the following we present an 
 initial policy (ProportionalCapacityPreemptionPolicy) we have been 
 experimenting with.  The ProportionalCapacityPreemptionPolicy behaves as 
 follows:
 # it gathers from the scheduler the state of the queues, in particular, their 
 current capacity, guaranteed capacity and pending requests (*)
 # if there are pending requests from queues that are under capacity it 
 computes a new ideal balanced state (**)
 # it computes the set of preemptions needed to repair the current schedule 
 and achieve capacity balance (accounting for natural completion rates, and 
 respecting bounds on the amount of preemption we allow for each round)
 # it selects which applications to preempt from each over-capacity queue (the 
 last one in the FIFO order)
 # it removes reservations from the most recently assigned app until the amount 
 of resource to reclaim is obtained, or until no more reservations exist
 # (if not enough) it issues preemptions for containers from the same 
 applications (reverse chronological order, last assigned container first), 
 again as long as necessary or until no containers except the AM container are left
 # (if not enough) it moves on to unreserve and preempt from the next 
 application 
 # containers that have been asked to be preempted are tracked across executions. 
 If a container remains among the ones to be preempted for more than a certain 
 time, it is moved to the list of containers to be forcibly 
 killed. 
 Notes:
 (*) at the moment, in order to avoid double-counting of the requests, we only 
 look at the ANY part of pending resource requests, which means we might not 
 preempt on behalf of AMs that ask only for specific locations but not any. 
 (**) The ideal balance state is one in which each queue has at least its 
 guaranteed capacity, and the spare capacity is distributed among queues (that 
 want some) as a weighted fair share, where the weighting is based on the 
 guaranteed capacity of a queue and the computation runs to a fixed point.  
 Tunables of the ProportionalCapacityPreemptionPolicy:
 # observe-only mode (i.e., log the actions it would take, but behave as 
 read-only)
 # how frequently to run the policy
 # how long to wait between preemption and kill of a container
 # which fraction of the containers I would like to obtain 

[jira] [Updated] (YARN-597) TestFSDownload fails on Windows because of dependencies on tar/gzip/jar tools

2013-06-19 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated YARN-597:
---

Fix Version/s: 2.1.0-beta
 Hadoop Flags: Reviewed

I merged this to branch-2 and branch-2.1-beta and updated CHANGES.txt in trunk 
to move attribution from the trunk section to release 2.1.0-beta.

 TestFSDownload fails on Windows because of dependencies on tar/gzip/jar tools
 -

 Key: YARN-597
 URL: https://issues.apache.org/jira/browse/YARN-597
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: YARN-597.patch


 {{testDownloadArchive}}, {{testDownloadPatternJar}} and 
 {{testDownloadArchiveZip}} fail with a similar Shell ExitCodeException:
 {code}
 testDownloadArchiveZip(org.apache.hadoop.yarn.util.TestFSDownload)  Time 
 elapsed: 480 sec   ERROR!
 org.apache.hadoop.util.Shell$ExitCodeException: bash: line 0: cd: 
 /D:/svn/t/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/TestFSDownload:
  No such file or directory
 gzip: 1: No such file or directory
   at org.apache.hadoop.util.Shell.runCommand(Shell.java:377)
   at org.apache.hadoop.util.Shell.run(Shell.java:292)
   at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:497)
   at 
 org.apache.hadoop.yarn.util.TestFSDownload.createZipFile(TestFSDownload.java:225)
   at 
 org.apache.hadoop.yarn.util.TestFSDownload.testDownloadArchiveZip(TestFSDownload.java:503)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-736) Add a multi-resource fair sharing metric

2013-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688244#comment-13688244
 ] 

Hadoop QA commented on YARN-736:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588643/YARN-736-3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1351//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1351//console

This message is automatically generated.

 Add a multi-resource fair sharing metric
 

 Key: YARN-736
 URL: https://issues.apache.org/jira/browse/YARN-736
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scheduler
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-736-1.patch, YARN-736-2.patch, YARN-736-3.patch, 
 YARN-736.patch


 Currently, at a regular interval, the fair scheduler computes a fair memory 
 share for each queue and application inside it.  This fair share is not used 
 for scheduling decisions, but is displayed in the web UI, exposed as a 
 metric, and used for preemption decisions.
 With DRF and multi-resource scheduling, assigning a memory share as the fair 
 share metric to every queue no longer makes sense.  It's not obvious what the 
 replacement should be, but probably something like fractional fairness within 
 a queue, or distance from an ideal cluster state.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-597) TestFSDownload fails on Windows because of dependencies on tar/gzip/jar tools

2013-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688253#comment-13688253
 ] 

Hudson commented on YARN-597:
-

Integrated in Hadoop-trunk-Commit #3983 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3983/])
YARN-597. Change attribution of YARN-597 from trunk to release 2.1.0-beta 
in CHANGES.txt. (cnauroth) (Revision 1494717)

 Result = SUCCESS
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1494717
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt


 TestFSDownload fails on Windows because of dependencies on tar/gzip/jar tools
 -

 Key: YARN-597
 URL: https://issues.apache.org/jira/browse/YARN-597
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: YARN-597.patch


 {{testDownloadArchive}}, {{testDownloadPatternJar}} and 
 {{testDownloadArchiveZip}} fail with a similar Shell ExitCodeException:
 {code}
 testDownloadArchiveZip(org.apache.hadoop.yarn.util.TestFSDownload)  Time 
 elapsed: 480 sec   ERROR!
 org.apache.hadoop.util.Shell$ExitCodeException: bash: line 0: cd: 
 /D:/svn/t/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/TestFSDownload:
  No such file or directory
 gzip: 1: No such file or directory
   at org.apache.hadoop.util.Shell.runCommand(Shell.java:377)
   at org.apache.hadoop.util.Shell.run(Shell.java:292)
   at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:497)
   at 
 org.apache.hadoop.yarn.util.TestFSDownload.createZipFile(TestFSDownload.java:225)
   at 
 org.apache.hadoop.yarn.util.TestFSDownload.testDownloadArchiveZip(TestFSDownload.java:503)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-736) Add a multi-resource fair sharing metric

2013-06-19 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688254#comment-13688254
 ] 

Karthik Kambatla commented on YARN-736:
---

Having an UNBOUNDED resource in Resources can be handy, even though having a 
local copy in AppSchedulable should be enough for this JIRA.

+1 to the yarn-736-3.patch.

 Add a multi-resource fair sharing metric
 

 Key: YARN-736
 URL: https://issues.apache.org/jira/browse/YARN-736
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scheduler
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-736-1.patch, YARN-736-2.patch, YARN-736-3.patch, 
 YARN-736.patch


 Currently, at a regular interval, the fair scheduler computes a fair memory 
 share for each queue and application inside it.  This fair share is not used 
 for scheduling decisions, but is displayed in the web UI, exposed as a 
 metric, and used for preemption decisions.
 With DRF and multi-resource scheduling, assigning a memory share as the fair 
 share metric to every queue no longer makes sense.  It's not obvious what the 
 replacement should be, but probably something like fractional fairness within 
 a queue, or distance from an ideal cluster state.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-852) TestAggregatedLogFormat.testContainerLogsFileAccess fails on Windows

2013-06-19 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated YARN-852:
---

Target Version/s: 3.0.0, 2.1.0-beta
Hadoop Flags: Reviewed

+1 for the patch.  I verified the test on Mac and Windows.  Thanks, Chuan!  
I'll commit this shortly.

 TestAggregatedLogFormat.testContainerLogsFileAccess fails on Windows
 

 Key: YARN-852
 URL: https://issues.apache.org/jira/browse/YARN-852
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Attachments: YARN-852-trunk.patch, YARN-852-trunk.patch


 The YARN unit test case fails on Windows when comparing expected message with 
 log message in the file. The expected message constructed in the test case 
 has two problems: 1) it uses Path.separator to concatenate path string. 
 Path.separator is always a forward slash, which does not match the backslash 
 used in the log message. 2) On Windows, the default file owner is 
 Administrators group if the file is created by an Administrators user. The 
 test expects the user to be the current user.
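 One plausible shape of the fix for problem 1 (illustrative only, with 
 hypothetical variable names; not necessarily what YARN-852-trunk.patch does) is 
 to build the expected string with the platform separator:

 {code}
 // Illustrative only: java.io.File.separator is "\" on Windows and "/" elsewhere,
 // so the expected message matches what the log writer actually produced.
 String expectedPath = srcDir + File.separator + containerId + File.separator + fileName;
 {code}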

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-569) CapacityScheduler: support for preemption (using a capacity monitor)

2013-06-19 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688272#comment-13688272
 ] 

Bikas Saha commented on YARN-569:
-

One other thing to check would be whether the preemption policy will use 
refreshed values when the capacity scheduler config is refreshed on the fly. It 
looks like cloneQueues() will take the absolute used and guaranteed numbers on 
every clone, so we should be good with respect to that. It would be good to 
check the other values the policy looks at.
I also noticed formatting issues with spaces in the patch, e.g. in cloneQueues().

 CapacityScheduler: support for preemption (using a capacity monitor)
 

 Key: YARN-569
 URL: https://issues.apache.org/jira/browse/YARN-569
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: capacityscheduler
Reporter: Carlo Curino
Assignee: Carlo Curino
 Attachments: 3queues.pdf, CapScheduler_with_preemption.pdf, 
 preemption.2.patch, YARN-569.1.patch, YARN-569.2.patch, YARN-569.3.patch, 
 YARN-569.4.patch, YARN-569.5.patch, YARN-569.6.patch, YARN-569.8.patch, 
 YARN-569.patch, YARN-569.patch


 There is a tension between the fast-paced, reactive role of the 
 CapacityScheduler, which needs to respond quickly to 
 applications' resource requests and node updates, and the more introspective, 
 time-based considerations 
 needed to observe and correct for capacity balance. To this purpose, instead 
 of hacking the delicate
 mechanisms of the CapacityScheduler directly, we opted to add support for 
 preemption by means of a Capacity Monitor,
 which can optionally be run as a separate service (much like the 
 NMLivelinessMonitor).
 The capacity monitor (similar to equivalent functionality in the fair 
 scheduler) runs at regular intervals 
 (e.g., every 3 seconds), observes the state of the assignment of resources to 
 queues from the capacity scheduler, 
 performs off-line computation to determine whether preemption is needed and how 
 best to edit the current schedule to 
 improve capacity, and generates events that produce four possible actions:
 # Container de-reservations
 # Resource-based preemptions
 # Container-based preemptions
 # Container killing
 The actions listed above are progressively more costly, and it is up to the 
 policy to use them as desired to achieve the rebalancing goals. 
 Note that due to the lag in the effect of these actions the policy should 
 operate at the macroscopic level (e.g., preempt tens of containers
 from a queue) and not trying to tightly and consistently micromanage 
 container allocations. 
 - Preemption policy  (ProportionalCapacityPreemptionPolicy): 
 - 
 Preemption policies are by design pluggable, in the following we present an 
 initial policy (ProportionalCapacityPreemptionPolicy) we have been 
 experimenting with.  The ProportionalCapacityPreemptionPolicy behaves as 
 follows:
 # it gathers from the scheduler the state of the queues, in particular, their 
 current capacity, guaranteed capacity and pending requests (*)
 # if there are pending requests from queues that are under capacity it 
 computes a new ideal balanced state (**)
 # it computes the set of preemptions needed to repair the current schedule 
 and achieve capacity balance (accounting for natural completion rates, and 
 respecting bounds on the amount of preemption we allow for each round)
 # it selects which applications to preempt from each over-capacity queue (the 
 last one in the FIFO order)
 # it removes reservations from the most recently assigned app until the amount 
 of resource to reclaim is obtained, or until no more reservations exist
 # (if not enough) it issues preemptions for containers from the same 
 applications (reverse chronological order, last assigned container first), 
 again as long as necessary or until no containers except the AM container are left
 # (if not enough) it moves on to unreserve and preempt from the next 
 application 
 # containers that have been asked to be preempted are tracked across executions. 
 If a container remains among the ones to be preempted for more than a certain 
 time, it is moved to the list of containers to be forcibly 
 killed. 
 Notes:
 (*) at the moment, in order to avoid double-counting of the requests, we only 
 look at the ANY part of pending resource requests, which means we might not 
 preempt on behalf of AMs that ask only for specific locations but not any. 
 (**) The ideal balance state is one in which each queue has at least its 
 guaranteed capacity, and the spare capacity is distributed among queues (that 
 want some) as a weighted fair share, where the weighting is based on the 
 guaranteed capacity of a queue and the computation runs to a fixed point.  
 Tunables of the 

[jira] [Commented] (YARN-852) TestAggregatedLogFormat.testContainerLogsFileAccess fails on Windows

2013-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688315#comment-13688315
 ] 

Hudson commented on YARN-852:
-

Integrated in Hadoop-trunk-Commit #3984 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3984/])
YARN-852. TestAggregatedLogFormat.testContainerLogsFileAccess fails on 
Windows. Contributed by Chuan Liu. (Revision 1494733)

 Result = SUCCESS
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1494733
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogFormat.java


 TestAggregatedLogFormat.testContainerLogsFileAccess fails on Windows
 

 Key: YARN-852
 URL: https://issues.apache.org/jira/browse/YARN-852
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Attachments: YARN-852-trunk.patch, YARN-852-trunk.patch


 The YARN unit test case fails on Windows when comparing expected message with 
 log message in the file. The expected message constructed in the test case 
 has two problems: 1) it uses Path.separator to concatenate path string. 
 Path.separator is always a forward slash, which does not match the backslash 
 used in the log message. 2) On Windows, the default file owner is 
 Administrators group if the file is created by an Administrators user. The 
 test expects the user to be the current user.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-694) Start using NMTokens to authenticate all communication with NM

2013-06-19 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688334#comment-13688334
 ] 

Vinod Kumar Vavilapalli commented on YARN-694:
--

I'm opening a new ticket for the test. Will also take care of HADOOP-9583.

 Start using NMTokens to authenticate all communication with NM
 --

 Key: YARN-694
 URL: https://issues.apache.org/jira/browse/YARN-694
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Fix For: 2.1.0-beta

 Attachments: YARN-694-20130613.patch, YARN-694-20130617.1.patch, 
 YARN-694-20130617.2.patch, YARN-694-20130617.patch, 
 YARN-694-20130618.1.patch, YARN-694-20130618.2.patch, 
 YARN-694-20130618.3.patch, YARN-694-20130618.4.patch, 
 YARN-694-20130618.5.patch, YARN-694-20130618.patch.branch-2, 
 YARN-694-20130618.patch.yarn-694-branch-2.1-beta


 AM uses the NMToken to authenticate all the AM-NM communication.
 The NM will validate the NMToken in the following manner:
 * If the NMToken is using the current or previous master key then the NMToken 
 is valid. In this case the NM will update its cache with this key for the 
 corresponding appId.
 * If the NMToken is using the master key which is present in the NM's cache 
 for the AM's appId then it will be validated based on that.
 * If the NMToken is invalid then the NM will reject the AM's calls.
 Modifications for ContainerToken:
 * At present, RPC validates AM-NM communication based on the ContainerToken. It 
 will be replaced with the NMToken. From now on, the AM will use one NMToken per 
 NM (replacing the earlier behavior of one ContainerToken per container per NM).
 * In a secured environment, startContainer currently uses the ContainerToken 
 from the UGI (YARN-617); after this change it will use it from the payload (Container).
 * ContainerToken will exist and it will only be used to validate the AM's 
 container start request.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter

2013-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688338#comment-13688338
 ] 

Hadoop QA commented on YARN-727:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588638/YARN-727.14.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  org.apache.hadoop.mapreduce.v2.TestMRJobsWithHistoryService

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1350//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1350//console

This message is automatically generated.

 ClientRMProtocol.getAllApplications should accept ApplicationType as a 
 parameter
 

 Key: YARN-727
 URL: https://issues.apache.org/jira/browse/YARN-727
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Siddharth Seth
Assignee: Xuan Gong
 Attachments: YARN-727.10.patch, YARN-727.11.patch, YARN-727.12.patch, 
 YARN-727.13.patch, YARN-727.14.patch, YARN-727.1.patch, YARN-727.2.patch, 
 YARN-727.3.patch, YARN-727.4.patch, YARN-727.5.patch, YARN-727.6.patch, 
 YARN-727.7.patch, YARN-727.8.patch, YARN-727.9.patch


 Now that an ApplicationType is registered on ApplicationSubmission, 
 getAllApplications should be able to use this string to query for a specific 
 application type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-694) Start using NMTokens to authenticate all communication with NM

2013-06-19 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688375#comment-13688375
 ] 

Vinod Kumar Vavilapalli commented on YARN-694:
--

bq. I'm opening a new ticket for the test.
MAPREDUCE-5334.

 Start using NMTokens to authenticate all communication with NM
 --

 Key: YARN-694
 URL: https://issues.apache.org/jira/browse/YARN-694
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Fix For: 2.1.0-beta

 Attachments: YARN-694-20130613.patch, YARN-694-20130617.1.patch, 
 YARN-694-20130617.2.patch, YARN-694-20130617.patch, 
 YARN-694-20130618.1.patch, YARN-694-20130618.2.patch, 
 YARN-694-20130618.3.patch, YARN-694-20130618.4.patch, 
 YARN-694-20130618.5.patch, YARN-694-20130618.patch.branch-2, 
 YARN-694-20130618.patch.yarn-694-branch-2.1-beta


 The AM uses the NMToken to authenticate all AM-NM communication.
 The NM will validate the NMToken in the following manner:
 * If the NMToken was issued with the current or previous master key, then the 
 NMToken is valid. In this case the NM will update its cache with this key for 
 the corresponding appId.
 * If the NMToken uses a master key that is present in the NM's cache for the 
 AM's appId, then it will be validated against that key.
 * If the NMToken is invalid, the NM will reject the AM's calls.
 Modifications for ContainerToken:
 * At present RPC validates AM-NM communication based on the ContainerToken. It 
 will be replaced with the NMToken. From now on the AM will use one NMToken per 
 NM (replacing the earlier behavior of one ContainerToken per container per NM).
 * In a secured environment, startContainer currently takes the ContainerToken 
 from the UGI (YARN-617); after this change it will take it from the payload 
 (Container).
 * The ContainerToken will still exist, but it will only be used to validate 
 the AM's container start request.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-569) CapacityScheduler: support for preemption (using a capacity monitor)

2013-06-19 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated YARN-569:
---

Attachment: YARN-569.9.patch

bq. One other thing to check would be if the preemption policy will use 
refreshed values when the capacity scheduler config is refreshed on the fly. 
Looks like cloneQueues() will take the absolute used and guaranteed numbers on 
every clone. So we should be good wrt that. Would be good to check other values 
the policy looks at.

*nod* Right now, the policy rebuilds its view of the scheduler at every pass, 
but it doesn't refresh its own config parameters.

bq. Noticed formatting issues with spaces in the patch. eg. cloneQueues()

Did another pass over the patch, fixed up spacing, formatting, and removed 
obvious whitespace changes. Sorry, did a few of these already, but missed a few.

Also moved the check in the {{ApplicationMasterService}} as part of this patch.

 CapacityScheduler: support for preemption (using a capacity monitor)
 

 Key: YARN-569
 URL: https://issues.apache.org/jira/browse/YARN-569
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: capacityscheduler
Reporter: Carlo Curino
Assignee: Carlo Curino
 Attachments: 3queues.pdf, CapScheduler_with_preemption.pdf, 
 preemption.2.patch, YARN-569.1.patch, YARN-569.2.patch, YARN-569.3.patch, 
 YARN-569.4.patch, YARN-569.5.patch, YARN-569.6.patch, YARN-569.8.patch, 
 YARN-569.9.patch, YARN-569.patch, YARN-569.patch


 There is a tension between the fast-paced, reactive role of the 
 CapacityScheduler, which needs to respond quickly to applications' resource 
 requests and node updates, and the more introspective, time-based 
 considerations needed to observe and correct for capacity balance. To this 
 purpose, instead of hacking the delicate mechanisms of the CapacityScheduler 
 directly, we opted to add support for preemption by means of a Capacity 
 Monitor, which can optionally be run as a separate service (much like the 
 NMLivelinessMonitor).
 The capacity monitor (similar to equivalent functionality in the fair 
 scheduler) runs at intervals (e.g., every 3 seconds), observes the state of 
 the assignment of resources to queues in the capacity scheduler, performs an 
 off-line computation to determine whether preemption is needed and how best 
 to edit the current schedule to improve capacity, and generates events that 
 produce four possible actions:
 # Container de-reservations
 # Resource-based preemptions
 # Container-based preemptions
 # Container killing
 The actions listed above are progressively more costly, and it is up to the 
 policy to use them as desired to achieve the rebalancing goals. 
 Note that due to the lag in the effect of these actions, the policy should 
 operate at the macroscopic level (e.g., preempt tens of containers from a 
 queue) and not try to tightly and consistently micromanage container 
 allocations. 
 - Preemption policy  (ProportionalCapacityPreemptionPolicy): 
 - 
 Preemption policies are by design pluggable; in the following we present an 
 initial policy (ProportionalCapacityPreemptionPolicy) we have been 
 experimenting with. The ProportionalCapacityPreemptionPolicy behaves as 
 follows:
 # it gathers from the scheduler the state of the queues, in particular their 
 current capacity, guaranteed capacity and pending requests (*)
 # if there are pending requests from queues that are under capacity, it 
 computes a new ideal balanced state (**)
 # it computes the set of preemptions needed to repair the current schedule 
 and achieve capacity balance (accounting for natural completion rates, and 
 respecting bounds on the amount of preemption we allow per round)
 # it selects which applications to preempt from each over-capacity queue (the 
 last one in the FIFO order)
 # it removes reservations from the most recently assigned app until the 
 amount of resource to reclaim is obtained, or until no more reservations exist
 # (if not enough) it issues preemptions for containers from the same 
 applications (in reverse chronological order, last assigned container first), 
 again until the target is met or until no containers except the AM container 
 are left,
 # (if not enough) it moves on to unreserve and preempt from the next 
 application. 
 # containers that have been asked to be preempted are tracked across 
 executions. If a container remains among those to be preempted for more than 
 a certain time, it is moved to the list of containers to be forcibly killed. 
 Notes:
 (*) at the moment, in order to avoid double-counting of requests, we only 
 look at the ANY part of pending resource requests, which means we might not 
 preempt on behalf of AMs that ask only 
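To make the rebalancing step concrete, here is a toy sketch of a proportional computation: queues over their guarantee give back resources in proportion to how far over they are, bounded by a per-round limit. This only illustrates the idea and is not the ProportionalCapacityPreemptionPolicy code.

{code:java}
// Toy illustration of the proportional preemption computation; amounts are in
// MB of memory for simplicity. Not the actual policy implementation.
import java.util.HashMap;
import java.util.Map;

class ProportionalPreemptionSketch {

  /** How much to preempt from each over-capacity queue this round. */
  static Map<String, Long> toPreempt(Map<String, Long> guaranteed,
                                     Map<String, Long> used,
                                     long pendingUnderCapacityDemand,
                                     double maxPerRoundFraction) {
    long totalOver = 0;
    for (String q : used.keySet()) {
      totalOver += Math.max(0, used.get(q) - guaranteed.get(q));
    }
    // Never try to reclaim more than is actually over-allocated.
    long target = Math.min(pendingUnderCapacityDemand, totalOver);

    Map<String, Long> result = new HashMap<String, Long>();
    if (totalOver == 0) {
      return result;
    }
    for (String q : used.keySet()) {
      long over = Math.max(0, used.get(q) - guaranteed.get(q));
      if (over == 0) {
        continue;
      }
      long share = (long) ((double) over / totalOver * target);
      // Respect the per-round bound on how much a queue can lose at once.
      long bound = (long) (maxPerRoundFraction * used.get(q));
      result.put(q, Math.min(share, bound));
    }
    return result;
  }
}
{code}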

[jira] [Created] (YARN-854) App submission fails on secure deploy

2013-06-19 Thread Ramya Sunil (JIRA)
Ramya Sunil created YARN-854:


 Summary: App submission fails on secure deploy
 Key: YARN-854
 URL: https://issues.apache.org/jira/browse/YARN-854
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Ramya Sunil
 Fix For: 2.1.0-beta


App submission on secure cluster fails with the following exception:

{noformat}
INFO mapreduce.Job: Job jobID failed with state FAILED due to: Application 
applicationID failed 2 times due to AM Container for appattemptID exited with  
exitCode: -1000 due to: App initialization failed (255) with output: main : 
command provided 0
main : user is qa_user
javax.security.sasl.SaslException: DIGEST-MD5: digest response format 
violation. Mismatched response. [Caused by 
org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
DIGEST-MD5: digest response format violation. Mismatched response.]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at 
org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
at 
org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:104)
at 
org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:65)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.main(ContainerLocalizer.java:348)
Caused by: 
org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
DIGEST-MD5: digest response format violation. Mismatched response.
at org.apache.hadoop.ipc.Client.call(Client.java:1298)
at org.apache.hadoop.ipc.Client.call(Client.java:1250)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:204)
at $Proxy7.heartbeat(Unknown Source)
at 
org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
... 3 more

.Failing this attempt.. Failing the application.

{noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-854) App submission fails on secure deploy

2013-06-19 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-854:
-

Priority: Blocker  (was: Major)

 App submission fails on secure deploy
 -

 Key: YARN-854
 URL: https://issues.apache.org/jira/browse/YARN-854
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Ramya Sunil
Assignee: Omkar Vinit Joshi
Priority: Blocker
 Fix For: 2.1.0-beta


 App submission on secure cluster fails with the following exception:
 {noformat}
 INFO mapreduce.Job: Job jobID failed with state FAILED due to: Application 
 applicationID failed 2 times due to AM Container for appattemptID exited with 
  exitCode: -1000 due to: App initialization failed (255) with output: main : 
 command provided 0
 main : user is qa_user
 javax.security.sasl.SaslException: DIGEST-MD5: digest response format 
 violation. Mismatched response. [Caused by 
 org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
 DIGEST-MD5: digest response format violation. Mismatched response.]
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
   at 
 org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
   at 
 org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:104)
   at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:65)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.main(ContainerLocalizer.java:348)
 Caused by: 
 org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
 DIGEST-MD5: digest response format violation. Mismatched response.
   at org.apache.hadoop.ipc.Client.call(Client.java:1298)
   at org.apache.hadoop.ipc.Client.call(Client.java:1250)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:204)
   at $Proxy7.heartbeat(Unknown Source)
   at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
   ... 3 more
 .Failing this attempt.. Failing the application.
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (YARN-854) App submission fails on secure deploy

2013-06-19 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli reassigned YARN-854:


Assignee: Omkar Vinit Joshi

Omkar, can you please check? Tx!

 App submission fails on secure deploy
 -

 Key: YARN-854
 URL: https://issues.apache.org/jira/browse/YARN-854
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Ramya Sunil
Assignee: Omkar Vinit Joshi
 Fix For: 2.1.0-beta


 App submission on secure cluster fails with the following exception:
 {noformat}
 INFO mapreduce.Job: Job jobID failed with state FAILED due to: Application 
 applicationID failed 2 times due to AM Container for appattemptID exited with 
  exitCode: -1000 due to: App initialization failed (255) with output: main : 
 command provided 0
 main : user is qa_user
 javax.security.sasl.SaslException: DIGEST-MD5: digest response format 
 violation. Mismatched response. [Caused by 
 org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
 DIGEST-MD5: digest response format violation. Mismatched response.]
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
   at 
 org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
   at 
 org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:104)
   at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:65)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.main(ContainerLocalizer.java:348)
 Caused by: 
 org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
 DIGEST-MD5: digest response format violation. Mismatched response.
   at org.apache.hadoop.ipc.Client.call(Client.java:1298)
   at org.apache.hadoop.ipc.Client.call(Client.java:1250)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:204)
   at $Proxy7.heartbeat(Unknown Source)
   at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
   ... 3 more
 .Failing this attempt.. Failing the application.
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter

2013-06-19 Thread Hitesh Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688446#comment-13688446
 ] 

Hitesh Shah commented on YARN-727:
--

Comments:

ResourceMgrDelegate.java:
  - In getAllJobs(), why is this not matching against only the MAPREDUCE 
application type?
  - Why does ResourceMgrDelegate extend YarnClient? I don't believe it should. 
Could you file a jira for this?

yarn_service_protos.proto:
  - please look at the coding conventions followed in this file - it's 
lowercase with _ separators

ApplicationCLI.java:
  - code should use an Option instance for appTypes and not rely on parsing 
the string. Let the GnuParser do it.
  - Look at making use of Option.setValueSeparator and Option.getValues
  - The current code looks like it would let --list --foobar=type1,type2 work.

Does the webservice also handle multiple app types or does that need to be 
changed too?
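A sketch of the Option-based parsing being suggested, using commons-cli; the option names and surrounding wiring are examples only, not the ApplicationCLI code.

{code:java}
// Example of letting commons-cli split the comma-separated list instead of
// hand-parsing the string. Option names here are illustrative.
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.GnuParser;
import org.apache.commons.cli.Option;
import org.apache.commons.cli.Options;

public class AppTypesCliSketch {
  public static void main(String[] args) throws Exception {
    Options opts = new Options();
    opts.addOption("list", false, "List applications");

    Option appTypes = new Option("appTypes", true,
        "Comma-separated list of application types to filter by");
    appTypes.setValueSeparator(',');           // split type1,type2 for us
    appTypes.setArgs(Option.UNLIMITED_VALUES); // allow multiple values
    opts.addOption(appTypes);

    CommandLine cli = new GnuParser().parse(opts, args);
    if (cli.hasOption("list")) {
      String[] types = cli.getOptionValues("appTypes"); // null if not given
      // ... filter the application reports by 'types' here
    }
  }
}
{code}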
   




 ClientRMProtocol.getAllApplications should accept ApplicationType as a 
 parameter
 

 Key: YARN-727
 URL: https://issues.apache.org/jira/browse/YARN-727
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Siddharth Seth
Assignee: Xuan Gong
 Attachments: YARN-727.10.patch, YARN-727.11.patch, YARN-727.12.patch, 
 YARN-727.13.patch, YARN-727.14.patch, YARN-727.1.patch, YARN-727.2.patch, 
 YARN-727.3.patch, YARN-727.4.patch, YARN-727.5.patch, YARN-727.6.patch, 
 YARN-727.7.patch, YARN-727.8.patch, YARN-727.9.patch


 Now that an ApplicationType is registered on ApplicationSubmission, 
 getAllApplications should be able to use this string to query for a specific 
 application type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-569) CapacityScheduler: support for preemption (using a capacity monitor)

2013-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688467#comment-13688467
 ] 

Hadoop QA commented on YARN-569:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588685/YARN-569.9.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1352//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1352//console

This message is automatically generated.

 CapacityScheduler: support for preemption (using a capacity monitor)
 

 Key: YARN-569
 URL: https://issues.apache.org/jira/browse/YARN-569
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: capacityscheduler
Reporter: Carlo Curino
Assignee: Carlo Curino
 Attachments: 3queues.pdf, CapScheduler_with_preemption.pdf, 
 preemption.2.patch, YARN-569.1.patch, YARN-569.2.patch, YARN-569.3.patch, 
 YARN-569.4.patch, YARN-569.5.patch, YARN-569.6.patch, YARN-569.8.patch, 
 YARN-569.9.patch, YARN-569.patch, YARN-569.patch


 There is a tension between the fast-paced, reactive role of the 
 CapacityScheduler, which needs to respond quickly to applications' resource 
 requests and node updates, and the more introspective, time-based 
 considerations needed to observe and correct for capacity balance. To this 
 purpose, instead of hacking the delicate mechanisms of the CapacityScheduler 
 directly, we opted to add support for preemption by means of a Capacity 
 Monitor, which can optionally be run as a separate service (much like the 
 NMLivelinessMonitor).
 The capacity monitor (similar to equivalent functionality in the fair 
 scheduler) runs at intervals (e.g., every 3 seconds), observes the state of 
 the assignment of resources to queues in the capacity scheduler, performs an 
 off-line computation to determine whether preemption is needed and how best 
 to edit the current schedule to improve capacity, and generates events that 
 produce four possible actions:
 # Container de-reservations
 # Resource-based preemptions
 # Container-based preemptions
 # Container killing
 The actions listed above are progressively more costly, and it is up to the 
 policy to use them as desired to achieve the rebalancing goals. 
 Note that due to the lag in the effect of these actions, the policy should 
 operate at the macroscopic level (e.g., preempt tens of containers from a 
 queue) and not try to tightly and consistently micromanage container 
 allocations. 
 - Preemption policy  (ProportionalCapacityPreemptionPolicy): 
 - 
 Preemption policies are by design pluggable; in the following we present an 
 initial policy (ProportionalCapacityPreemptionPolicy) we have been 
 experimenting with. The ProportionalCapacityPreemptionPolicy behaves as 
 follows:
 # it gathers from the scheduler the state of the queues, in particular their 
 current capacity, guaranteed capacity and pending requests (*)
 # if there are pending requests from queues that are under capacity, it 
 computes a new ideal balanced state (**)
 # it computes the set of preemptions needed to repair the current schedule 
 and achieve capacity balance (accounting for natural completion rates, and 
 respecting bounds on the amount of preemption we allow per round)
 # it selects which applications to preempt from each over-capacity queue (the 
 last one in the FIFO order)
 # it removes reservations from the most recently assigned app until the 
 amount of resource to reclaim is obtained, or until no more reservations exist
 # (if not enough) it issues preemptions for 

[jira] [Created] (YARN-855) YarnClient.init should ensure that yarn parameters are present

2013-06-19 Thread Siddharth Seth (JIRA)
Siddharth Seth created YARN-855:
---

 Summary: YarnClient.init should ensure that yarn parameters are 
present
 Key: YARN-855
 URL: https://issues.apache.org/jira/browse/YARN-855
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.0.5-alpha
Reporter: Siddharth Seth


YarnClient.init currently accepts a Configuration object and doesn't check 
whether it contains YARN parameters or is a YarnConfiguration. It should either 
accept a YarnConfiguration, check for the existence of the parameters, or 
create a YarnConfiguration based on the configuration passed to it.
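One way to do the last of these is sketched below; this is only the wrap-on-init pattern and does not imply that YarnClient actually does this today.

{code:java}
// Sketch: wrap a plain Configuration into a YarnConfiguration during init so
// the YARN resources (yarn-default.xml / yarn-site.xml) are loaded.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

class YarnClientInitSketch {
  private Configuration conf;

  void init(Configuration configuration) {
    this.conf = (configuration instanceof YarnConfiguration)
        ? configuration
        : new YarnConfiguration(configuration);
  }
}
{code}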

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-791) Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API

2013-06-19 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688500#comment-13688500
 ] 

Sandy Ryza commented on YARN-791:
-

[~vinodkv], does Alejandro's suggestion seem acceptable to you?  If so, I will 
make those changes.

 Ensure that RM RPC APIs that return nodes are consistent with /nodes REST API
 -

 Key: YARN-791
 URL: https://issues.apache.org/jira/browse/YARN-791
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-791-1.patch, YARN-791-2.patch, YARN-791-3.patch, 
 YARN-791-4.patch, YARN-791.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-854) App submission fails on secure deploy

2013-06-19 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-854:
---

Attachment: YARN-854.20130619.patch

 App submission fails on secure deploy
 -

 Key: YARN-854
 URL: https://issues.apache.org/jira/browse/YARN-854
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Ramya Sunil
Assignee: Omkar Vinit Joshi
Priority: Blocker
 Fix For: 2.1.0-beta

 Attachments: YARN-854.20130619.patch


 App submission on secure cluster fails with the following exception:
 {noformat}
 INFO mapreduce.Job: Job jobID failed with state FAILED due to: Application 
 applicationID failed 2 times due to AM Container for appattemptID exited with 
  exitCode: -1000 due to: App initialization failed (255) with output: main : 
 command provided 0
 main : user is qa_user
 javax.security.sasl.SaslException: DIGEST-MD5: digest response format 
 violation. Mismatched response. [Caused by 
 org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
 DIGEST-MD5: digest response format violation. Mismatched response.]
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
   at 
 org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
   at 
 org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:104)
   at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:65)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.main(ContainerLocalizer.java:348)
 Caused by: 
 org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
 DIGEST-MD5: digest response format violation. Mismatched response.
   at org.apache.hadoop.ipc.Client.call(Client.java:1298)
   at org.apache.hadoop.ipc.Client.call(Client.java:1250)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:204)
   at $Proxy7.heartbeat(Unknown Source)
   at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
   ... 3 more
 .Failing this attempt.. Failing the application.
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-856) Clicking on 'Tracking UI History' link leads to server error 500

2013-06-19 Thread Kam Kasravi (JIRA)
Kam Kasravi created YARN-856:


 Summary: Clicking on 'Tracking UI History' link leads to server 
error 500
 Key: YARN-856
 URL: https://issues.apache.org/jira/browse/YARN-856
 Project: Hadoop YARN
  Issue Type: Bug
  Components: site
Affects Versions: 2.0.3-alpha
 Environment: browser - chrome 27.x
Reporter: Kam Kasravi


1. Browse to http://localhost:8088/cluster/apps
2. Select History link on the far right
3. See error page below

HTTP ERROR 500

Problem accessing /proxy/application_1371597044883_0001/. Reason:

java.net.URISyntaxException: Expected authority at index 7: http://
Caused by:

java.io.IOException: java.net.URISyntaxException: Expected authority at index 
7: http://
at 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet.doGet(WebAppProxyServlet.java:347)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
at 
com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:66)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:900)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:834)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:795)
at 
com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163)
at 
com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
at 
com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118)
at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1069)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at 
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at 
org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at 
org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at 
org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at 
org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at 
org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at 
org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at 
org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
at 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Caused by: java.net.URISyntaxException: Expected authority at index 7: http://
at java.net.URI$Parser.fail(URI.java:2810)
at java.net.URI$Parser.failExpecting(URI.java:2816)
at java.net.URI$Parser.parseHierarchical(URI.java:3064)
at java.net.URI$Parser.parse(URI.java:3015)
at java.net.URI.<init>(URI.java:577)
at 
org.apache.hadoop.yarn.server.webproxy.ProxyUriUtils.getUriFromAMUrl(ProxyUriUtils.java:146)
at 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet.doGet(WebAppProxyServlet.java:292)
... 36 more
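The URISyntaxException itself is easy to reproduce: java.net.URI rejects a string whose authority is empty, which is the shape you get when a blank tracking URL has the scheme prefixed to it. A tiny illustration (not the WebAppProxyServlet code):

{code:java}
// Minimal reproduction of the parse failure in the report above.
import java.net.URI;
import java.net.URISyntaxException;

public class EmptyAuthorityRepro {
  public static void main(String[] args) {
    try {
      new URI("http://" + "");  // same shape as proxying an empty AM URL
    } catch (URISyntaxException e) {
      // Prints: Expected authority at index 7: http://
      System.out.println(e.getMessage());
    }
  }
}
{code}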


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-857) Errors when localizing end up with the localization failure not being seen by the NM

2013-06-19 Thread Hitesh Shah (JIRA)
Hitesh Shah created YARN-857:


 Summary: Errors when localizing end up with the localization 
failure not being seen by the NM
 Key: YARN-857
 URL: https://issues.apache.org/jira/browse/YARN-857
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah


at 
org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.startLocalizer(DefaultContainerExecutor.java:106)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:978)


Traced this down to both DefaultExecutor's and LinuxExecutor's startLocalizer 
functions, both of which do not look at the exit code of the localizer.
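A hedged sketch of the kind of check being asked for; treating the localizer's return value as its exit code, and the exception type used, are assumptions of this sketch rather than the actual executor code.

{code:java}
// Sketch: propagate a non-zero localizer result instead of ignoring it, so
// the NM sees the localization failure. Names mirror the stack trace above
// but the types here are stand-ins.
class StartLocalizerSketch {

  void startLocalizer(ContainerLocalizerLike localizer, String localizerId)
      throws Exception {
    int exitCode = localizer.runLocalization();
    if (exitCode != 0) {
      // Fail loudly so the container gets a useful diagnostic rather than
      // failing later for an apparently unrelated reason.
      throw new Exception("Localizer " + localizerId
          + " failed with exit code " + exitCode);
    }
  }

  interface ContainerLocalizerLike {
    int runLocalization() throws Exception;
  }
}
{code}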


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-854) App submission fails on secure deploy

2013-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688586#comment-13688586
 ] 

Hadoop QA commented on YARN-854:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12588705/YARN-854.20130619.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  
org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerReboot
  
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.TestContainersMonitor
  
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch
  
org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown
  
org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager
  
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1353//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1353//console

This message is automatically generated.

 App submission fails on secure deploy
 -

 Key: YARN-854
 URL: https://issues.apache.org/jira/browse/YARN-854
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Ramya Sunil
Assignee: Omkar Vinit Joshi
Priority: Blocker
 Fix For: 2.1.0-beta

 Attachments: YARN-854.20130619.patch


 App submission on secure cluster fails with the following exception:
 {noformat}
 INFO mapreduce.Job: Job jobID failed with state FAILED due to: Application 
 applicationID failed 2 times due to AM Container for appattemptID exited with 
  exitCode: -1000 due to: App initialization failed (255) with output: main : 
 command provided 0
 main : user is qa_user
 javax.security.sasl.SaslException: DIGEST-MD5: digest response format 
 violation. Mismatched response. [Caused by 
 org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
 DIGEST-MD5: digest response format violation. Mismatched response.]
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
   at 
 org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
   at 
 org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:104)
   at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:65)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.main(ContainerLocalizer.java:348)
 Caused by: 
 org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
 DIGEST-MD5: digest response format violation. Mismatched response.
   at org.apache.hadoop.ipc.Client.call(Client.java:1298)
   at org.apache.hadoop.ipc.Client.call(Client.java:1250)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:204)
   at $Proxy7.heartbeat(Unknown Source)
 

[jira] [Commented] (YARN-854) App submission fails on secure deploy

2013-06-19 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688588#comment-13688588
 ] 

Omkar Vinit Joshi commented on YARN-854:


Fixing one more thing: making sure LocalizerTokenSecretManager is used 
irrespective of security.

 App submission fails on secure deploy
 -

 Key: YARN-854
 URL: https://issues.apache.org/jira/browse/YARN-854
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Ramya Sunil
Assignee: Omkar Vinit Joshi
Priority: Blocker
 Fix For: 2.1.0-beta

 Attachments: YARN-854.20130619.patch


 App submission on secure cluster fails with the following exception:
 {noformat}
 INFO mapreduce.Job: Job jobID failed with state FAILED due to: Application 
 applicationID failed 2 times due to AM Container for appattemptID exited with 
  exitCode: -1000 due to: App initialization failed (255) with output: main : 
 command provided 0
 main : user is qa_user
 javax.security.sasl.SaslException: DIGEST-MD5: digest response format 
 violation. Mismatched response. [Caused by 
 org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
 DIGEST-MD5: digest response format violation. Mismatched response.]
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
   at 
 org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
   at 
 org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:104)
   at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:65)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.main(ContainerLocalizer.java:348)
 Caused by: 
 org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
 DIGEST-MD5: digest response format violation. Mismatched response.
   at org.apache.hadoop.ipc.Client.call(Client.java:1298)
   at org.apache.hadoop.ipc.Client.call(Client.java:1250)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:204)
   at $Proxy7.heartbeat(Unknown Source)
   at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
   ... 3 more
 .Failing this attempt.. Failing the application.
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-857) Errors when localizing end up with the localization failure not being seen by the NM

2013-06-19 Thread Hitesh Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah updated YARN-857:
-

Description: 
at 
org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.startLocalizer(DefaultContainerExecutor.java:106)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:978)


Traced this down to DefaultExecutor, which does not look at the exit code for 
the localizer.


  was:
at 
org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.startLocalizer(DefaultContainerExecutor.java:106)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:978)


Traced this down to both DefaultExecutor's and LinuxExecutor's startLocalizer 
functions, both of which do not look at the exit code of the localizer.



 Errors when localizing end up with the localization failure not being seen by 
 the NM
 

 Key: YARN-857
 URL: https://issues.apache.org/jira/browse/YARN-857
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah

 at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
 at 
 org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.startLocalizer(DefaultContainerExecutor.java:106)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:978)
 Traced this down to DefaultExecutor, which does not look at the exit code for 
 the localizer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (YARN-857) Errors when localizing end up with the localization failure not being seen by the NM

2013-06-19 Thread Hitesh Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah reassigned YARN-857:


Assignee: Hitesh Shah

 Errors when localizing end up with the localization failure not being seen by 
 the NM
 

 Key: YARN-857
 URL: https://issues.apache.org/jira/browse/YARN-857
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah

 at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
 at 
 org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.startLocalizer(DefaultContainerExecutor.java:106)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:978)
 Traced this down to both DefaultExecutor's and LinuxExecutor's startLocalizer 
 functions, both of which do not look at the exit code of the localizer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-858) Move Resources and the caculators to a new package util.resource in yarn-common

2013-06-19 Thread Jian He (JIRA)
Jian He created YARN-858:


 Summary: Move Resources and the caculators to a new package 
util.resource in yarn-common 
 Key: YARN-858
 URL: https://issues.apache.org/jira/browse/YARN-858
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-858) Move Resources and the calculators to a new package util.resource in yarn-common

2013-06-19 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated YARN-858:


Summary: Move Resources and the calculators to a new package util.resource 
in yarn-common   (was: Move Resources and the caculators to a new package 
util.resource in yarn-common )

 Move Resources and the calculators to a new package util.resource in 
 yarn-common 
 -

 Key: YARN-858
 URL: https://issues.apache.org/jira/browse/YARN-858
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He

 For userland use

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-858) Move Resources and the caculators to a new package util.resource in yarn-common

2013-06-19 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-858:
-

Description: For userland use

 Move Resources and the caculators to a new package util.resource in 
 yarn-common 
 

 Key: YARN-858
 URL: https://issues.apache.org/jira/browse/YARN-858
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He

 For userland use

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-858) Move Resources and the calculators to a new package util.resource in yarn-common

2013-06-19 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-858:
-

Attachment: YARN-858.patch

Moved DefaultResourceCalculator, DominantResourceCalculator, 
ResourceCalculator, Resources to yarn-common

 Move Resources and the calculators to a new package util.resource in 
 yarn-common 
 -

 Key: YARN-858
 URL: https://issues.apache.org/jira/browse/YARN-858
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-858.patch


 For userland use

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-857) Errors when localizing end up with the localization failure not being seen by the NM

2013-06-19 Thread Hitesh Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah updated YARN-857:
-

Attachment: YARN-857.1.patch

Unfortunately, I could not come up with a unit test that exercises the exact 
bit where a -1 is returned by causing a throw in localizeFiles. Suggestions 
welcome. 

 Errors when localizing end up with the localization failure not being seen by 
 the NM
 

 Key: YARN-857
 URL: https://issues.apache.org/jira/browse/YARN-857
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah
 Attachments: YARN-857.1.patch


 at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
 at 
 org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.startLocalizer(DefaultContainerExecutor.java:106)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:978)
 Traced this down to DefaultExecutor, which does not look at the exit code for 
 the localizer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-858) Move Resources and the calculators to a new package util.resource in yarn-common

2013-06-19 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688664#comment-13688664
 ] 

Sandy Ryza commented on YARN-858:
-

The JIRA description says for userland use, but the classes are marked 
@Private.  Is this intentional?  As discussed in YARN-827, these APIs are not 
yet mature and should not yet be made public.

 Move Resources and the calculators to a new package util.resource in 
 yarn-common 
 -

 Key: YARN-858
 URL: https://issues.apache.org/jira/browse/YARN-858
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-858.patch


 For userland use

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-858) Move Resources and the calculators to a new package util.resource in yarn-common

2013-06-19 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688669#comment-13688669
 ] 

Jian He commented on YARN-858:
--

Looks like this is a duplicate of YARN-827.

 Move Resources and the calculators to a new package util.resource in 
 yarn-common 
 -

 Key: YARN-858
 URL: https://issues.apache.org/jira/browse/YARN-858
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-858.patch


 For userland use

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-858) Move Resources and the calculators to a new package util.resource in yarn-common

2013-06-19 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13688674#comment-13688674
 ] 

Jian He commented on YARN-858:
--

Closing this as a duplicate of YARN-827.

 Move Resources and the calculators to a new package util.resource in 
 yarn-common 
 -

 Key: YARN-858
 URL: https://issues.apache.org/jira/browse/YARN-858
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-858.patch


 For userland use

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (YARN-858) Move Resources and the calculators to a new package util.resource in yarn-common

2013-06-19 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He resolved YARN-858.
--

Resolution: Duplicate

 Move Resources and the calculators to a new package util.resource in 
 yarn-common 
 -

 Key: YARN-858
 URL: https://issues.apache.org/jira/browse/YARN-858
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-858.patch


 For userland use

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-857) Errors when localizing end up with the localization failure not being seen by the NM

2013-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13688678#comment-13688678
 ] 

Hadoop QA commented on YARN-857:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588726/YARN-857.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:

  
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch
  
org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerReboot
  
org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown
  
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.TestContainersMonitor
  
org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1355//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1355//console

This message is automatically generated.

 Errors when localizing end up with the localization failure not being seen by 
 the NM
 

 Key: YARN-857
 URL: https://issues.apache.org/jira/browse/YARN-857
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah
 Attachments: YARN-857.1.patch


 at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
 at 
 org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.startLocalizer(DefaultContainerExecutor.java:106)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:978)
 Traced this down to DefaultExecutor which does not look at the exit code for 
 the localizer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-858) Move Resources and the calculators to a new package util.resource in yarn-common

2013-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13688682#comment-13688682
 ] 

Hadoop QA commented on YARN-858:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588723/YARN-858.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 22 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler
  
org.apache.hadoop.yarn.server.resourcemanager.TestClientRMService
  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueParsing
  
org.apache.hadoop.yarn.server.resourcemanager.TestResourceManager
  
org.apache.hadoop.yarn.server.resourcemanager.TestApplicationCleanup
  org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart
  
org.apache.hadoop.yarn.server.resourcemanager.security.TestAMRMTokens
  
org.apache.hadoop.yarn.server.resourcemanager.TestApplicationACLs
  
org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService
  
org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRMRPCNodeUpdates
  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesNodes
  
org.apache.hadoop.yarn.server.resourcemanager.TestApplicationMasterLauncher
  org.apache.hadoop.yarn.server.resourcemanager.TestRM
  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue
  
org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRMRPCResponseId
  
org.apache.hadoop.yarn.server.resourcemanager.security.TestClientToAMTokens
  
org.apache.hadoop.yarn.server.resourcemanager.TestAMAuthorization
  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestNodesPage
  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebApp
  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.TestSchedulerUtils
  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1354//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/1354//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-common.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1354//console

This message is automatically generated.

 Move Resources and the calculators to a new package util.resource in 
 yarn-common 
 -

 Key: YARN-858
 URL: https://issues.apache.org/jira/browse/YARN-858
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He
 Attachments: YARN-858.patch


 For userland use

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-827) Need to make Resource arithmetic methods public

2013-06-19 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-827:
-

Attachment: YARN-827.patch

Uploading a patch to attract attention.

Moved DefaultResourceCalculator, DominantResourceCalculator, 
ResourceCalculator, and Resources to yarn-common.

 Need to make Resource arithmetic methods public
 ---

 Key: YARN-827
 URL: https://issues.apache.org/jira/browse/YARN-827
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Bikas Saha
Assignee: Zhijie Shen
Priority: Critical
 Attachments: YARN-827.patch


 org.apache.hadoop.yarn.server.resourcemanager.resource has stuff like 
 Resources and Calculators that help compare/add resources etc. Without these 
 users will be forced to replicate the logic, potentially incorrectly.
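To illustrate what would otherwise get replicated: the helpers in question are small
component-wise operations over (memory, vcores), which are easy to duplicate
inconsistently. A self-contained sketch of that shape, using illustrative types only
(not the actual Resources/ResourceCalculator classes):

{code:java}
/** Illustrative stand-in for a YARN Resource: memory in MB plus virtual cores. */
final class Res {
  final long memoryMb;
  final int vcores;

  Res(long memoryMb, int vcores) {
    this.memoryMb = memoryMb;
    this.vcores = vcores;
  }

  /** Component-wise addition, the kind of helper Resources.add provides. */
  static Res add(Res a, Res b) {
    return new Res(a.memoryMb + b.memoryMb, a.vcores + b.vcores);
  }

  /** True if every component of the request fits within what is available. */
  static boolean fitsIn(Res ask, Res available) {
    return ask.memoryMb <= available.memoryMb && ask.vcores <= available.vcores;
  }

  public static void main(String[] args) {
    Res used = add(new Res(1024, 1), new Res(2048, 2));
    System.out.println(fitsIn(used, new Res(8192, 8))); // true
  }
}
{code}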

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-727) ClientRMProtocol.getAllApplications should accept ApplicationType as a parameter

2013-06-19 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13688710#comment-13688710
 ] 

Xuan Gong commented on YARN-727:


bq. Does the webservice also handle multiple app types or does that need to be 
changed too?

[~hitesh]
I do not think the current webservice has a filter with an app_types parameter. 
These are the filters we use to get appInfo right now:
1. filter with parameter app_id
2. filter with parameters: state, finalStatus, user, queue, limit, 
startedTimeBegin, startedTimeEnd, finishedTimeBegin, finishedTimeEnd

We can add a new parameter named app_types to filter 2. Do you want me to fix 
it here or open another ticket?

 ClientRMProtocol.getAllApplications should accept ApplicationType as a 
 parameter
 

 Key: YARN-727
 URL: https://issues.apache.org/jira/browse/YARN-727
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.1.0-beta
Reporter: Siddharth Seth
Assignee: Xuan Gong
 Attachments: YARN-727.10.patch, YARN-727.11.patch, YARN-727.12.patch, 
 YARN-727.13.patch, YARN-727.14.patch, YARN-727.1.patch, YARN-727.2.patch, 
 YARN-727.3.patch, YARN-727.4.patch, YARN-727.5.patch, YARN-727.6.patch, 
 YARN-727.7.patch, YARN-727.8.patch, YARN-727.9.patch


 Now that an ApplicationType is registered on ApplicationSubmission, 
 getAllApplications should be able to use this string to query for a specific 
 application type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-857) Errors when localizing end up with the localization failure not being seen by the NM

2013-06-19 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13688717#comment-13688717
 ] 

Vinod Kumar Vavilapalli commented on YARN-857:
--

I just ran these tests locally and they are already failing. The failing tests 
should pass after the YARN-854 changes.

 Errors when localizing end up with the localization failure not being seen by 
 the NM
 

 Key: YARN-857
 URL: https://issues.apache.org/jira/browse/YARN-857
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah
 Attachments: YARN-857.1.patch


 at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
 at 
 org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.startLocalizer(DefaultContainerExecutor.java:106)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:978)
 Traced this down to DefaultExecutor which does not look at the exit code for 
 the localizer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-857) Errors when localizing end up with the localization failure not being seen by the NM

2013-06-19 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13688723#comment-13688723
 ] 

Vinod Kumar Vavilapalli commented on YARN-857:
--

I applied the patch locally too and can confirm that tests like 
TestContainerManager are now failing instead of timing out.

 Errors when localizing end up with the localization failure not being seen by 
 the NM
 

 Key: YARN-857
 URL: https://issues.apache.org/jira/browse/YARN-857
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah
 Attachments: YARN-857.1.patch


 at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
 at 
 org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.startLocalizer(DefaultContainerExecutor.java:106)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:978)
 Traced this down to DefaultExecutor which does not look at the exit code for 
 the localizer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-857) Errors when localizing end up with the localization failure not being seen by the NM

2013-06-19 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-857:
-

Attachment: YARN-857.2.patch

Here's a patch with a test case that fails (by timeout) without the code 
changes and passes with them.

 Errors when localizing end up with the localization failure not being seen by 
 the NM
 

 Key: YARN-857
 URL: https://issues.apache.org/jira/browse/YARN-857
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah
 Attachments: YARN-857.1.patch, YARN-857.2.patch


 at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
 at 
 org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.startLocalizer(DefaultContainerExecutor.java:106)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:978)
 Traced this down to DefaultExecutor which does not look at the exit code for 
 the localizer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-854) App submission fails on secure deploy

2013-06-19 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-854:
---

Attachment: YARN-854.20130619.1.patch

 App submission fails on secure deploy
 -

 Key: YARN-854
 URL: https://issues.apache.org/jira/browse/YARN-854
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Ramya Sunil
Assignee: Omkar Vinit Joshi
Priority: Blocker
 Fix For: 2.1.0-beta

 Attachments: YARN-854.20130619.1.patch, YARN-854.20130619.patch


 App submission on secure cluster fails with the following exception:
 {noformat}
 INFO mapreduce.Job: Job jobID failed with state FAILED due to: Application 
 applicationID failed 2 times due to AM Container for appattemptID exited with 
  exitCode: -1000 due to: App initialization failed (255) with output: main : 
 command provided 0
 main : user is qa_user
 javax.security.sasl.SaslException: DIGEST-MD5: digest response format 
 violation. Mismatched response. [Caused by 
 org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
 DIGEST-MD5: digest response format violation. Mismatched response.]
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
   at 
 org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
   at 
 org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:104)
   at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:65)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.main(ContainerLocalizer.java:348)
 Caused by: 
 org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
 DIGEST-MD5: digest response format violation. Mismatched response.
   at org.apache.hadoop.ipc.Client.call(Client.java:1298)
   at org.apache.hadoop.ipc.Client.call(Client.java:1250)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:204)
   at $Proxy7.heartbeat(Unknown Source)
   at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
   ... 3 more
 .Failing this attempt.. Failing the application.
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-857) Errors when localizing end up with the localization failure not being seen by the NM

2013-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13688785#comment-13688785
 ] 

Hadoop QA commented on YARN-857:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12588734/YARN-857.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:

  
org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerReboot
  
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.TestContainersMonitor
  
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch
  
org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown
  
org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1356//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1356//console

This message is automatically generated.

 Errors when localizing end up with the localization failure not being seen by 
 the NM
 

 Key: YARN-857
 URL: https://issues.apache.org/jira/browse/YARN-857
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah
 Attachments: YARN-857.1.patch, YARN-857.2.patch


 at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
 at 
 org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.startLocalizer(DefaultContainerExecutor.java:106)
 at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:978)
 Traced this down to DefaultExecutor which does not look at the exit code for 
 the localizer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-854) App submission fails on secure deploy

2013-06-19 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-854:
---

Attachment: YARN-854.20130619.2.patch

 App submission fails on secure deploy
 -

 Key: YARN-854
 URL: https://issues.apache.org/jira/browse/YARN-854
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Ramya Sunil
Assignee: Omkar Vinit Joshi
Priority: Blocker
 Fix For: 2.1.0-beta

 Attachments: YARN-854.20130619.1.patch, YARN-854.20130619.2.patch, 
 YARN-854.20130619.patch


 App submission on secure cluster fails with the following exception:
 {noformat}
 INFO mapreduce.Job: Job jobID failed with state FAILED due to: Application 
 applicationID failed 2 times due to AM Container for appattemptID exited with 
  exitCode: -1000 due to: App initialization failed (255) with output: main : 
 command provided 0
 main : user is qa_user
 javax.security.sasl.SaslException: DIGEST-MD5: digest response format 
 violation. Mismatched response. [Caused by 
 org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
 DIGEST-MD5: digest response format violation. Mismatched response.]
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
   at 
 org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
   at 
 org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:104)
   at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:65)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.main(ContainerLocalizer.java:348)
 Caused by: 
 org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
 DIGEST-MD5: digest response format violation. Mismatched response.
   at org.apache.hadoop.ipc.Client.call(Client.java:1298)
   at org.apache.hadoop.ipc.Client.call(Client.java:1250)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:204)
   at $Proxy7.heartbeat(Unknown Source)
   at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
   ... 3 more
 .Failing this attempt.. Failing the application.
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-851) Share NMTokens using NMTokenCache (api-based) instead of memory based approach which is used currently.

2013-06-19 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-851:
---

Attachment: YARN-851-20130619.patch

 Share NMTokens using NMTokenCache (api-based) instead of memory based 
 approach which is used currently.
 ---

 Key: YARN-851
 URL: https://issues.apache.org/jira/browse/YARN-851
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-851-20130618.patch, YARN-851-20130619.patch


 It is a follow up ticket for YARN-694. Changing the way NMTokens are shared.
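For readers following along, the idea is to share NM tokens through a small cache
keyed by NodeManager address, rather than each client holding them in its own ad-hoc
in-memory structures. A rough, self-contained sketch of that shape (illustrative only;
the class and method names here are not the actual NMTokenCache API):

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/** Illustrative sketch of a shared NM-token cache keyed by node address. */
public final class NmTokenCacheSketch {
  private static final ConcurrentMap<String, String> TOKENS =
      new ConcurrentHashMap<String, String>();

  /** Record the token received for a node, e.g. out of an allocate response. */
  public static void setToken(String nodeAddr, String token) {
    TOKENS.put(nodeAddr, token);
  }

  /** Look up the node's token before talking to that NodeManager. */
  public static String getToken(String nodeAddr) {
    return TOKENS.get(nodeAddr);
  }

  private NmTokenCacheSketch() {}
}
{code}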

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-851) Share NMTokens using NMTokenCache (api-based) instead of memory based approach which is used currently.

2013-06-19 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13688812#comment-13688812
 ] 

Omkar Vinit Joshi commented on YARN-851:


Fixed the above comments. Removed getAllNMTokens().

 Share NMTokens using NMTokenCache (api-based) instead of memory based 
 approach which is used currently.
 ---

 Key: YARN-851
 URL: https://issues.apache.org/jira/browse/YARN-851
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-851-20130618.patch, YARN-851-20130619.patch


 It is a follow up ticket for YARN-694. Changing the way NMTokens are shared.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-851) Share NMTokens using NMTokenCache (api-based) instead of memory based approach which is used currently.

2013-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13688833#comment-13688833
 ] 

Hadoop QA commented on YARN-851:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12588749/YARN-851-20130619.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client:

  
org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1360//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1360//console

This message is automatically generated.

 Share NMTokens using NMTokenCache (api-based) instead of memory based 
 approach which is used currently.
 ---

 Key: YARN-851
 URL: https://issues.apache.org/jira/browse/YARN-851
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Omkar Vinit Joshi
Assignee: Omkar Vinit Joshi
 Attachments: YARN-851-20130618.patch, YARN-851-20130619.patch


 It is a follow up ticket for YARN-694. Changing the way NMTokens are shared.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-854) App submission fails on secure deploy

2013-06-19 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13688834#comment-13688834
 ] 

Vinod Kumar Vavilapalli commented on YARN-854:
--

Looks good to me. +1. The TestContainerManager failure is happening irrespective 
of this patch - it is caused by YARN-848.

 App submission fails on secure deploy
 -

 Key: YARN-854
 URL: https://issues.apache.org/jira/browse/YARN-854
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Ramya Sunil
Assignee: Omkar Vinit Joshi
Priority: Blocker
 Fix For: 2.1.0-beta

 Attachments: YARN-854.20130619.1.patch, YARN-854.20130619.2.patch, 
 YARN-854.20130619.patch


 App submission on secure cluster fails with the following exception:
 {noformat}
 INFO mapreduce.Job: Job jobID failed with state FAILED due to: Application 
 applicationID failed 2 times due to AM Container for appattemptID exited with 
  exitCode: -1000 due to: App initialization failed (255) with output: main : 
 command provided 0
 main : user is qa_user
 javax.security.sasl.SaslException: DIGEST-MD5: digest response format 
 violation. Mismatched response. [Caused by 
 org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
 DIGEST-MD5: digest response format violation. Mismatched response.]
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
   at 
 org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
   at 
 org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:104)
   at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:65)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.main(ContainerLocalizer.java:348)
 Caused by: 
 org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
 DIGEST-MD5: digest response format violation. Mismatched response.
   at org.apache.hadoop.ipc.Client.call(Client.java:1298)
   at org.apache.hadoop.ipc.Client.call(Client.java:1250)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:204)
   at $Proxy7.heartbeat(Unknown Source)
   at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
   ... 3 more
 .Failing this attempt.. Failing the application.
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-854) App submission fails on secure deploy

2013-06-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13688835#comment-13688835
 ] 

Hadoop QA commented on YARN-854:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12588747/YARN-854.20130619.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  
org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1359//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1359//console

This message is automatically generated.

 App submission fails on secure deploy
 -

 Key: YARN-854
 URL: https://issues.apache.org/jira/browse/YARN-854
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Ramya Sunil
Assignee: Omkar Vinit Joshi
Priority: Blocker
 Fix For: 2.1.0-beta

 Attachments: YARN-854.20130619.1.patch, YARN-854.20130619.2.patch, 
 YARN-854.20130619.patch


 App submission on secure cluster fails with the following exception:
 {noformat}
 INFO mapreduce.Job: Job jobID failed with state FAILED due to: Application 
 applicationID failed 2 times due to AM Container for appattemptID exited with 
  exitCode: -1000 due to: App initialization failed (255) with output: main : 
 command provided 0
 main : user is qa_user
 javax.security.sasl.SaslException: DIGEST-MD5: digest response format 
 violation. Mismatched response. [Caused by 
 org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
 DIGEST-MD5: digest response format violation. Mismatched response.]
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
   at 
 org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
   at 
 org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:104)
   at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:65)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.main(ContainerLocalizer.java:348)
 Caused by: 
 org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
 DIGEST-MD5: digest response format violation. Mismatched response.
   at org.apache.hadoop.ipc.Client.call(Client.java:1298)
   at org.apache.hadoop.ipc.Client.call(Client.java:1250)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:204)
   at $Proxy7.heartbeat(Unknown Source)
   at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
   ... 3 more
 .Failing this attempt.. Failing the application.
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-854) App submission fails on secure deploy

2013-06-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13688844#comment-13688844
 ] 

Hudson commented on YARN-854:
-

Integrated in Hadoop-trunk-Commit #3987 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3987/])
YARN-854. Fixing YARN bugs that are failing applications in secure 
environment. Contributed by Omkar Vinit Joshi. (Revision 1494845)

 Result = SUCCESS
vinodkv : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1494845
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/RegisterApplicationMasterResponsePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ContainerLocalizer.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAMAuthorization.java


 App submission fails on secure deploy
 -

 Key: YARN-854
 URL: https://issues.apache.org/jira/browse/YARN-854
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Ramya Sunil
Assignee: Omkar Vinit Joshi
Priority: Blocker
 Fix For: 2.1.0-beta

 Attachments: YARN-854.20130619.1.patch, YARN-854.20130619.2.patch, 
 YARN-854.20130619.patch


 App submission on secure cluster fails with the following exception:
 {noformat}
 INFO mapreduce.Job: Job jobID failed with state FAILED due to: Application 
 applicationID failed 2 times due to AM Container for appattemptID exited with 
  exitCode: -1000 due to: App initialization failed (255) with output: main : 
 command provided 0
 main : user is qa_user
 javax.security.sasl.SaslException: DIGEST-MD5: digest response format 
 violation. Mismatched response. [Caused by 
 org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
 DIGEST-MD5: digest response format violation. Mismatched response.]
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
   at 
 org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
   at 
 org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:104)
   at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:65)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.main(ContainerLocalizer.java:348)
 Caused by: 
 org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): 
 DIGEST-MD5: digest response format violation. Mismatched response.
   at org.apache.hadoop.ipc.Client.call(Client.java:1298)
   at org.apache.hadoop.ipc.Client.call(Client.java:1250)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:204)
   at $Proxy7.heartbeat(Unknown Source)
   at 
 org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
   ... 3 more
 .Failing this attempt.. Failing the application.
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-859) Applications Per User is giving ambiguous values in scheduler UI

2013-06-19 Thread Nishan Shetty (JIRA)
Nishan Shetty created YARN-859:
--

 Summary: Applications Per User is giving ambiguous values in 
scheduler UI 
 Key: YARN-859
 URL: https://issues.apache.org/jira/browse/YARN-859
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler
Affects Versions: 2.0.1-alpha
Reporter: Nishan Shetty
Priority: Minor


1. Configure yarn.scheduler.capacity.root.default.user-limit-factor as 2
2. Observe that the Applications Per User values in the scheduler UI are ambiguous

Max applications per user cannot be more than that of the cluster; the sketch 
below the values reproduces the arithmetic.

Max Applications:  1000
Max Applications Per User:  2000
Max Active Applications:  5
Max Active Applications Per User:  10 
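The per-user figure above appears to come straight from scaling the queue maximum by
the user-limit-factor, with nothing clamping it back to the queue maximum. A small
sketch of that arithmetic (hedged: the exact CapacityScheduler formula may differ;
this only reproduces the numbers reported above):

{code:java}
public class PerUserLimitSketch {
  public static void main(String[] args) {
    int maxApplications = 1000;      // queue maximum from the report above
    double userLimitPercent = 100.0; // assumed default minimum user limit percent
    double userLimitFactor = 2.0;    // yarn.scheduler.capacity.root.default.user-limit-factor

    // No step here caps the per-user value at maxApplications.
    int maxApplicationsPerUser = (int) Math.ceil(
        maxApplications * (userLimitPercent / 100.0) * userLimitFactor);

    System.out.println(maxApplicationsPerUser); // 2000, above the queue-wide 1000
  }
}
{code}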


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira