[jira] [Commented] (YARN-1081) Minor improvement to output header for $ yarn node -list

2013-08-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752139#comment-13752139
 ] 

Hudson commented on YARN-1081:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4335 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4335/])
YARN-1081. Made a trivial change to YARN node CLI header to avoid potential 
confusion. Contributed by Akira AJISAKA. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518080)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/NodeCLI.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java


 Minor improvement to output header for $ yarn node -list
 

 Key: YARN-1081
 URL: https://issues.apache.org/jira/browse/YARN-1081
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tassapol Athiapinya
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Fix For: 2.1.1-beta

 Attachments: YARN-1081.2.patch, YARN-1081.patch


 Output of $ yarn node -list shows the number of running containers at each node. 
 I found a case where a new user of YARN thought this was a container ID, used it 
 later in other YARN commands, and got an error due to the misunderstanding.
 {code:title=current output}
 2013-07-31 04:00:37,814|beaver.machine|INFO|RUNNING: /usr/bin/yarn node -list
 2013-07-31 04:00:38,746|beaver.machine|INFO|Total Nodes:1
 2013-07-31 04:00:38,747|beaver.machine|INFO|Node-Id   Node-State  
 Node-Http-Address   Running-Containers
 2013-07-31 04:00:38,747|beaver.machine|INFO|myhost:45454 RUNNING  
 myhost:50060   2
 {code}
 {code:title=proposed output}
 2013-07-31 04:00:37,814|beaver.machine|INFO|RUNNING: /usr/bin/yarn node -list
 2013-07-31 04:00:38,746|beaver.machine|INFO|Total Nodes:1
 2013-07-31 04:00:38,747|beaver.machine|INFO|Node-Id   Node-State  
 Node-Http-Address   Number-of-Running-Containers
 2013-07-31 04:00:38,747|beaver.machine|INFO|myhost:45454 RUNNING  
 myhost:50060   2
 {code}
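
 A minimal sketch of the change involved, assuming the header is produced by a 
 single format string in NodeCLI.java (the format widths here are illustrative, 
 not the actual values):
 {code:title=illustrative sketch}
 // Before: the last column reads "Running-Containers", which a new user
 // can mistake for a container ID.
 String header = String.format("%16s\t%16s\t%18s\t%18s",
     "Node-Id", "Node-State", "Node-Http-Address", "Running-Containers");
 // After: the column name states explicitly that the value is a count.
 header = String.format("%16s\t%16s\t%18s\t%28s",
     "Node-Id", "Node-State", "Node-Http-Address",
     "Number-of-Running-Containers");
 {code}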

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1001) YARN should provide per application-type and state statistics

2013-08-28 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen updated YARN-1001:
--

Attachment: YARN-1001.1.patch

I created a patch which adds a new RESTful API:

{code}
http://<rm http address:port>/ws/v1/cluster/apps/count?states=state1,state2&types=type1,type2
{code}

Below is an example response in JSON format:

{code}
{
  "appsCount":
  {
    "countItems":
    [
      {"state":"accepted","type":"other","count":1},
      {"state":"accepted","type":"mapreduce","count":1},
      {"state":"finished","type":"other","count":0},
      {"state":"finished","type":"mapreduce","count":1}
    ]
  }
}
{code}

Three aspects need to be clarified:

* Combination buckets with 0 apps are listed as well.
* State and type matching is case-insensitive.
* Two forms of params are allowed for both types and states: 
states=state1&states=state2,state3&types=type1&types=type2,type3
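
For illustration, a minimal client sketch against the proposed endpoint (the 
host, port, and printing of the raw JSON are assumptions made for the example; 
the patch defines the actual resource):

{code:title=illustrative client sketch}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class AppsCountClient {
  public static void main(String[] args) throws Exception {
    // Assumed RM web address; the filters follow the forms described above.
    URL url = new URL("http://rmhost:8088/ws/v1/cluster/apps/count"
        + "?states=accepted,finished&types=mapreduce,other");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestProperty("Accept", "application/json");
    BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), "UTF-8"));
    try {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line); // raw JSON: appsCount.countItems[...]
      }
    } finally {
      in.close();
      conn.disconnect();
    }
  }
}
{code}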

 YARN should provide per application-type and state statistics
 -

 Key: YARN-1001
 URL: https://issues.apache.org/jira/browse/YARN-1001
 Project: Hadoop YARN
  Issue Type: Task
  Components: api
Affects Versions: 2.1.0-beta
Reporter: Srimanth Gunturi
Assignee: Zhijie Shen
 Attachments: YARN-1001.1.patch


 In Ambari we plan to show for MR2 the number of applications finished, 
 running, waiting, etc. It would be efficient if YARN could provide per 
 application-type and state aggregated counts.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1001) YARN should provide per application-type and state statistics

2013-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752203#comment-13752203
 ] 

Hadoop QA commented on YARN-1001:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600347/YARN-1001.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1780//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1780//console

This message is automatically generated.

 YARN should provide per application-type and state statistics
 -

 Key: YARN-1001
 URL: https://issues.apache.org/jira/browse/YARN-1001
 Project: Hadoop YARN
  Issue Type: Task
  Components: api
Affects Versions: 2.1.0-beta
Reporter: Srimanth Gunturi
Assignee: Zhijie Shen
 Attachments: YARN-1001.1.patch


 In Ambari we plan to show for MR2 the number of applications finished, 
 running, waiting, etc. It would be efficient if YARN could provide per 
 application-type and state aggregated counts.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Moved] (YARN-1112) MR AppMaster command options do not replace @taskid@ with the current task ID.

2013-08-28 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S moved MAPREDUCE-5460 to YARN-1112:


  Component/s: (was: applicationmaster)
   (was: mrv2)
 Assignee: (was: Rohith Sharma K S)
 Target Version/s:   (was: 3.0.0, 2.1.1-beta)
Affects Version/s: (was: 2.1.1-beta)
   (was: 3.0.0)
   2.1.1-beta
   3.0.0
  Key: YARN-1112  (was: MAPREDUCE-5460)
  Project: Hadoop YARN  (was: Hadoop Map/Reduce)

 MR AppMaster command options do not replace @taskid@ with the current task 
 ID.
 

 Key: YARN-1112
 URL: https://issues.apache.org/jira/browse/YARN-1112
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.1-beta
Reporter: Chris Nauroth

 The description of {{yarn.app.mapreduce.am.command-opts}} in 
 mapred-default.xml states that occurrences of {{@taskid@}} will be replaced 
 by the current task ID.  This substitution is not happening.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1112) MR AppMaster command options do not replace @taskid@ with the current task ID.

2013-08-28 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-1112:


Attachment: YARN-1112.patch

Attaching a patch for the replacement of @appid@ in am.command-opts; @appid@ 
is replaced with the application attempt ID.
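
A sketch of the substitution described above; the helper and its call site are 
hypothetical, shown only to make the intended behavior concrete:

{code:title=illustrative sketch}
// Hypothetical helper: expand the @taskid@ placeholder in the configured
// AM command options while building the launch context. Without the fix,
// the literal token "@taskid@" leaks into the JVM arguments.
static String expandAmOpts(String amOpts, String attemptId) {
  return amOpts.replace("@taskid@", attemptId);
}
{code}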

 MR AppMaster command options do not replace @taskid@ with the current task 
 ID.
 

 Key: YARN-1112
 URL: https://issues.apache.org/jira/browse/YARN-1112
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.1-beta
Reporter: Chris Nauroth
 Attachments: YARN-1112.patch


 The description of {{yarn.app.mapreduce.am.command-opts}} in 
 mapred-default.xml states that occurrences of {{@taskid@}} will be replaced 
 by the current task ID.  This substitution is not happening.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-981) YARN/MR2/Job-history /logs link does not have correct content

2013-08-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752294#comment-13752294
 ] 

Hudson commented on YARN-981:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #315 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/315/])
YARN-981. Fixed YARN webapp so that /logs servlet works like before. 
Contributed by Jian He. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518030)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApps.java


 YARN/MR2/Job-history /logs link does not have correct content
 -

 Key: YARN-981
 URL: https://issues.apache.org/jira/browse/YARN-981
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Jian He
 Fix For: 2.1.1-beta

 Attachments: YARN-981.1.patch, YARN-981.2.patch, YARN-981.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1081) Minor improvement to output header for $ yarn node -list

2013-08-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752290#comment-13752290
 ] 

Hudson commented on YARN-1081:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #315 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/315/])
YARN-1081. Made a trivial change to YARN node CLI header to avoid potential 
confusion. Contributed by Akira AJISAKA. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518080)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/NodeCLI.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java


 Minor improvement to output header for $ yarn node -list
 

 Key: YARN-1081
 URL: https://issues.apache.org/jira/browse/YARN-1081
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tassapol Athiapinya
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Fix For: 2.1.1-beta

 Attachments: YARN-1081.2.patch, YARN-1081.patch


 Output of $ yarn node -list shows the number of running containers at each node. 
 I found a case where a new user of YARN thought this was a container ID, used it 
 later in other YARN commands, and got an error due to the misunderstanding.
 {code:title=current output}
 2013-07-31 04:00:37,814|beaver.machine|INFO|RUNNING: /usr/bin/yarn node -list
 2013-07-31 04:00:38,746|beaver.machine|INFO|Total Nodes:1
 2013-07-31 04:00:38,747|beaver.machine|INFO|Node-Id   Node-State  
 Node-Http-Address   Running-Containers
 2013-07-31 04:00:38,747|beaver.machine|INFO|myhost:45454 RUNNING  
 myhost:50060   2
 {code}
 {code:title=proposed output}
 2013-07-31 04:00:37,814|beaver.machine|INFO|RUNNING: /usr/bin/yarn node -list
 2013-07-31 04:00:38,746|beaver.machine|INFO|Total Nodes:1
 2013-07-31 04:00:38,747|beaver.machine|INFO|Node-Id   Node-State  
 Node-Http-Address   Number-of-Running-Containers
 2013-07-31 04:00:38,747|beaver.machine|INFO|myhost:45454 RUNNING  
 myhost:50060   2
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-602) NodeManager should mandatorily set some Environment variables into every container that it launches

2013-08-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752293#comment-13752293
 ] 

Hudson commented on YARN-602:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #315 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/315/])
YARN-602. Fixed NodeManager to not let users override some mandatory 
environmental variables. Contributed by Kenji Kikushima. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518077)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java


 NodeManager should mandatorily set some Environment variables into every 
 container that it launches
 

 Key: YARN-602
 URL: https://issues.apache.org/jira/browse/YARN-602
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: Xuan Gong
Assignee: Kenji Kikushima
 Fix For: 2.1.1-beta

 Attachments: YARN-602-2.patch, YARN-602-3.patch, YARN-602.patch


 NodeManager should mandatorily set some Environment variables into every 
 container that it launches, such as Environment.user, Environment.pwd. If 
 both the user and the NodeManager set those variables, the value set by the 
 NM should be used.
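
 A minimal sketch of the intended behavior, assuming a launch path that merges 
 user-supplied and NM-supplied variables (the class and variable names are 
 hypothetical):
 {code:title=illustrative sketch}
 import java.util.HashMap;
 import java.util.Map;
 import org.apache.hadoop.yarn.api.ApplicationConstants;

 final class MandatoryEnv {
   // NM-mandated variables are written after the user's, so on conflict
   // the NodeManager's values win.
   static Map<String, String> merge(Map<String, String> userEnv,
       String user, String workDir) {
     Map<String, String> env = new HashMap<String, String>(userEnv);
     env.put(ApplicationConstants.Environment.USER.name(), user);
     env.put(ApplicationConstants.Environment.PWD.name(), workDir);
     return env;
   }
 }
 {code}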

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1083) ResourceManager should fail when yarn.nm.liveness-monitor.expiry-interval-ms is set less than heartbeat interval

2013-08-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752289#comment-13752289
 ] 

Hudson commented on YARN-1083:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #315 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/315/])
YARN-1083. Changed ResourceManager to fail when the expiry interval is less 
than the configured node-heartbeat interval. Contributed by Zhijie Shen. 
(vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518036)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceManager.java


 ResourceManager should fail when yarn.nm.liveness-monitor.expiry-interval-ms 
 is set less than heartbeat interval
 

 Key: YARN-1083
 URL: https://issues.apache.org/jira/browse/YARN-1083
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.1.0-beta
Reporter: yeshavora
Assignee: Zhijie Shen
  Labels: newbie
 Fix For: 2.1.1-beta

 Attachments: YARN-1083.1.patch, YARN-1083.2.patch


 If 'yarn.nm.liveness-monitor.expiry-interval-ms' is set to less than the 
 heartbeat interval, all the node managers will be added to 'Lost Nodes'. 
 Instead, the ResourceManager should validate these properties and fail to 
 start if the combination is invalid.
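
 A sketch of the kind of start-up validation being asked for; the 
 YarnConfiguration constants are named from memory, so treat the exact keys 
 and the surrounding method as assumptions:
 {code:title=illustrative sketch}
 // In ResourceManager initialization: refuse to start with an
 // unsatisfiable liveness configuration instead of losing every node.
 long expireMs = conf.getLong(YarnConfiguration.RM_NM_EXPIRY_INTERVAL_MS,
     YarnConfiguration.DEFAULT_RM_NM_EXPIRY_INTERVAL_MS);
 long heartbeatMs = conf.getLong(
     YarnConfiguration.RM_NM_HEARTBEAT_INTERVAL_MS,
     YarnConfiguration.DEFAULT_RM_NM_HEARTBEAT_INTERVAL_MS);
 if (expireMs <= heartbeatMs) {
   throw new YarnRuntimeException("NodeManager expiry interval (" + expireMs
       + " ms) must be greater than the heartbeat interval ("
       + heartbeatMs + " ms)");
 }
 {code}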

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-1113) Job failing when one of the NM local dirs got filled

2013-08-28 Thread Nishan Shetty (JIRA)
Nishan Shetty created YARN-1113:
---

 Summary: Job failing when one of the NM local dirs got filled
 Key: YARN-1113
 URL: https://issues.apache.org/jira/browse/YARN-1113
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: Nishan Shetty


1. In the NodeManager, only one disk is configured for the NM local dir
2. Make that disk full
3. Run a job

Problems
- Tasks assigned to the NM with the filled disk wait for the container expiry 
time (10 min)
- After the expiry time those containers are killed and a new task attempt is 
spawned
- All the other task attempts get assigned to the same node and fail; after 4 
attempt failures the job in turn fails

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1081) Minor improvement to output header for $ yarn node -list

2013-08-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752385#comment-13752385
 ] 

Hudson commented on YARN-1081:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1505 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1505/])
YARN-1081. Made a trivial change to YARN node CLI header to avoid potential 
confusion. Contributed by Akira AJISAKA. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518080)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/NodeCLI.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java


 Minor improvement to output header for $ yarn node -list
 

 Key: YARN-1081
 URL: https://issues.apache.org/jira/browse/YARN-1081
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tassapol Athiapinya
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Fix For: 2.1.1-beta

 Attachments: YARN-1081.2.patch, YARN-1081.patch


 Output of $ yarn node -list shows the number of running containers at each node. 
 I found a case where a new user of YARN thought this was a container ID, used it 
 later in other YARN commands, and got an error due to the misunderstanding.
 {code:title=current output}
 2013-07-31 04:00:37,814|beaver.machine|INFO|RUNNING: /usr/bin/yarn node -list
 2013-07-31 04:00:38,746|beaver.machine|INFO|Total Nodes:1
 2013-07-31 04:00:38,747|beaver.machine|INFO|Node-Id   Node-State  
 Node-Http-Address   Running-Containers
 2013-07-31 04:00:38,747|beaver.machine|INFO|myhost:45454 RUNNING  
 myhost:50060   2
 {code}
 {code:title=proposed output}
 2013-07-31 04:00:37,814|beaver.machine|INFO|RUNNING: /usr/bin/yarn node -list
 2013-07-31 04:00:38,746|beaver.machine|INFO|Total Nodes:1
 2013-07-31 04:00:38,747|beaver.machine|INFO|Node-Id   Node-State  
 Node-Http-Address   Number-of-Running-Containers
 2013-07-31 04:00:38,747|beaver.machine|INFO|myhost:45454 RUNNING  
 myhost:50060   2
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-981) YARN/MR2/Job-history /logs link does not have correct content

2013-08-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752389#comment-13752389
 ] 

Hudson commented on YARN-981:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1505 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1505/])
YARN-981. Fixed YARN webapp so that /logs servlet works like before. 
Contributed by Jian He. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518030)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApps.java


 YARN/MR2/Job-history /logs link does not have correct content
 -

 Key: YARN-981
 URL: https://issues.apache.org/jira/browse/YARN-981
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Jian He
 Fix For: 2.1.1-beta

 Attachments: YARN-981.1.patch, YARN-981.2.patch, YARN-981.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-602) NodeManager should mandatorily set some Environment variables into every container that it launches

2013-08-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752388#comment-13752388
 ] 

Hudson commented on YARN-602:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1505 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1505/])
YARN-602. Fixed NodeManager to not let users override some mandatory 
environmental variables. Contributed by Kenji Kikushima. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518077)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java


 NodeManager should mandatorily set some Environment variables into every 
 container that it launches
 

 Key: YARN-602
 URL: https://issues.apache.org/jira/browse/YARN-602
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: Xuan Gong
Assignee: Kenji Kikushima
 Fix For: 2.1.1-beta

 Attachments: YARN-602-2.patch, YARN-602-3.patch, YARN-602.patch


 NodeManager should mandatorily set some Environment variables into every 
 container that it launches, such as Environment.user, Environment.pwd. If 
 both the user and the NodeManager set those variables, the value set by the 
 NM should be used.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1083) ResourceManager should fail when yarn.nm.liveness-monitor.expiry-interval-ms is set less than heartbeat interval

2013-08-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752384#comment-13752384
 ] 

Hudson commented on YARN-1083:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1505 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1505/])
YARN-1083. Changed ResourceManager to fail when the expiry interval is less 
than the configured node-heartbeat interval. Contributed by Zhijie Shen. 
(vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518036)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceManager.java


 ResourceManager should fail when yarn.nm.liveness-monitor.expiry-interval-ms 
 is set less than heartbeat interval
 

 Key: YARN-1083
 URL: https://issues.apache.org/jira/browse/YARN-1083
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.1.0-beta
Reporter: yeshavora
Assignee: Zhijie Shen
  Labels: newbie
 Fix For: 2.1.1-beta

 Attachments: YARN-1083.1.patch, YARN-1083.2.patch


 If 'yarn.nm.liveness-monitor.expiry-interval-ms' is set to less than the 
 heartbeat interval, all the node managers will be added to 'Lost Nodes'. 
 Instead, the ResourceManager should validate these properties and fail to 
 start if the combination is invalid.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-707) Add user info in the YARN ClientToken

2013-08-28 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated YARN-707:
---

Priority: Blocker  (was: Major)

 Add user info in the YARN ClientToken
 -

 Key: YARN-707
 URL: https://issues.apache.org/jira/browse/YARN-707
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Jason Lowe
Priority: Blocker
 Fix For: 3.0.0, 2.1.1-beta

 Attachments: YARN-707-20130822.txt, YARN-707-20130827.txt


 If user info is present in the client token then it can be used to do limited 
 authz in the AM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-1114) Resource Manager Failure Due to Unreachable DNS

2013-08-28 Thread Ed Kohlwey (JIRA)
Ed Kohlwey created YARN-1114:


 Summary: Resource Manager Failure Due to Unreachable DNS
 Key: YARN-1114
 URL: https://issues.apache.org/jira/browse/YARN-1114
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.1.0-beta
 Environment: Centos 6.3, Hortonworks vendor distro based on Hadoop 2.1
Reporter: Ed Kohlwey


We encountered an issue last night where DNS was briefly not resolvable on our 
cluster.

Our resource manager appears to have crashed due to an unresolvable hostname 
for a node manager. This is definitely not the right behavior, since anyone 
could crash the resource manager by advertising a node manager with an 
unresolvable hostname. It also makes the RM not very robust to transient 
network issues that may arise. 

Here is the stack trace:
{noformat}
2013-08-28 05:06:24,703 FATAL 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
handling event type NODE_UPDATE to the scheduler
java.lang.IllegalArgumentException: java.net.UnknownHostException: hostname 
removed
at 
org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:418)
at 
org.apache.hadoop.yarn.server.utils.BuilderUtils.newContainerToken(BuilderUtils.java:243)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager.createContainerToken(RMContainerTokenSecretManager.java:195)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AppSchedulable.createContainer(AppSchedulable.java:160)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AppSchedulable.assignContainer(AppSchedulable.java:237)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AppSchedulable.assignContainer(AppSchedulable.java:338)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AppSchedulable.assignContainer(AppSchedulable.java:364)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.assignContainer(FSLeafQueue.java:160)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSParentQueue.assignContainer(FSParentQueue.java:149)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.nodeUpdate(FairScheduler.java:907)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:980)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:110)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:413)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.net.UnknownHostException: hostname removed
... 14 more
{noformat}

The following is our version information (from the hortonworks distro):
{noformat}
Hadoop 2.1.0.2.0.4.0-38
Subversion git@github.com:hortonworks/hadoop.git -r 
1c6feea9d537846789eb3337dc5b1a8911cfd60a
Compiled by jenkins on 2013-07-08T10:29Z
From source with checksum d1403d7842ef98c85d5f3d1332fa4
This command was run using /usr/lib/hadoop/hadoop-common-2.1.0.2.0.4.0-38.jar
{noformat}
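
A sketch of the defensive handling this report argues for, assuming a 
scheduler-side call site like the one in the trace above (the names are 
illustrative; this is not the actual fix):

{code:title=illustrative sketch}
// SecurityUtil.buildTokenService wraps the UnknownHostException in an
// IllegalArgumentException, which otherwise escapes and kills the
// scheduler's event-handling thread. Skip the node for this heartbeat
// instead of crashing.
try {
  Token containerToken = rmContainerTokenSecretManager
      .createContainerToken(containerId, nodeId, appUser, capability);
  // ... continue the assignment with the minted token ...
} catch (IllegalArgumentException e) {
  LOG.warn("Could not resolve " + nodeId + "; skipping allocation", e);
}
{code}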

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1113) Job failing when one of the NM local dirs got filled

2013-08-28 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752417#comment-13752417
 ] 

Jason Lowe commented on YARN-1113:
--

This is related to, and possibly just a duplicate of, YARN-257.

 Job failing when one of the NM local dirs got filled
 ---

 Key: YARN-1113
 URL: https://issues.apache.org/jira/browse/YARN-1113
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: Nishan Shetty

 1. In the NodeManager, only one disk is configured for the NM local dir
 2. Make that disk full
 3. Run a job
 Problems
 - Tasks assigned to the NM with the filled disk wait for the container expiry 
 time (10 min)
 - After the expiry time those containers are killed and a new task attempt is 
 spawned
 - All the other task attempts get assigned to the same node and fail; after 4 
 attempt failures the job in turn fails

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (YARN-1114) Resource Manager Failure Due to Unreachable DNS

2013-08-28 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved YARN-1114.
--

Resolution: Duplicate

This is a duplicate of YARN-713.

 Resource Manager Failure Due to Unreachable DNS
 ---

 Key: YARN-1114
 URL: https://issues.apache.org/jira/browse/YARN-1114
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.1.0-beta
 Environment: Centos 6.3, Hortonworks vendor distro based on Hadoop 2.1
Reporter: Ed Kohlwey

 We encountered an issue last night where DNS was briefly not resolvable on 
 our cluster.
 Our resource manager appears to have crashed due to an unresolvable hostname 
 for a node manager. This is definitely not the right behavior, since anyone 
 could crash the resource manager by advertising a node manager with an 
 unresolvable hostname. It also makes the RM not very robust to transient 
 network issues that may arise. 
 Here is the stack trace:
 {noformat}
 2013-08-28 05:06:24,703 FATAL 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
 handling event type NODE_UPDATE to the scheduler
 java.lang.IllegalArgumentException: java.net.UnknownHostException: hostname 
 removed
 at 
 org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:418)
 at 
 org.apache.hadoop.yarn.server.utils.BuilderUtils.newContainerToken(BuilderUtils.java:243)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager.createContainerToken(RMContainerTokenSecretManager.java:195)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AppSchedulable.createContainer(AppSchedulable.java:160)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AppSchedulable.assignContainer(AppSchedulable.java:237)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AppSchedulable.assignContainer(AppSchedulable.java:338)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AppSchedulable.assignContainer(AppSchedulable.java:364)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.assignContainer(FSLeafQueue.java:160)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSParentQueue.assignContainer(FSParentQueue.java:149)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.nodeUpdate(FairScheduler.java:907)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:980)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:110)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:413)
 at java.lang.Thread.run(Thread.java:724)
 Caused by: java.net.UnknownHostException: hostname removed
 ... 14 more
 {noformat}
 The following is our version information (from the hortonworks distro):
 {noformat}
 Hadoop 2.1.0.2.0.4.0-38
 Subversion git@github.com:hortonworks/hadoop.git -r 
 1c6feea9d537846789eb3337dc5b1a8911cfd60a
 Compiled by jenkins on 2013-07-08T10:29Z
 From source with checksum d1403d7842ef98c85d5f3d1332fa4
 This command was run using /usr/lib/hadoop/hadoop-common-2.1.0.2.0.4.0-38.jar
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-707) Add user info in the YARN ClientToken

2013-08-28 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752443#comment-13752443
 ] 

Daryn Sharp commented on YARN-707:
--

Technically you should be bumping the token ident's version number and using 
that to determine if the app submitter is in the ident.  Otherwise, decoding of 
prior tokens will attempt to read the missing app submitter from the next 
serialized object and eventually fail spectacularly.

{{RMAppImpl#createAndGetApplicationReport}}
Using checks on {{UserGroupInformation.isSecurityEnabled()}} here and elsewhere 
will cause future incompatibility once YARN requires tokens without security, 
which is the direction YARN has been moving in.  It would be better to check 
whether the secret manager is not null.

It's just logging if it cannot create a token?  This _shouldn't_ happen, but 
_if/when_ it does it's going to lead to more difficult after-the-fact errors in 
the client.  It's unfortunate you cannot throw the checked exception 
{{IOException}}, so I think you need to change the method signature or throw 
whatever you can, like a {{YarnException}}, to fail the request.

App attempt storing/restoring appears asymmetric.  Storing saves off the 
whole credentials in the attempt, whereas restoring appears to just pluck out 
the amrm token and the new persisted secret?

Minor:
Methods using the term Token, e.g. {{recoverAppAttemptTokens}} and 
{{getTokensFromAppAttempt}}, are misleading since it's Credentials.  Vinod had 
me make a similar change to the method names in the AM.

{{AM_CLIENT_TOKEN_MASTER_KEY_NAME}} is better defined in {{RMAppAttempt}}, 
rather than in the {{RMStateStore}}.  Otherwise the import dependency seems 
backwards.
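
For concreteness, a fragment of the version-gated (de)serialization being 
suggested; the identifier layout and field names are hypothetical, not the 
actual ClientToAMTokenIdentifier:

{code:title=illustrative sketch}
// Hypothetical versioned token identifier (fields assumed for the example).
public void write(DataOutput out) throws IOException {
  out.writeByte(VERSION);      // bumped when appSubmitter was added
  out.writeUTF(appAttemptId);  // pre-existing field
  out.writeUTF(appSubmitter);  // new field, written from VERSION onward
}

public void readFields(DataInput in) throws IOException {
  byte version = in.readByte();
  appAttemptId = in.readUTF();
  if (version >= VERSION) {    // older idents stop before this field
    appSubmitter = in.readUTF();
  }
}
{code}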

 Add user info in the YARN ClientToken
 -

 Key: YARN-707
 URL: https://issues.apache.org/jira/browse/YARN-707
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Jason Lowe
Priority: Blocker
 Fix For: 3.0.0, 2.1.1-beta

 Attachments: YARN-707-20130822.txt, YARN-707-20130827.txt


 If user info is present in the client token then it can be used to do limited 
 authz in the AM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1083) ResourceManager should fail when yarn.nm.liveness-monitor.expiry-interval-ms is set less than heartbeat interval

2013-08-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752450#comment-13752450
 ] 

Hudson commented on YARN-1083:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1532 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1532/])
YARN-1083. Changed ResourceManager to fail when the expiry interval is less 
than the configured node-heartbeat interval. Contributed by Zhijie Shen. 
(vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518036)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceManager.java


 ResourceManager should fail when yarn.nm.liveness-monitor.expiry-interval-ms 
 is set less than heartbeat interval
 

 Key: YARN-1083
 URL: https://issues.apache.org/jira/browse/YARN-1083
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.1.0-beta
Reporter: yeshavora
Assignee: Zhijie Shen
  Labels: newbie
 Fix For: 2.1.1-beta

 Attachments: YARN-1083.1.patch, YARN-1083.2.patch


 If 'yarn.nm.liveness-monitor.expiry-interval-ms' is set to less than the 
 heartbeat interval, all the node managers will be added to 'Lost Nodes'. 
 Instead, the ResourceManager should validate these properties and fail to 
 start if the combination is invalid.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1081) Minor improvement to output header for $ yarn node -list

2013-08-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752451#comment-13752451
 ] 

Hudson commented on YARN-1081:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1532 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1532/])
YARN-1081. Made a trivial change to YARN node CLI header to avoid potential 
confusion. Contributed by Akira AJISAKA. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518080)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/NodeCLI.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java


 Minor improvement to output header for $ yarn node -list
 

 Key: YARN-1081
 URL: https://issues.apache.org/jira/browse/YARN-1081
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tassapol Athiapinya
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Fix For: 2.1.1-beta

 Attachments: YARN-1081.2.patch, YARN-1081.patch


 Output of $ yarn node -list shows the number of running containers at each node. 
 I found a case where a new user of YARN thought this was a container ID, used it 
 later in other YARN commands, and got an error due to the misunderstanding.
 {code:title=current output}
 2013-07-31 04:00:37,814|beaver.machine|INFO|RUNNING: /usr/bin/yarn node -list
 2013-07-31 04:00:38,746|beaver.machine|INFO|Total Nodes:1
 2013-07-31 04:00:38,747|beaver.machine|INFO|Node-Id   Node-State  
 Node-Http-Address   Running-Containers
 2013-07-31 04:00:38,747|beaver.machine|INFO|myhost:45454 RUNNING  
 myhost:50060   2
 {code}
 {code:title=proposed output}
 2013-07-31 04:00:37,814|beaver.machine|INFO|RUNNING: /usr/bin/yarn node -list
 2013-07-31 04:00:38,746|beaver.machine|INFO|Total Nodes:1
 2013-07-31 04:00:38,747|beaver.machine|INFO|Node-Id   Node-State  
 Node-Http-Address   Number-of-Running-Containers
 2013-07-31 04:00:38,747|beaver.machine|INFO|myhost:45454 RUNNING  
 myhost:50060   2
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-602) NodeManager should mandatorily set some Environment variables into every container that it launches

2013-08-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752454#comment-13752454
 ] 

Hudson commented on YARN-602:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1532 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1532/])
YARN-602. Fixed NodeManager to not let users override some mandatory 
environmental variables. Contributed by Kenji Kikushima. (vinodkv: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518077)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java


 NodeManager should mandatorily set some Environment variables into every 
 container that it launches
 

 Key: YARN-602
 URL: https://issues.apache.org/jira/browse/YARN-602
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: Xuan Gong
Assignee: Kenji Kikushima
 Fix For: 2.1.1-beta

 Attachments: YARN-602-2.patch, YARN-602-3.patch, YARN-602.patch


 NodeManager should mandatorily set some Environment variables into every 
 container that it launches, such as Environment.user, Environment.pwd. If 
 both the user and the NodeManager set those variables, the value set by the 
 NM should be used.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-696) Enable multiple states to be specified in Resource Manager apps REST call

2013-08-28 Thread Trevor Lorimer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Lorimer updated YARN-696:


Attachment: (was: YARN-696.diff)

 Enable multiple states to be specified in Resource Manager apps REST call
 

 Key: YARN-696
 URL: https://issues.apache.org/jira/browse/YARN-696
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Trevor Lorimer
Assignee: Trevor Lorimer

 Within the YARN Resource Manager REST API, the GET call which returns all 
 Applications can be filtered by a single State query parameter 
 (http://<rm http address:port>/ws/v1/cluster/apps). 
 There are 8 possible states (New, Submitted, Accepted, Running, Finishing, 
 Finished, Failed, Killed). If no state parameter is specified, all states are 
 returned; however, if a sub-set of states is required then multiple REST 
 calls are required (max. of 7).
 The proposal is to be able to specify multiple states in a single REST call.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-696) Enable multiple states to be specified in Resource Manager apps REST call

2013-08-28 Thread Trevor Lorimer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Lorimer updated YARN-696:


Attachment: YARN-696.diff

Refactored code, changed JUnit test to have 2 apps in different states.
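
A sketch of the multi-state filtering this enables on the server side (the 
parameter plumbing is assumed; the real resource method lives in 
RMWebServices):

{code:title=illustrative sketch}
// Accept ?states=running,finished and match case-insensitively.
// Assumes java.util.{EnumSet,Locale,Set} and the RMAppState enum.
Set<RMAppState> filter = EnumSet.noneOf(RMAppState.class);
for (String s : statesParam.split(",")) {
  filter.add(RMAppState.valueOf(s.trim().toUpperCase(Locale.US)));
}
// ... include an app in the response iff filter.contains(app.getState())
{code}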

 Enable multiple states to be specified in Resource Manager apps REST call
 

 Key: YARN-696
 URL: https://issues.apache.org/jira/browse/YARN-696
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Trevor Lorimer
Assignee: Trevor Lorimer
 Attachments: YARN-696.diff


 Within the YARN Resource Manager REST API, the GET call which returns all 
 Applications can be filtered by a single State query parameter 
 (http://<rm http address:port>/ws/v1/cluster/apps). 
 There are 8 possible states (New, Submitted, Accepted, Running, Finishing, 
 Finished, Failed, Killed). If no state parameter is specified, all states are 
 returned; however, if a sub-set of states is required then multiple REST 
 calls are required (max. of 7).
 The proposal is to be able to specify multiple states in a single REST call.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-696) Enable multiple states to be specified in Resource Manager apps REST call

2013-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752562#comment-13752562
 ] 

Hadoop QA commented on YARN-696:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600410/YARN-696.diff
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site:

  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesApps

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1781//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1781//console

This message is automatically generated.

 Enable multiple states to be specified in Resource Manager apps REST call
 

 Key: YARN-696
 URL: https://issues.apache.org/jira/browse/YARN-696
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Affects Versions: 2.0.4-alpha
Reporter: Trevor Lorimer
Assignee: Trevor Lorimer
 Attachments: YARN-696.diff


 Within the YARN Resource Manager REST API, the GET call which returns all 
 Applications can be filtered by a single State query parameter 
 (http://<rm http address:port>/ws/v1/cluster/apps). 
 There are 8 possible states (New, Submitted, Accepted, Running, Finishing, 
 Finished, Failed, Killed). If no state parameter is specified, all states are 
 returned; however, if a sub-set of states is required then multiple REST 
 calls are required (max. of 7).
 The proposal is to be able to specify multiple states in a single REST call.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-707) Add user info in the YARN ClientToken

2013-08-28 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752573#comment-13752573
 ] 

Jason Lowe commented on YARN-707:
-

Thanks for the review, Daryn.

bq. Technically you should be bumping the token ident's version number and 
using that to determine if the app submitter is in the ident. Otherwise, 
decoding of prior tokens will attempt to read the missing app submitter from 
the next serialized object and eventually fail spectacularly.

Talked with [~daryn] offline; there isn't a version ID in the token to bump.  
Will file a followup JIRA.  I do not know how to avoid the issue with the 
deserialization of the old format given there is no way to detect it.

bq. Using checks on UserGroupInformation.isSecurityEnabled() here and elsewhere 
will cause future incompatibility once YARN requires tokens without security, 
which is the direction YARN has been moving in. It would be better to check 
whether the secret manager is not null.

Filed YARN-1108 to track the change where we always require client AM tokens.  
I'd rather not make that change as part of this JIRA.  Given that there is 
always a client-to-AM secret manager even when security is not enabled, I'd 
rather defer that change to YARN-1108.

bq. It's just logging if it cannot create a token? This shouldn't happen, but 
if/when it does it's going to lead to more difficult after-the-fact errors in 
the client. It's unfortunate you cannot throw the checked exception 
IOException, so I think you need to change the method signature or throw 
whatever you can, like a YarnException, to fail the request.

I had it log a message since that's what the existing code already does below 
in the same method when it cannot determine the current user.  There are 
already other, legitimate scenarios in which the client will not receive an AM 
token (i.e.: it does not have VIEW_JOB access), and the client will not 
necessarily want to connect to the AM even if it does have access.  It could be 
getting the report just to track the app at a high level, and I thought it was 
a bit extreme to fail the entire request just because a small part of it that 
may not even be used by the client cannot be generated. If others feel this 
should be fatal to the request, I can be convinced to change it.

bq. App attempt storing/restoring appears asymmetric. Storing saves off the 
whole credentials in the attempt, whereas restoring appears to just pluck out 
the amrm token and the new persisted secret?

The Credentials are just a bag to hold the token and key, so it fills out an 
empty one with those two items and plucks those two back out when it gets the 
bag of stuff back.  The Credentials is just a transport mechanism in the code.  
I agree it's a bit odd that the RMAppAttemptImpl does some of this work and 
RMStateStore does the other, but I'm just preserving the existing architecture. 
 Changing that is outside the scope of this JIRA, IMHO.

bq. Methods using the term Token, e.g. recoverAppAttemptTokens and 
getTokensFromAppAttempt, are misleading since it's Credentials

Given that it used to be just tokens before this change, I'll change Tokens 
to Credentials in the method names to better reflect what is going on.

bq. AM_CLIENT_TOKEN_MASTER_KEY_NAME is better defined in RMAppAttempt, rather 
than in the RMStateStore.

I put it in RMStateStore since that's where AM_RM_TOKEN_SERVICE already existed 
and it's a similar concept -- naming something that needs to be stored.  I'll 
move this to RMAppAttemptImpl if others feel that's a better place for it.

 Add user info in the YARN ClientToken
 -

 Key: YARN-707
 URL: https://issues.apache.org/jira/browse/YARN-707
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Jason Lowe
Priority: Blocker
 Fix For: 3.0.0, 2.1.1-beta

 Attachments: YARN-707-20130822.txt, YARN-707-20130827.txt


 If user info is present in the client token then it can be used to do limited 
 authz in the AM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-173) Page navigation support for container logs page

2013-08-28 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752616#comment-13752616
 ] 

Steve Loughran commented on YARN-173:
-

I'm marking this as relates to YARN- rather than duplicates it, because that 
JIRA notes how performance of the servlet collapses if the file is big: almost 
a minute to get the trailing few lines. Even if the GUI adds start and end 
points, the servlet needs to handle paging into a file more efficiently. 
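
For the efficiency point, a sketch of seeking directly to the requested window 
instead of streaming everything before it (illustrative, for a plain on-disk 
log file written to an assumed servlet output stream; aggregated logs would 
need the equivalent seek in their reader):

{code:title=illustrative sketch}
// Serve bytes [start, end) of a log file without reading the prefix.
RandomAccessFile raf = new RandomAccessFile(logFile, "r");
try {
  raf.seek(start);  // O(1) jump to the requested page
  byte[] buf = new byte[8192];
  long remaining = end - start;
  int n;
  while (remaining > 0
      && (n = raf.read(buf, 0, (int) Math.min(buf.length, remaining))) > 0) {
    out.write(buf, 0, n);
    remaining -= n;
  }
} finally {
  raf.close();
}
{code}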

 Page navigation support for container logs page
 ---

 Key: YARN-173
 URL: https://issues.apache.org/jira/browse/YARN-173
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Affects Versions: 2.0.2-alpha, 0.23.3
Reporter: Jason Lowe
Assignee: Omkar Vinit Joshi
  Labels: usability

 ContainerLogsPage and AggregatedLogsBlock both support {{start}} and {{end}} 
 parameters which are a big help when trying to sift through a huge log.  
 However it's annoying to have to manually edit the URL to go through a giant 
 log page-by-page.  It would be very handy if the web page also provided page 
 navigation links so flipping to the next/previous/first/last chunk of log is 
 a simple click away.  Bonus points for providing a way to easily change the 
 size of the log chunk shown per page.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1001) YARN should provide per application-type and state statistics

2013-08-28 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752612#comment-13752612
 ] 

Steve Loughran commented on YARN-1001:
--

I'd recommend using {{toLowerCase(EN_US)}} in case conversion so it works 
consistently everywhere.
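
The hazard with the default-locale overload, for reference (Locale.US standing 
in for the EN_US above):

{code:title=illustrative sketch}
// Under a Turkish default locale, "I".toLowerCase() yields the dotless
// "\u0131", so "FINISHED".toLowerCase() would no longer equal "finished".
// Pinning the locale keeps the comparison stable on every JVM.
String normalized = typeParam.toLowerCase(java.util.Locale.US);
{code}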

 YARN should provide per application-type and state statistics
 -

 Key: YARN-1001
 URL: https://issues.apache.org/jira/browse/YARN-1001
 Project: Hadoop YARN
  Issue Type: Task
  Components: api
Affects Versions: 2.1.0-beta
Reporter: Srimanth Gunturi
Assignee: Zhijie Shen
 Attachments: YARN-1001.1.patch


 In Ambari we plan to show for MR2 the number of applications finished, 
 running, waiting, etc. It would be efficient if YARN could provide per 
 application-type and state aggregated counts.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-707) Add user info in the YARN ClientToken

2013-08-28 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-707:


Attachment: YARN-707-20130828.txt

Updated patch to change Tokens to Credentials in method names.

 Add user info in the YARN ClientToken
 -

 Key: YARN-707
 URL: https://issues.apache.org/jira/browse/YARN-707
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Jason Lowe
Priority: Blocker
 Fix For: 3.0.0, 2.1.1-beta

 Attachments: YARN-707-20130822.txt, YARN-707-20130827.txt, 
 YARN-707-20130828.txt


 If user info is present in the client token then it can be used to do limited 
 authz in the AM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1063) Winutils needs ability to create task as domain user

2013-08-28 Thread Kyle Leckie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752656#comment-13752656
 ] 

Kyle Leckie commented on YARN-1063:
---

Thanks Chuan,
A rebase into my branch fixed the patch issue.


 Winutils needs ability to create task as domain user
 

 Key: YARN-1063
 URL: https://issues.apache.org/jira/browse/YARN-1063
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: trunk-win
 Environment: Windows
Reporter: Kyle Leckie
  Labels: security
 Fix For: trunk-win

 Attachments: YARN-1063.patch


 h1. Summary:
 Securing a Hadoop cluster requires constructing some form of security 
 boundary around the processes executed in YARN containers. Isolation based on 
 Windows user accounts seems most feasible. This approach is similar to the 
 approach taken by the existing LinuxContainerExecutor. The current patch to 
 winutils.exe adds the ability to create a process as a domain user. 
 h1. Alternative Methods considered:
 h2. Process rights limited by security token restriction:
 On Windows, access decisions are made by examining the security token of a 
 process. It is possible to spawn a process with a restricted security token. 
 Any of the rights granted by SIDs of the default token may be restricted. It 
 is possible to see this in action by examining the security token of a 
 sandboxed process launched by a web browser. Typically the launched process 
 will have a fully restricted token and needs to access machine resources 
 through a dedicated broker process that enforces a custom security policy. 
 This broker process mechanism would break compatibility with the typical 
 Hadoop container process. The container process must be able to utilize 
 standard function calls for disk and network IO. I performed some work 
 looking at ways to ACL the local files to the specific launched process without 
 granting rights to other processes launched on the same machine, but found 
 this to be an overly complex solution. 
 h2. Relying on APP containers:
 Recent versions of Windows have the ability to launch processes within an 
 isolated container. Application containers are supported for execution of 
 WinRT-based executables. This method was ruled out due to the lack of 
 official support for standard Windows APIs. At some point in the future 
 Windows may support functionality similar to BSD jails or Linux containers; 
 at that point, support for containers should be added.
 h1. Create As User Feature Description:
 h2. Usage:
 A new sub command was added to the set of task commands. Here is the syntax:
 winutils task createAsUser [TASKNAME] [USERNAME] [COMMAND_LINE]
 Some notes:
 * The username specified is in the format of user@domain
 * The machine executing this command must be joined to the domain of the user 
 specified
 * The domain controller must allow the account executing the command access 
 to the user information. For this, join the account to the predefined group 
 labeled "Pre-Windows 2000 Compatible Access"
 * The account running the command must have several rights on the local 
 machine. These can be managed manually using secpol.msc: 
 ** Act as part of the operating system - SE_TCB_NAME
 ** Replace a process-level token - SE_ASSIGNPRIMARYTOKEN_NAME
 ** Adjust memory quotas for a process - SE_INCREASE_QUOTA_NAME
 * The launched process will not have rights to the desktop so will not be 
 able to display any information or create UI.
 * The launched process will have no network credentials. Any access of 
 network resources that requires domain authentication will fail.
 h2. Implementation:
 Winutils performs the following steps:
 # Enable the required privileges for the current process.
 # Register as a trusted process with the Local Security Authority (LSA).
 # Create a new logon for the user passed on the command line.
 # Load/Create a profile on the local machine for the new logon.
 # Create a new environment for the new logon.
 # Launch the new process in a job with the task name specified and using the 
 created logon.
 # Wait for the job to exit.
 h2. Future work:
 The following work was scoped out of this check in:
 * Support for non-domain users or machines that are not domain-joined.
 * Support for privilege isolation by running the task launcher in a high 
 privilege service with access over an ACLed named pipe.
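 Below is a minimal sketch of driving the new sub-command from Java, following 
 the usage syntax above; the task name, user, and command line are hypothetical, 
 and winutils is assumed to be on the PATH of a domain-joined machine:
 {code:java}
 import java.io.IOException;
 
 public class CreateAsUserExample {
   public static void main(String[] args)
       throws IOException, InterruptedException {
     // winutils task createAsUser [TASKNAME] [USERNAME] [COMMAND_LINE]
     Process p = new ProcessBuilder(
         "winutils", "task", "createAsUser",
         "container_01", "user@EXAMPLE.COM", "cmd /c echo hello")
         .inheritIO()
         .start();
     System.out.println("exit code: " + p.waitFor());
   }
 }
 {code}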

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-1115) Provide optional means for a scheduler to check real user ACLs

2013-08-28 Thread Eric Payne (JIRA)
Eric Payne created YARN-1115:


 Summary: Provide optional means for a scheduler to check real user 
ACLs
 Key: YARN-1115
 URL: https://issues.apache.org/jira/browse/YARN-1115
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scheduler
Affects Versions: 0.23.9, 2.1.0-beta
Reporter: Eric Payne


In the framework for secure implementation using UserGroupInformation.doAs 
(http://hadoop.apache.org/docs/stable/Secure_Impersonation.html), a trusted 
superuser can submit jobs on behalf of another user in a secure way. In this 
framework, the superuser is referred to as the real user and the proxied user 
is referred to as the effective user.

Currently when a job is submitted as an effective user, the ACLs for the 
effective user are checked against the queue on which the job is to be run. 
Depending on an optional configuration, the scheduler should also check the 
ACLs of the real user if the configuration to do so is set.

For example, suppose my superuser name is super, and super is configured to 
securely proxy as joe. Also suppose there is a Hadoop queue named ops which 
only allows ACLs for super, not for joe.

When super proxies to joe in order to submit a job to the ops queue, it will 
fail because joe, as the effective user, does not have ACLs on the ops queue.

In many cases this is what you want, in order to protect queues that joe should 
not be using.

However, there are times when super may need to proxy many users, and the 
client running as super just wants to use the ops queue because the ops queue 
is already dedicated to the client's purpose; to keep it dedicated to that 
purpose, super doesn't want to open up ACLs to joe in general on the ops 
queue. Without this functionality, the client running as super needs to 
figure out which queue each user has ACLs opened up for, and then coordinate 
with other tasks using those queues.
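
A minimal sketch of the doAs flow described above, using the standard 
UserGroupInformation proxy-user API; submitToOpsQueue() is a hypothetical 
stand-in for the submission that the scheduler ACL-checks:

{code:java}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

public class ProxySubmit {
  public static void main(String[] args) throws Exception {
    // "super" logs in; "joe" becomes the effective user of the submission.
    UserGroupInformation realUser = UserGroupInformation.getLoginUser();
    UserGroupInformation effectiveUser =
        UserGroupInformation.createProxyUser("joe", realUser);

    effectiveUser.doAs(new PrivilegedExceptionAction<Void>() {
      @Override
      public Void run() {
        // Today the scheduler checks only the effective user ("joe") against
        // the queue ACLs here; the proposal adds an optional config so the
        // real user ("super") is also accepted.
        submitToOpsQueue();
        return null;
      }
    });
  }

  static void submitToOpsQueue() { /* hypothetical job submission */ }
}
{code}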


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-979) [YARN-321] Adding application attempt and container to ApplicationHistoryProtocol

2013-08-28 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen updated YARN-979:
-

Assignee: Zhijie Shen  (was: Mayank Bansal)

 [YARN-321] Adding application attempt and container to 
 ApplicationHistoryProtocol
 -

 Key: YARN-979
 URL: https://issues.apache.org/jira/browse/YARN-979
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Mayank Bansal
Assignee: Zhijie Shen
 Attachments: YARN-979-1.patch


  Adding application attempt and container to ApplicationHistoryProtocol
 Thanks,
 Mayank

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-1116) Populate AMRMTokens back to AMRMTokenSecretManager after RM restarts

2013-08-28 Thread Jian He (JIRA)
Jian He created YARN-1116:
-

 Summary: Populate AMRMTokens back to AMRMTokenSecretManager after 
RM restarts
 Key: YARN-1116
 URL: https://issues.apache.org/jira/browse/YARN-1116
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He


The AMRMTokens are currently only saved in the RMStateStore and are not 
populated back to the AMRMTokenSecretManager after the RM restarts. This is 
needed more now that AMRMToken is also used in non-secure environments.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-979) [YARN-321] Adding application attempt and container to ApplicationHistoryProtocol

2013-08-28 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752669#comment-13752669
 ] 

Zhijie Shen commented on YARN-979:
--

Taking it over. Thanks!

 [YARN-321] Adding application attempt and container to 
 ApplicationHistoryProtocol
 -

 Key: YARN-979
 URL: https://issues.apache.org/jira/browse/YARN-979
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Mayank Bansal
Assignee: Zhijie Shen
 Attachments: YARN-979-1.patch


  Adding application attempt and container to ApplicationHistoryProtocol
 Thanks,
 Mayank

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-981) YARN/MR2/Job-history /logs link does not have correct content

2013-08-28 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752673#comment-13752673
 ] 

Gopal V commented on YARN-981:
--

HADOOP-9784 needs to be integrated into branch-2.1-beta for this to build 
correctly.

 YARN/MR2/Job-history /logs link does not have correct content
 -

 Key: YARN-981
 URL: https://issues.apache.org/jira/browse/YARN-981
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Jian He
 Fix For: 2.1.1-beta

 Attachments: YARN-981.1.patch, YARN-981.2.patch, YARN-981.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-707) Add user info in the YARN ClientToken

2013-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752694#comment-13752694
 ] 

Hadoop QA commented on YARN-707:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600422/YARN-707-20130828.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1782//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1782//console

This message is automatically generated.

 Add user info in the YARN ClientToken
 -

 Key: YARN-707
 URL: https://issues.apache.org/jira/browse/YARN-707
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Jason Lowe
Priority: Blocker
 Fix For: 3.0.0, 2.1.1-beta

 Attachments: YARN-707-20130822.txt, YARN-707-20130827.txt, 
 YARN-707-20130828.txt


 If user info is present in the client token then it can be used to do limited 
 authz in the AM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1080) Improve help message for $ yarn logs

2013-08-28 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752707#comment-13752707
 ] 

Xuan Gong commented on YARN-1080:
-

Command: yarn logs
output:
Retrieve logs for completed/killed YARN application.
usage: yarn logs -applicationId <application ID> [OPTIONS]

general options are:
 -appOwner <Application Owner>   AppOwner (assumed to be current user if
                                 not specified)
 -containerId <Container ID>     ContainerId (must be specified if node
                                 address is specified)
 -nodeAddress <Node Address>     NodeAddress in the format nodename:port
                                 (must be specified if container id is
                                 specified)

 Improve help message for $ yarn logs
 

 Key: YARN-1080
 URL: https://issues.apache.org/jira/browse/YARN-1080
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Fix For: 2.3.0

 Attachments: YARN-1080.1.patch


 There are 2 parts I am proposing in this jira. They can be fixed together in 
 one patch.
 1. Standardize help message for required parameter of $ yarn logs
 YARN CLI has a command logs ($ yarn logs). The command always requires the 
 parameter -applicationId <arg>. However, the help message of the command 
 does not make this clear: it lists -applicationId as an optional parameter. If I 
 don't set it, YARN CLI will complain that it is missing. It is better to use the 
 standard required notation used in other Linux commands for the help message; any 
 user familiar with the command can then understand more easily that this 
 parameter is needed.
 {code:title=current help message}
 -bash-4.1$ yarn logs
 usage: general options are:
  -applicationId <arg>   ApplicationId (required)
  -appOwner <arg>        AppOwner (assumed to be current user if not
                         specified)
  -containerId <arg>     ContainerId (must be specified if node address is
                         specified)
  -nodeAddress <arg>     NodeAddress in the format nodename:port (must be
                         specified if container id is specified)
 {code}
 {code:title=proposed help message}
 -bash-4.1$ yarn logs
 usage: yarn logs -applicationId <application ID> [OPTIONS]
 general options are:
  -appOwner <arg>        AppOwner (assumed to be current user if not
                         specified)
  -containerId <arg>     ContainerId (must be specified if node address is
                         specified)
  -nodeAddress <arg>     NodeAddress in the format nodename:port (must be
                         specified if container id is specified)
 {code}
 2. Add a description for the help command. As far as I know, a user cannot get logs 
 for a running job. Since I spent some time trying to get logs of running 
 applications, it would be nice to say this in the command description.
 {code:title=proposed help}
 Retrieve logs for completed/killed YARN application
 usage: general options are...
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1080) Improve help message for $ yarn logs

2013-08-28 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1080:


Attachment: YARN-1080.1.patch

 Improve help message for $ yarn logs
 

 Key: YARN-1080
 URL: https://issues.apache.org/jira/browse/YARN-1080
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Fix For: 2.3.0

 Attachments: YARN-1080.1.patch


 There are 2 parts I am proposing in this jira. They can be fixed together in 
 one patch.
 1. Standardize help message for required parameter of $ yarn logs
 YARN CLI has a command logs ($ yarn logs). The command always requires the 
 parameter -applicationId <arg>. However, the help message of the command 
 does not make this clear: it lists -applicationId as an optional parameter. If I 
 don't set it, YARN CLI will complain that it is missing. It is better to use the 
 standard required notation used in other Linux commands for the help message; any 
 user familiar with the command can then understand more easily that this 
 parameter is needed.
 {code:title=current help message}
 -bash-4.1$ yarn logs
 usage: general options are:
  -applicationId <arg>   ApplicationId (required)
  -appOwner <arg>        AppOwner (assumed to be current user if not
                         specified)
  -containerId <arg>     ContainerId (must be specified if node address is
                         specified)
  -nodeAddress <arg>     NodeAddress in the format nodename:port (must be
                         specified if container id is specified)
 {code}
 {code:title=proposed help message}
 -bash-4.1$ yarn logs
 usage: yarn logs -applicationId <application ID> [OPTIONS]
 general options are:
  -appOwner <arg>        AppOwner (assumed to be current user if not
                         specified)
  -containerId <arg>     ContainerId (must be specified if node address is
                         specified)
  -nodeAddress <arg>     NodeAddress in the format nodename:port (must be
                         specified if container id is specified)
 {code}
 2. Add a description for the help command. As far as I know, a user cannot get logs 
 for a running job. Since I spent some time trying to get logs of running 
 applications, it would be nice to say this in the command description.
 {code:title=proposed help}
 Retrieve logs for completed/killed YARN application
 usage: general options are...
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1080) Improve help message for $ yarn logs

2013-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752740#comment-13752740
 ] 

Hadoop QA commented on YARN-1080:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600435/YARN-1080.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1784//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1784//console

This message is automatically generated.

 Improve help message for $ yarn logs
 

 Key: YARN-1080
 URL: https://issues.apache.org/jira/browse/YARN-1080
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Fix For: 2.3.0

 Attachments: YARN-1080.1.patch


 There are 2 parts I am proposing in this jira. They can be fixed together in 
 one patch.
 1. Standardize help message for required parameter of $ yarn logs
 YARN CLI has a command logs ($ yarn logs). The command always requires the 
 parameter -applicationId <arg>. However, the help message of the command 
 does not make this clear: it lists -applicationId as an optional parameter. If I 
 don't set it, YARN CLI will complain that it is missing. It is better to use the 
 standard required notation used in other Linux commands for the help message; any 
 user familiar with the command can then understand more easily that this 
 parameter is needed.
 {code:title=current help message}
 -bash-4.1$ yarn logs
 usage: general options are:
  -applicationId <arg>   ApplicationId (required)
  -appOwner <arg>        AppOwner (assumed to be current user if not
                         specified)
  -containerId <arg>     ContainerId (must be specified if node address is
                         specified)
  -nodeAddress <arg>     NodeAddress in the format nodename:port (must be
                         specified if container id is specified)
 {code}
 {code:title=proposed help message}
 -bash-4.1$ yarn logs
 usage: yarn logs -applicationId <application ID> [OPTIONS]
 general options are:
  -appOwner <arg>        AppOwner (assumed to be current user if not
                         specified)
  -containerId <arg>     ContainerId (must be specified if node address is
                         specified)
  -nodeAddress <arg>     NodeAddress in the format nodename:port (must be
                         specified if container id is specified)
 {code}
 2. Add a description for the help command. As far as I know, a user cannot get logs 
 for a running job. Since I spent some time trying to get logs of running 
 applications, it would be nice to say this in the command description.
 {code:title=proposed help}
 Retrieve logs for completed/killed YARN application
 usage: general options are...
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-771) AMRMClient support for resource blacklisting

2013-08-28 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-771:


Attachment: YARN-771-v3.patch

 AMRMClient  support for resource blacklisting
 -

 Key: YARN-771
 URL: https://issues.apache.org/jira/browse/YARN-771
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha
Assignee: Junping Du
 Attachments: YARN-771-v1.0.patch, YARN-771-v2.patch, YARN-771-v3.patch


 After YARN-750, AMRMClient should support blacklisting via the new YARN APIs

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-353) Add Zookeeper-based store implementation for RMStateStore

2013-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752755#comment-13752755
 ] 

Hadoop QA commented on YARN-353:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600180/YARN-353.15.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1783//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1783//console

This message is automatically generated.

 Add Zookeeper-based store implementation for RMStateStore
 -

 Key: YARN-353
 URL: https://issues.apache.org/jira/browse/YARN-353
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Hitesh Shah
Assignee: Karthik Kambatla
 Attachments: YARN-353.10.patch, YARN-353.11.patch, YARN-353.12.patch, 
 yarn-353-12-wip.patch, YARN-353.13.patch, YARN-353.14.patch, 
 YARN-353.15.patch, YARN-353.1.patch, YARN-353.2.patch, YARN-353.3.patch, 
 YARN-353.4.patch, YARN-353.5.patch, YARN-353.6.patch, YARN-353.7.patch, 
 YARN-353.8.patch, YARN-353.9.patch


 Add a store that writes RM state data to ZK

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-771) AMRMClient support for resource blacklisting

2013-08-28 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752754#comment-13752754
 ] 

Junping Du commented on YARN-771:
-

[~bikassaha], thanks for the review! I addressed all your comments in the v3 
patch. Please take another look. Thx!

 AMRMClient  support for resource blacklisting
 -

 Key: YARN-771
 URL: https://issues.apache.org/jira/browse/YARN-771
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha
Assignee: Junping Du
 Attachments: YARN-771-v1.0.patch, YARN-771-v2.patch, YARN-771-v3.patch


 After YARN-750, AMRMClient should support blacklisting via the new YARN APIs

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-257) NM should gracefully handle a full local disk

2013-08-28 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752764#comment-13752764
 ] 

Eli Collins commented on YARN-257:
--

This seems like a blocker for GA given that MR1 handles disk failures.

 NM should gracefully handle a full local disk
 -

 Key: YARN-257
 URL: https://issues.apache.org/jira/browse/YARN-257
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Affects Versions: 2.0.2-alpha, 0.23.5
Reporter: Jason Lowe

 When a local disk becomes full, the node will fail every container launched 
 on it because the container is unable to localize.  It tries to create an 
 app-specific directory under each of the local and log directories.  If any of 
 those directory creates fails (due to lack of free space), the container fails.
 It would be nice if the node could continue to launch containers using the 
 space available on other disks rather than failing all containers trying to 
 launch on the node.
 This is somewhat related to YARN-91 but is centered around the disk becoming 
 full rather than the disk failing.
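 A minimal sketch of the requested behavior - pick only local dirs that can 
 still hold the container's working area instead of failing the launch; the 
 helper name and minimum-space threshold are hypothetical:
 {code:java}
 import java.io.File;
 import java.util.ArrayList;
 import java.util.List;
 
 public class UsableDirs {
   // Return the subset of local dirs with at least minBytes of free space,
   // so container launch can fall back to other disks when one fills up.
   public static List<File> withFreeSpace(List<File> localDirs, long minBytes) {
     List<File> usable = new ArrayList<File>();
     for (File dir : localDirs) {
       if (dir.getUsableSpace() >= minBytes) {
         usable.add(dir);
       }
     }
     return usable;
   }
 }
 {code}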

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-707) Add user info in the YARN ClientToken

2013-08-28 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752765#comment-13752765
 ] 

Jason Lowe commented on YARN-707:
-

bq. there isn't a version ID in the token to bump. Will file a followup JIRA.

YARN-668 already covers versioning for YARN tokens, including the ClientAMToken.

 Add user info in the YARN ClientToken
 -

 Key: YARN-707
 URL: https://issues.apache.org/jira/browse/YARN-707
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Jason Lowe
Priority: Blocker
 Fix For: 3.0.0, 2.1.1-beta

 Attachments: YARN-707-20130822.txt, YARN-707-20130827.txt, 
 YARN-707-20130828.txt


 If user info is present in the client token then it can be used to do limited 
 authz in the AM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-91) DFIP aka 'NodeManager should handle Disk-Failures In Place'

2013-08-28 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-91?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752763#comment-13752763
 ] 

Eli Collins commented on YARN-91:
-

This seems like a blocker for GA given that MR1 handles disk failures.

  DFIP aka 'NodeManager should handle Disk-Failures In Place' 
 -

 Key: YARN-91
 URL: https://issues.apache.org/jira/browse/YARN-91
 Project: Hadoop YARN
  Issue Type: Task
  Components: nodemanager
Reporter: Vinod Kumar Vavilapalli

 Moving stuff over from the MAPREDUCE JIRA: MAPREDUCE-3121

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-1118) Improve help message for $ yarn node

2013-08-28 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1118:
-

 Summary: Improve help message for $ yarn node
 Key: YARN-1118
 URL: https://issues.apache.org/jira/browse/YARN-1118
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tassapol Athiapinya


YARN-1080 standardizes the help message for $ yarn logs. It would be nice to 
have similar changes for $ yarn node

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-1117) Improve help message for $ yarn applications

2013-08-28 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1117:
-

 Summary: Improve help message for $ yarn applications
 Key: YARN-1117
 URL: https://issues.apache.org/jira/browse/YARN-1117
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tassapol Athiapinya


YARN-1080 standardizes the help message for $ yarn logs. It would be nice to 
have similar changes for $ yarn applications

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-962) Update application_history_service.proto

2013-08-28 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752833#comment-13752833
 ] 

Zhijie Shen commented on YARN-962:
--

[~vinodkv], modifications on application_history_service.proto and 
application_history_client.proto haven't been checked in.

 Update application_history_service.proto
 

 Key: YARN-962
 URL: https://issues.apache.org/jira/browse/YARN-962
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Zhijie Shen
Assignee: Zhijie Shen
 Fix For: YARN-321

 Attachments: YARN-962.1.patch


 1. Change its name to application_history_client.proto
 2. Fix the incorrect proto reference.
 3. Correct the dir in pom.xml

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1101) Active nodes can be decremented below 0

2013-08-28 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752843#comment-13752843
 ] 

Thomas Graves commented on YARN-1101:
-

thanks Rob. +1, patch looks good.  We should file a separate jira to add in 
tests for the metrics and the healthy state transitions.

 Active nodes can be decremented below 0
 ---

 Key: YARN-1101
 URL: https://issues.apache.org/jira/browse/YARN-1101
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 3.0.0, 0.23.9, 2.0.6-alpha
Reporter: Robert Parker
Assignee: Robert Parker
 Attachments: YARN-1101_b0.23_v1.patch, YARN-1101_v1.patch


 The issue is in RMNodeImpl, where both the RUNNING and UNHEALTHY states 
 transition to a deactivated state (LOST, DECOMMISSIONED, REBOOTED) using the 
 same DeactivateNodeTransition class.  The DeactivateNodeTransition class 
 naturally decrements the active node count; however, in cases where the node 
 has transitioned to UNHEALTHY, the active count has already been decremented.
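 A minimal sketch of the guard implied by this description; the class and 
 method names are illustrative, not the exact RMNodeImpl code:
 {code:java}
 import org.apache.hadoop.yarn.api.records.NodeState;
 
 // Only decrement the active-node count when the node leaves RUNNING;
 // the RUNNING -> UNHEALTHY transition has already decremented it.
 public class DeactivateGuard {
   interface Metrics { void decrNumActiveNodes(); }
 
   static void onDeactivate(NodeState initialState, Metrics metrics) {
     if (initialState == NodeState.RUNNING) {
       metrics.decrNumActiveNodes();
     }
   }
 }
 {code}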

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-91) DFIP aka 'NodeManager should handle Disk-Failures In Place'

2013-08-28 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-91?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752844#comment-13752844
 ] 

Vinod Kumar Vavilapalli commented on YARN-91:
-

We already have the feature in. This was created for tracking some miscellaneous 
things that aren't handled, none of which I believe are regressions from 
Hadoop-1.

  DFIP aka 'NodeManager should handle Disk-Failures In Place' 
 -

 Key: YARN-91
 URL: https://issues.apache.org/jira/browse/YARN-91
 Project: Hadoop YARN
  Issue Type: Task
  Components: nodemanager
Reporter: Vinod Kumar Vavilapalli

 Moving stuff over from the MAPREDUCE JIRA: MAPREDUCE-3121

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-1119) Add ClusterMetrics checks to the TestRMNodeTransitions tests

2013-08-28 Thread Robert Parker (JIRA)
Robert Parker created YARN-1119:
---

 Summary: Add ClusterMetrics checks to the TestRMNodeTransitions 
tests
 Key: YARN-1119
 URL: https://issues.apache.org/jira/browse/YARN-1119
 Project: Hadoop YARN
  Issue Type: Test
  Components: resourcemanager
Affects Versions: 2.0.6-alpha, 0.23.9, 3.0.0
Reporter: Robert Parker
Assignee: Robert Parker


YARN-1101 identified an issue where UNHEALTHY nodes could double-decrement the 
active node count. We should add checks for RUNNING node transitions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1101) Active nodes can be decremented below 0

2013-08-28 Thread Robert Parker (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752852#comment-13752852
 ] 

Robert Parker commented on YARN-1101:
-

Tom, thanks for the review. I have added YARN-1119 to address improving the 
other tests in this test class.

 Active nodes can be decremented below 0
 ---

 Key: YARN-1101
 URL: https://issues.apache.org/jira/browse/YARN-1101
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 3.0.0, 0.23.9, 2.0.6-alpha
Reporter: Robert Parker
Assignee: Robert Parker
 Attachments: YARN-1101_b0.23_v1.patch, YARN-1101_v1.patch


 The issue is in RMNodeImpl where both RUNNING and UNHEALTHY states that 
 transition to a deactive state (LOST, DECOMMISSIONED, REBOOTED) use the same 
 DeactivateNodeTransition class.  The DeactivateNodeTransition class naturally 
 decrements the active node, however the in cases where the node has 
 transition to UNHEALTHY the active count has already been decremented.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1101) Active nodes can be decremented below 0

2013-08-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752855#comment-13752855
 ] 

Hudson commented on YARN-1101:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4340 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4340/])
YARN-1101. Active nodes can be decremented below 0 (Robert Parker via tgraves) 
(tgraves: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1518384)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMNodeTransitions.java


 Active nodes can be decremented below 0
 ---

 Key: YARN-1101
 URL: https://issues.apache.org/jira/browse/YARN-1101
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 3.0.0, 0.23.9, 2.0.6-alpha
Reporter: Robert Parker
Assignee: Robert Parker
 Fix For: 3.0.0, 2.3.0, 0.23.10, 2.1.1-beta

 Attachments: YARN-1101_b0.23_v1.patch, YARN-1101_v1.patch


 The issue is in RMNodeImpl where both RUNNING and UNHEALTHY states that 
 transition to a deactive state (LOST, DECOMMISSIONED, REBOOTED) use the same 
 DeactivateNodeTransition class.  The DeactivateNodeTransition class naturally 
 decrements the active node, however the in cases where the node has 
 transition to UNHEALTHY the active count has already been decremented.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (YARN-990) YARN REST api needs filtering capability

2013-08-28 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli resolved YARN-990.
--

Resolution: Duplicate

Yes, this is already in there; you have to use the 'state' and 'applicationTypes' 
filters. Closing this as a duplicate of MAPREDUCE-2863 and YARN-865.
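
A minimal sketch of querying those existing filters; the RM host/port are taken 
from the URL in the description below, and the parameter names follow this 
comment:

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class AppsFilterQuery {
  public static void main(String[] args) throws Exception {
    // Filter the /apps endpoint by application type and state.
    URL url = new URL("http://dev01:8088/ws/v1/cluster/apps"
        + "?applicationTypes=MAPREDUCE&state=RUNNING");
    try (BufferedReader in =
        new BufferedReader(new InputStreamReader(url.openStream()))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);
      }
    }
  }
}
{code}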

 YARN REST api needs filtering capability
 

 Key: YARN-990
 URL: https://issues.apache.org/jira/browse/YARN-990
 Project: Hadoop YARN
  Issue Type: Bug
  Components: api
Affects Versions: 2.1.0-beta
Reporter: Srimanth Gunturi

 We wanted to find the MR2 apps which were running/finished/etc. There was no 
 filtering capability on the /apps endpoint.
 [http://dev01:8088/ws/v1/cluster/apps?applicationType=MAPREDUCE&state=RUNNING]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Reopened] (YARN-981) YARN/MR2/Job-history /logs link does not have correct content

2013-08-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened YARN-981:
-


branch-2.1 doesn't compile after this patch is merged in
{code}
[ERROR] 
hadoop-trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApps.java:[246,34]
 cannot find symbol
[ERROR] symbol  : method getWebAppContext()
[ERROR] location: class org.apache.hadoop.http.HttpServer
{code}

 YARN/MR2/Job-history /logs link does not have correct content
 -

 Key: YARN-981
 URL: https://issues.apache.org/jira/browse/YARN-981
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Jian He
 Fix For: 2.1.1-beta

 Attachments: YARN-981.1.patch, YARN-981.2.patch, YARN-981.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1077) TestContainerLaunch fails on Windows

2013-08-28 Thread Chuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuan Liu updated YARN-1077:


Attachment: YARN-1077.3.patch

Attaching a new patch.

bq. Should we just change ExitCode.TERMINATED.getExitCode() to return the 
correct code depending on the OS? That way all future callers can simply work. 
I've seen that pattern in other patches too, so saying.

This is a really good idea! Can we deal with this in a separate JIRA? I don't 
want to increase the scope of the current patch.

 TestContainerLaunch fails on Windows
 

 Key: YARN-1077
 URL: https://issues.apache.org/jira/browse/YARN-1077
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Attachments: YARN-1077.2.patch, YARN-1077.3.patch, YARN-1077.patch


 Several cases in this unit test fail on Windows. (Error log appended at the 
 end.)
 testInvalidEnvSyntaxDiagnostics fails because of the difference between cmd and 
 bash script error handling. If some command fails in the cmd script, cmd will 
 continue executing the rest of the script commands. Error handling needs to 
 be explicitly carried out in the script file. The error code of the last 
 command will be returned as the error code of the whole script. In this test, 
 some error happened in the middle of the cmd script; the test expects an 
 exception and a non-zero error code. In the cmd script, the intermediate errors 
 are ignored, the last command call succeeded, and there is no exception.
 testContainerLaunchStdoutAndStderrDiagnostics fails due to wrong cmd commands 
 used by the test.
 testContainerEnvVariables and testDelayedKill fail due to a regression from 
 YARN-906.
 {noformat}
 ---
 Test set: 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch
 ---
 Tests run: 7, Failures: 4, Errors: 0, Skipped: 0, Time elapsed: 11.526 sec 
  FAILURE!
 testInvalidEnvSyntaxDiagnostics(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 583 sec   FAILURE!
 junit.framework.AssertionFailedError: Should catch exception
   at junit.framework.Assert.fail(Assert.java:50)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testInvalidEnvSyntaxDiagnostics(TestContainerLaunch.java:269)
 ...
 testContainerLaunchStdoutAndStderrDiagnostics(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 561 sec   FAILURE!
 junit.framework.AssertionFailedError: Should catch exception
   at junit.framework.Assert.fail(Assert.java:50)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testContainerLaunchStdoutAndStderrDiagnostics(TestContainerLaunch.java:314)
 ...
 testContainerEnvVariables(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 4136 sec   FAILURE!
  junit.framework.AssertionFailedError: expected:<137> but was:<143>
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.failNotEquals(Assert.java:287)
   at junit.framework.Assert.assertEquals(Assert.java:67)
   at junit.framework.Assert.assertEquals(Assert.java:199)
   at junit.framework.Assert.assertEquals(Assert.java:205)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testContainerEnvVariables(TestContainerLaunch.java:500)
 ...
 testDelayedKill(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 2744 sec   FAILURE!
  junit.framework.AssertionFailedError: expected:<137> but was:<143>
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.failNotEquals(Assert.java:287)
   at junit.framework.Assert.assertEquals(Assert.java:67)
   at junit.framework.Assert.assertEquals(Assert.java:199)
   at junit.framework.Assert.assertEquals(Assert.java:205)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testDelayedKill(TestContainerLaunch.java:601)
 ...
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-707) Add user info in the YARN ClientToken

2013-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752943#comment-13752943
 ] 

Hadoop QA commented on YARN-707:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12600476/YARN-707-20130828-2.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1785//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1785//console

This message is automatically generated.

 Add user info in the YARN ClientToken
 -

 Key: YARN-707
 URL: https://issues.apache.org/jira/browse/YARN-707
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Jason Lowe
Priority: Blocker
 Fix For: 3.0.0, 2.1.1-beta

 Attachments: YARN-707-20130822.txt, YARN-707-20130827.txt, 
 YARN-707-20130828-2.txt, YARN-707-20130828.txt


 If user info is present in the client token then it can be used to do limited 
 authz in the AM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1077) TestContainerLaunch fails on Windows

2013-08-28 Thread Chuan Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752950#comment-13752950
 ] 

Chuan Liu commented on YARN-1077:
-

In the new patch, I removed the following code because the writer is writing to 
shellFile, but later the file output stream opens the file without append mode, 
so the code is simply dead code.

{code:java}
   shellFile = Shell.appendScriptExtension(tmpDir, "hello");
-  String timeoutCommand = Shell.WINDOWS ? "@echo \"hello\"" :
-    "echo \"hello\"";
-  PrintWriter writer = new PrintWriter(new FileOutputStream(shellFile));
-  FileUtil.setExecutable(shellFile, true);
-  writer.println(timeoutCommand);
-  writer.close();
   Map<Path, List<String>> resources =
       new HashMap<Path, List<String>>();
   FileOutputStream fos = new FileOutputStream(shellFile);
{code}

 TestContainerLaunch fails on Windows
 

 Key: YARN-1077
 URL: https://issues.apache.org/jira/browse/YARN-1077
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Attachments: YARN-1077.2.patch, YARN-1077.3.patch, YARN-1077.patch


 Several cases in this unit test fail on Windows. (Error log appended at the 
 end.)
 testInvalidEnvSyntaxDiagnostics fails because of the difference between cmd and 
 bash script error handling. If some command fails in the cmd script, cmd will 
 continue executing the rest of the script commands. Error handling needs to 
 be explicitly carried out in the script file. The error code of the last 
 command will be returned as the error code of the whole script. In this test, 
 some error happened in the middle of the cmd script; the test expects an 
 exception and a non-zero error code. In the cmd script, the intermediate errors 
 are ignored, the last command call succeeded, and there is no exception.
 testContainerLaunchStdoutAndStderrDiagnostics fails due to wrong cmd commands 
 used by the test.
 testContainerEnvVariables and testDelayedKill fail due to a regression from 
 YARN-906.
 {noformat}
 ---
 Test set: 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch
 ---
 Tests run: 7, Failures: 4, Errors: 0, Skipped: 0, Time elapsed: 11.526 sec 
  FAILURE!
 testInvalidEnvSyntaxDiagnostics(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 583 sec   FAILURE!
 junit.framework.AssertionFailedError: Should catch exception
   at junit.framework.Assert.fail(Assert.java:50)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testInvalidEnvSyntaxDiagnostics(TestContainerLaunch.java:269)
 ...
 testContainerLaunchStdoutAndStderrDiagnostics(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 561 sec   FAILURE!
 junit.framework.AssertionFailedError: Should catch exception
   at junit.framework.Assert.fail(Assert.java:50)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testContainerLaunchStdoutAndStderrDiagnostics(TestContainerLaunch.java:314)
 ...
 testContainerEnvVariables(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 4136 sec   FAILURE!
  junit.framework.AssertionFailedError: expected:<137> but was:<143>
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.failNotEquals(Assert.java:287)
   at junit.framework.Assert.assertEquals(Assert.java:67)
   at junit.framework.Assert.assertEquals(Assert.java:199)
   at junit.framework.Assert.assertEquals(Assert.java:205)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testContainerEnvVariables(TestContainerLaunch.java:500)
 ...
 testDelayedKill(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 2744 sec   FAILURE!
  junit.framework.AssertionFailedError: expected:<137> but was:<143>
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.failNotEquals(Assert.java:287)
   at junit.framework.Assert.assertEquals(Assert.java:67)
   at junit.framework.Assert.assertEquals(Assert.java:199)
   at junit.framework.Assert.assertEquals(Assert.java:205)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testDelayedKill(TestContainerLaunch.java:601)
 ...
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Commented] (YARN-981) YARN/MR2/Job-history /logs link does not have correct content

2013-08-28 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752953#comment-13752953
 ] 

Jian He commented on YARN-981:
--

This patch also breaks the web service; it needs more fixes.

 YARN/MR2/Job-history /logs link does not have correct content
 -

 Key: YARN-981
 URL: https://issues.apache.org/jira/browse/YARN-981
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Jian He
 Fix For: 2.1.1-beta

 Attachments: YARN-981.1.patch, YARN-981.2.patch, YARN-981.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1077) TestContainerLaunch fails on Windows

2013-08-28 Thread Chuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuan Liu updated YARN-1077:


Attachment: YARN-1077.4.patch

Attaching a new patch. I forgot to set the file executable in the old patch. 

 TestContainerLaunch fails on Windows
 

 Key: YARN-1077
 URL: https://issues.apache.org/jira/browse/YARN-1077
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Attachments: YARN-1077.2.patch, YARN-1077.3.patch, YARN-1077.4.patch, 
 YARN-1077.patch


 Several cases in this unit test fail on Windows. (Error log appended at the 
 end.)
 testInvalidEnvSyntaxDiagnostics fails because of the difference between cmd and 
 bash script error handling. If some command fails in the cmd script, cmd will 
 continue executing the rest of the script commands. Error handling needs to 
 be explicitly carried out in the script file. The error code of the last 
 command will be returned as the error code of the whole script. In this test, 
 some error happened in the middle of the cmd script; the test expects an 
 exception and a non-zero error code. In the cmd script, the intermediate errors 
 are ignored, the last command call succeeded, and there is no exception.
 testContainerLaunchStdoutAndStderrDiagnostics fails due to wrong cmd commands 
 used by the test.
 testContainerEnvVariables and testDelayedKill fail due to a regression from 
 YARN-906.
 {noformat}
 ---
 Test set: 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch
 ---
 Tests run: 7, Failures: 4, Errors: 0, Skipped: 0, Time elapsed: 11.526 sec 
  FAILURE!
 testInvalidEnvSyntaxDiagnostics(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 0.583 sec   FAILURE!
 junit.framework.AssertionFailedError: Should catch exception
   at junit.framework.Assert.fail(Assert.java:50)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testInvalidEnvSyntaxDiagnostics(TestContainerLaunch.java:269)
 ...
 testContainerLaunchStdoutAndStderrDiagnostics(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 0.561 sec   FAILURE!
 junit.framework.AssertionFailedError: Should catch exception
   at junit.framework.Assert.fail(Assert.java:50)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testContainerLaunchStdoutAndStderrDiagnostics(TestContainerLaunch.java:314)
 ...
 testContainerEnvVariables(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 4.136 sec   FAILURE!
 junit.framework.AssertionFailedError: expected:<137> but was:<143>
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.failNotEquals(Assert.java:287)
   at junit.framework.Assert.assertEquals(Assert.java:67)
   at junit.framework.Assert.assertEquals(Assert.java:199)
   at junit.framework.Assert.assertEquals(Assert.java:205)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testContainerEnvVariables(TestContainerLaunch.java:500)
 ...
 testDelayedKill(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 2.744 sec   FAILURE!
 junit.framework.AssertionFailedError: expected:<137> but was:<143>
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.failNotEquals(Assert.java:287)
   at junit.framework.Assert.assertEquals(Assert.java:67)
   at junit.framework.Assert.assertEquals(Assert.java:199)
   at junit.framework.Assert.assertEquals(Assert.java:205)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testDelayedKill(TestContainerLaunch.java:601)
 ...
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-981) YARN/MR2/Job-history /logs link does not have correct content

2013-08-28 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752978#comment-13752978
 ] 

Jian He commented on YARN-981:
--

In fact, we have 3 contexts in HttpServer:
webAppContext: /logs/ - points to the log directory
staticContext:  /static/ - points to common static files (src/webapps/static)
logContext:  / - the jsp server code from (src/webapps/name)

In YARN, we should only apply GuiceFilter to the webAppContext

 YARN/MR2/Job-history /logs link does not have correct content
 -

 Key: YARN-981
 URL: https://issues.apache.org/jira/browse/YARN-981
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Jian He
 Fix For: 2.1.1-beta

 Attachments: YARN-981.1.patch, YARN-981.2.patch, YARN-981.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-981) YARN/MR2/Job-history /logs link does not have correct content

2013-08-28 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752980#comment-13752980
 ] 

Jian He commented on YARN-981:
--

To clarify: in YARN, we should apply GuiceFilter to ALL paths of the webAppContext.
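
A minimal sketch of that wiring, assuming the Jetty 6 Context API Hadoop used 
at the time (the class and method names here are assumptions, not the actual 
patch):
{code}
import com.google.inject.servlet.GuiceFilter;
import org.mortbay.jetty.servlet.Context;

public class WebAppWiring {
  // Attach GuiceFilter to every path of the webAppContext only; the
  // logs and static contexts keep their default servlets.
  static void applyGuiceFilter(Context webAppContext) {
    webAppContext.addFilter(GuiceFilter.class, "/*", 0);
  }
}
{code}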

 YARN/MR2/Job-history /logs link does not have correct content
 -

 Key: YARN-981
 URL: https://issues.apache.org/jira/browse/YARN-981
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Jian He
 Fix For: 2.1.1-beta

 Attachments: YARN-981.1.patch, YARN-981.2.patch, YARN-981.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-981) YARN/MR2/Job-history /logs link does not have correct content

2013-08-28 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-981:
-

Attachment: YARN-981.3.patch

Uploading a new patch to fix the web service issue.

 YARN/MR2/Job-history /logs link does not have correct content
 -

 Key: YARN-981
 URL: https://issues.apache.org/jira/browse/YARN-981
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Jian He
 Fix For: 2.1.1-beta

 Attachments: YARN-981.1.patch, YARN-981.2.patch, YARN-981.3.patch, 
 YARN-981.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-981) YARN/MR2/Job-history /logs link does not have correct content

2013-08-28 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752988#comment-13752988
 ] 

Vinod Kumar Vavilapalli commented on YARN-981:
--

Reverted the original patch on branch-2.1 for fixing build failures.

 YARN/MR2/Job-history /logs link does not have correct content
 -

 Key: YARN-981
 URL: https://issues.apache.org/jira/browse/YARN-981
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Jian He
 Fix For: 2.1.1-beta

 Attachments: YARN-981.1.patch, YARN-981.2.patch, YARN-981.3.patch, 
 YARN-981.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-1120) Make ApplicationConstants.Environment.USER definition OS neutral

2013-08-28 Thread Chuan Liu (JIRA)
Chuan Liu created YARN-1120:
---

 Summary: Make ApplicationConstants.Environment.USER definition OS 
neutral
 Key: YARN-1120
 URL: https://issues.apache.org/jira/browse/YARN-1120
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor


In YARN-557, we added some code to give 
{{ApplicationConstants.Environment.USER}} an OS-specific definition in order to 
fix the unit test TestUnmanagedAMLauncher. In YARN-571, the relevant test code 
was corrected. In YARN-602, we now explicitly set the environment variables for 
the child containers. With these changes, I think we can revert the YARN-557 
change and make {{ApplicationConstants.Environment.USER}} OS neutral. The main 
benefit is that we can use the same method over the Enum constants. This should 
also fix the TestContainerLaunch#testContainerEnvVariables failure on Windows. 
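
For illustration, a minimal sketch of the enum pattern in question (not the 
actual Hadoop source; the shape of the class is assumed): keeping every 
constant's definition OS neutral lets a helper such as an expansion method 
apply uniformly to all Enum constants, with only the expansion itself varying 
by OS.
{code}
public enum Environment {
  USER("USER"),
  PWD("PWD");

  private final String variable;

  Environment(String variable) {
    this.variable = variable;
  }

  // The definition above is OS neutral; only the expansion differs,
  // e.g. %USER% on Windows versus $USER elsewhere.
  public String $() {
    boolean onWindows = System.getProperty("os.name").startsWith("Windows");
    return onWindows ? "%" + variable + "%" : "$" + variable;
  }
}
{code}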

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-981) YARN/MR2/Job-history /logs link does not have correct content

2013-08-28 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752989#comment-13752989
 ] 

Jian He commented on YARN-981:
--

Sorry, the previous mapping was wrong; it should be this:
logContext: /logs/ - points to the log directory
staticContext: /static/ - points to common static files (src/webapps/static)
webAppContext: / - the jsp server code from (src/webapps/name)

 YARN/MR2/Job-history /logs link does not have correct content
 -

 Key: YARN-981
 URL: https://issues.apache.org/jira/browse/YARN-981
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Jian He
 Fix For: 2.1.1-beta

 Attachments: YARN-981.1.patch, YARN-981.2.patch, YARN-981.3.patch, 
 YARN-981.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1120) Make ApplicationConstants.Environment.USER definition OS neutral

2013-08-28 Thread Chuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuan Liu updated YARN-1120:


Attachment: YARN-1120.patch

Attaching a patch. I verified that both TestUnmanagedAMLauncher and 
TestContainerLaunch pass with this patch on Windows.

 Make ApplicationConstants.Environment.USER definition OS neutral
 

 Key: YARN-1120
 URL: https://issues.apache.org/jira/browse/YARN-1120
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Attachments: YARN-1120.patch


 In YARN-557, we added some code to give 
 {{ApplicationConstants.Environment.USER}} an OS-specific definition in order 
 to fix the unit test TestUnmanagedAMLauncher. In YARN-571, the relevant test 
 code was corrected. In YARN-602, we now explicitly set the environment 
 variables for the child containers. With these changes, I think we can revert 
 the YARN-557 change and make {{ApplicationConstants.Environment.USER}} OS 
 neutral. The main benefit is that we can use the same method over the Enum 
 constants. This should also fix the 
 TestContainerLaunch#testContainerEnvVariables failure on Windows. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1077) TestContainerLaunch fails on Windows

2013-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13752993#comment-13752993
 ] 

Hadoop QA commented on YARN-1077:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600489/YARN-1077.5.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1786//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1786//console

This message is automatically generated.

 TestContainerLaunch fails on Windows
 

 Key: YARN-1077
 URL: https://issues.apache.org/jira/browse/YARN-1077
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Attachments: YARN-1077.2.patch, YARN-1077.3.patch, YARN-1077.4.patch, 
 YARN-1077.5.patch, YARN-1077.patch


 Several cases in this unit tests fail on Windows. (Append error log at the 
 end.)
 testInvalidEnvSyntaxDiagnostics fails because of the difference between cmd 
 and bash script error handling. If some command fails in a cmd script, cmd 
 will continue executing the rest of the script's commands; error handling 
 needs to be carried out explicitly in the script file, and the error code of 
 the last command is returned as the error code of the whole script. In this 
 test, an error happens in the middle of the cmd script, and the test expects 
 an exception and a non-zero error code. In the cmd script the intermediate 
 errors are ignored, the last command call succeeds, and there is no exception.
 testContainerLaunchStdoutAndStderrDiagnostics fails due to wrong cmd commands 
 used by the test.
 testContainerEnvVariables and testDelayedKill fail due to a regression from 
 YARN-906.
 {noformat}
 ---
 Test set: 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch
 ---
 Tests run: 7, Failures: 4, Errors: 0, Skipped: 0, Time elapsed: 11.526 sec 
  FAILURE!
 testInvalidEnvSyntaxDiagnostics(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 0.583 sec   FAILURE!
 junit.framework.AssertionFailedError: Should catch exception
   at junit.framework.Assert.fail(Assert.java:50)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testInvalidEnvSyntaxDiagnostics(TestContainerLaunch.java:269)
 ...
 testContainerLaunchStdoutAndStderrDiagnostics(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 0.561 sec   FAILURE!
 junit.framework.AssertionFailedError: Should catch exception
   at junit.framework.Assert.fail(Assert.java:50)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testContainerLaunchStdoutAndStderrDiagnostics(TestContainerLaunch.java:314)
 ...
 testContainerEnvVariables(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)
   Time elapsed: 4.136 sec   FAILURE!
 junit.framework.AssertionFailedError: expected:<137> but was:<143>
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.failNotEquals(Assert.java:287)
   at junit.framework.Assert.assertEquals(Assert.java:67)
   at junit.framework.Assert.assertEquals(Assert.java:199)
   at junit.framework.Assert.assertEquals(Assert.java:205)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testContainerEnvVariables(TestContainerLaunch.java:500)
 ...
 {noformat}

[jira] [Commented] (YARN-707) Add user info in the YARN ClientToken

2013-08-28 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13753017#comment-13753017
 ] 

Vinod Kumar Vavilapalli commented on YARN-707:
--

Re. the UGI issue, you can change RMApp.createAndGetApplicationReport() to take 
in the incoming UGI or user-name. That way you can avoid the exception 
altogether.

The patch otherwise looks fine to me overall, but I am a little nervous given 
the past week's commits. Can you update the JIRA with any manual tests? If 
MAPREDUCE-5475 is ready, I can give it a try.
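
As a sketch of that suggestion (a hypothetical signature, not the committed 
change), the report-building call would accept the caller's identity instead 
of resolving it internally:
{code}
import org.apache.hadoop.yarn.api.records.ApplicationReport;

public interface RMApp {
  // Hypothetical sketch: the caller passes the connecting user's name in,
  // so no UGI lookup (and no exception) is needed inside the method.
  ApplicationReport createAndGetApplicationReport(String clientUserName);
}
{code}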

 Add user info in the YARN ClientToken
 -

 Key: YARN-707
 URL: https://issues.apache.org/jira/browse/YARN-707
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Jason Lowe
Priority: Blocker
 Fix For: 3.0.0, 2.1.1-beta

 Attachments: YARN-707-20130822.txt, YARN-707-20130827.txt, 
 YARN-707-20130828-2.txt, YARN-707-20130828.txt


 If user info is present in the client token then it can be used to do limited 
 authz in the AM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1098) Separate out stateless services from stateful services in the RM

2013-08-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-1098:
---

Attachment: yarn-1098-approach.patch

Attaching a first-cut patch that separates out the services using approach 2 
from my earlier comment. It doesn't work yet - I am still working through some 
dependency issues.

I would appreciate any early feedback on whether the approach makes sense in 
the first place.

 Separate out stateless services from stateful services in the RM
 

 Key: YARN-1098
 URL: https://issues.apache.org/jira/browse/YARN-1098
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
  Labels: ha
 Attachments: yarn-1098-approach.patch


 From discussion on YARN-1027, it makes sense to separate out services that 
 are stateful and stateless. The stateless services can be HA-agnostic and be 
 run perennially irrespective of whether the RM is in Active/Standby state, 
 while the stateful services need to be aware of HA and be started on 
 transitionToActive() and completely shutdown on transitionToStandby().

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1057) Add mechanism to check validity of a Node to be Added/Excluded

2013-08-28 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1057:


Attachment: YARN-1057.1.patch

Added a checking function for the hostname.

 Add mechanism to check validity of a Node to be Added/Excluded
 --

 Key: YARN-1057
 URL: https://issues.apache.org/jira/browse/YARN-1057
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: yeshavora
Assignee: Xuan Gong
 Attachments: YARN-1057.1.patch


 Yarn does not complain when an invalid hostname like 'invalidhost.com' is 
 passed inside the include/exclude node file (specified by 
 'yarn.resourcemanager.nodes.include-path' or 
 'yarn.resourcemanager.nodes.exclude-path').
 We need a mechanism to check the validity of a hostname before including it 
 in or excluding it from the cluster. It should throw an error / exception 
 when adding/removing an invalid node.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-707) Add user info in the YARN ClientToken

2013-08-28 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13753029#comment-13753029
 ] 

Jason Lowe commented on YARN-707:
-

Thanks for the review, Vinod.

I manually tested this on a secure 4-node cluster with MAPREDUCE-5475 on top of 
the patch.  I verified that a user could submit jobs and the submit client 
could continue to monitor them.  I also verified that another user with VIEW 
access but not MODIFY access could not kill the job, due to the ACL checks 
added in MAPREDUCE-5475.  Finally, by enabling debug logging in the AM, I 
verified that the user name seen by the MRAppMaster for the connecting client 
was the name of the connecting client instead of the app submitter or appId.

As far as the UGI thing goes, I thought about adding it as a parameter.  
However, not all callers have a UGI, so it pushes the problem upwards.  I can 
still make that change if desired.

 Add user info in the YARN ClientToken
 -

 Key: YARN-707
 URL: https://issues.apache.org/jira/browse/YARN-707
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Jason Lowe
Priority: Blocker
 Fix For: 3.0.0, 2.1.1-beta

 Attachments: YARN-707-20130822.txt, YARN-707-20130827.txt, 
 YARN-707-20130828-2.txt, YARN-707-20130828.txt


 If user info is present in the client token then it can be used to do limited 
 authz in the AM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1057) Add mechanism to check validity of a Node to be Added/Excluded

2013-08-28 Thread Hitesh Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13753048#comment-13753048
 ] 

Hitesh Shah commented on YARN-1057:
---

[~yeshavora] Could you give more details on the use-case you are trying to 
address? I am inclined to mark this jira as invalid/wontfix, as there is no 
clear objective for how the RM should handle an invalid entry in the 
include/exclude files. Is the expectation that the RM shut down when an 
invalid entry is added to the file and the refresh command is invoked? The 
use-case that may make more sense is that a connection from a nodemanager 
whose reverse dns lookup fails should be rejected. 

[~xgong] Regarding the patch, what is the objective of filtering the in-memory 
list of included/excluded nodes? If there is a transient dns issue, would this 
mean that the NM will never be allowed to register with the RM? 

 Add mechanism to check validity of a Node to be Added/Excluded
 --

 Key: YARN-1057
 URL: https://issues.apache.org/jira/browse/YARN-1057
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: yeshavora
Assignee: Xuan Gong
 Attachments: YARN-1057.1.patch


 Yarn does not complain while passing an invalid hostname like 
 'invalidhost.com' inside include/exclude node file. (specified by 
 'yarn.resourcemanager.nodes.include-path' or 
 'yarn.resourcemanager.nodes.exclude-path').
 Need to add a mechanism to check the validity of the hostname before 
 including or excluding from cluster. It should throw an error / exception 
 while adding/removing an invalid node.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1057) Add mechanism to check validity of a Node to be Added/Excluded

2013-08-28 Thread Hitesh Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13753050#comment-13753050
 ] 

Hitesh Shah commented on YARN-1057:
---

[~xgong] Likewise, from the patch, it looks like a hostname that fails to 
resolve would not be added to the exclude list. If the include list is empty, 
doesn't that mean the host in question could easily register later on, since 
the exclude list would not have the entry? 

 Add mechanism to check validity of a Node to be Added/Excluded
 --

 Key: YARN-1057
 URL: https://issues.apache.org/jira/browse/YARN-1057
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: yeshavora
Assignee: Xuan Gong
 Attachments: YARN-1057.1.patch


 Yarn does not complain while passing an invalid hostname like 
 'invalidhost.com' inside include/exclude node file. (specified by 
 'yarn.resourcemanager.nodes.include-path' or 
 'yarn.resourcemanager.nodes.exclude-path').
 Need to add a mechanism to check the validity of the hostname before 
 including or excluding from cluster. It should throw an error / exception 
 while adding/removing an invalid node.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-771) AMRMClient support for resource blacklisting

2013-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13753055#comment-13753055
 ] 

Hadoop QA commented on YARN-771:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600445/YARN-771-v3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1787//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1787//console

This message is automatically generated.

 AMRMClient support for resource blacklisting
 -

 Key: YARN-771
 URL: https://issues.apache.org/jira/browse/YARN-771
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha
Assignee: Junping Du
 Attachments: YARN-771-v1.0.patch, YARN-771-v2.patch, YARN-771-v3.patch


 After YARN-750, AMRMClient should support blacklisting via the new YARN APIs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1098) Separate out stateless services from stateful services in the RM

2013-08-28 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13753060#comment-13753060
 ] 

Karthik Kambatla commented on YARN-1098:


While working on the patch, I realized that the stateless services still need 
to be aware of HA: when HA is enabled and the RM is in Standby mode, the 
client-facing services among the stateless services should respond to clients 
that they are not the Active RM.
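
A rough sketch of the separation being discussed, using the Hadoop service 
API (the class and field names here are assumptions, not the actual patch):
{code}
import org.apache.hadoop.service.CompositeService;

public class HATransitions {
  // Stateful (HA-sensitive) services are grouped under one composite;
  // stateless services live outside it and keep running throughout.
  private final CompositeService activeServices =
      new CompositeService("RMActiveServices");

  public void transitionToActive() {
    activeServices.start();  // stateful services run only while Active
  }

  public void transitionToStandby() {
    activeServices.stop();   // completely shut down on Standby
  }
}
{code}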

 Separate out stateless services from stateful services in the RM
 

 Key: YARN-1098
 URL: https://issues.apache.org/jira/browse/YARN-1098
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
  Labels: ha
 Attachments: yarn-1098-approach.patch, yarn-1098-approach.patch


 From discussion on YARN-1027, it makes sense to separate out services that 
 are stateful and stateless. The stateless services can be HA-agnostic and be 
 run perennially irrespective of whether the RM is in Active/Standby state, 
 while the stateful services need to be aware of HA and be started on 
 transitionToActive() and completely shutdown on transitionToStandby().

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1098) Separate out stateless services from stateful services in the RM

2013-08-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-1098:
---

Description: 
From discussion on YARN-1027, it makes sense to separate out services that are 
stateful and stateless. The stateless services can run perennially 
irrespective of whether the RM is in Active/Standby state, while the stateful 
services need to be started on transitionToActive() and completely shut down 
on transitionToStandby().

The external-facing stateless services should respond to the client/AM/NM 
requests depending on whether the RM is Active/Standby.


  was:
From discussion on YARN-1027, it makes sense to separate out services that are 
stateful and stateless. The stateless services can be HA-agnostic and be run 
perennially irrespective of whether the RM is in Active/Standby state, while 
the stateful services need to be aware of HA and be started on 
transitionToActive() and completely shutdown on transitionToStandby().



 Separate out stateless services from stateful services in the RM
 

 Key: YARN-1098
 URL: https://issues.apache.org/jira/browse/YARN-1098
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
  Labels: ha
 Attachments: yarn-1098-approach.patch, yarn-1098-approach.patch


 From discussion on YARN-1027, it makes sense to separate out services that 
 are stateful and stateless. The stateless services can run perennially 
 irrespective of whether the RM is in Active/Standby state, while the stateful 
 services need to be started on transitionToActive() and completely shut down 
 on transitionToStandby().
 The external-facing stateless services should respond to the client/AM/NM 
 requests depending on whether the RM is Active/Standby.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-886) make APPLICATION_STOP consistent with APPLICATION_INIT

2013-08-28 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13753074#comment-13753074
 ] 

Siddharth Seth commented on YARN-886:
-

Essentially, APPLICATION_INIT should only be sent to the auxiliary services 
specified by the user in the startContainer request. Similarly, 
APPLICATION_STOP should only be sent to the auxiliary services specified by 
the user during the startContainer call.
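
A minimal sketch of that symmetry, with hypothetical names (serviceMap, 
notifyInit, notifyStop) rather than actual NodeManager source:
{code}
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

public class AuxEventRouting {
  private final Map<String, Object> serviceMap = new HashMap<String, Object>();

  // Route INIT and STOP to the same targeted set: only the services the
  // user named in the startContainer request's service data.
  void onAppEvent(Map<String, ByteBuffer> serviceData, boolean starting) {
    for (String name : serviceData.keySet()) {
      Object svc = serviceMap.get(name);
      if (svc == null) {
        continue; // unknown service; never broadcast to all services
      }
      if (starting) {
        notifyInit(svc);  // APPLICATION_INIT
      } else {
        notifyStop(svc);  // APPLICATION_STOP
      }
    }
  }

  private void notifyInit(Object svc) { /* hypothetical dispatch */ }

  private void notifyStop(Object svc) { /* hypothetical dispatch */ }
}
{code}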

 make APPLICATION_STOP consistent with APPLICATION_INIT
 --

 Key: YARN-886
 URL: https://issues.apache.org/jira/browse/YARN-886
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications, nodemanager
Affects Versions: 2.0.4-alpha
Reporter: Avner BenHanoch

 Currently, there is an inconsistency between the start/stop behaviour.
 See Siddharth's comment in MAPREDUCE-5329: The start/stop behaviour should 
 be consistent. We shouldn't send the stop to all services.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1075) AsyncDispatcher and ResourceTrackerService violate serviceStart() semantics

2013-08-28 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13753076#comment-13753076
 ] 

Karthik Kambatla commented on YARN-1075:


[~vinodkv], [~ste...@apache.org]: what do you think? Should I still go ahead 
and close this as invalid?

 AsyncDispatcher and ResourceTrackerService violate serviceStart() semantics
 ---

 Key: YARN-1075
 URL: https://issues.apache.org/jira/browse/YARN-1075
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Minor
 Attachments: yarn-1075-1.patch


 IIUC, the serviceStart() implementation of services should start local 
 threads/services first before calling super.serviceStart(). Certain services 
 have this reversed as below - leading to possibilities where the service 
 would be in state STARTED, but in reality might not have started yet.
 {code}
 void serviceStart() {
   super.serviceStart()
   // service-specific logic and start operations
 }
 {code}
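
 As an illustrative counter-sketch (a made-up service, not actual Hadoop 
 source), the ordering the description argues for looks like this:
 {code}
 import org.apache.hadoop.service.AbstractService;

 public class OrderedService extends AbstractService {
   public OrderedService() {
     super("OrderedService");
   }

   @Override
   protected void serviceStart() throws Exception {
     // do the service-specific start work first ...
     super.serviceStart();  // ... so STARTED is reported only once true
   }
 }
 {code}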

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1057) Add mechanism to check validity of a Node to be Added/Excluded

2013-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13753089#comment-13753089
 ] 

Hadoop QA commented on YARN-1057:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600503/YARN-1057.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  org.apache.hadoop.util.TestHostsFileReader

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1788//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/1788//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1788//console

This message is automatically generated.

 Add mechanism to check validity of a Node to be Added/Excluded
 --

 Key: YARN-1057
 URL: https://issues.apache.org/jira/browse/YARN-1057
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.1.0-beta
Reporter: yeshavora
Assignee: Xuan Gong
 Attachments: YARN-1057.1.patch


 Yarn does not complain when an invalid hostname like 'invalidhost.com' is 
 passed inside the include/exclude node file (specified by 
 'yarn.resourcemanager.nodes.include-path' or 
 'yarn.resourcemanager.nodes.exclude-path').
 We need a mechanism to check the validity of a hostname before including it 
 in or excluding it from the cluster. It should throw an error / exception 
 when adding/removing an invalid node.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (YARN-659) RMStateStore's removeApplication APIs should just take an applicationId

2013-08-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla reassigned YARN-659:
-

Assignee: (was: Karthik Kambatla)

 RMStateStore's removeApplication APIs should just take an applicationId
 ---

 Key: YARN-659
 URL: https://issues.apache.org/jira/browse/YARN-659
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Vinod Kumar Vavilapalli

 There is no need to pass in the whole state for removal - just an ID should 
 be enough when an app finishes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1034) Remove experimental in the Fair Scheduler documentation

2013-08-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-1034:
---

Description: The YARN Fair Scheduler is largely stable now, and should no 
longer be declared experimental.

 Remove experimental in the Fair Scheduler documentation
 -

 Key: YARN-1034
 URL: https://issues.apache.org/jira/browse/YARN-1034
 Project: Hadoop YARN
  Issue Type: Task
  Components: documentation, scheduler
Affects Versions: 2.1.0-beta
Reporter: Sandy Ryza
Assignee: Karthik Kambatla
  Labels: doc

 The YARN Fair Scheduler is largely stable now, and should no longer be 
 declared experimental.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1034) Remove experimental in the Fair Scheduler documentation

2013-08-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-1034:
---

Environment: (was: The YARN Fair Scheduler is largely stable now, and 
should no longer be declared experimental.)

 Remove experimental in the Fair Scheduler documentation
 -

 Key: YARN-1034
 URL: https://issues.apache.org/jira/browse/YARN-1034
 Project: Hadoop YARN
  Issue Type: Task
  Components: documentation, scheduler
Affects Versions: 2.1.0-beta
Reporter: Sandy Ryza
Assignee: Karthik Kambatla
  Labels: doc



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1034) Remove experimental in the Fair Scheduler documentation

2013-08-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-1034:
---

Attachment: yarn-1034-1.patch

 Remove experimental in the Fair Scheduler documentation
 -

 Key: YARN-1034
 URL: https://issues.apache.org/jira/browse/YARN-1034
 Project: Hadoop YARN
  Issue Type: Task
  Components: documentation, scheduler
Affects Versions: 2.1.0-beta
Reporter: Sandy Ryza
Assignee: Karthik Kambatla
Priority: Trivial
  Labels: doc
 Attachments: yarn-1034-1.patch


 The YARN Fair Scheduler is largely stable now, and should no longer be 
 declared experimental.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1034) Remove experimental in the Fair Scheduler documentation

2013-08-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-1034:
---

Priority: Trivial  (was: Major)

 Remove experimental in the Fair Scheduler documentation
 -

 Key: YARN-1034
 URL: https://issues.apache.org/jira/browse/YARN-1034
 Project: Hadoop YARN
  Issue Type: Task
  Components: documentation, scheduler
Affects Versions: 2.1.0-beta
Reporter: Sandy Ryza
Assignee: Karthik Kambatla
Priority: Trivial
  Labels: doc
 Attachments: yarn-1034-1.patch


 The YARN Fair Scheduler is largely stable now, and should no longer be 
 declared experimental.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1107) Restart secure RM with recovery enabled while oozie jobs are running causes the RM to fail during startup

2013-08-28 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13753129#comment-13753129
 ] 

Omkar Vinit Joshi commented on YARN-1107:
-

The underlying problem is in the code below. Here we bypass the rpc call if it 
is a local call. However, we were updating localServiceAddress and 
localSecretManager in the ClientRMService.startService call. To fix this, we 
now make this update inside serviceInit. We are making the reasonable 
assumption here that the RM address is static, i.e. specified in the 
configuration (specifically the port).
{code}
private static ApplicationClientProtocol getRmClient(Token<?> token,
    Configuration conf) {
  InetSocketAddress addr = SecurityUtil.getTokenServiceAddr(token);
  if (localSecretManager != null) {
    // return null if it's our token
    if (localServiceAddress.getAddress().isAnyLocalAddress()) {
      if (NetUtils.isLocalAddress(addr.getAddress())
          && addr.getPort() == localServiceAddress.getPort()) {
        return null;
      }
    } else if (addr.equals(localServiceAddress)) {
      return null;
    }
  }
  final YarnRPC rpc = YarnRPC.create(conf);
  return (ApplicationClientProtocol) rpc.getProxy(
      ApplicationClientProtocol.class, addr, conf);
}
{code}

 Restart secure RM with recovery enabled while oozie jobs are running causes 
 the RM to fail during startup
 -

 Key: YARN-1107
 URL: https://issues.apache.org/jira/browse/YARN-1107
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.1.0-beta
Reporter: Arpit Gupta
Assignee: Vinod Kumar Vavilapalli
Priority: Blocker
 Attachments: rm.log


 If a secure RM with recovery enabled is restarted while oozie jobs are 
 running, the RM fails to come up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1034) Remove experimental in the Fair Scheduler documentation

2013-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13753131#comment-13753131
 ] 

Hadoop QA commented on YARN-1034:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600517/yarn-1034-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1789//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1789//console

This message is automatically generated.

 Remove experimental in the Fair Scheduler documentation
 -

 Key: YARN-1034
 URL: https://issues.apache.org/jira/browse/YARN-1034
 Project: Hadoop YARN
  Issue Type: Task
  Components: documentation, scheduler
Affects Versions: 2.1.0-beta
Reporter: Sandy Ryza
Assignee: Karthik Kambatla
Priority: Trivial
  Labels: doc
 Attachments: yarn-1034-1.patch


 The YARN Fair Scheduler is largely stable now, and should no longer be 
 declared experimental.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1090) Job does not get into Pending State

2013-08-28 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-1090:
--

Attachment: YARN-1090.patch

Uploading a patch that renames 'Active Apps' to 'Schedulable Apps' and 
'Pending Apps' to 'Non-Schedulable Apps' in CapacitySchedulerPage, since 
'Active' here actually means schedulable and 'Pending' means non-schedulable 
due to the queue and per-user limits.
It also adds two more metrics, 'Running Apps' and 'Pending Apps', for both 
queue and user on the scheduler UI.

 Job does not get into Pending State
 ---

 Key: YARN-1090
 URL: https://issues.apache.org/jira/browse/YARN-1090
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: yeshavora
Assignee: Jian He
 Attachments: YARN-1090.patch


 When there is no resource available to run a job, the next job should go 
 into a pending state. The RM UI should show the next job as a pending app, 
 and the counter for pending apps should be incremented.
 But currently, the next job stays in the ACCEPTED state with no AM assigned 
 to it, and the pending app count is not incremented. 
 Running 'job status nextjob' shows job state=PREP. 
 $ mapred job -status job_1377122233385_0002
 13/08/21 21:59:23 INFO client.RMProxy: Connecting to ResourceManager at 
 host1/ip1
 Job: job_1377122233385_0002
 Job File: /ABC/.staging/job_1377122233385_0002/job.xml
 Job Tracking URL : http://host1:port1/application_1377122233385_0002/
 Uber job : false
 Number of maps: 0
 Number of reduces: 0
 map() completion: 0.0
 reduce() completion: 0.0
 Job state: PREP
 retired: false
 reason for failure:

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (YARN-1107) Restart secure RM with recovery enabled while oozie jobs are running causes the RM to fail during startup

2013-08-28 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi reassigned YARN-1107:
---

Assignee: Omkar Vinit Joshi  (was: Vinod Kumar Vavilapalli)

 Restart secure RM with recovery enabled while oozie jobs are running causes 
 the RM to fail during startup
 -

 Key: YARN-1107
 URL: https://issues.apache.org/jira/browse/YARN-1107
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.1.0-beta
Reporter: Arpit Gupta
Assignee: Omkar Vinit Joshi
Priority: Blocker
 Attachments: rm.log


 If a secure RM with recovery enabled is restarted while oozie jobs are 
 running, the RM fails to come up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1034) Remove experimental in the Fair Scheduler documentation

2013-08-28 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13753194#comment-13753194
 ] 

Sandy Ryza commented on YARN-1034:
--

+1

 Remove experimental in the Fair Scheduler documentation
 -

 Key: YARN-1034
 URL: https://issues.apache.org/jira/browse/YARN-1034
 Project: Hadoop YARN
  Issue Type: Task
  Components: documentation, scheduler
Affects Versions: 2.1.0-beta
Reporter: Sandy Ryza
Assignee: Karthik Kambatla
Priority: Trivial
  Labels: doc
 Attachments: yarn-1034-1.patch


 The YARN Fair Scheduler is largely stable now, and should no longer be 
 declared experimental.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1034) Remove experimental in the Fair Scheduler documentation

2013-08-28 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated YARN-1034:
-

Fix Version/s: 2.1.1-beta

 Remove experimental in the Fair Scheduler documentation
 -

 Key: YARN-1034
 URL: https://issues.apache.org/jira/browse/YARN-1034
 Project: Hadoop YARN
  Issue Type: Task
  Components: documentation, scheduler
Affects Versions: 2.1.0-beta
Reporter: Sandy Ryza
Assignee: Karthik Kambatla
Priority: Trivial
  Labels: doc
 Fix For: 2.1.1-beta

 Attachments: yarn-1034-1.patch


 The YARN Fair Scheduler is largely stable now, and should no longer be 
 declared experimental.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1080) Improve help message for $ yarn logs

2013-08-28 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1080:


Attachment: YARN-1080.2.patch

 Improve help message for $ yarn logs
 

 Key: YARN-1080
 URL: https://issues.apache.org/jira/browse/YARN-1080
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Attachments: YARN-1080.1.patch, YARN-1080.2.patch


 There are 2 parts I am proposing in this jira. They can be fixed together in 
 one patch.
 1. Standardize the help message for the required parameter of $ yarn logs
 YARN CLI has a logs command ($ yarn logs). The command always requires the 
 -applicationId <arg> parameter. However, the help message of the command 
 does not make this clear: it lists -applicationId as an optional parameter, 
 yet if I don't set it, YARN CLI will complain it is missing. It is better to 
 use the standard required notation used by other Linux commands in the help 
 message. Any user familiar with that convention can then understand more 
 easily that this parameter is needed.
 {code:title=current help message}
 -bash-4.1$ yarn logs
 usage: general options are:
  -applicationId <arg>   ApplicationId (required)
  -appOwner <arg>        AppOwner (assumed to be current user if not
                         specified)
  -containerId <arg>     ContainerId (must be specified if node address is
                         specified)
  -nodeAddress <arg>     NodeAddress in the format nodename:port (must be
                         specified if container id is specified)
 {code}
 {code:title=proposed help message}
 -bash-4.1$ yarn logs
 usage: yarn logs -applicationId <application ID> [OPTIONS]
 general options are:
  -appOwner <arg>        AppOwner (assumed to be current user if not
                         specified)
  -containerId <arg>     ContainerId (must be specified if node address is
                         specified)
  -nodeAddress <arg>     NodeAddress in the format nodename:port (must be
                         specified if container id is specified)
 {code}
 2. Add a description to the help command. As far as I know, a user cannot 
 get logs for a running job. Since I spent some time trying to get logs of 
 running applications, it would be nice to say this in the command 
 description.
 {code:title=proposed help}
 Retrieve logs for completed/killed YARN application
 usage: general options are...
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1080) Improve help message for $ yarn logs

2013-08-28 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13753222#comment-13753222
 ] 

Xuan Gong commented on YARN-1080:
-

bq. You should use Option.setRequired() instead of hardcoding app-id.

Are you referring to this line?
{code}
formatter.printHelp("yarn logs -applicationId <application ID> [OPTIONS]",
    new Options());
{code}

This is just for printing the message (it tries to print the help message as 
close to the proposed help message as possible).
But for the option that is actually used in the command, the input parameter 
is indeed required:
{code}
opts.addOption(APPLICATION_ID_OPTION, true, "ApplicationId (required)");
{code}
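
For reference, a minimal commons-cli sketch of the reviewer's suggestion (not 
the patch itself; the option name is taken from the help text above):
{code}
import org.apache.commons.cli.Option;
import org.apache.commons.cli.Options;

public class LogsCliOptions {
  static Options build() {
    Options opts = new Options();
    // Mark the option required so commons-cli enforces and reports it,
    // instead of hardcoding it into the usage string.
    Option appId = new Option("applicationId", true, "ApplicationId");
    appId.setRequired(true);
    opts.addOption(appId);
    return opts;
  }
}
{code}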

Added test case to test help message in the new patch

 Improve help message for $ yarn logs
 

 Key: YARN-1080
 URL: https://issues.apache.org/jira/browse/YARN-1080
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Attachments: YARN-1080.1.patch, YARN-1080.2.patch


 There are 2 parts I am proposing in this jira. They can be fixed together in 
 one patch.
 1. Standardize the help message for the required parameter of $ yarn logs
 YARN CLI has a logs command ($ yarn logs). The command always requires the 
 -applicationId <arg> parameter. However, the help message of the command 
 does not make this clear: it lists -applicationId as an optional parameter, 
 yet if I don't set it, YARN CLI will complain it is missing. It is better to 
 use the standard required notation used by other Linux commands in the help 
 message. Any user familiar with that convention can then understand more 
 easily that this parameter is needed.
 {code:title=current help message}
 -bash-4.1$ yarn logs
 usage: general options are:
  -applicationId <arg>   ApplicationId (required)
  -appOwner <arg>        AppOwner (assumed to be current user if not
                         specified)
  -containerId <arg>     ContainerId (must be specified if node address is
                         specified)
  -nodeAddress <arg>     NodeAddress in the format nodename:port (must be
                         specified if container id is specified)
 {code}
 {code:title=proposed help message}
 -bash-4.1$ yarn logs
 usage: yarn logs -applicationId <application ID> [OPTIONS]
 general options are:
  -appOwner <arg>        AppOwner (assumed to be current user if not
                         specified)
  -containerId <arg>     ContainerId (must be specified if node address is
                         specified)
  -nodeAddress <arg>     NodeAddress in the format nodename:port (must be
                         specified if container id is specified)
 {code}
 2. Add a description to the help command. As far as I know, a user cannot 
 get logs for a running job. Since I spent some time trying to get logs of 
 running applications, it would be nice to say this in the command 
 description.
 {code:title=proposed help}
 Retrieve logs for completed/killed YARN application
 usage: general options are...
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1107) Restart secure RM with recovery enabled while oozie jobs are running causes the RM to fail during startup

2013-08-28 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-1107:


Attachment: YARN-1107.20130828.1.patch

 Restart secure RM with recovery enabled while oozie jobs are running causes 
 the RM to fail during startup
 -

 Key: YARN-1107
 URL: https://issues.apache.org/jira/browse/YARN-1107
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.1.0-beta
Reporter: Arpit Gupta
Assignee: Omkar Vinit Joshi
Priority: Blocker
 Attachments: rm.log, YARN-1107.20130828.1.patch


 If a secure RM with recovery enabled is restarted while oozie jobs are 
 running, the RM fails to come up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1034) Remove experimental in the Fair Scheduler documentation

2013-08-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13753230#comment-13753230
 ] 

Hudson commented on YARN-1034:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4342 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4342/])
YARN-1034. Remove experimental in the Fair Scheduler documentation. (Karthik 
Kambatla via Sandy Ryza) (sandy: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1518444)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/FairScheduler.apt.vm


 Remove experimental in the Fair Scheduler documentation
 -

 Key: YARN-1034
 URL: https://issues.apache.org/jira/browse/YARN-1034
 Project: Hadoop YARN
  Issue Type: Task
  Components: documentation, scheduler
Affects Versions: 2.1.0-beta
Reporter: Sandy Ryza
Assignee: Karthik Kambatla
Priority: Trivial
  Labels: doc
 Fix For: 2.1.1-beta

 Attachments: yarn-1034-1.patch


 The YARN Fair Scheduler is largely stable now, and should no longer be 
 declared experimental.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1107) Job submitted with Delegation tokenin secured environment causes RM to fail during RM restart

2013-08-28 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-1107:


Summary: Job submitted with Delegation tokenin secured environment causes 
RM to fail during RM restart  (was: Restart secure RM with recovery enabled 
while oozie jobs are running causes the RM to fail during startup)

 Job submitted with Delegation tokenin secured environment causes RM to fail 
 during RM restart
 -

 Key: YARN-1107
 URL: https://issues.apache.org/jira/browse/YARN-1107
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.1.0-beta
Reporter: Arpit Gupta
Assignee: Omkar Vinit Joshi
Priority: Blocker
 Attachments: rm.log, YARN-1107.20130828.1.patch


 If a secure RM with recovery enabled is restarted while Oozie jobs are 
 running, the RM fails to come up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1107) Job submitted with Delegation token in secured environment causes RM to fail during RM restart

2013-08-28 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753232#comment-13753232
 ] 

Omkar Vinit Joshi commented on YARN-1107:
-

updating title..

 Job submitted with Delegation token in secured environment causes RM to fail 
 during RM restart
 --

 Key: YARN-1107
 URL: https://issues.apache.org/jira/browse/YARN-1107
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.1.0-beta
Reporter: Arpit Gupta
Assignee: Omkar Vinit Joshi
Priority: Blocker
 Attachments: rm.log, YARN-1107.20130828.1.patch


 If a secure RM with recovery enabled is restarted while Oozie jobs are 
 running, the RM fails to come up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-1107) Job submitted with Delegation token in secured environment causes RM to fail during RM restart

2013-08-28 Thread Omkar Vinit Joshi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omkar Vinit Joshi updated YARN-1107:


Summary: Job submitted with Delegation token in secured environment causes 
RM to fail during RM restart  (was: Job submitted with Delegation tokenin 
secured environment causes RM to fail during RM restart)

 Job submitted with Delegation token in secured environment causes RM to fail 
 during RM restart
 --

 Key: YARN-1107
 URL: https://issues.apache.org/jira/browse/YARN-1107
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.1.0-beta
Reporter: Arpit Gupta
Assignee: Omkar Vinit Joshi
Priority: Blocker
 Attachments: rm.log, YARN-1107.20130828.1.patch


 If a secure RM with recovery enabled is restarted while Oozie jobs are 
 running, the RM fails to come up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1080) Improve help message for $ yarn logs

2013-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753237#comment-13753237
 ] 

Hadoop QA commented on YARN-1080:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12600529/YARN-1080.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1791//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1791//console

This message is automatically generated.

 Improve help message for $ yarn logs
 

 Key: YARN-1080
 URL: https://issues.apache.org/jira/browse/YARN-1080
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Attachments: YARN-1080.1.patch, YARN-1080.2.patch


 There are 2 parts I am proposing in this jira. They can be fixed together in 
 one patch.
 1. Standardize the help message for the required parameter of $ yarn logs.
 YARN CLI has a logs command ($ yarn logs). The command always requires the 
 -applicationId <arg> parameter. However, the command's help message does not 
 make this clear: it lists -applicationId as an optional parameter, and if I 
 don't set it, YARN CLI complains that it is missing. It is better for the 
 help message to use the standard required-parameter notation of other Linux 
 commands, so that any user familiar with that convention can see at a glance 
 that this parameter is required.
 {code:title=current help message}
 -bash-4.1$ yarn logs
 usage: general options are:
  -applicationId <arg>   ApplicationId (required)
  -appOwner <arg>        AppOwner (assumed to be current user if not
                         specified)
  -containerId <arg>     ContainerId (must be specified if node address is
                         specified)
  -nodeAddress <arg>     NodeAddress in the format nodename:port (must be
                         specified if container id is specified)
 {code}
 {code:title=proposed help message}
 -bash-4.1$ yarn logs
 usage: yarn logs -applicationId <application ID> [OPTIONS]
 general options are:
  -appOwner <arg>        AppOwner (assumed to be current user if not
                         specified)
  -containerId <arg>     ContainerId (must be specified if node address is
                         specified)
  -nodeAddress <arg>     NodeAddress in the format nodename:port (must be
                         specified if container id is specified)
 {code}
 2. Add a description to the command's help output. As far as I know, a user 
 cannot get logs for a running job. Since I spent some time trying to get 
 logs of running applications, it would be nice to say this in the command 
 description.
 {code:title=proposed help}
 Retrieve logs for completed/killed YARN application
 usage: general options are...
 {code}
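 
 The QA report above notes that the patch includes a modified test file. 
 Purely as a hypothetical illustration (not the contents of 
 YARN-1080.2.patch), a self-contained check on the proposed usage line could 
 look like this; the helper below stands in for the real YARN logs CLI class.
 {code:title=hypothetical test sketch, not the actual patch}
 import static org.junit.Assert.assertTrue;
 
 import java.io.ByteArrayOutputStream;
 import java.io.PrintStream;
 import java.io.PrintWriter;
 
 import org.apache.commons.cli.HelpFormatter;
 import org.apache.commons.cli.Options;
 import org.junit.Test;
 
 public class TestLogsUsageSketch {
 
   // Stand-in for the CLI's help printer; the real test would drive the
   // actual YARN logs CLI instead of this helper.
   private static void printUsage(PrintStream out) {
     new HelpFormatter().printHelp(new PrintWriter(out, true), 74,
         "yarn logs -applicationId <application ID> [OPTIONS]",
         "general options are:", new Options(), 1, 3, "");
   }
 
   @Test
   public void usageLineMarksApplicationIdAsRequired() {
     ByteArrayOutputStream sysOut = new ByteArrayOutputStream();
     printUsage(new PrintStream(sysOut));
     assertTrue("usage line should mark -applicationId as required",
         sysOut.toString().contains(
             "yarn logs -applicationId <application ID> [OPTIONS]"));
   }
 }
 {code}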

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-1107) Job submitted with Delegation token in secured environment causes RM to fail during RM restart

2013-08-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753242#comment-13753242
 ] 

Hadoop QA commented on YARN-1107:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12600530/YARN-1107.20130828.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/1790//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/1790//console

This message is automatically generated.

 Job submitted with Delegation token in secured environment causes RM to fail 
 during RM restart
 --

 Key: YARN-1107
 URL: https://issues.apache.org/jira/browse/YARN-1107
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.1.0-beta
Reporter: Arpit Gupta
Assignee: Omkar Vinit Joshi
Priority: Blocker
 Attachments: rm.log, YARN-1107.20130828.1.patch


 If a secure RM with recovery enabled is restarted while Oozie jobs are 
 running, the RM fails to come up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

