[jira] [Commented] (YARN-333) On app submission, have RM ask scheduler for queue name

2013-01-22 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559548#comment-13559548
 ] 

Tom White commented on YARN-333:


This is not a problem in MR, is it, since the queue is always set? But I can see 
that it would be needed in general.

The approach looks fine, although I think it would be simpler just to have a 
getDefaultQueueName() method on YarnScheduler.
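
For illustration, the call site in RMAppManager might then look something like 
this (a hypothetical sketch; getDefaultQueueName() is only a proposal at this 
point, and the surrounding code is illustrative):
{noformat}
// Hypothetical sketch of the proposed method in use: RMAppManager asks
// the scheduler instead of hard-coding "default".
String queue = submissionContext.getQueue();
if (queue == null || queue.isEmpty()) {
  queue = scheduler.getDefaultQueueName(); // proposed YarnScheduler method
}
{noformat}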


 On app submission, have RM ask scheduler for queue name
 ---

 Key: YARN-333
 URL: https://issues.apache.org/jira/browse/YARN-333
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Attachments: YARN-333.patch


 Currently, if an app is submitted without a queue, RMAppManager sets the 
 RMApp's queue to "default".
 A scheduler may wish to make its own decision on which queue to place an app 
 in if none is specified. For example, when the fair scheduler 
 user-as-default-queue config option is set to true, and an app is submitted 
 with no queue specified, the fair scheduler should assign the app to a queue 
 with the user's name.
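
 For reference, that option is set in yarn-site.xml; assuming the usual 
 property name, it looks like:
 {noformat}
 <!-- Place apps submitted with no queue into a queue named after the user -->
 <property>
   <name>yarn.scheduler.fair.user-as-default-queue</name>
   <value>true</value>
 </property>
 {noformat}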



[jira] [Commented] (YARN-319) Submit a job to a queue that not allowed in fairScheduler, client will hold forever.

2013-01-22 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559583#comment-13559583
 ] 

Tom White commented on YARN-319:


 When waiting for the final application status to become FAILED, you can use a 
 smaller sleep inside a loop; TestNodeManagerShutdown has something like this 
 on line 141.
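
For reference, that pattern is roughly the following (an illustrative sketch; 
the accessor name and the timeout are assumptions, not the actual test code):
{noformat}
// Poll every 100 ms for up to ~20 s instead of one long fixed sleep.
int waitCount = 0;
while (app.getFinalApplicationStatus() != FinalApplicationStatus.FAILED
    && waitCount++ < 200) {
  Thread.sleep(100);
}
Assert.assertEquals(FinalApplicationStatus.FAILED,
    app.getFinalApplicationStatus());
{noformat}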

Would it be possible to use a synchronous event handler in the tests so that we 
don't have to poll?

 Submit a job to a queue that not allowed in fairScheduler, client will hold 
 forever.
 

 Key: YARN-319
 URL: https://issues.apache.org/jira/browse/YARN-319
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, scheduler
Affects Versions: 2.0.2-alpha
Reporter: shenhong
Assignee: shenhong
 Fix For: 2.0.3-alpha

 Attachments: YARN-319-1.patch, YARN-319-2.patch, YARN-319.patch


 When the RM uses the FairScheduler and a client submits a job to a queue that 
 does not allow the user to submit jobs, the client will hang forever.



[jira] [Commented] (YARN-231) Add persistent store implementation for RMStateStore

2013-01-22 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559824#comment-13559824
 ] 

Bikas Saha commented on YARN-231:
-

Hitesh, createConnection() is private and called from within synchronized 
functions. ZKActions are likewise invoked from within synchronized functions. 
Let me know if you see a place where that is not happening.
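
In other words, the invariant being claimed is roughly the following (a 
hypothetical sketch; the field and method names are illustrative, not the 
patch's actual code):
{noformat}
// The ZK handle is only reached through synchronized methods, so the
// private createConnection() is already serialized by the object lock.
public synchronized void storeState(String path, byte[] data) throws Exception {
  ensureConnected();               // may end up calling createConnection()
  zkClient.setData(path, data, -1);
}

private void ensureConnected() throws Exception {
  if (zkClient == null) {
    createConnection();            // only reached under the object lock
  }
}
{noformat}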

 Add persistent store implementation for RMStateStore
 

 Key: YARN-231
 URL: https://issues.apache.org/jira/browse/YARN-231
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: YARN-231.1.patch, YARN-231.2.patch, YARN-231.3.FS.patch


 Add stores that write RM state data to ZooKeeper and FileSystem 



[jira] [Commented] (YARN-231) Add persistent store implementation for RMStateStore

2013-01-22 Thread Hitesh Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559952#comment-13559952
 ] 

Hitesh Shah commented on YARN-231:
--

A couple of minor comments on the FS patch:

FSRMStateStore class:
  - needs a default value for fsWorkingPath in case 
YarnConfiguration.FS_RM_STATE_STORE_URI is not defined (see the sketch after 
this list)
  - should the call FileStatus[] childNodes = fs.listStatus(fsRootDirPath); use 
the listStatus overload that accepts a path filter?
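
A sketch of the first suggestion (hypothetical; the fallback value shown is 
illustrative only):
{noformat}
// Fall back to a default working path when the URI is not configured.
Path fsWorkingPath = new Path(conf.get(
    YarnConfiguration.FS_RM_STATE_STORE_URI,
    "file:///tmp/yarn/system/rmstore"));
{noformat}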



 Add persistent store implementation for RMStateStore
 

 Key: YARN-231
 URL: https://issues.apache.org/jira/browse/YARN-231
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: YARN-231.1.patch, YARN-231.2.patch, YARN-231.3.FS.patch


 Add stores that write RM state data to ZooKeeper and FileSystem 



[jira] [Commented] (YARN-277) Use AMRMClient in DistributedShell to exemplify the approach

2013-01-22 Thread Hitesh Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13560007#comment-13560007
 ] 

Hitesh Shah commented on YARN-277:
--

@Bikas, thanks for the patch. @Sid, thanks for the review. Committed to trunk 
and branch-2. 

 Use AMRMClient in DistributedShell to exemplify the approach
 

 Key: YARN-277
 URL: https://issues.apache.org/jira/browse/YARN-277
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Fix For: 3.0.0

 Attachments: YARN-277.1.patch, YARN-277.2.patch






[jira] [Commented] (YARN-277) Use AMRMClient in DistributedShell to exemplify the approach

2013-01-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13560009#comment-13560009
 ] 

Hudson commented on YARN-277:
-

Integrated in Hadoop-trunk-Commit #3271 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3271/])
YARN-277. Use AMRMClient in DistributedShell to exemplify the approach. 
Contributed by Bikas Saha (Revision 1437156)

 Result = SUCCESS
hitesh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1437156
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java


 Use AMRMClient in DistributedShell to exemplify the approach
 

 Key: YARN-277
 URL: https://issues.apache.org/jira/browse/YARN-277
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Fix For: 2.0.3-alpha

 Attachments: YARN-277.1.patch, YARN-277.2.patch






[jira] [Commented] (YARN-231) Add persistent store implementation for RMStateStore

2013-01-22 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13560042#comment-13560042
 ] 

Bikas Saha commented on YARN-231:
-

There is a default value in yarn-default.xml
{noformat}
+<name>yarn.resourcemanager.fs.rm-state-store.uri</name>
+<value>${hadoop.tmp.dir}/yarn/system/rmstore</value>
{noformat}

Not quite sure about the listStatus API
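
For reference, FileSystem does have an overload that applies a filter while 
listing (a sketch; the filter body is just an example):
{noformat}
FileStatus[] childNodes = fs.listStatus(fsRootDirPath, new PathFilter() {
  @Override
  public boolean accept(Path path) {
    return !path.getName().startsWith(".");  // example: skip temp files
  }
});
{noformat}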

 Add persistent store implementation for RMStateStore
 

 Key: YARN-231
 URL: https://issues.apache.org/jira/browse/YARN-231
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: YARN-231.1.patch, YARN-231.2.patch, YARN-231.3.FS.patch


 Add stores that write RM state data to ZooKeeper and FileSystem 



[jira] [Created] (YARN-351) ResourceManager NPE during allocateNodeLocal

2013-01-22 Thread Lohit Vijayarenu (JIRA)
Lohit Vijayarenu created YARN-351:
-

 Summary: ResourceManager NPE during allocateNodeLocal
 Key: YARN-351
 URL: https://issues.apache.org/jira/browse/YARN-351
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.0.2-alpha
Reporter: Lohit Vijayarenu
Priority: Critical


The ResourceManager seems to die due to the NPE shown below in the 
FairScheduler. This is easily reproduced on a cluster with multiple racks and 
nodes within each rack; a simple job with multiple tasks on each node triggers 
the NPE in the RM.

Without understanding the actual workings, I tried adding a null check, which 
looked like it solved the problem. But I am not sure yet whether that is the 
right behavior.

I feel this is serious enough to be marked as a blocker, what do you guys think?

{noformat}
2013-01-22 20:07:45,073 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: 
allocate: applicationId=application_1358885180585_0001 
container=container_1358885180585_0001_01_000830 host=x.x.x.x:36186
2013-01-22 20:07:45,074 FATAL 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
handling event type NODE_UPDATE to the scheduler
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.allocateNodeLocal(AppSchedulingInfo.java:259)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.allocate(AppSchedulingInfo.java:220)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSSchedulerApp.allocate(FSSchedulerApp.java:544)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AppSchedulable.assignContainer(AppSchedulable.java:250)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AppSchedulable.assignContainer(AppSchedulable.java:318)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.assignContainer(FSLeafQueue.java:180)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.nodeUpdate(FairScheduler.java:796)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:859)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:98)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:375)
at java.lang.Thread.run(Thread.java:662)
2013-01-22 20:07:45,075 INFO 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
{noformat}
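
The guard described would presumably have roughly this shape (purely 
illustrative; the lookupRequest helper is hypothetical, not the actual YARN 
code):
{noformat}
// Hypothetical null guard of the kind the reporter tried in
// AppSchedulingInfo.allocateNodeLocal (class and line per the trace).
ResourceRequest nodeLocalRequest = lookupRequest(priority, node.getHostName());
if (nodeLocalRequest == null) {
  return;  // nothing node-local outstanding for this host; skip it
}
nodeLocalRequest.setNumContainers(nodeLocalRequest.getNumContainers() - 1);
{noformat}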



[jira] [Resolved] (YARN-351) ResourceManager NPE during allocateNodeLocal

2013-01-22 Thread Lohit Vijayarenu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lohit Vijayarenu resolved YARN-351.
---

Resolution: Duplicate

Thanks [~sandyr]. It does look like this is solved in YARN-335. I was running 
a build from one or two days before your fix.

 ResourceManager NPE during allocateNodeLocal
 

 Key: YARN-351
 URL: https://issues.apache.org/jira/browse/YARN-351
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.0.2-alpha
Reporter: Lohit Vijayarenu
Priority: Critical

 The ResourceManager seems to die due to the NPE shown below in the 
 FairScheduler. This is easily reproduced on a cluster with multiple racks and 
 nodes within each rack; a simple job with multiple tasks on each node triggers 
 the NPE in the RM.
 Without understanding the actual workings, I tried adding a null check, which 
 looked like it solved the problem. But I am not sure yet whether that is the 
 right behavior.
 I feel this is serious enough to be marked as a blocker, what do you guys think?
 {noformat}
 2013-01-22 20:07:45,073 DEBUG 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: 
 allocate: applicationId=application_1358885180585_0001 
 container=container_1358885180585_0001_01_000830 host=x.x.x.x:36186
 2013-01-22 20:07:45,074 FATAL 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
 handling event type NODE_UPDATE to the scheduler
 java.lang.NullPointerException
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.allocateNodeLocal(AppSchedulingInfo.java:259)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.allocate(AppSchedulingInfo.java:220)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSSchedulerApp.allocate(FSSchedulerApp.java:544)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AppSchedulable.assignContainer(AppSchedulable.java:250)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AppSchedulable.assignContainer(AppSchedulable.java:318)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.assignContainer(FSLeafQueue.java:180)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.nodeUpdate(FairScheduler.java:796)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:859)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:98)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:375)
 at java.lang.Thread.run(Thread.java:662)
 2013-01-22 20:07:45,075 INFO 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
 {noformat}



[jira] [Created] (YARN-352) Inconsistent picture of how a container was killed when querying RM and NM in case of preemption

2013-01-22 Thread Hitesh Shah (JIRA)
Hitesh Shah created YARN-352:


 Summary: Inconsistent picture of how a container was killed when 
querying RM and NM in case of preemption
 Key: YARN-352
 URL: https://issues.apache.org/jira/browse/YARN-352
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah


When the RM preempts a container, it records the exit status as -100. However, 
the NM registers the preempted container's exit status as simply killed by an 
external signal (SIGTERM or SIGKILL).

When the AM queries the RM and NM for the same container's status, it will get 
two different values.

When killing a container, the exit reason should likely be better defined via 
an exit status code that the AM can act on, in addition to diagnostic messages 
that can carry more detailed information (though those are probably not 
programmatically interpretable by the AM).



[jira] [Commented] (YARN-352) Inconsistent picture of how a container was killed when querying RM and NM in case of preemption

2013-01-22 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13560192#comment-13560192
 ] 

Sandy Ryza commented on YARN-352:
-

Perhaps the exit code should not be overloaded to contain this kind of 
information, and a ContainerStatus should contain a separate enum to report on 
why the container was killed, as opposed to what it returned when it died?
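
A hypothetical sketch of that separation (none of these names exist in the 
current API; they only illustrate the suggestion):
{noformat}
// Report why a container was killed separately from the exit code it
// returned when it died.
public enum ContainerKillReason {
  NOT_KILLED,            // the container exited on its own
  PREEMPTED,             // the RM reclaimed the resources
  KILLED_BY_APP_MASTER,  // the AM requested the kill
  KILLED_BY_FRAMEWORK    // the NM/RM killed it for other reasons
}
// ContainerStatus would then expose something like:
//   ContainerKillReason getKillReason();
{noformat}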

 Inconsistent picture of how a container was killed when querying RM and NM in 
 case of preemption
 

 Key: YARN-352
 URL: https://issues.apache.org/jira/browse/YARN-352
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah

 When the RM preempts a container, it records the exit status as -100. 
 However, the NM registers the preempted container's exit status as simply 
 killed by an external signal (SIGTERM or SIGKILL).
 When the AM queries the RM and NM for the same container's status, it will 
 get two different values.
 When killing a container, the exit reason should likely be better defined via 
 an exit status code that the AM can act on, in addition to diagnostic 
 messages that can carry more detailed information (though those are probably 
 not programmatically interpretable by the AM).



[jira] [Updated] (YARN-231) Add FS-based persistent store implementation for RMStateStore

2013-01-22 Thread Hitesh Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah updated YARN-231:
-

Summary: Add FS-based persistent store implementation for RMStateStore  
(was: Add persistent store implementation for RMStateStore)

 Add FS-based persistent store implementation for RMStateStore
 -

 Key: YARN-231
 URL: https://issues.apache.org/jira/browse/YARN-231
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: YARN-231.1.patch, YARN-231.2.patch, YARN-231.3.FS.patch


 Add stores that write RM state data to ZooKeeper and FileSystem 



[jira] [Updated] (YARN-231) Add FS-based persistent store implementation for RMStateStore

2013-01-22 Thread Hitesh Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah updated YARN-231:
-

Description: Add stores that write RM state data to FileSystem   (was: Add 
stores that write RM state data to ZooKeeper and FileSystem )

 Add FS-based persistent store implementation for RMStateStore
 -

 Key: YARN-231
 URL: https://issues.apache.org/jira/browse/YARN-231
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: YARN-231.1.patch, YARN-231.2.patch, YARN-231.3.FS.patch


 Add stores that write RM state data to FileSystem 



[jira] [Updated] (YARN-231) Add FS-based persistent store implementation for RMStateStore

2013-01-22 Thread Hitesh Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah updated YARN-231:
-

Description: Add store that writes RM state data to FileSystem   (was: Add 
stores that write RM state data to FileSystem )

 Add FS-based persistent store implementation for RMStateStore
 -

 Key: YARN-231
 URL: https://issues.apache.org/jira/browse/YARN-231
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: YARN-231.1.patch, YARN-231.2.patch, YARN-231.3.FS.patch


 Add store that writes RM state data to FileSystem 



[jira] [Created] (YARN-353) Add Zookeeper-based store implementation for RMStateStore

2013-01-22 Thread Hitesh Shah (JIRA)
Hitesh Shah created YARN-353:


 Summary: Add Zookeeper-based store implementation for RMStateStore
 Key: YARN-353
 URL: https://issues.apache.org/jira/browse/YARN-353
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Hitesh Shah


Add store that writes RM state data to ZK




[jira] [Commented] (YARN-231) Add FS-based persistent store implementation for RMStateStore

2013-01-22 Thread Hitesh Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13560267#comment-13560267
 ] 

Hitesh Shah commented on YARN-231:
--

FYI, filed YARN-353 for the ZK-based implementation.

 Add FS-based persistent store implementation for RMStateStore
 -

 Key: YARN-231
 URL: https://issues.apache.org/jira/browse/YARN-231
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: YARN-231.1.patch, YARN-231.2.patch, YARN-231.3.FS.patch


 Add store that writes RM state data to FileSystem 



[jira] [Commented] (YARN-319) Submit a job to a queue that not allowed in fairScheduler, client will hold forever.

2013-01-22 Thread shenhong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13560312#comment-13560312
 ] 

shenhong commented on YARN-319:
---

 Would it be possible to use a synchronous event handler in the tests so that 
 we don't have to poll?

I don't know how to do that.

 Submit a job to a queue that not allowed in fairScheduler, client will hold 
 forever.
 

 Key: YARN-319
 URL: https://issues.apache.org/jira/browse/YARN-319
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, scheduler
Affects Versions: 2.0.2-alpha
Reporter: shenhong
Assignee: shenhong
 Fix For: 2.0.3-alpha

 Attachments: YARN-319-1.patch, YARN-319-2.patch, YARN-319-3.patch, 
 YARN-319.patch


 When the RM uses the FairScheduler and a client submits a job to a queue that 
 does not allow the user to submit jobs, the client will hang forever.



[jira] [Commented] (YARN-319) Submit a job to a queue that not allowed in fairScheduler, client will hold forever.

2013-01-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13560323#comment-13560323
 ] 

Hadoop QA commented on YARN-319:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12566075/YARN-319-3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/362//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/362//console

This message is automatically generated.

 Submit a job to a queue that not allowed in fairScheduler, client will hold 
 forever.
 

 Key: YARN-319
 URL: https://issues.apache.org/jira/browse/YARN-319
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, scheduler
Affects Versions: 2.0.2-alpha
Reporter: shenhong
Assignee: shenhong
 Fix For: 2.0.3-alpha

 Attachments: YARN-319-1.patch, YARN-319-2.patch, YARN-319-3.patch, 
 YARN-319.patch


 When the RM uses the FairScheduler and a client submits a job to a queue that 
 does not allow the user to submit jobs, the client will hang forever.



[jira] [Updated] (YARN-354) add join in WebAppProxyServer

2013-01-22 Thread liang xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang xie updated YARN-354:
---

Attachment: YARN-354.txt

The attached patch is against trunk; please let me know if it needs a rebase.
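
Per the title, the shape of the fix is to block main() on a join so the daemon 
does not exit immediately (a simplified sketch, not the patch itself; the 
lifecycle calls follow the usual Hadoop service pattern):
{noformat}
// Without blocking after start(), main() returns and the proxy
// daemon's JVM exits immediately.
public static void main(String[] args) {
  WebAppProxyServer proxy = new WebAppProxyServer();
  proxy.init(new YarnConfiguration());
  proxy.start();
  proxy.join();  // the added call: block until the server is stopped
}
{noformat}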

 add join in WebAppProxyServer
 -

 Key: YARN-354
 URL: https://issues.apache.org/jira/browse/YARN-354
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.0.3-alpha, 0.23.6, 0.23.7
Reporter: liang xie
Priority: Critical
 Attachments: YARN-354.txt


 Please see HDFS-4426 for details. I found that the YARN WebAppProxyServer is 
 broken by HADOOP-9181 as well; here is the hot fix, which I verified manually 
 on our test cluster.
 I really apologize for bringing about such trouble...



[jira] [Commented] (YARN-354) add join in WebAppProxyServer

2013-01-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13560445#comment-13560445
 ] 

Hadoop QA commented on YARN-354:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12566100/YARN-354.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/363//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/363//console

This message is automatically generated.

 add join in WebAppProxyServer
 -

 Key: YARN-354
 URL: https://issues.apache.org/jira/browse/YARN-354
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.0.3-alpha, 0.23.6, 0.23.7
Reporter: liang xie
Priority: Critical
 Attachments: YARN-354.txt


 Please see HDFS-4426 for details. I found that the YARN WebAppProxyServer is 
 broken by HADOOP-9181 as well; here is the hot fix, which I verified manually 
 on our test cluster.
 I really apologize for bringing about such trouble...
