[jira] [Created] (YARN-391) detabify LCEResourcesHandler classes

2013-02-11 Thread Steve Loughran (JIRA)
Steve Loughran created YARN-391:
---

 Summary: detabify LCEResourcesHandler classes
 Key: YARN-391
 URL: https://issues.apache.org/jira/browse/YARN-391
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Affects Versions: 2.0.3-alpha
Reporter: Steve Loughran
Priority: Trivial


The LCEResourcesHandler classes from YARN-3 have some tab chars that snuck 
into the source tree. Fix this before that code starts getting branched off 
and it's too late.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-391) detabify LCEResourcesHandler classes

2013-02-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated YARN-391:


Attachment: YARN-391.patch

s/tab/2 spaces/ with a strip of trailing tabs and lines containing nothing 
but tabs and spaces
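
For anyone reproducing the cleanup, a minimal sketch of that transformation in 
Java; the file path argument and in-place rewrite are assumptions for 
illustration, not taken from the attached patch:

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;

public class Detab {
  public static void main(String[] args) throws IOException {
    Path src = Paths.get(args[0]); // e.g. one of the LCEResourcesHandler sources
    List<String> cleaned = Files.readAllLines(src, StandardCharsets.UTF_8).stream()
        .map(line -> line.replace("\t", "  "))       // every tab becomes 2 spaces
        .map(line -> line.replaceAll("[ \t]+$", "")) // drop trailing tabs/spaces;
        .collect(Collectors.toList());               // whitespace-only lines go empty
    Files.write(src, cleaned, StandardCharsets.UTF_8);
  }
}
{code}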

 detabify LCEResourcesHandler classes
 

 Key: YARN-391
 URL: https://issues.apache.org/jira/browse/YARN-391
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Affects Versions: 2.0.3-alpha
Reporter: Steve Loughran
Priority: Trivial
 Attachments: YARN-391.patch


 The LCEResourcesHandler classes from YARN-3 have some tab chars that snuck 
 into the source tree. Fix this before that code starts getting branched off 
 and it's too late.



[jira] [Commented] (YARN-391) detabify LCEResourcesHandler classes

2013-02-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13575794#comment-13575794
 ] 

Hadoop QA commented on YARN-391:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12568819/YARN-391.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/400//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/400//console

This message is automatically generated.

 detabify LCEResourcesHandler classes
 

 Key: YARN-391
 URL: https://issues.apache.org/jira/browse/YARN-391
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Affects Versions: 2.0.3-alpha
Reporter: Steve Loughran
Priority: Trivial
 Fix For: 2.0.4-beta

 Attachments: YARN-391.patch


 The LCEResourcesHandler classes from YARN-3 have some tab chars that snuck 
 into the source tree. Fix this before that code starts getting branched off 
 and it's too late.



[jira] [Updated] (YARN-196) Nodemanager if started before starting Resource manager is getting shutdown.But if both RM and NM are started and then after if RM is going down,NM is retrying for the RM.

2013-02-11 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-196:
---

Attachment: YARN-196.3.patch

 Nodemanager if started before starting Resource manager is getting 
 shutdown.But if both RM and NM are started and then after if RM is going 
 down,NM is retrying for the RM.
 ---

 Key: YARN-196
 URL: https://issues.apache.org/jira/browse/YARN-196
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 3.0.0, 2.0.0-alpha
Reporter: Ramgopal N
Assignee: Xuan Gong
 Attachments: MAPREDUCE-3676.patch, YARN-196.1.patch, 
 YARN-196.2.patch, YARN-196.3.patch


 If NM is started before starting the RM ,NM is shutting down with the 
 following error
 {code}
 ERROR org.apache.hadoop.yarn.service.CompositeService: Error starting 
 services org.apache.hadoop.yarn.server.nodemanager.NodeManager
 org.apache.avro.AvroRuntimeException: 
 java.lang.reflect.UndeclaredThrowableException
   at 
 org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.start(NodeStatusUpdaterImpl.java:149)
   at 
 org.apache.hadoop.yarn.service.CompositeService.start(CompositeService.java:68)
   at 
 org.apache.hadoop.yarn.server.nodemanager.NodeManager.start(NodeManager.java:167)
   at 
 org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:242)
 Caused by: java.lang.reflect.UndeclaredThrowableException
   at 
 org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:66)
   at 
 org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:182)
   at 
 org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.start(NodeStatusUpdaterImpl.java:145)
   ... 3 more
 Caused by: com.google.protobuf.ServiceException: java.net.ConnectException: 
 Call From HOST-10-18-52-230/10.18.52.230 to HOST-10-18-52-250:8025 failed on 
 connection exception: java.net.ConnectException: Connection refused; For more 
 details see:  http://wiki.apache.org/hadoop/ConnectionRefused
   at 
 org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:131)
   at $Proxy23.registerNodeManager(Unknown Source)
   at 
 org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:59)
   ... 5 more
 Caused by: java.net.ConnectException: Call From 
 HOST-10-18-52-230/10.18.52.230 to HOST-10-18-52-250:8025 failed on connection 
 exception: java.net.ConnectException: Connection refused; For more details 
 see:  http://wiki.apache.org/hadoop/ConnectionRefused
   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:857)
   at org.apache.hadoop.ipc.Client.call(Client.java:1141)
   at org.apache.hadoop.ipc.Client.call(Client.java:1100)
   at 
 org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:128)
   ... 7 more
 Caused by: java.net.ConnectException: Connection refused
   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
   at 
 sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
   at 
 org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:659)
   at 
 org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:469)
   at 
 org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:563)
   at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:211)
   at org.apache.hadoop.ipc.Client.getConnection(Client.java:1247)
   at org.apache.hadoop.ipc.Client.call(Client.java:1117)
   ... 9 more
 2012-01-16 15:04:13,336 WARN org.apache.hadoop.yarn.event.AsyncDispatcher: 
 AsyncDispatcher thread interrupted
 java.lang.InterruptedException
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:1899)
   at 
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1934)
   at 
 java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:76)
   at java.lang.Thread.run(Thread.java:619)
 2012-01-16 15:04:13,337 INFO org.apache.hadoop.yarn.service.AbstractService: 
 Service:Dispatcher is stopped.
 2012-01-16 15:04:13,392 INFO org.mortbay.log: Stopped 
 SelectChannelConnector@0.0.0.0:
 2012-01-16 

[jira] [Commented] (YARN-196) Nodemanager if started before starting Resource manager is getting shutdown.But if both RM and NM are started and then after if RM is going down,NM is retrying for the RM

2013-02-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13576156#comment-13576156
 ] 

Hadoop QA commented on YARN-196:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12568891/YARN-196.3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/401//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/401//console

This message is automatically generated.

 Nodemanager if started before starting Resource manager is getting 
 shutdown.But if both RM and NM are started and then after if RM is going 
 down,NM is retrying for the RM.
 ---

 Key: YARN-196
 URL: https://issues.apache.org/jira/browse/YARN-196
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 3.0.0, 2.0.0-alpha
Reporter: Ramgopal N
Assignee: Xuan Gong
 Attachments: MAPREDUCE-3676.patch, YARN-196.1.patch, 
 YARN-196.2.patch, YARN-196.3.patch


 If NM is started before starting the RM ,NM is shutting down with the 
 following error
 {code}
 ERROR org.apache.hadoop.yarn.service.CompositeService: Error starting 
 services org.apache.hadoop.yarn.server.nodemanager.NodeManager
 org.apache.avro.AvroRuntimeException: 
 java.lang.reflect.UndeclaredThrowableException
   at 
 org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.start(NodeStatusUpdaterImpl.java:149)
   at 
 org.apache.hadoop.yarn.service.CompositeService.start(CompositeService.java:68)
   at 
 org.apache.hadoop.yarn.server.nodemanager.NodeManager.start(NodeManager.java:167)
   at 
 org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:242)
 Caused by: java.lang.reflect.UndeclaredThrowableException
   at 
 org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:66)
   at 
 org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:182)
   at 
 org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.start(NodeStatusUpdaterImpl.java:145)
   ... 3 more
 Caused by: com.google.protobuf.ServiceException: java.net.ConnectException: 
 Call From HOST-10-18-52-230/10.18.52.230 to HOST-10-18-52-250:8025 failed on 
 connection exception: java.net.ConnectException: Connection refused; For more 
 details see:  http://wiki.apache.org/hadoop/ConnectionRefused
   at 
 org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:131)
   at $Proxy23.registerNodeManager(Unknown Source)
   at 
 org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:59)
   ... 5 more
 Caused by: java.net.ConnectException: Call From 
 HOST-10-18-52-230/10.18.52.230 to HOST-10-18-52-250:8025 failed on connection 
 exception: java.net.ConnectException: Connection refused; For more details 
 see:  http://wiki.apache.org/hadoop/ConnectionRefused
   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:857)
   at org.apache.hadoop.ipc.Client.call(Client.java:1141)
   at org.apache.hadoop.ipc.Client.call(Client.java:1100)
   at 
 org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:128)
   ... 7 more
 Caused by: java.net.ConnectException: Connection refused
   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
   at 
 

[jira] [Commented] (YARN-365) Each NM heartbeat should not generate and event for the Scheduler

2013-02-11 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13576208#comment-13576208
 ] 

Siddharth Seth commented on YARN-365:
-

Xuan, took a quick look at the patch. I'm not sure why the NodeStatusUpdate 
event needs to be generated in all the additional cases. It may just be 
sufficient to drop the stored node updates - or even let them be processed by 
the event which is already in the queue.
Also, within the schedulers - the events can be aggregated into a single list 
and processed, instead of processing them per heartbeat.
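
A rough sketch of that batching idea; the class and method names below are 
hypothetical, not the actual scheduler internals:

{code}
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration only: queue per-heartbeat node updates and let the
// scheduler drain them as one batch, instead of one scheduler event per heartbeat.
class NodeUpdateBatcher {
  private final List<NodeUpdate> pending = new ArrayList<NodeUpdate>();

  // Called as NM heartbeats arrive: record the update, do not schedule yet.
  synchronized void add(NodeUpdate update) {
    pending.add(update);
  }

  // Called once per scheduling pass: take everything queued so far.
  synchronized List<NodeUpdate> drain() {
    List<NodeUpdate> batch = new ArrayList<NodeUpdate>(pending);
    pending.clear();
    return batch;
  }
}

// Stand-in for whatever state a node heartbeat carries.
class NodeUpdate { }
{code}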

 Each NM heartbeat should not generate and event for the Scheduler
 -

 Key: YARN-365
 URL: https://issues.apache.org/jira/browse/YARN-365
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager, scheduler
Affects Versions: 0.23.5
Reporter: Siddharth Seth
Assignee: Xuan Gong
 Attachments: Prototype2.txt, Prototype3.txt, YARN-365.1.patch, 
 YARN-365.2.patch, YARN-365.3.patch, YARN-365.4.patch


 Follow up from YARN-275
 https://issues.apache.org/jira/secure/attachment/12567075/Prototype.txt



[jira] [Created] (YARN-392) Make it possible to schedule to specific nodes without dropping locality

2013-02-11 Thread Bikas Saha (JIRA)
Bikas Saha created YARN-392:
---

 Summary: Make it possible to schedule to specific nodes without 
dropping locality
 Key: YARN-392
 URL: https://issues.apache.org/jira/browse/YARN-392
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha


Currently it's not possible to specify scheduling requests for specific nodes 
and nowhere else. The RM automatically relaxes locality to rack and * and 
assigns non-specified machines to the app.
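
For context, the knob this issue asks for eventually shipped as a relaxLocality 
flag on ResourceRequest (Hadoop 2.1 and later); a sketch of a strictly 
node-local request against that later API, with an illustrative hostname and 
container size:

{code}
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class NodeLocalRequest {
  public static void main(String[] args) {
    // relaxLocality=false pins the request to the named node; without it the
    // RM widens the request to the rack and then to "*".
    ResourceRequest req = ResourceRequest.newInstance(
        Priority.newInstance(1),
        "host-10-18-52-230",            // a specific node (example hostname)
        Resource.newInstance(1024, 1),  // 1 GB, 1 vcore
        1,                              // one container
        false);                         // do NOT relax locality
    System.out.println(req);
  }
}
{code}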



[jira] [Created] (YARN-393) * is overloaded for locality as well as resource demand

2013-02-11 Thread Bikas Saha (JIRA)
Bikas Saha created YARN-393:
---

 Summary: * is overloaded for locality as well as resource demand
 Key: YARN-393
 URL: https://issues.apache.org/jira/browse/YARN-393
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha


At present the RM scheduler uses * to detect if an app needs resources to 
be allocated. It also uses * as an indication of locality to schedule anywhere 
in the cluster.
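
To make the overload concrete: in the records API, * is the ResourceRequest.ANY 
resource name, so a single request at ANY simultaneously signals outstanding 
demand and permission to place anywhere. A sketch (assuming the Hadoop 2.1+ 
factory methods):

{code}
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class StarOverload {
  public static void main(String[] args) {
    // One "*" request carries two meanings at once:
    //   1. demand: numContainers > 0 says the app still wants resources;
    //   2. locality: resource name ANY ("*") says any node will do.
    ResourceRequest any = ResourceRequest.newInstance(
        Priority.newInstance(1),
        ResourceRequest.ANY,            // the literal "*"
        Resource.newInstance(1024, 1),
        3);                             // app still needs 3 containers
    System.out.println(any.getResourceName()); // prints "*"
  }
}
{code}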



[jira] [Updated] (YARN-393) In RM * is overloaded for locality as well as resource demand

2013-02-11 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated YARN-393:


Summary: In RM * is overloaded for locality as well as resource demand  
(was: * is overloaded for locality as well as resource demand)

 In RM * is overloaded for locality as well as resource demand
 -

 Key: YARN-393
 URL: https://issues.apache.org/jira/browse/YARN-393
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha

 At present the RM scheduler uses * to detect if an app needs resources 
 to be allocated. It also uses * as an indication of locality to schedule 
 anywhere in the cluster.



[jira] [Assigned] (YARN-392) Make it possible to schedule to specific nodes without dropping locality

2013-02-11 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza reassigned YARN-392:
---

Assignee: Sandy Ryza

Bikas, if you haven't started work on this, I'd be interested in taking a crack 
at it.


 Make it possible to schedule to specific nodes without dropping locality
 

 Key: YARN-392
 URL: https://issues.apache.org/jira/browse/YARN-392
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha
Assignee: Sandy Ryza

 Currently it's not possible to specify scheduling requests for specific nodes 
 and nowhere else. The RM automatically relaxes locality to rack and * and 
 assigns non-specified machines to the app.



[jira] [Commented] (YARN-392) Make it possible to schedule to specific nodes without dropping locality

2013-02-11 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13576299#comment-13576299
 ] 

Bikas Saha commented on YARN-392:
-

Since you have already gone ahead and assigned it to yourself, why don't you 
take a shot.

If YARN-393 gets solved then my guess is that this jira might get automatically 
resolved. However, YARN-393 is even more complex and potentially hard to fix.

It would be good if you could post your ideas/approach first instead of a patch. 
This jira might be tricky and so discussing alternatives and agreeing on one of 
them would be a good exercise. I will try to post an alternative to this jira 
too.

 Make it possible to schedule to specific nodes without dropping locality
 

 Key: YARN-392
 URL: https://issues.apache.org/jira/browse/YARN-392
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha
Assignee: Sandy Ryza

 Currently it's not possible to specify scheduling requests for specific nodes 
 and nowhere else. The RM automatically relaxes locality to rack and * and 
 assigns non-specified machines to the app.



[jira] [Updated] (YARN-386) [Umbrella] YARN API cleanup and improvements

2013-02-11 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated YARN-386:


Summary: [Umbrella] YARN API cleanup and improvements  (was: [Umbrella] 
YARN API cleanup)

 [Umbrella] YARN API cleanup and improvements
 

 Key: YARN-386
 URL: https://issues.apache.org/jira/browse/YARN-386
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Vinod Kumar Vavilapalli

 This is the umbrella ticket to capture any and every API cleanup that we wish 
 to do before YARN can be deemed beta/stable. Doing this API cleanup now and 
 ASAP will help us escape the pain of supporting bad APIs in beta/stable 
 releases.



[jira] [Commented] (YARN-395) RM should have a way to disable scheduling to a set of nodes

2013-02-11 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13576302#comment-13576302
 ] 

Bikas Saha commented on YARN-395:
-

This might be as simple as specifying a resource request with -1 count for that 
location. Or not :)
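
For the record, this capability later shipped as an explicit blacklist in the 
allocate path rather than a -1 count; a sketch against that eventual API 
(ResourceBlacklistRequest, Hadoop 2.1 and later; the hostname is illustrative):

{code}
import java.util.Arrays;
import java.util.Collections;

import org.apache.hadoop.yarn.api.records.ResourceBlacklistRequest;

public class BlacklistNodes {
  public static void main(String[] args) {
    // "Schedule to A, B and C but never to D": blacklist D so the RM skips it
    // when placing this application's requests (hostname is illustrative).
    ResourceBlacklistRequest blacklist = ResourceBlacklistRequest.newInstance(
        Arrays.asList("host-d"),          // nodes to add to the blacklist
        Collections.<String>emptyList()); // nothing to remove
    System.out.println(blacklist.getBlacklistAdditions());
  }
}
{code}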

 RM should have a way to disable scheduling to a set of nodes
 

 Key: YARN-395
 URL: https://issues.apache.org/jira/browse/YARN-395
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha

 There should be a way to say schedule to A, B and C but never to D.



[jira] [Assigned] (YARN-393) In RM * is overloaded for locality as well as resource demand

2013-02-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy reassigned YARN-393:
--

Assignee: Arun C Murthy

 In RM * is overloaded for locality as well as resource demand
 -

 Key: YARN-393
 URL: https://issues.apache.org/jira/browse/YARN-393
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha
Assignee: Arun C Murthy

 At present the RM scheduler uses * to detect if an app needs resources 
 to be allocated. It also uses * as an indication of locality to schedule 
 anywhere in the cluster.



[jira] [Assigned] (YARN-395) RM should have a way to disable scheduling to a set of nodes

2013-02-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy reassigned YARN-395:
--

Assignee: Arun C Murthy

 RM should have a way to disable scheduling to a set of nodes
 

 Key: YARN-395
 URL: https://issues.apache.org/jira/browse/YARN-395
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha
Assignee: Arun C Murthy

 There should be a way to say schedule to A, B and C but never to D.



[jira] [Updated] (YARN-392) Make it possible to schedule to specific nodes without dropping locality

2013-02-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated YARN-392:
---

Issue Type: Improvement  (was: Sub-task)
Parent: (was: YARN-386)

 Make it possible to schedule to specific nodes without dropping locality
 

 Key: YARN-392
 URL: https://issues.apache.org/jira/browse/YARN-392
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Sandy Ryza

 Currently it's not possible to specify scheduling requests for specific nodes 
 and nowhere else. The RM automatically relaxes locality to rack and * and 
 assigns non-specified machines to the app.



[jira] [Updated] (YARN-392) Make it possible to schedule to specific nodes without dropping locality

2013-02-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated YARN-392:
---

Issue Type: Sub-task  (was: Improvement)
Parent: YARN-397

 Make it possible to schedule to specific nodes without dropping locality
 

 Key: YARN-392
 URL: https://issues.apache.org/jira/browse/YARN-392
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bikas Saha
Assignee: Sandy Ryza

 Currently it's not possible to specify scheduling requests for specific nodes 
 and nowhere else. The RM automatically relaxes locality to rack and * and 
 assigns non-specified machines to the app.



[jira] [Commented] (YARN-386) [Umbrella] YARN API cleanup

2013-02-11 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13576401#comment-13576401
 ] 

Arun C Murthy commented on YARN-386:


Let's keep this limited to API cleanup, and use YARN-397 to track scheduler API 
enhancements.

 [Umbrella] YARN API cleanup
 ---

 Key: YARN-386
 URL: https://issues.apache.org/jira/browse/YARN-386
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Vinod Kumar Vavilapalli

 This is the umbrella ticket to capture any and every API cleanup that we wish 
 to do before YARN can be deemed beta/stable. Doing this API cleanup now and 
 ASAP will help us escape the pain of supporting bad APIs in beta/stable 
 releases.



[jira] [Updated] (YARN-393) In RM * is overloaded for locality as well as resource demand

2013-02-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated YARN-393:
---

Issue Type: Bug  (was: Sub-task)
Parent: (was: YARN-386)

 In RM * is overloaded for locality as well as resource demand
 -

 Key: YARN-393
 URL: https://issues.apache.org/jira/browse/YARN-393
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Arun C Murthy

 At present the RM scheduler uses * to detect if an app needs resources 
 to be allocated. It also uses * as an indication of locality to schedule 
 anywhere in the cluster.



[jira] [Updated] (YARN-394) RM should be able to return requests that it cannot fulfill

2013-02-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated YARN-394:
---

Issue Type: Improvement  (was: Sub-task)
Parent: (was: YARN-386)

 RM should be able to return requests that it cannot fulfill
 ---

 Key: YARN-394
 URL: https://issues.apache.org/jira/browse/YARN-394
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Bikas Saha

 Currently, the RM has no way of returning requests that cannot be met. For 
 example, if the app wants a specific node and that node dies, the RM should 
 return that request instead of holding on to it indefinitely. Currently, 
 since every request can be met at * locality, such a situation is hard to 
 repro. It can, however, happen that all nodes in a cluster become unavailable. 
 At that point, there is no way the RM can inform the apps of its inability 
 to allocate requests.



[jira] [Updated] (YARN-397) RM Scheduler api enhancements

2013-02-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated YARN-397:
---

Summary: RM Scheduler api enhancements  (was: RM api enhancements)

 RM Scheduler api enhancements
 -

 Key: YARN-397
 URL: https://issues.apache.org/jira/browse/YARN-397
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Arun C Murthy

 Umbrella jira tracking enhancements to RM APIs.



[jira] [Created] (YARN-398) Allow white-list and black-list of resources

2013-02-11 Thread Arun C Murthy (JIRA)
Arun C Murthy created YARN-398:
--

 Summary: Allow white-list and black-list of resources
 Key: YARN-398
 URL: https://issues.apache.org/jira/browse/YARN-398
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun C Murthy
Assignee: Arun C Murthy


Allow white-list and black-list of resources in scheduler api.
