Yup, this is the reason Myriad launches a medium-size NodeManager by default
when you launch a ResourceManager.
Did this not happen?
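For reference, the NodeManager instances Myriad brings up alongside the RM are
driven by its config file. A rough sketch of the relevant section of
myriad-config-default.yml (profile names and values are illustrative; check
the file in your build for the exact keys):

```yaml
# Approximate excerpt from myriad-config-default.yml.
# Myriad starts one medium-profile NM so there is always non-zero
# capacity available for an ApplicationMaster container.
nmInstances:
  medium: 1        # at least one NM not subject to fine-grained scaling
profiles:
  zero:            # fine-grained-scaling NMs advertise no fixed capacity
    cpu: 0
    mem: 0
  medium:
    cpu: 2
    mem: 2048
```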

Regards
Swapnil


On Fri, Jun 3, 2016 at 8:37 AM, Darin Johnson <dbjohnson1...@gmail.com>
wrote:

> That is correct: you need at least one node manager with the minimum
> requirements to launch an ApplicationMaster.  Otherwise YARN will throw an
> exception.
>
> On Fri, Jun 3, 2016 at 10:52 AM, yuliya Feldman
> <yufeld...@yahoo.com.invalid
> > wrote:
>
> > I believe you need at least one NM that is not subject to fine-grained
> > scaling.
> > If the total resources on the cluster are less than what a single
> > container needs for the AM, you won't be able to submit any app, as the
> > exception below tells you:
> > (Invalid resource request, requested memory < 0, or requested memory >
> > max configured, requestedMemory=1536, maxMemory=0)
> > I believe that by default, when starting a Myriad cluster, one NM with
> > non-zero capacity should start.
> > In addition, check the RM log to see whether offers with resources are
> > reaching the RM - this info should be in the log.
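To make the failure mode concrete, here is a minimal Python re-creation of
the memory check that YARN's SchedulerUtils.validateResourceRequest performs
(a sketch of the logic, not Hadoop's actual code): the scheduler's maximum
allocation is derived from registered NM capacity, so with only zero-profile
NMs, maxMemory is 0 and any real request is rejected.

```python
# Sketch of YARN's resource-request validation (hypothetical re-creation,
# not the real Hadoop source): a request fails if it asks for negative
# memory or for more memory than the scheduler's configured maximum.

class InvalidResourceRequestException(Exception):
    pass

def validate_resource_request(requested_memory: int, max_memory: int) -> None:
    """Reject memory requests outside the range [0, max_memory]."""
    if requested_memory < 0 or requested_memory > max_memory:
        raise InvalidResourceRequestException(
            f"Invalid resource request, requested memory < 0, or requested "
            f"memory > max configured, requestedMemory={requested_memory}, "
            f"maxMemory={max_memory}")

# With only zero-profile NMs registered, max_memory is 0, so the 1536 MB
# AM container request fails just like in the trace below.
try:
    validate_resource_request(requested_memory=1536, max_memory=0)
except InvalidResourceRequestException as e:
    print(e)
```

Once one NM with a non-zero profile registers, max_memory rises above 1536
and the same request passes validation.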
> >
> >       From: Stephen Gran <stephen.g...@piksel.com>
> >  To: "dev@myriad.incubator.apache.org" <dev@myriad.incubator.apache.org>
> >  Sent: Friday, June 3, 2016 1:29 AM
> >  Subject: problem getting fine grained scaling working
> >
> > Hi,
> >
> > I'm trying to get fine grained scaling going on a test mesos cluster.  I
> > have a single master and 2 agents.  I am running 2 node managers with
> > the zero profile, one per agent.  I can see both of them in the RM UI
> > reporting correctly as having 0 resources.
> >
> > I'm getting stack traces when I try to launch a sample application,
> > though.  I feel like I'm just missing something obvious somewhere - can
> > anyone shed any light?
> >
> > This is on a build of yesterday's git head.
> >
> > Cheers,
> >
> > root@master:/srv/apps/hadoop# bin/yarn jar
> > share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar teragen 10000
> > /outDir
> > 16/06/03 08:23:33 INFO client.RMProxy: Connecting to ResourceManager at
> > master.testing.local/10.0.5.3:8032
> > 16/06/03 08:23:34 INFO terasort.TeraSort: Generating 10000 using 2
> > 16/06/03 08:23:34 INFO mapreduce.JobSubmitter: number of splits:2
> > 16/06/03 08:23:34 INFO mapreduce.JobSubmitter: Submitting tokens for
> > job: job_1464902078156_0001
> > 16/06/03 08:23:35 INFO mapreduce.JobSubmitter: Cleaning up the staging
> > area /tmp/hadoop-yarn/staging/root/.staging/job_1464902078156_0001
> > java.io.IOException:
> > org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException:
> > Invalid resource request, requested memory < 0, or requested memory >
> > max configured, requestedMemory=1536, maxMemory=0
> >         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:268)
> >         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:228)
> >         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:236)
> >         at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:385)
> >         at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:329)
> >         at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:281)
> >         at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:580)
> >         at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:218)
> >         at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:419)
> >         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> >         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> >         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
> >         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
> >         at java.security.AccessController.doPrivileged(Native Method)
> >         at javax.security.auth.Subject.doAs(Subject.java:422)
> >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> >         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
> >
> >         at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:306)
> >         at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:240)
> >         at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
> >         at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
> >         at java.security.AccessController.doPrivileged(Native Method)
> >         at javax.security.auth.Subject.doAs(Subject.java:422)
> >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> >         at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
> >         at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
> >         at org.apache.hadoop.examples.terasort.TeraGen.run(TeraGen.java:301)
> >         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> >         at org.apache.hadoop.examples.terasort.TeraGen.main(TeraGen.java:305)
> >         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >         at java.lang.reflect.Method.invoke(Method.java:497)
> >         at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
> >         at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
> >         at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
> >         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >         at java.lang.reflect.Method.invoke(Method.java:497)
> >         at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> >         at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> > Caused by:
> > org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException:
> > Invalid resource request, requested memory < 0, or requested memory >
> > max configured, requestedMemory=1536, maxMemory=0
> >         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:268)
> >         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:228)
> >         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:236)
> >         at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:385)
> >         at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:329)
> >         at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:281)
> >         at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:580)
> >         at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:218)
> >         at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:419)
> >         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> >         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> >         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
> >         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
> >         at java.security.AccessController.doPrivileged(Native Method)
> >         at javax.security.auth.Subject.doAs(Subject.java:422)
> >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> >         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
> >
> >         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> >         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >         at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
> >         at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
> >         at org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:101)
> >         at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.submitApplication(ApplicationClientProtocolPBClientImpl.java:239)
> >         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >         at java.lang.reflect.Method.invoke(Method.java:497)
> >         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
> >         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> >         at com.sun.proxy.$Proxy13.submitApplication(Unknown Source)
> >         at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:253)
> >         at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:290)
> >         at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:290)
> >         ... 24 more
> > Caused by:
> > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException):
> > Invalid resource request, requested memory < 0, or requested memory >
> > max configured, requestedMemory=1536, maxMemory=0
> >         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:268)
> >         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:228)
> >         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:236)
> >         at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:385)
> >         at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:329)
> >         at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:281)
> >         at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:580)
> >         at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:218)
> >         at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:419)
> >         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> >         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> >         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
> >         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
> >         at java.security.AccessController.doPrivileged(Native Method)
> >         at javax.security.auth.Subject.doAs(Subject.java:422)
> >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> >         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
> >
> >         at org.apache.hadoop.ipc.Client.call(Client.java:1475)
> >         at org.apache.hadoop.ipc.Client.call(Client.java:1412)
> >         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> >         at com.sun.proxy.$Proxy12.submitApplication(Unknown Source)
> >         at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.submitApplication(ApplicationClientProtocolPBClientImpl.java:236)
> >         ... 34 more
> >
> >
> > Cheers,
> > --
> > Stephen Gran
> > Senior Technical Architect
> >
> > picture the possibilities | piksel.com
> >
> >
> >
>
