[jira] [Resolved] (MYRIAD-198) Remove optionals when sane defaults are available

2016-06-08 Thread DarinJ (JIRA)

 [ https://issues.apache.org/jira/browse/MYRIAD-198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

DarinJ resolved MYRIAD-198.
---
   Resolution: Fixed
Fix Version/s: Myriad 0.3.0

MYRIAD-198

> Remove optionals when sane defaults are available
> -------------------------------------------------
>
> Key: MYRIAD-198
> URL: https://issues.apache.org/jira/browse/MYRIAD-198
> Project: Myriad
>  Issue Type: Bug
>  Components: Executor, Scheduler
>Affects Versions: Myriad 0.2.0
>Reporter: DarinJ
>Assignee: John Yost
>Priority: Minor
>  Labels: easyfix, newbie
> Fix For: Myriad 0.3.0
>
>
> Currently we overuse Optionals in the config and then use an or method in
> various factories later.  In many cases, having the configuration return a
> default when the parameter wasn't specified would create cleaner code.  For
> instance:
> {quote}
> Optional<Boolean> getCgroups() {
>   return Optional.fromNullable(cgroups);
> }
> {quote}
> vs
> {quote}
> Boolean getCgroups() {
>   return cgroups != null ? cgroups : false;
> }
> {quote}
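> To make the call-site impact concrete, a minimal before/after sketch (the
> {{cfg}} instance and {{cgroupsEnabled}} variable are hypothetical, assuming
> Guava's Optional as above):
> {quote}
> // Before: every caller supplies the default through Guava's or():
> Boolean cgroupsEnabled = cfg.getCgroups().or(false);
> // After: the configuration owns the default; callers just read the value:
> Boolean cgroupsEnabled = cfg.getCgroups();
> {quote}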



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: problem getting fine grained scaling working

2016-06-08 Thread Stephen Gran
Hi,

Thanks for doing the update.  Let's see if I contribute a few more times 
- if it becomes a pain for you / others to gatekeep me, we can revisit 
access then.

Cheers,

On 08/06/16 13:49, Darin Johnson wrote:
> Will do today, if you'd like to help with the documentation I could give
> you access.
>
> On Wed, Jun 8, 2016 at 3:14 AM, Stephen Gran wrote:
>
>> Hi,
>>
>> Can someone with access please correct the screenshot here:
>> https://cwiki.apache.org/confluence/display/MYRIAD/Fine-grained+Scaling
>>
>> This gives the strong impression that you don't need an NM with non-zero
>> resources.  I think this is what initially steered me down the wrong path.
>>
>> Cheers,
>>
>> On 03/06/16 16:38, Darin Johnson wrote:
>>> That is correct, you need at least one node manager with the minimum
>>> requirements to launch an ApplicationMaster.  Otherwise YARN will throw
>>> an exception.
>>>
>>> On Fri, Jun 3, 2016 at 10:52 AM, yuliya Feldman wrote:
>>>>
>>>> I believe you need at least one NM that is not subject to fine-grained
>>>> scaling.
>>>> So far, if the total resources on the cluster are less than what a single
>>>> container needs for the AM, you won't be able to submit any app, as the
>>>> exception below tells you.
>>>> (Invalid resource request, requested memory < 0, or requested memory >
>>>> max configured, requestedMemory=1536, maxMemory=0
>>>>   at)
>>>> I believe that when starting a Myriad cluster, one NM with non-zero
>>>> capacity should start by default.
>>>> In addition, check the RM log to see whether offers with resources are
>>>> coming to the RM; this info should be in the log.
>>>>
>>>> From: Stephen Gran
>>>> To: "dev@myriad.incubator.apache.org" <dev@myriad.incubator.apache.org>
>>>> Sent: Friday, June 3, 2016 1:29 AM
>>>> Subject: problem getting fine grained scaling working
>>>>
>>>> Hi,
>>>>
>>>> I'm trying to get fine-grained scaling going on a test Mesos cluster.  I
>>>> have a single master and 2 agents.  I am running 2 node managers with
>>>> the zero profile, one per agent.  I can see both of them in the RM UI
>>>> reporting correctly as having 0 resources.
>>>>
>>>> I'm getting stack traces when I try to launch a sample application,
>>>> though.  I feel like I'm just missing something obvious somewhere - can
>>>> anyone shed any light?
>>>>
>>>> This is on a build of yesterday's git head.
>>>>
>>>> Cheers,
>>>>
>>>> root@master:/srv/apps/hadoop# bin/yarn jar
>>>> share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar teragen 1
>>>> /outDir
>>>> 16/06/03 08:23:33 INFO client.RMProxy: Connecting to ResourceManager at
>>>> master.testing.local/10.0.5.3:8032
>>>> 16/06/03 08:23:34 INFO terasort.TeraSort: Generating 1 using 2
>>>> 16/06/03 08:23:34 INFO mapreduce.JobSubmitter: number of splits:2
>>>> 16/06/03 08:23:34 INFO mapreduce.JobSubmitter: Submitting tokens for
>>>> job: job_1464902078156_0001
>>>> 16/06/03 08:23:35 INFO mapreduce.JobSubmitter: Cleaning up the staging
>>>> area /tmp/hadoop-yarn/staging/root/.staging/job_1464902078156_0001
>>>> java.io.IOException:
>>>> org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException:
>>>> Invalid resource request, requested memory < 0, or requested memory >
>>>> max configured, requestedMemory=1536, maxMemory=0
>>>>   at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:268)
>>>>   at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:228)
>>>>   at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:236)
>>>>   at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:385)
>>>>   at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:329)
>>>>   at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:281)
>>>>   at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:580)
>>>>   at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:218)
>>>>   at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:419)
>>>>   at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>>>>   at
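
For anyone who hits the same "maxMemory=0" error: the fix discussed above is
to make sure at least one NodeManager runs with a non-zero profile. Below is a
minimal sketch of flexing one up through Myriad's REST API; the
/api/cluster/flexup endpoint, the default port 8192, and the "small" profile
name are assumptions based on the Myriad docs, so adjust them for your
cluster.

  import java.io.OutputStream;
  import java.net.HttpURLConnection;
  import java.net.URL;
  import java.nio.charset.StandardCharsets;

  public class FlexUpExample {
    public static void main(String[] args) throws Exception {
      // Myriad's REST endpoint on the ResourceManager host (hypothetical values).
      URL url = new URL("http://master.testing.local:8192/api/cluster/flexup");
      HttpURLConnection conn = (HttpURLConnection) url.openConnection();
      conn.setRequestMethod("PUT");
      conn.setRequestProperty("Content-Type", "application/json");
      conn.setDoOutput(true);
      // Request one NodeManager from a profile with non-zero cpu/mem so YARN
      // has somewhere to place the ApplicationMaster.
      byte[] body = "{\"instances\": 1, \"profile\": \"small\"}"
          .getBytes(StandardCharsets.UTF_8);
      try (OutputStream os = conn.getOutputStream()) {
        os.write(body);
      }
      System.out.println("flexup returned HTTP " + conn.getResponseCode());
    }
  }

Equivalently, per yuliya's note, configuring at least one non-zero NM instance
in the Myriad config (the nmInstances section, if your version has it) should
give the cluster standing capacity for the AM at startup.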

Re: problem getting fine grained scaling working

2016-06-08 Thread Darin Johnson
Will do today, if you'd like to help with the documentation I could give
you access.

On Wed, Jun 8, 2016 at 3:14 AM, Stephen Gran wrote:

> Hi,
>
> Can someone with access please correct the screenshot here:
> https://cwiki.apache.org/confluence/display/MYRIAD/Fine-grained+Scaling
>
> This gives the strong impression that you don't need an NM with non-zero
> resources.  I think this is what initially steered me down the wrong path.
>
> Cheers,
>
> On 03/06/16 16:38, Darin Johnson wrote:
> > That is correct, you need at least one node manager with the minimum
> > requirements to launch an ApplicationMaster.  Otherwise YARN will throw
> > an exception.
> >
> > On Fri, Jun 3, 2016 at 10:52 AM, yuliya Feldman wrote:
> >
> >> I believe you need at least one NM that is not subject to fine-grained
> >> scaling.
> >> So far, if the total resources on the cluster are less than what a single
> >> container needs for the AM, you won't be able to submit any app, as the
> >> exception below tells you.
> >> (Invalid resource request, requested memory < 0, or requested memory >
> >> max configured, requestedMemory=1536, maxMemory=0
> >>   at)
> >> I believe that when starting a Myriad cluster, one NM with non-zero
> >> capacity should start by default.
> >> In addition, check the RM log to see whether offers with resources are
> >> coming to the RM; this info should be in the log.
> >>
> >> From: Stephen Gran
> >> To: "dev@myriad.incubator.apache.org" <dev@myriad.incubator.apache.org>
> >> Sent: Friday, June 3, 2016 1:29 AM
> >> Subject: problem getting fine grained scaling working
> >>
> >> Hi,
> >>
> >> I'm trying to get fine-grained scaling going on a test Mesos cluster.  I
> >> have a single master and 2 agents.  I am running 2 node managers with
> >> the zero profile, one per agent.  I can see both of them in the RM UI
> >> reporting correctly as having 0 resources.
> >>
> >> I'm getting stack traces when I try to launch a sample application,
> >> though.  I feel like I'm just missing something obvious somewhere - can
> >> anyone shed any light?
> >>
> >> This is on a build of yesterday's git head.
> >>
> >> Cheers,
> >>
> >> root@master:/srv/apps/hadoop# bin/yarn jar
> >> share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar teragen 1
> >> /outDir
> >> 16/06/03 08:23:33 INFO client.RMProxy: Connecting to ResourceManager at
> >> master.testing.local/10.0.5.3:8032
> >> 16/06/03 08:23:34 INFO terasort.TeraSort: Generating 1 using 2
> >> 16/06/03 08:23:34 INFO mapreduce.JobSubmitter: number of splits:2
> >> 16/06/03 08:23:34 INFO mapreduce.JobSubmitter: Submitting tokens for
> >> job: job_1464902078156_0001
> >> 16/06/03 08:23:35 INFO mapreduce.JobSubmitter: Cleaning up the staging
> >> area /tmp/hadoop-yarn/staging/root/.staging/job_1464902078156_0001
> >> java.io.IOException:
> >> org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException:
> >> Invalid resource request, requested memory < 0, or requested memory >
> >> max configured, requestedMemory=1536, maxMemory=0
> >>   at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:268)
> >>   at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:228)
> >>   at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:236)
> >>   at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:385)
> >>   at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:329)
> >>   at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:281)
> >>   at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:580)
> >>   at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:218)
> >>   at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:419)
> >>   at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> >>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> >>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
> >>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
> >>   at java.security.AccessController.doPrivileged(Native Method)
> >>   at javax.security.auth.Subject.doAs(Subject.java:422)
> >>   at

Re: problem getting fine grained scaling working

2016-06-08 Thread Stephen Gran
Hi,

Can someone with access please correct the screenshot here:
https://cwiki.apache.org/confluence/display/MYRIAD/Fine-grained+Scaling

This gives the strong impression that you don't need an NM with non-zero 
resources.  I think this is what initially steered me down the wrong path.

Cheers,

On 03/06/16 16:38, Darin Johnson wrote:
> That is correct, you need at least one node manager with the minimum
> requirements to launch an ApplicationMaster.  Otherwise YARN will throw an
> exception.
>
> On Fri, Jun 3, 2016 at 10:52 AM, yuliya Feldman wrote:
>
>> I believe you need at least one NM that is not subject to fine-grained
>> scaling.
>> So far, if the total resources on the cluster are less than what a single
>> container needs for the AM, you won't be able to submit any app, as the
>> exception below tells you.
>> (Invalid resource request, requested memory < 0, or requested memory >
>> max configured, requestedMemory=1536, maxMemory=0
>>   at)
>> I believe that when starting a Myriad cluster, one NM with non-zero
>> capacity should start by default.
>> In addition, check the RM log to see whether offers with resources are
>> coming to the RM; this info should be in the log.
>>
>> From: Stephen Gran
>> To: "dev@myriad.incubator.apache.org" <dev@myriad.incubator.apache.org>
>> Sent: Friday, June 3, 2016 1:29 AM
>> Subject: problem getting fine grained scaling working
>>
>> Hi,
>>
>> I'm trying to get fine-grained scaling going on a test Mesos cluster.  I
>> have a single master and 2 agents.  I am running 2 node managers with
>> the zero profile, one per agent.  I can see both of them in the RM UI
>> reporting correctly as having 0 resources.
>>
>> I'm getting stack traces when I try to launch a sample application,
>> though.  I feel like I'm just missing something obvious somewhere - can
>> anyone shed any light?
>>
>> This is on a build of yesterday's git head.
>>
>> Cheers,
>>
>> root@master:/srv/apps/hadoop# bin/yarn jar
>> share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar teragen 1
>> /outDir
>> 16/06/03 08:23:33 INFO client.RMProxy: Connecting to ResourceManager at
>> master.testing.local/10.0.5.3:8032
>> 16/06/03 08:23:34 INFO terasort.TeraSort: Generating 1 using 2
>> 16/06/03 08:23:34 INFO mapreduce.JobSubmitter: number of splits:2
>> 16/06/03 08:23:34 INFO mapreduce.JobSubmitter: Submitting tokens for
>> job: job_1464902078156_0001
>> 16/06/03 08:23:35 INFO mapreduce.JobSubmitter: Cleaning up the staging
>> area /tmp/hadoop-yarn/staging/root/.staging/job_1464902078156_0001
>> java.io.IOException:
>> org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException:
>> Invalid resource request, requested memory < 0, or requested memory >
>> max configured, requestedMemory=1536, maxMemory=0
>>   at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:268)
>>   at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:228)
>>   at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:236)
>>   at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:385)
>>   at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:329)
>>   at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:281)
>>   at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:580)
>>   at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:218)
>>   at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:419)
>>   at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>>   at java.security.AccessController.doPrivileged(Native Method)
>>   at javax.security.auth.Subject.doAs(Subject.java:422)
>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>>
>>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:306)
>>   at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:240)
>>   at