Re: mesos-execute cmd

2018-06-12 Thread Abel Souza
Did you mean through the ‘mesos-execute’ command, or is it general Mesos behavior?

Best,

/Abel Souza

> On Jun 13, 2018, at 02:04, Qian Zhang  wrote:
> 
> It is possible to use multiple offers from a single agent node to launch a 
> task, but I do not think you can use multiple offers from different agent 
> nodes to launch a task.
> 
> 
> Regards,
> Qian Zhang
> 
>> On Tue, Jun 12, 2018 at 9:12 PM, Abel Souza  wrote:
>> Hello,
>> 
>> I believe this question relates to the framework used by the mesos-execute 
>> command (available by default in a Mesos installation):
>> 
>> When I request more cores than are available on any single node, 
>> mesos-execute automatically turns down all offers made by Mesos and hangs 
>> forever. For example: each agent node in my cluster has 8 cores, and when I 
>> request 9 cores through mesos-execute --resources='cpus:9', the command 
>> waits forever. But if I execute mesos-execute --resources='cpus:8', 
>> tasks start executing right away.
>> 
>> So I would like to know if there is a way to enable mesos-execute to 
>> handle situations where multiple nodes are needed to satisfy a resource 
>> request. If so, what would be needed?
>> Thank you,
>> 
>> /Abel Souza
> 


Re: mesos-execute cmd

2018-06-12 Thread Qian Zhang
It is possible to use multiple offers from a single agent node to launch a
task, but I do not think you can use multiple offers from different agent
nodes to launch a task.
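
As an aside, offers from the same agent can be combined in a single accept
call. Below is a minimal sketch using the C++ scheduler driver; the
makeTask() helper and the callback body are illustrative assumptions, not
code from this thread:

#include <vector>

#include <mesos/resources.hpp>
#include <mesos/scheduler.hpp>
#include <mesos/type_utils.hpp>  // operator== for SlaveID.

using namespace mesos;

// Called by the driver with a batch of offers; if two offers come from
// the same agent, combine them into one LAUNCH operation.
void resourceOffers(SchedulerDriver* driver, const std::vector<Offer>& offers)
{
  if (offers.size() >= 2 && offers[0].slave_id() == offers[1].slave_id()) {
    Offer::Operation launch;
    launch.set_type(Offer::Operation::LAUNCH);

    // makeTask() is a hypothetical helper that builds a TaskInfo sized
    // to the combined resources of both offers.
    launch.mutable_launch()->add_task_infos()->CopyFrom(
        makeTask(Resources(offers[0].resources()) +
                 Resources(offers[1].resources())));

    // Both offer IDs must belong to the same agent for this to be valid.
    driver->acceptOffers({offers[0].id(), offers[1].id()}, {launch});
  }
}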


Regards,
Qian Zhang

On Tue, Jun 12, 2018 at 9:12 PM, Abel Souza  wrote:

> Hello,
>
> I believe this question relates to the framework used by the mesos-execute
> command (available by default in a Mesos installation):
>
> When I request more cores than are available on any single node,
> mesos-execute automatically turns down all offers made by Mesos and hangs
> forever. For example: each agent node in my cluster has 8 cores, and when
> I request 9 cores through mesos-execute --resources='cpus:9', the command
> waits forever. But if I execute mesos-execute --resources='cpus:8', tasks
> start executing right away.
>
> So I would like to know if there is a way to enable mesos-execute to
> handle situations where multiple nodes are needed to satisfy a resource
> request. If so, what would be needed?
>
> Thank you,
>
> /Abel Souza
>


Re: Proposing change to the allocatable check in the allocator

2018-06-12 Thread Greg Mann
Hi all,
We had a nice discussion about this in the API working group meeting today.
I agree that it's a good idea to do our best to make this change compatible
with future updates to the Request call and/or quota. I think it would be
beneficial to have a meeting in a few days to brainstorm some ideas; please
let me know if you would like to be included in that meeting and I will add
you to an invite!

Cheers,
Greg


On Tue, Jun 12, 2018 at 8:06 AM, Alex Rukletsov  wrote:

> Instead of the master flag, why not a master API call? That would allow
> updating the value without restarting the master.
>
> Another thought is that we should explain to operators how and when to use
> this knob. For example, if they observe a behavioural pattern A, then it
> means B is happening, and tuning the knob to C might help.
>
> On Tue, Jun 12, 2018 at 7:36 AM, Jie Yu  wrote:
>
>> I would suggest we also consider the possibility of adding per-framework
>> control on `min_allocatable_resources`.
>>
>> If we want to consider supporting a per-framework setting, we should
>> probably model this as a protobuf, rather than free-form JSON. The same
>> protobuf can be reused for the master flag, the framework API, or even for
>> supporting Resource Request in the future. Something like the following:
>>
>> message ResourceQuantityPredicate {
>>   enum Type {
>> SCALAR_GE,
>>   }
>>   optional Type type;
>>   optional Value.Scalar scalar;
>> }
>> message ResourceRequirement {
>>   required string resource_name;
>>   oneof predicates {
>> ResourceQuantityPredicate quantity;
>>   }
>> }
>> message ResourceRequirementList {
>>   // All requirements MUST be met.
>>   repeated ResourceRequirement requirements;
>> }
>>
>> // Resource request API.
>> message Request {
>>   repeated ResourceRequirementList accepted;
>> }
>>
>> // `allocatable()`
>> message MinimalAllocatableResources {
>>   repeated ResourceRequirementList accepted;
>> }
>>
>> On Mon, Jun 11, 2018 at 3:47 PM, Meng Zhu  wrote:
>>
>> > Hi:
>> >
>> > The allocatable
>> > <https://github.com/apache/mesos/blob/master/src/master/allocator/mesos/hierarchical.cpp#L2471-L2479>
>> > check in the allocator (shown below) was originally introduced to help
>> > alleviate the situation where a framework receives some resources, but no
>> > cpu/memory, and thus cannot launch a task.
>> >
>> >
>> > constexpr double MIN_CPUS = 0.01;
>> > constexpr Bytes MIN_MEM = Megabytes(32);
>> >
>> > bool HierarchicalAllocatorProcess::allocatable(
>> >     const Resources& resources)
>> > {
>> >   Option<double> cpus = resources.cpus();
>> >   Option<Bytes> mem = resources.mem();
>> >
>> >   return (cpus.isSome() && cpus.get() >= MIN_CPUS) ||
>> >          (mem.isSome() && mem.get() >= MIN_MEM);
>> > }
>> >
>> >
>> > Issues
>> >
>> > However, a couple of issues surrounding the check have surfaced lately.
>> >
>> > - MESOS-8935 Quota limit "chopping" can lead to cpu-only and
>> >   memory-only offers.
>> >
>> > We introduced fine-grained quota allocation (MESOS-7099) in Mesos 1.5.
>> > When we allocate resources to a role, we "chop" the available resources
>> > of the agent up to the quota limit for the role. However, this has the
>> > unintended consequence of creating cpu-only and memory-only offers, even
>> > though there might be other agents with both cpu and memory resources
>> > available in the cluster.
>> >
>> > - MESOS-8626 The 'allocatable' check in the allocator is problematic
>> >   with multi-role frameworks.
>> >
>> > Consider roleA reserving cpu/memory on an agent and roleB reserving disk
>> > on the same agent. A framework under both roleA and roleB will not be
>> > able to get the reserved disk due to the allocatable check. With the
>> > introduction of resource providers, similar situations will become more
>> > common.
>> >
>> > Proposed change
>> >
>> > Instead of hardcoding a one-size-fits-all value in Mesos, we are
>> > proposing to add a new master flag, min_allocatable_resources. It
>> > specifies one or more scalar resource quantities that define the minimum
>> > allocatable resources for the allocator. The allocator will only make
>> > offers that contain at least one of the specified resource quantities.
>> > The default behavior *is backward compatible*, i.e. by default the flag
>> > is set to “cpus:0.01|mem:32”.
>> >
>> > Usage
>> >
>> > The flag takes either plain text resource quantities delimited by a bar
>> > (|) or a JSON array of JSON-formatted resources. Note: the input should
>> > be “pure” scalar quantities, i.e. the specified resource(s) should only
>> > have the name, type (set to SCALAR) and scalar fields set.
>> >
>> > Examples:
>> >
>> > - To eliminate cpu-only or memory-only offers due to quota chopping,
>> >   we could set the flag to “cpus:0.01;mem:32”.
>> >
>> > - To enable disk-only offers, we could set the flag to “disk:32”.
>> >
>> > - For both, we could set the flag to “cpus:0.01;mem:32|disk:32”. Then
>> >   the allocator will only offer resources that at least contain
>> >   “cpus:0.01;mem:32” OR resources that at least contain “disk:32”.
>> >
>> > Let me know what you think! Thanks!
>> >
>> > -Meng

Re: Proposing change to the allocatable check in the allocator

2018-06-12 Thread Alex Rukletsov
Instead of the master flag, why not a master API call? That would allow
updating the value without restarting the master.

Another thought is that we should explain to operators how and when to use
this knob. For example, if they observe a behavioural pattern A, then it
means B is happening, and tuning the knob to C might help.

On Tue, Jun 12, 2018 at 7:36 AM, Jie Yu  wrote:

> I would suggest we also consider the possibility of adding per-framework
> control on `min_allocatable_resources`.
>
> If we want to consider supporting a per-framework setting, we should
> probably model this as a protobuf, rather than free-form JSON. The same
> protobuf can be reused for the master flag, the framework API, or even for
> supporting Resource Request in the future. Something like the following:
>
> message ResourceQuantityPredicate {
>   enum Type {
> SCALAR_GE,
>   }
>   optional Type type;
>   optional Value.Scalar scalar;
> }
> message ResourceRequirement {
>   required string resource_name;
>   oneof predicates {
> ResourceQuantityPredicate quantity;
>   }
> }
> message ResourceRequirementList {
>   // All requirements MUST be met.
>   repeated ResourceRequirement requirements;
> }
>
> // Resource request API.
> message Request {
>   repeated ResourceRequirementList accepted;
> }
>
> // `allocatable()`
> message MinimalAllocatableResources {
>   repeated ResourceRequirementList accepted;
> }
>
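For concreteness, here is a hypothetical text-format instance of the
messages above, encoding the equivalent of “cpus:0.01;mem:32|disk:32”
(illustrative only, not part of the original proposal):

# Allocatable if (cpus >= 0.01 AND mem >= 32) OR (disk >= 32).
accepted {
  requirements {
    resource_name: "cpus"
    quantity { type: SCALAR_GE scalar { value: 0.01 } }
  }
  requirements {
    resource_name: "mem"
    quantity { type: SCALAR_GE scalar { value: 32 } }
  }
}
accepted {
  requirements {
    resource_name: "disk"
    quantity { type: SCALAR_GE scalar { value: 32 } }
  }
}
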
> On Mon, Jun 11, 2018 at 3:47 PM, Meng Zhu  wrote:
>
> > Hi:
> >
> > The allocatable
> > <https://github.com/apache/mesos/blob/master/src/master/allocator/mesos/hierarchical.cpp#L2471-L2479>
> > check in the allocator (shown below) was originally introduced to help
> > alleviate the situation where a framework receives some resources, but no
> > cpu/memory, and thus cannot launch a task.
> >
> >
> > constexpr double MIN_CPUS = 0.01;
> > constexpr Bytes MIN_MEM = Megabytes(32);
> >
> > bool HierarchicalAllocatorProcess::allocatable(
> >     const Resources& resources)
> > {
> >   Option<double> cpus = resources.cpus();
> >   Option<Bytes> mem = resources.mem();
> >
> >   return (cpus.isSome() && cpus.get() >= MIN_CPUS) ||
> >          (mem.isSome() && mem.get() >= MIN_MEM);
> > }
> >
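A rough sketch of what a flag-driven generalization of this check might
look like (the name, signature, and profile representation are illustrative
assumptions, not the actual patch):

// Allocatable if the resources cover at least one configured minimum
// profile, e.g. the profiles parsed from "cpus:0.01;mem:32|disk:32".
bool allocatable(
    const Resources& resources,
    const std::vector<Resources>& minProfiles)
{
  for (const Resources& profile : minProfiles) {
    if (resources.contains(profile)) {
      return true;
    }
  }
  return false;
}
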
> >
> > Issues
> >
> > However, a couple of issues surrounding the check have surfaced lately.
> >
> > - MESOS-8935 Quota limit "chopping" can lead to cpu-only and
> >   memory-only offers.
> >
> > We introduced fine-grained quota allocation (MESOS-7099) in Mesos 1.5.
> > When we allocate resources to a role, we "chop" the available resources
> > of the agent up to the quota limit for the role. However, this has the
> > unintended consequence of creating cpu-only and memory-only offers, even
> > though there might be other agents with both cpu and memory resources
> > available in the cluster.
> >
> > - MESOS-8626 The 'allocatable' check in the allocator is problematic
> >   with multi-role frameworks.
> >
> > Consider roleA reserving cpu/memory on an agent and roleB reserving disk
> > on the same agent. A framework under both roleA and roleB will not be
> > able to get the reserved disk due to the allocatable check. With the
> > introduction of resource providers, similar situations will become more
> > common.
> >
> > Proposed change
> >
> > Instead of hardcoding a one-size-fits-all value in Mesos, we are
> > proposing to add a new master flag, min_allocatable_resources. It
> > specifies one or more scalar resource quantities that define the minimum
> > allocatable resources for the allocator. The allocator will only make
> > offers that contain at least one of the specified resource quantities.
> > The default behavior *is backward compatible*, i.e. by default the flag
> > is set to “cpus:0.01|mem:32”.
> >
> > Usage
> >
> > The flag takes either plain text resource quantities delimited by a bar
> > (|) or a JSON array of JSON-formatted resources. Note: the input should
> > be “pure” scalar quantities, i.e. the specified resource(s) should only
> > have the name, type (set to SCALAR) and scalar fields set.
> >
> > Examples:
> >
> > - To eliminate cpu-only or memory-only offers due to quota chopping,
> >   we could set the flag to “cpus:0.01;mem:32”.
> >
> > - To enable disk-only offers, we could set the flag to “disk:32”.
> >
> > - For both, we could set the flag to “cpus:0.01;mem:32|disk:32”. Then
> >   the allocator will only offer resources that at least contain
> >   “cpus:0.01;mem:32” OR resources that at least contain “disk:32”.
> >   (A possible JSON form is sketched below.)
> >
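The JSON array form is not spelled out in the proposal; presumably it would
use the standard Resource JSON fields named in the Usage section, something
like the following (an assumption, illustrating one requirement set):

[
  {"name": "cpus", "type": "SCALAR", "scalar": {"value": 0.01}},
  {"name": "mem", "type": "SCALAR", "scalar": {"value": 32}}
]
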
> >
> > Let me know what you think! Thanks!
> >
> >
> > -Meng
> >
> >
>


mesos-execute cmd

2018-06-12 Thread Abel Souza

Hello,

I believe this question relates to the framework used by the 
mesos-execute command (available by default in a Mesos installation):


When I request more cores than are available on any single node, 
mesos-execute automatically turns down all offers made by Mesos and hangs 
forever. For example: each agent node in my cluster has 8 cores, and when 
I request 9 cores through mesos-execute --resources='cpus:9', the command 
waits forever. But if I execute mesos-execute --resources='cpus:8', tasks 
start executing right away.


So I would like to know if there is a way to enable mesos-execute to 
handle situations where multiple nodes are needed to satisfy a resource 
request. If so, what would be needed?


Thank you,

/Abel Souza



Re: reserving resources for host/mesos

2018-06-12 Thread Tomek Janiszewski
Just measure CPU/MEM usage on a clean agent (when no tasks are running) and
make sure you leave a little more than that.
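
For example (illustrative numbers, using the agent's standard --resources
flag; substitute what you actually measure), on an 8-core, 32 GB host you
might advertise:

  mesos-agent --resources='cpus:7;mem:30720' ...

which keeps roughly one core and 2 GB of memory back for the agent process
and the host OS.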

On Tue, Jun 12, 2018 at 15:01, Paul Mackles wrote:

> Hi - Basic question that I couldn’t find an answer to in existing docs…
> when configuring the available resources on a slave, is it appropriate to
> set aside some resources for the mesos-agent itself (and the host OS)? Any
> pointers on existing configs folks are using would be appreciated.
>
>
> --
> Thanks,
> Paul
>


reserving resources for host/mesos

2018-06-12 Thread Paul Mackles
Hi - Basic question that I couldn’t find an answer to in existing docs…
when configuring the available resources on a slave, is it appropriate to
set aside some resources for the mesos-agent itself (and the host OS)? Any
pointers on existing configs folks are using would be appreciated.

-- 
Thanks,
Paul