You could try using the posix cpu isolator (it only provides monitoring, not
limiting), `--isolation='posix/cpu,cgroups/mem'`, and then use only a small
CPU allocation for your Marathon app (0.1).
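
A rough sketch of both pieces (the master address, Marathon host, and the app
definition itself are placeholders, not tested config):

    # slave: posix/cpu only monitors cpu usage; cgroups/mem enforces memory
    mesos-slave --master=zk://zk1:2181/mesos \
      --isolation='posix/cpu,cgroups/mem'

    # Marathon app requesting a token 0.1 cpus
    curl -X POST http://marathon:8080/v2/apps \
      -H 'Content-Type: application/json' \
      -d '{"id": "/my-app", "cmd": "sleep 3600", "cpus": 0.1, "mem": 512}'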

On Wed, Mar 11, 2015 at 9:41 AM, Cole Brown <[email protected]> wrote:

> I may be wrong about this, but it seems the ideal solution would be to
> avoid using the cpu isolator and instead allocate MIN_CPUS cpus to your
> tasks. That way you avoid having the isolator limit you to 1/100th of a
> second of CPU time per second, while still allowing yourself 100 tasks per
> CPU resource.
>
> Though I could imagine the resource fragmentation clashing with other
> processes!
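>
> If I have the constant right (MIN_CPUS is 0.01 in the Mesos source; worth
> double-checking for your version), a sketch with placeholder values:
>
>     curl -X POST http://marathon:8080/v2/apps \
>       -H 'Content-Type: application/json' \
>       -d '{"id": "/tiny", "cmd": "sleep 3600", "cpus": 0.01, "mem": 128, "instances": 100}'
>
> i.e. 100 instances x 0.01 cpus = 1 cpu of offered resources.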
>
>
> On Wed, Mar 11, 2015 at 11:43 AM Ian Downes <[email protected]> wrote:
>
>> The --isolation flag for the slave determines how resources are
>> *isolated*, i.e., if you do not specify a cpu isolator there will be no
>> cpu isolation between executors; the Linux scheduler will just try to
>> balance their execution.
>>
>> Cpu and memory are considered required resources for executors and I
>> believe the master enforces this.
>>
>> What behavior are you trying to achieve? If your jobs don't require
>> much cpu, can you not just set a small value, like 0.25 cpu?
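>>
>> For example, a hypothetical Marathon app definition (host and values are
>> placeholders):
>>
>>     curl -X POST http://marathon:8080/v2/apps \
>>       -H 'Content-Type: application/json' \
>>       -d '{"id": "/low-cpu-job", "cmd": "sleep 3600", "cpus": 0.25, "mem": 1024}'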
>>
>> On Wed, Mar 11, 2015 at 7:20 AM, Geoffroy Jabouley <
>> [email protected]> wrote:
>>
>>> Hello
>>>
>>> As cpu relative shares are *not very* relevant in our heterogeneous
>>> cluster, we would like to get rid of CPU resource management and use only
>>> MEM resources for our cluster and task allocation.
>>>
>>> Even after changing the isolation flag on our slave to
>>> "--isolation=cgroups/mem", we still see the following in the logs:
>>>
>>> *from the slave, at startup:*
>>> I0311 15:09:55.006750 50906 slave.cpp:289] Slave resources:
>>> ports(*):[31000-32000, 80-443]; *cpus(*):2*; mem(*):1979; disk(*):22974
>>>
>>> *from the master:*
>>> I0311 15:15:16.764714 50884 hierarchical_allocator_process.hpp:563]
>>> Recovered ports(*):[31000-32000, 80-443]; *cpus(*):2*; mem(*):1979;
>>> disk(*):22974 (total allocatable: ports(*):[31000-32000, 80-443];
>>> *cpus(*):2*; mem(*):1979; disk(*):22974) on slave
>>> 20150311-150951-3982541578-5050-50860-S0 from framework
>>> 20150311-150951-3982541578-5050-50860-0000
>>>
>>> And the mesos master UI is still showing both CPU and MEM resource status.
>>>
>>>
>>>
>>> Btw, we are using the Marathon and Jenkins frameworks to start our mesos
>>> tasks, and the "cpus" field seems mandatory (set to 1.0 by default). So I
>>> guess you cannot easily bypass cpu resource allocation...
>>>
>>>
>>> Any idea?
>>> Regards
>>>
>>> 2015-02-19 15:15 GMT+01:00 Ryan Thomas <[email protected]>:
>>>
>>>> Hey Don,
>>>>
>>>> Have you tried only setting the 'cgroups/mem' isolation flag on the
>>>> slave and not the cpu one?
>>>>
>>>> http://mesosphere.com/docs/reference/mesos-slave/
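>>>>
>>>> i.e. something like (a sketch; the master address is a placeholder):
>>>>
>>>>     mesos-slave --master=zk://zk1:2181/mesos --isolation='cgroups/mem'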
>>>>
>>>>
>>>> ryan
>>>>
>>>> On 19 February 2015 at 14:13, Donald Laidlaw <[email protected]> wrote:
>>>>
>>>>> I am using Mesos 0.21.1 with Marathon 0.8.0 and running everything in
>>>>> docker containers.
>>>>>
>>>>> Is there a way to have mesos ignore the cpu relative shares? That is,
>>>>> not limit the docker container CPU at all when it runs. I would still
>>>>> want the memory resource limitation, but would rather just let the Linux
>>>>> system under the containers schedule all the CPU.
>>>>>
>>>>> This would allow us to just allocate tasks to mesos slaves based on
>>>>> available memory only, and to let those tasks get whatever CPU they
>>>>> could when they needed it. This is desirable where there are lots of
>>>>> relatively high-memory tasks that have very low CPU requirements,
>>>>> especially if we do not know the CPU capabilities of the slave machines.
>>>>> Some of them may have fast CPUs, some slow, so it is hard to pick a
>>>>> relative number for that slave.
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Don Laidlaw
>>>>>
>>>>
>>>>
>>>
>>
