> On 10/22/19 9:08 PM, Benjamin Mahler wrote:
> It's easier to do something custom for your own needs than to bring generic 
> support into the project.
Sure, but besides Mesos not having full support for specifying more flexible 
scheduling, pinning is apparently a recurring need.

> 
> For example, in kubernetes, as far as I can tell they offer two modes for the 
> agent: "static" (i.e. pinning for integer requests) and "none" (regular 
> shares / limit model).
> https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#static-policy
> 
> This means the user has to choose which model they want. If they choose the 
> "static" model, their utilization may go down given that some CPUs become 
> exclusive, which prevents other containers from "bursting" up and using them 
> when the owning container doesn't make use of them. We're about to add this 
> type of bursting on a per container basis: 
> https://issues.apache.org/jira/browse/MESOS-9916
You see, I do not understand the binary choice between "static" and "none". One 
can combine process/container pinning with CFS quotas and CFS shares. Sure, the 
overall management becomes a bit more complex, but everything else is up to the 
user to decide.
I understand it is important not to overcomplicate things, but even a basic 
toolbox enabling new scheduling strategies would be a good thing.

Even Intel has done work on task pinning: 
https://github.com/intel/CPU-Manager-for-Kubernetes

> 
> The kubernetes approach is in line with some of the previous proposals for 
> Mesos, and I think it could be brought first class into the project with 
> no impact on the API. It will be up to operators to decide how they want to 
> run things, and they may use attributes to mark which nodes have the cpu 
> pinning isolation on, to target scheduling there.
> However, a complementary and potentially preferred approach that's been 
> proposed is to have it opt-in on a per container basis. When a task is being 
> launched it could state that it is latency sensitive and/or explicitly 
> specify the cpus it wants. Most tasks would not bother with this, only those 
> that have these special needs.
That's a good approach to start bringing more flexible scheduling strategies 
into Mesos. The idea raised in the previous email 
(https://github.com/criteo/mesos-command-modules/) is also good, though task 
management would move outside Mesos' control, which may not be desirable in 
some cases, I would guess.
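For the per-container opt-in Ben describes, I could imagine tasks simply carrying a label at launch time, something like the sketch below (the "cpu_policy" key is made up for illustration, not an existing Mesos API; only tasks with special needs would set it):

```json
{
  "task_id": { "value": "latency-sensitive-service" },
  "labels": {
    "labels": [
      { "key": "cpu_policy", "value": "pinned" }
    ]
  }
}
```

An agent-side isolator could then apply pinning only to tasks that opt in, leaving everyone else on the regular shares/quota model.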

/Abel

> 
> Ben
> 
>> On Mon, Oct 21, 2019 at 10:41 AM Abel Souza <a...@cs.umu.se> wrote:
>> Hi,
>> 
>> Does anyone know if pinning capabilities will ever be available to Mesos?
>> 
>> Someone registered an issue at Jira 
>> (https://issues.apache.org/jira/browse/MESOS-5342), started an 
>> implementation (https://github.com/ct-clmsn/mesos-cpusets), but 
>> apparently it never went through mainline. I successfully compiled it in 
>> my testbed and loaded it into the Mesos master agent, but it keeps 
>> crashing the master during the submission process.
>> 
>> So before moving on into potential fixes to these crashes, I would like 
>> to know if someone knows about possible updates to this specific 
>> capability in future Mesos releases.
>> 
>> Thank you,
>> 
>> /Abel
