[ https://issues.apache.org/jira/browse/MESOS-1860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158463#comment-14158463 ]

Chris Heller commented on MESOS-1860:
-------------------------------------

So I think the approach of making this available in the framework as a library 
would be a good idea. I would push to have it always included -- and a 
conforming framework would expose the ability to load a customized constraint 
ruleset which could be provided by the user.

This would be a place to hang new overrides like constraints, role classifiers 
and containerizer overloads.
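
As a rough sketch of what loading such a ruleset might look like -- all of the 
names here (ConstraintRuleset, the JSON rule format, the dict-shaped offers) 
are hypothetical, just to illustrate the shape of the library, not an existing 
Mesos or framework API:

import json


class ConstraintRuleset(object):
    """Admin-provided rules that a conforming framework loads at startup."""

    def __init__(self, rules):
        self.attribute_constraints = rules.get("attribute_constraints", {})
        self.role_overrides = rules.get("role_overrides", {})
        self.containerizer = rules.get("containerizer")

    @classmethod
    def load(cls, path):
        # The JSON file would be supplied by the administrator, not the framework.
        with open(path) as f:
            return cls(json.load(f))

    def accepts_offer(self, offer):
        # Offers are assumed to be dicts with an "attributes" mapping here; a
        # real framework would adapt this to the Offer message it receives.
        attrs = offer.get("attributes", {})
        return all(attrs.get(k) == v
                   for k, v in self.attribute_constraints.items())

    def classify_role(self, requested_role):
        # Map the framework's requested role (e.g. '*') to an admin-chosen one.
        return self.role_overrides.get(requested_role, requested_role)


if __name__ == "__main__":
    # Rules an administrator might ship alongside a spark framework instance.
    ruleset = ConstraintRuleset({
        "attribute_constraints": {"appclass": "spark"},
        "role_overrides": {"*": "spark"},
        "containerizer": "docker",
    })
    offer = {"attributes": {"appclass": "spark", "rack": "r1"}}
    print(ruleset.accepts_offer(offer))  # True -- the framework keeps this offer
    print(ruleset.classify_role("*"))    # 'spark'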

I could see that some user rules might break a conforming framework, but then 
the issue is obvious, since it will only arise when loading the custom 
ruleset.

> Give more control to the Mesos Administrator
> --------------------------------------------
>
>                 Key: MESOS-1860
>                 URL: https://issues.apache.org/jira/browse/MESOS-1860
>             Project: Mesos
>          Issue Type: Story
>          Components: framework, master, slave
>            Reporter: Chris Heller
>              Labels: design, features
>
> Mesos currently relies on a framework to:
> - discard offers which don't match attributes that the framework finds 
> desirable 
> - specify which role a given resource request will belong to.
> This creates a scenario where, to restrict a framework to a certain subset of 
> slaves within a cluster, one must unfortunately modify the framework.
> This story is meant to open a discussion on how Mesos could be modified so 
> that:
> - an administrator could define attribute constraints which would apply to a 
> given framework, without requiring framework support (i.e. an administrator 
> could specify that the spark framework only accept offers with an attribute 
> of 'appclass=spark' or any other predicate).
> - an administrator could classify framework requests into a given role, again 
> without framework support (i.e. an administrator could specify that the spark 
> framework's requests for 'cpu(*)' become requests for 'cpu(spark)').
> Taking things a step further, how might it be possible that attribute 
> constraints and request classifications could be set up for a single instance 
> of a framework (i.e. a user fires up spark-shell with a given attribute 
> constraint -- without needing to modify spark-shell to support attribute 
> constraints)?
> This functionality could apply even deeper: an administrator should be able 
> to specify the containerizer of a given framework, without the framework 
> needing to explicitly allow for such a parameter.
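
To make the role-classification example in the description above concrete: the 
rewrite could be as small as the following, assuming Mesos-style resource 
strings of the form name(role):value (the string format and the rewrite_roles 
helper are illustrative assumptions, not something Mesos exposes to 
administrators today):

import re

RESOURCE_RE = re.compile(r"(?P<name>\w+)\((?P<role>[^)]*)\)")


def rewrite_roles(resources, role_map):
    """Replace each resource's role using role_map, leaving unmapped roles alone."""
    def repl(match):
        role = role_map.get(match.group("role"), match.group("role"))
        return "%s(%s)" % (match.group("name"), role)
    return RESOURCE_RE.sub(repl, resources)


if __name__ == "__main__":
    # A spark framework's unroled request becomes a request against the
    # 'spark' role, without the framework being aware of the mapping.
    print(rewrite_roles("cpus(*):8;mem(*):4096", {"*": "spark"}))
    # -> cpus(spark):8;mem(spark):4096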



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
