[ https://issues.apache.org/jira/browse/MESOS-1860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Heller updated MESOS-1860:
--------------------------------
    Description: 
Mesos currently relies on a framework to:

- discard offers which don't match attributes that the framework finds 
desirable 
- specify which role a given resource request will belong to.

This creates a scenario where, to restrict a framework to a certain subset of 
slaves within a cluster, one must unfortunately modify the framework itself 
(as sketched below).
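
For context, here is a minimal sketch (using the Mesos Java bindings; the 
'appclass=spark' attribute and the helper class are purely illustrative) of 
the filtering a framework has to do for itself today:

{code:java}
import java.util.List;

import org.apache.mesos.Protos.Attribute;
import org.apache.mesos.Protos.Offer;
import org.apache.mesos.SchedulerDriver;

// Illustrative helper: today the framework's own Scheduler.resourceOffers()
// callback is the only place where undesirable offers can be rejected.
final class OfferFilter {
  // True if the offer's slave advertises the attribute appclass=<wanted>.
  static boolean hasAppClass(Offer offer, String wanted) {
    for (Attribute attr : offer.getAttributesList()) {
      if (attr.getName().equals("appclass")
          && attr.hasText()
          && attr.getText().getValue().equals(wanted)) {
        return true;
      }
    }
    return false;
  }

  // Called from resourceOffers(): the framework itself must decline every
  // offer that fails its attribute predicate before launching any tasks.
  static void filterOffers(SchedulerDriver driver, List<Offer> offers) {
    for (Offer offer : offers) {
      if (!hasAppClass(offer, "spark")) {
        driver.declineOffer(offer.getId());
        continue;
      }
      // ...otherwise build TaskInfos against this offer as usual...
    }
  }
}
{code}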

This story is meant to open a discussion on how Mesos could be modified so that:

- an administrator could define attribute constraints that would apply to a 
given framework, without requiring framework support (e.g. an administrator 
could specify that the spark framework only accept offers with an attribute 
of 'appclass=spark', or any other predicate).
- an administrator could classify framework requests into a given role, again 
without framework support (e.g. an administrator could specify that the spark 
framework's requests for 'cpu(*)' become requests for 'cpu(spark)'); see the 
sketch after this list.
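
To make this concrete, here is a purely hypothetical sketch of the two 
operations such an administrator-supplied, per-framework policy would perform 
before an offer is ever sent to the framework. Nothing below exists in Mesos 
today; the class name, its fields, and where it would hook into the 
master/allocator are all invented for illustration:

{code:java}
import java.util.ArrayList;
import java.util.List;

import org.apache.mesos.Protos.Attribute;
import org.apache.mesos.Protos.Offer;
import org.apache.mesos.Protos.Resource;

// Hypothetical per-framework policy configured by the administrator,
// e.g. "spark only sees appclass=spark slaves, and its unreserved
// resources are treated as role 'spark'".
final class AdminFrameworkPolicy {
  private final String attributeName = "appclass";  // example constraint
  private final String attributeValue = "spark";
  private final String assignedRole = "spark";      // example role mapping

  // Attribute constraint: may this slave's offer be sent to the framework?
  boolean acceptsOffer(Offer offer) {
    for (Attribute attr : offer.getAttributesList()) {
      if (attr.getName().equals(attributeName)
          && attr.hasText()
          && attr.getText().getValue().equals(attributeValue)) {
        return true;
      }
    }
    return false;
  }

  // Role classification: rewrite unreserved ('*') resources into the role
  // the administrator assigned to this framework, e.g. cpu(*) -> cpu(spark).
  Offer remapRoles(Offer offer) {
    List<Resource> remapped = new ArrayList<Resource>();
    for (Resource r : offer.getResourcesList()) {
      remapped.add(r.getRole().equals("*")
          ? r.toBuilder().setRole(assignedRole).build()
          : r);
    }
    return offer.toBuilder().clearResources().addAllResources(remapped).build();
  }
}
{code}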

Taking things a step further, how might attribute constraints and request 
classifications be set up for a single instance of a framework (e.g. a user 
fires up spark-shell with a given attribute constraint, without needing to 
modify spark-shell to support attribute constraints)?

This functionality could apply even deeper: an administrator should be able 
to specify the containerizer used by a given framework, without the framework 
needing to explicitly allow for such a parameter.


> Give more control to the Mesos Administrator
> --------------------------------------------
>
>                 Key: MESOS-1860
>                 URL: https://issues.apache.org/jira/browse/MESOS-1860
>             Project: Mesos
>          Issue Type: Story
>          Components: framework, master, slave
>            Reporter: Chris Heller
>              Labels: design, features
>


