[
https://issues.apache.org/jira/browse/MESOS-1860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158180#comment-14158180
]
Chris Heller commented on MESOS-1860:
-------------------------------------
In my particular use case I wish to have some services run on Mesos by way of
Marathon. I want to restrict those services to run on a particular set of nodes
(which happen to also be Mesos master nodes). This can be achieved by using
attribute constraints in Marathon, because the framework exposes them to me.
I also want to run Spark jobs on this same cluster, but I specifically want to
keep them off the nodes which are running Mesos masters. Spark does not
currently expose attribute constraints to the user. While one approach would be
to modify the Spark framework to add attribute constraints, I could see
encountering this issue again with some other Mesos parameter, and I wonder
whether this control would be better placed at the cluster level rather than at
the framework level.
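For concreteness, the Marathon side of this works roughly as follows: Marathon app definitions carry a "constraints" field of [field, operator, value] triples that are matched against slave attributes. A minimal sketch, assuming a hypothetical slave attribute 'node_class:master' and made-up app id/command:

```python
import json

# Hypothetical Marathon app definition pinning a service to slaves that
# carry a (made-up) Mesos attribute "node_class:master".  Marathon's
# "constraints" field takes [field, operator, value] triples; the CLUSTER
# operator with a value restricts tasks to slaves whose attribute equals it.
app = {
    "id": "/infra/my-service",   # hypothetical app id
    "cmd": "./run-service.sh",   # hypothetical command
    "cpus": 0.5,
    "mem": 256,
    "instances": 2,
    "constraints": [["node_class", "CLUSTER", "master"]],
}

# This JSON body would be POSTed to Marathon's /v2/apps endpoint.
print(json.dumps(app, indent=2))
```

The point of the story is that Spark offers no equivalent of this "constraints" field, so the same restriction cannot be expressed without patching the framework.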
> Give more control to the Mesos Administrator
> --------------------------------------------
>
> Key: MESOS-1860
> URL: https://issues.apache.org/jira/browse/MESOS-1860
> Project: Mesos
> Issue Type: Story
> Components: framework, master, slave
> Reporter: Chris Heller
> Labels: design, features
>
> Mesos currently relies on a framework to:
> - discard offers which don't match attributes that the framework finds
> desirable
> - specify which role a given resource request will belong to.
> This means that restricting a framework to a certain subset of slaves within
> a cluster unfortunately requires modifying the framework itself.
> This story is meant to open a discussion on how Mesos could be modified so
> that:
> - an administrator could define attribute constraints which would apply to a
> given framework, without requiring framework support (i.e. an administrator
> could specify that the spark framework only accept offers with an attribute
> of 'appclass=spark' or any other predicate).
> - an administrator could classify framework requests into a given role, again
> without framework support (i.e. an administrator could specify that the spark
> framework requests for 'cpus(*)' become requests for 'cpus(spark)').
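> The two controls above could be sketched as a per-framework policy that the
> master applies before an offer ever reaches the framework. This is purely
> illustrative — no such hook exists in Mesos today, and the offer, attribute,
> and policy shapes below are simplified stand-ins for the real protobufs:

```python
# Hypothetical master-side policy: withhold offers whose slave attributes
# fail the administrator's predicate, and reclassify unreserved ('*')
# resources into the administrator's chosen role.  All data shapes here
# are simplified stand-ins, not real Mesos messages.

def matches(attributes, required):
    """True if the slave's attributes satisfy every required predicate."""
    return all(attributes.get(k) == v for k, v in required.items())

def apply_policy(offer, policy):
    """Return the (possibly rewritten) offer, or None to withhold it."""
    if not matches(offer["attributes"], policy["require_attributes"]):
        return None  # framework never sees this offer
    rewritten = []
    for res in offer["resources"]:
        # Reassign only unreserved resources; leave reserved roles alone.
        role = policy["role"] if res["role"] == "*" else res["role"]
        rewritten.append({**res, "role": role})
    return {**offer, "resources": rewritten}

# A policy an administrator might attach to the Spark framework.
spark_policy = {
    "require_attributes": {"appclass": "spark"},
    "role": "spark",
}

offer = {
    "slave_id": "s1",
    "attributes": {"appclass": "spark"},
    "resources": [{"name": "cpus", "role": "*", "scalar": 4.0}],
}

filtered = apply_policy(offer, spark_policy)
```

> Here the framework needs no changes at all: it simply receives fewer offers,
> already classified into the administrator's role.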
> Taking things a step further, how might it be possible that attribute
> constraints and request classifications could be set up for a single instance
> of a framework (i.e. a user fires up spark-shell with a given attribute
> constraint -- without needing to modify spark-shell to support attribute
> constraints)?
> This functionality could apply even deeper: an administrator should be able
> to specify the containerizer of a given framework, without the framework
> needing to explicitly allow for such a parameter.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)