[jira] [Created] (MESOS-3170) 0.23 Build fails when compiling against -lsasl2 which has been statically linked

2015-07-29 Thread Chris Heller (JIRA)
Chris Heller created MESOS-3170:
---

 Summary: 0.23 Build fails when compiling against -lsasl2 which has 
been statically linked
 Key: MESOS-3170
 URL: https://issues.apache.org/jira/browse/MESOS-3170
 Project: Mesos
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.0
Reporter: Chris Heller
Priority: Minor
 Fix For: 0.24.0


If the SASL library has been statically linked, the configure check for 
CRAM-MD5 support can fail due to missing symbols.
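
For illustration only (this is not the actual Mesos configure probe), here is a 
minimal sketch of the kind of link-and-run test a CRAM-MD5 feature check 
performs. If libsasl2 exists only as a static archive already folded into 
another object, the -lsasl2 link step can fail with unresolved symbols and the 
check (or the build) falls over.

{code:cpp}
// Illustrative probe only (not the actual Mesos configure check): a tiny
// program that links against -lsasl2 and asks the library whether the
// CRAM-MD5 mechanism is installed.
#include <cstdio>
#include <cstring>

#include <sasl/sasl.h>

int main() {
  if (sasl_server_init(nullptr, "mesos-configure-check") != SASL_OK) {
    return 1;
  }

  sasl_conn_t* conn = nullptr;
  if (sasl_server_new("mesos", nullptr, nullptr, nullptr, nullptr,
                      nullptr, 0, &conn) != SASL_OK) {
    return 1;
  }

  // List the installed mechanisms and look for CRAM-MD5 among them.
  const char* mechs = nullptr;
  unsigned length = 0;
  int count = 0;
  if (sasl_listmech(conn, nullptr, "", ",", "", &mechs, &length, &count)
      != SASL_OK) {
    return 1;
  }

  bool found = mechs != nullptr && strstr(mechs, "CRAM-MD5") != nullptr;
  printf("CRAM-MD5 %s\n", found ? "available" : "missing");

  sasl_dispose(&conn);
  sasl_done();
  return found ? 0 : 1;
}

// Build sketch: g++ -o check_cram_md5 check_cram_md5.cpp -lsasl2
{code}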






[jira] [Issue Comment Deleted] (MESOS-2249) Mesos entities should be able to use IPv6 and IPv4 in the same time

2015-07-23 Thread Chris Heller (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-2249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Heller updated MESOS-2249:

Comment: was deleted

(was: I propose to resolve this issue since its duplicate is already resolved. 
One step closer to IPv6!)

> Mesos entities should be able to use IPv6 and IPv4 in the same time
> ---
>
> Key: MESOS-2249
> URL: https://issues.apache.org/jira/browse/MESOS-2249
> Project: Mesos
>  Issue Type: Task
>Reporter: Evelina Dumitrescu
>Assignee: Evelina Dumitrescu
>
> Each Mesos entity should be able to bind on both IPv4 and IPv6 and let the 
> entity that wants to connect decide which protocol to use.
> For example, we can have one slave that wants to use IPv4 and another that 
> wants to use IPv6, so the master should bind on both.
> Consequently, I want to propose that process.cpp carry two Node fields, one 
> for each type of endpoint. It might be better for the IPv6 field to be an 
> Option, because the stack might not support IPv6 (e.g. the kernel is not 
> compiled with IPv6 support). Likewise, UPID will contain two Node fields, 
> one for each protocol.
> For the HTTP endpoints, whenever a request is made, the entities should 
> first try to connect over IPv4 and, if the connection fails, fall back to 
> IPv6, or vice versa. We could let the user choose which policy to use; I 
> think in this context it does not matter which protocol ends up being used. 
> I have seen this approach in various projects, e.g. 
> http://www.perforce.com/perforce/r13.1/manuals/cmdref/env.P4PORT.html 
> (tcp4to6: attempt to listen/connect to an IPv4 address and, if this fails, 
> try IPv6; tcp6to4: attempt to listen/connect to an IPv6 address and, if this 
> fails, try IPv4).
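
To make the proposal in the description above concrete, here is a hedged sketch 
(hypothetical names, not libprocess code) of carrying an IPv4 Node plus an 
optional IPv6 Node per endpoint, with a tcp4to6/tcp6to4-style connect policy.

{code:cpp}
// Hedged sketch (hypothetical names, not libprocess code): one way to model
// carrying both an IPv4 endpoint and an optional IPv6 endpoint per process,
// with a configurable connect-order policy mirroring tcp4to6 / tcp6to4.
#include <cstdint>
#include <functional>
#include <optional>
#include <string>

struct Node {                      // in spirit: libprocess's (ip, port) pair
  std::string ip;
  std::uint16_t port;
};

enum class ConnectPolicy { TCP4TO6, TCP6TO4 };

struct Endpoints {
  Node ipv4;                       // always present
  std::optional<Node> ipv6;        // absent if the stack lacks IPv6 support
};

// Try the preferred address family first and fall back to the other one.
bool connectWithFallback(const Endpoints& e,
                         ConnectPolicy policy,
                         const std::function<bool(const Node&)>& tryConnect) {
  if (policy == ConnectPolicy::TCP4TO6) {
    if (tryConnect(e.ipv4)) return true;
    return e.ipv6.has_value() && tryConnect(*e.ipv6);
  }
  if (e.ipv6.has_value() && tryConnect(*e.ipv6)) return true;
  return tryConnect(e.ipv4);
}
{code}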





[jira] [Commented] (MESOS-2249) Mesos entities should be able to use IPv6 and IPv4 in the same time

2015-07-23 Thread Chris Heller (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-2249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14638653#comment-14638653
 ] 

Chris Heller commented on MESOS-2249:
-

I propose to resolve this issue since its duplicate is already resolved. One 
step closer to IPv6!

> Mesos entities should be able to use IPv6 and IPv4 in the same time
> ---
>
> Key: MESOS-2249
> URL: https://issues.apache.org/jira/browse/MESOS-2249
> Project: Mesos
>  Issue Type: Task
>Reporter: Evelina Dumitrescu
>Assignee: Evelina Dumitrescu
>
> Each Mesos entity should be able to bind on both IPv4 and IPv6 and let the 
> entity that wants to connect decide which protocol to use.
> For example, we can have one slave that wants to use IPv4 and another that 
> wants to use IPv6, so the master should bind on both.
> Consequently, I want to propose that process.cpp carry two Node fields, one 
> for each type of endpoint. It might be better for the IPv6 field to be an 
> Option, because the stack might not support IPv6 (e.g. the kernel is not 
> compiled with IPv6 support). Likewise, UPID will contain two Node fields, 
> one for each protocol.
> For the HTTP endpoints, whenever a request is made, the entities should 
> first try to connect over IPv4 and, if the connection fails, fall back to 
> IPv6, or vice versa. We could let the user choose which policy to use; I 
> think in this context it does not matter which protocol ends up being used. 
> I have seen this approach in various projects, e.g. 
> http://www.perforce.com/perforce/r13.1/manuals/cmdref/env.P4PORT.html 
> (tcp4to6: attempt to listen/connect to an IPv4 address and, if this fails, 
> try IPv6; tcp6to4: attempt to listen/connect to an IPv6 address and, if this 
> fails, try IPv4).





[jira] [Commented] (MESOS-1886) Always `docker pull` if explicit ":latest" tag is present

2014-10-09 Thread Chris Heller (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-1886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14166035#comment-14166035
 ] 

Chris Heller commented on MESOS-1886:
-

No, the control should be with the framework on this one. +1 for the extra 
field in DockerInfo.
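
For illustration, a hedged sketch of how such an extra field might drive the 
pull decision; the field and function names here are hypothetical, not the 
actual DockerInfo schema or containerizer code.

{code:cpp}
// Hedged sketch (hypothetical field and function names): how a per-task
// "force pull" flag, combined with the ":latest" convention proposed in this
// ticket, could drive the decision to run `docker pull` before launch.
#include <string>

struct DockerInfoSketch {
  std::string image;               // e.g. "myrepo/myimage:latest"
  bool force_pull_image = false;   // the proposed extra field
};

bool endsWith(const std::string& s, const std::string& suffix) {
  return s.size() >= suffix.size() &&
         s.compare(s.size() - suffix.size(), suffix.size(), suffix) == 0;
}

// Pull unconditionally if the framework asked for it, or if the image is
// explicitly tagged ":latest"; otherwise rely on the locally cached image.
bool shouldForcePull(const DockerInfoSketch& info) {
  return info.force_pull_image || endsWith(info.image, ":latest");
}
{code}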

> Always `docker pull` if explicit ":latest" tag is present
> -
>
> Key: MESOS-1886
> URL: https://issues.apache.org/jira/browse/MESOS-1886
> Project: Mesos
>  Issue Type: Improvement
>  Components: containerization
>Affects Versions: 0.20.1
>Reporter: Chris Heller
>Priority: Minor
>  Labels: docker
>
> With 0.20.1 the behavior of a docker container has changed (see MESOS-1762).
> This change brings the docker behavior more in line with that of {{docker 
> run}}.
> I propose that if the image given explicitly has the ":latest" tag, this 
> should signify to Mesos that an unconditional `docker pull` should be done 
> on the image, and if it fails for any reason (e.g. the registry is 
> unavailable) we fall back to the current behavior.
> This would break slightly with the semantics of the docker command line, but 
> the alternative is to require explicit tags on every release -- which is a 
> hindrance when developing a new image -- or to log in to each node and run 
> an explicit `docker pull`.





[jira] [Comment Edited] (MESOS-1886) Always `docker pull` if explicit ":latest" tag is present

2014-10-09 Thread Chris Heller (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-1886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14165660#comment-14165660
 ] 

Chris Heller edited comment on MESOS-1886 at 10/9/14 8:12 PM:
--

Making this accessible only in the DockerInfo message will limit its 
usefulness. The selection is then made on a per-framework basis, except when 
the framework exposes the option to the user.

In my case, I'm running Spark jobs inside Docker images. With this design, in 
order to change this behavior I would need to rebuild Spark!

That said, I agree that overloading the meaning of the tag is a hack, and your 
approach is better. Since I've already patched Spark to expose the Docker 
info, I can patch it again to set this flag.

But in the general case, a user needs access to the message, even if the 
framework chose not to expose some of its options.

_Seems this is similar in theme to what I reported in MESOS-1860 :-)_


was (Author: chrisheller):
Making this only accessible in the DockerInfo message will limit the usefulness 
of this. The selection is then on a per-framework basis, excepting when the 
framework exposes the option to the user.

In my case, I'm running spark jobs inside docker images. With this design, in 
order to change what behavior I want, I will need to rebuild spark!

_Seems this is similar in theme to what I reported in MESOS-1860 :-)_

The issue here is a variation on the issue I reported here: MESOS-1860, where 
these options 

> Always `docker pull` if explicit ":latest" tag is present
> -
>
> Key: MESOS-1886
> URL: https://issues.apache.org/jira/browse/MESOS-1886
> Project: Mesos
>  Issue Type: Improvement
>  Components: containerization
>Affects Versions: 0.20.1
>Reporter: Chris Heller
>Priority: Minor
>  Labels: docker
>
> With 0.20.1 the behavior of a docker container has changed (see MESOS-1762).
> This change brings the docker behavior more in line with that of {{docker 
> run}}.
> I propose that if the image given explicitly has the ":latest" tag, this 
> should signify to Mesos that an unconditional `docker pull` should be done 
> on the image, and if it fails for any reason (e.g. the registry is 
> unavailable) we fall back to the current behavior.
> This would break slightly with the semantics of the docker command line, but 
> the alternative is to require explicit tags on every release -- which is a 
> hindrance when developing a new image -- or to log in to each node and run 
> an explicit `docker pull`.





[jira] [Commented] (MESOS-1886) Always `docker pull` if explicit ":latest" tag is present

2014-10-09 Thread Chris Heller (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-1886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14165660#comment-14165660
 ] 

Chris Heller commented on MESOS-1886:
-

Making this accessible only in the DockerInfo message will limit its 
usefulness. The selection is then made on a per-framework basis, except when 
the framework exposes the option to the user.

In my case, I'm running Spark jobs inside Docker images. With this design, in 
order to change this behavior I would need to rebuild Spark!

_Seems this is similar in theme to what I reported in MESOS-1860 :-)_

The issue here is a variation on the issue I reported here: MESOS-1860, where 
these options 

> Always `docker pull` if explicit ":latest" tag is present
> -
>
> Key: MESOS-1886
> URL: https://issues.apache.org/jira/browse/MESOS-1886
> Project: Mesos
>  Issue Type: Improvement
>  Components: containerization
>Affects Versions: 0.20.1
>Reporter: Chris Heller
>Priority: Minor
>  Labels: docker
>
> With 0.20.1 the behavior of a docker container has changed (see MESOS-1762).
> This change brings the docker behavior more in line with that of {{docker 
> run}}.
> I propose that if the image given explicitly has the ":latest" tag, this 
> should signify to Mesos that an unconditional `docker pull` should be done 
> on the image, and if it fails for any reason (e.g. the registry is 
> unavailable) we fall back to the current behavior.
> This would break slightly with the semantics of the docker command line, but 
> the alternative is to require explicit tags on every release -- which is a 
> hindrance when developing a new image -- or to log in to each node and run 
> an explicit `docker pull`.





[jira] [Commented] (MESOS-101) Create meta-framework (i.e. a framework for launching and managing other frameworks)

2014-10-09 Thread Chris Heller (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14165170#comment-14165170
 ] 

Chris Heller commented on MESOS-101:


I see. In this model Marathon could be run inside this metaframework. This is 
a great idea. +1

> Create meta-framework (i.e. a framework for launching and managing other 
> frameworks)
> 
>
> Key: MESOS-101
> URL: https://issues.apache.org/jira/browse/MESOS-101
> Project: Mesos
>  Issue Type: Story
>  Components: framework
>Reporter: Andy Konwinski
>
> Framework developers should be able to submit their framework to, and have 
> it run by, a metaframework (i.e. the metaframework would launch a Mesos task 
> which would run the new framework's scheduler, passing it the master's 
> address).
> The metaframework could provide a way to kill or update currently running 
> frameworks; it might also have its own web UI.
> This framework could be written in any language.




[jira] [Created] (MESOS-1886) Always `docker pull` if explicit ":latest" tag is present

2014-10-09 Thread Chris Heller (JIRA)
Chris Heller created MESOS-1886:
---

 Summary: Always `docker pull` if explicit ":latest" tag is present
 Key: MESOS-1886
 URL: https://issues.apache.org/jira/browse/MESOS-1886
 Project: Mesos
  Issue Type: Improvement
  Components: containerization
Affects Versions: 0.20.1
Reporter: Chris Heller
Priority: Minor


With 0.20.1 the behavior of a docker container has changed (see MESOS-1762).

This change brings the docker behavior more in line with that of {{docker run}}.

I propose that if the image given explicitly has the ":latest" tag, this should 
signify to Mesos that an unconditional `docker pull` should be done on the 
image, and if it fails for any reason (e.g. the registry is unavailable) we 
fall back to the current behavior.

This would break slightly with the semantics of the docker command line, but 
the alternative is to require explicit tags on every release -- which is a 
hindrance when developing a new image -- or to log in to each node and run an 
explicit `docker pull`.





[jira] [Commented] (MESOS-101) Create meta-framework (i.e. a framework for launching and managing other frameworks)

2014-10-09 Thread Chris Heller (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14165050#comment-14165050
 ] 

Chris Heller commented on MESOS-101:


Would this type of functionality replace the need for something like Marathon? 
It sounds very similar in purpose.

> Create meta-framework (i.e. a framework for launching and managing other 
> frameworks)
> 
>
> Key: MESOS-101
> URL: https://issues.apache.org/jira/browse/MESOS-101
> Project: Mesos
>  Issue Type: Story
>  Components: framework
>Reporter: Andy Konwinski
>
> Framework developers should be able to submit their framework to, and have 
> it run by, a metaframework (i.e. the metaframework would launch a Mesos task 
> which would run the new framework's scheduler, passing it the master's 
> address).
> The metaframework could provide a way to kill or update currently running 
> frameworks; it might also have its own web UI.
> This framework could be written in any language.





[jira] [Commented] (MESOS-1860) Give more control to the Mesos Administrator

2014-10-03 Thread Chris Heller (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-1860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158463#comment-14158463
 ] 

Chris Heller commented on MESOS-1860:
-

So I think the approach of making this available to the framework as a library 
would be a good idea. I would push to have it always included -- a conforming 
framework would then expose the ability to load a customized constraint 
ruleset provided by the user.

This would be a place to hang new overrides like constraints, role classifiers 
and containerizer overloads.

I can see that some user rules might break a conforming framework, but then 
the issue is obvious, since it will only arise when loading the custom ruleset.
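
As a rough illustration of the ruleset idea, a hedged sketch (hypothetical 
types, not the Mesos module API) of filtering offers against 
administrator-supplied attribute constraints before the framework's scheduler 
ever sees them.

{code:cpp}
// Hedged sketch (hypothetical types, not the Mesos module API): an
// administrator-supplied constraint ruleset applied to offers before a
// framework's scheduler sees them, as suggested in this comment.
#include <map>
#include <string>
#include <vector>

struct Offer {
  std::string slaveId;
  std::map<std::string, std::string> attributes;  // e.g. {"appclass", "spark"}
};

// One rule: the offer must carry attribute `key` with value `value`.
struct AttributeConstraint {
  std::string key;
  std::string value;
};

// Keep only the offers that satisfy every admin-defined constraint; the
// rejected ones would be declined on the framework's behalf.
std::vector<Offer> filterOffers(const std::vector<Offer>& offers,
                                const std::vector<AttributeConstraint>& rules) {
  std::vector<Offer> accepted;
  for (const Offer& offer : offers) {
    bool ok = true;
    for (const AttributeConstraint& rule : rules) {
      auto it = offer.attributes.find(rule.key);
      if (it == offer.attributes.end() || it->second != rule.value) {
        ok = false;
        break;
      }
    }
    if (ok) accepted.push_back(offer);
  }
  return accepted;
}
{code}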

> Give more control to the Mesos Administrator
> 
>
> Key: MESOS-1860
> URL: https://issues.apache.org/jira/browse/MESOS-1860
> Project: Mesos
>  Issue Type: Story
>  Components: framework, master, slave
>Reporter: Chris Heller
>  Labels: design, features
>
> Mesos currently relies on a framework to:
> - discard offers which don't match attributes that the framework finds 
> desirable
> - specify which role a given resource request will belong to.
> This creates a scenario where, to restrict a framework to a certain subset 
> of slaves within a cluster, one must unfortunately modify the framework.
> This story is meant to open a discussion on how Mesos could be modified so 
> that:
> - an administrator could define attribute constraints which would apply to a 
> given framework, without requiring framework support (i.e. an administrator 
> could specify that the Spark framework only accept offers with an attribute 
> of 'appclass=spark', or any other predicate).
> - an administrator could classify framework requests into a given role, again 
> without framework support (i.e. an administrator could specify that Spark 
> framework requests for 'cpu(*)' become requests for 'cpu(spark)').
> Taking things a step further, how might attribute constraints and request 
> classifications be set up for a single instance of a framework (i.e. a user 
> fires up spark-shell with a given attribute constraint, without needing to 
> modify spark-shell to support attribute constraints)?
> This functionality could apply even deeper: an administrator should be able 
> to specify the containerizer of a given framework, without the framework 
> needing to explicitly allow for such a parameter.





[jira] [Commented] (MESOS-1860) Give more control to the Mesos Administrator

2014-10-03 Thread Chris Heller (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-1860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158361#comment-14158361
 ] 

Chris Heller commented on MESOS-1860:
-

I like the idea of leaving the work in the framework, but perhaps with a 
unified way for framework users to override the constraints and options where 
needed.

I would like to proxy the request/offer messages of a framework.

> Give more control to the Mesos Administrator
> 
>
> Key: MESOS-1860
> URL: https://issues.apache.org/jira/browse/MESOS-1860
> Project: Mesos
>  Issue Type: Story
>  Components: framework, master, slave
>Reporter: Chris Heller
>  Labels: design, features
>
> Mesos currently relies on a framework to:
> - discard offers which don't match attributes that the framework finds 
> desirable
> - specify which role a given resource request will belong to.
> This creates a scenario where, to restrict a framework to a certain subset 
> of slaves within a cluster, one must unfortunately modify the framework.
> This story is meant to open a discussion on how Mesos could be modified so 
> that:
> - an administrator could define attribute constraints which would apply to a 
> given framework, without requiring framework support (i.e. an administrator 
> could specify that the Spark framework only accept offers with an attribute 
> of 'appclass=spark', or any other predicate).
> - an administrator could classify framework requests into a given role, again 
> without framework support (i.e. an administrator could specify that Spark 
> framework requests for 'cpu(*)' become requests for 'cpu(spark)').
> Taking things a step further, how might attribute constraints and request 
> classifications be set up for a single instance of a framework (i.e. a user 
> fires up spark-shell with a given attribute constraint, without needing to 
> modify spark-shell to support attribute constraints)?
> This functionality could apply even deeper: an administrator should be able 
> to specify the containerizer of a given framework, without the framework 
> needing to explicitly allow for such a parameter.





[jira] [Commented] (MESOS-1860) Give more control to the Mesos Administrator

2014-10-03 Thread Chris Heller (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-1860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158180#comment-14158180
 ] 

Chris Heller commented on MESOS-1860:
-

In my particular use case I wish to have some services run on Mesos by way of 
Marathon. I want to restrict those services to run on a particular set of 
nodes (which happen to also be Mesos master nodes). This can be achieved by 
using attribute constraints in Marathon, because the framework exposes them to 
me.

I also want to run Spark jobs on this same cluster, but specifically want to 
keep them off the nodes which are running Mesos masters. Spark does not 
currently expose attribute constraints to the user. While one approach would 
be to modify the Spark framework to add attribute constraints, I could see 
encountering this issue again with some other Mesos parameter, and I wonder 
whether it would be better if this control existed at the cluster level rather 
than at the framework level.

> Give more control to the Mesos Administrator
> 
>
> Key: MESOS-1860
> URL: https://issues.apache.org/jira/browse/MESOS-1860
> Project: Mesos
>  Issue Type: Story
>  Components: framework, master, slave
>Reporter: Chris Heller
>  Labels: design, features
>
> Mesos currently relies on a framework to:
> - discard offers which don't match attributes that the framework finds 
> desirable
> - specify which role a given resource request will belong to.
> This creates a scenario where, to restrict a framework to a certain subset 
> of slaves within a cluster, one must unfortunately modify the framework.
> This story is meant to open a discussion on how Mesos could be modified so 
> that:
> - an administrator could define attribute constraints which would apply to a 
> given framework, without requiring framework support (i.e. an administrator 
> could specify that the Spark framework only accept offers with an attribute 
> of 'appclass=spark', or any other predicate).
> - an administrator could classify framework requests into a given role, again 
> without framework support (i.e. an administrator could specify that Spark 
> framework requests for 'cpu(*)' become requests for 'cpu(spark)').
> Taking things a step further, how might attribute constraints and request 
> classifications be set up for a single instance of a framework (i.e. a user 
> fires up spark-shell with a given attribute constraint, without needing to 
> modify spark-shell to support attribute constraints)?
> This functionality could apply even deeper: an administrator should be able 
> to specify the containerizer of a given framework, without the framework 
> needing to explicitly allow for such a parameter.





[jira] [Updated] (MESOS-1860) Give more control to the Mesos Administrator

2014-10-03 Thread Chris Heller (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-1860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Heller updated MESOS-1860:

Description: 
Mesos currently relies on a framework to:

- discard offers which don't match attributes that the framework finds 
desirable 
- specify which role a given resource request will belong to.

This creates a scenario where to restrict a framework to a certain subset of 
slaves within a cluster one must unfortunately modify the framework.

This story is meant to open a discussion on how Mesos could be modified so that:

- an administrator could define attribute constraints which would apply to a 
given framework, without requiring framework support (i.e. an administrator 
could specify that the spark framework only accept offers with an attribute of 
'appclass=spark' or any other predicate).
- an administrator could classify framework requests into a given role, again 
without framework support (i.e. an administrator could specify that the spark 
framework requests for 'cpu(*)' become requests for 'cpu(spark)')

Taking things a step further,  how might it be possible that attribute 
constraints and request classifications could be setup for a single instance of 
a framework (i.e. a user fires up spark-shell with a given attribute constraint 
-- without needing to modify spark-shell to support attribute constraints)?

This functionality could apply even deeper: an administrator should be able to 
specify the containerizer of a given framework, without the framework needing 
to explicitly allow for such a parameter.

  was:
Mesos currently relies on a framework to:

- discard offers which don't match attributes that the framework finds 
desirable 
- specify which role a given resource request will belong to.

This creates a scenario where to restrict a framework to a certain subset of 
slaves within a cluster one must unfortunately modify the framework.

This story is meant to open a discussion on how Mesos could be modified so that:

- an administrator could define attribute constraints which would apply to a 
given framework, without requiring framework support (i.e. an administrator 
could specify that the spark framework only accept offers with an attribute of 
'appclass=spark' or any other predicate).
- an administrator could classify framework requests into a given role, again 
without framework support (i.e. an administrator could specify that the spark 
framework requests for 'cpu(*)' become requests for 'cpu(spark)')

Taking things a step further,  how might it be possible that attribute 
constraints and request classifications could be setup for a single instance of 
a framework (i.e. a user fires up spark-shell with a given attribute constraint 
-- without needing to modify spark-shell to support attribute constraints)?

This functionality could apply even deeper: an administrator should be able to 
specify the containerized of a given framework, without the framework needing 
to explicitly allow for such a parameter.


> Give more control to the Mesos Administrator
> 
>
> Key: MESOS-1860
> URL: https://issues.apache.org/jira/browse/MESOS-1860
> Project: Mesos
>  Issue Type: Story
>  Components: framework, master, slave
>Reporter: Chris Heller
>  Labels: design, features
>
> Mesos currently relies on a framework to:
> - discard offers which don't match attributes that the framework finds 
> desirable 
> - specify which role a given resource request will belong to.
> This creates a scenario where to restrict a framework to a certain subset of 
> slaves within a cluster one must unfortunately modify the framework.
> This story is meant to open a discussion on how Mesos could be modified so 
> that:
> - an administrator could define attribute constraints which would apply to a 
> given framework, without requiring framework support (i.e. an administrator 
> could specify that the spark framework only accept offers with an attribute 
> of 'appclass=spark' or any other predicate).
> - an administrator could classify framework requests into a given role, again 
> without framework support (i.e. an administrator could specify that the spark 
> framework requests for 'cpu(*)' become requests for 'cpu(spark)')
> Taking things a step further,  how might it be possible that attribute 
> constraints and request classifications could be setup for a single instance 
> of a framework (i.e. a user fires up spark-shell with a given attribute 
> constraint -- without needing to modify spark-shell to support attribute 
> constraints)?
> This functionality could apply even deeper: an administrator should be able 
> to specify the containerizer of a given framework, without the framework 
> needing to explicitly allow for such a parameter.




[jira] [Updated] (MESOS-1860) Give more control to the Mesos Administrator

2014-10-03 Thread Chris Heller (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-1860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Heller updated MESOS-1860:

Description: 
Mesos currently relies on a framework to:

- discard offers which don't match attributes that the framework finds 
desirable 
- specify which role a given resource request will belong to.

This creates a scenario where to restrict a framework to a certain subset of 
slaves within a cluster one must unfortunately modify the framework.

This story is meant to open a discussion on how Mesos could be modified so that:

- an administrator could define attribute constraints which would apply to a 
given framework, without requiring framework support (i.e. an administrator 
could specify that the spark framework only accept offers with an attribute of 
'appclass=spark' or any other predicate).
- an administrator could classify framework requests into a given role, again 
without framework support (i.e. an administrator could specify that the spark 
framework requests for 'cpu(*)' become requests for 'cpu(spark)')

Taking things a step further,  how might it be possible that attribute 
constraints and request classifications could be setup for a single instance of 
a framework (i.e. a user fires up spark-shell with a given attribute constraint 
-- without needing to modify spark-shell to support attribute constraints)?

This functionality could apply even deeper: an administrator should be able to 
specify the containerized of a given framework, without the framework needing 
to explicitly allow for such a parameter.

  was:
Mesos currently relies on a framework to:

- discard offers which don't match attributes that the framework finds 
desirable 
- specify which role a given resource request will belong to.

This creates a scenario where to restrict a framework to a certain subset of 
slaves within a cluster one must unfortunately modify the framework.

This story is meant to open a discussion on how Mesos could be modified so that:

- an administrator could define attribute constraints which would apply to a 
given framework, without requiring framework support (i.e. an administrator 
could specify that the spark framework only accept offers with an attribute of 
'appclass=spark' or any other predicate).
- an administrator could classify framework requests into a given role, again 
without framework support (i.e. an administrator could specify that the spark 
framework requests for 'cpu(*)' become requests for 'cpu(spark)')

Taking things a step further,  how might it be possible that attribute 
constrains and request classifications could be setup for a single instance of 
a framework (i.e. a user fires up spark-shell with a given attribute constraint 
-- without needing to modify spark-shell to support attribute constraints)?

This functionality could apply even deeper: an administrator should be able to 
specify the containerized of a given framework, without the framework needing 
to explicitly allow for such a parameter.


> Give more control to the Mesos Administrator
> 
>
> Key: MESOS-1860
> URL: https://issues.apache.org/jira/browse/MESOS-1860
> Project: Mesos
>  Issue Type: Story
>  Components: framework, master, slave
>Reporter: Chris Heller
>  Labels: design, features
>
> Mesos currently relies on a framework to:
> - discard offers which don't match attributes that the framework finds 
> desirable 
> - specify which role a given resource request will belong to.
> This creates a scenario where to restrict a framework to a certain subset of 
> slaves within a cluster one must unfortunately modify the framework.
> This story is meant to open a discussion on how Mesos could be modified so 
> that:
> - an administrator could define attribute constraints which would apply to a 
> given framework, without requiring framework support (i.e. an administrator 
> could specify that the spark framework only accept offers with an attribute 
> of 'appclass=spark' or any other predicate).
> - an administrator could classify framework requests into a given role, again 
> without framework support (i.e. an administrator could specify that the spark 
> framework requests for 'cpu(*)' become requests for 'cpu(spark)')
> Taking things a step further,  how might it be possible that attribute 
> constraints and request classifications could be setup for a single instance 
> of a framework (i.e. a user fires up spark-shell with a given attribute 
> constraint -- without needing to modify spark-shell to support attribute 
> constraints)?
> This functionality could apply even deeper: an administrator should be able 
> to specify the containerized of a given framework, without the framework 
> needing to explicitly allow for such a parameter.




[jira] [Created] (MESOS-1860) Give more control to the Mesos Administrator

2014-10-03 Thread Chris Heller (JIRA)
Chris Heller created MESOS-1860:
---

 Summary: Give more control to the Mesos Administrator
 Key: MESOS-1860
 URL: https://issues.apache.org/jira/browse/MESOS-1860
 Project: Mesos
  Issue Type: Story
  Components: framework, master, slave
Reporter: Chris Heller


Mesos currently relies on a framework to:

- discard offers which don't match attributes that the framework finds 
desirable 
- specify which role a given resource request will belong to.

This creates a scenario where to restrict a framework to a certain subset of 
slaves within a cluster one must unfortunately modify the framework.

This story is meant to open a discussion on how Mesos could be modified so that:

- an administrator could define attribute constraints which would apply to a 
given framework, without requiring framework support (i.e. an administrator 
could specify that the spark framework only accept offers with an attribute of 
'appclass=spark' or any other predicate).
- an administrator could classify framework requests into a given role, again 
without framework support (i.e. an administrator could specify that the spark 
framework requests for 'cpu(*)' become requests for 'cpu(spark)')

Taking things a step further,  how might it be possible that attribute 
constrains and request classifications could be setup for a single instance of 
a framework (i.e. a user fires up spark-shell with a given attribute constraint 
-- without needing to modify spark-shell to support attribute constraints)?

This functionality could apply even deeper: an administrator should be able to 
specify the containerized of a given framework, without the framework needing 
to explicitly allow for such a parameter.
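
As a rough illustration of the role-classification idea above, a hedged sketch 
(hypothetical types, not Mesos allocator code) of rewriting a framework's 
unreserved requests into an administrator-chosen role.

{code:cpp}
// Hedged sketch (hypothetical types): the second idea in this story --
// rewriting a framework's unreserved requests ("cpu(*)") into a role the
// administrator chose ("cpu(spark)"), without the framework's involvement.
#include <string>
#include <vector>

struct Resource {
  std::string name;    // e.g. "cpus"
  double scalar;
  std::string role;    // "*" means unreserved
};

// Reassign every unreserved resource request to the admin-configured role.
std::vector<Resource> classifyIntoRole(std::vector<Resource> requested,
                                       const std::string& role) {
  for (Resource& r : requested) {
    if (r.role == "*") {
      r.role = role;
    }
  }
  return requested;
}
{code}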


