I've only done it by accident, and noted that it worked. I have no idea if
there are unpleasant side effects in Mesos or Marathon because of the
unsatisfiable constraints!

On 12 Mar 2015, at 09:33, Aaron Carey <[email protected]> wrote:

 Thanks Craig, I'll have a look into this approach.

It does feel a little flaky though; I suspect it may be easy for things to
get out of sync. I'll see how we get on.

Thanks!

Aaron

 ------------------------------
*From:* craig w [[email protected]]
*Sent:* 12 March 2015 09:19
*To:* [email protected]
*Subject:* Re: Deploying containers to every mesos slave node

  Perhaps you could query the Mesos API to see how many slaves there are,
then use that in the request to Marathon.
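
Something along these lines might work (an untested sketch; the host names
and the app id are placeholders, and it assumes the default ports of 5050
for the Mesos master and 8080 for Marathon):

    import requests

    MESOS_MASTER = "http://mesos-master.example.com:5050"  # placeholder host
    MARATHON = "http://marathon.example.com:8080"          # placeholder host
    APP_ID = "/consul"                                      # placeholder app id

    # Ask the Mesos master how many slaves are currently registered.
    state = requests.get(MESOS_MASTER + "/master/state.json").json()
    num_slaves = len(state.get("slaves", []))

    # Tell Marathon to run one instance of the app per slave.
    resp = requests.put(MARATHON + "/v2/apps" + APP_ID,
                        json={"instances": num_slaves})
    resp.raise_for_status()
    print("scaled %s to %d instances" % (APP_ID, num_slaves))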

On Thu, Mar 12, 2015 at 5:09 AM, Michael Neale <[email protected]>
wrote:

> It would be ideal if there were a way to tell Marathon that the number of
> instances should be a variable representing the number of Mesos slaves (I
> don't think that's possible right now).
>  On Thu, 12 Mar 2015 at 8:07 pm craig w <[email protected]> wrote:
>
>> If you know when the scaling occurs (perhaps there's an API you can query
>> or maybe it can notify you), then you can update the configuration for the
>> application (deployed using Marathon) to change the number of instances
>> (via the Marathon REST API).
>>
>> On Thu, Mar 12, 2015 at 5:03 AM, Aaron Carey <[email protected]> wrote:
>>
>>>  Hi Craig,
>>>
>>> I'd looked into that, but I was thinking it may cause issues when our
>>> cluster auto-scales up or down, as the number of instances would no
>>> longer equal the number of slaves?
>>>
>>> Thanks,
>>> Aaron
>>>
>>>  ------------------------------
>>> *From:* craig w [[email protected]]
>>> *Sent:* 12 March 2015 08:57
>>> *To:* [email protected]
>>> *Subject:* Re: Deploying containers to every mesos slave node
>>>
>>>    Aaron,
>>>
>>>  You could use Marathon (a Mesos framework) to deploy a container to
>>> each host by using constraints [1] and setting the number of instances of
>>> the container to equal the number of slaves.
>>>
>>>  [1] constraints -
>>> https://mesosphere.github.io/marathon/docs/constraints.html
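>>>
>>> For example, a rough sketch of creating such an app through the REST API
>>> (not tested; the Marathon URL, app id and image name are placeholders, and
>>> the instance count would need to match your current number of slaves):
>>>
>>>     import requests
>>>
>>>     app = {
>>>         "id": "/consul",                       # placeholder app id
>>>         "instances": 3,                        # set to the number of slaves
>>>         # at most one instance per hostname, i.e. one per slave
>>>         "constraints": [["hostname", "UNIQUE"]],
>>>         "container": {
>>>             "type": "DOCKER",
>>>             "docker": {"image": "your/consul-image", "network": "HOST"},
>>>         },
>>>     }
>>>     requests.post("http://marathon.example.com:8080/v2/apps",
>>>                   json=app).raise_for_status()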
>>>
>>> On Thu, Mar 12, 2015 at 4:54 AM, Aaron Carey <[email protected]> wrote:
>>>
>>>>  Hi All,
>>>>
>>>> In setting up our cluster, we require things like consul to be running
>>>> on all of our nodes. I was just wondering if there was any sort of best
>>>> practice (or a scheduler perhaps) that people could share for this sort of
>>>> thing?
>>>>
>>>> Currently the approach is to use Salt to provision each node and add the
>>>> consul/mesos-slave processes and so on to it, but it'd be nice to remove
>>>> the dependency on Salt.
>>>>
>>>> Thanks,
>>>> Aaron
>>>>
>>>
>>>
>>>


-- 

https://github.com/mindscratch
https://www.google.com/+CraigWickesser
https://twitter.com/mind_scratch
https://twitter.com/craig_links
