Hi Petr,

It would be great if you could also share your use case here; I can pass
your feedback and use case along to the Magnum community to improve
Magnum's support for Mesos.

Thanks,

Guangya

On Tue, Feb 16, 2016 at 4:20 PM, Guangya Liu <[email protected]> wrote:

> Hi Petr,
>
> Have you tried Magnum (https://github.com/openstack/magnum)? It is the
> container service in OpenStack, and it leverages HEAT to integrate with
> Kubernetes, Swarm, and Mesos. With Magnum you do not need to maintain
> your own HEAT templates; Magnum manages them for you, which is simpler
> than using HEAT directly.
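>
> To make that concrete, here is a minimal sketch, assuming the
> python-magnumclient v1 API: you create a bay from a baymodel, and Magnum
> generates and drives the HEAT stack for you. All names and credentials
> below are placeholders.
>
>     # Sketch only: assumes python-magnumclient and an existing baymodel.
>     from magnumclient.v1 import client as magnum_client
>
>     # Placeholder credentials for a devstack-style environment.
>     magnum = magnum_client.Client(
>         username="demo",
>         api_key="password",
>         project_name="demo",
>         auth_url="http://keystone:5000/v2.0",
>     )
>
>     # A baymodel (created beforehand) records the COE type (mesos),
>     # image, flavor, and so on; the bay is the running cluster.
>     bay = magnum.bays.create(
>         name="mesos-bay",
>         baymodel_id="my-mesos-baymodel",  # hypothetical baymodel UUID
>         node_count=3,
>     )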
>
> Magnum now supports both scaling up and scaling down. When scaling down,
> Magnum selects the node that has no containers or, failing that, the
> fewest containers.
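>
> Continuing the sketch above (same client and bay, still assuming the v1
> API): scaling is just a patch of node_count, and Magnum updates the
> underlying HEAT stack, removing the emptiest node first.
>
>     # Shrink the bay from 3 nodes to 2; Magnum picks the node with no
>     # containers (or the fewest) to delete.
>     magnum.bays.update(bay.uuid, [
>         {"op": "replace", "path": "/node_count", "value": 2},
>     ])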
>
> Mesos now supports host maintenance (
> https://github.com/apache/mesos/blob/master/docs/maintenance.md), which
> HEAT or Magnum can leverage: when HEAT or Magnum wants to scale down a
> host, a cloud-init script can first put the host into maintenance before
> HEAT deletes it. Scheduling maintenance makes the master emit an
> "InverseOffer" for that host, and you can update your framework to handle
> the "InverseOffer" for the host that is about to be scaled down.
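>
> For the operator side, here is a minimal sketch of what such a cloud-init
> hook could do, based on the maintenance doc above: POST a maintenance
> window for the machine to the master's /maintenance/schedule endpoint
> before HEAT deletes it. The master address, hostname/IP, and window
> length are placeholders.
>
>     import time
>
>     import requests  # assumes the requests library is available
>
>     MASTER = "http://mesos-master:5050"  # placeholder master address
>
>     schedule = {
>         "windows": [{
>             "machine_ids": [{"hostname": "agent-1", "ip": "10.0.0.5"}],
>             "unavailability": {
>                 # Start draining now, for one hour (values in nanoseconds).
>                 "start": {"nanoseconds": int(time.time() * 1e9)},
>                 "duration": {"nanoseconds": 3600 * 10**9},
>             },
>         }],
>     }
>
>     requests.post(MASTER + "/maintenance/schedule", json=schedule)
>
> And on the framework side, a rough sketch of watching for "InverseOffer"s
> through the v1 scheduler HTTP API. This elides everything a real
> framework needs (the Mesos-Stream-Id header, explicit ACCEPT/DECLINE
> calls, reconnection) and only shows where inverse offers appear in the
> event stream; the exact event layout differs across Mesos versions, so
> treat it as a guide rather than a reference implementation.
>
>     import json
>
>     import requests
>
>     MASTER = "http://mesos-master:5050"  # placeholder master address
>
>     subscribe = {
>         "type": "SUBSCRIBE",
>         "subscribe": {
>             "framework_info": {"user": "root", "name": "my-framework"},
>         },
>     }
>
>     resp = requests.post(MASTER + "/api/v1/scheduler", json=subscribe,
>                          headers={"Accept": "application/json"},
>                          stream=True)
>
>     def events(raw):
>         """Yield JSON events from the RecordIO-framed response body."""
>         buf = b""
>         for chunk in raw.iter_content(chunk_size=1024):
>             buf += chunk
>             while b"\n" in buf:
>                 length, _, rest = buf.partition(b"\n")
>                 n = int(length)
>                 if len(rest) < n:
>                     break  # wait for the rest of this record
>                 yield json.loads(rest[:n])
>                 buf = rest[n:]
>
>     for event in events(resp):
>         # Depending on the Mesos version, inverse offers arrive inside
>         # the OFFERS event or as a separate INVERSE_OFFERS event.
>         if event["type"] == "OFFERS":
>             inverse = event["offers"].get("inverse_offers", [])
>         elif event["type"] == "INVERSE_OFFERS":
>             inverse = event["inverse_offers"]["inverse_offers"]
>         else:
>             inverse = []
>         for offer in inverse:
>             agent = offer["agent_id"]["value"]
>             print("Agent %s is being drained; migrate its tasks." % agent)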
>
> Thanks,
>
> Guangya
>
>
> On Tue, Feb 16, 2016 at 4:02 PM, Petr Novak <[email protected]> wrote:
>
>> Hello,
>> we are considering adopting Mesos, but at the same time we need to run
>> it on top of OpenStack in some places. My main question is whether and
>> how autoscaling defined via HEAT templates works together with Mesos,
>> and how it has to be done. I assume that scaling up is not much of a
>> problem: when Mesos detects more resources it notifies the frameworks,
>> which might scale based on their built-in strategies, though I assume
>> this can't be defined in HEAT templates. Scaling down, however, requires
>> some cooperation between Mesos and HEAT. Do I have to update the Mesos
>> frameworks' source code to somehow listen to OpenStack events, or
>> something like that?
>>
>> Is there any ongoing effort from Mesosphere and OpenStack to integrate
>> more closely in this regard?
>>
>> Many thanks for any pointers regarding other possible problems, and for
>> any clarification,
>> Petr
>>
>
>
