From: Joe Gordon <joe.gord...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Monday, February 9, 2015 at 6:41 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum



On Mon, Feb 9, 2015 at 6:00 AM, Steven Dake (stdake) 
<std...@cisco.com> wrote:


On 2/9/15, 3:02 AM, "Thierry Carrez" 
<thie...@openstack.org> wrote:

>Adrian Otto wrote:
>> [...]
>> We have multiple options for solving this challenge. Here are a few:
>>
>> 1) Cherry pick scheduler code from Nova, which already has a working
>>filter scheduler design.
>> 2) Integrate swarmd to leverage its scheduler[2].
>> 3) Wait for Gantt, when the Nova Scheduler is moved out of Nova.
>>This is expected to happen about a year from now, possibly sooner.
>> 4) Write our own filter scheduler, inspired by Nova.
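
For reference, the filter scheduler mentioned in options 1 and 4 works by
filtering out hosts that cannot satisfy a request and then weighing the
surviving candidates to pick a winner.  A minimal sketch of that pattern in
Python, with made-up names rather than Nova's actual filter and weigher
classes:

# Hypothetical illustration of the filter-and-weigh pattern; not Nova's code.
from dataclasses import dataclass
from typing import List

@dataclass
class Host:
    name: str
    free_ram_mb: int
    free_containers: int

# A filter rejects hosts that cannot satisfy the request at all.
def ram_filter(host: Host, requested_ram_mb: int) -> bool:
    return host.free_ram_mb >= requested_ram_mb

# A weigher ranks the hosts that survived filtering.
def free_capacity_weigher(host: Host) -> float:
    return host.free_ram_mb + 100 * host.free_containers

def schedule(hosts: List[Host], requested_ram_mb: int) -> Host:
    candidates = [h for h in hosts if ram_filter(h, requested_ram_mb)]
    if not candidates:
        raise RuntimeError("no host satisfies the request")
    return max(candidates, key=free_capacity_weigher)

hosts = [Host("node-1", 2048, 5), Host("node-2", 8192, 2), Host("node-3", 512, 9)]
print(schedule(hosts, requested_ram_mb=1024).name)  # -> node-2
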
>
>I haven't looked enough into Swarm to answer that question myself, but
>how much would #2 tie Magnum to Docker containers?
>
>There is value for Magnum to support other container engines / formats
>(think Rocket/Appc) in the long run, so we should avoid early design
>choices that would prevent such support in the future.

Thierry,
Magnum has a Bay object type, which represents the underlying cluster
architecture used.  This could be Kubernetes, raw Docker, swarmd, or some
future invention.  This way Magnum can grow independently of the
underlying technology and still provide a satisfactory user experience amid
the chaos that is the container development world :)
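
To make the idea concrete, here is a rough sketch in Python with hypothetical
names (Magnum's actual objects differ): a bay ties a named cluster type to the
template that provisions it, so the API stays the same no matter which engine
sits underneath.

# Hypothetical sketch of the bay idea; not Magnum's actual data model.
from dataclasses import dataclass

@dataclass
class BayModel:
    """Describes how a cluster of a given type is built."""
    name: str
    coe: str            # cluster type: "kubernetes", "swarm", "mesos", ...
    heat_template: str  # Heat template that provisions the cluster

@dataclass
class Bay:
    """A running cluster created from a BayModel."""
    name: str
    model: BayModel
    node_count: int

k8s = BayModel(name="k8s-small", coe="kubernetes", heat_template="kubecluster.yaml")
bay = Bay(name="demo", model=k8s, node_count=3)
print(f"{bay.name}: {bay.model.coe} cluster with {bay.node_count} nodes")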

While I don't disagree with anything said here, this does sound a lot like 
https://xkcd.com/927/


Andrew had suggested offering a unified standard user experience and API.  I 
think that matches the 927 comic pretty well.  Instead, I think we should offer 
each type of system through APIs that are similar in nature but still expose 
the native features of that system.  In other words, we will offer integration 
across the varied container landscape with OpenStack.

We should strive to be conservative and pragmatic in the systems we support, and 
only support container schedulers and container managers that have become 
strongly emergent systems.  At this point that means Docker and Kubernetes.  
Mesos might fit that definition as well.  Swarmd and Rocket are not yet strongly 
emergent, but they show promise of becoming so.  As a result, they are clearly 
systems we should be thinking about for our roadmap.  All of these systems 
present very similar operational models.

At some point competition will choke off new system designs, placing an upper 
bound on the number of systems we have to deal with.

Regards
-steve



We will absolutely support relevant container technology, likely through
new Bay formats (which are really just Heat templates).

Regards
-steve

>
>--
>Thierry Carrez (ttx)
>

