I would recommend against implementing a lowest common denominator API for
the COEs "right now." It's too early to tell whether the COEs will come to
be seen as a commodity (where in the long run they may all perform
relatively equally for the majority of workloads, in which case why do you
care to have a choice of COE?), or continue to be
specialized/differentiated. If you assume that having the choice to
provision more than one COE from the same system is valuable, then it is
logical that users value the differentiation among the COEs in some way.
If the COEs are differentiated, and you value that, then you likely want
to avoid a lowest-common-denominator API, because such an API abstracts
you away from the differentiation that you value.

Kind regards,
-Keith



On 4/19/16, 10:18 AM, "Hongbin Lu" <hongbin...@huawei.com> wrote:

>Sorry, it is too late to adjust the schedule now, but I don't mind having
>a pre-discussion here. If you have opinions/ideas on this topic but
>cannot attend the session [1], we'd like to have your inputs in this ML or
>in the etherpad [2]. This will help to set the stage for the session.
>
>For background, Magnum supports provisioning Container Orchestration
>Engines (COEs), including Kubernetes, Docker Swarm and Apache Mesos, on
>top of Nova instances. After the provisioning, users need to use the
>native COE APIs to manage containers (and/or other COE resources); a
>rough sketch of that workflow follows the questions below. At the Austin
>summit, we will have a session to discuss whether it makes sense to
>build a common abstraction layer for the supported COEs. If you think it
>is a good idea, it would be great if you could elaborate on the details.
>For example, answering the following questions could be useful:
>* Which abstraction(s) are you looking for (e.g., container, pod)?
>* What are your use cases for the abstraction(s)?
>* How do the native APIs provided by the individual COEs fail to satisfy
>your requirements?
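>
>To make the status quo concrete, here is a rough sketch of the current
>two-step workflow; the client calls and credentials below are
>illustrative, not verified code:
>
>    from keystoneauth1.identity import v3
>    from keystoneauth1 import session
>    from magnumclient.v1 import client as magnum_client
>
>    # Authenticate against Keystone (illustrative credentials).
>    auth = v3.Password(auth_url='http://keystone:5000/v3',
>                       username='demo', password='secret',
>                       project_name='demo', user_domain_id='default',
>                       project_domain_id='default')
>    magnum = magnum_client.Client(session=session.Session(auth=auth))
>
>    # Step 1: Magnum provisions the COE on Nova instances.
>    bay = magnum.bays.create(name='k8s-bay',
>                             baymodel_id='<baymodel-uuid>',
>                             node_count=2)
>
>    # Step 2: containers are then managed through the native COE API
>    # (kubectl, docker, marathon), not through Magnum.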
>
>If you think it is a bad idea, I would love to hear your inputs as well:
>* Why is it a bad idea?
>* If there is no common abstraction, how should we address the pain of
>using the native COE APIs, as reported below?
>
>[1] 
>https://www.openstack.org/summit/austin-2016/summit-schedule/events/9102
>[2] https://etherpad.openstack.org/p/newton-magnum-unified-abstraction
>
>Best regards,
>Hongbin
>
>> -----Original Message-----
>> From: Fox, Kevin M [mailto:kevin....@pnnl.gov]
>> Sent: April-18-16 6:13 PM
>> To: OpenStack Development Mailing List (not for usage questions);
>> Flavio Percoco
>> Cc: foundat...@lists.openstack.org
>> Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all]
>> One Platform - Containers/Bare Metal? (Re: Board of Directors Meeting)
>> 
>> I'd love to attend, but this is right on top of the app catalog meeting.
>> I think the app catalog might be one of the primary users of a
>> cross-COE API.
>> 
>> At a minimum, we'd like to be able to store URLs for
>> Kubernetes/Swarm/Mesos templates and have an API to kick off a workflow
>> in Horizon that has Magnum start up a new instance of the template the
>> user selected.
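>> 
>> To make the shape of that concrete, the call we would want is roughly
>> the following (entirely hypothetical -- no such API exists today, and
>> the helper functions are placeholders):
>> 
>>     # Hypothetical cross-COE "deploy this template" entry point.
>>     def deploy_catalog_entry(bay, template_url):
>>         """Dispatch a stored template URL to the bay's native COE."""
>>         if bay.coe == 'kubernetes':
>>             return kubectl_create(bay, template_url)     # placeholder
>>         elif bay.coe == 'swarm':
>>             return docker_compose_up(bay, template_url)  # placeholder
>>         elif bay.coe == 'mesos':
>>             return marathon_submit(bay, template_url)    # placeholder
>>         raise ValueError('unsupported COE: %s' % bay.coe)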
>> 
>> Thanks,
>> Kevin
>> ________________________________________
>> From: Hongbin Lu [hongbin...@huawei.com]
>> Sent: Monday, April 18, 2016 2:09 PM
>> To: Flavio Percoco; OpenStack Development Mailing List (not for usage
>> questions)
>> Cc: foundat...@lists.openstack.org
>> Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all]
>> One Platform - Containers/Bare Metal? (Re: Board of Directors Meeting)
>> 
>> Hi all,
>> 
>> Magnum will have a fishbowl session to discuss whether it makes sense
>> to build a common abstraction layer for all COEs (Kubernetes, Docker
>> Swarm and Mesos):
>> 
>> https://www.openstack.org/summit/austin-2016/summit-schedule/events/9102
>> 
>> Frankly, this is a controversial topic, since I have heard both
>> agreement and disagreement from different people. It would be great if
>> all of you could join the session and share your opinions and use
>> cases. I hope we will have a productive discussion.
>> 
>> Best regards,
>> Hongbin
>> 
>> > -----Original Message-----
>> > From: Flavio Percoco [mailto:fla...@redhat.com]
>> > Sent: April-12-16 8:40 AM
>> > To: OpenStack Development Mailing List (not for usage questions)
>> > Cc: foundat...@lists.openstack.org
>> > Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all]
>> > One Platform - Containers/Bare Metal? (Re: Board of Directors Meeting)
>> >
>> > On 11/04/16 16:53 +0000, Adrian Otto wrote:
>> > >Amrith,
>> > >
>> > >I respect your point of view, and agree that the idea of a common
>> > >compute API is attractive... until you think a bit deeper about what
>> > >that would mean. We seriously considered a "global" compute API at
>> > >the time we were first contemplating Magnum. However, what we came
>> > >to learn through the journey of understanding the details of how
>> > >such a thing would be implemented was that such an API would either
>> > >be (1) the lowest common denominator (LCD) of all compute types, or
>> > >(2) an exceedingly complex interface.
>> > >
>> > >You expressed a sentiment below that trying to offer choices for VM,
>> > >Bare Metal (BM), and Containers for Trove instances "adds
>> > >considerable complexity". Roughly the same complexity would
>> > >accompany the use of a comprehensive compute API. I suppose you were
>> > >imagining an LCD approach. If that's what you want, just use the
>> > >existing Nova API, and load different compute drivers on different
>> > >host aggregates. A single Nova client can produce VM, BM (Ironic),
>> > >and Container (libvirt-lxc) instances all with a common API (Nova)
>> > >if it's configured in this way. That's what we do. Flavors determine
>> > >which compute type you get.
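>> > >
>> > >For illustration, the same client code can cover all three cases
>> > >with only the flavor changing; this is a rough sketch and the
>> > >flavor/image names are invented:
>> > >
>> > >    from novaclient import client as nova_client
>> > >
>> > >    # 'sess' is an authenticated keystoneauth session.
>> > >    nova = nova_client.Client('2', session=sess)
>> > >
>> > >    def boot(name, flavor_name):
>> > >        # The flavor maps to a host aggregate, which selects the
>> > >        # virt driver: libvirt (VM), ironic (BM), or libvirt-lxc.
>> > >        flavor = nova.flavors.find(name=flavor_name)
>> > >        image = nova.images.find(name='fedora-23')
>> > >        return nova.servers.create(name, image, flavor)
>> > >
>> > >    boot('db-vm', 'm1.large')    # virtual machine
>> > >    boot('db-bm', 'baremetal')   # Ironic bare metal
>> > >    boot('db-ct', 'lxc.small')   # libvirt-lxc container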
>> > >
>> > >If what you meant is that you could tap into the power of all the
>> > >unique characteristics of each of the various compute types (through
>> > >some modular extensibility framework), you'll likely end up with
>> > >complexity in Trove that is comparable to integrating with the
>> > >native upstream APIs, along with the disadvantage of waiting for
>> > >OpenStack to continually catch up to the pace of change of the
>> > >various upstream systems on which it depends. This is a recipe for
>> > >disappointment.
>> > >
>> > >We concluded that wrapping native APIs is a mistake, particularly
>> > >when they are sufficiently different from what the Nova API already
>> > >offers. Containers APIs have limited similarities, so when you try
>> > >to make a universal interface to all of them, you end up with a
>> > >really complicated mess. It would be even worse if we tried to
>> > >accommodate all the unique aspects of BM and VM as well. Magnum's
>> > >approach is to offer the upstream native APIs for the different
>> > >container orchestration engines (COEs), and compose Bays for them to
>> > >run on that are built from the compute types that OpenStack
>> > >supports. We do this by using different Heat orchestration templates
>> > >(and conditional templates) to arrange a COE on the compute type of
>> > >your choice. With that said, there are still gaps where not all
>> > >storage or network drivers work with Ironic, and there are
>> > >non-trivial security hurdles to clear to safely use Bays composed of
>> > >libvirt-lxc instances in a multi-tenant environment.
>> > >
>> > >My suggestion to get what you want for Trove is to see if the cloud
>> > >has Magnum, and if it does, create a bay with the flavor type
>> > >specified for whatever compute type you want, and then use the
>> > >native API for the COE you selected for that bay. Start your
>> > >instance on the COE, just like you use Nova today. This way, you
>> > >have low complexity in Trove, and you can scale both the number of
>> > >instances of your data nodes (containers), and the infrastructure on
>> > >which they run (Nova instances).
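>> > >
>> > >In sketch form (the service type and the bay-update patch format
>> > >here are from memory, so treat them as assumptions):
>> > >
>> > >    # 1. Does this cloud have Magnum?
>> > >    has_magnum = any(s.type == 'container'
>> > >                     for s in keystone.services.list())
>> > >
>> > >    if has_magnum:
>> > >        # 2. Create a bay sized for the datastore.
>> > >        bay = magnum.bays.create(name='trove-mysql',
>> > >                                 baymodel_id=model_id,
>> > >                                 node_count=3)
>> > >
>> > >        # 3. Scale the infrastructure layer (Nova instances)...
>> > >        magnum.bays.update(bay.uuid,
>> > >                           [{'op': 'replace',
>> > >                             'path': '/node_count',
>> > >                             'value': 5}])
>> > >
>> > >        # 4. ...and scale the data nodes through the native COE
>> > >        #    API, e.g. resizing a Kubernetes replication controller.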
>> >
>> >
>> > I've been researching this area and I've reached pretty much the
>> > same conclusion. I've had moments of wondering whether creating bays
>> > is something Trove should do, but I now think it should.
>> >
>> > The need to handle the native APIs is the part I find a bit painful,
>> > as it means more code in Trove to provide these provisioning
>> > facilities. I wonder if a common *library* would help here, at least
>> > to handle those "simple" cases. Anyway, I look forward to chatting
>> > with you all about this.
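>> >
>> > Roughly the surface I have in mind (all names are placeholders, just
>> > to show the scope: provisioning only, nothing COE-specific):
>> >
>> >     class COEDriver(object):
>> >         """Per-COE plugin covering only the 'simple' common cases."""
>> >
>> >         def create_cluster(self, name, node_count):
>> >             raise NotImplementedError
>> >
>> >         def delete_cluster(self, cluster_id):
>> >             raise NotImplementedError
>> >
>> >         def native_credentials(self, cluster_id):
>> >             """Endpoint and TLS material for the COE's own API, so
>> >             callers can drop down to kubectl/docker/marathon."""
>> >             raise NotImplementedError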
>> >
>> > It'd be great if you (and other magnum folks) could join this session:
>> >
>> > https://etherpad.openstack.org/p/trove-newton-summit-container
>> >
>> > Thanks for chiming in, Adrian.
>> > Flavio
>> >
>> > >Regards,
>> > >
>> > >Adrian
>> > >
>> > >
>> > >
>> > >
>> > >    On Apr 11, 2016, at 8:47 AM, Amrith Kumar <amr...@tesora.com>
>> > >    wrote:
>> > >
>> > >    Monty, Dims,
>> > >
>> > >    I read the notes and was similarly intrigued by the idea. In
>> > >    particular, from the perspective of projects like Trove, having
>> > >    a common Compute API is very valuable. It would allow the
>> > >    projects to have a single view of provisioning compute, as we
>> > >    can today with Nova, and get the benefit of bare metal through
>> > >    Ironic, VMs through Nova, and containers through nova-docker.
>> > >
>> > >    With this in place, a project like Trove can offer
>> > >    database-as-a-service on a spectrum of compute infrastructures,
>> > >    as any end-user would expect. Databases don't always make sense
>> > >    in VMs, and while containers are great for quick and dirty
>> > >    prototyping, and VMs are great for much more, there are
>> > >    databases that will in production only be meaningful on bare
>> > >    metal.
>> > >
>> > >    Therefore, if there is a move towards offering a common API for
>> > >    VMs, bare metal, and containers, that would be huge.
>> > >
>> > >    Without such a mechanism, consuming containers in Trove adds
>> > >    considerable complexity and leads to a very sub-optimal
>> > >    architecture (IMHO). FWIW, a working prototype of Trove
>> > >    leveraging Ironic, VMs, and nova-docker to provision databases
>> > >    is something I worked on a while ago, and have not revisited it
>> > >    since then (once the direction appeared to be Magnum for
>> > >    containers).
>> > >
>> > >    With all that said, I don't want to downplay the value in a
>> > >    container-specific API. I'm merely observing that, from the
>> > >    perspective of a consumer of computing services, a common
>> > >    abstraction is incredibly valuable.
>> > >
>> > >    Thanks,
>> > >
>> > >    -amrith
>> > >
>> > >
>> > >        -----Original Message-----
>> > >        From: Monty Taylor [mailto:mord...@inaugust.com]
>> > >        Sent: Monday, April 11, 2016 11:31 AM
>> > >        To: Allison Randal <alli...@lohutok.net>; Davanum Srinivas
>> > >        <dava...@gmail.com>; foundat...@lists.openstack.org
>> > >        Cc: OpenStack Development Mailing List (not for usage
>> > >        questions) <openstack-dev@lists.openstack.org>
>> > >        Subject: Re: [openstack-dev] [OpenStack Foundation]
>> > >        [board][tc][all] One Platform - Containers/Bare Metal? (Re:
>> > >        Board of Directors Meeting)
>> > >
>> > >        On 04/11/2016 09:43 AM, Allison Randal wrote:
>> > >
>> > >                On Wed, Apr 6, 2016 at 1:11 PM, Davanum Srinivas
>> > >                <dava...@gmail.com> wrote:
>> > >
>> > >                    Reading unofficial notes [1], I found one topic
>> > >                    very interesting:
>> > >                    One Platform - How do we truly support
>> > >                    containers and bare metal under a common API
>> > >                    with VMs? (Ironic, Nova, adjacent communities
>> > >                    e.g. Kubernetes, Apache Mesos etc)
>> > >
>> > >                    Anyone present at the meeting, please expand on
>> > >                    those few notes on the etherpad? And how, if at
>> > >                    all, is this feedback getting back to the
>> > >                    projects?
>> > >
>> > >
>> > >            It was really two separate conversations that got
>> > >            conflated in the summary. One conversation was just
>> > >            being supportive of bare metal, VMs, and containers
>> > >            within the OpenStack umbrella. The other conversation
>> > >            started with Monty talking about his work on shade, and
>> > >            how it wouldn't exist if more APIs were focused on the
>> > >            way users consume the APIs, and less on being an
>> > >            expression of the implementation details of each
>> > >            project.
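>> > >
>> > >            (For flavor, the shade surface looks roughly like the
>> > >            following, one call per user intention; the image and
>> > >            flavor names are invented:
>> > >
>> > >                import shade
>> > >
>> > >                cloud = shade.openstack_cloud(cloud='my-cloud')
>> > >                cloud.create_server('web-1',
>> > >                                    image='ubuntu-14.04',
>> > >                                    flavor='m1.small',
>> > >                                    wait=True, auto_ip=True)
>> > >
>> > >            regardless of which cloud or driver is underneath.)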
>> > >
>> > >            OpenStackClient was mentioned as a unified CLI for
>> > >            OpenStack focused more on the way users consume the
>> > >            CLI. (OpenStackSDK wasn't mentioned, but falls in the
>> > >            same general category of work.)
>> > >
>> > >            i.e. There wasn't anything new in the conversation; it
>> > >            was more a matter of the developers/TC members on the
>> > >            board sharing information about work that's already
>> > >            happening.
>> > >
>> > >
>> > >        I agree with that - but would like to clarify the 'bare
>> > >        metal, VMs and containers' part a bit. (And in fact, I was
>> > >        concerned in the meeting that the messaging around this
>> > >        would be confusing, because 'supporting bare metal' and
>> > >        'supporting containers' mean two different things, but we
>> > >        use one phrase to talk about both.)
>> > >
>> > >        It's abundantly clear at the strategic level that having
>> > >        OpenStack be able to provide both VMs and Bare Metal as two
>> > >        different sorts of resources (ostensibly but not
>> > >        prescriptively via nova) is one of our advantages. We
>> > >        wanted to underscore how important it is to be able to do
>> > >        that, and wanted to underscore that so that it's really
>> > >        clear how important it is any time the "but cloud should
>> > >        just be VMs" sentiment arises.
>> > >
>> > >        The way we discussed "supporting containers" was quite
>> > >        different and was not about nova providing containers.
>> > >        Rather, it was about reaching out to our friends in other
>> > >        communities and working with them on making OpenStack the
>> > >        best place to run things like kubernetes or docker swarm.
>> > >        Those are systems that ultimately need to run, and it seems
>> > >        that good integration (like kuryr with libnetwork) can
>> > >        provide a really strong story. I think pretty much everyone
>> > >        agrees that there is not much value to us or the world for
>> > >        us to compete with kubernetes or docker.
>> > >
>> > >        So, we do want to be supportive of bare metal and
>> > >        containers - but the specific _WAY_ we want to be
>> > >        supportive of those things is different for each one.
>> > >
>> > >        Monty
>> > >
>> > >
>> > >
>> >
>> >
>> > --
>> > @flaper87
>> > Flavio Percoco
>


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
