On 01/18/2015 09:23 AM, Jay Lau wrote:
Thanks Steven, more questions/comments inline.

2015-01-19 0:11 GMT+08:00 Steven Dake <sd...@redhat.com <mailto:sd...@redhat.com>>:

    On 01/18/2015 06:39 AM, Jay Lau wrote:
    Thanks Steven, just some questions/comments here:

    1) For native docker support, do we have some project to handle
    the network? The current native docker support does not have any
    logic for network management; are we going to leverage neutron or
    nova-network for this, just as nova-docker does?
    We can just use flannel for both of these use cases.  One way to
    approach this is to expect that docker networks will always be set
    up the same way, connecting into a flannel network.

What about introducing neutron/nova-network support for native docker containers, just as nova-docker does?



Does that mean introducing an agent on the uOS? I'd rather not have agents, since all of these uOS systems have wonky filesystem layouts and there is no easy way to customize them, with dib for example.
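
For concreteness, here is a minimal sketch of how a bay node could point the docker daemon at the subnet flannel leases to it, with no extra agent.  The /run/flannel/subnet.env path and the FLANNEL_* keys are what stock flannel writes; everything else is illustration on my part, not settled Magnum code:

    # Read the lease flannel writes for this node and turn it into
    # docker daemon flags.  flannel writes KEY=VALUE lines such as
    # FLANNEL_SUBNET=10.1.15.1/24 and FLANNEL_MTU=1450.

    def read_flannel_env(path="/run/flannel/subnet.env"):
        env = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and "=" in line:
                    key, _, value = line.partition("=")
                    env[key] = value
        return env

    def docker_daemon_args(env):
        # --bip pins the docker0 bridge inside the flannel-assigned
        # subnet; --mtu accounts for the encapsulation overhead.
        return ["--bip=%s" % env["FLANNEL_SUBNET"],
                "--mtu=%s" % env["FLANNEL_MTU"]]

    if __name__ == "__main__":
        print(" ".join(docker_daemon_args(read_flannel_env())))

The idea is that Heat can lay this down identically on every node at orchestration time, so docker just follows flannel along.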

    2) For k8s and swarm, we can leverage the schedulers in those
    container management tools, but what about native docker support?
    How do we handle resource scheduling for native docker containers?

    I am not clear on how to handle native Docker scheduling if a bay
    has more than one node.  I keep hoping someone in the community
    will propose something that doesn't introduce an agent dependency
    in the OS.

My thinking is this: add a new scheduler just like what nova/cinder does now, and then we can migrate to gantt once it becomes mature.  A rough sketch of the pattern is below.  Comments?
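
To make that concrete, here is the nova-style filter-and-weigh pattern I have in mind (every name below is made up for illustration; nothing here is an existing Magnum or nova API):

    # Hypothetical scheduler: filter out nodes that can't fit the
    # request, then weigh the survivors and pick the best one.

    class Node(object):
        def __init__(self, hostname, free_memory_mb, container_count):
            self.hostname = hostname
            self.free_memory_mb = free_memory_mb
            self.container_count = container_count

    def ram_filter(node, request):
        # Drop nodes that cannot fit the requested memory.
        return node.free_memory_mb >= request["memory_mb"]

    def spread_weigher(node):
        # Prefer nodes running fewer containers (spread, not pack).
        return -node.container_count

    def schedule(nodes, request):
        candidates = [n for n in nodes if ram_filter(n, request)]
        if not candidates:
            raise RuntimeError("no node satisfies the request")
        return max(candidates, key=spread_weigher)

    nodes = [Node("node-1", 2048, 7), Node("node-2", 4096, 3)]
    print(schedule(nodes, {"memory_mb": 1024}).hostname)  # node-2

The point is only the shape: filters and weighers stay pluggable, so swapping the whole thing out for gantt later should be mechanical.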

Cool, that WFM.  Too bad we can't just use gantt out of the gate.

Regards
-steve


    Regards
    -steve


    Thanks!

    2015-01-18 8:51 GMT+08:00 Steven Dake <sd...@redhat.com
    <mailto:sd...@redhat.com>>:

        Hi folks and especially Magnum Core,

        Magnum Milestone #1 should be released early this coming
        week.  I wanted to kick off discussions around milestone #2
        since Milestone #1 development is mostly wrapped up.

        The milestone #2 blueprints:
        https://blueprints.launchpad.net/magnum/milestone-2

        The overall goal of Milestone #1 was to make Magnum usable
        for developers.  The overall goal of Milestone #2 is to make
        Magnum usable by operators and their customers.  To do this
        we are implementing blueprints like multi-tenant,
        horizontal-scale, and the introduction of CoreOS in addition
        to Fedora Atomic as a Container uOS.  We also plan to
        introduce some updates to allow bays to be more scalable.  We
        want bays to scale to more nodes manually (short term), as
        well as automatically (longer term).  Finally we want to tidy
        up some of the nit-picky things about Magnum that none of the
        core developers really like at the moment.  One example is
        the magnum-bay-status blueprint which will prevent the
        creation of pods/services/replicationcontrollers until a bay
        has completed orchestration via Heat. Our final significant
        blueprint for milestone #2 is the ability to launch our
        supported uOS on bare metal using Nova's Ironic plugin and
        the baremetal flavor.  As always, we want to improve our unit
        testing from what is now 70% to ~80% in the next milestone.
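
        To give a flavor of the magnum-bay-status check, the gate
        could be as simple as the sketch below.  The heatclient calls
        are real python-heatclient API; the bay attributes and the
        exception are assumptions for illustration only:

            # Refuse pod/service/rc creation until the bay's Heat
            # stack has finished orchestration.
            from heatclient.client import Client as HeatClient

            COMPLETE = ("CREATE_COMPLETE", "UPDATE_COMPLETE")

            def assert_bay_ready(heat_endpoint, token, bay):
                heat = HeatClient("1", endpoint=heat_endpoint,
                                  token=token)
                # bay.stack_id / bay.uuid are assumed attributes.
                stack = heat.stacks.get(bay.stack_id)
                if stack.stack_status not in COMPLETE:
                    raise RuntimeError(
                        "bay %s is not ready (stack status: %s)"
                        % (bay.uuid, stack.stack_status))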

        Please have a look at the blueprints and feel free to comment
        on this thread or in the blueprints directly.  If you would
        like to see different blueprints tackled during milestone #2
        that feedback is welcome, or if you think the core team[1] is
        on the right track, we welcome positive kudos too.

        If you would like to see what we tackled in Milestone #1, the
        code should be tagged and ready to run Tuesday, January 20th.
        Master should work well enough now, and the developer
        quickstart guide is mostly correct.

        The Milestone #1 blueprints are here for comparison's sake:
        https://blueprints.launchpad.net/magnum/milestone-1

        Regards,
        -steve


        [1] https://review.openstack.org/#/admin/groups/473,members

        




    --
    Thanks,

    Jay Lau (Guangya Liu)






--
Thanks,

Jay Lau (Guangya Liu)

