On 09/30/2014 12:10 PM, John Griffith wrote:

On Tue, Sep 30, 2014 at 7:35 AM, John Garbutt <j...@johngarbutt.com> wrote:

    On 30 September 2014 14:04, joehuang <joehu...@huawei.com> wrote:
    > Hello, Dear TC and all,
    > Large cloud operators prefer to deploy multiple OpenStack
    instances (as different zones), rather than a single monolithic
    OpenStack instance, for these reasons:
    > 1) Multiple data centers distributed geographically;
    > 2) Multi-vendor business policy;
    > 3) Server nodes scale up modularly, from hundreds up to a million;
    > 4) Fault and maintenance isolation between zones (only REST
    interfaces between zones).
    > At the same time, they also want to integrate these OpenStack
    instances into one cloud. Instead of a proprietary orchestration
    layer, they want to use the standard OpenStack framework for
    Northbound API compatibility with Heat/Horizon or other third-party
    ecosystem apps.
    > We call this pattern "OpenStack Cascading", with the proposal
    described in [1][2]. PoC live demo videos can be found at [3][4].
    > Nova, Cinder, Neutron, Ceilometer and Glance (optional) are
    involved in the OpenStack cascading.
    > We kindly ask for a cross-program design summit session to discuss
    OpenStack cascading and the contribution to Kilo.
    > We kindly invite those who are interested in OpenStack
    cascading to work together and contribute it to OpenStack.
    > (I applied for the “other projects” track [5], but it would be
    better to have a discussion as a formal cross-program session,
    because many core programs are involved.)
    > [1] wiki:
    > [2] PoC source code: https://github.com/stackforge/tricircle
    > [3] Live demo video at YouTube:
    > [4] Live demo video at Youku (low quality, for those who can't
    access YouTube): http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
    > [5]
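
The cascading pattern above amounts to the top-level OpenStack running
"proxy" services that re-issue each API call against a child OpenStack's
public REST API. Below is a minimal, hypothetical Python sketch of a
Nova-side proxy; the names are invented for illustration, and the actual
PoC code is in the stackforge/tricircle repo linked above:

    # Hypothetical sketch only -- not the actual Tricircle code. A
    # cascading "proxy" stands in for nova-compute in the parent
    # OpenStack and forwards each request to one cascaded (child)
    # OpenStack, talking to it solely through its Northbound REST API.
    from novaclient import client as nova_client

    class CascadedNovaProxy(object):
        """One proxy per child OpenStack instance (i.e. per zone)."""

        def __init__(self, auth_url, username, password, tenant_name):
            # The child cloud is driven only through its public API,
            # which keeps the zones independently deployable.
            self.child = nova_client.Client("2", username, password,
                                            tenant_name, auth_url)

        def spawn(self, name, image_id, flavor_id):
            # Scheduling and virt-driver work happen entirely in the
            # child; the parent only records what the child hands back.
            return self.child.servers.create(name, image_id, flavor_id)

        def sync_state(self, server_id):
            # Poll the child and mirror instance state upward.
            return self.child.servers.get(server_id).status

The same proxying idea applies to Cinder and Neutron: only REST crosses
the zone boundary, so each zone remains a complete, independently
testable OpenStack.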

    There are etherpads for suggesting cross-project sessions here:

    I am interested in comparing this to Nova's cells concept:

    Cells basically scales out a single datacenter region by aggregating
    multiple child Nova installations with an API cell.

    Each child cell can be tested in isolation, via its own API, before
    joining it up to an API cell, which adds it into the region. Each cell
    logically has its own database and message queue, which helps create
    more independent failure domains. You can use cell-level scheduling to
    restrict people or types of instances to particular subsets of the
    cloud, if required.
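
To make the comparison concrete, this is roughly what the cells v1
wiring looks like (a sketch based on cells-v1-era documentation; option
names and nova-manage flags should be verified against your release).
The API cell and each child cell run nova-cells against their own
database and message queue, and the API cell is then told about its
children:

    # API cell's nova.conf (sketch)
    [cells]
    enable = True
    name = api
    cell_type = api

    # A child cell's nova.conf -- it points at its own database and
    # RabbitMQ, which is where the independent failure domains come from
    [cells]
    enable = True
    name = cell1
    cell_type = compute

    # Register the child with the API cell (cells v1 style):
    #   nova-manage cell create --name=cell1 --cell_type=child \
    #       --username=guest --password=guest --hostname=rabbit.cell1 \
    #       --port=5672 --virtual_host=/ --woffset=1.0 --wscale=1.0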

    It doesn't attempt to aggregate between regions; they are kept
    independent, except for the usual assumption that you have a common
    identity service across all regions.

    It also keeps a single Cinder, Glance, Neutron deployment per region.
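
The "common identity" assumption usually just means one Keystone whose
service catalog carries region-tagged endpoints, so each region keeps
its own Cinder, Glance, and Neutron while users authenticate once. A
hedged sketch using the 2014-era keystoneclient v2.0 API (hostnames and
credentials here are illustrative):

    # Sketch: one Keystone serving two regions by storing one endpoint
    # per region for the same service in a single catalog.
    from keystoneclient.v2_0 import client as ks_client

    keystone = ks_client.Client(username="admin", password="secret",
                                tenant_name="admin",
                                auth_url="http://keystone:5000/v2.0")

    nova_svc = keystone.services.create(name="nova",
                                        service_type="compute",
                                        description="Compute")

    # Same service, one endpoint per region; clients select by region.
    for region in ("RegionOne", "RegionTwo"):
        keystone.endpoints.create(
            region=region,
            service_id=nova_svc.id,
            publicurl="http://nova.%s.example.com:8774/v2" % region.lower())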

I'm starting work on supporting a comparable mechanism to share data between Keystone servers.


    It would be great to get some help hardening, testing, and building
    out more of the cells vision. I suspect we may form a new Nova subteam
    to try to drive this work forward in Kilo, if we can build up
    enough people wanting to work on improving cells.


Interesting idea. To be honest, when TripleO was first announced, what you have here is more along the lines of what I envisioned. It seems this would have some interesting wins in terms of upgrades, migrations, and scaling in general. Anyway, you should propose it on the etherpad as John G (the other John G :)) recommended; I'd love to dig deeper into this.

