So this does seem a lot like cells, but it makes a cells-like pattern appear in the other projects as well.
IMHO the same problems that occur in cells appear here: we are sacrificing consistency in already-problematic systems to gain scale (and to gain more inconsistency along with it). Every time I see 'the parent OpenStack manages many child OpenStacks by using the standard OpenStack API' in that wiki, I wonder how the parent will resolve inconsistencies that exist in the children (likely it can't). How do quotas work across parent and children? How do race conditions get resolved?

IMHO I'd rather stick with the less scalable distributed system we have: iron out its quirks, fix quotas (via whatever that project is named now), split out the nova/... drivers so they can be maintained in various projects, fix the various already-inconsistent state machines that exist, and split the scheduler out into its own project so it can be shared. All of those things improve scale and improve tolerance to individual failures, rather than creating a whole new level of 'pain' via a tightly bound set of proxies and cascading hierarchies. Managing these cascading clusters also seems like an operational nightmare that I'm not sure is justified at the current time, when operators already have enough trouble with the current code bases.

How I imagine this working out (in my view):

* Split out the shared services (gantt, scheduler, quotas...) into real SOA services that everyone can use.
* Have cinder-api, nova-api, and neutron-api integrate with those split-out services to obtain consistent views of the world when performing API operations.
* Have cinder, nova, and neutron provide 'workers' (nova-compute is a basic worker) that can be scaled out across all your clusters and interconnected to a type of conductor node in some manner (MQ?), and have the outcome of cinder-api, nova-api, and neutron-api be a workflow that some service (conductor/s?) ensures occurs reliably (or aborts). This makes it so that cinder-api, nova-api...
can scale at will, conductors can scale at will, and so can worker nodes...
* Profit!

TL;DR: it would seem like this adds more complexity, not less, and I'm not sure complexity is what OpenStack needs more of right now...

-Josh

On Sep 30, 2014, at 6:04 AM, joehuang <joehu...@huawei.com> wrote:

> Hello, Dear TC and all,
>
> Large cloud operators prefer to deploy multiple OpenStack instances (as
> different zones) rather than a single monolithic OpenStack instance, for
> these reasons:
>
> 1) Multiple data centers distributed geographically;
> 2) Multi-vendor business policy;
> 3) Server nodes scale modularly from 00's up to a million;
> 4) Fault and maintenance isolation between zones (REST interface only);
>
> At the same time, they also want to integrate these OpenStack instances
> into one cloud. Instead of a proprietary orchestration layer, they want to
> use the standard OpenStack framework for Northbound API compatibility with
> Heat/Horizon or other third-party ecosystem apps.
>
> We call this pattern "OpenStack Cascading"; the proposal is described in
> the wiki linked below, and a PoC live demo video can be found there as well.
>
> Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in
> OpenStack cascading.
>
> We kindly ask for a cross-program design summit session to discuss
> OpenStack cascading and the contribution to Kilo.
>
> We kindly invite those who are interested in OpenStack cascading to work
> together and contribute it to OpenStack.
> (I applied for the "other projects" track, but it would be better to have
> a discussion as a formal cross-program session, because many core programs
> are involved.)
>
> Wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
> PoC source code: https://github.com/stackforge/tricircle
> Live demo video at YouTube: https://www.youtube.com/watch?v=OSU6PYRz5qY
> Live demo video at Youku (low quality, for those who can't access
> YouTube): http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
> http://firstname.lastname@example.org/msg36395.html
>
> Best Regards
> Chaoyi Huang ( Joe Huang )
> _______________________________________________
> OpenStack-dev mailing list
> OpenStackemail@example.com
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

_______________________________________________
OpenStack-dev mailing list
OpenStackfirstname.lastname@example.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
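[Editor's note] The conductor/worker split Josh sketches above (an API service emits a workflow, and a conductor ensures it runs reliably or aborts) can be illustrated with a minimal sketch. All names and steps below are hypothetical and for illustration only; this is not actual Nova/Cinder code:

```python
# Hedged sketch of a conductor driving a workflow of worker steps.
# On any failure it reverts the steps already completed, so the
# workflow either finishes or cleanly aborts.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Step:
    name: str
    apply: Callable[[dict], None]   # forward action run by a worker
    revert: Callable[[dict], None]  # compensation if a later step fails

@dataclass
class Conductor:
    log: List[str] = field(default_factory=list)

    def run(self, workflow: List[Step], ctx: dict) -> bool:
        done: List[Step] = []
        for step in workflow:
            try:
                step.apply(ctx)
                self.log.append(f"applied {step.name}")
                done.append(step)
            except Exception:
                self.log.append(f"failed {step.name}, aborting")
                # Unwind completed steps in reverse order.
                for finished in reversed(done):
                    finished.revert(ctx)
                    self.log.append(f"reverted {finished.name}")
                return False
        return True

def boom(_ctx: dict) -> None:
    raise RuntimeError("no host available")

# Example 'boot an instance' workflow as an API service might emit it;
# the last step fails, so the conductor releases quota and volume again.
ctx = {"allocated": []}
boot = [
    Step("reserve_quota",
         lambda c: c["allocated"].append("quota"),
         lambda c: c["allocated"].remove("quota")),
    Step("create_volume",
         lambda c: c["allocated"].append("volume"),
         lambda c: c["allocated"].remove("volume")),
    Step("spawn_vm", boom, lambda c: None),
]
ok = Conductor().run(boot, ctx)
```

With this shape, the API services only build workflows, conductors only drive them, and workers only execute individual steps, so each tier can scale independently, which is the property the bullets above are after.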