Hello, Andrew and Tim,

I understand that CERN has a Cells deployment and that there is a subteam to 
address the Cells challenges.

I am copying my reply to John Garbutt here to clarify the difference:

The major difference between Cells and OpenStack cascading is the problem 
domain:
- OpenStack cascading: integrates multi-site / multi-vendor OpenStack 
instances into one cloud with the OpenStack API exposed.
- Cells: a scale-up methodology for a single OpenStack instance.
Therefore, there is no conflict between Cells and OpenStack cascading. They 
can be used for different scenarios, and Cells can also be used as the 
cascaded OpenStack (the child OpenStack).
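As a rough illustration of the cascading idea above (a hypothetical sketch, 
not tricircle code; the zone names, endpoints, and helper function are 
invented for illustration), the parent essentially maps each availability 
zone to a whole child OpenStack instance and forwards API requests to it:

```python
# Illustrative sketch only -- NOT tricircle code. The mapping and the
# forwarding logic below are hypothetical.

# Parent-level view: each availability zone is backed by an entire
# child OpenStack instance, identified here by its Nova API endpoint.
CHILD_ENDPOINTS = {
    "az1": "http://child-os-1.example.com:8774/v2.1",
    "az2": "http://child-os-2.example.com:8774/v2.1",
}


def route_boot_request(server_req: dict) -> tuple[str, dict]:
    """Pick the child OpenStack for a boot request and build the request
    to forward, based on the requested availability zone."""
    az = server_req.get("availability_zone", "az1")
    endpoint = CHILD_ENDPOINTS[az]
    # The parent forwards the request body largely unchanged; the child
    # OpenStack performs the real scheduling inside its own site.
    forwarded = {"url": endpoint + "/servers", "body": {"server": server_req}}
    return az, forwarded


az, req = route_boot_request({"name": "vm1", "availability_zone": "az2"})
```

The point of the sketch is only the division of labor: the parent resolves 
"which site?", while each child OpenStack keeps full, independent control of 
scheduling within its own data center.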

OpenStack cascading also provides cross-data-center L2/L3 networking for a 
tenant.

The "Flavor", "Server Group" (Host Aggregate?), and "Security Group" (the 
concrete problem is not clear to me) issues could be solved in the OpenStack 
cascading solution from an architecture point of view.

Best Regards

Chaoyi Huang ( joehuang )
________________________________________
From: Andrew Laski [andrew.la...@rackspace.com]
Sent: 01 October 2014 3:49
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

On 09/30/2014 03:07 PM, Tim Bell wrote:
>> -----Original Message-----
>> From: John Garbutt [mailto:j...@johngarbutt.com]
>> Sent: 30 September 2014 15:35
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack
>> cascading
>>
>> On 30 September 2014 14:04, joehuang <joehu...@huawei.com> wrote:
>>> Hello, Dear TC and all,
>>>
>>> Large cloud operators prefer to deploy multiple OpenStack instances (as
>> different zones), rather than a single monolithic OpenStack instance, because
>> of these reasons:
>>> 1) Multiple data centers distributed geographically;
>>> 2) Multi-vendor business policy;
>>> 3) Server node counts scale modularly from hundreds up to a million;
>>> 4) Fault and maintenance isolation between zones (only REST
>>> interface);
>>>
>>> At the same time, they also want to integrate these OpenStack instances into
>> one cloud. Instead of proprietary orchestration layer, they want to use 
>> standard
>> OpenStack framework for Northbound API compatibility with HEAT/Horizon or
>> other 3rd ecosystem apps.
>>> We call this pattern "OpenStack cascading", with the proposal described in
>> [1][2]. A PoC live demo video can be found at [3][4].
>>> Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the
>> OpenStack cascading.
>>> We kindly ask for a cross-program design summit session to discuss OpenStack
>> cascading and the contribution to Kilo.
>>> We kindly invite those who are interested in OpenStack cascading to work
>> together and contribute it to OpenStack.
>>> (I applied for “other projects” track [5], but it would be better to
>>> have a discussion as a formal cross program session, because many core
>>> programs are involved )
>>>
>>>
>>> [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
>>> [2] PoC source code: https://github.com/stackforge/tricircle
>>> [3] Live demo video at YouTube:
>>> https://www.youtube.com/watch?v=OSU6PYRz5qY
>>> [4] Live demo video at Youku (low quality, for those who can't access
>>> YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
>>> [5]
>>> http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395
>>> .html
>> There are etherpads for suggesting cross project sessions here:
>> https://wiki.openstack.org/wiki/Summit/Planning
>> https://etherpad.openstack.org/p/kilo-crossproject-summit-topics
>>
>> I am interested in comparing this to Nova's cells concept:
>> http://docs.openstack.org/trunk/config-reference/content/section_compute-
>> cells.html
>>
>> Cells basically scales out a single datacenter region by aggregating 
>> multiple child
>> Nova installations with an API cell.
>>
>> Each child cell can be tested in isolation, via its own API, before joining 
>> it up to an API cell, which adds it into the region. Each cell logically has 
>> its own database
>> database
>> and message queue, which helps get more independent failure domains. You can
>> use cell level scheduling to restrict people or types of instances to 
>> particular
>> subsets of the cloud, if required.
>>
>> It doesn't attempt to aggregate between regions; they are kept independent,
>> except for the usual assumption that you have a common identity between all
>> regions.
>>
>> It also keeps a single Cinder, Glance, Neutron deployment per region.
>>
>> It would be great to get some help hardening, testing, and building out more
>> of the cells vision. I suspect we may form a new Nova subteam to try to
>> drive this work forward in Kilo, if we can build up enough people wanting
>> to work on improving cells.
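For those unfamiliar with the setup John describes, it maps to a small amount 
of nova.conf configuration per cell (an illustrative sketch only; the names 
and values here are examples, see the cells configuration reference linked 
above for the authoritative options):

```ini
; --- API (parent) cell: nova.conf -- illustrative sketch ---
[DEFAULT]
compute_api_class = nova.compute.cells_api.ComputeCellsAPI

[cells]
enable = true
name = api
cell_type = api

; --- Child (compute) cell: nova.conf -- illustrative sketch ---
; (a separate file on the child cell's nodes)
; [cells]
; enable = true
; name = cell1
; cell_type = compute
```

Each child then runs its own database and message queue, which is where the 
independent failure domains John mentions come from.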
>>
> At CERN, we've deployed cells at scale but are finding a number of 
> architectural issues that need resolution in the short term to attain feature 
> parity. A vision of "we all run cells but some of us have only one" is not 
> there yet. Typical examples are flavors, security groups and server groups, 
> all of which are not yet implemented to the necessary levels for cell 
> parent/child.
>
> We would be very keen on agreeing the strategy in Paris so that we can ensure 
> the gap is closed, test it in the gate and that future features cannot 
> 'wishlist' cell support.
>
> Resources can be made available if we can agree the direction but current 
> reviews are not progressing (such as 
> https://bugs.launchpad.net/nova/+bug/1211011)

I am working on putting together this strategy so we can discuss it in
Paris.  I, and perhaps a few others, will be spending time on this in
Kilo so that these things do progress.

There are some good ideas in this thread, and scaling out is a concern we
need to continually work on.  But we do have a solution that addresses
this to an extent, so I think the conversation should be about how we
scale past cells, not replicate it.


>
> Tim
>
>> Thanks,
>> John
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

