Hello, Duncan, 

Thank you for raising these substantial concerns; they are very welcome and important.

I agree with you that the interconnect between leaves should be fairly thin:

During the PoC, Nova/Cinder/Ceilometer/Neutron/Glance (Glance is optional in a 
leaf) in each leaf work independently of the other leaves. The only 
interconnect between two leaves is the tenant's L2/L3 network spanning the 
OpenStack instances, and even that is handled by the L2/L3 proxies located at 
the cascading level: instructions are issued one way, from the corresponding 
L2/L3 proxy down to the leaf.

Also, from the Ceilometer perspective, it must work as a distributed service. 
We roughly estimated the meter data volume generated by a cloud at the 
million-node level: with the current Ceilometer (not including Gnocchi) and a 
sampling period of one minute, it is about 20 GB per minute (a quite rough 
estimate). A single Ceilometer instance is almost impossible for such a 
large-scale distributed cloud, so Ceilometer cascading must be designed very 
carefully.

Our PoC design principle is that a cascaded OpenStack should work passively: 
it has no knowledge of "whether it is running under a cascading scenario" or 
"whether sibling OpenStack instances exist", which reduces the interconnect 
between cascaded OpenStacks as much as possible. One level of cascading is 
enough for the foreseeable future.

The PoC team plans to stay in Paris from Oct. 29 to Nov. 8; would you be 
interested in a face-to-face workshop to dive deep into OpenStack cascading?

Best Regards

Chaoyi Huang ( joehuang )

From: Duncan Thomas [duncan.tho...@gmail.com]
Sent: 02 October 2014 18:59
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

So I have substantial concerns about hierarchy based designs and data
mass - the interconnect between leaves in the hierarchy are often
going to be fairly thin, particularly if they are geographically
distributed, so the semantics of what is allowed to access what data
resource (glance, swift, cinder, manilla) need some very careful
thought, and the way those restrictions are portrayed to the user to
avoid confusion needs even more thought.

On 30 September 2014 14:04, joehuang <joehu...@huawei.com> wrote:
> Hello, Dear TC and all,
> Large cloud operators prefer to deploy multiple OpenStack instances(as 
> different zones), rather than a single monolithic OpenStack instance because 
> of these reasons:
> 1) Multiple data centers distributed geographically;
> 2) Multi-vendor business policy;
> 3) Server node count scales up modularly from hundreds to a million;
> 4) Fault and maintenance isolation between zones (only REST interface);
> At the same time, they also want to integrate these OpenStack instances into 
> one cloud. Instead of proprietary orchestration layer, they want to use 
> standard OpenStack framework for Northbound API compatibility with 
> HEAT/Horizon or other 3rd ecosystem apps.
> We call this pattern "OpenStack Cascading", with the proposal described in 
> [1][2]. PoC live demo videos can be found at [3][4].
> Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the 
> OpenStack cascading.
> Kindly ask for cross program design summit session to discuss OpenStack 
> cascading and the contribution to Kilo.
> Kindly invite those who are interested in the OpenStack cascading to work 
> together and contribute it to OpenStack.
> (I applied for “other projects” track [5], but it would be better to have a 
> discussion as a formal cross program session, because many core programs are 
> involved )
> [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
> [2] PoC source code: https://github.com/stackforge/tricircle
> [3] Live demo video at YouTube: https://www.youtube.com/watch?v=OSU6PYRz5qY
> [4] Live demo video at Youku (low quality, for those who can't access 
> YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
> [5] 
> http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html
> Best Regards
> Chaoyi Huang ( Joe Huang )
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Duncan Thomas
