----- Original Message -----
> From: "henry hly" <henry4...@gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>
> 
> On Fri, Dec 12, 2014 at 11:41 AM, Dan Smith <d...@danplanet.com>
> wrote:
> >> [joehuang] Could you please clarify the deployment mode of cells
> >> when used for globally distributed DCs with a single API? Do you
> >> mean that cinder/neutron/glance/ceilometer will be shared by all
> >> cells, that RPC is used for inter-DC communication, and that only
> >> one vendor's OpenStack distribution is supported? How would
> >> cross-data-center integration and troubleshooting be done over RPC
> >> if the driver/agent/backend (storage/network/server) comes from a
> >> different vendor?
> >
> > Correct, cells only applies to single-vendor distributed
> > deployments. In both its current and future forms, it uses private
> > APIs for communication between the components, and thus isn't
> > suited to a multi-vendor environment.
> >
> > Just MHO, but building functionality into existing or new
> > components to allow deployments from multiple vendors to appear as
> > a single API endpoint isn't something I have much interest in.
> >
> > --Dan
> >
> 
> Even with the same distribution, cells still face many challenges
> across multiple DCs connected over a WAN. From an OAM perspective, it
> is easier to manage autonomous systems connected across remote sites
> by an external northbound interface than a single monolithic system
> connected by internal RPC messages.
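
To make that contrast concrete, below is a minimal sketch of the two
communication styles being compared. Everything in it (URLs, topics,
credentials, field values) is invented for illustration; it is not actual
cells or Cascading code:

    # (a) Cells-style: internal RPC over the message bus (oslo.messaging).
    #     Both ends must run compatible code and share the bus over the WAN.
    import oslo_messaging as messaging
    from oslo_config import cfg

    transport = messaging.get_transport(
        cfg.CONF, url='rabbit://guest:guest@child-dc-mq.example.org:5672/')
    target = messaging.Target(topic='compute', version='3.0')
    rpc = messaging.RPCClient(transport, target)
    # The method name, arguments and serialization are a private contract
    # internal to the deployment, not a stable public interface.
    rpc.cast({}, 'build_instances', request_spec={'num_instances': 1})

    # (b) Northbound-style: the remote site's public REST API, here via
    #     python-novaclient; only the versioned public API is shared.
    from novaclient import client as nova_client

    nova = nova_client.Client('2', 'admin', 'secret', 'demo',
                              'http://child-site.example.org:5000/v2.0/')
    nova.servers.create(name='vm1', image='IMAGE_UUID', flavor='1')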

The key question here is whether this is primarily the role of OpenStack or 
of an external cloud management platform, and I don't profess to know the 
answer. What do people use (workaround or otherwise) for these use cases 
*today*? Another question I have: one of the stated use cases is managing 
OpenStack clouds from multiple vendors - is the implication that some of 
these have additional, divergent API extensions, or is the concern solely 
the incompatibilities inherent in communicating via the RPC mechanisms? If 
there are divergent API extensions, how is that handled from a proxying 
point of view when not all of the underlying OpenStack clouds support them? 
(I guess the same applies when using distributions without additional 
extensions but of different versions - e.g. Icehouse vs Juno, which I 
believe was also a targeted use case?)
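
For what it's worth, the only mitigation I can picture for the extensions 
case is the proxy interrogating each underlying cloud's extensions resource 
and refusing (or degrading) requests that cloud can't honour. A hypothetical 
sketch - the endpoint layout and the alias are invented, and this is not how 
Cascading actually does it as far as I know:

    import requests

    def supported_extensions(compute_endpoint, token):
        """Return the set of extension aliases a child cloud advertises."""
        resp = requests.get('%s/extensions' % compute_endpoint,
                            headers={'X-Auth-Token': token})
        resp.raise_for_status()
        return {ext['alias'] for ext in resp.json()['extensions']}

    def can_forward(alias, compute_endpoint, token):
        # e.g. alias='os-vendor-foo' for a vendor-specific extension
        return alias in supported_extensions(compute_endpoint, token)

Even then there is the semantic question of what the aggregated API should 
return when only some of the children support a given call.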

> Although cells did some separation and modularization (not to
> mention that it is still internal RPC across the WAN), they leave
> out cinder, neutron, and ceilometer. Shall we wait for all of these
> projects to be refactored into a cell-like hierarchical structure,
> or adopt a more loosely coupled approach and distribute them into
> autonomous units at the granularity of a whole OpenStack (except
> Keystone, which can handle multiple regions naturally)?
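
Agreed that Keystone is the easy part here: a single catalog already 
resolves per-region endpoints, so no proxying is required for it. A minimal 
sketch, with region names, URLs and credentials invented:

    from keystoneclient.auth.identity import v2
    from keystoneclient import session

    auth = v2.Password(auth_url='http://keystone.example.org:5000/v2.0/',
                       username='admin', password='secret',
                       tenant_name='demo')
    sess = session.Session(auth=auth)

    # The same credentials resolve to a different Nova in each region.
    for region in ('site-east', 'site-west'):
        endpoint = sess.get_endpoint(service_type='compute',
                                     region_name=region)
        print('%s -> %s' % (region, endpoint))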

Similarly though, is the intent with Cascading that each new project would 
also have to implement and provide a proxy for use in these deployments? One 
of the challenges with maintaining/supporting the existing Cells 
implementation has been that it's effectively its own thing, and as a result 
it is often not considered when adding new functionality.

> As we can see, compared with cells, much less work is needed to
> build a Cascading solution. No patch is needed except in Neutron
> (waiting for some upcoming features that did not land in Juno);
> nearly all of the work lies in the proxy, which is in fact another
> kind of driver/agent.

Right, but the proxies still appear to be a not insignificant amount of 
code - is the intent not that the proxies would eventually reside within the 
relevant projects? I've been assuming yes, but I'm wondering if that was an 
incorrect assumption on my part, based on your comment.
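
For reference, the mental model I have been working from is that such a 
proxy looks roughly like a compute driver whose spawn() re-issues the boot 
request northbound instead of talking to a hypervisor. A hypothetical 
skeleton only - the spawn() signature is approximate and the attribute 
access on 'instance' is illustrative, not the actual Cascading code:

    from nova.virt import driver
    from novaclient import client as nova_client

    class CascadingComputeProxy(driver.ComputeDriver):
        """Forward compute requests to a child cloud's public API."""

        def __init__(self, virtapi):
            super(CascadingComputeProxy, self).__init__(virtapi)
            # Credentials/endpoint for the child OpenStack (invented).
            self._child = nova_client.Client(
                '2', 'proxy', 'secret', 'service',
                'http://child-site.example.org:5000/v2.0/')

        def spawn(self, context, instance, image_meta, injected_files,
                  admin_password, network_info=None,
                  block_device_info=None):
            # Re-issue the boot northbound; the child cloud does its own
            # scheduling, networking and storage.
            self._child.servers.create(name=instance['display_name'],
                                       image=image_meta['id'],
                                       flavor=instance['flavor_id'])

If that is roughly right, it underlines the question above: that is real 
code that has to track Nova's driver interface (and each other project's 
equivalent) release after release.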

Thanks,

Steve
