Hi,

Some comments inline.

From: Alan Bishop <abis...@redhat.com>
Sent: Thursday, August 16, 2018 7:09 PM

On Tue, Aug 14, 2018 at 9:20 AM Bogdan Dobrelya 
<bdobr...@redhat.com> wrote:
On 8/13/18 9:47 PM, Giulio Fidente wrote:
> Hello,
>
> I'd like to get some feedback regarding the remaining
> work for the split controlplane spec implementation [1]
>
> Specifically, while for some services like nova-compute it is not
> necessary to update the controlplane nodes after an edge cloud is
> deployed, for other services, like cinder (or glance, probably
> others), it is necessary to do an update of the config files on the
> controlplane when a new edge cloud is deployed.

[G0]: What is the reason to run a shared cinder in an edge cloud 
infrastructure? Maybe it would be a better approach to run an individual 
Cinder instance in every edge cloud.

> In fact for services like cinder or glance, which are hosted in the
> controlplane, we need to pull data from the edge clouds (for example
> the newly deployed ceph cluster keyrings and fsid) to configure cinder
> (or glance) with a new backend.

[G0]: Solution ideas for Glance are listed in [3].

> It looks like this demands some architectural changes to solve the
> following two:
>
> - how do we trigger/drive updates of the controlplane nodes after the
> edge cloud is deployed?

Note, there is also a strict(?) requirement of local management
capabilities for edge clouds temporarily disconnected from the central
controlplane. That complicates triggering the updates even more. We'll
need at least a notification-and-triggering system to perform the required
state synchronizations, including conflict resolution. If that's the case,
architecture changes to the TripleO deployment framework are inevitable
AFAICT.

This is another interesting point. I don't mean to disregard it, but want to
highlight the issue that Giulio and I (and others, I'm sure) are focused on.

As a cinder guy, I'll use cinder as an example. Cinder services running in the
control plane need to be aware of the storage "backends" deployed at the
Edge. So if a split-stack deployment includes edge nodes running a ceph
cluster, the cinder services need to be updated to add the ceph cluster as a
new cinder backend. So, not only is control plane data needed in order to
deploy an additional stack at the edge, but data from the edge deployment
also needs to be fed back into a subsequent stack update in the
controlplane. Otherwise,
cinder (and other storage services) will have no way of utilizing ceph
clusters at the edge.
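
[G0]: For concreteness, the data that has to flow back from an edge Ceph
deployment is roughly what the existing external-ceph environment already
carries. A rough sketch (parameter names follow the ceph-external
environment file; all values are placeholders):

  parameter_defaults:
    # collected from the edge ceph cluster deployment
    CephClusterFSID: '<fsid of the edge ceph cluster>'
    CephClientKey: '<client keyring generated by the edge cluster>'
    CephExternalMonHost: '<comma-separated mon IPs of the edge cluster>'
    # how cinder/glance will address that cluster
    CephClientUserName: 'openstack'
    CinderRbdPoolName: 'volumes'

Feeding something like this into a subsequent stack update of the
controlplane is what would register the edge cluster as an additional
backend.
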
>
> - how do we scale the controlplane parameters to accommodate N
> backends of the same type?

Yes, this is also a big problem for me. Currently, TripleO can deploy cinder
with multiple heterogeneous backends (e.g. one each of ceph, NFS, Vendor X,
Vendor Y, etc.). However, the current THT do not let you deploy multiple
instances of the same backend (e.g. more than one ceph). If the goal is to
deploy multiple edge nodes consisting of Compute+Ceph, then TripleO will need
the ability to deploy multiple homogeneous cinder backends. This requirement
will likely apply to glance and manila as well.
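
[G0]: Just to illustrate the gap: today the ceph backend for cinder is
described by a single flat set of parameters (one FSID, one mon host list,
one pool name), so there is nowhere to put a second cluster. Accommodating
N edge clusters would need something list- or map-shaped instead. A purely
hypothetical sketch, not existing THT syntax (the parameter name is made
up):

  parameter_defaults:
    # hypothetical: one entry per edge ceph cluster
    CinderRbdMultiConfig:
      edge-1:
        CephClusterFSID: '<fsid from the edge-1 stack>'
        CephExternalMonHost: '<mon IPs from the edge-1 stack>'
        CinderRbdPoolName: 'volumes'
      edge-2:
        CephClusterFSID: '<fsid from the edge-2 stack>'
        CephExternalMonHost: '<mon IPs from the edge-2 stack>'
        CinderRbdPoolName: 'volumes'

Each entry would then have to become a separate backend section in
cinder.conf, listed in enabled_backends.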

> A very rough approach to the latter could be to use jinja to scale up
> the CephClient service so that we can have multiple copies of it in the
> controlplane.
>
> Each instance of CephClient should provide the ceph config file and
> keyring necessary for each cinder (or glance) backend.
>
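
[G0]: If the jinja route is taken, I would expect something similar to the
per-role loops TripleO already uses when rendering its templates. A purely
illustrative sketch (the "ceph_backends" list and the per-backend service
names are made up):

  resource_registry:
  {% for backend in ceph_backends %}
    OS::TripleO::Services::CephClient{{ backend }}: docker/services/ceph-ansible/ceph-client.yaml
  {% endfor %}

with each generated service instance providing its own ceph config file and
keyring, as described above.
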
> Also note that Ceph is only a particular example but we'd need a similar
> workflow for any backend type.
>
> The etherpad for the PTG session [2] touches this, but it'd be good to
> start this conversation before then.
>
> 1.
> https://specs.openstack.org/openstack/tripleo-specs/specs/rocky/split-controlplane.html
>
> 2. https://etherpad.openstack.org/p/tripleo-ptg-queens-split-controlplane
>

[3]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment

Br,
Gerg0


--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
