Re: [openstack-dev] [tripleo][Edge][FEMDC] Edge clouds and controlplane updates

2018-08-17 Thread Jiří Stránský

On 14.8.2018 15:19, Bogdan Dobrelya wrote:

On 8/13/18 9:47 PM, Giulio Fidente wrote:

Hello,

I'd like to get some feedback regarding the remaining
work for the split controlplane spec implementation [1]

Specifically, while for some services like nova-compute it is not
necessary to update the controlplane nodes after an edge cloud is
deployed, for other services, like cinder (or glance, probably
others), it is necessary to do an update of the config files on the
controlplane when a new edge cloud is deployed.

In fact for services like cinder or glance, which are hosted in the
controlplane, we need to pull data from the edge clouds (for example
the newly deployed ceph cluster keyrings and fsid) to configure cinder
(or glance) with a new backend.
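
To make that data flow concrete, a rough sketch of what feeding edge ceph
data back into the controlplane stack could look like -- the parameter
names, stack names, and file names below are illustrative assumptions, not
an agreed design:

    # Hypothetical sketch: build an environment file from an edge stack's
    # ceph data and apply it in a controlplane stack update.
    cat > ceph-edge1-backend.yaml <<'EOF'
    parameter_defaults:
      CephClusterFSID: <fsid taken from the edge stack>
      CephClientKey: <keyring taken from the edge stack>
    EOF
    openstack overcloud deploy --stack controlplane \
      --templates -e ceph-edge1-backend.yaml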

It looks like this demands some architectural changes to solve the
following two:

- how do we trigger/drive updates of the controlplane nodes after the
edge cloud is deployed?


Note, there is also a strict(?) requirement of local management
capabilities for edge clouds temporarily disconnected from the central
controlplane. That complicates update triggering even more. We'll
need at least a notification-and-triggering system to perform the required
state synchronizations, including conflict resolution. If that's the
case, architecture changes to the TripleO deployment framework are
inevitable AFAICT.


Indeed this would complicate things considerably, but IIUC the spec [1] that 
Giulio referenced doesn't talk about local management at all.


Within the context of what the spec covers, i.e. one stack for the Controller 
role and other stack(s) for Compute or *Storage roles, I hope we could 
address the updates/upgrades workflow the same way the deployment workflow 
would be addressed -- working with the stacks one by one.


That would probably mean:

1. `update/upgrade prepare` on Controller stack

2. `update/upgrade prepare` on other stacks (perhaps reusing some 
outputs from Controller stack here)


3. `update/upgrade run` on Controller stack

4. `update/upgrade run` on other stacks

5. (`external-update/external-upgrade run` on other stacks where 
appropriate)


6. `update/upgrade converge` on Controller stack

7. `update/upgrade converge` on other stacks (again maybe reusing 
outputs from Controller stack)
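
Expressed as CLI invocations, that might look roughly like the sketch
below; the stack names and flags are assumptions and vary by release, so
treat it as an illustration rather than a tested sequence:

    # Hypothetical stack-by-stack flow; names and flags are illustrative.
    openstack overcloud update prepare --stack controlplane
    openstack overcloud update prepare --stack edge1
    openstack overcloud update run --stack controlplane --nodes Controller
    openstack overcloud update run --stack edge1 --nodes Compute
    openstack overcloud external-update run --stack edge1
    openstack overcloud update converge --stack controlplane
    openstack overcloud update converge --stack edge1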


I'm not *sure* such an approach would work, but at the moment I don't see a 
reason why it wouldn't :)


Jirka

- how do we scale the controlplane parameters to accommodate N
backends of the same type?

A very rough approach to the latter could be to use jinja to scale up
the CephClient service so that we can have multiple copies of it in the
controlplane.

Each instance of CephClient should provide the ceph config file and
keyring necessary for each cinder (or glance) backend.
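
As a purely hypothetical sketch of that jinja scaling (the service name
pattern, count variable, and template path are made up for illustration
and don't reflect the actual THT layout):

    # Hypothetical jinja2 fragment generating N CephClient-style service
    # entries for a resource registry; all names are illustrative.
    cat > ceph-client-registry.j2 <<'EOF'
    resource_registry:
    {% for n in range(1, ceph_backend_count + 1) %}
      OS::TripleO::Services::CephClient{{ n }}: docker/services/ceph-ansible/ceph-client.yaml
    {% endfor %}
    EOF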

Also note that Ceph is only a particular example; we'd need a similar
workflow for any backend type.

The etherpad for the PTG session [2] touches on this, but it'd be good to
start this conversation before then.

1.
https://specs.openstack.org/openstack/tripleo-specs/specs/rocky/split-controlplane.html

2. https://etherpad.openstack.org/p/tripleo-ptg-queens-split-controlplane

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][Edge][FEMDC] Edge clouds and controlplane updates

2018-08-17 Thread Csatari, Gergely (Nokia - HU/Budapest)
Hi,

Some comments inline.

From: Alan Bishop 
Sent: Thursday, August 16, 2018 7:09 PM

On Tue, Aug 14, 2018 at 9:20 AM Bogdan Dobrelya <bdobr...@redhat.com> wrote:
On 8/13/18 9:47 PM, Giulio Fidente wrote:
> Hello,
>
> I'd like to get some feedback regarding the remaining
> work for the split controlplane spec implementation [1]
>
> Specifically, while for some services like nova-compute it is not
> necessary to update the controlplane nodes after an edge cloud is
> deployed, for other services, like cinder (or glance, probably
> others), it is necessary to do an update of the config files on the
> controlplane when a new edge cloud is deployed.

[G0]: What is the reason to run a shared cinder in an edge cloud 
infrastructure? Maybe a better approach is to run an individual Cinder in 
every edge cloud instance.

> In fact for services like cinder or glance, which are hosted in the
> controlplane, we need to pull data from the edge clouds (for example
> the newly deployed ceph cluster keyrings and fsid) to configure cinder
> (or glance) with a new backend.

[G0]: Solution ideas for Glance are listed in [3].

> It looks like this demands some architectural changes to solve the
> following two:
>
> - how do we trigger/drive updates of the controlplane nodes after the
> edge cloud is deployed?

Note, there is also a strict(?) requirement of local management
capabilities for edge clouds temporarily disconnected from the central
controlplane. That complicates update triggering even more. We'll
need at least a notification-and-triggering system to perform the required
state synchronizations, including conflict resolution. If that's the
case, architecture changes to the TripleO deployment framework are
inevitable AFAICT.

This is another interesting point. I don't mean to disregard it, but want to
highlight the issue that Giulio and I (and others, I'm sure) are focused on.

As a cinder guy, I'll use cinder as an example. Cinder services running in the
control plane need to be aware of the storage "backends" deployed at the
Edge. So if a split-stack deployment includes edge nodes running a ceph
cluster, the cinder services need to be updated to add the ceph cluster as a
new cinder backend. So, not only is control plane data needed in order to
deploy an additional stack at the edge, data from the edge deployment needs to
be fed back into a subsequent stack update in the controlplane. Otherwise,
cinder (and other storage services) will have no way of utilizing ceph
clusters at the edge.
>
> - how do we scale the controlplane parameters to accommodate N
> backends of the same type?

Yes, this is also a big problem for me. Currently, TripleO can deploy cinder
with multiple heterogeneous backends (e.g. one each of ceph, NFS, Vendor X,
Vendor Y, etc.). However, the current THT do not let you deploy multiple
instances of the same backend (e.g. more than one ceph). If the goal is to
deploy multiple edge nodes consisting of Compute+Ceph, then TripleO will need
the ability to deploy multiple homogeneous cinder backends. This requirement
will likely apply to glance and manila as well.
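
For concreteness, the end state cinder would need for two homogeneous ceph
backends is something like the fragment below -- a hand-written sketch of
cinder.conf content (paths and names are illustrative), not something the
current THT can generate:

    # Hand-written sketch of a cinder.conf with two ceph backends.
    cat >> /etc/cinder/cinder.conf <<'EOF'
    [DEFAULT]
    enabled_backends = ceph-edge1,ceph-edge2

    [ceph-edge1]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph-edge1
    rbd_ceph_conf = /etc/ceph/edge1.conf
    rbd_user = openstack
    rbd_pool = volumes

    [ceph-edge2]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph-edge2
    rbd_ceph_conf = /etc/ceph/edge2.conf
    rbd_user = openstack
    rbd_pool = volumes
    EOF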

> A very rough approach to the latter could be to use jinja to scale up
> the CephClient service so that we can have multiple copies of it in the
> controlplane.
>
> Each instance of CephClient should provide the ceph config file and
> keyring necessary for each cinder (or glance) backend.
>
> Also note that Ceph is only a particular example; we'd need a similar
> workflow for any backend type.
>
> The etherpad for the PTG session [2] touches on this, but it'd be good to
> start this conversation before then.
>
> 1.
> https://specs.openstack.org/openstack/tripleo-specs/specs/rocky/split-controlplane.html
>
> 2. https://etherpad.openstack.org/p/tripleo-ptg-queens-split-controlplane
>

[3]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment

Br,
Gerg0


--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][Edge][FEMDC] Edge clouds and controlplane updates

2018-08-16 Thread Alan Bishop
On Tue, Aug 14, 2018 at 9:20 AM Bogdan Dobrelya  wrote:

> On 8/13/18 9:47 PM, Giulio Fidente wrote:
> > Hello,
> >
> > I'd like to get some feedback regarding the remaining
> > work for the split controlplane spec implementation [1]
> >
> > Specifically, while for some services like nova-compute it is not
> > necessary to update the controlplane nodes after an edge cloud is
> > deployed, for other services, like cinder (or glance, probably
> > others), it is necessary to do an update of the config files on the
> > controlplane when a new edge cloud is deployed.
> >
> > In fact for services like cinder or glance, which are hosted in the
> > controlplane, we need to pull data from the edge clouds (for example
> > the newly deployed ceph cluster keyrings and fsid) to configure cinder
> > (or glance) with a new backend.
> >
> > It looks like this demands some architectural changes to solve the
> > following two:
> >
> > - how do we trigger/drive updates of the controlplane nodes after the
> > edge cloud is deployed?
>
> Note, there is also a strict(?) requirement of local management
> capabilities for edge clouds temporarily disconnected from the central
> controlplane. That complicates update triggering even more. We'll
> need at least a notification-and-triggering system to perform the required
> state synchronizations, including conflict resolution. If that's the
> case, architecture changes to the TripleO deployment framework are
> inevitable AFAICT.
>

This is another interesting point. I don't mean to disregard it, but want to
highlight the issue that Giulio and I (and others, I'm sure) are focused on.

As a cinder guy, I'll use cinder as an example. Cinder services running in the
control plane need to be aware of the storage "backends" deployed at the
Edge. So if a split-stack deployment includes edge nodes running a ceph
cluster, the cinder services need to be updated to add the ceph cluster as a
new cinder backend. So, not only is control plane data needed in order to
deploy an additional stack at the edge, data from the edge deployment needs to
be fed back into a subsequent stack update in the controlplane. Otherwise,
cinder (and other storage services) will have no way of utilizing ceph
clusters at the edge.

>
> > - how do we scale the controlplane parameters to accommodate N
> > backends of the same type?
>

Yes, this is also a big problem for me. Currently, TripleO can deploy cinder
with multiple heterogeneous backends (e.g. one each of ceph, NFS, Vendor X,
Vendor Y, etc.). However, the current THT do not let you deploy multiple
instances of the same backend (e.g. more than one ceph). If the goal is to
deploy multiple edge nodes consisting of Compute+Ceph, then TripleO will need
the ability to deploy multiple homogeneous cinder backends. This requirement
will likely apply to glance and manila as well.


> > A very rough approach to the latter could be to use jinja to scale up
> > the CephClient service so that we can have multiple copies of it in the
> > controlplane.
> >
> > Each instance of CephClient should provide the ceph config file and
> > keyring necessary for each cinder (or glance) backend.
> >
> > Also note that Ceph is only a particular example; we'd need a similar
> > workflow for any backend type.
> >
> > The etherpad for the PTG session [2] touches on this, but it'd be good to
> > start this conversation before then.
> >
> > 1. https://specs.openstack.org/openstack/tripleo-specs/specs/rocky/split-controlplane.html
> >
> > 2. https://etherpad.openstack.org/p/tripleo-ptg-queens-split-controlplane
> >
>
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][Edge][FEMDC] Edge clouds and controlplane updates

2018-08-14 Thread Bogdan Dobrelya

On 8/13/18 9:47 PM, Giulio Fidente wrote:

Hello,

I'd like to get some feedback regarding the remaining
work for the split controlplane spec implementation [1]

Specifically, while for some services like nova-compute it is not
necessary to update the controlplane nodes after an edge cloud is
deployed, for other services, like cinder (or glance, probably
others), it is necessary to do an update of the config files on the
controlplane when a new edge cloud is deployed.

In fact for services like cinder or glance, which are hosted in the
controlplane, we need to pull data from the edge clouds (for example
the newly deployed ceph cluster keyrings and fsid) to configure cinder
(or glance) with a new backend.

It looks like this demands some architectural changes to solve the
following two:

- how do we trigger/drive updates of the controlplane nodes after the
edge cloud is deployed?


Note, there is also a strict(?) requirement of local management 
capabilities for edge clouds temporarily disconnected from the central 
controlplane. That complicates update triggering even more. We'll 
need at least a notification-and-triggering system to perform the required 
state synchronizations, including conflict resolution. If that's the 
case, architecture changes to the TripleO deployment framework are 
inevitable AFAICT.

- how do we scale the controlplane parameters to accommodate N
backends of the same type?

A very rough approach to the latter could be to use jinja to scale up
the CephClient service so that we can have multiple copies of it in the
controlplane.

Each instance of CephClient should provide the ceph config file and
keyring necessary for each cinder (or glance) backend.

Also note that Ceph is only a particular example; we'd need a similar
workflow for any backend type.

The etherpad for the PTG session [2] touches on this, but it'd be good to
start this conversation before then.

1.
https://specs.openstack.org/openstack/tripleo-specs/specs/rocky/split-controlplane.html

2. https://etherpad.openstack.org/p/tripleo-ptg-queens-split-controlplane

--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev