Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-10-25 Thread Dan Prince
On Wed, Oct 17, 2018 at 11:15 AM Alex Schultz  wrote:
>
> Time to resurrect this thread.
>
> On Thu, Jul 5, 2018 at 12:14 PM James Slagle  wrote:
> >
> > On Thu, Jul 5, 2018 at 1:50 PM, Dan Prince  wrote:
> > > Last week I was tinkering with my docker configuration a bit and was a
> > > bit surprised that puppet/services/docker.yaml no longer used puppet to
> > > configure the docker daemon. It now uses Ansible [1] which is very cool
> > > but brings up the question of how should we clearly indicate to
> > > developers and users that we are using Ansible vs Puppet for
> > > configuration?
> > >
> > > TripleO has been around for a while now, has supported multiple
> > > configuration and service types over the years: os-apply-config,
> > > puppet, containers, and now Ansible. In the past we've used rigid
> > > directory structures to identify which "service type" was used. More
> > > recently we mixed things up a bit more even by extending one service
> > > type from another ("docker" services all initially extended the
> > > "puppet" services to generate config files and provide an easy upgrade
> > > path).
> > >
> > > Similarly we now use Ansible all over the place for other things in
> > > many of our docker and puppet services for things like upgrades. That is
> > > all good too. I guess the thing I'm getting at here is just a way to
> > > cleanly identify which services are configured via Puppet vs. Ansible.
> > > And how can we do that in the least destructive way possible so as not
> > > to confuse ourselves and our users in the process.
> > >
> > > Also, I think it's worth keeping in mind that TripleO was once a multi-
> > > vendor project with vendors that had different preferences on service
> > > configuration. Also having the ability to support multiple
> > > configuration mechanisms in the future could once again present itself
> > > (thinking of Kubernetes as an example). Keeping in mind there may be a
> > > conversion period that could well last more than a release or two.
> > >
> > > I suggested a 'services/ansible' directory with mixed responses in our
> > > #tripleo meeting this week. Any other thoughts on the matter?
> >
> > I would almost rather see us organize the directories by service
> > name/project instead of implementation.
> >
> > Instead of:
> >
> > puppet/services/nova-api.yaml
> > puppet/services/nova-conductor.yaml
> > docker/services/nova-api.yaml
> > docker/services/nova-conductor.yaml
> >
> > We'd have:
> >
> > services/nova/nova-api-puppet.yaml
> > services/nova/nova-conductor-puppet.yaml
> > services/nova/nova-api-docker.yaml
> > services/nova/nova-conductor-docker.yaml
> >
> > (or perhaps even another level of directories to indicate
> > puppet/docker/ansible?)
> >
> > Personally, such an organization is something I'm more used to. It
> > feels more similar to how most would expect a puppet module or ansible
> > role to be organized, where you have the abstraction (service
> > configuration) at a higher directory level than specific
> > implementations.
> >
> > It would also lend itself more easily to adding implementations only
> > for specific services, and address the question of whether a new top-level
> > implementation directory needs to be created. For example, adding a
> > services/nova/nova-api-chef.yaml seems a lot less contentious than
> > adding a top level chef/services/nova-api.yaml.
> >
> > It'd also be nice if we had a way to mark the default within a given
> > service's directory. Perhaps services/nova/nova-api-default.yaml,
> > which would be a new template that just consumes the default? Or
> > perhaps a symlink, although it was pointed out symlinks don't work in
> > swift containers. Still, that could possibly be addressed in our plan
> > upload workflows. Then the resource-registry would point at
> > nova-api-default.yaml. One could easily tell which is the default
> > without having to cross reference with the resource-registry.
> >
>
> So since I'm adding a new ansible service, I thought I'd try and take
> a stab at this naming thing. I've taken James's idea and proposed an
> implementation here:
> https://review.openstack.org/#/c/588111/
>
> The idea would be that the THT code for the service deployment would
> end up in something like:
>
> deployment/<service>/<service-name>-<implementation>.yaml

A matter of preference but I can live with this.
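
To make the proposal concrete: the layout plus James's earlier
default-marking idea could be sketched as follows. File names here are
illustrative only, not taken from the actual reviews:

  deployment/nova/nova-api-puppet.yaml
  deployment/nova/nova-api-docker.yaml
  deployment/nova/nova-api-default.yaml

with an environment file keeping the stable service aliases pointed at
the defaults. A minimal sketch, assuming the existing
OS::TripleO::Services aliases:

  # sketch of a resource registry entry (illustrative paths)
  resource_registry:
    OS::TripleO::Services::NovaApi: deployment/nova/nova-api-default.yaml
    OS::TripleO::Services::NovaConductor: deployment/nova/nova-conductor-default.yaml

The catch, raised repeatedly later in the thread, is that user
environments reference these file paths directly, so the paths
themselves behave like an API.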

>
> Additionally I took a stab at combining the puppet/docker service
> definitions for the aodh services in a similar structure to start
> reducing the overhead we've had from maintaining the docker/puppet
> > implementations separately.  You can see the patch
> https://review.openstack.org/#/c/611188/ for an additional example of
> this.
>
> Please let me know what you think.

I'm okay with it in that it consolidates some things (which we greatly
need to do). It does address my initial concern that people are now
putting Ansible services into the puppet/services directory, albeit in a
bit of a heavy-handed way in that it changes everything (rather than just 

Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-10-25 Thread Dan Prince
On Thu, Oct 25, 2018 at 11:26 AM Alex Schultz  wrote:
>
> On Thu, Oct 25, 2018 at 9:16 AM Bogdan Dobrelya  wrote:
> >
> >
> > On 10/19/18 8:04 PM, Alex Schultz wrote:
> > > On Fri, Oct 19, 2018 at 10:53 AM James Slagle  
> > > wrote:
> > >>
> > >> On Wed, Oct 17, 2018 at 11:14 AM Alex Schultz  
> > >> wrote:
> > >> > Additionally I took a stab at combining the puppet/docker service
> > >> > definitions for the aodh services in a similar structure to start
> > >> > reducing the overhead we've had from maintaining the docker/puppet
> > >> > implementations separately.  You can see the patch
> > >> > https://review.openstack.org/#/c/611188/ for an additional example of
> > >> > this.
> > >>
> > >> That patch takes the approach of removing baremetal support. Is that
> > >> what we agreed to do?
> > >>
> > >
> > > It has been deprecated since Queens[0], so yes? I think it is time to stop
> > > continuing this method of installation.  Given that I'm not even sure
> >
> > My point and concern remain as before: unless we fully drop the
> > docker support for Queens (and the downstream LTS released from it), we
> > should not modify the t-h-t directory tree, due to the associated
> > complexity of maintaining backports
> >
>
> This is why we have duplication of things in THT.  For environment
> files this is actually an issue because they are the end user
> interface. But these service files should be internal, and where they
> live should not matter.  We have already had this situation in the past
> and have managed to continue to do backports, so I don't think this is
> a reason not to do this cleanup.  It feels like we use this as a reason
> not to actually move forward on cleanup, and we end up carrying the
> tech debt.  By this logic, we'll never be able to clean up anything if
> we can't handle moving files around.

Yeah. The environment files would contain some level of duplication
until we refactor our plan storage mechanism to use a plain old
tarball (still stored in Swift) instead of storing files in the
expanded format. Swift does not support softlinks, but a tarball would,
and thus would allow us to de-dup things in the future.
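
As a concrete illustration of the de-dup Dan describes: inside a plan
tarball a default could simply be a symlink, something like the
following (hypothetical file names):

  deployment/nova/nova-api-puppet.yaml
  deployment/nova/nova-api-docker.yaml
  deployment/nova/nova-api-default.yaml -> nova-api-docker.yaml

Today the plan is stored as individual Swift objects, and Swift has no
notion of a softlink, so the link would be lost on upload; packing the
whole tree into a single tarball object is what would let the links
survive the round trip.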

The patch is here but it needs some love:

https://review.openstack.org/#/c/581153/

Dan

>
> I think there are some patches to do soft links (dprince might be able
> to provide the patches) which could at least handle this backward
> compatibility around locations, but I think we need to actually move
> forward on the simplification of the service definitions unless
> there's a blocking technical issue with this effort.
>
> Thanks,
> -Alex
>
> > > the upgrade process even works anymore with baremetal, I don't think
> > > there's a reason to keep it as it directly impacts the time it takes
> > > to perform deployments and also contributes to increased complexity
> > > all around.
> > >
> > > [0] 
> > > http://lists.openstack.org/pipermail/openstack-dev/2017-September/122248.html
> > >
> > >> I'm not specifically opposed, as I'm pretty sure the baremetal
> > >> implementations are no longer tested anywhere, but I know that Dan had
> > >> some concerns about that last time around.
> > >>
> > >> The alternative we discussed was using jinja2 to include common
> > >> data/tasks in both the puppet/docker/ansible implementations. That
> > >> would also result in reducing the number of Heat resources in these
> > >> stacks and hopefully reduce the amount of time it takes to
> > >> create/update the ServiceChain stacks.
> > >>
> > >
> > > I'd rather we officially get rid of one of the two methods and
> > > converge on a single method without increasing the complexity via
> > > jinja to continue to support both. If there's an improvement to be had
> > > after we've converged on a single structure for including the base
> > > bits, maybe we could do that then?
> > >
> > > Thanks,
> > > -Alex
> >
> >
> > --
> > Best regards,
> > Bogdan Dobrelya,
> > Irc #bogdando
> >


Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-10-25 Thread Alex Schultz
On Thu, Oct 25, 2018 at 9:16 AM Bogdan Dobrelya  wrote:
>
>
> On 10/19/18 8:04 PM, Alex Schultz wrote:
> > On Fri, Oct 19, 2018 at 10:53 AM James Slagle  
> > wrote:
> >>
> >> On Wed, Oct 17, 2018 at 11:14 AM Alex Schultz  
> >> wrote:
> >> > Additionally I took a stab at combining the puppet/docker service
> >> > definitions for the aodh services in a similar structure to start
> >> > reducing the overhead we've had from maintaining the docker/puppet
> >> > implementations separately.  You can see the patch
> >> > https://review.openstack.org/#/c/611188/ for an additional example of
> >> > this.
> >>
> >> That patch takes the approach of removing baremetal support. Is that
> >> what we agreed to do?
> >>
> >
> > It has been deprecated since Queens[0], so yes? I think it is time to stop
> > continuing this method of installation.  Given that I'm not even sure
>
> My point and concern remain as before: unless we fully drop the
> docker support for Queens (and the downstream LTS released from it), we
> should not modify the t-h-t directory tree, due to the associated
> complexity of maintaining backports
>

This is why we have duplication of things in THT.  For environment
files this is actually an issue because they are the end user
interface. But these service files should be internal, and where they
live should not matter.  We have already had this situation in the past
and have managed to continue to do backports, so I don't think this is
a reason not to do this cleanup.  It feels like we use this as a reason
not to actually move forward on cleanup, and we end up carrying the
tech debt.  By this logic, we'll never be able to clean up anything if
we can't handle moving files around.

I think there are some patches to do soft links (dprince might be able
to provide the patches) which could at least handle this backward
compatibility around locations, but I think we need to actually move
forward on the simplification of the service definitions unless
there's a blocking technical issue with this effort.

Thanks,
-Alex

> > the upgrade process even works anymore with baremetal, I don't think
> > there's a reason to keep it as it directly impacts the time it takes
> > to perform deployments and also contributes to increased complexity
> > all around.
> >
> > [0] 
> > http://lists.openstack.org/pipermail/openstack-dev/2017-September/122248.html
> >
> >> I'm not specifically opposed, as I'm pretty sure the baremetal
> >> implementations are no longer tested anywhere, but I know that Dan had
> >> some concerns about that last time around.
> >>
> >> The alternative we discussed was using jinja2 to include common
> >> data/tasks in both the puppet/docker/ansible implementations. That
> >> would also result in reducing the number of Heat resources in these
> >> stacks and hopefully reduce the amount of time it takes to
> >> create/update the ServiceChain stacks.
> >>
> >
> > I'd rather we officially get rid of one of the two methods and
> > converge on a single method without increasing the complexity via
> > jinja to continue to support both. If there's an improvement to be had
> > after we've converged on a single structure for including the base
> > bits, maybe we could do that then?
> >
> > Thanks,
> > -Alex
>
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>


Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-10-25 Thread Bogdan Dobrelya


On 10/19/18 8:04 PM, Alex Schultz wrote:

On Fri, Oct 19, 2018 at 10:53 AM James Slagle  wrote:


On Wed, Oct 17, 2018 at 11:14 AM Alex Schultz  wrote:
> Additionally I took a stab at combining the puppet/docker service
> definitions for the aodh services in a similar structure to start
> reducing the overhead we've had from maintaining the docker/puppet
> implementations separately.  You can see the patch
> https://review.openstack.org/#/c/611188/ for an additional example of
> this.

That patch takes the approach of removing baremetal support. Is that
what we agreed to do?



It has been deprecated since Queens[0], so yes? I think it is time to stop
continuing this method of installation.  Given that I'm not even sure


My point and concern remain as before: unless we fully drop the
docker support for Queens (and the downstream LTS released from it), we
should not modify the t-h-t directory tree, due to the associated
complexity of maintaining backports



the upgrade process even works anymore with baremetal, I don't think
there's a reason to keep it as it directly impacts the time it takes
to perform deployments and also contributes to increased complexity
all around.

[0] 
http://lists.openstack.org/pipermail/openstack-dev/2017-September/122248.html


I'm not specifically opposed, as I'm pretty sure the baremetal
implementations are no longer tested anywhere, but I know that Dan had
some concerns about that last time around.

The alternative we discussed was using jinja2 to include common
data/tasks in both the puppet/docker/ansible implementations. That
would also result in reducing the number of Heat resources in these
stacks and hopefully reduce the amount of time it takes to
create/update the ServiceChain stacks.



I'd rather we officially get rid of one of the two methods and
converge on a single method without increasing the complexity via
jinja to continue to support both. If there's an improvement to be had
after we've converged on a single structure for including the base
bits, maybe we could do that then?

Thanks,
-Alex



--
Best regards,
Bogdan Dobrelya,
Irc #bogdando



Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-10-22 Thread Juan Antonio Osorio Robles

On 10/19/18 8:04 PM, Alex Schultz wrote:
> On Fri, Oct 19, 2018 at 10:53 AM James Slagle  wrote:
>> On Wed, Oct 17, 2018 at 11:14 AM Alex Schultz  wrote:
>>> Additionally I took a stab at combining the puppet/docker service
>>> definitions for the aodh services in a similar structure to start
>>> reducing the overhead we've had from maintaining the docker/puppet
>>> implementations separately.  You can see the patch
>>> https://review.openstack.org/#/c/611188/ for an additional example of
>>> this.
>> That patch takes the approach of removing baremetal support. Is that
>> what we agreed to do?
>>
> It has been deprecated since Queens[0], so yes? I think it is time to stop
> continuing this method of installation.  Given that I'm not even sure
> the upgrade process even works anymore with baremetal, I don't think
> there's a reason to keep it as it directly impacts the time it takes
> to perform deployments and also contributes to increased complexity
> all around.
>
> [0] 
> http://lists.openstack.org/pipermail/openstack-dev/2017-September/122248.html
As an advantage of removing baremetal support, our nested stack usage
would be a little lighter, which might actually help deployment
times and resource usage. I like the idea of going ahead and starting to
flatten the stacks for our services.
>
>> I'm not specifically opposed, as I'm pretty sure the baremetal
>> implementations are no longer tested anywhere, but I know that Dan had
>> some concerns about that last time around.
>>
>> The alternative we discussed was using jinja2 to include common
>> data/tasks in both the puppet/docker/ansible implementations. That
>> would also result in reducing the number of Heat resources in these
>> stacks and hopefully reduce the amount of time it takes to
>> create/update the ServiceChain stacks.
>>
> I'd rather we officially get rid of one of the two methods and
> converge on a single method without increasing the complexity via
> jinja to continue to support both. If there's an improvement to be had
> after we've converged on a single structure for including the base
> bits, maybe we could do that then?
>
> Thanks,
> -Alex
>
>> --
>> -- James Slagle
>> --
>>


Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-10-19 Thread Alex Schultz
On Fri, Oct 19, 2018 at 10:53 AM James Slagle  wrote:
>
> On Wed, Oct 17, 2018 at 11:14 AM Alex Schultz  wrote:
> > Additionally I took a stab at combining the puppet/docker service
> > definitions for the aodh services in a similar structure to start
> > reducing the overhead we've had from maintaining the docker/puppet
> > implementations separately.  You can see the patch
> > https://review.openstack.org/#/c/611188/ for an additional example of
> > this.
>
> That patch takes the approach of removing baremetal support. Is that
> what we agreed to do?
>

It has been deprecated since Queens[0], so yes? I think it is time to stop
continuing this method of installation.  Given that I'm not even sure
the upgrade process even works anymore with baremetal, I don't think
there's a reason to keep it as it directly impacts the time it takes
to perform deployments and also contributes to increased complexity
all around.

[0] 
http://lists.openstack.org/pipermail/openstack-dev/2017-September/122248.html

> I'm not specifically opposed, as I'm pretty sure the baremetal
> implementations are no longer tested anywhere, but I know that Dan had
> some concerns about that last time around.
>
> The alternative we discussed was using jinja2 to include common
> data/tasks in both the puppet/docker/ansible implementations. That
> would also result in reducing the number of Heat resources in these
> stacks and hopefully reduce the amount of time it takes to
> create/update the ServiceChain stacks.
>

I'd rather we officially get rid of one of the two methods and
converge on a single method without increasing the complexity via
jinja to continue to support both. If there's an improvement to be had
after we've converged on a single structure for including the base
bits, maybe we could do that then?

Thanks,
-Alex

> --
> -- James Slagle
> --
>


Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-10-19 Thread James Slagle
On Wed, Oct 17, 2018 at 11:14 AM Alex Schultz  wrote:
> Additionally I took a stab at combining the puppet/docker service
> definitions for the aodh services in a similar structure to start
> reducing the overhead we've had from maintaining the docker/puppet
> implementations separately.  You can see the patch
> https://review.openstack.org/#/c/611188/ for an additional example of
> this.

That patch takes the approach of removing baremetal support. Is that
what we agreed to do?

I'm not specifically opposed, as I'm pretty sure the baremetal
implementations are no longer tested anywhere, but I know that Dan had
some concerns about that last time around.

The alternative we discussed was using jinja2 to include common
data/tasks in both the puppet/docker/ansible implementations. That
would also result in reducing the number of Heat resources in these
stacks and hopefully reduce the amount of time it takes to
create/update the ServiceChain stacks.
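
For the curious, that jinja2 alternative could look roughly like the
following. This is only a sketch with invented file names, and it
assumes the j2 rendering step can resolve includes; t-h-t already
renders *.j2.yaml templates with jinja2 before Heat ever parses them:

  # deployment/aodh/aodh-api-common.j2 -- hypothetical shared fragment
        config_settings:
          aodh::wsgi::apache::ssl: false   # illustrative setting only

  # deployment/aodh/aodh-api-docker.j2.yaml -- hypothetical, trimmed
  outputs:
    role_data:
      value:
        service_name: aodh_api
  {% include 'aodh-api-common.j2' %}
        docker_config:
          step_4:
            aodh_api:
              image: {get_param: DockerAodhApiImage}   # hypothetical parameter

The include is expanded at rendering time, so Heat still sees one
complete template per implementation and the number of nested stacks
does not grow.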

--
-- James Slagle
--



Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-10-17 Thread Alex Schultz
Time to resurrect this thread.

On Thu, Jul 5, 2018 at 12:14 PM James Slagle  wrote:
>
> On Thu, Jul 5, 2018 at 1:50 PM, Dan Prince  wrote:
> > Last week I was tinkering with my docker configuration a bit and was a
> > bit surprised that puppet/services/docker.yaml no longer used puppet to
> > configure the docker daemon. It now uses Ansible [1] which is very cool
> > but brings up the question of how should we clearly indicate to
> > developers and users that we are using Ansible vs Puppet for
> > configuration?
> >
> > TripleO has been around for a while now, has supported multiple
> > configuration and service types over the years: os-apply-config,
> > puppet, containers, and now Ansible. In the past we've used rigid
> > directory structures to identify which "service type" was used. More
> > recently we mixed things up a bit more even by extending one service
> > type from another ("docker" services all initially extended the
> > "puppet" services to generate config files and provide an easy upgrade
> > path).
> >
> > Similarly we now use Ansible all over the place for other things in
> > many of our docker and puppet services for things like upgrades. That is
> > all good too. I guess the thing I'm getting at here is just a way to
> > cleanly identify which services are configured via Puppet vs. Ansible.
> > And how can we do that in the least destructive way possible so as not
> > to confuse ourselves and our users in the process.
> >
> > Also, I think it's worth keeping in mind that TripleO was once a multi-
> > vendor project with vendors that had different preferences on service
> > configuration. Also having the ability to support multiple
> > configuration mechanisms in the future could once again present itself
> > (thinking of Kubernetes as an example). Keeping in mind there may be a
> > conversion period that could well last more than a release or two.
> >
> > I suggested a 'services/ansible' directory with mixed responses in our
> > #tripleo meeting this week. Any other thoughts on the matter?
>
> I would almost rather see us organize the directories by service
> name/project instead of implementation.
>
> Instead of:
>
> puppet/services/nova-api.yaml
> puppet/services/nova-conductor.yaml
> docker/services/nova-api.yaml
> docker/services/nova-conductor.yaml
>
> We'd have:
>
> services/nova/nova-api-puppet.yaml
> services/nova/nova-conductor-puppet.yaml
> services/nova/nova-api-docker.yaml
> services/nova/nova-conductor-docker.yaml
>
> (or perhaps even another level of directories to indicate
> puppet/docker/ansible?)
>
> Personally, such an organization is something I'm more used to. It
> feels more similar to how most would expect a puppet module or ansible
> role to be organized, where you have the abstraction (service
> configuration) at a higher directory level than specific
> implementations.
>
> It would also lend itself more easily to adding implementations only
> for specific services, and address the question of whether a new top-level
> implementation directory needs to be created. For example, adding a
> services/nova/nova-api-chef.yaml seems a lot less contentious than
> adding a top level chef/services/nova-api.yaml.
>
> It'd also be nice if we had a way to mark the default within a given
> service's directory. Perhaps services/nova/nova-api-default.yaml,
> which would be a new template that just consumes the default? Or
> perhaps a symlink, although it was pointed out symlinks don't work in
> swift containers. Still, that could possibly be addressed in our plan
> upload workflows. Then the resource-registry would point at
> nova-api-default.yaml. One could easily tell which is the default
> without having to cross reference with the resource-registry.
>

So since I'm adding a new ansible service, I thought I'd try and take
a stab at this naming thing. I've taken James's idea and proposed an
implementation here:
https://review.openstack.org/#/c/588111/

The idea would be that the THT code for the service deployment would
end up in something like:

deployment/<service>/<service-name>-<implementation>.yaml

Additionally I took a stab at combining the puppet/docker service
definitions for the aodh services in a similar structure to start
reducing the overhead we've had from maintaining the docker/puppet
implementations separately.  You can see the patch
https://review.openstack.org/#/c/611188/ for an additional example of
this.
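
For anyone who has not opened the review: the combined template keeps
the schema the docker services already use, with the puppet bits and
the container bits side by side in one role_data output. A heavily
trimmed, hypothetical sketch (not the actual patch content; parameter
names are invented):

  heat_template_version: rocky

  outputs:
    role_data:
      description: Role data for the aodh-api service (sketch only)
      value:
        service_name: aodh_api
        # puppet is still used to generate config files...
        puppet_config:
          config_volume: aodh
          step_config: include tripleo::profile::base::aodh::api
          config_image: {get_param: DockerAodhConfigImage}  # hypothetical
        # ...while the service itself runs as a container
        docker_config:
          step_4:
            aodh_api:
              image: {get_param: DockerAodhApiImage}  # hypothetical
              restart: always

One file now carries what previously lived in both
puppet/services/aodh-api.yaml and docker/services/aodh-api.yaml.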

Please let me know what you think.

Thanks,
-Alex

>
> --
> -- James Slagle
> --
>

Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-08-07 Thread Dan Prince
On Thu, Aug 2, 2018 at 5:42 PM Steve Baker  wrote:
>
>
>
> On 02/08/18 13:03, Alex Schultz wrote:
> > On Mon, Jul 9, 2018 at 6:28 AM, Bogdan Dobrelya  wrote:
> >> On 7/6/18 7:02 PM, Ben Nemec wrote:
> >>>
> >>>
> >>> On 07/05/2018 01:23 PM, Dan Prince wrote:
>  On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote:
> >
> > I would almost rather see us organize the directories by service
> > name/project instead of implementation.
> >
> > Instead of:
> >
> > puppet/services/nova-api.yaml
> > puppet/services/nova-conductor.yaml
> > docker/services/nova-api.yaml
> > docker/services/nova-conductor.yaml
> >
> > We'd have:
> >
> > services/nova/nova-api-puppet.yaml
> > services/nova/nova-conductor-puppet.yaml
> > services/nova/nova-api-docker.yaml
> > services/nova/nova-conductor-docker.yaml
> >
> > (or perhaps even another level of directories to indicate
> > puppet/docker/ansible?)
> 
 I'd be open to this but doing changes on this scale has a much larger
>  developer and user impact than what I was thinking we would be willing
>  to entertain for the issue that caused me to bring this up (i.e. how to
>  identify services which get configured by Ansible).
> 
 It's also worth noting that many projects keep these sorts of things in
>  different repos too. Like Kolla fully separates kolla-ansible and
>  kolla-kubernetes as they are quite divergent. We have been able to
>  preserve some of our common service architectures but as things move
 towards kubernetes we may wish to change things structurally a bit
>  too.
> >>>
> >>> True, but the current directory layout was from back when we intended to
> >>> support multiple deployment tools in parallel (originally
> >>> tripleo-image-elements and puppet).  Since I think it has become clear 
> >>> that
> >>> it's impractical to maintain two different technologies to do essentially
> >>> the same thing I'm not sure there's a need for it now.  It's also worth
> >>> noting that kolla-kubernetes basically died because there wasn't enough
> >>> people to maintain both deployment methods, so we're not the only ones who
> >>> have found that to be true.  If/when we move to kubernetes I would
> >>> anticipate it going like the initial containers work did - development 
> >>> for a
> >>> couple of cycles, then a switch to the new thing and deprecation of the 
> >>> old
> >>> thing, then removal of support for the old thing.
> >>>
> >>> That being said, because the service yamls are
> >>> essentially an API for TripleO, as they're referenced in user
> >>
> >> this ^^
> >>
> >>> resource registries, I'm not sure it's worth the churn to move everything
> >>> either.  I think that's going to be an issue either way though, it's just 
> >>> a
> >>> question of the scope.  _Something_ is going to move around no matter how 
> >>> we
> >>> reorganize so it's a problem that needs to be addressed anyway.
> >>
> >> [tl;dr] I can foresee that reorganizing that API becomes a nightmare for
> >> maintainers doing backports for Queens (and the LTS downstream release
> >> based on it). Now imagine kubernetes support comes within the next few
> >> years, before we can let the old API just go...
> >>
> >> I have an example [0] to share of all the pain brought by a simple move of
> >> 'API defaults' from environments/services-docker to environments/services
> >> plus environments/services-baremetal. Each time a file's contents changed
> >> at its old location, like here [1], I had to run a lot of sanity checks to
> >> rebase it properly - like checking that the updated paths in resource
> >> registries are still valid or have been moved as well, then picking the
> >> source of truth for diverged old vs. changed locations - all that to lose
> >> nothing important in the process.
> >>
> >> So I'd say please let's *not* change services' paths/namespaces in the
> >> t-h-t "API" without a real need to do that, when there are no
> >> alternatives left.
> >>
> > Ok so it's time to dig this thread back up. I'm currently looking at
> > the chrony support which will require a new service[0][1]. Rather than
> > add it under puppet, we'll likely want to leverage ansible. So I guess
> > the question is where do we put services going forward?  Additionally
> > as we look to truly removing the baremetal deployment options and
> > puppet service deployment, it seems like we need to consolidate under
> > a single structure.  Given that we don't want to force too much churn,
> > does this mean that we should align to the docker/services/*.yaml
> > structure or should we be proposing a new structure that we can try to
> > align on.
> >
> > There is outstanding tech-debt around the nested stacks and references
> > within these services when we added the container deployments so it's
> > something that would be beneficial to start tackling sooner rather
> > than later.

Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-08-03 Thread Alex Schultz
On Thu, Aug 2, 2018 at 11:32 PM, Cédric Jeanneret  wrote:
>
>
> On 08/02/2018 11:41 PM, Steve Baker wrote:
>>
>>
>> On 02/08/18 13:03, Alex Schultz wrote:
>>> On Mon, Jul 9, 2018 at 6:28 AM, Bogdan Dobrelya 
>>> wrote:
 On 7/6/18 7:02 PM, Ben Nemec wrote:
>
>
> On 07/05/2018 01:23 PM, Dan Prince wrote:
>> On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote:
>>>
>>> I would almost rather see us organize the directories by service
>>> name/project instead of implementation.
>>>
>>> Instead of:
>>>
>>> puppet/services/nova-api.yaml
>>> puppet/services/nova-conductor.yaml
>>> docker/services/nova-api.yaml
>>> docker/services/nova-conductor.yaml
>>>
>>> We'd have:
>>>
>>> services/nova/nova-api-puppet.yaml
>>> services/nova/nova-conductor-puppet.yaml
>>> services/nova/nova-api-docker.yaml
>>> services/nova/nova-conductor-docker.yaml
>>>
>>> (or perhaps even another level of directories to indicate
>>> puppet/docker/ansible?)
>>
>> I'd be open to this but doing changes on this scale has a much larger
>> developer and user impact than what I was thinking we would be willing
>> to entertain for the issue that caused me to bring this up (i.e.
>> how to
>> identify services which get configured by Ansible).
>>
>> It's also worth noting that many projects keep these sorts of things in
>> different repos too. Like Kolla fully separates kolla-ansible and
>> kolla-kubernetes as they are quite divergent. We have been able to
>> preserve some of our common service architectures but as things move
>> towards kubernetes we may wish to change things structurally a bit
>> too.
>
> True, but the current directory layout was from back when we
> intended to
> support multiple deployment tools in parallel (originally
> tripleo-image-elements and puppet).  Since I think it has become
> clear that
> it's impractical to maintain two different technologies to do
> essentially
> the same thing I'm not sure there's a need for it now.  It's also worth
> noting that kolla-kubernetes basically died because there wasn't enough
> people to maintain both deployment methods, so we're not the only
> ones who
> have found that to be true.  If/when we move to kubernetes I would
> anticipate it going like the initial containers work did -
> development for a
> couple of cycles, then a switch to the new thing and deprecation of
> the old
> thing, then removal of support for the old thing.
>
> That being said, because the service yamls are
> essentially an API for TripleO, as they're referenced in user

 this ^^

> resource registries, I'm not sure it's worth the churn to move
> everything
> either.  I think that's going to be an issue either way though, it's
> just a
> question of the scope.  _Something_ is going to move around no
> matter how we
> reorganize so it's a problem that needs to be addressed anyway.

 [tl;dr] I can foresee that reorganizing that API becomes a nightmare for
 maintainers doing backports for Queens (and the LTS downstream release
 based on it). Now imagine kubernetes support comes within the next few
 years, before we can let the old API just go...

 I have an example [0] to share of all the pain brought by a simple move of
 'API defaults' from environments/services-docker to environments/services
 plus environments/services-baremetal. Each time a file's contents changed
 at its old location, like here [1], I had to run a lot of sanity checks to
 rebase it properly - like checking that the updated paths in resource
 registries are still valid or have been moved as well, then picking the
 source of truth for diverged old vs. changed locations - all that to lose
 nothing important in the process.

 So I'd say please let's *not* change services' paths/namespaces in the
 t-h-t "API" without a real need to do that, when there are no
 alternatives left.

>>> Ok so it's time to dig this thread back up. I'm currently looking at
>>> the chrony support which will require a new service[0][1]. Rather than
>>> add it under puppet, we'll likely want to leverage ansible. So I guess
>>> the question is where do we put services going forward?  Additionally
>>> as we look to truly removing the baremetal deployment options and
>>> puppet service deployment, it seems like we need to consolidate under
>>> a single structure.  Given that we don't want to force too much churn,
>>> does this mean that we should align to the docker/services/*.yaml
>>> structure or should we be proposing a new structure that we can try to
>>> align on.
>>>
>>> There is outstanding tech-debt around the nested stacks and references
>>> within these services when we added the container deployments so it's
>>> something that would be beneficial to start tackling sooner rather than later.

Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-08-02 Thread Cédric Jeanneret


On 08/02/2018 11:41 PM, Steve Baker wrote:
> 
> 
> On 02/08/18 13:03, Alex Schultz wrote:
>> On Mon, Jul 9, 2018 at 6:28 AM, Bogdan Dobrelya 
>> wrote:
>>> On 7/6/18 7:02 PM, Ben Nemec wrote:


 On 07/05/2018 01:23 PM, Dan Prince wrote:
> On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote:
>>
>> I would almost rather see us organize the directories by service
>> name/project instead of implementation.
>>
>> Instead of:
>>
>> puppet/services/nova-api.yaml
>> puppet/services/nova-conductor.yaml
>> docker/services/nova-api.yaml
>> docker/services/nova-conductor.yaml
>>
>> We'd have:
>>
>> services/nova/nova-api-puppet.yaml
>> services/nova/nova-conductor-puppet.yaml
>> services/nova/nova-api-docker.yaml
>> services/nova/nova-conductor-docker.yaml
>>
>> (or perhaps even another level of directories to indicate
>> puppet/docker/ansible?)
>
> I'd be open to this but doing changes on this scale has a much larger
> developer and user impact than what I was thinking we would be willing
> to entertain for the issue that caused me to bring this up (i.e.
> how to
> identify services which get configured by Ansible).
>
> It's also worth noting that many projects keep these sorts of things in
> different repos too. Like Kolla fully separates kolla-ansible and
> kolla-kubernetes as they are quite divergent. We have been able to
> preserve some of our common service architectures but as things move
> towards kubernetes we may wish to change things structurally a bit
> too.

 True, but the current directory layout was from back when we
 intended to
 support multiple deployment tools in parallel (originally
 tripleo-image-elements and puppet).  Since I think it has become
 clear that
 it's impractical to maintain two different technologies to do
 essentially
 the same thing I'm not sure there's a need for it now.  It's also worth
 noting that kolla-kubernetes basically died because there wasn't enough
 people to maintain both deployment methods, so we're not the only
 ones who
 have found that to be true.  If/when we move to kubernetes I would
 anticipate it going like the initial containers work did -
 development for a
 couple of cycles, then a switch to the new thing and deprecation of
 the old
 thing, then removal of support for the old thing.

 That being said, because the service yamls are
 essentially an API for TripleO, as they're referenced in user
>>>
>>> this ^^
>>>
 resource registries, I'm not sure it's worth the churn to move
 everything
 either.  I think that's going to be an issue either way though, it's
 just a
 question of the scope.  _Something_ is going to move around no
 matter how we
 reorganize so it's a problem that needs to be addressed anyway.
>>>
>>> [tl;dr] I can foresee that reorganizing that API becomes a nightmare for
>>> maintainers doing backports for Queens (and the LTS downstream release
>>> based on it). Now imagine kubernetes support comes within the next few
>>> years, before we can let the old API just go...
>>>
>>> I have an example [0] to share of all the pain brought by a simple move of
>>> 'API defaults' from environments/services-docker to environments/services
>>> plus environments/services-baremetal. Each time a file's contents changed
>>> at its old location, like here [1], I had to run a lot of sanity checks to
>>> rebase it properly - like checking that the updated paths in resource
>>> registries are still valid or have been moved as well, then picking the
>>> source of truth for diverged old vs. changed locations - all that to lose
>>> nothing important in the process.
>>>
>>> So I'd say please let's *not* change services' paths/namespaces in the
>>> t-h-t "API" without a real need to do that, when there are no
>>> alternatives left.
>>>
>> Ok so it's time to dig this thread back up. I'm currently looking at
>> the chrony support which will require a new service[0][1]. Rather than
>> add it under puppet, we'll likely want to leverage ansible. So I guess
>> the question is where do we put services going forward?  Additionally
>> as we look to truly removing the baremetal deployment options and
>> puppet service deployment, it seems like we need to consolidate under
>> a single structure.  Given that we don't want to force too much churn,
>> does this mean that we should align to the docker/services/*.yaml
>> structure or should we be proposing a new structure that we can try to
>> align on.
>>
>> There is outstanding tech-debt around the nested stacks and references
>> within these services when we added the container deployments so it's
>> something that would be beneficial to start tackling sooner rather
>> than later.  Personally I think we're always going to have the issue
>> when we rename files that could have been referenced by custom templates.

Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-08-02 Thread Steve Baker



On 02/08/18 13:03, Alex Schultz wrote:

On Mon, Jul 9, 2018 at 6:28 AM, Bogdan Dobrelya  wrote:

On 7/6/18 7:02 PM, Ben Nemec wrote:



On 07/05/2018 01:23 PM, Dan Prince wrote:

On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote:


I would almost rather see us organize the directories by service
name/project instead of implementation.

Instead of:

puppet/services/nova-api.yaml
puppet/services/nova-conductor.yaml
docker/services/nova-api.yaml
docker/services/nova-conductor.yaml

We'd have:

services/nova/nova-api-puppet.yaml
services/nova/nova-conductor-puppet.yaml
services/nova/nova-api-docker.yaml
services/nova/nova-conductor-docker.yaml

(or perhaps even another level of directories to indicate
puppet/docker/ansible?)


I'd be open to this but doing changes on this scale has a much larger
developer and user impact than what I was thinking we would be willing
to entertain for the issue that caused me to bring this up (i.e. how to
identify services which get configured by Ansible).

It's also worth noting that many projects keep these sorts of things in
different repos too. Like Kolla fully separates kolla-ansible and
kolla-kubernetes as they are quite divergent. We have been able to
preserve some of our common service architectures but as things move
towards kubernetes we may wish to change things structurally a bit
too.


True, but the current directory layout was from back when we intended to
support multiple deployment tools in parallel (originally
tripleo-image-elements and puppet).  Since I think it has become clear that
it's impractical to maintain two different technologies to do essentially
the same thing I'm not sure there's a need for it now.  It's also worth
noting that kolla-kubernetes basically died because there wasn't enough
people to maintain both deployment methods, so we're not the only ones who
have found that to be true.  If/when we move to kubernetes I would
anticipate it going like the initial containers work did - development for a
couple of cycles, then a switch to the new thing and deprecation of the old
thing, then removal of support for the old thing.

That being said, because the service yamls are
essentially an API for TripleO, as they're referenced in user


this ^^


resource registries, I'm not sure it's worth the churn to move everything
either.  I think that's going to be an issue either way though, it's just a
question of the scope.  _Something_ is going to move around no matter how we
reorganize so it's a problem that needs to be addressed anyway.


[tl;dr] I can foresee that reorganizing that API becomes a nightmare for
maintainers doing backports for Queens (and the LTS downstream release based
on it). Now imagine kubernetes support comes within the next few years,
before we can let the old API just go...

I have an example [0] to share of all the pain brought by a simple move of
'API defaults' from environments/services-docker to environments/services
plus environments/services-baremetal. Each time a file's contents changed
at its old location, like here [1], I had to run a lot of sanity checks to
rebase it properly - like checking that the updated paths in resource
registries are still valid or have been moved as well, then picking the
source of truth for diverged old vs. changed locations - all that to lose
nothing important in the process.

So I'd say please let's *not* change services' paths/namespaces in the t-h-t
"API" without a real need to do that, when there are no alternatives left.


Ok so it's time to dig this thread back up. I'm currently looking at
the chrony support which will require a new service[0][1]. Rather than
add it under puppet, we'll likely want to leverage ansible. So I guess
the question is where do we put services going forward?  Additionally
as we look to truly removing the baremetal deployment options and
puppet service deployment, it seems like we need to consolidate under
a single structure.  Given that we don't want to force too much churn,
does this mean that we should align to the docker/services/*.yaml
structure or should we be proposing a new structure that we can try to
align on.

There is outstanding tech-debt around the nested stacks and references
within these services when we added the container deployments so it's
something that would be beneficial to start tackling sooner rather
than later.  Personally I think we're always going to have the issue
when we rename files that could have been referenced by custom
templates, but I don't think we can continue to carry the outstanding
tech debt around these static locations.  Should we be investing in
coming up with some sort of mapping that we can use to warn users
when we move files?


When Stein development starts, the puppet services will have been 
deprecated for an entire cycle. Can I suggest we use this reorganization 
as the time we delete the puppet services files? This would relieve us 
of the burden of maintaining a deployment method that we no 

Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-08-01 Thread Alex Schultz
On Mon, Jul 9, 2018 at 6:28 AM, Bogdan Dobrelya  wrote:
> On 7/6/18 7:02 PM, Ben Nemec wrote:
>>
>>
>>
>> On 07/05/2018 01:23 PM, Dan Prince wrote:
>>>
>>> On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote:


 I would almost rather see us organize the directories by service
 name/project instead of implementation.

 Instead of:

 puppet/services/nova-api.yaml
 puppet/services/nova-conductor.yaml
 docker/services/nova-api.yaml
 docker/services/nova-conductor.yaml

 We'd have:

 services/nova/nova-api-puppet.yaml
 services/nova/nova-conductor-puppet.yaml
 services/nova/nova-api-docker.yaml
 services/nova/nova-conductor-docker.yaml

 (or perhaps even another level of directories to indicate
 puppet/docker/ansible?)
>>>
>>>
>>> I'd be open to this but doing changes on this scale has a much larger
>>> developer and user impact than what I was thinking we would be willing
>>> to entertain for the issue that caused me to bring this up (i.e. how to
>>> identify services which get configured by Ansible).
>>>
>>> It's also worth noting that many projects keep these sorts of things in
>>> different repos too. Like Kolla fully separates kolla-ansible and
>>> kolla-kubernetes as they are quite divergent. We have been able to
>>> preserve some of our common service architectures but as things move
>>> towards kubernetes we may wish to change things structurally a bit
>>> too.
>>
>>
>> True, but the current directory layout was from back when we intended to
>> support multiple deployment tools in parallel (originally
>> tripleo-image-elements and puppet).  Since I think it has become clear that
>> it's impractical to maintain two different technologies to do essentially
>> the same thing I'm not sure there's a need for it now.  It's also worth
>> noting that kolla-kubernetes basically died because there wasn't enough
>> people to maintain both deployment methods, so we're not the only ones who
>> have found that to be true.  If/when we move to kubernetes I would
>> anticipate it going like the initial containers work did - development for a
>> couple of cycles, then a switch to the new thing and deprecation of the old
>> thing, then removal of support for the old thing.
>>
>> That being said, because the service yamls are
>> essentially an API for TripleO, as they're referenced in user
>
>
> this ^^
>
>> resource registries, I'm not sure it's worth the churn to move everything
>> either.  I think that's going to be an issue either way though, it's just a
>> question of the scope.  _Something_ is going to move around no matter how we
>> reorganize so it's a problem that needs to be addressed anyway.
>
>
> [tl;dr] I can foresee that reorganizing that API becomes a nightmare for
> maintainers doing backports for Queens (and the LTS downstream release based
> on it). Now imagine kubernetes support comes within the next few years,
> before we can let the old API just go...
>
> I have an example [0] to share of all the pain brought by a simple move of
> 'API defaults' from environments/services-docker to environments/services
> plus environments/services-baremetal. Each time a file's contents changed
> at its old location, like here [1], I had to run a lot of sanity checks to
> rebase it properly - like checking that the updated paths in resource
> registries are still valid or have been moved as well, then picking the
> source of truth for diverged old vs. changed locations - all that to lose
> nothing important in the process.
>
> So I'd say please let's *not* change services' paths/namespaces in the t-h-t
> "API" without a real need to do that, when there are no alternatives left.
>

Ok so it's time to dig this thread back up. I'm currently looking at
the chrony support which will require a new service[0][1]. Rather than
add it under puppet, we'll likely want to leverage ansible. So I guess
the question is where do we put services going forward?  Additionally
as we look to truly removing the baremetal deployment options and
puppet service deployment, it seems like we need to consolidate under
a single structure.  Given that we don't want to force too much churn,
does this mean that we should align to the docker/services/*.yaml
structure or should we be proposing a new structure that we can try to
align on.
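
For context on what 'leverage ansible' means mechanically: a service
template can already carry Ansible tasks in its role_data output (the
docker.yaml case Dan opened the thread with works this way). A
hypothetical, trimmed sketch of a chrony-style service, with names and
tasks invented for illustration:

  outputs:
    role_data:
      value:
        service_name: chrony   # hypothetical
        # Ansible tasks run on the host during deployment,
        # in place of a puppet manifest
        host_prep_tasks:
          - name: install chrony
            package:
              name: chrony
              state: present
          - name: write a minimal chrony.conf
            copy:
              dest: /etc/chrony.conf
              content: "server pool.ntp.org iburst\n"   # illustrative only

Nothing about the file's name or directory signals that it is
Ansible-configured, which is exactly the discoverability problem that
started this thread.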

There is outstanding tech-debt around the nested stacks and references
within these services when we added the container deployments so it's
something that would be beneficial to start tackling sooner rather
than later.  Personally I think we're always going to have the issue
when we rename files that could have been referenced by custom
templates, but I don't think we can continue to carry the outstanding
tech debt around these static locations.  Should we be investing in
coming up with some sort of mapping that we can use to warn users
when we move files?
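
One lightweight shape such a mapping could take - purely a sketch,
nothing like it exists in t-h-t - is a YAML file that plan upload or
validation consults to warn about stale paths in user environments:

  # deprecated-template-paths.yaml (hypothetical)
  deprecated_paths:
    puppet/services/nova-api.yaml: deployment/nova/nova-api-puppet.yaml
    docker/services/nova-api.yaml: deployment/nova/nova-api-docker.yaml

A tool walking a user's resource_registry could then emit a deprecation
warning with the new location instead of failing later with a
missing-template error.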

Thanks,
-Alex

[0] 

Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-07-09 Thread Bogdan Dobrelya

On 7/6/18 7:02 PM, Ben Nemec wrote:



On 07/05/2018 01:23 PM, Dan Prince wrote:

On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote:


I would almost rather see us organize the directories by service
name/project instead of implementation.

Instead of:

puppet/services/nova-api.yaml
puppet/services/nova-conductor.yaml
docker/services/nova-api.yaml
docker/services/nova-conductor.yaml

We'd have:

services/nova/nova-api-puppet.yaml
services/nova/nova-conductor-puppet.yaml
services/nova/nova-api-docker.yaml
services/nova/nova-conductor-docker.yaml

(or perhaps even another level of directories to indicate
puppet/docker/ansible?)


I'd be open to this but doing changes on this scale has a much larger
developer and user impact than what I was thinking we would be willing
to entertain for the issue that caused me to bring this up (i.e. how to
identify services which get configured by Ansible).

It's also worth noting that many projects keep these sorts of things in
different repos too. Like Kolla fully separates kolla-ansible and
kolla-kubernetes as they are quite divergent. We have been able to
preserve some of our common service architectures but as things move
towards kubernetes we may wish to change things structurally a bit
too.


True, but the current directory layout was from back when we intended to 
support multiple deployment tools in parallel (originally 
tripleo-image-elements and puppet).  Since I think it has become clear 
that it's impractical to maintain two different technologies to do 
essentially the same thing I'm not sure there's a need for it now.  It's 
also worth noting that kolla-kubernetes basically died because there 
wasn't enough people to maintain both deployment methods, so we're not 
the only ones who have found that to be true.  If/when we move to 
kubernetes I would anticipate it going like the initial containers work 
did - development for a couple of cycles, then a switch to the new thing 
and deprecation of the old thing, then removal of support for the old 
thing.


That being said, because the service yamls are
essentially an API for TripleO, as they're referenced in user


this ^^

resource registries, I'm not sure it's worth the churn to move 
everything either.  I think that's going to be an issue either way 
though, it's just a question of the scope.  _Something_ is going to move 
around no matter how we reorganize so it's a problem that needs to be 
addressed anyway.


[tl;dr] I can foresee that reorganizing that API becomes a nightmare for
maintainers doing backports for Queens (and the LTS downstream release
based on it). Now imagine kubernetes support comes within the next
few years, before we can let the old API just go...


I have an example [0] to share of all the pain brought by a simple move of
'API defaults' from environments/services-docker to
environments/services plus environments/services-baremetal. Each time a
file's contents changed at its old location, like here [1], I had to run a
lot of sanity checks to rebase it properly - like checking that the
updated paths in resource registries are still valid or have been
moved as well, then picking the source of truth for diverged old vs.
changed locations - all that to lose nothing important in the process.


So I'd say please let's *not* change services' paths/namespaces in the
t-h-t "API" without a real need to do that, when there are no
alternatives left.


[0] https://review.openstack.org/#/q/topic:containers-default-stable/queens
[1] https://review.openstack.org/#/c/567810



-Ben




--
Best regards,
Bogdan Dobrelya,
Irc #bogdando



Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-07-06 Thread Ben Nemec

(adding the list back)

On 07/06/2018 12:05 PM, Dan Prince wrote:

On Fri, Jul 6, 2018 at 12:03 PM Ben Nemec  wrote:




On 07/05/2018 01:23 PM, Dan Prince wrote:

On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote:


I would almost rather see us organize the directories by service
name/project instead of implementation.

Instead of:

puppet/services/nova-api.yaml
puppet/services/nova-conductor.yaml
docker/services/nova-api.yaml
docker/services/nova-conductor.yaml

We'd have:

services/nova/nova-api-puppet.yaml
services/nova/nova-conductor-puppet.yaml
services/nova/nova-api-docker.yaml
services/nova/nova-conductor-docker.yaml

(or perhaps even another level of directories to indicate
puppet/docker/ansible?)


I'd be open to this, but changes on this scale have a much larger
developer and user impact than what I was thinking we would be willing
to entertain for the issue that caused me to bring this up (i.e. how to
identify services which get configured by Ansible).

It's also worth noting that many projects keep these sorts of things in
different repos too. Kolla, for example, fully separates kolla-ansible and
kolla-kubernetes as they are quite divergent. We have been able to
preserve some of our common service architectures, but as things move
towards kubernetes we may wish to change things structurally a bit
too.


True, but the current directory layout was from back when we intended to
support multiple deployment tools in parallel (originally
tripleo-image-elements and puppet).  Since I think it has become clear
that it's impractical to maintain two different technologies to do
essentially the same thing, I'm not sure there's a need for it now.  It's
also worth noting that kolla-kubernetes basically died because there
weren't enough people to maintain both deployment methods, so we're not
the only ones who have found that to be true.  If/when we move to
kubernetes I would anticipate it going like the initial containers work
did - development for a couple of cycles, then a switch to the new thing
and deprecation of the old thing, then removal of support for the old thing.


Sometimes the old things are a bit longer-lived though, and sometimes
the new thing doesn't work out the way you thought it would. Having an
abstraction layer where you can have more than just new/old things is
sometimes very useful. I'd hate to see us ditch it. Especially since
you can already sort of have both right now by using the resource
registry files to set up a nice default for everything and gradually
switch to new stuff as your defaults.


I don't know that you lose that ability in either case though.  You can 
still point your resource registry at the -puppet versions of the 
services if you want to do that.  The only thing that changes is the 
location of the files.
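
For example, the pinning stays a one-line mapping in a user environment 
file either way.  A rough sketch (the mapping key is the real service 
interface; the paths just contrast the current layout with the layout 
proposed above):

  # today: explicitly selecting the puppet implementation
  resource_registry:
    OS::TripleO::Services::NovaApi: puppet/services/nova-api.yaml

  # after a reorg: same pinning, new location
  resource_registry:
    OS::TripleO::Services::NovaApi: services/nova/nova-api-puppet.yaml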


Given that, I don't think there's actually a _huge_ difference between 
the two options.  I prefer the flat directory just because, as I've been 
working on designate, it's mildly annoying to have to navigate two 
separate directory trees to find all the designate-related service 
files, but I realize that's a fairly minor complaint. :-)






That being said, because the service yamls are essentially an API for
TripleO (they're referenced in user resource registries), I'm not sure
it's worth the churn to move everything either.  I think that's going to
be an issue either way though, it's just a question of the scope.
_Something_ is going to move around no matter how we reorganize so it's
a problem that needs to be addressed anyway.


I feel like renaming every service template in t-h-t as part of
solving my initial concerns around identifying the 'ansible-configured 
services' is a bit of a sledgehammer though. I like some of the
renaming ideas proposed here too. I'm just not convinced that renaming
*some* templates is the same as restructuring the entire t-h-t
services hierarchy. I'd rather wait and let it happen more naturally I
guess, perhaps when we already need to do something more destructive.


My thought was that either way we're causing people grief because they 
have to update their files, but the big bang approach would mean they do 
it once and then it's done.  Except I realize now that's not true, 
because as more things move to ansible the filenames would continue to 
change.


Which makes me wonder if we should be encoding implementation details 
into the filenames in the first place.  Ideally, the interface would be 
"I want designate-api, so I set OS::TripleO::Services::DesignateApi: 
services/designate-api.yaml".  As a user I probably don't care what 
technology is used to deploy it, I just want it deployed.  Then if/when 
we change our default method, it just gets swapped out seamlessly and 
there's no need for me to change my configuration.
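
A user environment file under that model would then be just (a sketch, 
with the path assuming the services/ layout proposed earlier in the 
thread):

  resource_registry:
    # implementation-agnostic: whether this is deployed via puppet or
    # ansible is hidden behind the default template and can change
    # between releases without any edits on my side
    OS::TripleO::Services::DesignateApi: services/designate-api.yaml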


Obviously we'd still need the ability to have method-specific templates 
too, but maybe the default designate-api.yaml could be a symlink to 
whatever we consider the primary one.  Not 

Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-07-06 Thread Ben Nemec



On 07/05/2018 01:23 PM, Dan Prince wrote:

On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote:


I would almost rather see us organize the directories by service
name/project instead of implementation.

Instead of:

puppet/services/nova-api.yaml
puppet/services/nova-conductor.yaml
docker/services/nova-api.yaml
docker/services/nova-conductor.yaml

We'd have:

services/nova/nova-api-puppet.yaml
services/nova/nova-conductor-puppet.yaml
services/nova/nova-api-docker.yaml
services/nova/nova-conductor-docker.yaml

(or perhaps even another level of directories to indicate
puppet/docker/ansible?)


I'd be open to this, but changes on this scale have a much larger
developer and user impact than what I was thinking we would be willing
to entertain for the issue that caused me to bring this up (i.e. how to
identify services which get configured by Ansible).

It's also worth noting that many projects keep these sorts of things in
different repos too. Kolla, for example, fully separates kolla-ansible and
kolla-kubernetes as they are quite divergent. We have been able to
preserve some of our common service architectures, but as things move
towards kubernetes we may wish to change things structurally a bit
too.


True, but the current directory layout was from back when we intended to 
support multiple deployment tools in parallel (originally 
tripleo-image-elements and puppet).  Since I think it has become clear 
that it's impractical to maintain two different technologies to do 
essentially the same thing, I'm not sure there's a need for it now.  It's 
also worth noting that kolla-kubernetes basically died because there 
weren't enough people to maintain both deployment methods, so we're not 
the only ones who have found that to be true.  If/when we move to 
kubernetes I would anticipate it going like the initial containers work 
did - development for a couple of cycles, then a switch to the new thing 
and deprecation of the old thing, then removal of support for the old thing.


That being said, because the service yamls are essentially an API for 
TripleO (they're referenced in user resource registries), I'm not sure 
it's worth the churn to move everything either.  I think that's going to 
be an issue either way though, it's just a question of the scope. 
_Something_ is going to move around no matter how we reorganize so it's 
a problem that needs to be addressed anyway.
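
To make the "API" part concrete: users carry mappings like the 
following in their own environment files, so every file we move breaks 
somebody's deployment.  A sketch, using one of the current in-tree 
paths:

  resource_registry:
    OS::TripleO::Services::NovaApi: /usr/share/openstack-tripleo-heat-templates/docker/services/nova-api.yaml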


-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-07-06 Thread Cédric Jeanneret

[snip]

> 
> I would almost rather see us organize the directories by service
> name/project instead of implementation.
> 
> Instead of:
> 
> puppet/services/nova-api.yaml
> puppet/services/nova-conductor.yaml
> docker/services/nova-api.yaml
> docker/services/nova-conductor.yaml
> 
> We'd have:
> 
> services/nova/nova-api-puppet.yaml
> services/nova/nova-conductor-puppet.yaml
> services/nova/nova-api-docker.yaml
> services/nova/nova-conductor-docker.yaml

I'd also go for that one - it would be clearer and easier to search when
one wants to see how a service is configured, since it displays all
implementations for a given service.
The current tree is a bit unusual.

> 
> (or perhaps even another level of directories to indicate
> puppet/docker/ansible?)
> 
> Personally, such an organization is something I'm more used to. It
> feels more similar to how most would expect a puppet module or ansible
> role to be organized, where you have the abstraction (service
> configuration) at a higher directory level than specific
> implementations.
> 
> It would also lend itself more easily to adding implementations only
> for specific services, and address the question of whether a new top level
> implementation directory needs to be created. For example, adding a
> services/nova/nova-api-chef.yaml seems a lot less contentious than
> adding a top level chef/services/nova-api.yaml.

True. Easier to add new deployment methods, and probably easier to search.

> 
> It'd also be nice if we had a way to mark the default within a given
> service's directory. Perhaps services/nova/nova-api-default.yaml,
> which would be a new template that just consumes the default? Or
> perhaps a symlink, although it was pointed out symlinks don't work in
> swift containers. Still, that could possibly be addressed in our plan
> upload workflows. Then the resource-registry would point at
> nova-api-default.yaml. One could easily tell which is the default
> without having to cross-reference the resource-registry.

+42 for a way to get the default implementation - a template that just
consumes the right one should be enough and self-explanatory.
Having a tree based on services instead of implementations will allow
that in an easy way.

> 
> 

-- 
Cédric Jeanneret
Software Engineer
DFG:DF



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-07-05 Thread Dan Prince
On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote:
> 
> I would almost rather see us organize the directories by service
> name/project instead of implementation.
> 
> Instead of:
> 
> puppet/services/nova-api.yaml
> puppet/services/nova-conductor.yaml
> docker/services/nova-api.yaml
> docker/services/nova-conductor.yaml
> 
> We'd have:
> 
> services/nova/nova-api-puppet.yaml
> services/nova/nova-conductor-puppet.yaml
> services/nova/nova-api-docker.yaml
> services/nova/nova-conductor-docker.yaml
> 
> (or perhaps even another level of directories to indicate
> puppet/docker/ansible?)

I'd be open to this, but changes on this scale have a much larger
developer and user impact than what I was thinking we would be willing
to entertain for the issue that caused me to bring this up (i.e. how to
identify services which get configured by Ansible).

It's also worth noting that many projects keep these sorts of things in
different repos too. Kolla, for example, fully separates kolla-ansible and
kolla-kubernetes as they are quite divergent. We have been able to
preserve some of our common service architectures, but as things move
towards kubernetes we may wish to change things structurally a bit
too.

Dan


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-07-05 Thread James Slagle
On Thu, Jul 5, 2018 at 1:50 PM, Dan Prince  wrote:
> Last week I was tinkering with my docker configuration a bit and was a
> bit surprised that puppet/services/docker.yaml no longer used puppet to
> configure the docker daemon. It now uses Ansible [1] which is very cool
> but brings up the question of how we should clearly indicate to
> developers and users that we are using Ansible vs Puppet for
> configuration?
>
> TripleO has been around for a while now, has supported multiple
> configuration and service types over the years: os-apply-config,
> puppet, containers, and now Ansible. In the past we've used rigid
> directory structures to identify which "service type" was used. More
> recently we mixed things up even more by extending one service
> type from another ("docker" services all initially extended the
> "puppet" services to generate config files and provide an easy upgrade
> path).
>
> Similarly we now use Ansible all over the place for other things in
> many of our docker and puppet services for things like upgrades. That is
> all good too. I guess the thing I'm getting at here is just a way to
> cleanly identify which services are configured via Puppet vs. Ansible.
> And how can we do that in the least destructive way possible so as not
> to confuse ourselves and our users in the process.
>
> Also, I think it's worth keeping in mind that TripleO was once a multi-
> vendor project with vendors that had different preferences on service
> configuration. Also having the ability to support multiple
> configuration mechanisms in the future could once again present itself
> (thinking of Kubernetes as an example). Keeping in mind there may be a
> conversion period that could well last more than a release or two.
>
> I suggested a 'services/ansible' directory with mixed responses in our
> #tripleo meeting this week. Any other thoughts on the matter?

I would almost rather see us organize the directories by service
name/project instead of implementation.

Instead of:

puppet/services/nova-api.yaml
puppet/services/nova-conductor.yaml
docker/services/nova-api.yaml
docker/services/nova-conductor.yaml

We'd have:

services/nova/nova-api-puppet.yaml
services/nova/nova-conductor-puppet.yaml
services/nova/nova-api-docker.yaml
services/nova/nova-conductor-docker.yaml

(or perhaps even another level of directories to indicate
puppet/docker/ansible?)

Personally, such an organization is something I'm more used to. It
feels more similar to how most would expect a puppet module or ansible
role to be organized, where you have the abstraction (service
configuration) at a higher directory level than specific
implementations.

It would also lend itself more easily to adding implementations only
for specific services, and address the question of whether a new top level
implementation directory needs to be created. For example, adding a
services/nova/nova-api-chef.yaml seems a lot less contentious than
adding a top level chef/services/nova-api.yaml.

It'd also be nice if we had a way to mark the default within a given
service's directory. Perhaps services/nova/nova-api-default.yaml,
which would be a new template that just consumes the default? Or
perhaps a symlink, although it was pointed out symlinks don't work in
swift containers. Still, that could possibly be addressed in our plan
upload workflows. Then the resource-registry would point at
nova-api-default.yaml. One could easily tell which is the default
without having to cross-reference the resource-registry.
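
A minimal sketch of what that default template could look like, 
assuming puppet stays the default for nova-api and abbreviating the 
standard service-template parameters to a representative subset:

  heat_template_version: queens
  description: Default implementation of the nova-api service (currently puppet).

  parameters:
    # the full set of standard service-template parameters would be
    # forwarded; only a few are shown here
    ServiceData:
      type: json
      default: {}
    ServiceNetMap:
      type: json
      default: {}
    EndpointMap:
      type: json
      default: {}
    RoleName:
      type: string
      default: ''
    RoleParameters:
      type: json
      default: {}

  resources:
    NovaApiImpl:
      # relative path to the implementation-specific template in the
      # same services/nova/ directory
      type: ./nova-api-puppet.yaml
      properties:
        ServiceData: {get_param: ServiceData}
        ServiceNetMap: {get_param: ServiceNetMap}
        EndpointMap: {get_param: EndpointMap}
        RoleName: {get_param: RoleName}
        RoleParameters: {get_param: RoleParameters}

  outputs:
    role_data:
      description: Role data for the nova-api service, passed through unchanged.
      value: {get_attr: [NovaApiImpl, role_data]}

Switching the default would then be a one-line change to the nested 
resource's type, invisible to anyone pointing at nova-api-default.yaml.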


-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] easily identifying how services are configured

2018-07-05 Thread Dan Prince
Last week I was tinkering with my docker configuration a bit and was a
bit surprised that puppet/services/docker.yaml no longer used puppet to
configure the docker daemon. It now uses Ansible [1] which is very cool
but brings up the question of how we should clearly indicate to
developers and users that we are using Ansible vs Puppet for
configuration?

TripleO has been around for a while now, has supported multiple
configuration and service types over the years: os-apply-config,
puppet, containers, and now Ansible. In the past we've used rigid
directory structures to identify which "service type" was used. More
recently we mixed things up even more by extending one service
type from another ("docker" services all initially extended the
"puppet" services to generate config files and provide an easy upgrade
path).
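
For anyone who hasn't looked inside one of these, the "extends" pattern 
is roughly the following (a trimmed sketch of a docker service 
template, not a complete file):

  resources:
    NovaApiBase:
      # pull in the puppet service template to reuse its config logic
      type: ../../puppet/services/nova-api.yaml
      properties:
        EndpointMap: {get_param: EndpointMap}
        ServiceNetMap: {get_param: ServiceNetMap}
        RoleName: {get_param: RoleName}
        RoleParameters: {get_param: RoleParameters}

  outputs:
    role_data:
      value:
        service_name: nova_api
        # reuse the puppet service's hiera settings for config generation
        config_settings: {get_attr: [NovaApiBase, role_data, config_settings]}
        # container-specific sections (trimmed here): how to generate the
        # config files and which containers to start
        puppet_config: {}
        docker_config: {}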

Similarly we now use Ansible all over the place for other things in
many of our docker and puppet services for things like upgrades. That is
all good too. I guess the thing I'm getting at here is just a way to
cleanly identify which services are configured via Puppet vs. Ansible.
And how can we do that in the least destructive way possible so as not
to confuse ourselves and our users in the process.

Also, I think it's worth keeping in mind that TripleO was once a multi-
vendor project with vendors that had different preferences on service
configuration. Also having the ability to support multiple
configuration mechanisms in the future could once again present itself
(thinking of Kubernetes as an example). Keeping in mind there may be a
conversion period that could well last more than a release or two.

I suggested a 'services/ansible' directory with mixed responses in our
#tripleo meeting this week. Any other thoughts on the matter?

Thanks,

Dan

[1] http://git.openstack.org/cgit/openstack/tripleo-heat-templates/commit/puppet/services/docker.yaml?id=00f5019ef28771e0b3544d0aa3110d5603d7c159

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev