Re: [openstack-dev] DeployArtifacts considered...complicated?

2018-06-28 Thread Lars Kellogg-Stedman
On Tue, Jun 19, 2018 at 05:17:36PM +0200, Jiří Stránský wrote:
> For the puppet modules specifically, we might also add another
> directory+mount into the docker-puppet container, which would be blank by
> default (unlike the existing, already populated /etc/puppet and
> /usr/share/openstack-puppet/modules). And we'd put that directory at the
> very start of modulepath. Then I *think* puppet would use a particular
> module from that dir *only*, not merge the contents with the rest of
> modulepath...

No, you would still have the problem that types/providers from *all*
available paths are activated, so if in your container you have
/etc/puppet/modules/themodule/lib/puppet/provider/something/foo.rb,
and you mount into the container
/container/puppet/modules/themodule/lib/puppet/provider/something/bar.rb,
then you end up with both foo.rb and bar.rb active and possibly
conflicting.

This only affects module lib/ directories. As Alex pointed out, puppet
classes themselves behave differently and don't conflict in this
fashion.
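To make the failure mode concrete, here's a small illustration. The paths are stand-ins for the real mounts, and `find` is only standing in for puppet's plugin loader, which walks lib/puppet across *every* modulepath entry:

```shell
# Illustration only: two modulepath entries, each shipping a provider
# for the same resource type. Paths are stand-ins for the real mounts.
base=$(mktemp -d)
mkdir -p "$base/etc-modules/themodule/lib/puppet/provider/something"
mkdir -p "$base/container-modules/themodule/lib/puppet/provider/something"
touch "$base/etc-modules/themodule/lib/puppet/provider/something/foo.rb"
touch "$base/container-modules/themodule/lib/puppet/provider/something/bar.rb"

# Puppet's plugin loading collects lib/puppet files from every modulepath
# entry, so both provider files end up active -- roughly what this shows:
find "$base" -path '*provider/something/*.rb' | sort
```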

-- 
Lars Kellogg-Stedman  | larsks @ {irc,twitter,github}
http://blog.oddbit.com/|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] DeployArtifacts considered...complicated?

2018-06-28 Thread Lars Kellogg-Stedman
On Tue, Jun 19, 2018 at 10:12:54AM -0600, Alex Schultz wrote:
> -1 to more services. We take a Heat time penalty for each new
> composable service we add and in this case I don't think this should
> be a service itself.  I think for this case, it would be better suited
> as a host prep task than a defined service.  Providing a way for users
> to define external host prep tasks might make more sense.

But right now, the only way to define a host_prep_task is via a
service template, right?  What I've done for this particular case is
create a new service template that exists only to provide a set of
host_prep_tasks:

  
https://github.com/CCI-MOC/rhosp-director-config/blob/master/templates/services/patch-puppet-modules.yaml

Is there a better way to do this?
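For reference, a template of that shape looks roughly like the following. This is a hedged sketch, not the actual patch-puppet-modules.yaml: the task is a placeholder, and the parameter list is my recollection of the boilerplate that the resource registry passes to every composable service template in this release:

```yaml
heat_template_version: queens

description: >
  No-op composable service that exists only to provide host_prep_tasks.
  (Sketch only -- parameters and tasks are illustrative.)

parameters:
  # Boilerplate parameters passed to every composable service template;
  # unused here, but the template must accept them.
  ServiceData:
    type: json
    default: {}
  ServiceNetMap:
    type: json
    default: {}
  DefaultPasswords:
    type: json
    default: {}
  EndpointMap:
    type: json
    default: {}
  RoleName:
    type: string
    default: ''
  RoleParameters:
    type: json
    default: {}

outputs:
  role_data:
    description: Role data for the host-prep-only service.
    value:
      service_name: patch_puppet_modules
      host_prep_tasks:
        # host_prep_tasks run as ansible tasks on the host; replace this
        # placeholder with the real prep steps.
        - name: example host prep step
          debug:
            msg: "replace with real prep tasks"
```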



Re: [openstack-dev] DeployArtifacts considered...complicated?

2018-06-19 Thread Alex Schultz
On Tue, Jun 19, 2018 at 9:17 AM, Jiří Stránský  wrote:
> On 19.6.2018 16:29, Lars Kellogg-Stedman wrote:
>>
>> On Tue, Jun 19, 2018 at 02:18:38PM +0100, Steven Hardy wrote:
>>>
>>> Is this the same issue Carlos is trying to fix via
>>> https://review.openstack.org/#/c/494517/ ?
>>
>>
>> That solves part of the problem, but it's not a complete solution.
>> In particular, it doesn't solve the problem that bit me: if you're
>> changing puppet providers (e.g., replacing
>> provider/keystone_config/ini_setting.rb with
>> provider/keystone_config/openstackconfig.rb), you still have the old
>> provider sitting around causing problems because unpacking a tarball
>> only *adds* files.
>>
>>> Yeah I think we've never seen this because normally the
>>> /etc/puppet/modules tarball overwrites the symlink, effectively giving
>>> you a new tree (the first time round at least).
>>
>>
>> But it doesn't, and that's the unexpected problem: if you replace the
>> /etc/puppet/modules/keystone symlink with a directory, then
>> /usr/share/openstack-puppet/modules/keystone is still there, and while
>> the manifests won't be used, the contents of the lib/ directory will
>> still be active.
>>
>>> Probably we could add something to the script to enable a forced
>>> cleanup each update:
>>>
>>>
>>> https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/deploy-artifacts.sh#L9
>>
>>
>> We could:
>>
>> (a) unpack the replacement puppet modules into a temporary location,
>>then
>>
>> (b) for each module, rm -rf the target directory and then copy it into
>>place
>>
>> But! This would require deploy_artifacts.sh to know that it was
>> unpacking puppet modules rather than a generic tarball.
>>
>>> This would have to be optional, so we could add something like a
>>> DeployArtifactsCleanupDirs parameter perhaps?
>>
>>
>> If we went with the above, sure.
>>
>>> One more thought which just occurred to me - we could add support for
>>> a git checkout/pull to the script?
>>
>>
>> Reiterating our conversation in #tripleo, I think rather than adding a
>> bunch of specific functionality to the DeployArtifacts feature, it
>> would make more sense to add the ability to include some sort of
>> user-defined pre/post tasks, either as shell scripts or as ansible
>> playbooks or something.
>>
>> On the other hand, I like your suggestion of just ditching
>> DeployArtifacts for a new composable service that defines
>> host_prep_tasks (or re-implementing DeployArtifacts as a composable
>> service), so I'm going to look at that as a possible alternative to
>> what I'm currently doing.
>>
>
> For the puppet modules specifically, we might also add another
> directory+mount into the docker-puppet container, which would be blank by
> default (unlike the existing, already populated /etc/puppet and
> /usr/share/openstack-puppet/modules). And we'd put that directory at the
> very start of modulepath. Then I *think* puppet would use a particular
> module from that dir *only*, not merge the contents with the rest of
> modulepath, so stale files in /etc/... or /usr/share/... wouldn't matter
> (didn't test it though). That should get us around the "tgz only adds files"
> problem without any rm -rf.
>

So the described problem applies only to puppet facts and providers,
since those are loaded from every directory on the module path. Normal
puppet classes are less prone to conflicts because puppet uses the
first match it finds and stops looking.

> The above is somewhat of an orthogonal suggestion to the composable service
> approach, they would work well alongside, I think. (And +1 on
> "DeployArtifacts as composable service" as something worth investigating /
> implementing.)
>

-1 to more services. We take a Heat time penalty for each new
composable service we add and in this case I don't think this should
be a service itself.  I think for this case, it would be better suited
as a host prep task than a defined service.  Providing a way for users
to define external host prep tasks might make more sense.

Thanks,
-Alex

> Jirka
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] DeployArtifacts considered...complicated?

2018-06-19 Thread Jiří Stránský

On 19.6.2018 16:29, Lars Kellogg-Stedman wrote:

On Tue, Jun 19, 2018 at 02:18:38PM +0100, Steven Hardy wrote:

Is this the same issue Carlos is trying to fix via
https://review.openstack.org/#/c/494517/ ?


That solves part of the problem, but it's not a complete solution.
In particular, it doesn't solve the problem that bit me: if you're
changing puppet providers (e.g., replacing
provider/keystone_config/ini_setting.rb with
provider/keystone_config/openstackconfig.rb), you still have the old
provider sitting around causing problems because unpacking a tarball
only *adds* files.


Yeah I think we've never seen this because normally the
/etc/puppet/modules tarball overwrites the symlink, effectively giving
you a new tree (the first time round at least).


But it doesn't, and that's the unexpected problem: if you replace the
/etc/puppet/modules/keystone symlink with a directory, then
/usr/share/openstack-puppet/modules/keystone is still there, and while
the manifests won't be used, the contents of the lib/ directory will
still be active.


Probably we could add something to the script to enable a forced
cleanup each update:

https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/deploy-artifacts.sh#L9


We could:

(a) unpack the replacement puppet modules into a temporary location,
   then

(b) for each module, rm -rf the target directory and then copy it into
   place

But! This would require deploy_artifacts.sh to know that it was
unpacking puppet modules rather than a generic tarball.


This would have to be optional, so we could add something like a
DeployArtifactsCleanupDirs parameter perhaps?


If we went with the above, sure.


One more thought which just occurred to me - we could add support for
a git checkout/pull to the script?


Reiterating our conversation in #tripleo, I think rather than adding a
bunch of specific functionality to the DeployArtifacts feature, it
would make more sense to add the ability to include some sort of
user-defined pre/post tasks, either as shell scripts or as ansible
playbooks or something.

On the other hand, I like your suggestion of just ditching
DeployArtifacts for a new composable service that defines
host_prep_tasks (or re-implementing DeployArtifacts as a composable
service), so I'm going to look at that as a possible alternative to
what I'm currently doing.



For the puppet modules specifically, we might also add another 
directory+mount into the docker-puppet container, which would be blank 
by default (unlike the existing, already populated /etc/puppet and 
/usr/share/openstack-puppet/modules). And we'd put that directory at the 
very start of modulepath. Then I *think* puppet would use a particular
module from that dir *only*, not merge the contents with the rest of 
modulepath, so stale files in /etc/... or /usr/share/... wouldn't matter 
(didn't test it though). That should get us around the "tgz only adds 
files" problem without any rm -rf.


The above is somewhat of an orthogonal suggestion to the composable 
service approach, they would work well alongside, I think. (And +1 on
"DeployArtifacts as composable service" as something worth investigating 
/ implementing.)


Jirka

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] DeployArtifacts considered...complicated?

2018-06-19 Thread Lars Kellogg-Stedman
On Tue, Jun 19, 2018 at 02:18:38PM +0100, Steven Hardy wrote:
> Is this the same issue Carlos is trying to fix via
> https://review.openstack.org/#/c/494517/ ?

That solves part of the problem, but it's not a complete solution.
In particular, it doesn't solve the problem that bit me: if you're
changing puppet providers (e.g., replacing
provider/keystone_config/ini_setting.rb with
provider/keystone_config/openstackconfig.rb), you still have the old
provider sitting around causing problems because unpacking a tarball
only *adds* files.

> Yeah I think we've never seen this because normally the
> /etc/puppet/modules tarball overwrites the symlink, effectively giving
> you a new tree (the first time round at least).

But it doesn't, and that's the unexpected problem: if you replace the
/etc/puppet/modules/keystone symlink with a directory, then
/usr/share/openstack-puppet/modules/keystone is still there, and while
the manifests won't be used, the contents of the lib/ directory will
still be active.

> Probably we could add something to the script to enable a forced
> cleanup each update:
> 
> https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/deploy-artifacts.sh#L9

We could:

(a) unpack the replacement puppet modules into a temporary location,
  then

(b) for each module, rm -rf the target directory and then copy it into
  place

But! This would require deploy_artifacts.sh to know that it was
unpacking puppet modules rather than a generic tarball.
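Sketched out with stand-in paths and a simulated tarball (this is not the actual deploy_artifacts.sh logic; the module and file names are made up for illustration):

```shell
# Stand-in for /etc/puppet/modules, with a stale module tree containing
# an old provider left over from a previous deploy:
dest=/tmp/demo-modules
mkdir -p "$dest/keystone/lib/puppet/provider"
touch "$dest/keystone/lib/puppet/provider/old.rb"

# Simulate step (a): the replacement modules unpacked into a temporary
# location (a real run would be: tar -xz -f modules.tar.gz -C "$src").
src=$(mktemp -d)
mkdir -p "$src/keystone/manifests"
touch "$src/keystone/manifests/init.pp"

# Step (b): for each unpacked module, remove the stale target tree first
# so old types/providers under lib/ don't survive, then copy the new one.
for module in "$src"/*; do
    name=$(basename "$module")
    rm -rf "$dest/$name"
    cp -a "$module" "$dest/$name"
done

# The stale lib/ tree from the old module is now gone.
ls "$dest/keystone"
```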

> This would have to be optional, so we could add something like a
> DeployArtifactsCleanupDirs parameter perhaps?

If we went with the above, sure.

> One more thought which just occurred to me - we could add support for
> a git checkout/pull to the script?

Reiterating our conversation in #tripleo, I think rather than adding a
bunch of specific functionality to the DeployArtifacts feature, it
would make more sense to add the ability to include some sort of
user-defined pre/post tasks, either as shell scripts or as ansible
playbooks or something.

On the other hand, I like your suggestion of just ditching
DeployArtifacts for a new composable service that defines
host_prep_tasks (or re-implementing DeployArtifacts as a composable
service), so I'm going to look at that as a possible alternative to
what I'm currently doing.



Re: [openstack-dev] DeployArtifacts considered...complicated?

2018-06-18 Thread Steven Hardy
On Sat, Jun 16, 2018 at 3:06 AM, Lars Kellogg-Stedman  wrote:
> I've been working on a series of patches to enable support for
> keystone federation in tripleo.  I've been making good use of the
> DeployArtifacts support for testing puppet modules...until today.
>
> I have some patches that teach puppet-keystone about multi-valued
> configuration options (like trusted_dashboard).  They replace the
> keystone_config provider (and corresponding type) with ones that work
> with the 'openstackconfig' provider (instead of ini_settings).  These
> work great when I test them in isolation, but whenever I ran them as
> part of an "overcloud deploy" I would get erroneous output.
>
> After digging through the various layers I found myself looking at
> docker-puppet.py [1], which ultimately ends up calling puppet like
> this:
>
>   puppet apply ... 
> --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules ...
>
> It's that --modulepath argument that's the culprit.  DeployArtifacts
> (when using the upload-puppet-modules script) works by replacing the
> symlinks in /etc/puppet/modules with the directories from your upload
> directory.  Even though the 'keystone' module in /etc/puppet/modules
> takes precedence when doing something like 'include ::keystone', *all
> the providers and types* in lib/puppet/* in
> /usr/share/openstack-puppet/modules will be activated.

Is this the same issue Carlos is trying to fix via
https://review.openstack.org/#/c/494517/ ?

I think there was some confusion on that patch around the underlying
problem, but I think your explanation here helps, e.g. the problem is
you can conceivably end up with a mix of old/new modules?

Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] DeployArtifacts considered...complicated?

2018-06-15 Thread Lars Kellogg-Stedman
I've been working on a series of patches to enable support for
keystone federation in tripleo.  I've been making good use of the
DeployArtifacts support for testing puppet modules...until today.

I have some patches that teach puppet-keystone about multi-valued
configuration options (like trusted_dashboard).  They replace the
keystone_config provider (and corresponding type) with ones that work
with the 'openstackconfig' provider (instead of ini_settings).  These
work great when I test them in isolation, but whenever I ran them as
part of an "overcloud deploy" I would get erroneous output.

After digging through the various layers I found myself looking at
docker-puppet.py [1], which ultimately ends up calling puppet like
this:

  puppet apply ... 
--modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules ...

It's that --modulepath argument that's the culprit.  DeployArtifacts
(when using the upload-puppet-modules script) works by replacing the
symlinks in /etc/puppet/modules with the directories from your upload
directory.  Even though the 'keystone' module in /etc/puppet/modules
takes precedence when doing something like 'include ::keystone', *all
the providers and types* in lib/puppet/* in
/usr/share/openstack-puppet/modules will be activated.

So in this case -- in which I've replaced the keystone_config
provider -- we get the old ini_settings provider, and I don't get the
output that I expect.

The quickest workaround is to generate the tarball by hand and map the
modules onto /usr/share/openstack-puppet/modules...

  tar -cz -f patches/puppet-modules.tar.gz \
--transform "s|patches/puppet-modules|usr/share/openstack-puppet/modules|" \
patches/puppet-modules

...and then use upload-swift-artifacts:

upload-swift-artifacts -f patches/puppet-modules.tar.gz

Done this way, I get the output I expect.
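One way to sanity-check the transform before uploading is to list the archive members; here a throwaway tree stands in for the real patches/puppet-modules directory:

```shell
# Throwaway tree standing in for the real patches/puppet-modules.
cd "$(mktemp -d)"
mkdir -p patches/puppet-modules/keystone/lib/puppet/provider
touch patches/puppet-modules/keystone/lib/puppet/provider/openstackconfig.rb

# Same command as above: --transform rewrites each member name so the
# archive unpacks onto /usr/share/openstack-puppet/modules.
tar -cz -f patches/puppet-modules.tar.gz \
  --transform "s|patches/puppet-modules|usr/share/openstack-puppet/modules|" \
  patches/puppet-modules

# Member names should now start with usr/share/openstack-puppet/modules/:
tar -tz -f patches/puppet-modules.tar.gz
```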

[1]: 
https://github.com/openstack/tripleo-heat-templates/blob/master/docker/docker-puppet.py
