Re: [openstack-dev] [Fuel][puppet] CI gate for regressions detection in deployment data

2015-10-30 Thread bdobrelia
> Why do you use [puppet] tag?
> Is there anything related to Puppet OpenStack modules we should take
> care of?


There are several deployment automation tools leveraging the puppet-openstack
modules. Those are under the Big Tent, and each has its own composition layer
and deployment patterns with specific data flows. That is why I made this
discussion public and used the puppet tag as well. I believe the idea of
automated checks for the data layer applies to them too, and I’d like to hear
opinions on that from a broad audience.
What is the correct tag for deployment automation with Puppet?

-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando


[openstack-dev] [Fuel][puppet] CI gate for regressions detection in deployment data

2015-10-29 Thread Bogdan Dobrelya
Hello.
There are a few types of deployment regressions possible: when changing
the module version to be used from upstream (or an internal module
repo), for example from Liberty to Mitaka, or when changing the
composition layer (modular tasks in Fuel), specifically
adding/removing/changing classes and class parameters.

An example regression in the Swift deployment data is [0]: something was
changed unnoticed by the existing noop tests, and as a result the Swift
data ended up stored in the root partition.

The suggested per-commit regression detection [1] for deployment data
automatically detects whether a class in a noop catalog run has gained
or lost a parameter, or had a parameter updated to another value, by the
patch under test. Later, this check could even replace the existing noop
tests and everything would be checked automatically, provided that every
deployment scenario is covered by a corresponding template; templates
are represented as YAML files [2] in Fuel.
Note: the tool [3] can help to list all deployment cases (-Y) and all
deployment tasks (-S) as well.

I propose to review the patch [1], understand how it works (see the
tl;dr section below), and start using it ASAP. The earlier we commit the
"initial" data layer state, the fewer regressions will pop up.

(tl;dr)
The check should be done for every modular component (aka deployment
task). The data generated in the noop catalog run for all classes and
defines of a given deployment task should be verified against its
"acknowledged" (committed) state, and the test gate should fail if
changes are found, such as a new parameter with a defined value, a
removed parameter, or a changed parameter value.
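
For illustration, here is a minimal Ruby sketch of such a comparison
(not the actual implementation from [1]; the catalog API usage and the
committed template format here are assumptions):

  require 'yaml'

  # Collect the parameters of every class in a compiled (noop) catalog.
  def class_parameters(catalog)
    catalog.resources.select { |r| r.type == 'Class' }.map do |r|
      [r.title.to_s, r.to_hash.map { |k, v| [k.to_s, v] }.to_h]
    end.to_h
  end

  # Diff the catalog data against the committed YAML template and fail
  # the gate on any gained/lost/changed parameter.
  def check_deployment_data(catalog, committed_template)
    committed = YAML.load_file(committed_template)
    actual    = class_parameters(catalog)
    diffs = []
    (committed.keys | actual.keys).each do |klass|
      old_p = committed[klass] || {}
      new_p = actual[klass]    || {}
      (old_p.keys | new_p.keys).each do |param|
        next if old_p[param] == new_p[param]
        diffs << "#{klass}/#{param}: #{old_p[param].inspect} -> " \
                 "#{new_p[param].inspect}"
      end
    end
    raise "Deployment data changed:\n#{diffs.join("\n")}" unless diffs.empty?
  end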

In order to remove a regression, a patch author will have to add (and
reviewers should acknowledge) the detected changes in the committed
state of the deployment data. This may be done manually, with a tool
like [3], by a pre-commit hook, or even on the CI side!
The regression check should show the diff between the committed state
and the new state proposed in a patch. The changed state should be
*reviewed* and accepted with the patch to become the committed one. So
the deployment data will evolve with *only* approved changes, and those
changes will be very easy to discover for each patch under review!
No more regressions, everyone happy.

Examples:

- A. A patch author removed the mpm_module parameter from the
composition layer (the apache modular task). The test should fail with
a diff:
  @@ -90,7 +90,7 @@
     manage_user            => 'true',
     max_keepalive_requests => '100',
     mod_dir                => '/etc/httpd/conf.d',
  -  mpm_module             => 'false',
  +  mpm_module             => 'prefork',
     name                   => 'Apache',
     package_ensure         => 'installed',
     ports_file             => '/etc/httpd/conf/ports.conf',

This illustrates that mpm_module's committed value was 'false', but the
new one came in as 'prefork', likely from the apache class defaults.
The solution:
Follow the failed build link and look at the detected changes (a diff).
Acknowledge the changes and include the rebuilt templates in the patch
as a new revision. An example command for the tool [3] (use -h for help):
./utils/jenkins/fuel_noop_tests.rb -q -b -s api-proxy/api-proxy_spec.rb

Alternatively, edit the committed templates manually and include the
data changes in the patch as well.

- B. An upstream module author added a new parameter mpm_mode with a
default of '123'. The test should fail with a diff:
  @@ -90,6 +90,7 @@
     manage_user            => 'true',
     max_keepalive_requests => '100',
     mod_dir                => '/etc/httpd/conf.d',
  +  mpm_mode               => '123',
     mpm_module             => 'false',
     name                   => 'Apache',
     package_ensure         => 'installed',

This illustrates that the composition layer is not consistent with the
upstream module's data schema, which could be a potential regression in
deployment (a new parameter added upstream goes with its default while
being ignored by the composition manifest).
The solution is the same as for case A.

[0] https://bugs.launchpad.net/fuel/+bug/1508482
[1] https://review.openstack.org/240015
[2]
https://github.com/openstack/fuel-library/tree/master/tests/noop/astute.yaml
[3]
https://review.openstack.org/#/c/240015/7/utils/jenkins/fuel_noop_tests.rb

-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando



Re: [openstack-dev] [Fuel][puppet] CI gate for regressions detection in deployment data

2015-10-29 Thread Bogdan Dobrelya
On 29.10.2015 15:24, Bogdan Dobrelya wrote:
> [...]
> [0] https://bugs.launchpad.net/fuel/+bug/1508482
> [1] https://review.openstack.org/240015

Please use the 6th revision to catch the idea. Later revisions are
filled with tons of auto-generated templates representing the committed
state of the deployment data plane. This is still WIP...

We are considering whether we should check the data state for *all*
classes and defines of each deployment task, or only for some of the
classes. The latter would drastically decrease the amount of
auto-generated data templates, but the former would allow us to catch
more regressions.
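
For example, a hypothetical whitelist could limit which classes get
templates (the file name and helper below are illustrative only, not
part of [1]):

  require 'yaml'

  # Hypothetical reviewed whitelist of classes whose data we track.
  CLASS_WHITELIST = YAML.load_file('tests/noop/class_whitelist.yaml')

  # Keep only whitelisted classes when dumping/checking data templates.
  def classes_to_track(catalog)
    catalog.resources.select do |r|
      r.type == 'Class' && CLASS_WHITELIST.include?(r.title.to_s)
    end
  end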


Re: [openstack-dev] [Fuel][puppet] CI gate for regressions detection in deployment data

2015-10-29 Thread Matthew Mosesohn
Bogdan,

I don't want to maintain catalog resources 10 (or 20) times; that is an
incredible amount of work to keep up. There has to be a better way to
ensure we don't lose things. The approach I had in mind would have
covered these cases:
1 - track all hiera lookups and make sure we catch new/lost lookups
2 - ensure all classes called from top-level tasks are passed these
values from hiera

In this case we lost a hiera lookup. Such a test would cover that,
rather than comparing the resulting Puppet resources.
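
A rough sketch of that idea (the Hiera hook point here is an assumption
and would differ per Hiera/Puppet version): record the keys looked up
during the noop catalog run, then diff them against a committed list:

  require 'set'
  require 'hiera'

  $hiera_keys = Set.new

  # Record every key passed to Hiera#lookup during the noop catalog run.
  class Hiera
    alias_method :orig_lookup, :lookup
    def lookup(key, default, scope, order_override = nil,
               resolution_type = :priority)
      $hiera_keys << key.to_s
      orig_lookup(key, default, scope, order_override, resolution_type)
    end
  end

  # Fail if any committed lookup disappeared or a new one showed up.
  def check_hiera_lookups(committed_list_file)
    committed = Set.new(File.readlines(committed_list_file).map(&:strip))
    lost  = committed - $hiera_keys
    added = $hiera_keys - committed
    raise "Lost hiera lookups: #{lost.to_a}"  unless lost.empty?
    raise "New hiera lookups: #{added.to_a}" unless added.empty?
  end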

On Thu, Oct 29, 2015 at 5:39 PM, Bogdan Dobrelya wrote:

> On 29.10.2015 15:24, Bogdan Dobrelya wrote:
> > [...]

Re: [openstack-dev] [Fuel][puppet] CI gate for regressions detection in deployment data

2015-10-29 Thread Bogdan Dobrelya
> Bogdan,
> 
> I don't want to maintain catalog resources 10 (or 20) times. [...]

Matt,

This check should be automatic by design; there is nothing to maintain,
only changes to the data layer to approve or reject, if any are
discovered. It has nothing to do with the current noop test writing
patterns. If the data check proves to work well, we could even stop
using the custom noop tests, which must be composed manually for each
change.

> [...]

Re: [openstack-dev] [Fuel][puppet] CI gate for regressions detection in deployment data

2015-10-29 Thread Emilien Macchi
Why do you use [puppet] tag?
Is there anything related to Puppet OpenStack modules we should take
care of?

Good luck,

On 10/29/2015 11:24 PM, Bogdan Dobrelya wrote:
> [...]
