Re: [openstack-dev] [Fuel][library] CI gate for regressions detection in deployment data

2016-03-31 Thread Bogdan Dobrelya
It is time for an update!
The previous idea with the committed state and automatic cross-repo
merge hooks in zuul seems too complex to implement. So, the "CI gate for
blah blah" now magically becomes a manual helper tool for
reviewers/developers; see the docs update [0], [1].

You may start using it right now, as described in the docs. Hopefully,
it will help to visualize data changes for complex patches better.

[0] https://review.openstack.org/#/c/299912/
[1] http://goo.gl/Pj3lNf
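
For illustration, the helper essentially boils down to diffing the committed
catalog templates (fetched from fuel-noop-fixtures) against the ones
regenerated with the patch under review. A minimal sketch with assumed
directory names, not the actual tool from [0]:

    #!/usr/bin/env ruby
    # Show what a patch changes in the deployment data: compare two
    # directories of pre-generated catalog templates file by file.
    require 'find'

    committed = ARGV[0] || 'fuel-noop-fixtures/catalogs'  # committed state
    proposed  = ARGV[1] || 'build/catalogs'               # regenerated with the patch

    def templates(dir)
      list = []
      Find.find(dir) { |path| list << path.sub("#{dir}/", '') if File.file?(path) }
      list.sort
    end

    old_files = templates(committed)
    new_files = templates(proposed)

    (old_files - new_files).each { |f| puts "REMOVED: #{f}" }
    (new_files - old_files).each { |f| puts "ADDED:   #{f}" }
    (old_files & new_files).each do |f|
      a, b = File.join(committed, f), File.join(proposed, f)
      next if File.read(a) == File.read(b)
      puts "CHANGED: #{f}"
      system('diff', '-u', a, b)  # a plain unified diff is enough for review
    end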

> On 01.12.2015 11:28, Aleksandr Didenko wrote:
>> Hi,
>> 
>>> pregenerated catalogs for the Noop tests to become the very first
>>> committed state in the data regression process has to be put in the
>>> *separate repo*
>> 
>> +1 to that, we can put this new repo into .fixtures.yml
>> 
>>> note, we could as well move the tests/noop/astute.yaml/ there
>> 
>> +1 here too, astute.yaml files are basically configuration fixtures, we
>> can put them into .fixtures.yml as well
> 
> I found a better (and easier for patch authors) way to use the data
> regression checks. The originally suggested workflow was:
> 
> 1.
> "The check should be done for every modular component (aka deployment
> task). Data generated in the noop catalog run for all classes and
> defines of a given deployment task should be verified against its
> "acknowledged" (committed) state."
> 
> This part remains the same with the only comment that the astute.yaml
> fixtures of deployment cases should be fetched from the
> fuel-noop-fixtures repo. And the committed state for generated catalogs
> should be
> stored there as well.
> 
> 2.
> "And fail the test gate, if changes has been found, like new parameter
> with a defined value, removed a parameter, changed a parameter's value."
> 
> This should be changed as following:
> - the data checks gate should be just a non voting helper for reviewers
> and patch authors. The only its task would be to show inducted data
> changes in a pretty and fast view to help accept/update/reject a patch
> on review.
> - the data checks gate job should fetch the committed data state from
> the fuel-noop-fixtures repo and run regressions check with the patch
> under review checked out on fuel-library repo.
> - the Noop tests gate should be changed to fetch the astute.yaml
> fixtures from the fuel-noop-fixtures repo in order to run noop tests as
> usual.
> 
> 3.
> "In order to remove a regression, a patch author will have to add (and
> reviewers should acknowledge) detected changes in the committed state of
> the deployment data. This may be done manually, with a tool like [3] or
> by a pre-commit hook, or even at the CI side!"
> 
> Instead, the patch authors would not need to do anything additional. Once
> accepted with wf+1, the patch on review should be merged with a pre-commit
> zuul hook (is it possible?). The hook should just regenerate catalogs with
> the changes introduced by the patch and update the committed state of
> data in the fuel-noop-fixtures repo. After that, the patch may be safely
> merged to the fuel-library and everything will be up to date with the
> committed data state.
> 
> 4.
> "The regression check should show the diff between committed state and a
> new state proposed in a patch. Changed state should be *reviewed* and
> accepted with a patch, to became a committed one. So the deployment data
> will evolve with *only* approved changes. And those changes would be
> very easy to be discovered for each patch under review process!"
> 
> So this part would work even better now, with no additional actions
> required from the review process sides.
> 
>> 
>> Regards,
>> Alex
>> 
>> 
>> On Mon, Nov 30, 2015 at 1:03 PM, Bogdan Dobrelya wrote:
>> 
>> On 20.11.2015 17:41, Bogdan Dobrelya wrote:
>> >> Hi,
>> >>
>> >> let me try to rephrase this a bit and Bogdan will correct me if
>> I'm wrong
>> >> or missing something.
>> >>
>> >> We have a set of top-scope manifests (called Fuel puppet tasks)
>> that we use
>> >> for OpenStack deployment. We execute those tasks with "puppet
>> apply". Each
>> >> task supposed to bring target system into some desired state, so
>> puppet
>> >> compiles a catalog and applies it. So basically, puppet catalog =
>> desired
>> >> system state.
>> >>
>> >> So we can compile* catalogs for all top-scope manifests in master
>> branch
>> >> and store those compiled* catalogs in fuel-library repo. Then for
>> each
>> >> proposed patch CI will compare new catalogs with stored ones and
>> print out
>> >> the difference if any. This will pretty much show what is going to be
>> >> changed in system configuration by proposed patch.
>> >>
>> >> We were discussing such checks before several times, iirc, but we
>> did not
>> >> have right tools to implement such thing before. Well, now we do
>> :) I think
>> >> it could be quite useful even in non-voting mode.
>>   

Re: [openstack-dev] [Fuel][library] CI gate for regressions detection in deployment data

2015-12-22 Thread Aleksandr Didenko
Hi,

btw, right now we have 33 outdated astute.yaml fixtures for the noop rspec
tests [0] - and this number is based on a single parameter, the
network_metadata['vips'] hash, so the actual number of outdated fixtures
could be bigger. So I've registered a new BP [1] to create a script that will
generate up-to-date fixtures on demand and/or periodically.
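
For reference, the kind of check behind that number takes only a few lines of
Ruby, assuming network_metadata is a top-level key in each fixture and the
fixtures still live under tests/noop/astute.yaml/:

    # Flag astute.yaml fixtures that lack the network_metadata['vips'] hash.
    require 'yaml'

    fixtures_dir = ARGV[0] || 'tests/noop/astute.yaml'
    outdated = Dir.glob(File.join(fixtures_dir, '*.yaml')).reject do |path|
      data = YAML.load_file(path) || {}
      (data['network_metadata'] || {}).key?('vips')
    end

    puts "#{outdated.size} outdated fixture(s):"
    outdated.each { |path| puts "  #{File.basename(path)}" }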

Regards,
Alex

[0] http://paste.openstack.org/show/482508/
[1]
https://blueprints.launchpad.net/fuel/+spec/deployment-dryrun-fixtures-generator


On Mon, Dec 7, 2015 at 11:51 AM, Bogdan Dobrelya wrote:

> On 02.12.2015 17:03, Bogdan Dobrelya wrote:
> > On 01.12.2015 11:28, Aleksandr Didenko wrote:
> >> Hi,
> >>
> >>> pregenerated catalogs for the Noop tests to become the very first
> >>> committed state in the data regression process has to be put in the
> >>> *separate repo*
> >>
> >> +1 to that, we can put this new repo into .fixtures.yml
> >>
> >>> note, we could as well move the tests/noop/astute.yaml/ there
> >>
> >> +1 here too, astute.yaml files are basically configuration fixtures, we
> >> can put them into .fixtures.yml as well
>
> Folks, the patch to create the fuel-noop-fixtures [0] is in trouble.
> I'm not sure I've answered Andreas's questions correctly:
>
> - Would it be OK to keep Noop tests fixtures for fuel-library as a
> separate Fuel-related repo but *not* as a part of the Fuel project?
>
> - Should we require the contribution license agreement for fixtures
> which are only used by tests?
>
> [0] https://review.openstack.org/252992
>
> >
> > I found a better (and easier for patch authors) way to use the data
> > regression checks. The originally suggested workflow was:
> >
> > 1.
> > "The check should be done for every modular component (aka deployment
> > task). Data generated in the noop catalog run for all classes and
> > defines of a given deployment task should be verified against its
> > "acknowledged" (committed) state."
> >
> > This part remains the same with the only comment that the astute.yaml
> > fixtures of deployment cases should be fetched from the
> > fuel-noop-fixtures repo. And the committed state for generated catalogs
> > should be
> > stored there as well.
> >
> > 2.
> > "And fail the test gate, if changes has been found, like new parameter
> > with a defined value, removed a parameter, changed a parameter's value."
> >
> > This should be changed as following:
> > - the data checks gate should be just a non voting helper for reviewers
> > and patch authors. The only its task would be to show inducted data
> > changes in a pretty and fast view to help accept/update/reject a patch
> > on review.
> > - the data checks gate job should fetch the committed data state from
> > the fuel-noop-fixtures repo and run regressions check with the patch
> > under review checked out on fuel-library repo.
> > - the Noop tests gate should be changed to fetch the astute.yaml
> > fixtures from the fuel-noop-fixtures repo in order to run noop tests as
> > usual.
> >
> > 3.
> > "In order to remove a regression, a patch author will have to add (and
> > reviewers should acknowledge) detected changes in the committed state of
> > the deployment data. This may be done manually, with a tool like [3] or
> > by a pre-commit hook, or even at the CI side!"
> >
> > Instead, the patch authors would not need to do anything additional. Once
> > accepted with wf+1, the patch on review should be merged with a pre-commit
> > zuul hook (is it possible?). The hook should just regenerate catalogs with
> > the changes introduced by the patch and update the committed state of
> > data in the fuel-noop-fixtures repo. After that, the patch may be safely
> > merged to the fuel-library and everything will be up to date with the
> > committed data state.
> >
> > 4.
> > "The regression check should show the diff between committed state and a
> > new state proposed in a patch. Changed state should be *reviewed* and
> > accepted with a patch, to became a committed one. So the deployment data
> > will evolve with *only* approved changes. And those changes would be
> > very easy to be discovered for each patch under review process!"
> >
> > So this part would work even better now, with no additional actions
> > required from the review process sides.
> >
> >>
> >> Regards,
> >> Alex
> >>
> >>
> >> On Mon, Nov 30, 2015 at 1:03 PM, Bogdan Dobrelya <bdobre...@mirantis.com> wrote:
> >>
> >> On 20.11.2015 17:41, Bogdan Dobrelya wrote:
> >> >> Hi,
> >> >>
> >> >> let me try to rephrase this a bit and Bogdan will correct me if
> >> I'm wrong
> >> >> or missing something.
> >> >>
> >> >> We have a set of top-scope manifests (called Fuel puppet tasks)
> >> that we use
> >> >> for OpenStack deployment. We execute those tasks with "puppet
> >> apply". Each
> >> >> task supposed to bring target system into some desired state, so
> >> puppet
> >> >> compiles a catalog and applies it. So 

Re: [openstack-dev] [Fuel][library] CI gate for regressions detection in deployment data

2015-12-07 Thread Bogdan Dobrelya
On 02.12.2015 17:03, Bogdan Dobrelya wrote:
> On 01.12.2015 11:28, Aleksandr Didenko wrote:
>> Hi,
>>
>>> pregenerated catalogs for the Noop tests to become the very first
>>> committed state in the data regression process has to be put in the
>>> *separate repo*
>>
>> +1 to that, we can put this new repo into .fixtures.yml
>>
>>> note, we could as well move the tests/noop/astute.yaml/ there
>>
>> +1 here too, astute.yaml files are basically configuration fixtures, we
>> can put them into .fixtures.yml as well

Folks, the patch to create the fuel-noop-fixtures [0] is in trouble.
I'm not sure I've answered Andreas's questions correctly:

- Would it be OK to keep Noop tests fixtures for fuel-library as a
separate Fuel-related repo but *not* as a part of the Fuel project?

- Should we require the contribution license agreement for fixtures
which are only used by tests?

[0] https://review.openstack.org/252992

> 
> I found a better (and easier for patch authors) way to use the data
> regression checks. The originally suggested workflow was:
> 
> 1.
> "The check should be done for every modular component (aka deployment
> task). Data generated in the noop catalog run for all classes and
> defines of a given deployment task should be verified against its
> "acknowledged" (committed) state."
> 
> This part remains the same with the only comment that the astute.yaml
> fixtures of deployment cases should be fetched from the
> fuel-noop-fixtures repo. And the committed state for generated catalogs
> should be
> stored there as well.
> 
> 2.
> "And fail the test gate, if changes has been found, like new parameter
> with a defined value, removed a parameter, changed a parameter's value."
> 
> This should be changed as following:
> - the data checks gate should be just a non voting helper for reviewers
> and patch authors. The only its task would be to show inducted data
> changes in a pretty and fast view to help accept/update/reject a patch
> on review.
> - the data checks gate job should fetch the committed data state from
> the fuel-noop-fixtures repo and run regressions check with the patch
> under review checked out on fuel-library repo.
> - the Noop tests gate should be changed to fetch the astute.yaml
> fixtures from the fuel-noop-fixtures repo in order to run noop tests as
> usual.
> 
> 3.
> "In order to remove a regression, a patch author will have to add (and
> reviewers should acknowledge) detected changes in the committed state of
> the deployment data. This may be done manually, with a tool like [3] or
> by a pre-commit hook, or even at the CI side!"
> 
> Instead, the patch authors would not need to do anything additional. Once
> accepted with wf+1, the patch on review should be merged with a pre-commit
> zuul hook (is it possible?). The hook should just regenerate catalogs with
> the changes introduced by the patch and update the committed state of
> data in the fuel-noop-fixtures repo. After that, the patch may be safely
> merged to the fuel-library and everything will be up to date with the
> committed data state.
> 
> 4.
> "The regression check should show the diff between committed state and a
> new state proposed in a patch. Changed state should be *reviewed* and
> accepted with a patch, to became a committed one. So the deployment data
> will evolve with *only* approved changes. And those changes would be
> very easy to be discovered for each patch under review process!"
> 
> So this part would work even better now, with no additional actions
> required from the review process sides.
> 
>>
>> Regards,
>> Alex
>>
>>
>> On Mon, Nov 30, 2015 at 1:03 PM, Bogdan Dobrelya wrote:
>>
>> On 20.11.2015 17:41, Bogdan Dobrelya wrote:
>> >> Hi,
>> >>
>> >> let me try to rephrase this a bit and Bogdan will correct me if
>> I'm wrong
>> >> or missing something.
>> >>
>> >> We have a set of top-scope manifests (called Fuel puppet tasks)
>> that we use
>> >> for OpenStack deployment. We execute those tasks with "puppet
>> apply". Each
>> >> task supposed to bring target system into some desired state, so
>> puppet
>> >> compiles a catalog and applies it. So basically, puppet catalog =
>> desired
>> >> system state.
>> >>
>> >> So we can compile* catalogs for all top-scope manifests in master
>> branch
>> >> and store those compiled* catalogs in fuel-library repo. Then for
>> each
>> >> proposed patch CI will compare new catalogs with stored ones and
>> print out
>> >> the difference if any. This will pretty much show what is going to be
>> >> changed in system configuration by proposed patch.
>> >>
>> >> We were discussing such checks before several times, iirc, but we
>> did not
>> >> have right tools to implement such thing before. Well, now we do
>> :) I think
>> >> it could be quite useful even in non-voting mode.
>> >>
>> >> * By 

Re: [openstack-dev] [Fuel][library] CI gate for regressions detection in deployment data

2015-12-02 Thread Bogdan Dobrelya
On 01.12.2015 11:28, Aleksandr Didenko wrote:
> Hi,
> 
>> pregenerated catalogs for the Noop tests to become the very first
>> committed state in the data regression process has to be put in the
>> *separate repo*
> 
> +1 to that, we can put this new repo into .fixtures.yml
> 
>> note, we could as well move the tests/noop/astute.yaml/ there
> 
> +1 here too, astute.yaml files are basically configuration fixtures, we
> can put them into .fixtures.yml as well

I found a better (and easier for patch authors) way to use the data
regression checks. The originally suggested workflow was:

1.
"The check should be done for every modular component (aka deployment
task). Data generated in the noop catalog run for all classes and
defines of a given deployment task should be verified against its
"acknowledged" (committed) state."

This part remains the same, with the only comment that the astute.yaml
fixtures of deployment cases should be fetched from the
fuel-noop-fixtures repo. And the committed state for generated catalogs
should be stored there as well.

2.
"And fail the test gate, if changes has been found, like new parameter
with a defined value, removed a parameter, changed a parameter's value."

This should be changed as following:
- the data checks gate should be just a non voting helper for reviewers
and patch authors. The only its task would be to show inducted data
changes in a pretty and fast view to help accept/update/reject a patch
on review.
- the data checks gate job should fetch the committed data state from
the fuel-noop-fixtures repo and run regressions check with the patch
under review checked out on fuel-library repo.
- the Noop tests gate should be changed to fetch the astute.yaml
fixtures from the fuel-noop-fixtures repo in order to run noop tests as
usual.

3.
"In order to remove a regression, a patch author will have to add (and
reviewers should acknowledge) detected changes in the committed state of
the deployment data. This may be done manually, with a tool like [3] or
by a pre-commit hook, or even at the CI side!"

Instead, the patch authors would not need to do anything additional. Once
accepted with wf+1, the patch on review should be merged with a pre-commit
zuul hook (is it possible?). The hook should just regenerate catalogs with
the changes introduced by the patch and update the committed state of
data in the fuel-noop-fixtures repo. After that, the patch may be safely
merged to the fuel-library and everything will be up to date with the
committed data state.
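
A sketch of what such a hook could do, assuming the rebuild mode of
fuel_noop_tests.rb shown later in this thread (-q -b) and a fuel-noop-fixtures
working copy checked out next to fuel-library where the regenerated templates
would land; whether zuul can run this is exactly the open question above:

    def run!(*cmd)
      system(*cmd) or abort("command failed: #{cmd.join(' ')}")
    end

    # Regenerate all catalog templates with the approved patch applied.
    Dir.chdir('fuel-library') { run!('./utils/jenkins/fuel_noop_tests.rb', '-q', '-b') }

    # Record the new committed data state in fuel-noop-fixtures.
    Dir.chdir('fuel-noop-fixtures') do
      run!('git', 'add', '-A')
      unless system('git', 'diff', '--cached', '--quiet')  # anything changed?
        run!('git', 'commit', '-m', 'Update committed deployment data state')
      end
    end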

4.
"The regression check should show the diff between committed state and a
new state proposed in a patch. Changed state should be *reviewed* and
accepted with a patch, to became a committed one. So the deployment data
will evolve with *only* approved changes. And those changes would be
very easy to be discovered for each patch under review process!"

So this part would work even better now, with no additional actions
required from the review process sides.

> 
> Regards,
> Alex
> 
> 
> On Mon, Nov 30, 2015 at 1:03 PM, Bogdan Dobrelya wrote:
> 
> On 20.11.2015 17:41, Bogdan Dobrelya wrote:
> >> Hi,
> >>
> >> let me try to rephrase this a bit and Bogdan will correct me if
> I'm wrong
> >> or missing something.
> >>
> >> We have a set of top-scope manifests (called Fuel puppet tasks)
> that we use
> >> for OpenStack deployment. We execute those tasks with "puppet
> apply". Each
> >> task supposed to bring target system into some desired state, so
> puppet
> >> compiles a catalog and applies it. So basically, puppet catalog =
> desired
> >> system state.
> >>
> >> So we can compile* catalogs for all top-scope manifests in master
> branch
> >> and store those compiled* catalogs in fuel-library repo. Then for
> each
> >> proposed patch CI will compare new catalogs with stored ones and
> print out
> >> the difference if any. This will pretty much show what is going to be
> >> changed in system configuration by proposed patch.
> >>
> >> We were discussing such checks before several times, iirc, but we
> did not
> >> have right tools to implement such thing before. Well, now we do
> :) I think
> >> it could be quite useful even in non-voting mode.
> >>
> >> * By saying compiled catalogs I don't mean actual/real puppet
> catalogs, I
> >> mean sorted lists of all classes/resources with all parameters
> that we find
> >> during puppet-rspec tests in our noop test framework, something like
> >> standard puppet-rspec coverage. See example [0] for networks.pp
> task [1].
> >>
> >> Regards,
> >> Alex
> >>
> >> [0] http://paste.openstack.org/show/477839/
> >> [1]
> >>
> 
> https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular/openstack-network/networks.pp
> >
> > 

Re: [openstack-dev] [Fuel][library] CI gate for regressions detection in deployment data

2015-12-01 Thread Bogdan Dobrelya
On 30.11.2015 13:03, Bogdan Dobrelya wrote:
> On 20.11.2015 17:41, Bogdan Dobrelya wrote:
>>> Hi,
>>>
>>> let me try to rephrase this a bit and Bogdan will correct me if I'm wrong
>>> or missing something.
>>>
>>> We have a set of top-scope manifests (called Fuel puppet tasks) that we use
>>> for OpenStack deployment. We execute those tasks with "puppet apply". Each
>>> task supposed to bring target system into some desired state, so puppet
>>> compiles a catalog and applies it. So basically, puppet catalog = desired
>>> system state.
>>>
>>> So we can compile* catalogs for all top-scope manifests in master branch
>>> and store those compiled* catalogs in fuel-library repo. Then for each
>>> proposed patch CI will compare new catalogs with stored ones and print out
>>> the difference if any. This will pretty much show what is going to be
>>> changed in system configuration by proposed patch.
>>>
>>> We were discussing such checks before several times, iirc, but we did not
>>> have right tools to implement such thing before. Well, now we do :) I think
>>> it could be quite useful even in non-voting mode.
>>>
>>> * By saying compiled catalogs I don't mean actual/real puppet catalogs, I
>>> mean sorted lists of all classes/resources with all parameters that we find
>>> during puppet-rspec tests in our noop test framework, something like
>>> standard puppet-rspec coverage. See example [0] for networks.pp task [1].
>>>
>>> Regards,
>>> Alex
>>>
>>> [0] http://paste.openstack.org/show/477839/
>>> [1] 
>>> https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular/openstack-network/networks.pp
>>
>> Thank you, Alex.
>> Yes, the composition layer is a top-scope manifests, known as a Fuel
>> library modular tasks [0].
>>
>> The "deployment data checks", is nothing more than comparing the
>> committed vs changed states of fixtures [1] of puppet catalogs for known
>> deployment paths under test with rspecs written for each modular task [2].
>>
>> And the *current status* is:
>> - the script for data layer checks now implemented [3]
>> - how-to is being documented here [4]
>> - a fix to make catalogs compilation idempotent submitted [5]
> 
> The status update:
> - the issue [0] is the data regression checks blocker and is only the
> Noop tests specific. It has been reworked to not use custom facts [1].
> New uuid will be still generated each time in the catalog, but the
> augeas ensures it will be processed in idempotent way. Let's make this
> change [2] to the upstream puppet-nova as well please.
> 
> [0] https://bugs.launchpad.net/fuel/+bug/1517915
> [1] https://review.openstack.org/251314
> [2] https://review.openstack.org/131710
> 
> - pregenerated catalogs for the Noop tests to become the very first
> committed state in the data regression process has to be put in the
> *separate repo*. Otherwise, the stackalytics would go mad as that would
> be a 600k-liner patch to an OpenStack project, which is the Fuel-library
> now :)
> 
> So, I'm planning to use the separate repo for the templates. Note, we
> could as well move the tests/noop/astute.yaml/ there. Thoughts?
> 
>> - and there is my WIP branch [6] with the initial committed state of
>> deploy data pre-generated. So, you can checkout, make any test changes
>> to manifests and run the data check (see the README [4]). It works for
>> me, there is no issues with idempotent re-checks of a clean committed
>> state or tests failing when unexpected.
>>
>> So the plan is to implement this noop tests extension as a non-voting CI
>> gate after I make an example workflow update for developers to the
>> Fuel wiki. Thoughts?

Folks, here is another example patch [0] with 4k lines of pure fixtures.
That is why we should not keep astute.yaml fixtures (as well as
pregenerated catalogs for the data regression checks being discussed in
this topic) in the main fuel-library repo.

Instead, all of the fixtures and such must be pulled in from an external
repo, like openstack/fuel-noop-fixtures (which perhaps stays outside of the
Big Tent project list), at the rake spec_prep stage of the noop tests.

[0] https://review.openstack.org/246358
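
A minimal Rakefile sketch of that spec_prep idea (the clone URL, the
destination path and the lack of any branch handling are assumptions, not the
final implementation):

    # Rakefile fragment: pull the fixtures repo before running the noop specs.
    task :spec_prep do
      repo = 'https://git.openstack.org/openstack/fuel-noop-fixtures'
      dest = 'tests/noop/fuel-noop-fixtures'
      sh "git clone #{repo} #{dest}" unless File.directory?(dest)
    end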

>>
>> [0]
>> https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular
>> [1]
>> https://github.com/openstack/fuel-library/tree/master/tests/noop/astute.yaml
>> [2] https://github.com/openstack/fuel-library/tree/master/tests/noop/spec
>> [3] https://review.openstack.org/240015
>> [4]
>> https://github.com/openstack/fuel-library/blob/master/tests/noop/README.rst
>> [5] https://review.openstack.org/247989
>> [6] https://github.com/bogdando/fuel-library-1/commits/data_checks
>>
>>
> 
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [Fuel][library] CI gate for regressions detection in deployment data

2015-12-01 Thread Aleksandr Didenko
Hi,

> pregenerated catalogs for the Noop tests to become the very first
> committed state in the data regression process has to be put in the
> *separate repo*

+1 to that, we can put this new repo into .fixtures.yml

> note, we could as well move the tests/noop/astute.yaml/ there

+1 here too, astute.yaml files are basically configuration fixtures, we can
put them into .fixtures.yml as well

Regards,
Alex


On Mon, Nov 30, 2015 at 1:03 PM, Bogdan Dobrelya wrote:

> On 20.11.2015 17:41, Bogdan Dobrelya wrote:
> >> Hi,
> >>
> >> let me try to rephrase this a bit and Bogdan will correct me if I'm
> wrong
> >> or missing something.
> >>
> >> We have a set of top-scope manifests (called Fuel puppet tasks) that we
> use
> >> for OpenStack deployment. We execute those tasks with "puppet apply".
> Each
> >> task supposed to bring target system into some desired state, so puppet
> >> compiles a catalog and applies it. So basically, puppet catalog =
> desired
> >> system state.
> >>
> >> So we can compile* catalogs for all top-scope manifests in master branch
> >> and store those compiled* catalogs in fuel-library repo. Then for each
> >> proposed patch CI will compare new catalogs with stored ones and print
> out
> >> the difference if any. This will pretty much show what is going to be
> >> changed in system configuration by proposed patch.
> >>
> >> We were discussing such checks before several times, iirc, but we did
> not
> >> have right tools to implement such thing before. Well, now we do :) I
> think
> >> it could be quite useful even in non-voting mode.
> >>
> >> * By saying compiled catalogs I don't mean actual/real puppet catalogs,
> I
> >> mean sorted lists of all classes/resources with all parameters that we
> find
> >> during puppet-rspec tests in our noop test framework, something like
> >> standard puppet-rspec coverage. See example [0] for networks.pp task
> [1].
> >>
> >> Regards,
> >> Alex
> >>
> >> [0] http://paste.openstack.org/show/477839/
> >> [1]
> >>
> https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular/openstack-network/networks.pp
> >
> > Thank you, Alex.
> > Yes, the composition layer is a top-scope manifests, known as a Fuel
> > library modular tasks [0].
> >
> > The "deployment data checks", is nothing more than comparing the
> > committed vs changed states of fixtures [1] of puppet catalogs for known
> > deployment paths under test with rspecs written for each modular task
> [2].
> >
> > And the *current status* is:
> > - the script for data layer checks now implemented [3]
> > - how-to is being documented here [4]
> > - a fix to make catalogs compilation idempotent submitted [5]
>
> The status update:
> - the issue [0] is the data regression checks blocker and is only the
> Noop tests specific. It has been reworked to not use custom facts [1].
> New uuid will be still generated each time in the catalog, but the
> augeas ensures it will be processed in idempotent way. Let's make this
> change [2] to the upstream puppet-nova as well please.
>
> [0] https://bugs.launchpad.net/fuel/+bug/1517915
> [1] https://review.openstack.org/251314
> [2] https://review.openstack.org/131710
>
> - pregenerated catalogs for the Noop tests to become the very first
> committed state in the data regression process has to be put in the
> *separate repo*. Otherwise, the stackalytics would go mad as that would
> be a 600k-liner patch to an OpenStack project, which is the Fuel-library
> now :)
>
> So, I'm planning to use the separate repo for the templates. Note, we
> could as well move the tests/noop/astute.yaml/ there. Thoughts?
>
> > - and there is my WIP branch [6] with the initial committed state of
> > deploy data pre-generated. So, you can checkout, make any test changes
> > to manifests and run the data check (see the README [4]). It works for
> > me, there is no issues with idempotent re-checks of a clean committed
> > state or tests failing when unexpected.
> >
> > So the plan is to implement this noop tests extension as a non-voting CI
> > gate after I make an example workflow update for developers to the
> > Fuel wiki. Thoughts?
> >
> > [0]
> >
> https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular
> > [1]
> >
> https://github.com/openstack/fuel-library/tree/master/tests/noop/astute.yaml
> > [2]
> https://github.com/openstack/fuel-library/tree/master/tests/noop/spec
> > [3] https://review.openstack.org/240015
> > [4]
> >
> https://github.com/openstack/fuel-library/blob/master/tests/noop/README.rst
> > [5] https://review.openstack.org/247989
> > [6] https://github.com/bogdando/fuel-library-1/commits/data_checks
> >
> >
>
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 

Re: [openstack-dev] [Fuel][library] CI gate for regressions detection in deployment data

2015-11-30 Thread Bogdan Dobrelya
On 20.11.2015 17:41, Bogdan Dobrelya wrote:
>> Hi,
>>
>> let me try to rephrase this a bit and Bogdan will correct me if I'm wrong
>> or missing something.
>>
>> We have a set of top-scope manifests (called Fuel puppet tasks) that we use
>> for OpenStack deployment. We execute those tasks with "puppet apply". Each
>> task supposed to bring target system into some desired state, so puppet
>> compiles a catalog and applies it. So basically, puppet catalog = desired
>> system state.
>>
>> So we can compile* catalogs for all top-scope manifests in master branch
>> and store those compiled* catalogs in fuel-library repo. Then for each
>> proposed patch CI will compare new catalogs with stored ones and print out
>> the difference if any. This will pretty much show what is going to be
>> changed in system configuration by proposed patch.
>>
>> We were discussing such checks before several times, iirc, but we did not
>> have right tools to implement such thing before. Well, now we do :) I think
>> it could be quite useful even in non-voting mode.
>>
>> * By saying compiled catalogs I don't mean actual/real puppet catalogs, I
>> mean sorted lists of all classes/resources with all parameters that we find
>> during puppet-rspec tests in our noop test framework, something like
>> standard puppet-rspec coverage. See example [0] for networks.pp task [1].
>>
>> Regards,
>> Alex
>>
>> [0] http://paste.openstack.org/show/477839/
>> [1] 
>> https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular/openstack-network/networks.pp
> 
> Thank you, Alex.
> Yes, the composition layer is a top-scope manifests, known as a Fuel
> library modular tasks [0].
> 
> The "deployment data checks", is nothing more than comparing the
> committed vs changed states of fixtures [1] of puppet catalogs for known
> deployment paths under test with rspecs written for each modular task [2].
> 
> And the *current status* is:
> - the script for data layer checks now implemented [3]
> - how-to is being documented here [4]
> - a fix to make catalogs compilation idempotent submitted [5]

The status update:
- the issue [0] is the data regression checks blocker and is specific to the
Noop tests only. It has been reworked to not use custom facts [1].
A new uuid will still be generated each time in the catalog, but the
augeas resource ensures it will be processed in an idempotent way. Let's make
this change [2] to the upstream puppet-nova as well, please.

[0] https://bugs.launchpad.net/fuel/+bug/1517915
[1] https://review.openstack.org/251314
[2] https://review.openstack.org/131710

- the pregenerated catalogs for the Noop tests, which are to become the very
first committed state in the data regression process, have to be put in a
*separate repo*. Otherwise, stackalytics would go mad, as that would
be a 600k-line patch to an OpenStack project, which the fuel-library
is now :)

So, I'm planning to use the separate repo for the templates. Note, we
could as well move the tests/noop/astute.yaml/ there. Thoughts?

> - and there is my WIP branch [6] with the initial committed state of
> deploy data pre-generated. So, you can checkout, make any test changes
> to manifests and run the data check (see the README [4]). It works for
> me, there is no issues with idempotent re-checks of a clean committed
> state or tests failing when unexpected.
> 
> So the plan is to implement this noop tests extension as a non-voting CI
> gate after I make an example workflow update for developers to the
> Fuel wiki. Thoughts?
> 
> [0]
> https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular
> [1]
> https://github.com/openstack/fuel-library/tree/master/tests/noop/astute.yaml
> [2] https://github.com/openstack/fuel-library/tree/master/tests/noop/spec
> [3] https://review.openstack.org/240015
> [4]
> https://github.com/openstack/fuel-library/blob/master/tests/noop/README.rst
> [5] https://review.openstack.org/247989
> [6] https://github.com/bogdando/fuel-library-1/commits/data_checks
> 
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][library] CI gate for regressions detection in deployment data

2015-11-20 Thread Bogdan Dobrelya
> Hi,
> 
> let me try to rephrase this a bit and Bogdan will correct me if I'm wrong
> or missing something.
> 
> We have a set of top-scope manifests (called Fuel puppet tasks) that we use
> for OpenStack deployment. We execute those tasks with "puppet apply". Each
> task supposed to bring target system into some desired state, so puppet
> compiles a catalog and applies it. So basically, puppet catalog = desired
> system state.
> 
> So we can compile* catalogs for all top-scope manifests in master branch
> and store those compiled* catalogs in fuel-library repo. Then for each
> proposed patch CI will compare new catalogs with stored ones and print out
> the difference if any. This will pretty much show what is going to be
> changed in system configuration by proposed patch.
> 
> We were discussing such checks before several times, iirc, but we did not
> have right tools to implement such thing before. Well, now we do :) I think
> it could be quite useful even in non-voting mode.
> 
> * By saying compiled catalogs I don't mean actual/real puppet catalogs, I
> mean sorted lists of all classes/resources with all parameters that we find
> during puppet-rspec tests in our noop test framework, something like
> standard puppet-rspec coverage. See example [0] for networks.pp task [1].
> 
> Regards,
> Alex
> 
> [0] http://paste.openstack.org/show/477839/
> [1] 
> https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular/openstack-network/networks.pp

Thank you, Alex.
Yes, the composition layer is the top-scope manifests, known as the Fuel
library modular tasks [0].

The "deployment data checks", is nothing more than comparing the
committed vs changed states of fixtures [1] of puppet catalogs for known
deployment paths under test with rspecs written for each modular task [2].

And the *current status* is:
- the script for the data layer checks is now implemented [3]
- a how-to is being documented here [4]
- a fix to make catalog compilation idempotent has been submitted [5]
- and there is my WIP branch [6] with the initial committed state of the
deploy data pre-generated. So, you can check it out, make any test changes
to manifests and run the data check (see the README [4]). It works for
me; there are no issues with idempotent re-checks of a clean committed
state or with tests failing unexpectedly.

So the plan is to implement this noop tests extension as a non-voting CI
gate after I make an example workflow update for developers to the
Fuel wiki. Thoughts?
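
The comparison at the heart of these checks can be expressed as an ordinary
rspec shared example; the template paths below are only illustrative, not the
layout used by the framework:

    # A sketch: compare the committed template for a task with the one just
    # regenerated by the noop catalog run for the same task.
    shared_examples 'data-regression check' do |task_name|
      it 'keeps the committed deployment data for the task' do
        committed = File.read("committed/#{task_name}.txt")
        generated = File.read("generated/#{task_name}.txt")
        expect(generated).to eq committed
      end
    end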

[0]
https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular
[1]
https://github.com/openstack/fuel-library/tree/master/tests/noop/astute.yaml
[2] https://github.com/openstack/fuel-library/tree/master/tests/noop/spec
[3] https://review.openstack.org/240015
[4]
https://github.com/openstack/fuel-library/blob/master/tests/noop/README.rst
[5] https://review.openstack.org/247989
[6] https://github.com/bogdando/fuel-library-1/commits/data_checks


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][library] CI gate for regressions detection in deployment data

2015-11-03 Thread Aleksandr Didenko
Hi,

let me try to rephrase this a bit and Bogdan will correct me if I'm wrong
or missing something.

We have a set of top-scope manifests (called Fuel puppet tasks) that we use
for OpenStack deployment. We execute those tasks with "puppet apply". Each
task is supposed to bring the target system into some desired state, so puppet
compiles a catalog and applies it. So basically, puppet catalog = desired
system state.

So we can compile* catalogs for all top-scope manifests in master branch
and store those compiled* catalogs in fuel-library repo. Then for each
proposed patch CI will compare new catalogs with stored ones and print out
the difference if any. This will pretty much show what is going to be
changed in system configuration by proposed patch.

We have discussed such checks several times before, iirc, but we did not
have the right tools to implement such a thing before. Well, now we do :) I think
it could be quite useful even in non-voting mode.

* By saying compiled catalogs I don't mean actual/real puppet catalogs, I
mean sorted lists of all classes/resources with all parameters that we find
during puppet-rspec tests in our noop test framework, something like
standard puppet-rspec coverage. See example [0] for networks.pp task [1].
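
As a rough sketch, such a sorted dump can be produced from within an
rspec-puppet example along these lines (this assumes rspec-puppet's catalogue
helper and Puppet::Resource#to_hash; the noop framework has its own
implementation):

    # Turn a compiled catalog into a stable, diff-friendly text representation.
    def catalog_dump(catalogue)
      catalogue.resources.map do |res|
        params = res.to_hash.sort_by { |k, _| k.to_s }
                    .map { |k, v| "  #{k} => #{v.inspect}" }
        (["#{res.type}[#{res.title}]"] + params).join("\n")
      end.sort.join("\n")
    end

    # e.g. inside a spec example:
    #   File.write('networks_catalog.txt', catalog_dump(catalogue))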

Regards,
Alex

[0] http://paste.openstack.org/show/477839/
[1]
https://github.com/openstack/fuel-library/blob/master/deployment/puppet/osnailyfacter/modular/openstack-network/networks.pp


On Mon, Nov 2, 2015 at 5:35 PM, Bogdan Dobrelya wrote:

> Here is a docs update [0] for the patch [1] - which is rather a
> framework - being discussed here.
> Note, that the tool fuel_noop_tests.rb Dmitry Ilyin wrote became a Noop
> testing framework, which is Fuel specific. But the same approach may be
> used for any set of puppet modules and a composition layer manifests
> with a dataset of deployment parameters you may want it to be tracked
> against potential regressions.
>
> I believe we should think about how that Noop testing framework (and
> the deployment data checks under discussion as well) might benefit the
> puppet community.
>
> [1] https://review.openstack.org/240901
> [2] https://review.openstack.org/240015
>
> On 29.10.2015 15:24, Bogdan Dobrelya wrote:
> > Hello.
> > There are few types of a deployment regressions possible. When changing
> > a module version to be used from upstream (or internal module repo), for
> > example from Liberty to Mitaka. Or when changing the composition layer
> > (modular tasks in Fuel). Specifically, adding/removing/changing classes
> > and a class parameters.
> >
> > An example regression for swift deployment data [0]. Something was
> > changed unnoticed by existing noop tests and as a result
> > the swift data became being stored in root partition.
> >
> > Suggested per-commit based regressions detection [1] for deployment data
> > assumes to automatically detect if a class in a noop catalog run has
> > gained or lost a parameter or if it has been updated to another value by
> > a patch under test. Later, this check could even replace existing noop
> > tests, everything will be checked automatically, unless every deployment
> > scenario is covered by a corresponding template, which are represented
> > as YAML files [2] in Fuel.
> > Note: The tool [3] can help to get all deployment cases (-Y) and all
> > deployment tasks (-S) as well.
> >
> > I propose to review the patch [1], understand how it works (see tl;dr
> > section below) and to start using it ASAP. The earlier we commit the
> > "initial" data layer state, less regressions would pop up.
> >
> > (tl;dr)
> > The check should be done for every modular component (aka deployment
> > task). Data generated in the noop catalog run for all classes and
> > defines of a given deployment task should be verified against its
> > "acknowledged" (committed) state.
> > And fail the test gate, if changes have been found, like a new parameter
> > with a defined value, a removed parameter, or a changed parameter's value.
> >
> > In order to remove a regression, a patch author will have to add (and
> > reviewers should acknowledge) detected changes in the committed state of
> > the deployment data. This may be done manually, with a tool like [3] or
> > by a pre-commit hook, or even at the CI side!
> > The regression check should show the diff between committed state and a
> > new state proposed in a patch. Changed state should be *reviewed* and
> > accepted with a patch, to became a committed one. So the deployment data
> > will evolve with *only* approved changes. And those changes would be
> > very easy to be discovered for each patch under review process!
> > No more regressions, everyone happy.
> >
> > Examples:
> >
> > - A. A patch author removed the mpm_module parameter from the
> > composition layer (apache modular task). The test should fail with a
> >
> > Diff:
> >   @@ -90,7 +90,7 @@
> >  manage_user=> 'true',
> >  max_keepalive_requests => '100',
> 

Re: [openstack-dev] [Fuel][library] CI gate for regressions detection in deployment data

2015-11-02 Thread Bogdan Dobrelya
Here is a docs update [1] for the patch [2] - which is rather a
framework - being discussed here.
Note that the tool fuel_noop_tests.rb, which Dmitry Ilyin wrote, became a Noop
testing framework, which is Fuel specific. But the same approach may be
used for any set of puppet modules and composition layer manifests
with a dataset of deployment parameters that you may want to track
against potential regressions.

I believe we should think about how that Noop testing framework (and
the deployment data checks under discussion as well) might benefit the
puppet community.

[1] https://review.openstack.org/240901
[2] https://review.openstack.org/240015
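
The per-class check described in the quoted proposal below boils down to
comparing, for every class, the committed parameter hash with the one from a
fresh noop catalog run. A minimal sketch with illustrative data mirroring the
apache examples further down (the real check works on the generated catalog
templates, not on hand-written hashes):

    # Report parameters a class gained, lost or changed between the committed
    # state and the state proposed by a patch.
    def parameter_regressions(committed, proposed)
      report = []
      (committed.keys - proposed.keys).each { |k| report << "lost    #{k} (was #{committed[k].inspect})" }
      (proposed.keys - committed.keys).each { |k| report << "gained  #{k} => #{proposed[k].inspect}" }
      (committed.keys & proposed.keys).each do |k|
        next if committed[k] == proposed[k]
        report << "changed #{k}: #{committed[k].inspect} -> #{proposed[k].inspect}"
      end
      report
    end

    committed = { 'mpm_module' => 'false', 'manage_user' => 'true' }
    proposed  = { 'mpm_module' => 'prefork', 'manage_user' => 'true', 'mpm_mode' => '123' }
    puts parameter_regressions(committed, proposed)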

On 29.10.2015 15:24, Bogdan Dobrelya wrote:
> Hello.
> There are a few types of deployment regressions possible. When changing
> a module version to be used from upstream (or an internal module repo), for
> example from Liberty to Mitaka. Or when changing the composition layer
> (modular tasks in Fuel), specifically adding/removing/changing classes
> and class parameters.
> 
> An example regression for the swift deployment data is [0]. Something was
> changed unnoticed by the existing noop tests and, as a result,
> the swift data ended up being stored in the root partition.
> 
> The suggested per-commit regression detection [1] for deployment data
> is meant to automatically detect whether a class in a noop catalog run has
> gained or lost a parameter, or whether it has been updated to another value by
> a patch under test. Later, this check could even replace the existing noop
> tests; everything will be checked automatically, provided every deployment
> scenario is covered by a corresponding template; these are represented
> as YAML files [2] in Fuel.
> Note: The tool [3] can help to get all deployment cases (-Y) and all
> deployment tasks (-S) as well.
> 
> I propose to review the patch [1], understand how it works (see the tl;dr
> section below) and start using it ASAP. The earlier we commit the
> "initial" data layer state, the fewer regressions will pop up.
> 
> (tl;dr)
> The check should be done for every modular component (aka deployment
> task). Data generated in the noop catalog run for all classes and
> defines of a given deployment task should be verified against its
> "acknowledged" (committed) state.
> And fail the test gate, if changes have been found, like a new parameter
> with a defined value, a removed parameter, or a changed parameter's value.
> 
> In order to remove a regression, a patch author will have to add (and
> reviewers should acknowledge) detected changes in the committed state of
> the deployment data. This may be done manually, with a tool like [3] or
> by a pre-commit hook, or even at the CI side!
> The regression check should show the diff between the committed state and a
> new state proposed in a patch. The changed state should be *reviewed* and
> accepted with a patch, to become a committed one. So the deployment data
> will evolve with *only* approved changes. And those changes would be
> very easy to discover for each patch during the review process!
> No more regressions, everyone happy.
> 
> Examples:
> 
> - A. A patch author removed the mpm_module parameter from the
> composition layer (apache modular task). The test should fail with a
> 
> Diff:
>   @@ -90,7 +90,7 @@
>  manage_user=> 'true',
>  max_keepalive_requests => '100',
>  mod_dir=> '/etc/httpd/conf.d',
>   -  mpm_module => 'false',
>   +  mpm_module => 'prefork',
>  name   => 'Apache',
>  package_ensure => 'installed',
>  ports_file => '/etc/httpd/conf/ports.conf',
> 
> It illustrates that the mpm_module's committed value was 'false',
> but the new one is 'prefork', which likely comes from the apache class
> defaults.
> The solution:
> Follow the failed build link and look at the detected changes (a diff).
> Acknowledge the changes and include the rebuilt templates in the patch as a
> new revision. The tool [3] (use -h for help) example command:
> ./utils/jenkins/fuel_noop_tests.rb -q -b -s api-proxy/api-proxy_spec.rb
> 
> Or edit the committed templates manually and include data changes in the
> patch as well.
> 
> - B. An upstream module author added a new parameter mpm_mode with a
> default of '123'. The test should fail with a
> 
> Diff:
>@@ -90,6 +90,7 @@
>   manage_user=> 'true',
>   max_keepalive_requests => '100',
>   mod_dir=> '/etc/httpd/conf.d',
>+  mpm_mode   => '123',
>   mpm_module => 'false',
>   name   => 'Apache',
>   package_ensure => 'installed',
> 
> It illustrates that the composition layer is not consistent with the
> upstream module data schema, and that could be a potential regression in
> the deployment (a new parameter was added upstream and goes with its default,
> being ignored by the composition manifest).
> The solution is the