Re: [openstack-dev] [QA][infra][all] Measuring code coverage in integration tests

2016-10-06 Thread milanisko k
On Thu, Oct 6, 2016 at 4:41 PM Andrea Frittoli wrote:

> The difficulty with integration testing is that the services under test
> run in processes separated from the test one(s).
>
> There is no obvious / existing mechanism to collect coverage data in
> this case. Several cycles back there used to be a backdoor built into
> Nova to enable coverage data collection during integration testing,
> but it was removed long ago.
>
Andrea, AFAIK it is possible to create a hook like this [1]:

    import os
    from distutils import sysconfig

    # .pth files in site-packages are processed at interpreter startup,
    # and lines starting with "import" are executed, so this hooks
    # coverage into every Python process on the box.
    path = os.path.join(sysconfig.get_python_lib(), 'hack.pth')
    with open(path, 'w') as fd:
        fd.write("import coverage; coverage.process_startup()\n")

to always enable instrumenting of any Python process.
This can get fancier with white/black-listing of the paths to consider.
The project I referenced at the beginning of this thread used to enable
tracing this way (with some filtering applied).

Cheers,
milan

[1]
https://coverage.readthedocs.io/en/coverage-4.2/subprocess.html#configuring-python-for-sub-process-coverage


> andrea
>
> On Thu, Sep 29, 2016 at 12:12 PM Assaf Muller  wrote:
>
> On Thu, Sep 29, 2016 at 5:27 AM, milanisko k  wrote:
>
>
>
> On Tue, Sep 27, 2016 at 8:12 PM Assaf Muller wrote:
>
> On Tue, Sep 27, 2016 at 2:05 PM, Assaf Muller  wrote:
>
>
>
> On Tue, Sep 27, 2016 at 12:18 PM, Timur Nurlygayanov <
> tnurlygaya...@mirantis.com> wrote:
>
> Hi milan,
>
> we have measured the test coverage for OpenStack components with the
> coverage.py tool [1]. It is a very easy tool and it allows measuring
> coverage by lines of code, etc. (several metrics are available).
>
> [1] https://coverage.readthedocs.io/en/coverage-4.2/
>
>
> coverage also supports aggregating results from multiple runs, so you can
> measure results from combinations such as:
>
>
>
> 1) Unit tests
> 2) Functional tests
> 3) Integration tests
> 4) 1 + 2
> 5) 1 + 2 + 3
>
> To my eyes 3 and 4 make the most sense. Unit and functional tests are
> supposed to give you low level coverage, keeping in mind that 'functional
> tests' is an overloaded term and actually means something else in every
> community. Integration tests aren't about code coverage, they're about user
> facing flows, so it'd be interesting to measure coverage
> from integration tests,
>
>
> Sorry, replace integration with unit + functional.
>
>
> then comparing coverage coming from integration tests, and getting the set
> difference between the two: That's the area that needs more unit and
> functional tests.
>
>
> To reiterate:
>
> Run coverage from integration tests, let this be c
> Run coverage from unit and functional tests, let this be c'
>
> Let diff = c \ c'
>
> 'diff' is where you're missing unit and functional tests coverage.
>
>
> Assaf, the tool I linked is a monkey-patched coverage.py whose collector
> stores the stats in Redis --- it gives the same cumulative collection.
> Is there any interest/effort to collect coverage stats from selected jobs
> in CI, no matter the tool used?
>
>
> Some projects already collect coverage stats on their post-merge queue:
>
> http://logs.openstack.org/61/61af70a734b99e61e751cfb494ddc93a85eec394/post/nova-coverage-db-ubuntu-xenial/55210aa/
>
> It's invoked with 'tox -e cover', which you define in your project's
> tox.ini file; I imagine most projects, if not all, have it set up to
> gather coverage from a unit test run.
>
> On Tue, Sep 27, 2016 at 1:06 PM, Jordan Pittier <
> jordan.pitt...@scality.com> wrote:
>
> Hi,
>
> On Tue, Sep 27, 2016 at 11:43 AM, milanisko k  wrote:
>
> Dear Stackers,
> I'd like to gather some overview on the $Sub: is there some infrastructure
> in place to gather such stats? Are there any groups interested in it? Any
> plans to establish such infrastructure?
>
> I am working on such a tool with mixed results so far. Here's my approach
> taking let's say Nova as an example:
>
> 1) Print all the routes known to nova (available as a python-routes
> object:  nova.api.openstack.compute.APIRouterV21())
> 2) "Normalize" the Nova routes
> 3) Take the logs produced by Tempest during a tempest run (in
> logs/tempest.txt.gz). Grep for what looks like a Nova URL (based on port
> 8774)
> 4) "Normalize" the tested-by-tempest Nova routes.
> 5) Compare the two sets of routes
> 6) 
> 7) Profit !!
>
> So the hard part is obviously the normalizing of the URLs. I am currently
> using a ton of regexes :) That's not fun.
>
> I'll let you guys know if I have something to show.
>
> I think there's real interest on the topic (it comes up every year or so),
> but no definitive answer/tool.
>
> Cheers,
> Jordan
>
>
>
>
> 

Re: [openstack-dev] [QA][infra][all] Measuring code coverage in integration tests

2016-10-06 Thread Andrea Frittoli
The difficulty with integration testing is that the services under test run
in processes separated from the test one(s).

There is no obvious / existing mechanism to collect coverage data in this
case. Several cycles back there used to be a backdoor built into Nova to
enable coverage data collection during integration testing, but it was
removed long ago.

andrea

On Thu, Sep 29, 2016 at 12:12 PM Assaf Muller  wrote:

> On Thu, Sep 29, 2016 at 5:27 AM, milanisko k  wrote:
>
>
>
> On Tue, Sep 27, 2016 at 8:12 PM Assaf Muller wrote:
>
> On Tue, Sep 27, 2016 at 2:05 PM, Assaf Muller  wrote:
>
>
>
> On Tue, Sep 27, 2016 at 12:18 PM, Timur Nurlygayanov <
> tnurlygaya...@mirantis.com> wrote:
>
> Hi milan,
>
> we have measured the test coverage for OpenStack components with the
> coverage.py tool [1]. It is a very easy tool and it allows measuring
> coverage by lines of code, etc. (several metrics are available).
>
> [1] https://coverage.readthedocs.io/en/coverage-4.2/
>
>
> coverage also supports aggregating results from multiple runs, so you can
> measure results from combinations such as:
>
>
>
> 1) Unit tests
> 2) Functional tests
> 3) Integration tests
> 4) 1 + 2
> 5) 1 + 2 + 3
>
> To my eyes 3 and 4 make the most sense. Unit and functional tests are
> supposed to give you low level coverage, keeping in mind that 'functional
> tests' is an overloaded term and actually means something else in every
> community. Integration tests aren't about code coverage, they're about user
> facing flows, so it'd be interesting to measure coverage
> from integration tests,
>
>
> Sorry, replace integration with unit + functional.
>
>
> then comparing coverage coming from integration tests, and getting the set
> difference between the two: That's the area that needs more unit and
> functional tests.
>
>
> To reiterate:
>
> Run coverage from integration tests, let this be c
> Run coverage from unit and functional tests, let this be c'
>
> Let diff = c \ c'
>
> 'diff' is where you're missing unit and functional tests coverage.
>
>
> Assaf, the tool I linked is a monkey-patched coverage.py whose collector
> stores the stats in Redis --- it gives the same cumulative collection.
> Is there any interest/effort to collect coverage stats from selected jobs
> in CI, no matter the tool used?
>
>
> Some projects already collect coverage stats on their post-merge queue:
>
> http://logs.openstack.org/61/61af70a734b99e61e751cfb494ddc93a85eec394/post/nova-coverage-db-ubuntu-xenial/55210aa/
>
> It's invoked with 'tox -e cover', which you define in your project's
> tox.ini file; I imagine most projects, if not all, have it set up to
> gather coverage from a unit test run.
>
> On Tue, Sep 27, 2016 at 1:06 PM, Jordan Pittier <
> jordan.pitt...@scality.com> wrote:
>
> Hi,
>
> On Tue, Sep 27, 2016 at 11:43 AM, milanisko k  wrote:
>
> Dear Stackers,
> I'd like to gather some overview on the $Sub: is there some infrastructure
> in place to gather such stats? Are there any groups interested in it? Any
> plans to establish such infrastructure?
>
> I am working on such a tool with mixed results so far. Here's my approach
> taking let's say Nova as an example:
>
> 1) Print all the routes known to nova (available as a python-routes
> object:  nova.api.openstack.compute.APIRouterV21())
> 2) "Normalize" the Nova routes
> 3) Take the logs produced by Tempest during a tempest run (in
> logs/tempest.txt.gz). Grep for what looks like a Nova URL (based on port
> 8774)
> 4) "Normalize" the tested-by-tempest Nova routes.
> 5) Compare the two sets of routes
> 6) 
> 7) Profit !!
>
> So the hard part is obviously the normalizing of the URLs. I am currently
> using a ton of regexes :) That's not fun.
>
> I'll let you guys know if I have something to show.
>
> I think there's real interest on the topic (it comes up every year or so),
> but no definitive answer/tool.
>
> Cheers,
> Jordan
>
>
>
>
> 
>
>
>
>
> --
>
> Timur,
> Senior QA Manager
> OpenStack Projects
> Mirantis Inc
>
>
>

Re: [openstack-dev] [QA][infra][all] Measuring code coverage in integration tests

2016-09-29 Thread Assaf Muller
On Thu, Sep 29, 2016 at 5:27 AM, milanisko k  wrote:

>
>
> On Tue, Sep 27, 2016 at 8:12 PM Assaf Muller wrote:
>
>> On Tue, Sep 27, 2016 at 2:05 PM, Assaf Muller  wrote:
>>
>>>
>>>
>>> On Tue, Sep 27, 2016 at 12:18 PM, Timur Nurlygayanov <
>>> tnurlygaya...@mirantis.com> wrote:
>>>
 Hi milan,

 we have measured the test coverage for OpenStack components with the
 coverage.py tool [1]. It is a very easy tool and it allows measuring
 coverage by lines of code, etc. (several metrics are available).

 [1] https://coverage.readthedocs.io/en/coverage-4.2/

>>>
>>> coverage also supports aggregating results from multiple runs, so you
>>> can measure results from combinations such as:
>>>
>>>
>>
>>> 1) Unit tests
>>> 2) Functional tests
>>> 3) Integration tests
>>> 4) 1 + 2
>>> 5) 1 + 2 + 3
>>>
>>> To my eyes 3 and 4 make the most sense. Unit and functional tests are
>>> supposed to give you low level coverage, keeping in mind that 'functional
>>> tests' is an overloaded term and actually means something else in every
>>> community. Integration tests aren't about code coverage, they're about user
>>> facing flows, so it'd be interesting to measure coverage
>>> from integration tests,
>>>
>>
>> Sorry, replace integration with unit + functional.
>>
>>
>>> then comparing coverage coming from integration tests, and getting the
>>> set difference between the two: That's the area that needs more unit and
>>> functional tests.
>>>
>>
>> To reiterate:
>>
>> Run coverage from integration tests, let this be c
>> Run coverage from unit and functional tests, let this be c'
>>
>> Let diff = c \ c'
>>
>> 'diff' is where you're missing unit and functional tests coverage.
>>
>
> Assaf, the tool I linked is a monkey-patched coverage.py whose collector
> stores the stats in Redis --- it gives the same cumulative collection.
> Is there any interest/effort to collect coverage stats from selected jobs
> in CI, no matter the tool used?
>

Some projects already collect coverage stats on their post-merge queue:
http://logs.openstack.org/61/61af70a734b99e61e751cfb494ddc93a85eec394/post/nova-coverage-db-ubuntu-xenial/55210aa/

It's invoked with 'tox -e cover', which you define in your project's tox.ini
file; I imagine most projects, if not all, have it set up to gather coverage
from a unit test run.


>
>
>>
>>
>>>
>>>

 On Tue, Sep 27, 2016 at 1:06 PM, Jordan Pittier <
 jordan.pitt...@scality.com> wrote:

> Hi,
>
> On Tue, Sep 27, 2016 at 11:43 AM, milanisko k 
> wrote:
>
>> Dear Stackers,
>> I'd like to gather some overview on the $Sub: is there some
>> infrastructure in place to gather such stats? Are there any groups
>> interested in it? Any plans to establish such infrastructure?
>>
> I am working on such a tool with mixed results so far. Here's my
> approach taking let's say Nova as an example:
>
> 1) Print all the routes known to nova (available as a python-routes
> object:  nova.api.openstack.compute.APIRouterV21())
> 2) "Normalize" the Nova routes
> 3) Take the logs produced by Tempest during a tempest run (in
> logs/tempest.txt.gz). Grep for what looks like a Nova URL (based on port
> 8774)
> 4) "Normalize" the tested-by-tempest Nova routes.
> 5) Compare the two sets of routes
> 6) 
> 7) Profit !!
>
> So the hard part is obviously the normalizing of the URLs. I am
>>> currently using a ton of regexes :) That's not fun.
>
> I'll let you guys know if I have something to show.
>
> I think there's real interest on the topic (it comes up every year or
> so), but no definitive answer/tool.
>
> Cheers,
> Jordan
>
>
>
>
> 
> 
>
>


 --

 Timur,
 Senior QA Manager
 OpenStack Projects
 Mirantis Inc

 


>>> 

Re: [openstack-dev] [QA][infra][all] Measuring code coverage in integration tests

2016-09-29 Thread milanisko k
On Tue, Sep 27, 2016 at 8:12 PM Assaf Muller wrote:

> On Tue, Sep 27, 2016 at 2:05 PM, Assaf Muller  wrote:
>
>>
>>
>> On Tue, Sep 27, 2016 at 12:18 PM, Timur Nurlygayanov <
>> tnurlygaya...@mirantis.com> wrote:
>>
>>> Hi milan,
>>>
>>> we have measured the test coverage for OpenStack components with the
>>> coverage.py tool [1]. It is a very easy tool and it allows measuring
>>> coverage by lines of code, etc. (several metrics are available).
>>>
>>> [1] https://coverage.readthedocs.io/en/coverage-4.2/
>>>
>>
>> coverage also supports aggregating results from multiple runs, so you can
>> measure results from combinations such as:
>>
>>
>
>> 1) Unit tests
>> 2) Functional tests
>> 3) Integration tests
>> 4) 1 + 2
>> 5) 1 + 2 + 3
>>
>> To my eyes 3 and 4 make the most sense. Unit and functional tests are
>> supposed to give you low level coverage, keeping in mind that 'functional
>> tests' is an overloaded term and actually means something else in every
>> community. Integration tests aren't about code coverage, they're about user
>> facing flows, so it'd be interesting to measure coverage
>> from integration tests,
>>
>
> Sorry, replace integration with unit + functional.
>
>
>> then comparing coverage coming from integration tests, and getting the
>> set difference between the two: That's the area that needs more unit and
>> functional tests.
>>
>
> To reiterate:
>
> Run coverage from integration tests, let this be c
> Run coverage from unit and functional tests, let this be c'
>
> Let diff = c \ c'
>
> 'diff' is where you're missing unit and functional tests coverage.
>

Assaf, the tool I linked is a monkey-patched coverage.py whose collector
stores the stats in Redis --- it gives the same cumulative collection.
Is there any interest/effort to collect coverage stats from selected jobs
in CI, no matter the tool used?


>
>
>>
>>
>>>
>>> On Tue, Sep 27, 2016 at 1:06 PM, Jordan Pittier <
>>> jordan.pitt...@scality.com> wrote:
>>>
 Hi,

 On Tue, Sep 27, 2016 at 11:43 AM, milanisko k 
 wrote:

> Dear Stackers,
> I'd like to gather some overview on the $Sub: is there some
> infrastructure in place to gather such stats? Are there any groups
> interested in it? Any plans to establish such infrastructure?
>
 I am working on such a tool with mixed results so far. Here's my
 approach taking let's say Nova as an example:

 1) Print all the routes known to nova (available as a python-routes
 object:  nova.api.openstack.compute.APIRouterV21())
 2) "Normalize" the Nova routes
 3) Take the logs produced by Tempest during a tempest run (in
 logs/tempest.txt.gz). Grep for what looks like a Nova URL (based on port
 8774)
 4) "Normalize" the tested-by-tempest Nova routes.
 5) Compare the two sets of routes
 6) 
 7) Profit !!

 So the hard part is obviously the normalizing of the URLs. I am
 currently using a ton of regexes :) That's not fun.

 I'll let you guys know if I have something to show.

 I think there's real interest on the topic (it comes up every year or
 so), but no definitive answer/tool.

 Cheers,
 Jordan




 



>>>
>>>
>>> --
>>>
>>> Timur,
>>> Senior QA Manager
>>> OpenStack Projects
>>> Mirantis Inc
>>>
>>>
>>>
>>>


Re: [openstack-dev] [QA][infra][all] Measuring code coverage in integration tests

2016-09-29 Thread milanisko k
On Tue, Sep 27, 2016 at 6:21 PM Timur Nurlygayanov <
tnurlygaya...@mirantis.com> wrote:

> Hi milan,
>
> we have measured the test coverage for OpenStack components with the
> coverage.py tool [1]. It is a very easy tool and it allows measuring
> coverage by lines of code, etc. (several metrics are available).
>
> [1] https://coverage.readthedocs.io/en/coverage-4.2/
>
Timur, the project I linked was, besides an experimental AST-based stats
processor, a monkey-patch to the coverage.py module that allowed remote
code coverage collection through a Redis store. It would allow one to
instrument multiple running processes on a couple of nodes at the same
time while executing e.g. functional tests.
The main point of it was to create a service that one would run if they
wanted to collect the stats from a project; of course, everything would
get slowed down 100-fold.
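
Roughly the idea in a sketch (not the actual tool; the Redis host and key
layout are made up, and this stores executed lines rather than hit counts):

    import coverage
    import redis

    cov = coverage.Coverage()
    cov.start()
    # ... the instrumented process does its normal work ...
    cov.stop()

    # Push per-file line hits to a central Redis store; SADD unions
    # concurrent writers, so many processes on many nodes can report
    # into the same keys.
    store = redis.StrictRedis(host='coverage-collector.example.org')
    data = cov.get_data()
    for path in data.measured_files():
        lines = data.lines(path)
        if lines:
            store.sadd('coverage:%s' % path, *lines)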

I'm curious whether the OpenStack projects want to execute similar code
measurements on e.g. a daily basis with selected DSVM jobs to collect stats.



> On Tue, Sep 27, 2016 at 1:06 PM, Jordan Pittier <
> jordan.pitt...@scality.com> wrote:
>
>> Hi,
>>
>> On Tue, Sep 27, 2016 at 11:43 AM, milanisko k  wrote:
>>
>>> Dear Stackers,
>>> I'd like to gather some overview on the $Sub: is there some
>>> infrastructure in place to gather such stats? Are there any groups
>>> interested in it? Any plans to establish such infrastructure?
>>>
>> I am working on such a tool with mixed results so far. Here's my approach
>> taking let's say Nova as an example:
>>
>> 1) Print all the routes known to nova (available as a python-routes
>> object:  nova.api.openstack.compute.APIRouterV21())
>> 2) "Normalize" the Nova routes
>> 3) Take the logs produced by Tempest during a tempest run (in
>> logs/tempest.txt.gz). Grep for what looks like a Nova URL (based on port
>> 8774)
>> 4) "Normalize" the tested-by-tempest Nova routes.
>> 5) Compare the two sets of routes
>> 6) 
>> 7) Profit !!
>>
>> So the hard part is obviously the normalizing of the URLs. I am currently
>> using a ton of regexes :) That's not fun.
>>
I took a simpler approach that just collects the stats (code execution
hit counts) through coverage.py but with a "central" store; not sure that
would satisfy your use case.


> I'll let you guys know if I have something to show.
>>
>> I think there's real interest on the topic (it comes up every year or
>> so), but no definitive answer/tool.
>>
That wouldn't imply much interest :)


> Cheers,
>> Jordan
>>
>>
>>
>>
>> 
>>
>>
>
>
> --
>
> Timur,
> Senior QA Manager
> OpenStack Projects
> Mirantis Inc
>


Re: [openstack-dev] [QA][infra][all] Measuring code coverage in integration tests

2016-09-27 Thread Assaf Muller
On Tue, Sep 27, 2016 at 2:05 PM, Assaf Muller  wrote:

>
>
> On Tue, Sep 27, 2016 at 12:18 PM, Timur Nurlygayanov <
> tnurlygaya...@mirantis.com> wrote:
>
>> Hi milan,
>>
>> we have measured the test coverage for OpenStack components with the
>> coverage.py tool [1]. It is a very easy tool and it allows measuring
>> coverage by lines of code, etc. (several metrics are available).
>>
>> [1] https://coverage.readthedocs.io/en/coverage-4.2/
>>
>
> coverage also supports aggregating results from multiple runs, so you can
> measure results from combinations such as:
>
> 1) Unit tests
> 2) Functional tests
> 3) Integration tests
> 4) 1 + 2
> 5) 1 + 2 + 3
>
> To my eyes 3 and 4 make the most sense. Unit and functional tests are
> supposed to give you low level coverage, keeping in mind that 'functional
> tests' is an overloaded term and actually means something else in every
> community. Integration tests aren't about code coverage, they're about user
> facing flows, so it'd be interesting to measure coverage
> from integration tests,
>

Sorry, replace integration with unit + functional.


> then comparing coverage coming from integration tests, and getting the set
> difference between the two: That's the area that needs more unit and
> functional tests.
>

To reiterate:

Run coverage from integration tests, let this be c
Run coverage from unit and functional tests, let this be c'

Let diff = c \ c'

'diff' is where you're missing unit and functional tests coverage.
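
For instance, with the coverage.py data API (coverage 4.x names; the two
data-file paths are assumed to be the outputs of the respective runs), the
diff could be sketched as:

    from coverage import CoverageData

    c = CoverageData()
    c.read_file('.coverage.integration')            # integration run data
    c_prime = CoverageData()
    c_prime.read_file('.coverage.unit_functional')  # unit + functional data

    # c \ c': lines hit only by integration tests, i.e. the places
    # lacking unit/functional coverage.
    for path in c.measured_files():
        missing = set(c.lines(path) or []) - set(c_prime.lines(path) or [])
        if missing:
            print(path, sorted(missing))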


>
>
>>
>> On Tue, Sep 27, 2016 at 1:06 PM, Jordan Pittier <
>> jordan.pitt...@scality.com> wrote:
>>
>>> Hi,
>>>
>>> On Tue, Sep 27, 2016 at 11:43 AM, milanisko k 
>>> wrote:
>>>
 Dear Stackers,
 I'd like to gather some overview on the $Sub: is there some
 infrastructure in place to gather such stats? Are there any groups
 interested in it? Any plans to establish such infrastructure?

>>> I am working on such a tool with mixed results so far. Here's my
>>> approach taking let's say Nova as an example:
>>>
>>> 1) Print all the routes known to nova (available as a python-routes
>>> object:  nova.api.openstack.compute.APIRouterV21())
>>> 2) "Normalize" the Nova routes
>>> 3) Take the logs produced by Tempest during a tempest run (in
>>> logs/tempest.txt.gz). Grep for what looks like a Nova URL (based on port
>>> 8774)
>>> 4) "Normalize" the tested-by-tempest Nova routes.
>>> 5) Compare the two sets of routes
>>> 6) 
>>> 7) Profit !!
>>>
>>> So the hard part is obviously the normalizing of the URLs. I am
>>> currently using a ton of regexes :) That's not fun.
>>>
>>> I'll let you guys know if I have something to show.
>>>
>>> I think there's real interest on the topic (it comes up every year or
>>> so), but no definitive answer/tool.
>>>
>>> Cheers,
>>> Jordan
>>>
>>>
>>>
>>>
>>> 
>>> 
>>>
>>>
>>
>>
>> --
>>
>> Timur,
>> Senior QA Manager
>> OpenStack Projects
>> Mirantis Inc
>>
>>
>>
>


Re: [openstack-dev] [QA][infra][all] Measuring code coverage in integration tests

2016-09-27 Thread Assaf Muller
On Tue, Sep 27, 2016 at 12:18 PM, Timur Nurlygayanov <
tnurlygaya...@mirantis.com> wrote:

> Hi milan,
>
> we have measured the test coverage for OpenStack components with the
> coverage.py tool [1]. It is a very easy tool and it allows measuring
> coverage by lines of code, etc. (several metrics are available).
>
> [1] https://coverage.readthedocs.io/en/coverage-4.2/
>

coverage also supports aggregating results from multiple runs, so you can
measure results from combinations such as:

1) Unit tests
2) Functional tests
3) Integration tests
4) 1 + 2
5) 1 + 2 + 3

To my eyes 3 and 4 make the most sense. Unit and functional tests are
supposed to give you low level coverage, keeping in mind that 'functional
tests' is an overloaded term and actually means something else in every
community. Integration tests aren't about code coverage, they're about user
facing flows, so it'd be interesting to measure coverage
from integration tests, then comparing coverage coming from integration
tests, and getting the set difference between the two: That's the area that
needs more unit and functional tests.


>
> On Tue, Sep 27, 2016 at 1:06 PM, Jordan Pittier <
> jordan.pitt...@scality.com> wrote:
>
>> Hi,
>>
>> On Tue, Sep 27, 2016 at 11:43 AM, milanisko k  wrote:
>>
>>> Dear Stackers,
>>> I'd like to gather some overview on the $Sub: is there some
>>> infrastructure in place to gather such stats? Are there any groups
>>> interested in it? Any plans to establish such infrastructure?
>>>
>> I am working on such a tool with mixed results so far. Here's my approach
>> taking let's say Nova as an example:
>>
>> 1) Print all the routes known to nova (available as a python-routes
>> object:  nova.api.openstack.compute.APIRouterV21())
>> 2) "Normalize" the Nova routes
>> 3) Take the logs produced by Tempest during a tempest run (in
>> logs/tempest.txt.gz). Grep for what looks like a Nova URL (based on port
>> 8774)
>> 4) "Normalize" the tested-by-tempest Nova routes.
>> 5) Compare the two sets of routes
>> 6) 
>> 7) Profit !!
>>
>> So the hard part is obviously the normalizing of the URLs. I am currently
>> using a ton of regexes :) That's not fun.
>>
>> I'll let you guys know if I have something to show.
>>
>> I think there's real interest on the topic (it comes up every year or
>> so), but no definitive answer/tool.
>>
>> Cheers,
>> Jordan
>>
>>
>>
>>
>> 
>> 
>>
>>
>
>
> --
>
> Timur,
> Senior QA Manager
> OpenStack Projects
> Mirantis Inc
>
>
>


Re: [openstack-dev] [QA][infra][all] Measuring code coverage in integration tests

2016-09-27 Thread Timur Nurlygayanov
Hi milan,

we have measured the test coverage for OpenStack components with the
coverage.py tool [1]. It is a very easy tool and it allows measuring
coverage by lines of code, etc. (several metrics are available).

[1] https://coverage.readthedocs.io/en/coverage-4.2/

On Tue, Sep 27, 2016 at 1:06 PM, Jordan Pittier 
wrote:

> Hi,
>
> On Tue, Sep 27, 2016 at 11:43 AM, milanisko k  wrote:
>
>> Dear Stackers,
>> I'd like to gather some overview on the $Sub: is there some
>> infrastructure in place to gather such stats? Are there any groups
>> interested in it? Any plans to establish such infrastructure?
>>
> I am working on such a tool with mixed results so far. Here's my approach
> taking let's say Nova as an example:
>
> 1) Print all the routes known to nova (available as a python-routes
> object:  nova.api.openstack.compute.APIRouterV21())
> 2) "Normalize" the Nova routes
> 3) Take the logs produced by Tempest during a tempest run (in
> logs/tempest.txt.gz). Grep for what looks like a Nova URL (based on port
> 8774)
> 4) "Normalize" the tested-by-tempest Nova routes.
> 5) Compare the two sets of routes
> 6) 
> 7) Profit !!
>
> So the hard part is obviously the normalizing of the URLs. I am currently
> using a ton of regexes :) That's not fun.
>
> I'll let you guys know if I have something to show.
>
> I think there's real interest on the topic (it comes up every year or so),
> but no definitive answer/tool.
>
> Cheers,
> Jordan
>
>
>
>
> 
>
>


-- 

Timur,
Senior QA Manager
OpenStack Projects
Mirantis Inc


Re: [openstack-dev] [QA][infra][all] Measuring code coverage in integration tests

2016-09-27 Thread Jordan Pittier
Hi,

On Tue, Sep 27, 2016 at 11:43 AM, milanisko k  wrote:

> Dear Stackers,
> I'd like to gather some overview on the $Sub: is there some infrastructure
> in place to gather such stats? Are there any groups interested in it? Any
> plans to establish such infrastructure?
>
I am working on such a tool with mixed results so far. Here's my approach
taking let's say Nova as an example:

1) Print all the routes known to nova (available as a python-routes object:
 nova.api.openstack.compute.APIRouterV21())
2) "Normalize" the Nova routes
3) Take the logs produced by Tempest during a tempest run (in
logs/tempest.txt.gz). Grep for what looks like a Nova URL (based on port
8774)
4) "Normalize" the tested-by-tempest Nova routes.
5) Compare the two sets of routes
6) 
7) Profit !!

So the hard part is obviously the normalizing of the URLs. I am currently
using a ton of regexes :) That's not fun.
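
For illustration, the normalization in steps 2/4 could be sketched like
this (the patterns are illustrative, not the actual regexes):

    import re

    SUBS = [
        # project/tenant ids in Nova URLs are 32 hex chars
        (re.compile(r'/v2(\.1)?/[0-9a-f]{32}'), '/v2.1/{project_id}'),
        # generic resource UUIDs
        (re.compile(r'[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}'
                    r'-[0-9a-f]{4}-[0-9a-f]{12}'), '{id}'),
        # query strings don't affect routing
        (re.compile(r'\?.*$'), ''),
    ]

    def normalize(url):
        # Drop scheme/host/port, then collapse run-time identifiers so a
        # logged request path can be compared against a declared route.
        path = re.sub(r'^https?://[^/]+', '', url)
        for pattern, replacement in SUBS:
            path = pattern.sub(replacement, path)
        return path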

I'll let you guys know if I have something to show.

I think there's real interest on the topic (it comes up every year or so),
but no definitive answer/tool.

Cheers,
Jordan

