On Sat, Apr 18, 2015 at 9:30 PM, Boris Pavlovic bo...@pavlovic.me wrote:
Code coverage is one of the most important metrics of overall code quality,
especially in the case of Python. It's quite important to ensure that code is
fully covered with well-written unit tests.
One of the nice things is the coverage job.
On 20 April 2015 at 07:40, Boris Pavlovic bo...@pavlovic.me wrote:
Dan,
IMHO, most of the test coverage we have for nova's neutronapi is more
than useless. It's so synthetic that it provides no regression
protection, and often requires significantly more work than the change
that is actually being added.
It'd be nice to have something like https://coveralls.io/features,
which AFAIK just reports back on pull requests (and doesn't try to
enforce much of anything, aka non-voting).
For example: https://github.com/aliles/funcsigs/pull/13
In general it'd be neat if we could more easily
Ian,
If you were thinking instead to provide coverage *tools* that were easy for
developers to use,
Hm, it seems like you missed the point. This gate job can be run locally just
like the unit tests: tox -e cover. That will point you to the missing lines
that are introduced by your patch.
As a dev, I would not
Morgan,
Thank you for your input. I improved coverage job in this patch:
https://review.openstack.org/#/c/175557/1
Now:
* It is based on missing lines and not coverage percentage.
* It shows nice messages and coverage diffs:
Allowed to introduce missing lines : 8
Missing lines in master
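Such a missing-lines comparison could be sketched roughly like this (the function and the sample data are hypothetical illustrations, not the actual job code from the review above):

```python
# Hypothetical sketch: flag only the uncovered lines a patch introduces,
# rather than comparing a single overall coverage percentage.

def new_missing_lines(master, patch):
    """Return {filename: sorted new missing lines} introduced by the patch.

    `master` and `patch` map each file name to the set of line numbers
    that coverage reported as never executed in that run.
    """
    diff = {}
    for fname, missing in patch.items():
        introduced = missing - master.get(fname, set())
        if introduced:
            diff[fname] = sorted(introduced)
    return diff

# Example: the patch leaves lines 12 and 13 of api.py uncovered;
# line 40 was already uncovered on master, so it is not counted.
master_report = {"api.py": {40}, "utils.py": {7}}
patch_report = {"api.py": {12, 13, 40}, "utils.py": set()}
print(new_missing_lines(master_report, patch_report))
# → {'api.py': [12, 13]}
```

The job could then vote -1 only when the count of newly introduced missing lines exceeds the allowed threshold (8 in the example output above).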
:14 PM, gordon chung g...@live.ca wrote:
Date: Mon, 20 Apr 2015 07:13:31 -0400
From: s...@dague.net
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all][code quality] Voting coverage job (-1
if coverage get worse after patch)
Dan,
IMHO, most of the test coverage we have for nova's neutronapi is more
than useless. It's so synthetic that it provides no regression
protection, and often requires significantly more work than the change
that is actually being added. It's a huge maintenance burden with very
little value,
Well, I think there are very few cases where *less* coverage is better.
On 04/18/2015 09:30 PM, Boris Pavlovic wrote:
Let's not mix up the bad unit tests in Nova with the fact that code should
be fully covered by well-written unit tests.
I'm not using bad tests in nova to justify not having coverage testing.
I'm saying that the argument that more coverage is always better has
some real-life counterexamples.
On 20/04/15 18:01, Clint Byrum wrote:
Excerpts from Boris Pavlovic's message of 2015-04-18 18:30:02 -0700:
On 09:30 Apr 20, Jay Pipes wrote:
On 04/20/2015 07:13 AM, Sean Dague wrote:
Clint,
Anyway, interesting thoughts from everyone. I have to agree with those
who say this isn't reliable enough to make it vote. Non-voting would be
interesting, though, if it gave a clear score difference and a diff of
the two coverage reports. I think this is more useful as an automated
This is an interesting idea, but just a note on implementation:
It is absolutely possible to reduce the % of coverage without losing any
coverage of the code base (or even while gaining some). This can occur if
deprecated code is removed and no new unit tests are added. The overall % of
code covered by tests can decrease as a result.
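A hypothetical arithmetic example of that effect: delete fully covered deprecated code and the overall percentage drops, even though not a single uncovered line was added.

```python
# Hypothetical numbers: removing 200 deprecated lines that were all
# covered lowers the overall percentage, with no new uncovered lines.

def coverage_pct(covered, total):
    return 100.0 * covered / total

before = coverage_pct(900, 1000)       # 90.0% (the same 100 lines uncovered)
after = coverage_pct(900 - 200, 800)   # 87.5% (still those 100 lines uncovered)
print(before, after)
# → 90.0 87.5
```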
Morgan,
Good catch. This can be easily fixed if we add a special tag to the commit
message, e.g. #no-coverage-check.
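A minimal sketch of that opt-out, assuming the job receives the commit message as a string (the tag is the one proposed above; the helper name and sample messages are hypothetical):

```python
# Hypothetical sketch: skip the voting coverage check when the commit
# message carries the opt-out tag.

SKIP_TAG = "#no-coverage-check"

def should_check_coverage(commit_message):
    """Return False when the author explicitly opted out via the tag."""
    return SKIP_TAG not in commit_message

print(should_check_coverage("Add a new scenario"))                    # → True
print(should_check_coverage("Drop dead code\n\n#no-coverage-check"))  # → False
```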
Best regards,
Boris Pavlovic
On Sun, Apr 19, 2015 at 9:33 AM, Morgan Fainberg morgan.fainb...@gmail.com
wrote:
Hi stackers,
Code coverage is one of the most important metrics of overall code quality,
especially in the case of Python. It's quite important to ensure that code is
fully covered with well-written unit tests.
One of the nice things is the coverage job.
In Rally we are running it against every check