Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-11-12 Thread Michael Bright
+1 also.
I spent less than half the time on my first fix (so far) understanding the
problem, reproducing it, coding it and learning about the code review
system.

Much more than half the time was spent on reverse engineering existing
tests to be able to add new ones (which had to use features not used by the
existing tests) and asking for advice even on where to add the tests.

It would have been more efficient for everyone had some test examples been
proposed to me.



On 12 November 2013 03:34, Ed Leafe e...@openstack.org wrote:

 On Nov 11, 2013, at 6:42 PM, Vishvananda Ishaya vishvana...@gmail.com
 wrote:

  It also gives the submitter a specific example of a well-written test,
 which
  can be a faster way to learn than forcing them to get there via trial
 and error.

 +1. Implementing a policy that has as the end effect more knowledgeable
 contributors is a big win.


 -- Ed Leafe





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-11-12 Thread Michael Bright
To be clear, that was a +1 for Mark's suggestion:

 In cases like that, I'd be of a mind to go +2 "Awesome! Thanks for
 catching this! It would be great to have a unit test for this, but it's
 clear the current code is broken so I'm fine with merging the fix
 without a test." You could say it's now the reviewer's responsibility to
 merge a test, but if that requirement then turns off reviewers even
 reviewing such a patch, then that doesn't help either.


On 12 November 2013 11:29, Michael Bright mjbrigh...@gmail.com wrote:


 +1 also.
 I spent less than half the time on my first fix (so far) understanding the
 problem, reproducing it, coding it and learning about the code review
 system.

 Much more than half the time was spent on reverse engineering existing
 tests to be able to add new ones (which had to use features not used by the
 existing tests) and asking for advice even on where to add the tests.

 It would have been more efficient for everyone had some test examples been
 proposed to me.



 On 12 November 2013 03:34, Ed Leafe e...@openstack.org wrote:

 On Nov 11, 2013, at 6:42 PM, Vishvananda Ishaya vishvana...@gmail.com
 wrote:

  It also gives the submitter a specific example of a well-written test,
 which
  can be a faster way to learn than forcing them to get there via trial
 and error.

 +1. Implementing a policy that has as the end effect more knowledgeable
 contributors is a big win.


 -- Ed Leafe







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-11-11 Thread Vishvananda Ishaya

On Oct 31, 2013, at 6:56 AM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Mark McLoughlin's message of 2013-10-31 06:30:32 -0700:
 On Thu, 2013-10-31 at 15:37 +1300, Robert Collins wrote:
 This is a bit of a social norms thread
 
 I've been consistently asking for tests in reviews for a while now,
 and I get the occasional push-back. I think this falls into a few
 broad camps:
 
  A - there is no test suite at all, adding one is unreasonable
 B - this thing cannot be tested in this context (e.g. functional tests
 are defined in a different tree)
 C - this particular thing is very hard to test
 D - testing this won't offer benefit
 E - other things like this in the project don't have tests
 F - submitter doesn't know how to write tests
 G - submitter doesn't have time to write tests
 
 Nice breakdown.
 
  Now, of these, I think it's fine not to add tests in cases A, B, C in
 combination with D, and D.
 
 I don't think E, F or G are sufficient reasons to merge something
 without tests, when reviewers are asking for them. G in the special
 case that the project really wants the patch landed - but then I'd
 expect reviewers to not ask for tests or to volunteer that they might
 be optional.
 
 I totally agree with the sentiment but, especially when it's a newcomer
 to the project, I try to put myself in the shoes of the patch submitter
 and double-check whether what we're asking is reasonable.
 
 
 Even with a long time contributor, empathy is an important part of
 constructing reviews. We could make more robotic things that review for
 test coverage, but we haven't, because this is a gray area. The role of
 a reviewer isn't just get patches merged and stop defects. It is also to
 grow the other developers.
 
 For example, if someone shows up to Nova with their first OpenStack
 contribution, it fixes something which is unquestionably a bug - think
 typo like raise NotFund('foo') - and testing this code patch requires
 more than adding a simple new scenario to existing tests ...
 
 
 This goes back to my recent suggestion to help the person not with a -1
 or a +2, but with an additional patch that fixes it.
 
  That, for me, is an example where "-1, we need a test! untested code is
  broken!" is really shooting the messenger, not valuing the newcomer's
  contribution and risks turning that person off the project forever.
  Reviewers being overly aggressive about this where the project doesn't
  have full test coverage to begin with really makes us seem unwelcoming.

  In cases like that, I'd be of a mind to go +2 "Awesome! Thanks for
  catching this! It would be great to have a unit test for this, but it's
  clear the current code is broken so I'm fine with merging the fix
  without a test." You could say it's now the reviewer's responsibility to
  merge a test, but if that requirement then turns off reviewers even
  reviewing such a patch, then that doesn't help either.
 
 
 I understand entirely why you choose this, and I think that is fine. I,
 however, see this as a massive opportunity to teach. That code was only
 broken because it was allowed to be merged without tests. By letting
 that situation continue, we only fix it today. The next major refactoring
 has a high chance now of breaking that part of the code again.
 
 So, rather than +2, I suggest -1 with compassion. Engage with the
 submitter. If you don't know them, take a look at how hard it would be
 to write a test for the behavior and give pointers to the exact test
 suite that would need to be changed, or suggest a new test suite and
 point at a good example to copy.

I prefer Mark's approach here. I have seen him comment that the code could
use a test, and then submit a dependent patch with the test implemented. We have
a very long average time from first patch to the gate in Nova especially, and
going back and forth on simple bug fixes doesn't just cost the initial submitter
time, it increases the backlog and adds extra time for other reviewers as well.

It also gives the submitter a specific example of a well-written test, which
can be a faster way to learn than forcing them to get there via trial and error.

Vish

 
 So, with all of this, let's make sure we don't forget to first
 appreciate the effort that went into submitting the patch that lacks
 tests.
 
 
 I'm not going to claim that I've always practiced -1 with compassion,
 so thanks for reminding us all that we're not just reviewing code, we
 are having a dialog with real live people.
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-11-11 Thread Ed Leafe
On Nov 11, 2013, at 6:42 PM, Vishvananda Ishaya vishvana...@gmail.com wrote:

 It also gives the submitter a specific example of a well-written test, which
 can be a faster way to learn than forcing them to get there via trial and 
 error.

+1. Implementing a policy that has as the end effect more knowledgeable 
contributors is a big win.


-- Ed Leafe




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-11-01 Thread Rosa, Andrea (HP Cloud Services)

-Original Message-
From: Chris Friesen [mailto:chris.frie...@windriver.com]
Sent: 31 October 2013 17:07
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] When is it okay for submitters to say 'I don't
want to add tests' ?

On 10/31/2013 06:04 AM, Rosa, Andrea (HP Cloud Services) wrote:

 A - there is no test suite at all, adding one is unreasonable
 B - this thing cannot be tested in this context (e.g. functional tests
 are defined in a different tree)
 C - this particular thing is very hard to test
 D - testing this won't offer benefit

 In my opinion C, instead of being an acceptable reason for not having tests,
 is a symptom of one of two things:
 1) F = submitter doesn't know how to write tests, in which case someone else
 can help with suggestions
 2) The code we are trying to test is too complicated, so it's time to
 refactor it

 And about D: in my opinion tests always offer benefits, like code coverage
 or helping in understanding the code.

I think there are actually cases where C is valid.  It's difficult to test
certain kinds of race conditions, for example, unless you have very low-level
hooks into the guts of the system in order to force the desired conditions to
reliably occur at exactly the right time.

Well, it depends which kind of tests we are talking about.
I was talking about unit tests, and I totally agree with Sandy when he said that
everything can be tested and should be.
As for testing certain kinds of race conditions, those kinds of tests are not
always unit tests; I'd consider them functional tests.

Regards
--
Andrea Rosa



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-11-01 Thread Khanh-Toan Tran
Hey thanks a lot!

- Original Message -
From: Clint Byrum cl...@fewbar.com
To: openstack-dev openstack-dev@lists.openstack.org
Sent: Thursday, October 31, 2013 7:49:55 PM
Subject: Re: [openstack-dev] When is it okay for submitters to say 'I don't 
want to add tests' ?

Excerpts from Khanh-Toan Tran's message of 2013-10-31 07:22:06 -0700:
 Hi all,
 
 As a newbie of the community, I'm not familiar with unittest and how to use 
 it here. I've learned that Jenkins runs tests
 everytime we submit some code. But how to write the test and what is a 'good 
 test' and a 'bad test'? I saw some commits
 in gerrit but am unable to say if the written test is enough to judge the 
 code, since it is the author of the code who writes
 the test. Is there a framework to follow or some rules/pratices to respect?
 
 Do you have some links to help me out?
 

This is a nice synopsis of the concept of test driven development:

http://net.tutsplus.com/tutorials/python-tutorials/test-driven-development-in-python/

In OpenStack we always put tests in  _base_module_name_/tests, So if you
are working on nova, you can see the unit tests in:

nova/tests

You can generally always run the tests by installing the 'tox' python
module/command on your system and running 'tox' in the root of the git
repository.

Projects use various testing helpers to make tests easier to read and
write. The most common one is testtools. A typical test will look like
this:


import testtools

from basemodule import submodule


class TestSubmoduleFoo(testtools.TestCase):
    def test_foo_apple(self):
        self.assertEquals(1, submodule.foo('apple'))

    def test_foo_banana(self):
        self.assertEquals(0, submodule.foo('banana'))


Often unit tests will include mocks and fakes to hide real world
interfacing code from the unit tests. You would do well to read up on
how those concepts work as well, google for 'python test mocking' and
'python test fakes'.

Good luck, and #openstack-dev is always there to try and help. :)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-10-31 Thread Rosa, Andrea (HP Cloud Services)
Hi all, 

This is a bit of a social norms thread

I've been consistently asking for tests in reviews for a while now, and I get 
the
occasional push-back. I think this falls into a few broad camps:

A - there is no test suite at all, adding one is unreasonable
B - this thing cannot be tested in this context (e.g. functional tests are
defined in a different tree)
C - this particular thing is very hard to test
D - testing this won't offer benefit
E - other things like this in the project don't have tests
F - submitter doesn't know how to write tests
G - submitter doesn't have time to write tests

Now, of these, I think it's fine not to add tests in cases A, B, C in
combination with D, and D.

In my opinion C, instead of being an acceptable reason for not having tests,
is a symptom of one of two things:
1) F = submitter doesn't know how to write tests, in which case someone else
can help with suggestions
2) The code we are trying to test is too complicated, so it's time to refactor it

And about D: in my opinion tests always offer benefits, like code coverage or
helping in understanding the code.

What do you think?

Regards
--
Andrea Rosa

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-10-31 Thread Mark McLoughlin
On Thu, 2013-10-31 at 15:37 +1300, Robert Collins wrote:
 This is a bit of a social norms thread
 
 I've been consistently asking for tests in reviews for a while now,
 and I get the occasional push-back. I think this falls into a few
 broad camps:
 
 A - there is no test suite at all, adding one is unreasonable
 B - this thing cannot be tested in this context (e.g. functional tests
 are defined in a different tree)
 C - this particular thing is very hard to test
 D - testing this won't offer benefit
 E - other things like this in the project don't have tests
 F - submitter doesn't know how to write tests
 G - submitter doesn't have time to write tests

Nice breakdown.

 Now, of these, I think it's fine not to add tests in cases A, B, C in
 combination with D, and D.
 
 I don't think E, F or G are sufficient reasons to merge something
 without tests, when reviewers are asking for them. G in the special
 case that the project really wants the patch landed - but then I'd
 expect reviewers to not ask for tests or to volunteer that they might
 be optional.

I totally agree with the sentiment but, especially when it's a newcomer
to the project, I try to put myself in the shoes of the patch submitter
and double-check whether what we're asking is reasonable.

For example, if someone shows up to Nova with their first OpenStack
contribution, it fixes something which is unquestionably a bug - think
typo like raise NotFund('foo') - and testing this code patch requires
more than adding a simple new scenario to existing tests ...

That, for me, is an example where "-1, we need a test! untested code is
broken!" is really shooting the messenger, not valuing the newcomer's
contribution and risks turning that person off the project forever.
Reviewers being overly aggressive about this where the project doesn't
have full test coverage to begin with really makes us seem unwelcoming.

In cases like that, I'd be of a mind to go +2 "Awesome! Thanks for
catching this! It would be great to have a unit test for this, but it's
clear the current code is broken so I'm fine with merging the fix
without a test." You could say it's now the reviewer's responsibility to
merge a test, but if that requirement then turns off reviewers even
reviewing such a patch, then that doesn't help either.

So, with all of this, let's make sure we don't forget to first
appreciate the effort that went into submitting the patch that lacks
tests.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-10-31 Thread Clint Byrum
Excerpts from Mark McLoughlin's message of 2013-10-31 06:30:32 -0700:
 On Thu, 2013-10-31 at 15:37 +1300, Robert Collins wrote:
  This is a bit of a social norms thread
  
  I've been consistently asking for tests in reviews for a while now,
  and I get the occasional push-back. I think this falls into a few
  broad camps:
  
  A - there is no test suite at all, adding one is unreasonable
  B - this thing cannot be tested in this context (e.g. functional tests
  are defined in a different tree)
  C - this particular thing is very hard to test
  D - testing this won't offer benefit
  E - other things like this in the project don't have tests
  F - submitter doesn't know how to write tests
  G - submitter doesn't have time to write tests
 
 Nice breakdown.
 
  Now, of these, I think it's fine not to add tests in cases A, B, C in
  combination with D, and D.
  
  I don't think E, F or G are sufficient reasons to merge something
  without tests, when reviewers are asking for them. G in the special
  case that the project really wants the patch landed - but then I'd
  expect reviewers to not ask for tests or to volunteer that they might
  be optional.
 
 I totally agree with the sentiment but, especially when it's a newcomer
 to the project, I try to put myself in the shoes of the patch submitter
 and double-check whether what we're asking is reasonable.
 

Even with a long time contributor, empathy is an important part of
constructing reviews. We could make more robotic things that review for
test coverage, but we haven't, because this is a gray area. The role of
a reviewer isn't just get patches merged and stop defects. It is also to
grow the other developers.

 For example, if someone shows up to Nova with their first OpenStack
 contribution, it fixes something which is unquestionably a bug - think
 typo like raise NotFund('foo') - and testing this code patch requires
 more than adding a simple new scenario to existing tests ...


This goes back to my recent suggestion to help the person not with a -1
or a +2, but with an additional patch that fixes it.

 That, for me, is an example where "-1, we need a test! untested code is
 broken!" is really shooting the messenger, not valuing the newcomer's
 contribution and risks turning that person off the project forever.
 Reviewers being overly aggressive about this where the project doesn't
 have full test coverage to begin with really makes us seem unwelcoming.

 In cases like that, I'd be of a mind to go +2 "Awesome! Thanks for
 catching this! It would be great to have a unit test for this, but it's
 clear the current code is broken so I'm fine with merging the fix
 without a test." You could say it's now the reviewer's responsibility to
 merge a test, but if that requirement then turns off reviewers even
 reviewing such a patch, then that doesn't help either.
 

I understand entirely why you choose this, and I think that is fine. I,
however, see this as a massive opportunity to teach. That code was only
broken because it was allowed to be merged without tests. By letting
that situation continue, we only fix it today. The next major refactoring
has a high chance now of breaking that part of the code again.

So, rather than +2, I suggest -1 with compassion. Engage with the
submitter. If you don't know them, take a look at how hard it would be
to write a test for the behavior and give pointers to the exact test
suite that would need to be changed, or suggest a new test suite and
point at a good example to copy.
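
A rough sketch of what such a pointed-at example could look like for the
"raise NotFund('foo')" case above -- the NotFound/lookup names here are
invented for illustration; the point is that merely exercising the error
path turns the typo into a visible test failure:

import testtools


class NotFound(Exception):
    """Stand-in for a real project exception class."""


def lookup(flavors, name):
    # Before the fix this line read ``raise NotFund(name)`` and died with
    # a NameError instead of raising the intended exception.
    if name not in flavors:
        raise NotFound(name)
    return flavors[name]


class TestLookup(testtools.TestCase):
    def test_missing_name_raises_not_found(self):
        self.assertRaises(NotFound, lookup, {'small': 1}, 'huge')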

 So, with all of this, let's make sure we don't forget to first
 appreciate the effort that went into submitting the patch that lacks
 tests.
 

I'm not going to claim that I've always practiced -1 with compassion,
so thanks for reminding us all that we're not just reviewing code, we
are having a dialog with real live people.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-10-31 Thread Kyle Mestery (kmestery)
On Oct 31, 2013, at 8:56 AM, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Mark McLoughlin's message of 2013-10-31 06:30:32 -0700:
 On Thu, 2013-10-31 at 15:37 +1300, Robert Collins wrote:
 This is a bit of a social norms thread
 
 I've been consistently asking for tests in reviews for a while now,
 and I get the occasional push-back. I think this falls into a few
 broad camps:
 
 A - there is no test suite at all, adding one in unreasonable
 B - this thing cannot be tested in this context (e.g. functional tests
 are defined in a different tree)
 C - this particular thing is very hard to test
 D - testing this won't offer benefit
 E - other things like this in the project don't have tests
 F - submitter doesn't know how to write tests
 G - submitter doesn't have time to write tests
 
 Nice breakdown.
 
 Now, of these, I think it's fine not add tests in cases A, B, C in
 combination with D, and D.
 
 I don't think E, F or G are sufficient reasons to merge something
 without tests, when reviewers are asking for them. G in the special
 case that the project really wants the patch landed - but then I'd
 expect reviewers to not ask for tests or to volunteer that they might
 be optional.
 
 I totally agree with the sentiment but, especially when it's a newcomer
 to the project, I try to put myself in the shoes of the patch submitter
 and double-check whether what we're asking is reasonable.
 
 
 Even with a long time contributor, empathy is an important part of
 constructing reviews. We could make more robotic things that review for
 test coverage, but we haven't, because this is a gray area. The role of
 a reviewer isn't just get patches merged and stop defects. It is also to
 grow the other developers.
 
 For example, if someone shows up to Nova with their first OpenStack
 contribution, it fixes something which is unquestionably a bug - think
 typo like raise NotFund('foo') - and testing this code patch requires
 more than adding a simple new scenario to existing tests ...
 
 
 This goes back to my recent suggestion to help the person not with a -1
 or a +2, but with an additional patch that fixes it.
 
 That, for me, is an example of -1, we need a test! untested code is
 broken! is really shooting the messenger, not valuing the newcomers
 contribution and risks turning that person off the project forever.
 Reviewers being overly aggressive about this where the project doesn't
 have full test coverage to begin with really makes us seem unwelcoming.
 
 In cases like that, I'd be of a mind to go +2 Awesome! Thanks for
 catching this! It would be great to have a unit test for this, but it's
 clear the current code is broken so I'm fine with merging the fix
 without a test. You could say it's now the reviewers responsibility to
 merge a test, but if that requirement then turns off reviewers even
 reviewing such a patch, then that doesn't help either.
 
 
 I understand entirely why you choose this, and I think that is fine. I,
 however, see this as a massive opportunity to teach. That code was only
 broken because it was allowed it to be merged without tests. By letting
 that situation continue, we only fix it today. The next major refactoring
 has a high chance now of breaking that part of the code again.
 
 So, rather than +2, I suggest -1 with compassion. Engage with the
 submitter. If you don't know them, take a look at how hard it would be
 to write a test for the behavior and give pointers to the exact test
 suite that would need to be changed, or suggest a new test suite and
 point at a good example to copy.
 
 So, with all of this, let's make sure we don't forget to first
 appreciate the effort that went into submitting the patch that lacks
 tests.
 
 
 I'm not going to claim that I've always practiced -1 with compassion,
 so thanks for reminding us all that we're not just reviewing code, we
 are having a dialog with real live people.
 
I think this is the key thing here, thanks for bringing this up Clint. At the
end of the day, patches are submitted by real people. If we want to grow
the committer base and help people to become better reviewers, taking
the time to show them the ropes is part of the game. I'd argue that is in
fact part of what being a core is about.

Thanks,
Kyle




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-10-31 Thread Khanh-Toan Tran
Hi all,

As a newbie in the community, I'm not familiar with unit testing and how it is
used here. I've learned that Jenkins runs tests every time we submit some code.
But how do you write the tests, and what makes a 'good test' or a 'bad test'?
I saw some commits in Gerrit but am unable to say whether the written tests are
enough to judge the code, since it is the author of the code who writes the
tests. Is there a framework to follow or some rules/practices to respect?

Do you have some links to help me out?

Thanks,

Toan

- Original Message -
From: Kyle Mestery (kmestery) kmest...@cisco.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Thursday, October 31, 2013 3:05:27 PM
Subject: Re: [openstack-dev] When is it okay for submitters to say 'I   don't   
want to add tests' ?

On Oct 31, 2013, at 8:56 AM, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Mark McLoughlin's message of 2013-10-31 06:30:32 -0700:
 On Thu, 2013-10-31 at 15:37 +1300, Robert Collins wrote:
 This is a bit of a social norms thread
 
 I've been consistently asking for tests in reviews for a while now,
 and I get the occasional push-back. I think this falls into a few
 broad camps:
 
 A - there is no test suite at all, adding one in unreasonable
 B - this thing cannot be tested in this context (e.g. functional tests
 are defined in a different tree)
 C - this particular thing is very hard to test
 D - testing this won't offer benefit
 E - other things like this in the project don't have tests
 F - submitter doesn't know how to write tests
 G - submitter doesn't have time to write tests
 
 Nice breakdown.
 
 Now, of these, I think it's fine not add tests in cases A, B, C in
 combination with D, and D.
 
 I don't think E, F or G are sufficient reasons to merge something
 without tests, when reviewers are asking for them. G in the special
 case that the project really wants the patch landed - but then I'd
 expect reviewers to not ask for tests or to volunteer that they might
 be optional.
 
 I totally agree with the sentiment but, especially when it's a newcomer
 to the project, I try to put myself in the shoes of the patch submitter
 and double-check whether what we're asking is reasonable.
 
 
 Even with a long time contributor, empathy is an important part of
 constructing reviews. We could make more robotic things that review for
 test coverage, but we haven't, because this is a gray area. The role of
 a reviewer isn't just get patches merged and stop defects. It is also to
 grow the other developers.
 
 For example, if someone shows up to Nova with their first OpenStack
 contribution, it fixes something which is unquestionably a bug - think
 typo like raise NotFund('foo') - and testing this code patch requires
 more than adding a simple new scenario to existing tests ...
 
 
 This goes back to my recent suggestion to help the person not with a -1
 or a +2, but with an additional patch that fixes it.
 
 That, for me, is an example of -1, we need a test! untested code is
 broken! is really shooting the messenger, not valuing the newcomers
 contribution and risks turning that person off the project forever.
 Reviewers being overly aggressive about this where the project doesn't
 have full test coverage to begin with really makes us seem unwelcoming.
 
 In cases like that, I'd be of a mind to go +2 Awesome! Thanks for
 catching this! It would be great to have a unit test for this, but it's
 clear the current code is broken so I'm fine with merging the fix
 without a test. You could say it's now the reviewers responsibility to
 merge a test, but if that requirement then turns off reviewers even
 reviewing such a patch, then that doesn't help either.
 
 
 I understand entirely why you choose this, and I think that is fine. I,
 however, see this as a massive opportunity to teach. That code was only
 broken because it was allowed it to be merged without tests. By letting
 that situation continue, we only fix it today. The next major refactoring
 has a high chance now of breaking that part of the code again.
 
 So, rather than +2, I suggest -1 with compassion. Engage with the
 submitter. If you don't know them, take a look at how hard it would be
 to write a test for the behavior and give pointers to the exact test
 suite that would need to be changed, or suggest a new test suite and
 point at a good example to copy.
 
 So, with all of this, let's make sure we don't forget to first
 appreciate the effort that went into submitting the patch that lacks
 tests.
 
 
 I'm not going to claim that I've always practiced -1 with compassion,
 so thanks for reminding us all that we're not just reviewing code, we
 are having a dialog with real live people.
 
I think this is the key thing here, thanks for bringing this up Clint. At the
end of the day, patches are submitted by real people. If we want to grow
the committer base and help people to become better reviewers, taking
the time to show them the ropes is part of the game. I'd argue that is in
fact part of what being a core is about.

Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-10-31 Thread Chris Friesen

On 10/31/2013 06:04 AM, Rosa, Andrea (HP Cloud Services) wrote:


A - there is no test suite at all, adding one is unreasonable
B - this thing cannot be tested in this context (e.g. functional tests are
defined in a different tree)
C - this particular thing is very hard to test
D - testing this won't offer benefit


In my opinion C, instead of being an acceptable reason for not having tests,
is a symptom of one of two things:
1) F = submitter doesn't know how to write tests, in which case someone else
can help with suggestions
2) The code we are trying to test is too complicated, so it's time to refactor it

And about D: in my opinion tests always offer benefits, like code coverage or
helping in understanding the code.


I think there are actually cases where C is valid.  It's difficult to 
test certain kinds of race conditions, for example, unless you have very 
low-level hooks into the guts of the system in order to force the 
desired conditions to reliably occur at exactly the right time.


Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-10-31 Thread Ben Nemec

On 2013-10-31 09:05, Kyle Mestery (kmestery) wrote:

On Oct 31, 2013, at 8:56 AM, Clint Byrum cl...@fewbar.com wrote:

Excerpts from Mark McLoughlin's message of 2013-10-31 06:30:32 -0700:

On Thu, 2013-10-31 at 15:37 +1300, Robert Collins wrote:

This is a bit of a social norms thread

I've been consistently asking for tests in reviews for a while now,
and I get the occasional push-back. I think this falls into a few
broad camps:

A - there is no test suite at all, adding one is unreasonable
B - this thing cannot be tested in this context (e.g. functional tests
are defined in a different tree)
C - this particular thing is very hard to test
D - testing this won't offer benefit
E - other things like this in the project don't have tests
F - submitter doesn't know how to write tests
G - submitter doesn't have time to write tests


Nice breakdown.


Now, of these, I think it's fine not to add tests in cases A, B, C in
combination with D, and D.

I don't think E, F or G are sufficient reasons to merge something
without tests, when reviewers are asking for them. G in the special
case that the project really wants the patch landed - but then I'd
expect reviewers to not ask for tests or to volunteer that they might
be optional.


I totally agree with the sentiment but, especially when it's a newcomer
to the project, I try to put myself in the shoes of the patch submitter
and double-check whether what we're asking is reasonable.


Even with a long time contributor, empathy is an important part of
constructing reviews. We could make more robotic things that review for
test coverage, but we haven't, because this is a gray area. The role of
a reviewer isn't just get patches merged and stop defects. It is also to
grow the other developers.


For example, if someone shows up to Nova with their first OpenStack
contribution, it fixes something which is unquestionably a bug - think
typo like raise NotFund('foo') - and testing this code patch requires
more than adding a simple new scenario to existing tests ...


This goes back to my recent suggestion to help the person not with a -1
or a +2, but with an additional patch that fixes it.


That, for me, is an example where "-1, we need a test! untested code is
broken!" is really shooting the messenger, not valuing the newcomer's
contribution and risks turning that person off the project forever.
Reviewers being overly aggressive about this where the project doesn't
have full test coverage to begin with really makes us seem unwelcoming.

In cases like that, I'd be of a mind to go +2 "Awesome! Thanks for
catching this! It would be great to have a unit test for this, but it's
clear the current code is broken so I'm fine with merging the fix
without a test." You could say it's now the reviewer's responsibility to
merge a test, but if that requirement then turns off reviewers even
reviewing such a patch, then that doesn't help either.


I understand entirely why you choose this, and I think that is fine. I,
however, see this as a massive opportunity to teach. That code was only
broken because it was allowed to be merged without tests. By letting
that situation continue, we only fix it today. The next major refactoring
has a high chance now of breaking that part of the code again.

So, rather than +2, I suggest -1 with compassion. Engage with the
submitter. If you don't know them, take a look at how hard it would be
to write a test for the behavior and give pointers to the exact test
suite that would need to be changed, or suggest a new test suite and
point at a good example to copy.


So, with all of this, let's make sure we don't forget to first
appreciate the effort that went into submitting the patch that lacks
tests.


I'm not going to claim that I've always practiced "-1 with compassion",
so thanks for reminding us all that we're not just reviewing code, we
are having a dialog with real live people.


I think this is the key thing here, thanks for bringing this up Clint. At the
end of the day, patches are submitted by real people. If we want to grow
the committer base and help people to become better reviewers, taking
the time to show them the ropes is part of the game. I'd argue that is in
fact part of what being a core is about.


When in doubt, I tend to err on the side of no score with comments.  
It's not ideal for every situation, but I like to think it gets my point 
across without making it sound like I disapprove of the change itself.  
If my suggestions are ignored completely, I've always got the -1 in my 
back pocket to press the issue. :-)


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-10-31 Thread Clint Byrum
Excerpts from Khanh-Toan Tran's message of 2013-10-31 07:22:06 -0700:
 Hi all,
 
 As a newbie of the community, I'm not familiar with unittest and how to use 
 it here. I've learned that Jenkins runs tests
 everytime we submit some code. But how to write the test and what is a 'good 
 test' and a 'bad test'? I saw some commits
 in gerrit but am unable to say if the written test is enough to judge the 
 code, since it is the author of the code who writes
 the test. Is there a framework to follow or some rules/pratices to respect?
 
 Do you have some links to help me out?
 

This is a nice synopsis of the concept of test driven development:

http://net.tutsplus.com/tutorials/python-tutorials/test-driven-development-in-python/

In OpenStack we always put tests in _base_module_name_/tests. So if you
are working on nova, you can see the unit tests in:

nova/tests

You can generally always run the tests by installing the 'tox' python
module/command on your system and running 'tox' in the root of the git
repository.

Projects use various testing helpers to make tests easier to read and
write. The most common one is testtools. A typical test will look like
this:


import testtools

from basemodule import submodule


class TestSubmoduleFoo(testtools.TestCase):
    def test_foo_apple(self):
        self.assertEquals(1, submodule.foo('apple'))

    def test_foo_banana(self):
        self.assertEquals(0, submodule.foo('banana'))


Often unit tests will include mocks and fakes to hide real world
interfacing code from the unit tests. You would do well to read up on
how those concepts work as well, google for 'python test mocking' and
'python test fakes'.
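
As a rough sketch of that idea, reusing the hypothetical basemodule.submodule
from the example above and assuming foo() internally calls a fetch_price()
helper that talks to the outside world (the standalone 'mock' library shown
here later became unittest.mock in the standard library):

import mock
import testtools

from basemodule import submodule


class TestFooIsolated(testtools.TestCase):
    def test_foo_does_not_hit_the_network(self):
        # Replace the (hypothetical) external helper so the unit test never
        # touches the real world, then check both the result and the call.
        with mock.patch.object(submodule, 'fetch_price',
                               return_value=1) as fake_fetch:
            self.assertEqual(1, submodule.foo('apple'))
            fake_fetch.assert_called_once_with('apple')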

Good luck, and #openstack-dev is always there to try and help. :)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-10-31 Thread Steven Hardy
On Thu, Oct 31, 2013 at 01:30:32PM +, Mark McLoughlin wrote:
 On Thu, 2013-10-31 at 15:37 +1300, Robert Collins wrote:
  This is a bit of a social norms thread
  
  I've been consistently asking for tests in reviews for a while now,
  and I get the occasional push-back. I think this falls into a few
  broad camps:
  
  A - there is no test suite at all, adding one in unreasonable
  B - this thing cannot be tested in this context (e.g. functional tests
  are defined in a different tree)
  C - this particular thing is very hard to test
  D - testing this won't offer benefit
  E - other things like this in the project don't have tests
  F - submitter doesn't know how to write tests
  G - submitter doesn't have time to write tests
 
 Nice breakdown.
 
  Now, of these, I think it's fine not add tests in cases A, B, C in
  combination with D, and D.
  
  I don't think E, F or G are sufficient reasons to merge something
  without tests, when reviewers are asking for them. G in the special
  case that the project really wants the patch landed - but then I'd
  expect reviewers to not ask for tests or to volunteer that they might
  be optional.
 
 I totally agree with the sentiment but, especially when it's a newcomer
 to the project, I try to put myself in the shoes of the patch submitter
 and double-check whether what we're asking is reasonable.
 
 For example, if someone shows up to Nova with their first OpenStack
 contribution, it fixes something which is unquestionably a bug - think
 typo like raise NotFund('foo') - and testing this code patch requires
 more than adding a simple new scenario to existing tests ...
 
 That, for me, is an example of -1, we need a test! untested code is
 broken! is really shooting the messenger, not valuing the newcomers
 contribution and risks turning that person off the project forever.
 Reviewers being overly aggressive about this where the project doesn't
 have full test coverage to begin with really makes us seem unwelcoming.

The approach the Heat team have sometimes taken in this situation is to
merge the patch, but raise a bug (targeted at the next milestone)
identifying the missing coverage.

This has a few benefits (provided the patch in question is sufficiently
simple that tests aren't deemed essential):
- The bug gets fixed quickly
- The coverage still gets added, and we keep track of the gaps identified
- Less chance of discouraging new submitters
- Provides some low hanging fruit bugs, which new contributors can pick
  up

I'm not saying we do this routinely, but it's a possible middle ground to
consider between -1 fix all our tests and +2 shipit!

I think something that's important to recognise is that sometimes (often?)
writing tests is *hard*.  I mean, look at the barriers to entry: you need
to have some idea about tox, nose, testr, mox, mock, testscenarios, etc.

The learning curve is really steep, and that's before you start considering
project-specific test abstractions/fixtures/mocking-patterns, which can all
be complex and non-obvious.

So +1 on a bit of compassion when reviewing, particularly if the
contributor is a new or casual contributor to the project.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-10-31 Thread Paul Belanger

On 13-10-30 10:37 PM, Robert Collins wrote:

This is a bit of a social norms thread

I've been consistently asking for tests in reviews for a while now,
and I get the occasional push-back. I think this falls into a few
broad camps:

A - there is no test suite at all, adding one in unreasonable
B - this thing cannot be tested in this context (e.g. functional tests
are defined in a different tree)
C - this particular thing is very hard to test
D - testing this won't offer benefit
E - other things like this in the project don't have tests
F - submitter doesn't know how to write tests
G - submitter doesn't have time to write tests

Now, of these, I think it's fine not add tests in cases A, B, C in
combination with D, and D.

I don't think E, F or G are sufficient reasons to merge something
without tests, when reviewers are asking for them. G in the special
case that the project really wants the patch landed - but then I'd
expect reviewers to not ask for tests or to volunteer that they might
be optional.

Now, if I'm wrong, and folk have different norms about when to accept
'reason X not to write tests' as a response from the submitter -
please let me know!

-Rob

I recently hit option A for nodepool.  My patch was accepted, but I
didn't know where to start for adding a test suite for the project.


So, that said, if option A keeps coming up, then I think the obvious
choice is that development needs to refocus to take that option off the table.


--
Paul Belanger | PolyBeacon, Inc.
Jabber: paul.belan...@polybeacon.com | IRC: pabelanger (Freenode)
Github: https://github.com/pabelanger | Twitter: 
https://twitter.com/pabelanger


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-10-31 Thread Michael Bright
I like Steve's suggestion:

 The approach the Heat team have sometimes taken in this situation is to
 merge the patch, but raise a bug (targetted at the next milestone)
 identifying the missing coverage.

I'm (almost!) a first-time contributor and I've put a fix on the back burner
as I find the time to ramp up on the unit test suite.  The fix was
reviewed two weeks ago, requesting unit tests - very reasonable, but I
needed help to even start (I got no response to requests on IRC, the ML, or
e-mail to the bug reporter).

If it wasn't for the good Upstream University people mentoring me - heh,
they deserve a plug! - I'm sure I would have moved on.

Thankfully, a cunning commit message provoked the guidance I needed so I
just need a long weekend to get back to the tests - which I have now.

I'm not sure in which cases you would/wouldn't want to split the bug for
the implementation of tests but I'm sure it would help other first timers
and encourage new contributors.

Regards,
Mike





On 31 October 2013 19:51, Steven Hardy sha...@redhat.com wrote:

 On Thu, Oct 31, 2013 at 01:30:32PM +, Mark McLoughlin wrote:
  On Thu, 2013-10-31 at 15:37 +1300, Robert Collins wrote:
   This is a bit of a social norms thread
  
   I've been consistently asking for tests in reviews for a while now,
   and I get the occasional push-back. I think this falls into a few
   broad camps:
  
   A - there is no test suite at all, adding one in unreasonable
   B - this thing cannot be tested in this context (e.g. functional tests
   are defined in a different tree)
   C - this particular thing is very hard to test
   D - testing this won't offer benefit
   E - other things like this in the project don't have tests
   F - submitter doesn't know how to write tests
   G - submitter doesn't have time to write tests
 
  Nice breakdown.
 
   Now, of these, I think it's fine not add tests in cases A, B, C in
   combination with D, and D.
  
   I don't think E, F or G are sufficient reasons to merge something
   without tests, when reviewers are asking for them. G in the special
   case that the project really wants the patch landed - but then I'd
   expect reviewers to not ask for tests or to volunteer that they might
   be optional.
 
  I totally agree with the sentiment but, especially when it's a newcomer
  to the project, I try to put myself in the shoes of the patch submitter
  and double-check whether what we're asking is reasonable.
 
  For example, if someone shows up to Nova with their first OpenStack
  contribution, it fixes something which is unquestionably a bug - think
  typo like raise NotFund('foo') - and testing this code patch requires
  more than adding a simple new scenario to existing tests ...
 
  That, for me, is an example of -1, we need a test! untested code is
  broken! is really shooting the messenger, not valuing the newcomers
  contribution and risks turning that person off the project forever.
  Reviewers being overly aggressive about this where the project doesn't
  have full test coverage to begin with really makes us seem unwelcoming.

 The approach the Heat team have sometimes taken in this situation is to
 merge the patch, but raise a bug (targetted at the next milestone)
 identifying the missing coverage.

 This has a few benefits (provided the patch in question is sufficiently
 simple that patches aren't deemed essential)
 - The bug gets fixed quickly
 - The coverage still gets added, and we keep track of the gaps identified
 - Less chance of discouraging new submitters
 - Provides some low hanging fruit bugs, which new contributors can pick
   up

 I'm not saying we do this routinely, but it's a possible middle ground to
 consider between -1 fix all our tests and +2 shipit!

 I think something that's important to recognise is that sometimes (often?)
 writing tests is *hard*.  I mean, look at the barriers to entry, you need
 to have some idea about tox, nose, testr, mox, mock, testscenarios, etc etc

 The learning curve is really steep, and thats before you start considering
 project specific test abstractions/fixtures/mocking-patterns which can all
 be complex and non-obvious.

 So +1 on a bit of compassion when reviewing, particularly if the
 contributor is a new or casual contributor to the project.

 Steve


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-10-31 Thread Sandy Walsh


On 10/30/2013 11:37 PM, Robert Collins wrote:
 This is a bit of a social norms thread
 
 I've been consistently asking for tests in reviews for a while now,
 and I get the occasional push-back. I think this falls into a few
 broad camps:
 
 A - there is no test suite at all, adding one in unreasonable
 B - this thing cannot be tested in this context (e.g. functional tests
 are defined in a different tree)
 C - this particular thing is very hard to test
 D - testing this won't offer benefit
 E - other things like this in the project don't have tests
 F - submitter doesn't know how to write tests
 G - submitter doesn't have time to write tests
 
 Now, of these, I think it's fine not add tests in cases A, B, C in
 combination with D, and D.
 
 I don't think E, F or G are sufficient reasons to merge something
 without tests, when reviewers are asking for them. G in the special
 case that the project really wants the patch landed - but then I'd
 expect reviewers to not ask for tests or to volunteer that they might
 be optional.
 
 Now, if I'm wrong, and folk have different norms about when to accept
 'reason X not to write tests' as a response from the submitter -
 please let me know!

I've done a lot of thinking around this topic [1][2] and really it comes
down to this: everything can be tested and should be.

There is an argument to A, but that goes beyond the scope of our use
case I think. If I hear B, I would suspect the tests aren't unit tests,
but are functional/integration tests (a common problem in OpenStack).
Functional tests are brittle and usually have painful setup sequences.
The other cases fall into the -1 camp for me. Tests required.

That said, recently I was -1'ed for not updating a test, because I added
code that didn't change the program flow, but introduced a new call.
According to my rules, that didn't need a test, but I agreed with the
logic that people would be upset if the call wasn't made (it was a
notification). So a test was added. Totally valid argument.
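
Roughly, the kind of test that request boils down to is just asserting that
the new call happens -- for instance with a mock notifier and invented names
along these lines, without re-testing the rest of the flow:

import mock
import testtools


def resize_instance(notifier, instance):
    # ... existing resize logic would be here, unchanged ...
    notifier.notify('compute.instance.resize', instance)  # the new call
    return instance


class TestResizeNotification(testtools.TestCase):
    def test_resize_emits_notification(self):
        notifier = mock.Mock()
        resize_instance(notifier, {'id': 'abc'})
        notifier.notify.assert_called_once_with(
            'compute.instance.resize', {'id': 'abc'})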

TL;DR: Tests are always required. We need to fix our tests to be proper
unit tests and not functional/integration tests so it's easy to add new
ones.

-S


[1]
http://www.sandywalsh.com/2011/06/effective-units-tests-and-integration.html
[2]
http://www.sandywalsh.com/2011/08/pain-of-unit-tests-and-dynamically.html


 -Rob
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-10-31 Thread Jeremy Stanley
On 2013-10-31 13:30:32 + (+), Mark McLoughlin wrote:
[...]
 In cases like that, I'd be of a mind to go +2 "Awesome! Thanks for
 catching this! It would be great to have a unit test for this, but
 it's clear the current code is broken so I'm fine with merging the
 fix without a test." You could say it's now the reviewer's
 responsibility to merge a test, but if that requirement then turns
 off reviewers even reviewing such a patch, then that doesn't help
 either.
[...]

I get the impression it was one of my +2s in a situation like this
which spawned the thread (or at least was the straw which broke the
camel's back), so apologies for stirring the beehive. Someone was
trying to set up their own copy of several of our infrastructure
projects, spotted a place in one where we had hard-coded something
specific to our environment, and wanted to parameterize it upstream
rather than paper over it on their end.

The bug-reporter-turned-patch-contributor didn't know how to write a
test which would confirm that parameter got passed through and we
weren't performing tests yet for any similar parameters already in
the daemon which I could point to as an example. I agree it's a
judgement call between risking discouraging a new contributor vs.
reducing test coverage further, but I was probably still a bit too
hasty to suggest that we could add tests for those in a separate
commit.

In my case I didn't have the available time to instruct the
contributor on how to write tests for this, but that also probably
meant that I didn't really have time to be reviewing the change
properly to begin with. I'm quite grateful to Robert for stepping in
and helping walk them through it! We got more tests, I think they
got a lot of benefit from the new experience as well, and I was
appropriately humbled for my lax attitude over the situation which
nearly allowed us to miss a great opportunity at educating another
developer on the merits of test coverage.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-10-31 Thread Adam Young

On 10/31/2013 03:24 PM, Jeremy Stanley wrote:

On 2013-10-31 13:30:32 + (+), Mark McLoughlin wrote:
[...]

In cases like that, I'd be of a mind to go +2 Awesome! Thanks for
catching this! It would be great to have a unit test for this, but
it's clear the current code is broken so I'm fine with merging the
fix without a test. You could say it's now the reviewers
responsibility to merge a test, but if that requirement then turns
off reviewers even reviewing such a patch, then that doesn't help
either.

[...]

I get the impression it was one of my +2s in a situation like this
which spawned the thread (or at least was the straw which broke the
camel's back), so apologies for stirring the beehive. Someone was
trying to set up their own copy of several of our infrastructure
projects, spotted a place in one where we had hard-coded something
specific to our environment, and wanted to parameterize it upstream
rather than paper over it on their end.

The bug-reporter-turned-patch-contributor didn't know how to write a
test which would confirm that parameter got passed through and we
weren't performing tests yet for any similar parameters already in
the daemon which I could point to as an example. I agree it's a
judgement call between risking discouraging a new contributor vs.
reducing test coverage further, but I was probably still a bit too
hasty to suggest that we could add tests for those in a separate
commit.

In my case I didn't have the available time to instruct the
contributor on how to write tests for this, but that also probably
meant that I didn't really have time to be reviewing the change
properly to begin with. I'm quite grateful to Robert for stepping in
and helping walk them through it! We got more tests, I think they
got a lot of benefit from the new experience as well, and I was
appropriately humbled for my lax attitude over the situation which
nearly allowed us to miss a great opportunity at educating another
developer on the merits of test coverage.
It is probably the best approach to work with a new patch submitter to
show them how to write the tests.  Another approach is to write a test
yourself.  Check for some precondition that shows the patch is applied,
and if it is not met, throw a Skip exception.  That test should pass
until their patch comes in; then, once their patch comes in, the test
should be triggered.  You can remove the skip code in a future commit.
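
With testtools, that skip-until-the-patch-lands pattern might look roughly
like this (the module and attribute names are invented; skipTest raises the
Skip exception mentioned above):

import testtools

from basemodule import submodule


class TestUpcomingFix(testtools.TestCase):
    def test_bar_handles_unicode(self):
        # Precondition that only holds once the submitter's patch has
        # merged; until then the test skips instead of failing.
        if not getattr(submodule, 'handles_unicode', False):
            self.skipTest('fix not merged yet')
        self.assertEqual(u'ok', submodule.bar(u'caf\xe9'))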


I would like to see more of this kind of coding: tests that show a
feature is missing, etc.  We discussed it a bit in the Keystone/Client
discussions, where we wanted to run tests against a live server for a
client change, but couldn't really check in new tests in the server as
it was after code freeze.  I made the decision to push for Tempest tests
there... and we broke some new ground in our workflow.


One tool that is really valuable is the test coverage tool.  It can show 
the lines of code that have no coverage, and will help confirm if a 
given patch is tested or not.   It used to run on each commit, or  so I 
was told.  I was running it periodically for Keystone tests: here is an 
old run http://admiyo.fedorapeople.org/openstack/keystone/coverage/










___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-10-30 Thread Robert Collins
This is a bit of a social norms thread

I've been consistently asking for tests in reviews for a while now,
and I get the occasional push-back. I think this falls into a few
broad camps:

A - there is no test suite at all, adding one is unreasonable
B - this thing cannot be tested in this context (e.g. functional tests
are defined in a different tree)
C - this particular thing is very hard to test
D - testing this won't offer benefit
E - other things like this in the project don't have tests
F - submitter doesn't know how to write tests
G - submitter doesn't have time to write tests

Now, of these, I think it's fine not to add tests in cases A, B, C in
combination with D, and D.

I don't think E, F or G are sufficient reasons to merge something
without tests, when reviewers are asking for them. G in the special
case that the project really wants the patch landed - but then I'd
expect reviewers to not ask for tests or to volunteer that they might
be optional.

Now, if I'm wrong, and folk have different norms about when to accept
'reason X not to write tests' as a response from the submitter -
please let me know!

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-10-30 Thread Morgan Fainberg
On Wed, Oct 30, 2013 at 7:37 PM, Robert Collins
robe...@robertcollins.net wrote:
 This is a bit of a social norms thread

 I've been consistently asking for tests in reviews for a while now,
 and I get the occasional push-back. I think this falls into a few
 broad camps:

 A - there is no test suite at all, adding one in unreasonable
 B - this thing cannot be tested in this context (e.g. functional tests
 are defined in a different tree)
 C - this particular thing is very hard to test
 D - testing this won't offer benefit
 E - other things like this in the project don't have tests
 F - submitter doesn't know how to write tests
 G - submitter doesn't have time to write tests

 Now, of these, I think it's fine not add tests in cases A, B, C in
 combination with D, and D.


I think that C is not really a valid case to allow no tests (there are
always exceptions). I strongly believe that we can collaborate
(perhaps include some people who are better at test writing) to build
the tests and add them as a co-author in the commit message (or at the
very least have the test patchset depend on the main patchset).
"Hard to test" is too similar to F.

Just my general feelings on the topic.

Cheers,
Morgan Fainberg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-10-30 Thread Christopher Yeoh
 When is it okay for submitters to say 'I don't want to add tests' ?

When they don't want their patch merged :-) But more seriously

On Thu, Oct 31, 2013 at 1:07 PM, Robert Collins
robe...@robertcollins.netwrote:

 This is a bit of a social norms thread

 I've been consistently asking for tests in reviews for a while now,
 and I get the occasional push-back. I think this falls into a few
 broad camps:

 A - there is no test suite at all, adding one in unreasonable


Do you mean no test suite at all for the specific project, or for the feature
they are modifying? For the latter I think it's fine to require that they
start one for the feature, even if it just tests the behavior they just
modified.


 B - this thing cannot be tested in this context (e.g. functional tests
 are defined in a different tree)
 C - this particular thing is very hard to test


Yea I have a lot of sympathy for this one. I run into this occasionally
myself where it can be really difficult to cleanly, yet still vaguely
realistically inject an error. I'm sort of stuck on one at the moment and
I've definitely spent a lot more time thinking about how to write an
adequate test than the fix itself.
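
One common way to inject such an error cleanly in a unit test is mock's
side_effect -- a rough sketch with invented names, which at least covers the
"collaborator blows up" case even when the real race is hard to reproduce:

import mock
import testtools

from basemodule import submodule


class TestFooErrorPath(testtools.TestCase):
    def test_foo_survives_backend_timeout(self):
        # Force the (hypothetical) backend helper to fail, then check that
        # the code under review degrades the way the fix intends.
        with mock.patch.object(submodule, 'fetch_price',
                               side_effect=IOError('timed out')):
            self.assertEqual(0, submodule.foo('apple'))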


 D - testing this won't offer benefit


The test will at least always act as a check that the specific behaviour
doesn't change accidentally in the future, won't it (in the context of bug
fixes)?



 E - other things like this in the project don't have tests
 F - submitter doesn't know how to write tests
 G - submitter doesn't have time to write tests


There is perhaps also:

H - a test in another project (say tempest) picks up this failure already.

and H in combination with, say, C. I'd have a lot of sympathy for that
situation.


Now, of these, I think it's fine not add tests in cases A, B, C in
 combination with D, and D.

 I don't think E, F or G are sufficient reasons to merge something
 without tests, when reviewers are asking for them. G in the special
 case that the project really wants the patch landed - but then I'd
 expect reviewers to not ask for tests or to volunteer that they might
 be optional.


I agree with you on E, F and G. But in those circumstances we should
be offering those people help in writing tests, especially if they're new.

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev