Re: [Openstack] [nova-testing] Efforts for Essex

2011-12-05 Thread Duncan McGreggor
On 30 Nov 2011 - 11:07, Duncan McGreggor wrote:
> On Tue, Nov 29, 2011 at 12:21 PM, Soren Hansen  wrote:
> > It's been a bit over a week since I started this thread. So far we've
> > agreed that running the test suite is too slow, mostly because there
> > are too many things in there that aren't unit tests.
> >
> > We've also discussed my fake db implementation at length. I think
> > we've generally agreed that it isn't completely insane, so that's
> > moving along nicely.
> >
> > Duncan has taken the first steps needed to split the test suite into
> > unit tests and everything else:
> >
> >   https://review.openstack.org/#change,1879
> >
> > Just one more core +1 needed. Will someone beat me to it? Only time
> > will tell :) Thanks, Duncan!
> >
> > Anything else around unit testing anyone wants to get into The Great
> > Big Plan[tm]?
>
> Actually, yeah... one more thing :-)
>
> Jay and I were chatting about organization of infrastructure last
> night/this morning (on the review comments for the branch I
> submitted). He said that I should raise a concern I expressed for
> wider discussion: right now, tests are all piled into the tests
> directory. Below are my thoughts on this.
>
> I think such an approach is just fine for smaller projects; there's
> not a lot there, and it's all pretty easy to find. For large projects,
> this seems like not such a good idea for the following reasons:
>
>  * tests are kept separate from the code they relate to
>  * there are often odd test module file naming practices required
> (e.g., nova/a/api.py and nova/b/api.py both needing test cases in
> nova/tests/)
>  * there's no standard exercised for whether a subpackage gets a
> single test case module or whether it gets a test case subpackage
>  * test modules tend to be very long (and thus hard to navigate) due
> to the awkwardness of naming modules when all the code lives together
>  * it makes it harder for newcomers to find code; when they live
> together, it's a no-brainer
>
> OpenStack is definitely not a small project, and as our test coverage
> becomes more complete, these issues will have increased impact. I
> would like to clean all of this up :-) And I'm volunteering to do the
> work! Here's the sort of thing I envision, using nova.volume as an
> example:
>
>  * create nova/volume/tests
>  * move all scheduler-related tests (there are several) from
> nova/tests into nova/volume/tests
>  * break out tests on a per-module basis (e.g., nova/volume/driver.py
> would get the test module nova/volume/tests/test_driver.py, etc.)
>  * for modules that have already been broken out at a more
> fine-grained level, keep (smaller test case modules are nice!)
>  * only nova/*.py files will have a test case module in nova/tests
>  * bonus: update the test runner to print the full dotted path so it's
> immediately (and visually) clear where one has to go to address any
> failures
>
> Given approval, this work would be done in its own blueprint.

I've created a blueprint for this proposed work here:
  https://blueprints.launchpad.net/nova/+spec/unit-test-reorg

d

> All this
> work would be done in small chunks (probably one branch per module) so
> that it will be easy to review and adjust the approach as needed.
>
> Thoughts?
>
> d
>
> >
> > --
> > Soren Hansen        | http://linux2go.dk/
> > Ubuntu Developer    | http://www.ubuntu.com/
> > OpenStack Developer | http://www.openstack.org/
> >
> > ___
> > Mailing list: https://launchpad.net/~openstack
> > Post to     : openstack@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~openstack
> > More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-12-01 Thread Duncan McGreggor
On 30 Nov 2011 - 11:07, Duncan McGreggor wrote:
> On Tue, Nov 29, 2011 at 12:21 PM, Soren Hansen  wrote:
> > It's been a bit over a week since I started this thread. So far we've
> > agreed that running the test suite is too slow, mostly because there
> > are too many things in there that aren't unit tests.
> >
> > We've also discussed my fake db implementation at length. I think
> > we've generally agreed that it isn't completely insane, so that's
> > moving along nicely.
> >
> > Duncan has taken the first steps needed to split the test suite into
> > unit tests and everything else:
> >
> >   https://review.openstack.org/#change,1879
> >
> > Just one more core +1 needed. Will someone beat me to it? Only time
> > will tell :) Thanks, Duncan!
> >
> > Anything else around unit testing anyone wants to get into The Great
> > Big Plan[tm]?
>
> Actually, yeah... one more thing :-)
>
> Jay and I were chatting about organization of infrastructure last
> night/this morning (on the review comments for the branch I
> submitted). He said that I should raise a concern I expressed for
> wider discussion: right now, tests are all piled into the tests
> directory. Below are my thoughts on this.
>
> I think such an approach is just fine for smaller projects; there's
> not a lot there, and it's all pretty easy to find. For large projects,
> this seems like not such a good idea for the following reasons:
>
>  * tests are kept separate from the code they relate to
>  * there are often odd test module file naming practices required
> (e.g., nova/a/api.py and nova/b/api.py both needing test cases in
> nova/tests/)
>  * there's no standard exercised for whether a subpackage gets a
> single test case module or whether it gets a test case subpackage
>  * test modules tend to be very long (and thus hard to navigate) due
> to the awkwardness of naming modules when all the code lives together
>  * it makes it harder for newcomers to find code; when they live
> together, it's a no-brainer
>
> OpenStack is definitely not a small project, and as our test coverage
> becomes more complete, these issues will have increased impact. I
> would like to clean all of this up :-) And I'm volunteering to do the
> work! Here's the sort of thing I envision, using nova.volume as an
> example:
>
>  * create nova/volume/tests
>  * move all scheduler-related tests (there are several) from
> nova/tests into nova/volume/tests

This was a typo, and should have read:

 * move all volume-related tests ...

d

>  * break out tests on a per-module basis (e.g., nova/volume/driver.py
> would get the test module nova/volume/tests/test_driver.py, etc.)
>  * for modules that have already been broken out at a more
> fine-grained level, keep (smaller test case modules are nice!)
>  * only nova/*.py files will have a test case module in nova/tests
>  * bonus: update the test runner to print the full dotted path so it's
> immediately (and visually) clear where one has to go to address any
> failures
>
> Given approval, this work would be done in its own blueprint. All this
> work would be done in small chunks (probably one branch per module) so
> that it will be easy to review and adjust the approach as needed.
>
> Thoughts?
>
> d
>
> >
> > --
> > Soren Hansen        | http://linux2go.dk/
> > Ubuntu Developer    | http://www.ubuntu.com/
> > OpenStack Developer | http://www.openstack.org/
> >
> > ___
> > Mailing list: https://launchpad.net/~openstack
> > Post to     : openstack@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~openstack
> > More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-30 Thread Duncan McGreggor
On Wed, Nov 30, 2011 at 11:35 AM, Jason Kölker  wrote:
> On Wed, 2011-11-30 at 11:07 -0800, Duncan McGreggor wrote:
>>  * create nova/volume/tests
>>  * move all scheduler-related tests (there are several) from
>> nova/tests into nova/volume/tests
>>  * break out tests on a per-module basis (e.g., nova/volume/driver.py
>> would get the test module nova/volume/tests/test_driver.py, etc.)
>>  * for modules that have already been broken out at a more
>> fine-grained level, keep (smaller test case modules are nice!)
>>  * only nova/*.py files will have a test case module in nova/tests
>>  * bonus: update the test runner to print the full dotted path so it's
>> immediately (and visually) clear where one has to go to address any
>> failures
>>
>> Given approval, this work would be done in its own blueprint. All this
>> work would be done in small chunks (probably one branch per module) so
>> that it will be easy to review and adjust the approach as needed.
>>
>> Thoughts?
>
> I like this. It paves the way to being able to break nova up into smaller
> interchangeable packages. My only hesitation is stubs and fakes
> sharing.
>
> I don't want to bring up the unit vs func test debate again, but
> currently if a change happens on one side of the rpc layer, there is
> *hopefully* only one fake/stub set to change. If each module contains
> its own set of tests, I worry that each module will start having its
> own set of fakes which will have to be updated. (I know this is already
> the case in many places, but hopefully we are all working on fixing
> that, right...? ;)

Yup! That's currently part of the work I'm doing in this blueprint:
  https://blueprints.launchpad.net/nova/+spec/consolidate-testing-infrastructure

Note this line:
  [oubiwann] moving other fakes in various subpackages into
nova.testing.fake: TODO

:-)
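
To illustrate, a shared fake along those lines might look roughly like
this (the nova.testing.fake path comes from the blueprint note above;
the contents are assumed, not the actual implementation):

    # Hypothetical nova/testing/fake.py: one shared, in-memory fake that
    # every per-package test suite imports, so a change on one side of
    # the rpc layer only has one fake to update.
    _messages = []


    def fake_rpc_cast(context, topic, msg):
        """Stand-in for an rpc cast that just records the message."""
        _messages.append((topic, msg))


    def reset():
        """Called from test setUp/tearDown to wipe recorded state."""
        del _messages[:]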

d

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-30 Thread Chris Behrens
It'll be a couple days yet.  I was refactoring a few things in the scheduler 
and while re-doing some tests, I ended up going down this rabbit hole of 
re-doing all of the tests.  It's turned into a 6500 line diff so far... :) 
which is a bit much for just the refactoring that I need to get in first.  So, 
I'm currently splitting these out into a couple of different reviews.

- Chris


On Nov 30, 2011, at 1:53 PM, Duncan McGreggor wrote:

> On 30 Nov 2011 - 19:26, Chris Behrens wrote:
>> I need to catch up a bit with this thread, but I wanted to mention I
>> have a huge patch coming that refactors almost all of the scheduler
>> tests into true unit tests.
> 
> Nice!
> 
>> I'd started this for other reasons and I
>> hope it jives with the plans here.  But if anyone is looking at the
>> scheduler tests, we should sync up.
> 
> I was going to actually use the scheduler as the example when I sent
> this email out, but I switched to something a bit cleaner instead... so
> this is great news! Can't wait to see it :-)
> 
> d
> 
>> - Chris
>> 
>> On Nov 30, 2011, at 1:07 PM, Duncan McGreggor  wrote:
>> 
>>> On Tue, Nov 29, 2011 at 12:21 PM, Soren Hansen  wrote:
 It's been a bit over a week since I started this thread. So far we've
 agreed that running the test suite is too slow, mostly because there
 are too many things in there that aren't unit tests.
 
 We've also discussed my fake db implementation at length. I think
 we've generally agreed that it isn't completely insane, so that's
 moving along nicely.
 
 Duncan has taken the first steps needed to split the test suite into
 unit tests and everything else:
 
  https://review.openstack.org/#change,1879
 
 Just one more core +1 needed. Will someone beat me to it? Only time
 will tell :) Thanks, Duncan!
 
 Anything else around unit testing anyone wants to get into The Great
 Big Plan[tm]?
>>> 
>>> Actually, yeah... one more thing :-)
>>> 
>>> Jay and I were chatting about organization of infrastructure last
>>> night/this morning (on the review comments for the branch I
>>> submitted). He said that I should raise a concern I expressed for
>>> wider discussion: right now, tests are all piled into the tests
>>> directory. Below are my thoughts on this.
>>> 
>>> I think such an approach is just fine for smaller projects; there's
>>> not a lot there, and it's all pretty easy to find. For large projects,
>>> this seems like not such a good idea for the following reasons:
>>> 
>>> * tests are kept separate from the code they relate to
>>> * there are often odd test module file naming practices required
>>> (e.g., nova/a/api.py and nova/b/api.py both needing test cases in
>>> nova/tests/)
>>> * there's no standard exercised for whether a subpackage gets a
>>> single test case module or whether it gets a test case subpackage
>>> * test modules tend to be very long (and thus hard to navigate) due
>>> to the awkwardness of naming modules when all the code lives together
>>> * it makes it harder for newcomers to find code; when they live
>>> together, it's a no-brainer
>>> 
>>> OpenStack is definitely not a small project, and as our test coverage
>>> becomes more complete, these issues will have increased impact. I
>>> would like to clean all of this up :-) And I'm volunteering to do the
>>> work! Here's the sort of thing I envision, using nova.volume as an
>>> example:
>>> 
>>> * create nova/volume/tests
>>> * move all scheduler-related tests (there are several) from
>>> nova/tests into nova/volume/tests
>>> * break out tests on a per-module basis (e.g., nova/volume/driver.py
>>> would get the test module nova/volume/tests/test_driver.py, etc.)
>>> * for modules that have already been broken out at a more
>>> fine-grained level, keep (smaller test case modules are nice!)
>>> * only nova/*.py files will have a test case module in nova/tests
>>> * bonus: update the test runner to print the full dotted path so it's
>>> immediately (and visually) clear where one has to go to address any
>>> failures
>>> 
>>> Given approval, this work would be done in its own blueprint. All this
>>> work would be done in small chunks (probably one branch per module) so
>>> that it will be easy to review and adjust the approach as needed.
>>> 
>>> Thoughts?
>>> 
>>> d
>>> 
 
 --
 Soren Hansen| http://linux2go.dk/
 Ubuntu Developer| http://www.ubuntu.com/
 OpenStack Developer | http://www.openstack.org/
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
>>> 
>>> ___
>>> Mailing list: https://launchpad.net/~openstack
>>> Post to : openstack@lists.launchpad.net
>>> Unsubscribe : https://launchpad.net/~openstack
>>> More help   : https://

Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-30 Thread Duncan McGreggor
On 30 Nov 2011 - 19:26, Chris Behrens wrote:
> I need to catch up a bit with this thread, but I wanted to mention I
> have a huge patch coming that refactors almost all of the scheduler
> tests into true unit tests.

Nice!

> I'd started this for other reasons and I
> hope it jives with the plans here.  But if anyone is looking at the
> scheduler tests, we should sync up.

I was going to actually use the scheduler as the example when I sent
this email out, but I switched to something a bit cleaner instead... so
this is great news! Can't wait to see it :-)

d

> - Chris
>
> On Nov 30, 2011, at 1:07 PM, Duncan McGreggor  wrote:
>
> > On Tue, Nov 29, 2011 at 12:21 PM, Soren Hansen  wrote:
> >> It's been a bit over a week since I started this thread. So far we've
> >> agreed that running the test suite is too slow, mostly because there
> >> are too many things in there that aren't unit tests.
> >>
> >> We've also discussed my fake db implementation at length. I think
> >> we've generally agreed that it isn't completely insane, so that's
> >> moving along nicely.
> >>
> >> Duncan has taken the first steps needed to split the test suite into
> >> unit tests and everything else:
> >>
> >>   https://review.openstack.org/#change,1879
> >>
> >> Just one more core +1 needed. Will someone beat me to it? Only time
> >> will tell :) Thanks, Duncan!
> >>
> >> Anything else around unit testing anyone wants to get into The Great
> >> Big Plan[tm]?
> >
> > Actually, yeah... one more thing :-)
> >
> > Jay and I were chatting about organization of infrastructure last
> > night/this morning (on the review comments for the branch I
> > submitted). He said that I should raise a concern I expressed for
> > wider discussion: right now, tests are all piled into the tests
> > directory. Below are my thoughts on this.
> >
> > I think such an approach is just fine for smaller projects; there's
> > not a lot there, and it's all pretty easy to find. For large projects,
> > this seems like not such a good idea for the following reasons:
> >
> > * tests are kept separate from the code they relate to
> > * there are often odd test module file naming practices required
> > (e.g., nova/a/api.py and nova/b/api.py both needing test cases in
> > nova/tests/)
> > * there's no standard exercised for whether a subpackage gets a
> > single test case module or whether it gets a test case subpackage
> > * test modules tend to be very long (and thus hard to navigate) due
> > to the awkwardness of naming modules when all the code lives together
> > * it makes it harder for newcomers to find code; when they live
> > together, it's a no-brainer
> >
> > OpenStack is definitely not a small project, and as our test coverage
> > becomes more complete, these issues will have increased impact. I
> > would like to clean all of this up :-) And I'm volunteering to do the
> > work! Here's the sort of thing I envision, using nova.volume as an
> > example:
> >
> > * create nova/volume/tests
> > * move all scheduler-related tests (there are several) from
> > nova/tests into nova/volume/tests
> > * break out tests on a per-module basis (e.g., nova/volume/driver.py
> > would get the test module nova/volume/tests/test_driver.py, etc.)
> > * for modules that have already been broken out at a more
> > fine-grained level, keep (smaller test case modules are nice!)
> > * only nova/*.py files will have a test case module in nova/tests
> > * bonus: update the test runner to print the full dotted path so it's
> > immediately (and visually) clear where one has to go to address any
> > failures
> >
> > Given approval, this work would be done in its own blueprint. All this
> > work would be done in small chunks (probably one branch per module) so
> > that it will be easy to review and adjust the approach as needed.
> >
> > Thoughts?
> >
> > d
> >
> >>
> >> --
> >> Soren Hansen| http://linux2go.dk/
> >> Ubuntu Developer| http://www.ubuntu.com/
> >> OpenStack Developer | http://www.openstack.org/
> >>
> >> ___
> >> Mailing list: https://launchpad.net/~openstack
> >> Post to : openstack@lists.launchpad.net
> >> Unsubscribe : https://launchpad.net/~openstack
> >> More help   : https://help.launchpad.net/ListHelp
> >
> > ___
> > Mailing list: https://launchpad.net/~openstack
> > Post to : openstack@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~openstack
> > More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-30 Thread Jason Kölker
On Wed, 2011-11-30 at 11:07 -0800, Duncan McGreggor wrote:
>  * create nova/volume/tests
>  * move all scheduler-related tests (there are several) from
> nova/tests into nova/volume/tests
>  * break out tests on a per-module basis (e.g., nova/volume/driver.py
> would get the test module nova/volume/tests/test_driver.py, etc.)
>  * for modules that have already been broken out at a more
> fine-grained level, keep (smaller test case modules are nice!)
>  * only nova/*.py files will have a test case module in nova/tests
>  * bonus: update the test runner to print the full dotted path so it's
> immediately (and visually) clear where one has to go to address any
> failures
> 
> Given approval, this work would be done in its own blueprint. All this
> work would be done in small chunks (probably one branch per module) so
> that it will be easy to review and adjust the approach as needed.
> 
> Thoughts?

I like this. It paves the way to being able to break nova up into smaller
interchangeable packages. My only hesitation is stubs and fakes
sharing.

I don't want to bring up the unit vs func test debate again, but
currently if a change happens on one side of the rpc layer, there is
*hopefully* only one fake/stub set to change. If each module contains
its own set of tests, I worry that each module will start having its
own set of fakes which will have to be updated. (I know this is already
the case in many places, but hopefully we are all working on fixing
that, right...? ;)

TS;DR Is Bueno! I'm lazy.

Happy Hacking!

7-11


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-30 Thread Chris Behrens
I need to catch up a bit with this thread, but I wanted to mention I have a 
huge patch coming that refactors almost all of the scheduler tests into true 
unit tests.  I'd started this for other reasons and I hope it jives with the 
plans here.  But if anyone is looking at the scheduler tests, we should sync up.

- Chris

On Nov 30, 2011, at 1:07 PM, Duncan McGreggor  wrote:

> On Tue, Nov 29, 2011 at 12:21 PM, Soren Hansen  wrote:
>> It's been a bit over a week since I started this thread. So far we've
>> agreed that running the test suite is too slow, mostly because there
>> are too many things in there that aren't unit tests.
>> 
>> We've also discussed my fake db implementation at length. I think
>> we've generally agreed that it isn't completely insane, so that's
>> moving along nicely.
>> 
>> Duncan has taken the first steps needed to split the test suite into
>> unit tests and everything else:
>> 
>>   https://review.openstack.org/#change,1879
>> 
>> Just one more core +1 needed. Will someone beat me to it? Only time
>> will tell :) Thanks, Duncan!
>> 
>> Anything else around unit testing anyone wants to get into The Great
>> Big Plan[tm]?
> 
> Actually, yeah... one more thing :-)
> 
> Jay and I were chatting about organization of infrastructure last
> night/this morning (on the review comments for the branch I
> submitted). He said that I should raise a concern I expressed for
> wider discussion: right now, tests are all piled into the tests
> directory. Below are my thoughts on this.
> 
> I think such an approach is just fine for smaller projects; there's
> not a lot there, and it's all pretty easy to find. For large projects,
> this seems like not such a good idea for the following reasons:
> 
> * tests are kept separate from the code they relate to
> * there are often odd test module file naming practices required
> (e.g., nova/a/api.py and nova/b/api.py both needing test cases in
> nova/tests/)
> * there's no standard exercised for whether a subpackage gets a
> single test case module or whether it gets a test case subpackage
> * test modules tend to be very long (and thus hard to navigate) due
> to the awkwardness of naming modules when all the code lives together
> * it makes it harder for newcomers to find code; when they live
> together, it's a no-brainer
> 
> OpenStack is definitely not a small project, and as our test coverage
> becomes more complete, these issues will have increased impact. I
> would like to clean all of this up :-) And I'm volunteering to do the
> work! Here's the sort of thing I envision, using nova.volume as an
> example:
> 
> * create nova/volume/tests
> * move all scheduler-related tests (there are several) from
> nova/tests into nova/volume/tests
> * break out tests on a per-module basis (e.g., nova/volume/driver.py
> would get the test module nova/volume/tests/test_driver.py, etc.)
> * for modules that have already been broken out at a more
> fine-grained level, keep (smaller test case modules are nice!)
> * only nova/*.py files will have a test case module in nova/tests
> * bonus: update the test runner to print the full dotted path so it's
> immediately (and visually) clear where one has to go to address any
> failures
> 
> Given approval, this work would be done in its own blueprint. All this
> work would be done in small chunks (probably one branch per module) so
> that it will be easy to review and adjust the approach as needed.
> 
> Thoughts?
> 
> d
> 
>> 
>> --
>> Soren Hansen| http://linux2go.dk/
>> Ubuntu Developer| http://www.ubuntu.com/
>> OpenStack Developer | http://www.openstack.org/
>> 
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-30 Thread Duncan McGreggor
On Tue, Nov 29, 2011 at 12:21 PM, Soren Hansen  wrote:
> It's been a bit over a week since I started this thread. So far we've
> agreed that running the test suite is too slow, mostly because there
> are too many things in there that aren't unit tests.
>
> We've also discussed my fake db implementation at length. I think
> we've generally agreed that it isn't completely insane, so that's
> moving along nicely.
>
> Duncan has taken the first steps needed to split the test suite into
> unit tests and everything else:
>
>   https://review.openstack.org/#change,1879
>
> Just one more core +1 needed. Will someone beat me to it? Only time
> will tell :) Thanks, Duncan!
>
> Anything else around unit testing anyone wants to get into The Great
> Big Plan[tm]?

Actually, yeah... one more thing :-)

Jay and I were chatting about organization of infrastructure last
night/this morning (on the review comments for the branch I
submitted). He said that I should raise a concern I expressed for
wider discussion: right now, tests are all piled into the tests
directory. Below are my thoughts on this.

I think such an approach is just fine for smaller projects; there's
not a lot there, and it's all pretty easy to find. For large projects,
this seems like not such a good idea for the following reasons:

 * tests are kept separate from the code they relate to
 * there are often odd test module file naming practices required
(e.g., nova/a/api.py and nova/b/api.py both needing test cases in
nova/tests/)
 * there's no standard exercised for whether a subpackage gets a
single test case module or whether it gets a test case subpackage
 * test modules tend to be very long (and thus hard to navigate) due
to the awkwardness of naming modules when all the code lives together
 * it makes it harder for newcomers to find code; when they live
together, it's a no-brainer

OpenStack is definitely not a small project, and as our test coverage
becomes more complete, these issues will have increased impact. I
would like to clean all of this up :-) And I'm volunteering to do the
work! Here's the sort of thing I envision, using nova.volume as an
example:

 * create nova/volume/tests
 * move all scheduler-related tests (there are several) from
nova/tests into nova/volume/tests
 * break out tests on a per-module basis (e.g., nova/volume/driver.py
would get the test module nova/volume/tests/test_driver.py, etc.)
 * for modules that have already been broken out at a more
fine-grained level, keep them as they are (smaller test case modules are nice!)
 * only nova/*.py files will have a test case module in nova/tests
 * bonus: update the test runner to print the full dotted path so it's
immediately (and visually) clear where one has to go to address any
failures
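
To make the per-module breakout and the dotted-path bonus above a bit
more concrete, here's a rough sketch (contents assumed, not actual nova
code) of what nova/volume/tests/test_driver.py could look like:

    # Hypothetical nova/volume/tests/test_driver.py under the proposed
    # layout; class and test names are made up for illustration.
    import unittest


    class VolumeDriverTestCase(unittest.TestCase):
        """Unit tests living right next to nova/volume/driver.py."""

        def test_placeholder(self):
            # A real test would exercise nova.volume.driver directly.
            self.assertTrue(True)


    if __name__ == '__main__':
        # A runner printing full dotted paths would report failures as e.g.
        # nova.volume.tests.test_driver.VolumeDriverTestCase.test_placeholder
        unittest.main()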

Given approval, this work would be done in its own blueprint. All this
work would be done in small chunks (probably one branch per module) so
that it will be easy to review and adjust the approach as needed.

Thoughts?

d

>
> --
> Soren Hansen        | http://linux2go.dk/
> Ubuntu Developer    | http://www.ubuntu.com/
> OpenStack Developer | http://www.openstack.org/
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to     : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-29 Thread Nachi Ueno
Hi folks

>Jay
Thank you for pointing this out! :)
Hey OpenStackers, please help with the forward-porting :P

>Soren
>Anything else around unit testing anyone wants to get into The Great
Big Plan[tm]?

And also, we should have a policy for unit tests.
Something like this:

- New code should have a concrete specification doc, and all unit
tests should be written based on the specs
- New code should include negative test cases for each parameter.
- New code shouldn't lower coverage
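
As a tiny illustration of the negative-test point, something along
these lines (the helper and names are invented, not real nova code):

    import unittest


    def create_instances(count):
        # Hypothetical helper standing in for a real API entry point.
        if count < 1:
            raise ValueError('count must be >= 1')
        return ['instance-%d' % i for i in range(count)]


    class CreateInstancesNegativeTestCase(unittest.TestCase):
        # One negative case per rejected parameter value.
        def test_rejects_zero(self):
            self.assertRaises(ValueError, create_instances, 0)

        def test_rejects_negative(self):
            self.assertRaises(ValueError, create_instances, -1)


    if __name__ == '__main__':
        unittest.main()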

Cheers
Nati

2011/11/29 Jay Pipes :
> On Tue, Nov 29, 2011 at 3:21 PM, Soren Hansen  wrote:
>> Anything else around unit testing anyone wants to get into The Great
>> Big Plan[tm]?
>
> Well, NTT has written well over a thousand new unit tests for Nova. It
> would be great to get some more help from everyone in forward-porting
> them. To date, we've been a bit stymied by lack of resources to do the
> forward-porting, so if anyone has some spare cycles, there are an
> absolute ton of both new tests and bug fixes needing
> forward-porting...
>
> From another email, here are the instructions for how to do the
> forward-porting for those interested in helping out.
>
> This is the NTT bug fix + unit test branch. Note that this is the
> branch that is based on stable/diablo:
>
> https://github.com/ntt-pf-lab/nova/branches
>
> All the bugs in the OpenStack QA project (and Nova project) that need
> forward-porting are tagged with "forward-port-needed". You can see a
> list of the unassigned ones needing forward-porting here:
>
> http://bit.ly/rPVjCf
>
>
> The workflow for forward-porting these fixes/new tests is like this:
>
> A. Pick a bug from above list (http://bit.ly/rPVjCf)
>
> B. Assign yourself to bug
>
> C. Fix problem and review request
>
> I believe folks need some further instructions on this step.
> Basically, the NTT team has *already* fixed the bug, but we need to
> apply the bug fix to trunk and propose this fix for merging into
> trunk.
>
> The following steps are how to do this. I'm going to take a bug fix as
> an example, and show the steps needed to forward port it to trunk.
> Here is the bug and associated fix I will forward port:
>
> https://bugs.launchpad.net/openstack-qa/+bug/883293
>
> Nati's original bug fix branch is linked on the bug report:
>
> https://github.com/ntt-pf-lab/nova/tree/openstack-qa-nova-883293
>
> When looking at the branch, you can see the latest commits by clicking
> the "Commits" tab near the top of the page:
>
> https://github.com/ntt-pf-lab/nova/commits/openstack-qa-nova-883293
>
> As you can see, the top 2 commits form the bug fix from Nati -- the
> last commit being a test case, and the second to last commit being a
> fix for the infinite loop referenced in the bug report. The two
> commits have the following two SHA1 identifiers:
>
> 9cf5945c9e64d1c6a2eb6d9499e80d6c19aed058
> 2a95311263cbda5886b9409284fea2d155b3cada
>
> These are the two commits I need to apply to my local *trunk* branch
> of Nova. To do so, I do the following locally:
>
> 1) Before doing anything, we first need to set up a remote for the NTT
> team repo on GitHub:
>
> jpipes@uberbox:~/repos/nova$ git remote add ntt
> https://github.com/ntt-pf-lab/nova.git
> jpipes@uberbox:~/repos/nova$ git fetch ntt
> remote: Counting objects: 2255, done.
> remote: Compressing objects: 100% (432/432), done.
> remote: Total 2120 (delta 1694), reused 2108 (delta 1686)
> Receiving objects: 100% (2120/2120), 547.09 KiB | 293 KiB/s, done.
> Resolving deltas: 100% (1694/1694), completed with 81 local objects.
> From https://github.com/ntt-pf-lab/nova
>  * [new branch]      int001     -> ntt/int001
>  * [new branch]      int001_base -> ntt/int001_base
>  * [new branch]      int002.d1  -> ntt/int002.d1
>  * [new branch]      int003     -> ntt/int003
>  * [new branch]      ntt/stable/diablo -> ntt/ntt/stable/diablo
>  * [new branch]      openstack-qa-api-validation ->
> ntt/openstack-qa-api-validation
> 
>  * [new branch]      openstack-qa-nova-888229 -> ntt/openstack-qa-nova-888229
>  * [new branch]      openstack-qa-test-branch -> ntt/openstack-qa-test-branch
>  * [new branch]      stable/diablo -> ntt/stable/diablo
>
> 2) Now that we have fetched the NTT branches (containing all the bug
> fixes we need to forward-port), we create a local branch based off of
> Essex trunk. On my machine, this local Essex trunk branch is called
> master:
>
> jpipes@uberbox:~/repos/nova$ git branch
> * diablo
>  master
> jpipes@uberbox:~/repos/nova$ git checkout master
> Switched to branch 'master'
> jpipes@uberbox:~/repos/nova$ git checkout -b bug883293
> Switched to a new branch 'bug883293'
>
> 3) We now need to cherry-pick the two commits from above. I do so in
> reverse order, as I want to apply the patch with the bug fix first and
> then the patch with the test case:
>
> jpipes@uberbox:~/repos/nova$ git cherry-pick
> 2a95311263cbda5886b9409284fea2d155b3cada
> [bug883293 81e49b7] combination of log_notifier and
> log.PublishErrorsHandler causes infinite loop Fixes bug 883293.

Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-29 Thread Soren Hansen
It's been a bit over a week since I started this thread. So far we've
agreed that running the test suite is too slow, mostly because there
are too many things in there that aren't unit tests.

We've also discussed my fake db implementation at length. I think
we've generally agreed that it isn't completely insane, so that's
moving along nicely.

Duncan has taken the first steps needed to split the test suite into
unit tests and everything else:

   https://review.openstack.org/#change,1879

Just one more core +1 needed. Will someone beat me to it? Only time
will tell :) Thanks, Duncan!

Anything else around unit testing anyone wants to get into The Great
Big Plan[tm]?

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-28 Thread Trey Morris
I retract any concerns I brought up earlier as Soren gracefully answered
them in last week's meeting. +1 from me. I look forward to the review.

-tr3buchet

On Thu, Nov 24, 2011 at 3:30 AM, Soren Hansen  wrote:

> 2011/11/24 Sandy Walsh :
> > haha ... worst email thread ever.
> >
> > I'll catch you on IRC ... we've diverged too far to make sense.
>
> For anyone interested, this conversation continued on IRC yesterday.
> You can read it at the very end of
>
>
> http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2011-11-23.log
>
> as well as the very beginning of
>
>
> http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2011-11-24.log
>
> My internet connection disappeared just as we were finishing our
> discussion.
>
> --
> Soren Hansen| http://linux2go.dk/
> Ubuntu Developer| http://www.ubuntu.com/
> OpenStack Developer | http://www.openstack.org/
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-24 Thread Yuriy Taraday
Forgot to CC maillist again.

Kind regards, Yuriy.

On Thu, Nov 24, 2011 at 14:13, Yuriy Taraday  wrote:

> > 2011-11-24T00:12:57   I guess eventually we need to push
> that to all nova..api layers
>
> I think there should be some common package that allows doing this
> with any joint where we can plug in lots of different modules. Such joints
> exist in all OpenStack projects, so this package should be used in every
> one of them. Its function should be like "make sure that all these modules
> look and behave like this etalon (fake) one, and let tests use the fake
> one".
>
> By the way, speaking of other projects: I was developing an LDAP store for
> Keystone and found the structure of the store API there not ideal, but
> much better than the one in Nova. The one strong point there is that all
> parts of storage are (or strive to be) loosely coupled, so that we can
> just tune the config to store different parts of knowledge in separate,
> independent storages.
> Such loose coupling makes it easier to test and develop a storage backend
> piece by piece.
>
> Kind regards, Yuriy.
>
> PS: I strongly believe in this community and dream about OpenStack Common
> project.
>
>
> On Thu, Nov 24, 2011 at 13:30, Soren Hansen  wrote:
>
>> 2011/11/24 Sandy Walsh :
>> > haha ... worst email thread ever.
>> >
>> > I'll catch you on IRC ... we've diverged too far to make sense.
>>
>> For anyone interested, this conversation continued on IRC yesterday.
>> You can read it at the very end of
>>
>>
>> http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2011-11-23.log
>>
>> as well as the very beginning of
>>
>>
>> http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2011-11-24.log
>>
>> My internet connection disappeared just as we were finishing our
>> discussion.
>>
>> --
>> Soren Hansen| http://linux2go.dk/
>> Ubuntu Developer| http://www.ubuntu.com/
>> OpenStack Developer | http://www.openstack.org/
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-24 Thread Soren Hansen
2011/11/24 Sandy Walsh :
> haha ... worst email thread ever.
>
> I'll catch you on IRC ... we've diverged too far to make sense.

For anyone interested, this conversation continued on IRC yesterday.
You can read it at the very end of

   
http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2011-11-23.log

as well as the very beginning of

   
http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2011-11-24.log

My internet connection disappeared just as we were finishing our discussion.

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-23 Thread Sandy Walsh
haha ... worst email thread ever.

I'll catch you on IRC ... we've diverged too far to make sense.

-S

From: Soren Hansen [so...@linux2go.dk]
Sent: Wednesday, November 23, 2011 6:30 PM
To: Sandy Walsh
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] [nova-testing] Efforts for Essex

2011/11/23 Sandy Walsh :
> I understand what you're proposing, but I'm backtracking a little.
> (my kingdom for you and a whiteboard in the same room :)

Well, IRC would be a good start. :) I haven't seen you on IRC for days?

> I think that you could have a hybrid of your
> db.do_something("desired_return_value")

I may be reading too much into this, but this example suggests you're
not following me, to be honest.

db.instance_create is not a method I'm adding. It is an existing
method. It's the method you use to add an instance to the data store,
so it's not so much about passing it "desired return values". It's
about adding an instance to the database in exactly the same fashion
as production code would have done it, thus allowing subsequent calls
to instance_get (or any of the ~30 other methods that return one or
more Instance objects) to return it appropriately. And by
appropriately, I mean: with all the same attributes as one of the real
db drivers would have returned.

> to produce:
> self.sorens_mox.Set(nova.db, 'instance_get_by_uuid', {'name': 'this or that',
>                'instance_type_id': 42})

We have this functionality today. My example:

   self.stubs.Set(nova.db, 'instance_get_by_uuid', fake_instance_get)

was copied from one of our existing tests.

--
Soren Hansen| http://linux2go.dk/
Ubuntu Developer| http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-23 Thread Soren Hansen
2011/11/23 Sandy Walsh :
> I understand what you're proposing, but I'm backtracking a little.
> (my kingdom for you and a whiteboard in the same room :)

Well, IRC would be a good start. :) I haven't seen you on IRC for days?

> I think that you could have a hybrid of your
> db.do_something("desired_return_value")

I may be reading too much into this, but this example suggests you're
not following me, to be honest.

db.instance_create is not a method I'm adding. It is an existing
method. It's the method you use to add an instance to the data store,
so it's not so much about passing it "desired return values". It's
about adding an instance to the database in exactly the same fashion
as production code would have done it, thus allowing subsequent calls
to instance_get (or any of the ~30 other methods that return one or
more Instance objects) to return it appropriately. And by
appropriately, I mean: with all the same attributes as one of the real
db drivers would have returned.

> to produce:
> self.sorens_mox.Set(nova.db, 'instance_get_by_uuid', {'name': 'this or that',
>                'instance_type_id': 42})

We have this functionality today. My example:

   self.stubs.Set(nova.db, 'instance_get_by_uuid', fake_instance_get)

was copied from one of our existing tests.

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-23 Thread Sandy Walsh
I understand what you're proposing, but I'm backtracking a little. 
(my kingdom for you and a whiteboard in the same room :)

I think that you could have a hybrid of your
db.do_something("desired_return_value")
and 
self.stubs.Set(nova.db, 'instance_get_by_uuid', fake_instance_get)
(which I don't think is terrible other than requiring a nested method)

to produce: 
self.sorens_mox.Set(nova.db, 'instance_get_by_uuid', {'name': 'this or that',
                'instance_type_id': 42})

which would work with things other than just the db.



---

> So, you've made a much better StubOutWithMock() and slightly better 
> stubs.Set() by (essentially) ignoring the method parameter checks and just 
> focusing on the return type.

No, no. Read my e-mail again. I don't want to do it that way either. I
showed two examples of what I'd like to get rid of, followed by what I'd
like to do instead.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-23 Thread Soren Hansen
2011/11/23 Sandy Walsh :
> :) yeah, you're completely misunderstanding me.

Likewise! :)

> So, you've made a much better StubOutWithMock() and slightly better 
> stubs.Set() by (essentially) ignoring the method parameter checks and just 
> focusing on the return type.

No, no. Read my e-mail again. I don't want to do it that way either. I
showed two examples of what I'd like to get rid of, followed by what I'd
like to do instead.

> Side note:
> I don't view tests that permit
> exercise_the_routine_that_will_eventually_do_an_instance_get()
> calls to be unit tests ... they're integration tests and the source of all 
> this headache in the first place.

I meant "eventually" as in "it'll probably do a bunch of other things,
but also do an instance_get", not as in "some number of layers down,
it'll do an instance_get".

> A unit test should be
> exercise_the_routine_that_will_directly_call_instance_get()
>
> Hopefully we're saying the same thing on this last point?

Absolutely.

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-23 Thread Sandy Walsh
:) yeah, you're completely misunderstanding me.

So, you've made a much better StubOutWithMock() and slightly better stubs.Set() 
by (essentially) ignoring the method parameter checks and just focusing on the 
return type. 

Using your example:

    def test_something(self):
        def fake_instance_get(context, instance_uuid):
            return {'name': 'this or that',
                    'instance_type_id': 42}

        self.stubs.Set(nova.db, 'instance_get_by_uuid', fake_instance_get)

        exercise_the_routine_that_will_eventually_do_an_instance_get()
        verify_that_the_system_is_now_in_the_desired_state()

Could your library be expanded to allow:

    def test_something(self):
        self.sorens_mox.Set(nova.db, 'instance_get_by_uuid',
                            {'name': 'this or that',
                             'instance_type_id': 42})
        self.sorens_mox.Set(nova.my_module, 'get_list_of_things', range(10))

        exercise_the_routine_that_will_eventually_do_an_instance_get_and_get_list()
        verify_that_the_system_is_now_in_the_desired_state()

See what I mean?
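
A rough sketch of what such a generic "canned return value" helper
could look like (purely hypothetical, not an existing mox or stubout
API):

    class CannedValueStubs(object):
        """Replace attributes with callables returning a fixed value."""

        def __init__(self):
            self._originals = []

        def Set(self, obj, attr_name, return_value):
            self._originals.append((obj, attr_name, getattr(obj, attr_name)))
            setattr(obj, attr_name, lambda *args, **kwargs: return_value)

        def UnsetAll(self):
            while self._originals:
                obj, attr_name, original = self._originals.pop()
                setattr(obj, attr_name, original)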

Side note: 
I don't view tests that permit 
exercise_the_routine_that_will_eventually_do_an_instance_get()
calls to be unit tests ... they're integration tests and the source of all this 
headache in the first place.

A unit test should be
exercise_the_routine_that_will_directly_call_instance_get()

Hopefully we're saying the same thing on this last point?

-S

From: Soren Hansen [so...@linux2go.dk]

Am I completely misunderstanding you?

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-23 Thread Soren Hansen
2011/11/23 Sandy Walsh :
> Thanks Soren, I see what you're doing now and it makes perfect sense.
> It'll be a nice helper class.

Cool.

> My only snipe would be that mox is generic to any library and this
> fake only gives the benefit to db operations. We have to remember
> "It's a db operation, so I have to do this. It's another method call
> so I need to do that"

I think of it more like "for db, I don't need to concern myself with
test doubles. There's still a bunch of other stuff where that's not
true, but for db, it Just Works[tm]."

> How much effort would it be to make it into a better/more generic mox library?

I don't see how that would even be possible? I'm writing a complete db
driver, backed by Python dicts rather than sqlalchemy+sqlite. I can't
imagine how you'd build something that generally can expose an API and
behaviour identical to an arbitrary module, which seems to be what
you're suggesting.

Ok, that's not entirely true. I can imagine injecting a proxy in front
of the real DB driver, recording its behaviour, and on subsequent test
runs returning canned responses, but I really wouldn't recommend
something like that. It's great for getting some insight into how a
particular module is used. You can use that information when writing
stubs, mocks, fakes, whatever, based on it, but I wouldn't rely on being
able to just replay its traffic and have everything work.
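
That record/replay idea could be sketched roughly like this (purely
illustrative, and as said above, not something I'd recommend):

    class RecordingProxy(object):
        """Wraps a real driver, forwards calls, records call/result pairs."""

        def __init__(self, real_driver):
            self._real = real_driver
            self.calls = []

        def __getattr__(self, name):
            attr = getattr(self._real, name)
            if not callable(attr):
                return attr

            def recorder(*args, **kwargs):
                result = attr(*args, **kwargs)
                self.calls.append((name, args, kwargs, result))
                return result
            return recorder

The recorded calls list is what you would then turn into canned
responses for later runs.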

Am I completely misunderstanding you?

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-23 Thread Sandy Walsh
Thanks Soren, I see what you're doing now and it makes perfect sense. It'll be 
a nice helper class.

My only snipe would be that mox is generic to any library and this fake only 
gives the benefit to db operations. We have to remember "It's a db operation, 
so I have to do this. It's another method call so I need to do that"

How much effort would it be to make it into a better/more generic mox library?

-S


From: Soren Hansen [so...@linux2go.dk]
Sent: Tuesday, November 22, 2011 7:38 PM
To: Sandy Walsh
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] [nova-testing] Efforts for Essex

2011/11/22 Sandy Walsh :
> I suspect the problem is coming in with our definition of "unit
> tests". I don't think a unit test should be calling out of the method
> being tested at all. So anything beyond stubbing out the methods
> within the method being tested seems like noise to me. What you're
> describing sounds more like integration tests.

If I'm testing a method that includes a call to the db api, the strategy
with which I choose to replace that call with a double does not change
whether the test is a unit test or not.

I'm simply replacing this:

    def test_something(self):
        self.mox.StubOutWithMock(db, 'instance_get')
        db.instance_get(mox.IgnoreArg(), mox.IgnoreArg()
                        ).AndReturn({'name': 'this or that',
                                     'instance_type_id': 42})

        exercise_the_routine_that_will_eventually_do_an_instance_get()
        verify_that_the_system_is_now_in_the_desired_state()

or this:

    def test_something(self):
        def fake_instance_get(context, instance_uuid):
            return {'name': 'this or that',
                    'instance_type_id': 42}

        self.stubs.Set(nova.db, 'instance_get_by_uuid', fake_instance_get)

        exercise_the_routine_that_will_eventually_do_an_instance_get()
        verify_that_the_system_is_now_in_the_desired_state()

with this:

    def test_something(self):
        ctxt = _get_context()
        db.instance_create(ctxt, {'name': 'this or that',
                                  'instance_type_id': 42})

        exercise_the_routine_that_will_eventually_do_an_instance_get()
        verify_that_the_system_is_now_in_the_desired_state()

Not only is this -- to my eye -- much more readable, but because the
fake db driver has been proven (by the db test suite) to give responses
that are exactly like what the real db driver would return, we have
better confidence in the output of the test. E.g. if the real db driver
always sets a particular attribute to a particular default value, it's
remarkably easy to forget to follow suit in an ad-hoc mock, and it's
even easier to forget to update the countless ad-hoc mocks later on, if
such a new attribute is added. This may or may not affect the tested
code's behaviour, but if that was easy to see/predict, we wouldn't need
tests to begin with :)

Over the course of this thread, I've heard many people raise concerns
about whether we'd really be testing the fake or testing the thing that
depends on the fake. I just don't get that at all. Surely a fake DB
driver that is proven to be true to its real counterpart should make us
*more* sure that we're testing our code correctly than an ad-hoc mock
whose correctness is very difficult to verify?

> I thought the motive of your thread was to create
> fast/small/readable/non-brittle/maintainable tests.

The motive was to gather testing related goals, action items, thoughts,
complaints, whatever. It just so happens that a lot of people (myself
included) think that speeding up the test suite and categorising tests
into "true unit tests" and "everything else" are important things to
look at.

> Integration tests, while important, make this goal difficult.

I agree. I'm very happy that there's a lot of people doing a lot of work
on the integration test suite so that I can focus more on unit tests. As
I think I've mentioned before, unit tests are really all we can expect
people to run.

> So, if we're both talking about real unit tests, I don't see the
> benefit of the fake.

Please elaborate (with my above comments in mind).

> As for my example of 123 vs "abc", that was a bad example. Let me
> rephrase ... in one test I may want to have an environment that has no
> pre-existing instances in the db. In another test I may want to have
> an environment with a hundred instances.
>
> I'd like to understand how configuring the fake for both of these
> scenarios will be any easier than just having a stub. It seems like an
> unnecessary abstraction.

First of all, the DB is blown away between each individual test, so we
don't have to worry about its initial state.

Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-22 Thread Soren Hansen
2011/11/22 Sandy Walsh :
> I suspect the problem is coming in with our definition of "unit
> tests". I don't think a unit test should be calling out of the method
> being tested at all. So anything beyond stubbing out the methods
> within the method being tested seems like noise to me. What you're
> describing sounds more like integration tests.

If I'm testing a method that includes a call to the db api, the strategy
with which I choose to replace that call with a double does not change
whether the test is a unit test or not.

I'm simply replacing this:

    def test_something(self):
        self.mox.StubOutWithMock(db, 'instance_get')
        db.instance_get(mox.IgnoreArg(), mox.IgnoreArg()
                        ).AndReturn({'name': 'this or that',
                                     'instance_type_id': 42})

        exercise_the_routine_that_will_eventually_do_an_instance_get()
        verify_that_the_system_is_now_in_the_desired_state()

or this:

    def test_something(self):
        def fake_instance_get(context, instance_uuid):
            return {'name': 'this or that',
                    'instance_type_id': 42}

        self.stubs.Set(nova.db, 'instance_get_by_uuid', fake_instance_get)

        exercise_the_routine_that_will_eventually_do_an_instance_get()
        verify_that_the_system_is_now_in_the_desired_state()

with this:

    def test_something(self):
        ctxt = _get_context()
        db.instance_create(ctxt, {'name': 'this or that',
                                  'instance_type_id': 42})

        exercise_the_routine_that_will_eventually_do_an_instance_get()
        verify_that_the_system_is_now_in_the_desired_state()

Not only is this -- to my eye -- much more readable, but because the
fake db driver has been proven (by the db test suite) to give responses
that are exactly like what the real db driver would return, we have
better confidence in the output of the test. E.g. if the real db driver
always sets a particular attribute to a particular default value, it's
remarkably easy to forget to follow suit in an ad-hoc mock, and it's
even easier to forget to update the countless ad-hoc mocks later on, if
such a new attribute is added. This may or may not affect the tested
code's behaviour, but if that was easy to see/predict, we wouldn't need
tests to begin with :)

Over the course of this thread, I've heard many people raise concerns
about whether we'd really be testing the fake or testing the thing that
depends on the fake. I just don't get that at all. Surely a fake DB
driver that is proven to be true to its real counterpart should make us
*more* sure that we're testing our code correctly than an ad-hoc mock
whose correctness is very difficult to verify?

> I thought the motive of your thread was to create
> fast/small/readable/non-brittle/maintainable tests.

The motive was to gather testing related goals, action items, thoughts,
complaints, whatever. It just so happens that a lot of people (myself
included) think that speeding up the test suite and categorising tests
into "true unit tests" and "everything else" are important things to
look at.

> Integration tests, while important, make this goal difficult.

I agree. I'm very happy that there are a lot of people doing a lot of work
on the integration test suite so that I can focus more on unit tests. As
I think I've mentioned before, unit tests are really all we can expect
people to run.

> So, if we're both talking about real unit tests, I don't see the
> benefit of the fake.

Please elaborate (with my above comments in mind).

> As for my example of 123 vs "abc", that was a bad example. Let me
> rephrase ... in one test I may want to have an environment that has no
> pre-existing instances in the db. In another test I may want to have
> an environment with a hundred instances.
>
> I'd like to understand how configuring the fake for both of these
> scenarios will be any easier than just having a stub. It seems like an
> unnecessary abstraction.

First of all, the DB is blown away between each individual test, so we
don't have to worry about its initial state.

In the first scenario, I'd do nothing. I have a clean slate, so I'm good
to go. In the second scenario, I'd just do 100 calls to
db.instance_create.  With the mock approach, I'd write a custom
instance_get with 100 if/elif clauses, returning whatever makes sense
for the given instance_id. Mind you, the objects that I return from my
mock may or may not be valid Instance objects. I can only hope that they
are close enough.
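
Roughly, the difference I have in mind looks like this (a sketch only, reusing
the made-up helper names from my earlier examples; none of this is actual nova
code):

# Fake-db approach: seed the data store with real-looking records.
def test_with_a_hundred_instances(self):
    ctxt = _get_context()
    for i in range(100):
        db.instance_create(ctxt, {'name': 'instance-%03d' % i,
                                  'instance_type_id': 42})

    exercise_the_routine_that_will_eventually_do_an_instance_get()
    verify_that_the_system_is_now_in_the_desired_state()

# Ad-hoc stub approach: hand-roll the answers and hope they're shaped like
# what the real driver would have returned.
def test_with_a_hundred_instances_stubbed(self):
    def fake_instance_get(context, instance_id):
        if instance_id == 1:
            return {'id': 1, 'name': 'instance-001', 'instance_type_id': 42}
        elif instance_id == 2:
            return {'id': 2, 'name': 'instance-002', 'instance_type_id': 42}
        # ...and so on, 98 more times
        raise Exception('unexpected instance_id %s' % instance_id)

    self.stubs.Set(nova.db, 'instance_get', fake_instance_get)

    exercise_the_routine_that_will_eventually_do_an_instance_get()
    verify_that_the_system_is_now_in_the_desired_state()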

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-22 Thread Sandy Walsh
Yeah, email is making this tricky.

I suspect the problem is coming in with our definition of "unit tests". I don't 
think a unit test should be calling out of the method being tested at all. So 
anything beyond stubbing out the methods within the method being tested seems 
like noise to me. What you're describing sounds more like integration tests. I 
thought the motive of your thread was to create 
fast/small/readable/non-brittle/maintainable tests. Integration tests, while 
important, make this goal difficult. So, if we're both talking about real unit 
tests, I don't see the benefit of the fake.

As for my example of 123 vs "abc", that was a bad example. Let me rephrase ... 
in one test I may want to have an environment that has no pre-existing 
instances in the db. In another test I may want to have an environment with a 
hundred instances. 

I'd like to understand how configuring the fake for both of these scenarios 
will be any easier than just having a stub. It seems like an unnecessary 
abstraction.

-S




From: Soren Hansen [so...@linux2go.dk]
Sent: Tuesday, November 22, 2011 4:37 PM
To: Sandy Walsh
Cc: Jay Pipes; openstack@lists.launchpad.net
Subject: Re: [Openstack] [nova-testing] Efforts for Essex

2011/11/22 Sandy Walsh :
> I suppose there is a matter of preference here. I prefer to look in the 
> setup() and teardown() methods of my test suite to find out how everything 
> hangs together. Otherwise I have to check nova.TestCase when things break. 
> The closer my test can stay to my expectations from unittest.TestCase the 
> happier I am.

Sorry, I don't follow. The unit tests would use the fake db driver by
default. No per-test-specific setup necessary. Creating the instance
in the fake DB would happen explicitly in the individual tests (by way
of either calling db.instance_create directly, or by way of some
utility function).

> I can't comment on your fake db implementation, but my fear is this scenario:
>
> Test1 assumes db.create_foo() will return 123 and Test2 assumes it will 
> return "abc". How do they both comfortably co-exist? And whatever the 
> mechanism, why is it better than just stubs.Set("db.create_foo", 
> _my_create_foo)?

I'm confused. That's *exactly* what I want to avoid. By everything
sharing the same fake db driver, you can never have one mock that
returns one style of response, and another mock that returns another
style of response.

> It's local and it makes sense in the context of that file.

But it has to make sense globally. If something you're testing only
ever sees an Instance object with a couple of "hardcoded" attributes
on it, because that's what its mock gives it, you'll never know if
it'll fail if it gets a more complete, real Instance object.

--
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-22 Thread Soren Hansen
2011/11/22 Mark Washenberger :
> I think we all agree that we have a mess with the database stubbing that is 
> going on. And I'm confident that the db fake would make that mess more 
> manageable.

> But the way I see the mess, it comes from having a giant flat db interface 
> and really large testcases.

Those are definitely problems, too. The lack of an alternative DB
driver (be it a fake or a real one) hasn't helped, either :(

> Also, by having such large test cases, the generality & complexity of setUp() 
> as well as its distance from any given test method increase tremendously.

I agree completely.

> All of this makes it harder to understand the impact of any one line in setUp 
> and nearly impossible to tell which parts of setUp are relevant to a given 
> test.

Absolutely. At the moment, it's extremely difficult to tell which
parts of the setUp are actually needed for each individual test.

> But, I have to admit that I'm not backing up the way I think we could make 
> the best improvements with any code at the moment--and Soren is. So I'm not 
> sure I really want to stand in the way.

:) I'm hoping the code and the changes it will bring along will speak
for itself once I'm done with it. I've implemented 138 of 291 db api
methods in my fake driver. I don't know if I'll need all 291 for the
unit tests to pass. I hope so, but I'm not sure at all.

> My only reservation about a big fake is that it in general has to be a pretty
> faithful implementation of the real interface.

Again, with my fake db driver, I'll submit a test suite for it. A test
suite that we can run against *any* db driver (real, fake, whatever)
to verify that they're all faithful.
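
To sketch the shape of that test suite (illustrative only, not the actual
code; the factory names below are made up):

# Illustrative only.  One set of assertions, run against every driver by
# subclassing; the factory names here are made up.
import unittest


class DBDriverTests(object):
    """Tests any db driver (real or fake) must pass unchanged."""

    def test_instance_create_then_get(self):
        ctxt = {}   # stand-in for a real request context
        created = self.db.instance_create(ctxt, {'instance_type_id': 42})
        fetched = self.db.instance_get(ctxt, created['id'])
        self.assertEqual(created['id'], fetched['id'])


class FakeDriverTestCase(DBDriverTests, unittest.TestCase):
    def setUp(self):
        self.db = make_fake_db_driver()      # made-up factory for the fake


class SQLAlchemyDriverTestCase(DBDriverTests, unittest.TestCase):
    def setUp(self):
        self.db = make_sqlalchemy_driver()   # made-up factory for the real one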

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-22 Thread Soren Hansen
2011/11/22 Soren Hansen :
>> (real) Unit tests are our documentation and having to jump around to find 
>> out how the plumbing works doesn't make for good documentation.
> I agree. That's exactly why I *don't* want mocks (for this) in the unit tests.

It's been pointed out to me that I'm not making sense here. Thanks, Devin. :)

The point is that I don't want to deal with the plumbing *at all*. I
just want to be able to stick stuff in the data store and then let my
production (as in non-test) code be able to pull it out again without
mucking about with mocks (har har).

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-22 Thread Mark Washenberger
I'm tending to agree with Sandy's comments.

I think we all agree that we have a mess with the database stubbing that is 
going on. And I'm confident that the db fake would make that mess more 
manageable.

But the way I see the mess, it comes from having a giant flat db interface and 
really large testcases. The giant flat db.api class means it can be daunting to 
create a nicely packaged collection of test double functions.

Also, by having such large test cases, the generality & complexity of setUp() 
as well as its distance from any given test method increase tremendously. All 
of this makes it harder to understand the impact of any one line in setUp and 
nearly impossible to tell which parts of setUp are relevant to a given test.

But, I have to admit that I'm not backing up the way I think we could make the 
best improvements with any code at the moment--and Soren is. So I'm not sure I 
really want to stand in the way.

My only reservation about a big fake is that it in general has to be a pretty
faithful implementation of the real interface. However, if I use a bunch of 
small special-purpose fakes, I can usually get the same coverage without 
working as hard.

"Sandy Walsh"  said:

> I suppose there is a matter of preference here. I prefer to look in the 
> setup()
> and teardown() methods of my test suite to find out how everything hangs 
> together.
> Otherwise I have to check nova.TestCase when things break. The closer my test 
> can
> stay to my expectations from unittest.TestCase the happier I am.
> 
> I can't comment on your fake db implementation, but my fear is this scenario:
> 
> Test1 assumes db.create_foo() will return 123 and Test2 assumes it will return
> "abc". How do they both comfortably co-exist? And whatever the mechanism, why 
> is
> it better than just stubs.Set("db.create_foo", _my_create_foo)?
> 
> It's local and it makes sense in the context of that file.
> 
> -S
> 
> 
> From: Soren Hansen [so...@linux2go.dk]
> 
> 2011/11/22 Sandy Walsh :
>> I'm not a big fan of faking a database, not only for the reasons outlined
>> already, but because it makes the tests harder to understand.
> 
> Can you give me an example? I find the opposite to be true, so I'd
> love to see counterexamples. Most of the time, the data store is not
> relevant for the tests. I just need to stick an instance into the db,
> do some stuff, and verify that I get the correct (direct and indirect)
> output. I don't see how having a mocked db.instance_get is any more
> readable than a db.instance_create() (or a parameterised
> create_instance utility method for testing purposes or whatever).
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
> 



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-22 Thread Soren Hansen
2011/11/22 Sandy Walsh :
> I suppose there is a matter of preference here. I prefer to look in the 
> setup() and teardown() methods of my test suite to find out how everything 
> hangs together. Otherwise I have to check nova.TestCase when things break. 
> The closer my test can stay to my expectations from unittest.TestCase the 
> happier I am.

Sorry, I don't follow. The unit tests would use the fake db driver by
default. No per-test-specific setup necessary. Creating the instance
in the fake DB would happen explicitly in the individual tests (by way
of either calling db.instance_create directly, or by way of some
utility function).
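
Such a utility function could be as small as this (sketch only, with made-up
default values; not actual nova code):

def create_test_instance(context, **overrides):
    """Stick an instance into the (fake) db with sensible defaults."""
    values = {'name': 'test-instance',
              'instance_type_id': 42}
    values.update(overrides)
    return db.instance_create(context, values)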

> I can't comment on your fake db implementation, but my fear is this scenario:
>
> Test1 assumes db.create_foo() will return 123 and Test2 assumes it will 
> return "abc". How do they both comfortably co-exist? And whatever the 
> mechanism, why is it better than just stubs.Set("db.create_foo", 
> _my_create_foo)?

I'm confused. That's *exactly* what I want to avoid. By everything
sharing the same fake db driver, you can never have one mock that
returns one style of response, and another mock that returns another
style of response.

> It's local and it makes sense in the context of that file.

But it has to make sense globally. If something you're testing only
ever sees an Instance object with a couple of "hardcoded" attributes
on it, because that's what its mock gives it, you'll never know if
it'll fail if it gets a more complete, real Instance object.

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-22 Thread Sandy Walsh
I suppose there is a matter of preference here. I prefer to look in the setup() 
and teardown() methods of my test suite to find out how everything hangs 
together. Otherwise I have to check nova.TestCase when things break. The closer 
my test can stay to my expectations from unittest.TestCase the happier I am. 

I can't comment on your fake db implementation, but my fear is this scenario:

Test1 assumes db.create_foo() will return 123 and Test2 assumes it will return 
"abc". How do they both comfortably co-exist? And whatever the mechanism, why 
is it better than just stubs.Set("db.create_foo", _my_create_foo)? 

It's local and it makes sense in the context of that file.

-S


From: Soren Hansen [so...@linux2go.dk]

2011/11/22 Sandy Walsh :
> I'm not a big fan of faking a database, not only for the reasons outlined 
> already, but because it makes the tests harder to understand.

Can you give me an example? I find the opposite to be true, so I'd
love to see counterexamples. Most of the time, the data store is not
relevant for the tests. I just need to stick an instance into the db,
do some stuff, and verify that I get the correct (direct and indirect)
output. I don't see how having a mocked db.instance_get is any more
readable than a db.instance_create() (or a parameterised
create_instance utility method for testing purposes or whatever).

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-22 Thread Soren Hansen
2011/11/22 Sandy Walsh :
> I'm not a big fan of faking a database, not only for the reasons outlined 
> already, but because it makes the tests harder to understand.

Can you give me an example? I find the opposite to be true, so I'd
love to see counterexamples. Most of the time, the data store is not
relevant for the tests. I just need to stick an instance into the db,
do some stuff, and verify that I get the correct (direct and indirect)
output. I don't see how having a mocked db.instance_get is any more
readable than a db.instance_create() (or a parameterised
create_instance utility method for testing purposes or whatever).

> I much prefer to mock the db call on a per-unit-test basis so you can see 
> everything you need in a single file. Yes, this could mean some duplication 
> across test suites. But that is better than changes to the fake busting some 
> other test that has different assumptions.

That's why I'm adding tests for the fake. To make sure that the fake
and the real db drivers act the same.

> Are we testing the code or are we testing the fake?

The code. We have *other* tests for the fake.

> (real) Unit tests are our documentation and having to jump around to find out 
> how the plumbing works doesn't make for good documentation.

I agree. That's exactly why I *don't* want mocks (for this) in the unit tests.

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-22 Thread Sandy Walsh
I'm not a big fan of faking a database, not only for the reasons outlined 
already, but because it makes the tests harder to understand.

I much prefer to mock the db call on a per-unit-test basis so you can see 
everything you need in a single file. Yes, this could mean some duplication 
across test suites. But that is better than changes to the fake busting some 
other test that has different assumptions. 

Are we testing the code or are we testing the fake?

(real) Unit tests are our documentation and having to jump around to find out 
how the plumbing works doesn't make for good documentation.

-S

From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Soren Hansen [so...@linux2go.dk]
Sent: Tuesday, November 22, 2011 3:09 PM
To: Jay Pipes
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] [nova-testing] Efforts for Essex

Ok, this seems like a good time to repeat what I posted to nova-database
the other day.

tl;dr: I'm adding a fake DB driver as well as a DB test suite that we
can run against any of the backends to verify that they act the same.
This should address all the concerns I've heard so far.


Hi.

I just want to let you know that I'm working on a fake DB driver. The
two primary goals are to reduce the time it takes to run the test
suite (my results so far are very impressive) and simply to have
another, independent DB implementation. Once I'm done, I'll start
adding tests for it all, and finally, I'll take a stab at adding an
alternative, real DB backend.

In case you're wondering why I don't write the tests first, it's
simply because I don't know how all these things are supposed to work.
I hope to have a much better understanding of this once I've written
the fake DB driver, and then I'll add a generic test suite that should
be able to validate any DB backend.



--
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-22 Thread Soren Hansen
Ok, this seems like a good time to repeat what I posted to nova-database
the other day.

tl;dr: I'm adding a fake DB driver as well as a DB test suite that we
can run against any of the backends to verify that they act the same.
This should address all the concerns I've heard so far.


Hi.

I just want to let you know that I'm working on a fake DB driver. The
two primary goals are to reduce the time it takes to run the test
suite (my results so far are very impressive) and simply to have
another, independent DB implementation. Once I'm done, I'll start
adding tests for it all, and finally, I'll take a stab at adding an
alternative, real DB backend.

In case you're wondering why I don't write the tests first, it's
simply because I don't know how all these things are supposed to work.
I hope to have a much better understanding of this once I've written
the fake DB driver, and then I'll add a generic test suite that should
be able to validate any DB backend.
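
To give a rough idea of the shape of the thing, here's a toy sketch (not the
actual driver -- the real one mirrors the full db api):

# Toy sketch of a fake db driver: the same module-level functions the
# sqlalchemy driver exposes, backed by an in-memory dict.  Not the actual
# implementation.
import copy
import itertools

_instances = {}
_ids = itertools.count(1)


def instance_create(context, values):
    instance = copy.deepcopy(values)
    instance['id'] = next(_ids)
    # The real driver applies column defaults; the fake must do the same so
    # tests see identical records regardless of backend.
    instance.setdefault('deleted', False)
    _instances[instance['id']] = instance
    return copy.deepcopy(instance)


def instance_get(context, instance_id):
    return copy.deepcopy(_instances[instance_id])


def instance_get_all(context):
    return [copy.deepcopy(i) for i in _instances.values()]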



-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-22 Thread Jay Pipes
On Tue, Nov 22, 2011 at 12:50 PM, Kevin L. Mitchell
 wrote:
> On Tue, 2011-11-22 at 12:12 -0500, Jay Pipes wrote:
>> FWIW, I created a fake database store for Glance originally, and
>> indeed the unit tests ran quicker than they do now because we use an
>> in-memory SQLite database in the unit tests. That said, switching from
>> a faked datastore to a real one that actually used the SQLAlchemy code
>> exposed at least 4 bugs that weren't being caught by the fake data
>> store.
>
> I don't think anyone disagrees that testing with actual database code is
> essential.  However, it seems to me that testing actual database code at
> the integration level, rather than at the unit level, would catch these
> kinds of bugs while still keeping the unit tests fast, which is part of
> the allure of this approach for me.

Sure, I don't disagree with you, just pointing out that there be
dangers in faking everything all the time ;)

-jay

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-22 Thread Kevin L. Mitchell
On Tue, 2011-11-22 at 12:12 -0500, Jay Pipes wrote:
> FWIW, I created a fake database store for Glance originally, and
> indeed the unit tests ran quicker than they do now because we use an
> in-memory SQLite database in the unit tests. That said, switching from
> a faked datastore to a real one that actually used the SQLAlchemy code
> exposed at least 4 bugs that weren't being caught by the fake data
> store.

I don't think anyone disagrees that testing with actual database code is
essential.  However, it seems to me that testing actual database code at
the integration level, rather than at the unit level, would catch these
kinds of bugs while still keeping the unit tests fast, which is part of
the allure of this approach for me.
-- 
Kevin L. Mitchell 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-22 Thread Jay Pipes
On Tue, Nov 22, 2011 at 8:31 AM, Soren Hansen  wrote:
> 2011/11/22 Rohit Karajgi :
>> Also, I really should not feel the need to install a DB on my box to run a
>> unit-test suite. None but the DB API tests should need to perform any
>> database operations and validations. It slows down the overall execution.
>> Such things should be replaced with test doubles.
>
> I'm making great progress on a fake DB implementation for tests. The
> first 1031 tests (out of ~1900) pass now (in 73 seconds). Stay tuned
> for progress reports :)

FWIW, I created a fake database store for Glance originally, and
indeed the unit tests ran quicker than they do now because we use an
in-memory SQLite database in the unit tests. That said, switching from
a faked datastore to a real one that actually used the SQLAlchemy code
exposed at least 4 bugs that weren't being caught by the fake data
store.
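
(For context, "in-memory SQLite" just means pointing SQLAlchemy at sqlite://
with no filename, roughly like the sketch below -- illustrative, not Glance's
actual test code:)

# Roughly how an in-memory SQLite test database gets wired up; illustrative,
# not Glance's actual test setup.
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite://')   # no filename => in-memory database
Session = sessionmaker(bind=engine)
session = Session()
# The test fixtures then create the schema on this engine before each run.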

I understand the desire to speed up the unit test suite, but this
desire must always be tempered with a need to test the actual
interfaces, not fake ones that mimic an interface but not necessarily
duplicate it...

Just a thought,
-jay

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-22 Thread Trey Morris
Add me as a +1 to ensuring that unittests are actually unittests instead of
a mix'n'match of different styles of test. This would make writing and
updating tests much more straightforward and actually catch problems in the
correct layer of tests; for example, a change in network should not under
any circumstance cause an api unittest to fail. I think we all understand
the nightmare of trying to fix tests in unrelated areas of the code that
have failed for no immediately apparent reason. How can I help make this
happen?

-tr3buchet

On Tue, Nov 22, 2011 at 8:45 AM, Sandy Walsh wrote:

> Excellent!
>
> I wrote a few blog posts recently, mostly based on my experience with
> openstack automated tests:
>
>
> http://www.sandywalsh.com/2011/06/effective-units-tests-and-integration.html
> http://www.sandywalsh.com/2011/08/pain-of-unit-tests-and-dynamically.html
>
> Would love to see some of those changes make it in.
>
> -Sandy
>
> 
> From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
> [openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of
> Soren Hansen [so...@linux2go.dk]
> Sent: Monday, November 21, 2011 8:24 AM
> To: openstack@lists.launchpad.net
> Subject: [Openstack] [nova-testing] Efforts for Essex
>
> Hi, guys.
>
> We're scattered across enough different timezones to make real-time
> communication really awkward, so let's see if we can get by using e-mail
> instead.
>
> A good test suite will let you keep up the pace of development. It will
> offer confidence that your change didn't break any expectations, and
> will help you understand how things are supposed to work, etc. A bad
> test suite, on the other hand, will actually do the opposite, so the
> quality of the unit test suite is incredibly important to the overall
> health of the project. Integration tests are important as well, but unit
> tests are all we can expect people to run on a regular basis.
>
> I'd like to start a bit of discussion around unit testing efforts for
> Essex. Think of it as brainstorming. Input can be anything from
> small, actionable items, to broad ideas, to measurable goals, random
> thoughts etc. Anything goes. At some point, we can distill this input to
> a set of common themes, set goals, define action items, etc.
>
> A few things from the back of my mind to get the discussion going:
>
> = Speed up the test suite =
>
> A slow test suite gets run much less frequently than a fast one.
> Currently, when wrapped in eatmydata, a complete test run takes more
> than 6 minutes.
>
> Goal: We should get that down to less than one minute.
>
> = Review of existing tests =
> Our current tests have a lot of problems:
>
>  * They overlap (multiple tests effectively testing the same code),
>  * They're hard to understand. Not only is their intent not always
>   clear, but it's often hard to tell how they're doing it.
>  * They're slow.
>  * They're interdependent. The failure of one test often cascades and
>   makes others fail, too.
>  * They're riddled with duplicated code.
>
> I think it would be great if we could come up with some guidelines for
> good tests and then go through the existing tests and highlight where
> they violate these guidelines.
>
> = Test coverage =
> We should increase test coverage.
>
> Adding tests to legacy code is hard. Generally, it's much easier to
> write tests for new production code as you go along. The two primary
> reasons are:
>
>  * If you're writing tests and production code at the same time (or
>   perhaps even writing the tests first), the code will almost
>   automatically be designed to be easily tested.
>  * You (hopefully) know how the code is supposed to work. This is not
>   always obvious for existing code (often written by someone else).
>
> Therefore, the most approachable strategy for increasing test coverage
> is simply to ensure that any new code added is accompanied by tests, but
> of course new tests for currently untested code is fantastically
> welcome.
>
> --
> Soren Hansen        | http://linux2go.dk/
> Ubuntu Developer    | http://www.ubuntu.com/
> OpenStack Developer | http://www.openstack.org/
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-22 Thread Sandy Walsh
Excellent!

I wrote a few blog posts recently, mostly based on my experience with openstack 
automated tests:

http://www.sandywalsh.com/2011/06/effective-units-tests-and-integration.html
http://www.sandywalsh.com/2011/08/pain-of-unit-tests-and-dynamically.html

Would love to see some of those changes make it in.

-Sandy


From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net 
[openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of 
Soren Hansen [so...@linux2go.dk]
Sent: Monday, November 21, 2011 8:24 AM
To: openstack@lists.launchpad.net
Subject: [Openstack] [nova-testing] Efforts for Essex

Hi, guys.

We're scattered across enough different timezones to make real-time
communication really awkward, so let's see if we can get by using e-mail
instead.

A good test suite will let you keep up the pace of development. It will
offer confidence that your change didn't break any expectations, and
will help you understand how things are supposed to work, etc. A bad
test suite, on the other hand, will actually do the opposite, so the
quality of the unit test suite is incredibly important to the overall
health of the project. Integration tests are important as well, but unit
tests are all we can expect people to run on a regular basis.

I'd like to start a bit of discussion around unit testing efforts for
Essex. Think of it as brainstorming. Input can be anything from
small, actionable items, to broad ideas, to measurable goals, random
thoughts etc. Anything goes. At some point, we can distill this input to
a set of common themes, set goals, define action items, etc.

A few things from the back of my mind to get the discussion going:

= Speed up the test suite =

A slow test suite gets run much less frequently than a fast one.
Currently, when wrapped in eatmydata, a complete test run takes more
than 6 minutes.

Goal: We should get that down to less than one minute.

= Review of existing tests =
Our current tests have a lot of problems:

 * They overlap (multiple tests effectively testing the same code),
 * They're hard to understand. Not only is their intent not always
   clear, but it's often hard to tell how they're doing it.
 * They're slow.
 * They're interdependent. The failure of one test often cascades and
   makes others fail, too.
 * They're riddled with duplicated code.

I think it would be great if we could come up with some guidelines for
good tests and then go through the existing tests and highlight where
they violate these guidelines.

= Test coverage =
We should increase test coverage.

Adding tests to legacy code is hard. Generally, it's much easier to
write tests for new production code as you go along. The two primary
reasons are:

 * If you're writing tests and production code at the same time (or
   perhaps even writing the tests first), the code will almost
   automatically be designed to be easily tested.
 * You (hopefully) know how the code is supposed to work. This is not
   always obvious for existing code (often written by someone else).

Therefore, the most approachable strategy for increasing test coverage
is simply to ensure that any new code added is accompanied by tests, but
of course new tests for currently untested code is fantastically
welcome.

--
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-22 Thread Soren Hansen
2011/11/22 Rohit Karajgi :
> Also, I really should not feel the need to install a DB on my box to run a
> unit-test suite. None but the DB API tests should need to perform any
> database operations and validations. It slows down the overall execution.
> Such things should be replaced with test doubles.

I'm making great progress on a fake DB implementation for tests. The
first 1031 tests (out of ~1900) pass now (in 73 seconds). Stay tuned
for progress reports :)

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-21 Thread Rohit Karajgi
Also, I really should not feel the need to install a DB on my box to run a
unit-test suite. None but the DB API tests should need to perform any
database operations and validations. It slows down the overall execution.
Such things should be replaced with test doubles.

Thanks,
Rohit

-Original Message-
From: openstack-bounces+rohit.karajgi=vertex.co...@lists.launchpad.net 
[mailto:openstack-bounces+rohit.karajgi=vertex.co...@lists.launchpad.net] On 
Behalf Of Kevin L. Mitchell
Sent: Monday, November 21, 2011 10:47 PM
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] [nova-testing] Efforts for Essex

On Mon, 2011-11-21 at 13:24 +0100, Soren Hansen wrote:
> = Speed up the test suite =

+1; it can take about 6 minutes for the full suite to run on my dev box,
which can definitely slow me down.

> = Review of existing tests =
> = Test coverage =

One other consideration: We have some sort of memory leak going on.
I fixed one issue, but it was clearly a small issue, as the overall memory 
usage did not change.  By the end of the test suite, up to 45% of my 1 Gig 
system memory can be in use by the test run.  This actually inhibited the 
crypto tests because they try to start an external process, which failed due to 
memory allocation.  I corrected this by adding some swap, but this isn't the 
first time I've had this issue: the problem's getting worse as the test suite 
gets larger.  My observations suggest that a big chunk of the memory used is 
being allocated in ServersTest (and presumably others like it), but there are 
clearly other contributors; unfortunately, I haven't been able to determine 
what precisely is allocating all this memory and not releasing it...
--
Kevin L. Mitchell 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-21 Thread Soren Hansen
2011/11/21 Duncan McGreggor :
> Soren, I'm delighted to see this email come across the list -- thanks
> for jump-starting such a discussion. It probably goes without saying
> that I completely agree with your assessments above :-)

Great! :)

> The one thing that has bothered me most about the OpenStack unit tests
> has been that many of them are not really unit tests; the presence of
> functional or integration testing precludes that possibility.

Very true. Do you think we should spend some time splitting our tests
into "true unit tests" and all the rest? That way, we can easily choose
to only run unit tests or the whole thing.
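
One cheap way to get there might be something like the sketch below (whether
we use nose attributes, separate directories, or something else entirely is
exactly what I'd like to discuss; the helper class names are made up):

# Sketch only: tag non-unit tests with a nose attribute so the fast run can
# skip them, e.g. "nosetests -a '!slow'" for unit tests only, plain
# "nosetests" for everything.
import unittest

from nose.plugins.attrib import attr


class CheapAndFastTestCase(unittest.TestCase):
    def test_pure_logic(self):
        self.assertEqual(2 + 2, 4)


@attr('slow')
class TalksToRealServicesTestCase(unittest.TestCase):
    """Integration territory -- tagged so the default run can exclude it."""
    def test_against_real_backend(self):
        pass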

> To counter up-front any objections to being too much of a purist or
> drinking too much of the TDD koolaid, let me say that I do believe in
> testing against real databases, across real networks, etc., but those
> should be tests that can be run optionally and explicitly stated as
> being integration tests or functional tests.

I think it depends on the type of dependency. Some things should
definitely not be involved when running unit tests (sqlalchemy+sqlite
for instance), but other things are perfectly fine IMO. E.g. a function
that validates whether something is a valid IP address may have started
its life as a homegrown routine, but eventually gotten replaced by a
routine that uses python-ipy or python-netaddr. That means the test
suite has a hard dependency on python-ipy or python-netaddr, but I still
think this falls squarely in unit testing territory.
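
Concretely, the kind of routine I have in mind (sketch only):

# A homegrown IP check replaced by one that leans on python-netaddr.  The
# unit tests then depend on netaddr, but they're still unit tests.
import netaddr


def is_valid_ip(address):
    try:
        netaddr.IPAddress(address)
        return True
    except (netaddr.AddrFormatError, ValueError):
        return False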

> By using mocked objects (e.g., Michael Foord's mock library), unit
> test speeds will go WAY up.

We use python-mox quite a bit and have half a bajillion ad-hoc mock
implementations of various things.

I'm actually on a mission to replace many of these with more thorough
fake objects. I have a fake db implementation I've been working on for a
couple of days that gets me half way through the test suite in less than
a minute. It's supposed to eventually replace the countless ad-hoc mock
db implementations scattered across our test suite. They were great when
they were added, but IMO they're much more a liability now, because
they're impossible to maintain.

> If we could separate out everything that currently hits other systems
> in the OpenStack amalgam so that they use a separate runner
> (integration_tests.sh, functional_tests.sh, whatever), and just
> keep the unit tests in the main test runner, not only will things run
> much more rapidly, but it will be a no-brainer for folks to start
> running them compulsively (before submitting patches, before
> submitting bugs, before asking for help on IRC, etc.).

Sounds like a great idea.

> As for the duplicate code, it's probably fairly obvious to everyone
> that we could build up a subpackage for testing, that pulls together
> all the commonly used code and simply import as necessary (as base
> classes -- OOP -- or instantiate as attributes -- AOP).

That would be fantastic.

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [nova-testing] Efforts for Essex

2011-11-21 Thread Kevin L. Mitchell
On Mon, 2011-11-21 at 13:24 +0100, Soren Hansen wrote:
> = Speed up the test suite =

+1; it can take about 6 minutes for the full suite to run on my dev box,
which can definitely slow me down.

> = Review of existing tests =
> = Test coverage =

One other consideration: We have some sort of memory leak going on.
I fixed one issue, but it was clearly a small issue, as the overall
memory usage did not change.  By the end of the test suite, up to 45% of
my 1 Gig system memory can be in use by the test run.  This actually
inhibited the crypto tests because they try to start an external
process, which failed due to memory allocation.  I corrected this by
adding some swap, but this isn't the first time I've had this issue: the
problem's getting worse as the test suite gets larger.  My observations
suggest that a big chunk of the memory used is being allocated in
ServersTest (and presumably others like it), but there are clearly other
contributors; unfortunately, I haven't been able to determine what
precisely is allocating all this memory and not releasing it...
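
(One crude way to at least see which types are piling up is a census via the
stdlib gc module, along these lines -- just a diagnostic sketch, not a fix:)

# Diagnostic sketch (stdlib only): count live objects by type before and
# after a chunk of the test suite to see what keeps growing.
import gc
from collections import Counter


def object_census():
    gc.collect()
    return Counter(type(obj).__name__ for obj in gc.get_objects())


def report_growth(before, after, top=10):
    for name, count in (after - before).most_common(top):
        print('%6d  %s' % (count, name))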
-- 
Kevin L. Mitchell 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [nova-testing] Efforts for Essex

2011-11-21 Thread Soren Hansen
Hi, guys.

We're scattered across enough different timezones to make real-time
communication really awkward, so let's see if we can get by using e-mail
instead.

A good test suite will let you keep up the pace of development. It will
offer confidence that your change didn't break any expectations, and
will help you understand how things are supposed to work, etc. A bad
test suite, on the other hand, will actually do the opposite, so the
quality of the unit test suite is incredibly important to the overall
health of the project. Integration tests are important as well, but unit
tests are all we can expect people to run on a regular basis.

I'd like to start a bit of discussion around unit testing efforts for
Essex. Think of it as brainstorming. Input can be anything from
small, actionable items, to broad ideas, to measurable goals, random
thoughts etc. Anything goes. At some point, we can distill this input to
a set of common themes, set goals, define action items, etc.

A few things from the back of my mind to get the discussion going:

= Speed up the test suite =

A slow test suite gets run much less frequently than a fast one.
Currently, when wrapped in eatmydata, a complete test run takes more
than 6 minutes.

Goal: We should get that down to less than one minute.

= Review of existing tests =
Our current tests have a lot of problems:

 * They overlap (multiple tests effectively testing the same code),
 * They're hard to understand. Not only is their intent not always
   clear, but it's often hard to tell how they're doing it.
 * They're slow.
 * They're interdependent. The failure of one test often cascades and
   makes others fail, too.
 * They're riddled with duplicated code.

I think it would be great if we could come up with some guidelines for
good tests and then go through the existing tests and highlight where
they violate these guidelines.

= Test coverage =
We should increase test coverage.

Adding tests to legacy code is hard. Generally, it's much easier to
write tests for new production code as you go along. The two primary
reasons are:

 * If you're writing tests and production code at the same time (or
   perhaps even writing the tests first), the code will almost
   automatically be designed to be easily tested.
 * You (hopefully) know how the code is supposed to work. This is not
   always obvious for existing code (often written by someone else).

Therefore, the most approachable strategy for increasing test coverage
is simply to ensure that any new code added is accompanied by tests, but
of course new tests for currently untested code is fantastically
welcome.

-- 
Soren Hansen        | http://linux2go.dk/
Ubuntu Developer    | http://www.ubuntu.com/
OpenStack Developer | http://www.openstack.org/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp