Re: [openstack-dev] [Fuel] Test runner for python tests and parallel execution

2014-11-09 Thread Chris Dent

On Sat, 8 Nov 2014, Robert Collins wrote:


What changes do you want to see in the ui?


I don't want to hijack the thread too much, so I hope Dmitriy will join
back in, but for me there are two aspects of the existing experience
that don't work out well. I suspect many of these situations can be
resolved with more info (that is, the bug is in my ignorance, not in
the software).

* Lack of transparency on how to manage verbosity and output handling
  during a test run. Obviously the output during an unobserved run is
  going to need to be different from what I as a developer want while
  doing an observed run.

  In the latter case I want to know, while it is happening, which
  tests have been discovered, which one is happening right now, and a
  sense of the status of the current assert.

  I want, at my option, to spew stderr and stdout directly without
  interference so I can do unhygienic debugging.

  Essentially, I want to be able to discover the flags, arguments and
  tools that allow me to use tests as an ad hoc development aid, not
  post hoc.

  I know this is possible with the existing tools; it's just neither
  easy to do nor easy (for me) to discover.
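
  For what it's worth, some of the flags I'm after do exist in py.test;
  a hedged sketch mapping them to the behaviours above (the flag
  semantics are py.test's, the helper function is purely illustrative):

```python
# Illustrative helper: assemble py.test arguments for an "observed" run.
# The flag meanings are py.test's; the helper itself is just a sketch.
def build_pytest_args(verbose=True, no_capture=True, collect_only=False):
    """Build an argument list for `py.test <args>` (or pytest.main())."""
    args = []
    if collect_only:
        args.append("--collect-only")  # list discovered tests, don't run them
    if verbose:
        args.append("-v")  # print each test id and outcome as it happens
    if no_capture:
        args.append("-s")  # disable capture: stdout/stderr hit the terminal
    return args
```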

* The current testing code and tools are, to use that lovely saw, hard
  to reason about. Which for me equates to hard to read.

  This is perhaps because I'm not particularly wedded to _unit_ tests,
  _unittest_-style tests, or concepts such as test isolation, and I
  dislike mocks. I see the value of these things, but I think it is
  easy for them to be overused, making other purposes more difficult.

  I threw this[1] up on the list a little while ago, which is related:
  same complaints, and a hope that we won't have the same over-emphasis
  when we move to in-tree testing.

Summary: I think we need to spend some time and thought on improving
the usefulness of tests, testing and testing tools for someone working
on a feature or a bug _right now_. That is: tests run by humans should
be as frictionless as possible so that bugs are caught[2] and fixed
before test suites are ever run by robots.

[1] https://tank.peermore.com/tanks/cdent-rhat/SummitFunctionalTesting

[2] And with luck will help create more effective and usable code[3].

[3] Yes, I believe in TDD.
--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Test runner for python tests and parallel execution

2014-11-07 Thread Dmitriy Shulyak
Hi guys,
A long time ago I made a patch [1] which added test distribution between
processes and databases. It was a simple py.test configuration that allows
us to reduce test execution time almost linearly; on my local machine one
test run (distributed over 4 cores) takes 250 seconds.

At that time the idea of using py.test was discarded, because:
1. it is not nosetests, and
2. it is not the OpenStack community way (testrepository)

There is a plugin for nosetests which adds multiprocessing support (it may
even be included in the default distribution), but I wasn't able to find a
sensible way to distribute over databases, because the runner doesn't
expose the necessary configuration, such as a RUNNER_ID. I can't stop you
from trying, so please share your results if you find a nice and easy way
to make it work.
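
For illustration, a minimal sketch of the per-process database idea in
py.test terms (this assumes the pytest-xdist plugin, whose recent releases
set PYTEST_XDIST_WORKER in each worker process; the naming scheme here is
an example, not necessarily what [1] does):

```python
import os

def worker_db_name(base="nailgun_test"):
    """Pick a per-worker database name so parallel runs don't collide.

    pytest-xdist exposes a worker id (gw0, gw1, ...) via the
    PYTEST_XDIST_WORKER environment variable; a single-process run
    falls back to gw0 here.
    """
    worker = os.environ.get("PYTEST_XDIST_WORKER", "gw0")
    return "{0}_{1}".format(base, worker)
```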

As for testrepository: if you have positive experience with this tool,
please share it; from my point of view it has a very bad UX.

Please consider trying py.test [2]; I bet you will notice the difference
in reporting, and you may end up using it yourself for day-to-day test
runs. Additionally, it has a very good system for parametrizing tests and
writing extensions.
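
As a taste of the parametrization system, a small self-contained example
(the function under test is invented purely for illustration):

```python
import pytest

def serialize_node(name, online):
    # Toy stand-in for real serialization logic, for illustration only.
    return {"name": name, "online": online}

# One test function, run once per parameter tuple; each case shows up
# individually in py.test's report.
@pytest.mark.parametrize("name,online", [
    ("node-1", True),
    ("node-2", False),
])
def test_serialize_node(name, online):
    data = serialize_node(name, online)
    assert data == {"name": name, "online": online}
```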

The goal of this letter is to solve the problem of CI queues for the
fuel-web project, so please share your opinions. It would be nice to
settle this at the start of next week.

[1] https://review.openstack.org/#/c/82284/3/nailgun/conftest.py
[2] http://pytest.readthedocs.org/en/2.1.0/


Re: [openstack-dev] [Fuel] Test runner for python tests and parallel execution

2014-11-07 Thread Evgeniy L
Hi Dmitriy,

Thank you for bringing this up. It's not only about CI: it really takes
a lot of developer time to run Nailgun unit/integration tests on a local
machine, and fixing this should be a must-have priority in our
technical-debt scope.

Personally I'm OK with py.test, but we should improve the db creation
mechanism in your patch to use psycopg2 instead of running psql in a
subprocess.
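
A rough sketch of what that could look like (it assumes psycopg2 is
installed and a Postgres superuser connection is reachable; the function
names and DSN are illustrative, not from the patch):

```python
def create_database_sql(name):
    """Build the CREATE DATABASE statement.

    Identifiers can't be query-parametrized, so keep only safe
    characters from the requested name.
    """
    safe = "".join(c for c in name if c.isalnum() or c == "_")
    return 'CREATE DATABASE "{0}"'.format(safe)

def create_test_database(name, dsn="dbname=postgres"):
    """Create a test database via the driver instead of shelling out to psql."""
    import psycopg2  # imported lazily so the sketch loads without the driver
    conn = psycopg2.connect(dsn)
    conn.autocommit = True  # CREATE DATABASE can't run inside a transaction
    try:
        with conn.cursor() as cur:
            cur.execute(create_database_sql(name))
    finally:
        conn.close()
```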

Thanks,





Re: [openstack-dev] [Fuel] Test runner for python tests and parallel execution

2014-11-07 Thread Chris Dent

On Fri, 7 Nov 2014, Dmitriy Shulyak wrote:


As for testrepository - if you have positive experience using this tool,
share them, from my point of view it has very bad UX.


+1, but with the caveat that testr and its compatriots (e.g.
subunit) appear to have been optimized for automation of huge test
suites and CI contexts. That's a reasonable thing to be but I think
focusing on that side of things has been to the detriment of the
human/developer benefits that happen as a result of writing and
running tests.

This is something I'd love for us (people who make OpenStack), as a
shared culture, to address.


Please consider trying py.test [2], i bet you will notice difference in
reporting, and maybe will use it yourself for day-to-day test executions.
Additionally there is very good
system for parametrizing tests and writing extensions.


I'm with you on this. I love py.test. The user experience for a
human, doing active development, is rather nice indeed.

The difficulty, of course, is that there's been a very large
investment in tools that rely on a particular form of test
discovery that as far as I can tell py.test doesn't want to play
with. If we can overcome that problem...disco.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [Fuel] Test runner for python tests and parallel execution

2014-11-07 Thread Robert Collins
What changes do you want to see in the ui?