Ok, I see. That's great news, thanks for the info!

Cheers,
Goncalo
On Mon, Dec 8, 2014 at 5:07 PM, Bruno Oliveira <nicodde...@gmail.com> wrote:
> Hi Gonçalo,
>
> pytest-xdist has its own overhead (starting up slaves, collecting, etc.),
> plus it might not be very efficient for few tests. If you increase the
> number of tests (use params=range(20) for example), you will notice that
> it runs faster than running without xdist:
>
> import pytest
> import time
>
> @pytest.fixture(scope="function", params=range(20))
> def fix_dummy(request):
>     return request.param
>
> def test_dummy1(fix_dummy):
>     time.sleep(2)
>
> Running py.test -n4 test_bar.py:
>
> ============================= test session starts =============================
> platform win32 -- Python 2.7.7 -- py-1.4.26 -- pytest-2.6.4
> plugins: timeout, localserver, xdist, cov, mock
> gw0 [20] / gw1 [20] / gw2 [20] / gw3 [20]
> scheduling tests via LoadScheduling
> ....................
> ========================= 20 passed in 11.12 seconds ==========================
>
> Running py.test test_bar.py:
>
> ============================= test session starts =============================
> platform win32 -- Python 2.7.7 -- py-1.4.26 -- pytest-2.6.4
> plugins: timeout, localserver, xdist, cov, mock
> collected 20 items
>
> test_bar.py ....................
>
> ========================= 20 passed in 40.06 seconds ==========================
>
> As you can see, it is running in parallel for the first case.
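Bruno's numbers fit a simple back-of-envelope model: 20 two-second tests over 4 workers give an ideal parallel wall time of ceil(20/4) x 2 s = 10 s, plus xdist startup and collection overhead, versus 20 x 2 s = 40 s serially. A minimal sketch of that model follows; the 1-second overhead figure is an assumption for illustration, not a measured value, and real LoadScheduling is dynamic rather than an even static split.

```python
import math

def ideal_wall_time(n_tests, n_workers, seconds_per_test, overhead=0):
    """Rough model of xdist wall time: tests split evenly across workers,
    plus a fixed startup/collection overhead. Real scheduling is dynamic,
    so treat this as an approximation only."""
    return math.ceil(n_tests / n_workers) * seconds_per_test + overhead

# Serial run: 20 tests x 2 s each.
print(ideal_wall_time(20, 1, 2))              # 40, matching the 40.06 s run
# With -n4 and an assumed ~1 s of xdist overhead.
print(ideal_wall_time(20, 4, 2, overhead=1))  # 11, close to the observed 11.12 s
```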
> :)
>
> On Sat, Dec 6, 2014 at 7:38 PM, Goncalo Morgado <goncalo.ma...@gmail.com> wrote:
>>
>> On the other hand, if you add another test_ function there will be no
>> changes in total duration because pytest is actually running both tests in
>> parallel:
>>
>> #!/usr/bin/env python
>>
>> import pytest
>> import time
>>
>> @pytest.fixture(scope="function", params=["a1", "a2"])
>> def fix_dummy(request):
>>     print "fix_dummy: parameter: %s" % (request.param, )
>>     return request.param
>>
>> def test_dummy1(fix_dummy):
>>     print "parameter from fix_dummy: %s" % (fix_dummy, )
>>     time.sleep(2)
>>
>> def test_dummy2(fix_dummy):
>>     print "parameter from fix_dummy: %s" % (fix_dummy, )
>>     time.sleep(2)
>>
>> $ py.test -vs test_dummy.py -n 2
>> ============================= test session starts =============================
>> platform linux2 -- Python 2.7.3 -- py-1.4.26 -- pytest-2.6.4 -- /usr/bin/python
>> plugins: xdist
>> [gw0] linux2 Python 2.7.3 cwd: /home/gmagno/Desktop/pytest
>> [gw1] linux2 Python 2.7.3 cwd: /home/gmagno/Desktop/pytest
>> [gw1] Python 2.7.3 (default, Feb 27 2014, 20:00:17) -- [GCC 4.6.3]
>> [gw0] Python 2.7.3 (default, Feb 27 2014, 20:00:17) -- [GCC 4.6.3]
>> gw0 [4] / gw1 [4]
>> scheduling tests via LoadScheduling
>>
>> test_dummy.py::test_dummy2[a1]
>> test_dummy.py::test_dummy1[a1]
>> [gw1] PASSED test_dummy.py::test_dummy1[a1]
>> [gw0] PASSED test_dummy.py::test_dummy2[a1]
>> test_dummy.py::test_dummy1[a2]
>> test_dummy.py::test_dummy2[a2]
>> [gw1] PASSED test_dummy.py::test_dummy1[a2]
>> [gw0] PASSED test_dummy.py::test_dummy2[a2]
>>
>> =========================== 4 passed in 4.66 seconds ==========================
>>
>> I hope I am not confusing you guys.
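The run above shows xdist generating four test ids from the two functions and the two fixture parameters, then handing them out to two workers. A much-simplified toy model of that expansion and distribution is sketched below; xdist's actual LoadScheduling assigns tests dynamically in chunks, so the real worker assignment (as the log shows) can differ from this static round-robin.

```python
def parametrized_ids(test_names, params):
    """Test ids pytest generates when each function uses a fixture
    parametrized with `params` (collection order: per function, per param)."""
    return ["%s[%s]" % (name, p) for name in test_names for p in params]

def round_robin(ids, n_workers):
    """Toy static round-robin split across workers gw0..gwN-1.
    Real LoadScheduling is dynamic; this only illustrates the idea."""
    workers = {"gw%d" % i: [] for i in range(n_workers)}
    for i, test_id in enumerate(ids):
        workers["gw%d" % (i % n_workers)].append(test_id)
    return workers

ids = parametrized_ids(["test_dummy1", "test_dummy2"], ["a1", "a2"])
print(ids)               # four generated test items
print(round_robin(ids, 2))  # each worker gets two of them
```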
>>
>> Cheers,
>> Gonçalo
>>
>> On Sat, Dec 6, 2014 at 9:29 PM, Goncalo Morgado <goncalo.ma...@gmail.com> wrote:
>>>
>>> I might be missing something very fundamental here, but tests are not
>>> running in parallel on my side; as you may see, the tests are taking 4
>>> seconds-ish:
>>>
>>> $ py.test -vs test_dummy.py -n 2
>>> ============================= test session starts =============================
>>> platform linux2 -- Python 2.7.3 -- py-1.4.26 -- pytest-2.6.4 -- /usr/bin/python
>>> plugins: xdist
>>> [gw0] linux2 Python 2.7.3 cwd: /home/gmagno/Desktop/pytest
>>> [gw1] linux2 Python 2.7.3 cwd: /home/gmagno/Desktop/pytest
>>> [gw0] Python 2.7.3 (default, Feb 27 2014, 20:00:17) -- [GCC 4.6.3]
>>> [gw1] Python 2.7.3 (default, Feb 27 2014, 20:00:17) -- [GCC 4.6.3]
>>> gw0 [2] / gw1 [2]
>>> scheduling tests via LoadScheduling
>>>
>>> test_dummy.py::test_dummy1[a1]
>>> [gw1] PASSED test_dummy.py::test_dummy1[a1]
>>> test_dummy.py::test_dummy1[a2]
>>> [gw1] PASSED test_dummy.py::test_dummy1[a2]
>>>
>>> =========================== 2 passed in 4.47 seconds ==========================
>>>
>>> Cheers,
>>> Gonçalo
>>>
>>> On Sat, Dec 6, 2014 at 8:52 PM, Bruno Oliveira <nicodde...@gmail.com> wrote:
>>>
>>>> Thanks for the example.
>>>>
>>>> I tried it, and it seems that xdist is running the tests in parallel: the
>>>> more CPUs I use, the faster the test suite runs. Do you see a different
>>>> behavior?
>>>>
>>>> Cheers,
>>>>
>>>> On Sat, Dec 6, 2014 at 5:12 PM, Goncalo Morgado <goncalo.ma...@gmail.com> wrote:
>>>>
>>>>> To better depict what I am saying, some code follows.
>>>>>
>>>>> $ sudo pip install pytest
>>>>> $ sudo pip install pytest-xdist
>>>>> $ py.test -vs test_dummy.py -n 2
>>>>>
>>>>> #!/usr/bin/env python
>>>>>
>>>>> """
>>>>> This is test_dummy.py
>>>>> """
>>>>> import pytest
>>>>> import time
>>>>>
>>>>> @pytest.fixture(scope="function", params=["a1", "a2"])
>>>>> def fix_dummy(request):
>>>>>     print "fix_dummy: parameter: %s" % (request.param, )
>>>>>     return request.param
>>>>>
>>>>> def test_dummy1(fix_dummy):
>>>>>     print "parameter from fix_dummy: %s" % (fix_dummy, )
>>>>>     time.sleep(2)
>>>>>
>>>>> Cheers,
>>>>> Gonçalo
>>>>>
>>>>> On Sat, Dec 6, 2014 at 6:45 PM, Goncalo Morgado <goncalo.ma...@gmail.com> wrote:
>>>>>
>>>>>> pytest-xdist helped me run different tests on multiple CPUs; for
>>>>>> instance, test_foo() and test_bar() would easily be run on two CPUs with
>>>>>> pytest-xdist. The problem is when you have test_foobar(some_fixture), where
>>>>>> some_fixture is parametrized with, let's say, two parameters ["a", "b"],
>>>>>> which will lead to two different test runs of test_foobar(). The challenge
>>>>>> is to have these two test runs running on two CPUs.
>>>>>>
>>>>>> Cheers,
>>>>>> Gonçalo
>>>>>>
>>>>>> On Sat, Dec 6, 2014 at 6:30 PM, Bruno Oliveira <nicodde...@gmail.com> wrote:
>>>>>>
>>>>>>> Have you tried using pytest-xdist?
>>>>>>>
>>>>>>> Cheers,
>>>>>>>
>>>>>>> On Sat, Dec 6, 2014 at 4:26 PM, Goncalo Morgado <goncalo.ma...@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi pytest community,
>>>>>>>>
>>>>>>>> I am not sure this is the right place to ask this, and I am sorry
>>>>>>>> if it's not.
>>>>>>>> I am trying to make use of multiple CPUs to run my tests, which I
>>>>>>>> can do for separately defined tests (different test_ functions, for
>>>>>>>> instance).
>>>>>>>> *The challenge is to run, on multiple CPUs, the tests generated from
>>>>>>>> a single test_ function whose fixture is parametrized with multiple
>>>>>>>> parameters.*
>>>>>>>> Currently I have a test that generates more than a thousand tests
>>>>>>>> (because of the parametrized fixture it depends on) and ends up being
>>>>>>>> very time-consuming to run on a single CPU.
>>>>>>>>
>>>>>>>> Thanks for this great tool that is pytest!
>>>>>>>>
>>>>>>>> Cheers,
>>>>>>>> Gonçalo
>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> pytest-dev mailing list
>>>>>>>> pytest-dev@python.org
>>>>>>>> https://mail.python.org/mailman/listinfo/pytest-dev
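For readers skimming the archive: the multiplication Gonçalo describes is mechanical. A test_ function that depends on a fixture parametrized with N values is collected as N separate test items, and each item is independently schedulable by xdist. A quick illustration of the arithmetic (the 1200-parameter figure is a made-up stand-in for the "more than a thousand" mentioned above):

```python
def collected_item_count(n_test_functions, fixture_params):
    """Each test function using the parametrized fixture is collected
    once per parameter value."""
    return n_test_functions * len(fixture_params)

n_items = collected_item_count(1, range(1200))
print(n_items)       # 1200 collected items from a single test_ function
print(n_items // 4)  # ~300 items per worker with -n 4, roughly a 4x speedup
```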