Hi all,

For the past two months I've been working on making it possible to run
SymPy's test suite using pytest. It is now possible to do this so I
thought I'd write something here to explain how to use it and why it
can be useful.

There are two options in particular that I find very useful while
writing a patch for SymPy: -n for distributing tests over multiple
cores and --lf for rerunning only the failed tests. First, though,
I'll explain how to use pytest right now.


Installation and basic usage
---------------------------------

A few things were fixed in pytest along the way, so you might see
some spurious failures if you're not using the most recent version,
which can be installed with:

    $ pip install pytest

You can then run SymPy's tests from the project root with

    $ pytest -m 'not slow'

This will run all of the tests not marked as slow, which is also the
default when running bin/test, SymPy's internal test runner. Pytest
can also accept a path if you want to run tests from a specific file:

    $ pytest -m 'not slow' sympy/solvers/tests/test_ode.py
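For reference, the -m filter works through markers on the test
functions. A minimal sketch of the generic pytest mechanism (the test
name here is made up; SymPy's own tests get the marker via the slow
decorator in sympy.utilities.pytest, which maps to this marker when
pytest is available):

    import pytest

    @pytest.mark.slow
    def test_expensive_integral():
        # Deselected by -m 'not slow' and selected by -m slow.
        ...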

You can specify a particular test by name with -k

    $ pytest sympy/solvers/tests/test_ode.py -k test_ode_order

You can also use an expression like

    $ pytest sympy/solvers/tests/test_ode.py -k 'not test_linear'

This will run all tests from test_ode.py whose names do not contain
test_linear (-k matches substrings of test names).

The -m and -k flags can be combined and you can use more complex
expressions like -m 'not slow and not xfail'. Note that pytest runs
xfail tests by default and reports whether they fail (xfail) or pass
(xpass). SymPy's test runner just skips xfail tests.
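The xfail behaviour is easiest to see with a small sketch using the
generic pytest marker (SymPy's tests use the XFAIL decorator from
sympy.utilities.pytest, which maps to this marker under pytest; the
test itself is made up):

    import pytest

    @pytest.mark.xfail
    def test_known_bug():
        # bin/test skips this entirely; pytest runs it and
        # reports xfail if it fails or xpass if it passes.
        assert 1 == 2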


Assert rewriting
--------------------

Pytest has extensive assert rewriting, so as a simple example you can
get a failure message like:

__________________________ test_ode_order ____________________

    def test_ode_order():
        x = 0
        from sympy import pi
        f = Function('f')
        g = Function('g')
        assert sin(0) == 0
        assert sin(pi) == 0
        assert sin(-pi) == 0
        assert sin(2*pi) == 0
>       assert sin(x) == 1
E       assert 0 == 1
E        +  where 0 = sin(0)

sympy/solvers/tests/test_ode.py:1124: AssertionError

Note that it shows you the argument that was passed to sin. Compare
with the output from bin/test:

Traceback (most recent call last):
  File "/Users/enojb/current/sympy/sympy/sympy/solvers/tests/test_ode.py",
line 1123, in test_ode_order
    assert sin(x) == 1
AssertionError

SymPy's test runner has limited assert rewriting with bin/test -E:

Traceback (most recent call last):
  File "/Users/enojb/current/sympy/sympy/sympy/solvers/tests/test_ode.py",
line 1123, in test_ode_order
    assert sin(x) == 1
AssertionError:
0 ==
1
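Back to pytest: the rewriting is not limited to showing function
arguments either; comparisons of containers get unpacked too. A
hypothetical example:

    from sympy import sin, pi

    def test_container_rewriting():
        # On failure pytest points at the differing element,
        # e.g. "At index 1 diff: 0 != 1", instead of a bare
        # AssertionError.
        assert [sin(0), sin(pi)] == [0, 1]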


Remembering failed tests
---------------------------------

Pytest persistently stores which tests have failed between runs,
which means that after running many tests you can rerun only the
failed ones with either of the equivalent

    $ pytest --last-failed
    $ pytest --lf

This is useful in the workflow:
1) Make changes
2) Run all tests
3) Look through fails and try to fix
4) Rerun fails with --lf
5) (Loop back to 3 if there are still fails)
6) Rerun all tests
7) Commit/push etc.

Pytest has a number of other options that make use of the remembered
failures, but --lf is the only one I've used.
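For example (taken from pytest's documentation rather than my own
usage) there is also --ff, which runs the whole suite but puts the
previously failed tests first:

    $ pytest --ff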


Multicore test-running
-----------------------------

Something that I find particularly useful about pytest is the xdist
plugin, which allows running tests in parallel using worker processes
to take advantage of multiple cores. To use it you first need to
install it separately:

    $ pip install pytest-xdist

Then you can invoke xdist from any pytest command by providing e.g.
-n4 to use 4 cores. For example (on this laptop):

    $ bin/test sympy/solvers/tests/test_ode.py                                # 308 seconds
    $ pytest -m 'not slow and not xfail' sympy/solvers/tests/test_ode.py      # 310 seconds
    $ pytest -m 'not slow and not xfail' sympy/solvers/tests/test_ode.py -n4  # 166 seconds

On my work computer which has 8 cores I can run with -n7 and all
non-slow tests and doctests are completed in 13 minutes. I use -n7
because it means I can still use the computer: -n8 (all cores) runs
the tests faster but makes it noticeably difficult to do other things
at the same time.
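If you don't want to pick the number of workers yourself, xdist can
also choose it automatically with -n auto (this is from the xdist
docs; I'd expect it to use all cores, so the caveat above applies):

    $ pytest -m 'not slow' -n auto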


Doctests
------------

To run SymPy's doctests I've made use of the doctestplus pytest plugin:
    https://github.com/astropy/pytest-doctestplus
This has been developed by the astropy project because of deficiencies
in pytest's standard doctest runner that also apply to SymPy. It
provides:
  * inexact comparison for doctests with float output (example at the
end of this section)
  * ability to run doctests in rst files
  * skipping doctests based on presence of optional dependencies

To use this you need to install it, which you would normally do with

    $ pip install pytest-doctestplus

However, in getting this to work I have submitted patches to
doctestplus which have been accepted but not yet released, so for now
you need the master version:

    $ git clone https://github.com/astropy/pytest-doctestplus
    $ pip install -e pytest-doctestplus

This makes it possible to run the doctests *in addition* to the normal
tests with

    $ pytest --doctest-plus --doctest-rst

You can use all the other arguments (--lf, -n4, etc.) with this. To
run *only* the doctests, the only way I have right now is

    $ pytest --doctest-plus --doctest-rst -k 'not test_'

So for a full run of all non-slow tests and all doctests you would do

    $ pytest --doctest-plus --doctest-rst -m 'not slow' -n4
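As an example of the inexact float comparison mentioned above,
doctestplus provides a FLOAT_CMP doctest flag. This sketch is based
on the doctestplus README; the function itself is hypothetical:

    def float_output_example():
        """
        With FLOAT_CMP the output is parsed as a float and compared
        approximately rather than character by character:

        >>> 1.0 / 3.0  # doctest: +FLOAT_CMP
        0.333333333333333
        """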


Outstanding issues
--------------------------

1) I submitted a large patch to get the tests working under Python 2
but in the time since, new integer division bugs have crept in. I
submitted a follow-up patch but it is now languishing:
https://github.com/sympy/sympy/pull/15587

2) I just submitted a new patch to silence a warning with the latest
pytest (just waiting for the tests to finish):
https://github.com/sympy/sympy/pull/15817

3) I opened an issue about implicit imports in doctests under this setup:
https://github.com/sympy/sympy/issues/15819


Closing remarks
---------------------

At this stage I'd like to hear what other people think about all of
this. I'll make my own view clear though:

In general I don't think that SymPy should maintain its own test
runner. Using pytest seems a much better option. My experience with
pytest suggests that more development resources go into pytest than
into the whole of SymPy, and I just don't see how it's possible to
compete
with that using an in-house solution. The time spent maintaining
SymPy's test runner would be much better spent on shared development
efforts such as pushing patches upstream to pytest/doctestplus.

Also, pytest has many more features that SymPy could take advantage
of, but only by actually adopting pytest as its test runner so that
the test code can make use of them, e.g. parametrised tests (sketched
below):
https://github.com/sympy/sympy/issues/15497
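As a hedged sketch of what that could look like (the parametrize
decorator is standard pytest; the test itself is made up):

    import pytest
    from sympy import sin, pi

    @pytest.mark.parametrize('arg', [0, pi, -pi, 2*pi])
    def test_sin_zeros(arg):
        # Each parameter is collected and reported as a
        # separate test case.
        assert sin(arg) == 0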

--
Oscar
