I've been experimenting lately writing pytest tests for leo. I just
published my work at
https://github.com/btheado/leo-editor/tree/pytest-experiment.

You should be able to try it out with these commands (untested):

git remote add btheado https://github.com/btheado/leo-editor.git
git fetch btheado
git checkout btheado/pytest-experiment
pip install pytest
pytest leo/test/pytest


The tests I wrote are in leo/test/pytest.leo

The first set of tests I wrote are for testing the leoNodes.py file. I was
interested in seeing what it was like to try to get full code coverage with
the tests. There is a project called coverage.py
(https://coverage.readthedocs.io) which can produce nice reports about
which lines of code have been executed and which have not. I tried using
coverage.py against Leo's current unit tests, but somehow it was not able
to properly mark all the executed code. I suspect something in Leo's unit
tests caused coverage.py to lose track, but I'm not sure. I planned on
doing a binary search through the unit tests (enabling/disabling them)
until I found which ones caused coverage to lose its way. I never got
around to doing that and instead used the pytest coverage plugin on some
unit tests I wrote.

My leo/test/pytest/leoNodes_test.py file contains 24 tests. I picked some
methods of the Position class in leoNodes.py and strove to get full
coverage on them. With 24 tests, I did fully cover them, but it is only a
small percentage of the whole file. You can see the coverage achieved by
running these commands:

pip install pytest-cov
pytest --cov-report html --cov-report term-missing \
    --cov=leo.core.leoNodes leo/test/pytest/leoNodes_test.py
firefox htmlcov/leo_core_leoNodes_py.html  # or whatever your web browser is


The resulting web page highlights in red all the lines of code which
haven't been executed. From the tests I wrote, you should see 100% coverage
for several of the Position methods, including the comparison operators,
convertTreeToString, moreHead, moreBody, children, following_siblings,
nearest_roots, and nearest_unique_roots.

In the process of writing those tests, I found what I think is a bug, and I
discovered the very nice xfail
(https://docs.pytest.org/en/latest/skipping.html#xfail) feature of pytest.
With the xfail decorator you can mark tests that you expect to fail. This
suppresses all the verbose information pytest gives when there is a failure
(stack trace, etc.). Normally this verbose information is helpful in
tracking down the reason for failures, but if the bug is one which can't or
won't be fixed right away, all the extra information can get in the way. So
the xfail marker suppresses the failure details unless you run pytest with
the --runxfail command line option. That way you can keep your pytest runs
clean, but easily access the failure details when you are ready to fix the
bug.

To see the details of the expected failures I've identified as bugs (2 of
them so far), run it like this:

pytest leo/test/pytest --runxfail


I'm a big fan of writing tests which fail before a bug is fixed and pass
after a bug is fixed. The xfail feature is very helpful in this regard.
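As a minimal sketch of how this looks (the function and the bug here are
made-up placeholders, not actual Leo code or one of my real tests):

```python
import pytest

def buggy_sort(items):
    # Pretend bug for illustration: sorts, but drops duplicates.
    return sorted(set(items))

# Mark the test as an expected failure, with a note documenting the bug.
@pytest.mark.xfail(reason="sort drops duplicates (hypothetical bug)")
def test_sort_keeps_duplicates():
    assert buggy_sort([2, 1, 2]) == [1, 2, 2]
```

A plain pytest run reports this as "xfail" with no traceback. Running with
--runxfail treats it as a normal test, so you get the full failure details.
Once the bug is fixed, the test starts passing and the decorator can be
removed.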

Pytest also has fixtures
(https://docs.pytest.org/en/latest/fixture.html), which I've made use of.
They allow all tests to be written as simple functions. No classes needed.
Anything which would normally be done in a setup method can be implemented
as a fixture instead. It seems there are some tradeoffs here, but overall I
like it a lot.
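For example (a made-up fixture for illustration, not one from my
conftest.py), instead of a class with a setup method you write:

```python
import pytest

def make_outline():
    """Plain helper standing in for whatever real setup code would do."""
    return {"root": ["child1", "child2"]}

# The fixture replaces a setup method; any test that names 'outline'
# as a parameter receives the value this function returns.
@pytest.fixture
def outline():
    return make_outline()

def test_child_count(outline):
    # The test itself is just a plain function taking the fixture.
    assert len(outline["root"]) == 2
```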

Fixtures can be defined in any test file, but if they are common to
multiple test files, they can be defined in a file named conftest.py. This
makes the fixtures available to all the files in the directory. I've
defined my fixtures there.

My fixtures include a bridge fixture which can be used to access the leo
bridge. I also have fixtures for several example outlines. I didn't like
that the fixtures ended up "distant" from the tests themselves, so I came
up with a naming convention for the example outlines which allows you to
know the exact structure and contents of the outline just by looking at the
name of the outline. I tried to explain this naming convention in
conftest.py, but I'm not sure if it will be clear to anyone other than
myself.
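To give the flavor of the idea (this is an illustration of the approach,
not the actual convention or fixtures from my conftest.py), the fixture
name itself can spell out the tree it builds:

```python
import pytest

def make_tree():
    # Each node is (headline, list-of-children): a root with two
    # children, 'aa' and 'bb'.
    return ("root", [("aa", []), ("bb", [])])

# The fixture name encodes the structure: root -> aa, bb. A test
# reading the name knows the outline without looking at conftest.py.
@pytest.fixture
def tree_root_aa_bb():
    return make_tree()

def test_sibling_order(tree_root_aa_bb):
    _headline, children = tree_root_aa_bb
    assert [h for h, _ in children] == ["aa", "bb"]
```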

Using pytest provides a lot of benefits:

   - Information provided about failures is excellent
   - Marking expected failures with xfail is very useful for documenting
   bugs before they are fixed
   - Fixtures allow all tests to be written as simple functions
   - The coverage plugin allows code coverage to be measured
   - There are many, many other plugins

I'm not suggesting Leo switch to using pytest. I hope the work I've shared
makes it easy for those familiar with Leo's unit tests to evaluate the nice
features of pytest and decide whether it is worth further consideration.

On Sat, Dec 28, 2019 at 8:40 AM Edward K. Ream <[email protected]> wrote:

>
> On Saturday, December 28, 2019 at 6:07:03 AM UTC-5, vitalije wrote:
>
> For a long time I've been feeling that Leo unit tests don't prove
>> anything. They usually don't exercise real Leo code at all or if they do,
>> they exercise just a small portion of it. So, the fact that unit tests are
>> passing doesn't mean Leo would work properly for real users.
>>
>
> This is a separate issue. As I understand it, unit tests are meant to test
> small portions of code. They can also ensure that specific bugs don't
> happen again.
>
> It might take a huge effort to fully eliminate all `if g.unitTesting`
>> conditionals from Leo core, but it might be worth doing.
>>
>
> A second separate issue. It's not likely to happen, because tests in
> unitTest.leo test outline operations. It's natural to run those tests in a
> real outline.  Indeed, I don't see how else to run those tests.  I've just
> updated the title and first comment of #1467 to indicate that leoTest.py
> and unitTest.leo will likely remain.
>
> *Summary*
>
> unitTest.leo seems necessary to test outline operations. @test nodes are
> natural in that environment.
>
> For all other work, using stand-alone test classes should be easier and
> more natural. Even in unitTest.leo, there will likely be ways of leveraging
> stand-alone test classes. I'll be investigating the possibilities...
>
> Using traditional unit tests where possible will remove another objection
> to using Leo.
>
> Edward
>
> --
> You received this message because you are subscribed to the Google Groups
> "leo-editor" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/leo-editor/dd029cac-6860-4f8f-a565-7a3083f16120%40googlegroups.com
> <https://groups.google.com/d/msgid/leo-editor/dd029cac-6860-4f8f-a565-7a3083f16120%40googlegroups.com?utm_medium=email&utm_source=footer>
> .
>
