Re: #1467: Use traditional unit test (almost?) everywhere

2020-01-12 Thread Edward K. Ream


On Sunday, January 12, 2020 at 8:18:42 AM UTC-5, Edward K. Ream wrote:
>
>
>
> On Sat, Jan 11, 2020 at 2:11 PM Brian Theado  
> wrote:
>
> > I have doubts about the following entries you are suppressing: assert, 
> except, raise.
>
> Imo, they are fine.  Asserts signal that something is seriously wrong with 
> the tool, not the user input.
>

I have just enabled "assert" and "raise" in .coveragerc.  I only needed to 
suppress two calls to raise AssignLinksError.  As you say, most asserts are 
in the same flow path as already-covered code.

I see no reason to enable "except" coverage.  Testing that code would be 
make-work. pyflakes and pylint do a good job of detecting code blunders.
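For reference, exclusions of the kind being discussed live in the [report] section of .coveragerc. A minimal sketch (assumed patterns; the actual leo-editor/.coveragerc may differ):

```ini
# .coveragerc sketch: each exclude_lines entry is a regex; any source
# line matching one of them is ignored by coverage reporting.
[report]
exclude_lines =
    pragma: no cover
    raise AssignLinksError
```

Removing a pattern from exclude_lines re-enables coverage reporting for the matching lines.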

Edward

>
>
> In addition to providing coverage data, pytest is actually running the 
> unit tests.  If an assert fails we can deal with the failure. Suppressing the 
> coverage tests is not a problem.
>
> The new silent-fstringify-files command suppresses most output, ensuring 
> any serious failures will be obvious.  This command now fstringifies all of 
> Leo's core files without serious messages, including failed asserts.
>
> Edward
>

-- 
You received this message because you are subscribed to the Google Groups 
"leo-editor" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to leo-editor+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/leo-editor/3b339728-510d-4e2a-9e6e-6b39939270b3%40googlegroups.com.


Re: #1467: Use traditional unit test (almost?) everywhere

2020-01-12 Thread Brian Theado
Oops, I accidentally hit send while pasting in the two examples. Here is
the other one:

else:
    # Some fields contain ints or strings.
    assert isinstance(z, (int, str)), z.__class__.__name__
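Brian's underlying point is that a line-coverage tool marks an assert statement as executed whenever control passes through it, regardless of whether the failing branch is ever exercised. A small runnable sketch (hypothetical helper, not Leo code):

```python
def check_encoding(e):
    # Hypothetical helper echoing the example above.
    if e is not None:
        # Coverage marks this line as executed on the happy path;
        # it cannot tell whether the assertion ever actually failed.
        assert e.lower() == 'utf-8', repr(e)
    return e

print(check_encoding('utf-8'))  # prints: utf-8
print(check_encoding(None))     # prints: None
```

Enabling assert lines in .coveragerc therefore reports only whether the assert was ever reached, which is exactly what a coverage report should say.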

On Sun, Jan 12, 2020 at 9:49 AM Brian Theado  wrote:

>
> On Sun, Jan 12, 2020 at 8:18 AM Edward K. Ream 
> wrote:
>
>>
>>
>> On Sat, Jan 11, 2020 at 2:11 PM Brian Theado 
>> wrote:
>>
>> > I have doubts about the following entries you are suppressing: assert,
>> except, raise.
>>
>> Imo, they are fine.  Asserts signal that something is seriously wrong
>> with the tool, not the user input.
>>
>
> My suggestion was to allow coverage to report on execution of assert
> statements. It sounds to me like you are responding as if I had suggested
> testing assertion failures. Code coverage reporting can't distinguish
> between when the assertion fails and when it doesn't.
>
> I only found 2 instances of assert statements not being covered when I
> changed .coveragerc to include asserts and re-ran the tests. In both cases
> they are else clauses which your current tests don't cover.
>
> if e is not None: assert e.lower() == 'utf-8', repr(e)
>



Re: #1467: Use traditional unit test (almost?) everywhere

2020-01-12 Thread Brian Theado
On Sun, Jan 12, 2020 at 8:18 AM Edward K. Ream  wrote:

>
>
> On Sat, Jan 11, 2020 at 2:11 PM Brian Theado 
> wrote:
>
> > I have doubts about the following entries you are suppressing: assert,
> except, raise.
>
> Imo, they are fine.  Asserts signal that something is seriously wrong with
> the tool, not the user input.
>

My suggestion was to allow coverage to report on execution of assert
statements. It sounds to me like you are responding as if I had suggested
testing assertion failures. Code coverage reporting can't distinguish
between when the assertion fails and when it doesn't.

I only found 2 instances of assert statements not being covered when I
changed .coveragerc to include asserts and re-ran the tests. In both cases
they are else clauses which your current tests don't cover.

if e is not None: assert e.lower() == 'utf-8', repr(e)



Re: #1467: Use traditional unit test (almost?) everywhere

2020-01-12 Thread Edward K. Ream
On Sat, Jan 11, 2020 at 2:11 PM Brian Theado  wrote:

> I have doubts about the following entries you are suppressing: assert,
except, raise.

Imo, they are fine.  Asserts signal that something is seriously wrong with
the tool, not the user input.

In addition to providing coverage data, pytest is actually running the unit
tests.  If an assert fails we can deal with the failure. Suppressing the
coverage tests is not a problem.

The new silent-fstringify-files command suppresses most output, ensuring
any serious failures will be obvious.  This command now fstringifies all of
Leo's core files without serious messages, including failed asserts.

Edward



Re: #1467: Use traditional unit test (almost?) everywhere

2020-01-11 Thread Brian Theado
On Fri, Jan 10, 2020 at 8:33 AM Edward K. Ream  wrote:

> > I still think full line-by-line coverage analysis will be very valuable.
>>
>> I agree. I have just created #1474
>>  for this.
>>
>
> Many thanks for suggesting this. `pytest --cov` has quickly become one of
> my favorite tools, on a par with pyflakes and pylint.
>

Excellent. I'm glad it is helpful.

> pytest is easy to configure, and it's easy to suppress coverage testing within
> the code itself. leo-editor/.coveragerc now contains the default settings.
>

I was unaware of that feature. Looks useful.

I have doubts about the following entries you are suppressing: assert,
except, raise.

For all 3, it seems like a case-by-case "pragma: no cover" is better than
wholesale suppression.

Especially assert. In general, an assert will precede other code you will
already want to cover with tests, so why suppress the coverage report for the
assert statement itself?
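The case-by-case alternative can be sketched as follows (hypothetical code, not Leo's; the pragma comment tells coverage.py to ignore just that line):

```python
class InternalError(Exception):
    """Hypothetical 'this is a bug in the tool, not bad user input' error."""

def total_levels(levels):
    """Sum a list of outline levels (hypothetical helper)."""
    total = sum(levels)
    if total < 0:  # pragma: no cover  (reachable only via a tool bug)
        raise InternalError(total)
    return total

print(total_levels([1, 2, 3]))  # prints: 6
```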

With except and raise there might be some cases in which the code can't be
hit unless there is a bug in your library code. Since you can't anticipate
your bugs, you would want to suppress those from the coverage report. But
in most cases, I expect the exception catching and raising will be for
cases in which the user passes in bad data. It would be valuable to
exercise that code in unit tests to ensure it works the way you expect if a
user encounters it.

So my 2 cents says it would be better to have case-by-case suppression for
the exceptions catching and raising whose purpose is to alert you to bugs
in your own code.

Brian



Re: #1467: Use traditional unit test (almost?) everywhere

2020-01-10 Thread Edward K. Ream


On Thursday, January 9, 2020 at 6:04:20 AM UTC-5, Edward K. Ream wrote:
>
> On Tue, Jan 7, 2020 at 1:24 PM Brian Theado  
> wrote:
>
> > I still think full line-by-line coverage analysis will be very valuable.
>
> I agree. I have just created #1474 
>  for this.
>

Many thanks for suggesting this. `pytest --cov` has quickly become one of 
my favorite tools, on a par with pyflakes and pylint. 

pytest is easy to configure, and it's easy to suppress coverage testing within 
the code itself. leo-editor/.coveragerc now contains the default settings.

I've retired some cruft based on the coverage tests and suppressed various 
helper classes and debug code. The remaining uncovered sections are all 
significant.

The code is now up to 82% coverage. The number should be 100% before 
announcing the project.

Edward



Re: #1467: Use traditional unit test (almost?) everywhere

2020-01-09 Thread Brian Theado
On Thu, Jan 9, 2020 at 5:10 AM Edward K. Ream  wrote:

> On Wed, Jan 8, 2020 at 5:55 AM Brian Theado 
> wrote:
>
> > I often read unit tests in source code projects in the hope of finding
> simple, concrete usage examples. These examples not only serve to test the
> code, but also serve as documentation on how to use it.
>
> > But from that I don't see what the outputs are. It doesn't show me what
> a TOG object can do and what it is for.
>
> For an overview, see the Theory of Operation in LeoDocs.leo.
>

Thanks, I did read those docs and didn't find what I was looking for. Maybe
after I understand better, I can provide some concrete, constructive
feedback on those docs.


> > Is there some code you can add which makes it easy to see how things
> work?
>
> Good question.  The dozen or so tests in the TestFstringify class assert
> that a small example produces the desired result.
>

Yes, those are nice. Concrete inputs and concrete outputs in the test.
Exactly what I was hoping for in a least few of the TOG tests.


> To see the two-way links produced by the TOG class, just call
> dump_tree(tree) at the end of *any* test. For example, in
> TestTOG.test_ClassDef2, replace:
>
> self.make_data(contents)
>
> by:
>
> contents, tokens, tree = self.make_data(contents)
> dump_tree(tree)
>
> The output will be:
>
> Tree...
>
> parent         lines  node                 tokens
> ======         =====  ====                 ======
>                6..7   0.Module:            newline.33(6:0)
>   0.Module            1.Expr:
>   1.Expr        1     2.Str: s='ds 1'      string.1("""ds 1""")
>   0.Module      1..2  3.ClassDef:          newline.2(1:11) name.3(class) name.5(TestClass) op.6=:
>   3.ClassDef          4.Expr:
>   4.Expr        2..3  5.Str: s='ds 2'      newline.7(2:17) string.9("""ds 2""")
>   3.ClassDef    3..4  6.FunctionDef:       newline.10(3:15) name.12(def) name.14(long_name) op.23=:
>   6.FunctionDef  4    7.arguments:         op.20==
>   7.arguments    4    8.arg:               name.16(a)
>   7.arguments    4    9.arg:               name.19(b)
>   7.arguments    4    10.Num: n=2          number.21(2)
>   6.FunctionDef       11.Expr:
>  11.Expr        4..5  12.Str: s='ds 3'     newline.24(4:27) string.26("""ds 3""")
>   6.FunctionDef       13.Expr:
>  13.Expr              14.Call:
>  14.Call        5..6  15.Name: id='print'  newline.27(5:19) name.29(print)
>  14.Call        6     16.Str: s='done'     string.31('done')
>
>
This looks useful, thanks. I'll take a look.



Re: #1467: Use traditional unit test (almost?) everywhere

2020-01-09 Thread Edward K. Ream
On Tue, Jan 7, 2020 at 1:24 PM Brian Theado  wrote:

> I still think full line-by-line coverage analysis will be very valuable.

I agree. I have just created #1474
 for this. Thanks for
the detailed report.

Edward



Re: #1467: Use traditional unit test (almost?) everywhere

2020-01-09 Thread Edward K. Ream
On Wed, Jan 8, 2020 at 5:55 AM Brian Theado  wrote:

> I often read unit tests in source code projects in the hope of finding 
simple, concrete usage examples. These examples not only serve to test the 
code, but also serve as documentation on how to use it.

> But from that I don't see what the outputs are. It doesn't show me what 
a TOG object can do and what it is for. 

For an overview, see the Theory of Operation in LeoDocs.leo.

> Is there some code you can add which makes it easy to see how things 
work? 

Good question.  The dozen or so tests in the TestFstringify class assert 
that a small example produces the desired result.

To see the two-way links produced by the TOG class, just call 
dump_tree(tree) at the end of *any* test. For example, in 
TestTOG.test_ClassDef2, replace:

self.make_data(contents)

by:

contents, tokens, tree = self.make_data(contents)
dump_tree(tree)

The output will be:

Tree...

parent         lines  node                 tokens
======         =====  ====                 ======
               6..7   0.Module:            newline.33(6:0)
  0.Module            1.Expr:
  1.Expr        1     2.Str: s='ds 1'      string.1("""ds 1""")
  0.Module      1..2  3.ClassDef:          newline.2(1:11) name.3(class) name.5(TestClass) op.6=:
  3.ClassDef          4.Expr:
  4.Expr        2..3  5.Str: s='ds 2'      newline.7(2:17) string.9("""ds 2""")
  3.ClassDef    3..4  6.FunctionDef:       newline.10(3:15) name.12(def) name.14(long_name) op.23=:
  6.FunctionDef  4    7.arguments:         op.20==
  7.arguments    4    8.arg:               name.16(a)
  7.arguments    4    9.arg:               name.19(b)
  7.arguments    4    10.Num: n=2          number.21(2)
  6.FunctionDef       11.Expr:
 11.Expr        4..5  12.Str: s='ds 3'     newline.24(4:27) string.26("""ds 3""")
  6.FunctionDef       13.Expr:
 13.Expr              14.Call:
 14.Call        5..6  15.Name: id='print'  newline.27(5:19) name.29(print)
 14.Call        6     16.Str: s='done'     string.31('done')

HTH.

Edward



Re: #1467: Use traditional unit test (almost?) everywhere

2020-01-09 Thread Edward K. Ream
On Tue, Jan 7, 2020 at 1:32 PM Brian Theado  wrote:

>> The asserts in tog.sync_tokens suffice to create strong unit tests.

>  Maybe if I really want to understand, I can intentionally break the code
in a few places and see how the failures work.

Yes, that should work. The assert in tog.sync_tokens ensures only that the
traversal of the tree visits nodes in the correct order. The coverage tests
you suggest would ensure that all kinds of parse tree nodes are actually
tested, which is a separate issue.

Edward



Re: #1467: Use traditional unit test (almost?) everywhere

2020-01-08 Thread Brian Theado
I've been thinking about this more and studying the code and reading some
of your initial documentation.

I often read unit tests in source code projects in the hope of finding
simple, concrete usage examples. These examples not only serve to test the
code, but also serve as documentation on how to use it.

For being able to see concrete inputs, the test methods of the TestTOG
class do a good job. For example:

def test_attribute(self):
contents = r"""\
open(os.devnull, "w")
"""
self.make_data(contents)


But from that I don't see what the outputs are. It doesn't show me what a
TOG object can do and what it is for. From the "Generator" in the name, I
can guess that somehow there is a generator involved, so I should be able to
iterate over it or create a list from it. But how is that done? I looked at
the make_data method, but that didn't clear things up for me.

Is there some code you can add which makes it easy to see how things work?
Something like this for example:

next(TokenOrderGenerator.fromString(r"""open(os.devnull, "w")""")) == ???

or

list(TokenOrderGenerator.fromString(r"""open(os.devnull, "w")""")) == [???,
???, ???]


I don't know what to put in the question marks above because I still don't
understand what TOG generator instances yield.

The "fromString" is a hypothetical static factory method (
https://stackoverflow.com/questions/929021/what-are-static-factory-methods/929273)
which makes it easy to instantiate the class.
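Brian's hypothetical factory could look roughly like this. The class body is a stand-in (the real TokenOrderGenerator in leoAst.py has a different API), but it shows the from-string construction pattern:

```python
import io
import tokenize

class TokenOrderGenerator:
    """Sketch only: illustrates the static-factory idea, not Leo's actual API."""

    def __init__(self, tokens):
        self.tokens = tokens

    @classmethod
    def from_string(cls, source):
        # Hide tokenization details behind a one-call constructor.
        readline = io.StringIO(source).readline
        return cls(list(tokenize.generate_tokens(readline)))

    def __iter__(self):
        return iter(self.tokens)

tog = TokenOrderGenerator.from_string('open(os.devnull, "w")\n')
print(next(iter(tog)).string)  # prints: open
```

With such a factory, the "???" values above would simply be the items the instance yields when iterated.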

On Tue, Jan 7, 2020 at 1:32 PM Brian Theado  wrote:

>
>
> On Tue, Jan 7, 2020 at 5:01 AM Edward K. Ream  wrote:
>
>> On Sunday, December 29, 2019 at 7:19:03 PM UTC-5, btheado wrote:
>>
>> I was looking at the tests in leoAst.py in the fstringify branch and I
>>> don't find any asserts in the tests themselves.
>>>
>>
>> The tests in the TestTOG are actually extremely strong.  I have just
>> added the following to the docstring for the TestTOG class:
>>
>> QQQ
>> These tests call BaseTest.make_data, which creates the two-way links
>> between tokens and the parse tree.
>>
>> The asserts in tog.sync_tokens suffice to create strong unit tests.
>> QQQ
>>
>
> I'll take your word for it as it isn't obvious to me from a glance at the
> code. I don't have a huge interest in the tokens project so I don't really
> understand where the parse tree is coming from and how it relates to the
> tokens. Maybe if I really want to understand, I can intentionally break the
> code in a few places and see how the failures work.
>



Re: #1467: Use traditional unit test (almost?) everywhere

2020-01-07 Thread Brian Theado
On Tue, Jan 7, 2020 at 5:01 AM Edward K. Ream  wrote:

> On Sunday, December 29, 2019 at 7:19:03 PM UTC-5, btheado wrote:
>
> I was looking at the tests in leoAst.py in the fstringify branch and I
>> don't find any asserts in the tests themselves.
>>
>
> The tests in the TestTOG are actually extremely strong.  I have just added
> the following to the docstring for the TestTOG class:
>
> QQQ
> These tests call BaseTest.make_data, which creates the two-way links
> between tokens and the parse tree.
>
> The asserts in tog.sync_tokens suffice to create strong unit tests.
> QQQ
>

I'll take your word for it as it isn't obvious to me from a glance at the
code. I don't have a huge interest in the tokens project so I don't really
understand where the parse tree is coming from and how it relates to the
tokens. Maybe if I really want to understand, I can intentionally break the
code in a few places and see how the failures work.



Re: #1467: Use traditional unit test (almost?) everywhere

2020-01-07 Thread Brian Theado
On Sun, Dec 29, 2019 at 5:54 PM Edward K. Ream  wrote:

> On Sun, Dec 29, 2019 at 5:29 PM Brian Theado 
> wrote:
>
[...]

> > You might also find the code coverage report useful:
>
> Yes, that's interesting. The TOG classes remember which visitors have
> been visited, so that's probably enough.
>

I still think full line-by-line coverage analysis will be very valuable. I
ran it like this:

pip install pytest-cov
pytest --cov-report html --cov-report term-missing --cov=leo.core.leoAst
leo/core/leoAst.py
firefox htmlcov/leo_core_leoAst_py.html  ;# Or whatever your web browser is


It shows only 52% coverage, but maybe there is a lot more to that file
which you don't want to test?

The TOG class looks to have a higher percentage of the code covered than
that, but there are still plenty of gaps.  Maybe you aren't viewing the
your visitor report or have not yet prioritized adding tests for all the
visitors, but here is a partial list of completely untested methods:

sync_newline
do_AsyncFunctionDef
do_Interactive
do_Expression
do_Constant
do_ExtSlice
do_Set
do_SetComp


Even in the methods which are tested, there are some interesting-looking
branches of code not tested. Here are some examples:

   1. do_Keyword, the if node.arg branch
   2. do_Slice, if step is not None branch
   3. do_Dict, "Zero or more expressions" for loop
   4. do_FunctionDef, if returns is not None branch
   5. the visitor method itself has several untested sections

BTW, I had to change the hard-coded directory path in make_file_data in
order to get all the tests to pass.



Re: #1467: Use traditional unit test (almost?) everywhere

2020-01-07 Thread Edward K. Ream
On Sunday, December 29, 2019 at 7:19:03 PM UTC-5, btheado wrote:

I was looking at the tests in leoAst.py in the fstringify branch and I 
> don't find any asserts in the tests themselves.
>

The tests in the TestTOG are actually extremely strong.  I have just added 
the following to the docstring for the TestTOG class:

QQQ
These tests call BaseTest.make_data, which creates the two-way links 
between tokens and the parse tree.

The asserts in tog.sync_tokens suffice to create strong unit tests.
QQQ

I'm not so sure about the test in the TestTOT class.  It looks like it just 
ensures that two traversals run without crashing or looping.  Again, this 
is a stronger test than it might at first appear.  In any case, the 
Fstringify class is a subclass of the TokenOrderTraverser class, so all the 
tests in the TestFstringify class are implicitly also a test of the TOT 
class.

Finally, the tests in the TestFstringify class do all include the usual 
asserts.

Edward



Re: #1467: Use traditional unit test (almost?) everywhere

2019-12-29 Thread Brian Theado
Edward,

On Sun, Dec 29, 2019 at 5:54 PM Edward K. Ream  wrote:
[...]

> > You might need to introduce failed tests in order to experience the
> better assert failure reporting?
>
> Leo's existing unit tests use asserts.  It's no big deal.
>

I was looking at the tests in leoAst.py in the fstringify branch and I
don't find any asserts in the tests themselves. I expected to find
assertions which verify the actual results of the test match some expected
results. Is this just an early version of the tests or do you have some
different approach?

Brian



Re: #1467: Use traditional unit test (almost?) everywhere

2019-12-29 Thread Edward K. Ream
On Sun, Dec 29, 2019 at 5:29 PM Brian Theado  wrote:

> Your clone should work just as well. From the clone be sure you are on
the pytest-experiment branch (git checkout pytest-experiment).

That worked.  Thanks.

>> So I don't understand what difference pytest makes.  What am I missing?

> According to https://docs.pytest.org/en/latest/unittest.html#unittest,
pytest supports running unittest style tests.

> Clearly that's working. I only know I like the function-based style which
pytest encourages.

The tests in the test classes in leoAst.py segregate tests into test_*
functions. That's probably why both unittest and pytest find and run the
tests.

> You might need to introduce failed tests in order to experience the
better assert failure reporting?

Leo's existing unit tests use asserts.  It's no big deal.

> You might also find the code coverage report useful:

Yes, that's interesting. The TOG classes remember which visitors have been
visited, so that's probably enough.

>> Btw, giving a directory does *not* work (pytest 3.6.7).  For example,
pytest leo\core finds no tests.

> the default test discovery for pytest will only find tests within
test_*.py or *_test.py files.

That explains it.  Thanks.

> 3.6 is almost a year and a half old, I think...Might be useful for you to
upgrade.

Will do. Thanks for your help.

Edward



Re: #1467: Use traditional unit test (almost?) everywhere

2019-12-29 Thread Brian Theado
Edward,

On Sun, Dec 29, 2019 at 2:03 PM Edward K. Ream  wrote:

> On Saturday, December 28, 2019 at 10:44:39 AM UTC-5, btheado wrote:
>
> I've been experimenting lately writing pytest tests for leo. I just
>> published my work at
>> https://github.com/btheado/leo-editor/tree/pytest-experiment.
>>
>
>> You should be able to try it out with these commands (untested):
>>
>> git origin add btheado https://github.com/btheado/leo-editor.git
>> git checkout btheado pytest-experiment
>> pip install pytest
>> pytest leo/test/pytest
>>
>>
> The first line didn't work.  I did `git clone
> https://github.com/btheado/leo-editor.git brian`, but I don't see the
> pytest folder in the leo/test folder, which is strange.
>

Sorry about that. Not sure what is wrong with my instructions. Your clone
should work just as well. From the clone be sure you are on the
pytest-experiment branch (git checkout pytest-experiment). Did you already
try that and not find the leo/test/pytest folder? If that doesn't work, I'm
not sure what to say other than you can view the code at github:
https://github.com/btheado/leo-editor/tree/pytest-experiment/leo/test/pytest
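Incidentally, `git origin add` is not a valid git subcommand, which would explain the failure. Assuming the fork and branch names above, a working sequence would be:

```shell
# Register the fork as a remote, fetch its branches, then check out
# the experiment branch as a local tracking branch.
git remote add btheado https://github.com/btheado/leo-editor.git
git fetch btheado
git checkout -b pytest-experiment btheado/pytest-experiment
pip install pytest
pytest leo/test/pytest
```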


> With my present setup,  `pytest leo\core\leoAst.py` executes all the
> present unit tests in leoAst.py:
>
> leo\core\leoAst.py
> ...
>
> == 71 passed in 5.42 seconds
> ==
>
> So I don't understand what difference pytest makes.  What am I missing?
>

According to https://docs.pytest.org/en/latest/unittest.html#unittest,
pytest supports running unittest style tests. That page has a list of
purported benefits of using pytest even for unittest style tests. I haven't
used unittest to write tests; I only know I like the function-based style
which pytest encourages. As it says on the main page (
https://docs.pytest.org/en/latest/index.html): "The pytest framework makes
it easy to write small tests, yet scales to support complex functional
testing for applications and libraries."

You might need to introduce failed tests in order to experience the better
assert failure reporting?

You might also find the code coverage report useful:

pip install pytest-cov
pytest --cov-report html --cov-report term-missing --cov=leo.core.leoAst
leo/core/leoAst.py
firefox htmlcov/leo_core_leoAst_py.html



> Btw, giving a directory does *not* work (pytest 3.6.7).  For example,
> pytest leo\core finds no tests.
>

https://docs.pytest.org/en/latest/goodpractices.html#test-discovery - the
default test discovery for pytest will only find tests within test_*.py or
*_test.py files
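Those discovery rules are configurable; a hypothetical pytest.ini (not something the repo ships) could widen them so that a bare `pytest leo/core` would collect tests from ordinary module files:

```ini
# Hypothetical pytest.ini: widen test discovery beyond test_*.py files.
[pytest]
python_files = *.py
python_classes = Test*
python_functions = test_*
```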

3.6 is almost a year and a half old, I think. I only started using pytest
version 5.2+, so I'm not very familiar with old versions. Might be useful
for you to upgrade.



Re: #1467: Use traditional unit test (almost?) everywhere

2019-12-29 Thread Edward K. Ream
On Saturday, December 28, 2019 at 10:44:39 AM UTC-5, btheado wrote:

I've been experimenting lately writing pytest tests for leo. I just 
> published my work at 
> https://github.com/btheado/leo-editor/tree/pytest-experiment. 
>

> You should be able to try it out with these commands (untested):
>
> git origin add btheado https://github.com/btheado/leo-editor.git
> git checkout btheado pytest-experiment
> pip install pytest
> pytest leo/test/pytest
>
>
The first line didn't work.  I did `git clone 
https://github.com/btheado/leo-editor.git brian`, but I don't see the 
pytest folder in the leo/test folder, which is strange.

With my present setup,  `pytest leo\core\leoAst.py` executes all the 
present unit tests in leoAst.py:

leo\core\leoAst.py 
...

== 71 passed in 5.42 seconds 
==

So I don't understand what difference pytest makes.  What am I missing?

Btw, giving a directory does *not* work (pytest 3.6.7).  For example, 
pytest leo\core finds no tests.

Edward



Re: #1467: Use traditional unit test (almost?) everywhere

2019-12-28 Thread Terry Brown
On Sat, 28 Dec 2019 02:17:06 -0800 (PST)
"Edward K. Ream"  wrote:

> Bob's remarks yesterday about unit testing had me look at Leo's unit 
> testing framework with new eyes. 
> 
> #1467  suggests
> using traditional unit tests everywhere, or maybe almost everywhere.

Sounds like a good idea.  I suspect pytest https://docs.pytest.org/
is used by major projects more routinely than Python's "built-in" library, so I
would evaluate that before making the switch.

Cheers -Terry



Re: #1467: Use traditional unit test (almost?) everywhere

2019-12-28 Thread Brian Theado
I've been experimenting lately writing pytest tests for leo. I just
published my work at
https://github.com/btheado/leo-editor/tree/pytest-experiment.

You should be able try it out with these commands (untested):

git remote add btheado https://github.com/btheado/leo-editor.git
git fetch btheado
git checkout -b pytest-experiment btheado/pytest-experiment
pip install pytest
pytest leo/test/pytest


The tests I wrote are in leo/test/pytest.leo

The first set of tests I wrote are for testing the leoNode.py file. I was
interested in seeing what it was like to try to get full code coverage by
the tests. There is a project called coverage.py (
https://coverage.readthedocs.io) which can produce nice reports about which
lines of code have been executed and which have not. I tried using this
coverage.py against leo's current unit tests. Somehow coverage.py was not
able to properly mark all the executed code. I suspect something in Leo's
unit tests caused coverage.py to lose track but I'm not sure. I planned on
doing binary search through the unit tests (enable/disable) until I found
which ones caused coverage to lose its way. I never got around to doing
that and instead used the pytest coverage.py plugin on some unit tests I
wrote.

My leo/test/pytest/leoNodes_test.py file contains 24 tests. I picked some
methods of the Position class in leoNodes.py and strove to get full
coverage on them. With 24 tests, I did fully cover them, but it is only a
small percentage of the whole file. You can see the coverage achieved by
running these commands:

pip install pytest-cov
pytest --cov-report html --cov-report term-missing --cov=leo.core.leoNodes leo/test/pytest/leoNodes_test.py
firefox htmlcov/leo_core_leoNodes_py.html  # Or whatever your web browser is


The resulting web page highlights all the lines of code which haven't been
executed in red. From the tests I wrote, you should see 100% coverage for
several of the Position methods including the comparison
operators, convertTreeToString, moreHead,
moreBody, children, following_siblings, nearest_roots,
and nearest_unique_roots.

In the process of writing those tests, I found what I think is a bug and I
discovered the very nice xfail (
https://docs.pytest.org/en/latest/skipping.html#xfail) feature of pytest.
With it you can mark test that you expect to fail with the xfail decorator.
This will suppress all the verbose information pytest gives when there is a
failure (stack trace, etc.). Normally, this verbose information is helpful
in tracking down the reason for failures. But if the bug is one which can't
or won't be fixed right away, all the extra information can get in the way.
So the xfail marker will suppress all the failure details unless you run
using the --runxfail command line option. That way you can keep your pytest
runs clean, but then easily access the failure details when you are ready
to fix the bug.

To see the details of the expected failures I've identified as bugs (2 of
them so far), run it like this:

pytest leo/test/pytest --runxfail


I'm a big fan of writing tests which fail before a bug is fixed and pass
after a bug is fixed. The xfail feature is very helpful in this regard.

Pytest also has the feature of fixtures (
https://docs.pytest.org/en/latest/fixture.html) which I've made use of. It
allows all tests to be written as simple functions. No classes needed.
Anything which normally would be done in a setup method can be implemented
as a fixture instead. It seems there are some tradeoffs here, but overall I
like it a lot.

Fixtures can be defined in any test file, but if they are common to
multiple test files, they can be defined in a file named conftest.py. This
makes the fixtures available to all the files in the directory. I've
defined my fixtures there.

My fixtures include a bridge fixture which can be used to access the leo
bridge. I also have fixtures for several example outlines. I didn't like
that the fixtures ended up "distant" from the tests themselves, so I came
up with a naming convention for the example outlines which allows you to
know the exact structure and contents of the outline just by looking at the
name of the outline. I tried to explain this naming convention in
conftest.py, but I'm not sure if it will be clear to anyone other than
myself.

Using pytest provides a lot of benefits:

   - Information provided about failures is excellent
   - marking expected failures with xfail is very useful for documenting
   bugs before they are fixed
   - fixtures allow all tests to be written as simple functions
   - coverage plugin allows code coverage to be measured
   - there are many, many other plugins

I'm not suggesting Leo switch to using pytest. With this work I've shared,
I hope it is easy for those familiar with Leo's unit tests to
evaluate the nice features of pytest and decide whether it is worth further
consideration.

On Sat, Dec 28, 2019 at 8:40 AM Edward K. Ream  wrote:

>
> On Saturday, December 28, 2019 at 6:07:03 AM UTC-5, vitalije wrote:
>
> For a 

Re: #1467: Use traditional unit test (almost?) everywhere

2019-12-28 Thread Edward K. Ream

On Saturday, December 28, 2019 at 6:07:03 AM UTC-5, vitalije wrote:

For a long time I've been feeling that Leo unit tests don't prove anything. 
> They usually don't exercise real Leo code at all or if they do, they 
> exercise just a small portion of it. So, the fact that unit tests are 
> passing doesn't mean Leo would work properly for real users.
>

This is a separate issue. As I understand it, unit tests are meant to test 
small portions of code. They can also ensure that specific bugs don't 
happen again.

It might take a huge effort to fully eliminate all `if g.unitTesting` 
> conditionals from Leo core, but it might be worth doing.
>

A second separate issue. It's not likely to happen, because tests in 
unitTest.leo test outline operations. It's natural to run those tests in a 
real outline.  Indeed, I don't see how else to run those tests.  I've just 
updated the title and first comment of #1467 to indicate that leoTest.py 
and unitTest.leo will likely remain.

*Summary*

unitTest.leo seems necessary to test outline operations. @test nodes are 
natural in that environment.

For all other work, using stand-alone test classes should be easier and 
more natural. Even in unitTest.leo, there will likely be ways of leveraging 
stand-alone test classes. I'll be investigating the possibilities...

Using traditional unit tests where possible will remove another objection 
to using Leo.

Edward



Re: #1467: Use traditional unit test (almost?) everywhere

2019-12-28 Thread vitalije
This is a good idea. I suppose you don't plan to make a lot of changes in 
the test cases. But even without changing test cases this would be a big 
step forward.

For a long time I've been feeling that Leo unit tests don't prove anything. 
They usually don't exercise real Leo code at all or if they do, they 
exercise just a small portion of it. So, the fact that unit tests are 
passing doesn't mean Leo would work properly for real users.

It might take a huge effort to fully eliminate all `if g.unitTesting` 
conditionals from Leo core, but it might be worth doing. Anyway it would be 
much clearer to see if it is possible to eliminate those conditionals after 
transferring unit tests into external files.

Vitalije



#1467: Use traditional unit test (almost?) everywhere

2019-12-28 Thread Edward K. Ream
Bob's remarks yesterday about unit testing had me look at Leo's unit 
testing framework with new eyes. 

#1467  suggests using 
traditional unit tests everywhere, or maybe almost everywhere. Imo, this 
will be a significant improvement.  I'll use a script to do the conversion, 
so it shouldn't be a major project.

*Benefits of traditional unit tests*

Yesterday I converted (by hand) the @test nodes for leoAst.py in 
unitTest.leo to several Test classes within leoAst.py itself. There were no 
significant problems. Instead, I saw more and more advantages to 
traditional test classes and functions:

- Using classes and functions allows the test code to reside within the 
file being tested. Such code can be used by people who have not installed 
Leo.

- Common code can reside in, say, a BaseTest class. With @test nodes, such 
common code requires the ugly @common-code hack.

- Tests can be grouped by putting them in distinct Test classes, which will 
typically be subclasses of BaseTest.  The unittest module can run all tests 
within a module, or within a specific test class, or even a single test 
function:

python -m unittest test_module1 test_module2
python -m unittest test_module.TestClass
python -m unittest test_module.TestClass.test_method

  
Such commands can be run within Leo.  For example, I used the 
following @button node yesterday:

import os
g.cls()
leo_editor_dir = g.os_path_finalize_join(g.app.loadDir, '..', '..')
os.chdir(leo_editor_dir)
commands = r'python -m unittest leo.core.leoAst.TestTOG'
g.execute_shell_commands(commands, trace=False)
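The BaseTest idea above can be sketched with plain unittest (class, method, and data names here are illustrative, not Leo's actual test code):

```python
import unittest

class BaseTest(unittest.TestCase):
    """Common setup shared by all concrete test classes."""
    def setUp(self):
        self.data = [3, 1, 2]  # fresh fixture data before every test

class TestSorting(BaseTest):
    """Inherits setUp from BaseTest; no @common-code hack needed."""
    def test_sorted(self):
        self.assertEqual(sorted(self.data), [1, 2, 3])

if __name__ == '__main__':
    unittest.main()
```

Running `python -m unittest module.TestSorting` picks up the inherited setUp automatically.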

*Grouping tests*

The big advantage (or so I thought) to @test nodes was that it is easy to 
run individual tests or group of tests.  But this advantage is an 
illusion!  Using clones, it is easy to run any desired group of test 
functions:

- Create a RecentTests test class.

- Use clones to add test functions to that class.

- Run the script in an @command run-recent-tests node, similar to that 
shown above.

*Summary*

@test nodes aren't nearly as useful as I thought they were. Traditional 
unit tests have many advantages over @test nodes.

Traditional unit tests play well with Leo. @command/button node can run 
groups of tests. Clones can be used to add test functions to test classes.

The new test classes will simplify Test Driven Development of Leo itself.

I'll use scripts to convert tests in unitTest.leo to test classes in Leo's 
core modules. unitTest.leo and (maybe) leoTest.py will likely be retired.

Edward
