On Thu, Dec 21, 2017 at 11:48 AM, Jeroen Demeyer <[email protected]> wrote:
> On 2017-12-21 10:58, Erik Bray wrote:
>>
>> Do you mean you need some test to make
>> sure that a test someone wrote is actually being run?
>
>
> Yes. We test that all things which look like doctests are actually tested as
> doctest.
>
>> If there were a quoting issue in the doctest parser that happened to
>> be caught by...a different parser, then that was pure luck.
>
>
> ...but it worked!
>
>> It would
>> be better to write regression tests against the actual parser.
>
>
> Of course, there are regression tests. Still, we cannot foresee all bugs.
> The whole Sage library is a very big test case: at least once, a new bug was
> found this way.

No, you can't foresee all bugs, but writing less robust code just to
compare it against already more robust code (especially when that less
robust code produces false positives) is not a good way to design a
test, when you could just as easily test the real parser against
specific test cases.  It's true you might catch bugs this way, but
you'd have just as much luck with fuzz testing or something similar.
It's somewhat arbitrary.

Anyway, even if we can't agree that this test is all but useless, at
the very least how would you suggest proceeding on
https://trac.sagemath.org/ticket/24261 ?  It presents a problem
because different numbers of tests will be skipped depending on
whether we're running on Python 2 or Python 3 (unless for every #py2
flag there were also a corresponding #py3 flag, which partly defeats
the purpose...)
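To illustrate the mismatch (this is a simplified sketch, not Sage's
actual doctest parser, and the docstring below is a made-up example):
with version-specific tags, the set of skipped tests depends on which
interpreter is running, so the skip totals only match if the tags are
perfectly paired.

```python
import re

# Hypothetical example docstring: some lines tagged "# py2", some
# "# py3", some untagged.  The tag counts are deliberately unequal.
DOCSTRING = """
sage: sorted(d.items())  # py2
sage: sorted(d.items())  # py3
sage: 1 + 1
sage: print 'hello'  # py2
"""

def skipped_count(docstring, python_major):
    """Count lines that would be skipped under the given Python major
    version: a line tagged for the *other* version is skipped."""
    skipped = 0
    for line in docstring.splitlines():
        m = re.search(r'#\s*py([23])\b', line)
        if m and int(m.group(1)) != python_major:
            skipped += 1
    return skipped

# Unless every "# py2" test has a "# py3" counterpart, totals differ:
print(skipped_count(DOCSTRING, 2))  # 1 line (the "# py3" one) skipped
print(skipped_count(DOCSTRING, 3))  # 2 lines (the "# py2" ones) skipped
```

So any check that compares a fixed expected number of executed tests
against what actually ran will disagree between the two interpreters.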
