Gareth Rees added the comment:
This morning I noticed that I had forgotten to update the library
reference, and I also noticed two more problems to add to the list
above:
6. Although Lib/test/test_tokenize.py looks like it contains tests for
backslash-newline handling, these tests are ineffective. Here they
are:
>>> roundtrip("x=1+\\\\n"
... "1\\n"
... "# This is a comment\\\\n"
... "# This also\\n")
True
>>> roundtrip("# Comment \\\\nx = 0")
True
There are two problems here: (i) because of the double level of string
escaping (the doctest is itself inside a docstring), the test strings
contain a backslash followed by the letter "n", not a backslash-newline
continuation; (ii) the roundtrip() test is too weak to detect this:
tokenize() emits an ERRORTOKEN for the stray backslash and untokenize()
faithfully restores it, so the round-trip property is satisfied anyway.
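To make point (i) concrete, here is a sketch of the two levels of
unescaping (the variable names are mine, not from the test suite):

```python
# The doctest lives inside test_tokenize.py's docstring, so the module
# parser unescapes it once before doctest ever sees it.  After that
# first pass, the "\\\\n" written in the file has become "\\n":
seen_by_doctest = r'"x=1+\\n"'   # the literal source doctest evaluates

# doctest then evaluates the line as Python, a second level of unescaping:
value = eval(seen_by_doctest)
print(repr(value))               # 'x=1+\\n'
assert value == "x=1+\\n"        # backslash + letter "n": six characters
assert "\n" not in value         # no newline, so continuation is never tested

# What the test presumably meant to construct:
intended = "x=1+\\\n"            # backslash followed by a real newline
assert intended.endswith("\\\n")
```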
7. Problem 6 shows how hard it is to get this kind of test right in a
   doctest. It would be easier to ensure the correctness of these
   tests if the doctests were read from a separate text file, so that
   they need only one level of string escaping.
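As a sketch of that suggestion (the temporary file and its contents are
hypothetical, not part of the patch): a doctest read from a plain-text
file via doctest.testfile() needs only one level of escaping:

```python
import doctest
import os
import tempfile

# In a standalone doctest file there is no enclosing docstring, so
# "\\n" already means backslash + letter "n" -- one escaping level.
content = r'>>> "x=1+\\n"' + "\n" + r"'x=1+\\n'" + "\n"

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(content)
    path = f.name

try:
    # module_relative=False lets testfile() accept an absolute path.
    failed, attempted = doctest.testfile(path, module_relative=False)
    assert failed == 0 and attempted == 1
finally:
    os.remove(path)
```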
I fixed problem 6 by updating these tests to use dump_tokens() instead
of roundtrip(). I have not fixed problem 7 (like problems 4 and 5, it
can be left for another issue). Revised patch attached.
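For comparison, a genuine backslash-newline continuation tokenizes
cleanly and survives untokenize(); this sketch uses only the public
tokenize API, not the test suite's dump_tokens() helper:

```python
import io
import tokenize

# A real line continuation: backslash immediately followed by newline.
source = "x = 1 +\\\n1\n"

tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))
names = [tokenize.tok_name[t.type] for t in tokens]

# The continuation is absorbed into one logical line: no ERRORTOKEN,
# and both operands appear as NUMBER tokens.
assert "ERRORTOKEN" not in names
assert names[:6] == ["NAME", "OP", "NUMBER", "OP", "NUMBER", "NEWLINE"]

# untokenize() output must tokenize back to the same (type, string) pairs.
restored = tokenize.untokenize(tokens)
tokens2 = list(tokenize.generate_tokens(io.StringIO(restored).readline))
assert [(t.type, t.string) for t in tokens] == \
       [(t.type, t.string) for t in tokens2]
```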
----------
Added file: http://bugs.python.org/file33924/Issue12691.patch
_______________________________________
Python tracker <[email protected]>
<http://bugs.python.org/issue12691>
_______________________________________