Zachary McCord <[email protected]> added the comment:
I think anyone using the tokenize module to programmatically edit Python source
wants to use, and probably does use, the undocumented behavior, which should
therefore be documented.
I ran into this issue because for me it manifested as a crash:
$ python3
>>> import tokenize
>>> tokenize.untokenize([(tokenize.STRING, "''", (1, 0), (1, 0), None)])
"''"
>>> tokenize.untokenize([(tokenize.STRING, "''", None, None, None)])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/<snip>/virtualenv/lib/python3.6/tokenize.py", line 338, in untokenize
out = ut.untokenize(iterable)
File "/<snip>/virtualenv/lib/python3.6/tokenize.py", line 272, in untokenize
self.add_whitespace(start)
File "/<snip>/virtualenv/lib/python3.6/tokenize.py", line 231, in add_whitespace
row, col = start
TypeError: 'NoneType' object is not iterable
The second call gives untokenize() input that is documented to be valid, yet it
crashes.
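As a workaround sketch (not a fix for the underlying bug), untokenize() also
accepts bare (type, string) 2-tuples; passing those takes the compatibility
path, which never touches the start/end positions and so avoids the
add_whitespace crash shown above:

```python
import tokenize

# Full 5-tuples with None positions crash in add_whitespace (the bug above),
# but 2-tuples of (token_type, token_string) skip position handling entirely.
tokens = [(tokenize.STRING, "''")]

# The compat path joins the token strings without consulting positions.
result = tokenize.untokenize(tokens)
print(result)
```

Of course this loses the exact-whitespace round-tripping that full 5-tuples
provide, so it is only viable when reconstructed spacing is acceptable.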
----------
nosy: +Zachary McCord
_______________________________________
Python tracker <[email protected]>
<https://bugs.python.org/issue35297>
_______________________________________