Thanks! I didn't even know about that module. Does this take into
account your local changes to the tokenizer, though? I've added a new
token type to Grammar/Tokens, and some code to tokenizer.c to return
that token type in appropriate circumstances. I've stepped through the
tokenizer in the debugger, so I /think/ it's working. But when I run
python -m tokenize as you suggest, I don't see my custom token type.
The devguide mentions that "Lib/tokenize.py needs changes to match
changes to the tokenizer", so I'm guessing I would have to manually
repeat my changes in tokenize.py to see them, right? But what I want to
see is what tokenizer.c is producing when my newly built Python binary
actually reads a file.
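One thing I might try in the meantime is dropping a temporary fprintf
into Parser/tokenizer.c at the point where each token is handed back to
the parser. The function and the name-table identifier below are from
memory of the sources, so treat this as an untested sketch rather than
the real code:

    /* Untested sketch of a temporary dump in Parser/tokenizer.c; adjust
     * the names to whatever your checkout actually uses. */
    int
    PyTokenizer_Get(struct tok_state *tok, const char **p_start, const char **p_end)
    {
        int result = tok_get(tok, p_start, p_end);
        if (tok->decoding_erred) {
            result = ERRORTOKEN;
            tok->done = E_DECODE;
        }
        /* Print every token the C tokenizer produces.  The name table is
         * generated into Parser/token.c from Grammar/Tokens, so a token
         * type added there should show up here by name. */
        if (*p_start != NULL && *p_end != NULL) {
            fprintf(stderr, "TOKEN %-3d %-15s '%.*s'\n",
                    result,
                    _PyParser_TokenNames[result],
                    (int)(*p_end - *p_start),
                    *p_start);
        }
        return result;
    }

That would also pick up tokens from anything else the interpreter
compiles at startup, so the output can be noisy, but it should at least
show whether the new token type ever comes out of tokenizer.c.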
On 30/05/2022 00:09, Jean Abou Samra wrote:
On 30/05/2022 at 00:59, Jack wrote:
Hi, I'm just getting into the CPython codebase for fun, and I've
started messing around with the tokenizer and the grammar. I was
wondering, is there a way to print out the results of the tokenizer
(the stream of tokens it generates) in a human-readable format? It
would be really helpful for debugging. Hope the question's not too
basic.
python -m tokenize file.py
?
See https://docs.python.org/3/library/tokenize.html#command-line-usage
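For example, given a hello.py containing just print('hello'), the
output looks roughly like this (column widths from memory, so they may
differ):

    $ python -m tokenize hello.py
    0,0-0,0:            ENCODING       'utf-8'
    1,0-1,5:            NAME           'print'
    1,5-1,6:            OP             '('
    1,6-1,13:           STRING         "'hello'"
    1,13-1,14:          OP             ')'
    1,14-1,15:          NEWLINE        '\n'
    2,0-2,0:            ENDMARKER      ''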
Cheers,
Jean