On 8/19/2016 1:16 AM, Terry Reedy wrote:
On 8/18/2016 8:27 PM, Eric V. Smith wrote:
So something that parses or scans a Python file and currently
understands u, b, and r to be string prefixes just needs to add f to
the prefixes it recognizes, and it can then at least understand
f-strings (and fr-strings). It doesn't need to implement a full-blown
expression parser just to find out where the end of an f-string is.
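
For such a scanner the whole change amounts to something like the
sketch below (hypothetical code, not taken from any real tool):

    # Hedged sketch (hypothetical helper, not any real tool's code): a
    # scanner that keeps a set of legal string prefixes only has to
    # extend that set for 3.6.
    PREFIXES_35 = {"r", "u", "b", "br", "rb"}
    PREFIXES_36 = PREFIXES_35 | {"f", "fr", "rf"}   # the whole change

    def string_prefix(source, pos):
        """Return the (possibly empty) prefix of a string literal
        starting at pos, or None if no string literal starts there."""
        for length in (2, 1, 0):                    # longest prefix first
            candidate = source[pos:pos + length].lower()
            quote = source[pos + length:pos + length + 1]
            if (length == 0 or candidate in PREFIXES_36) and quote in ("'", '"'):
                return source[pos:pos + length]
        return None

    print(string_prefix("f'{x}'", 0))     # 'f'  -- new in 3.6
    print(string_prefix("rb'\\x00'", 0))  # 'rb' -- already understood
    print(string_prefix("x = 1", 0))      # None -- not a string literal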

Indeed, IDLE has one prefix re, which has changed occasionally and which
I need to change for 3.6, and 4 res for the 4 unprefixed string forms,
which have been the same, AFAIK, for decades.  IDLE then prefixes all 4
string res with the prefix re and or's the results together to get the
'string' re.
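
Roughly in that spirit, a sketch (IDLE's real res live in idlelib and
differ in detail; the prefix re here is deliberately permissive):

    import re

    # The prefix re is the only piece that has to change for 3.6:
    prefix = r"(?:[rR][bBfF]|[bBfF][rR]|[rRuUbBfF])?"

    # Four res for the four unprefixed quote forms, unchanged for a long
    # time.  Each tolerates an unterminated literal so an editor can
    # still color a string as it is being typed.
    sq = r"'[^'\\\n]*(?:\\.[^'\\\n]*)*'?"
    dq = r'"[^"\\\n]*(?:\\.[^"\\\n]*)*"?'
    sq3 = r"'''[^'\\]*(?:(?:\\.|'(?!''))[^'\\]*)*(?:''')?"
    dq3 = r'"""[^"\\]*(?:(?:\\.|"(?!""))[^"\\]*)*(?:""")?'

    # Prefix each of the four with the prefix re and or them together.
    string_re = re.compile(
        "|".join(prefix + pat for pat in (sq3, dq3, sq, dq)))

    print(string_re.match('f"spam: {n}"').group())        # f"spam: {n}"
    print(string_re.match("rb'''raw\\nbytes'''").group())  # whole literal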

For something else that would become significantly more complicated to
implement, you need look no further than the stdlib's own tokenize
module. So Python itself would require changes to the parsers/lexers in
Python/ast.c, IDLE, and Lib/tokenize.py. In addition, it would require
adding token types to Include/token.h and the generated Lib/token.py,
and everyone using those files would need to adapt.

Not that it's impossible, of course. But don't underestimate the amount
of work this proposal would create for the many tools, in and outside of
Python, that examine Python code.
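
For concreteness, a quick check (not part of any patch) of what
consumers of tokenize see today: on a 3.6-era interpreter the whole
f-string arrives as a single STRING token, and that is exactly the
contract the proposal would change (later interpreters may tokenize
f-strings differently):

    import io
    import token
    import tokenize

    # Tokenize a line containing an f-string and show the token stream.
    src = "x = f'{n} items'\n"
    for tok in tokenize.generate_tokens(io.StringIO(src).readline):
        print(token.tok_name[tok.type], repr(tok.string))
    # The f-string shows up as: STRING "f'{n} items'"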

Eric.
