On 17.03.16 15:14, M.-A. Lemburg wrote:
> On 17.03.2016 01:29, Guido van Rossum wrote:
>> Should we recommend that everyone use tokenize.detect_encoding()?
> I'd prefer a separate utility for this somewhere, since
> tokenize.detect_encoding() is not available in Python 2.
> I've attached an example implementation with tests, which works
> in Python 2.7 and 3.
Sorry, but this code doesn't match the behaviour of the Python
interpreter, nor of other tools. I suggest backporting
tokenize.detect_encoding() instead (but be aware that the default
encoding in Python 2 is ASCII, not UTF-8).
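For reference, a minimal sketch of what the Python 3 function does: tokenize.detect_encoding() reads up to the first two lines of a source file, honouring a UTF-8 BOM and a PEP 263 coding declaration, and falls back to the interpreter's default (UTF-8 in Python 3) when neither is present. The byte strings below are illustrative inputs, not from the thread.

```python
import io
import tokenize

# A source file declaring its encoding via a PEP 263 coding comment.
declared = b"# -*- coding: latin-1 -*-\nprint('hello')\n"
encoding, lines = tokenize.detect_encoding(io.BytesIO(declared).readline)
# detect_encoding() normalizes aliases, so "latin-1" comes back
# as "iso-8859-1".
print(encoding)

# A source file with no BOM and no coding declaration falls back to
# Python 3's default source encoding, UTF-8.
plain = b"print('hello')\n"
default, _ = tokenize.detect_encoding(io.BytesIO(plain).readline)
print(default)
```

A Python 2 backport would need to swap that UTF-8 fallback for ASCII, as noted above, since that is Python 2's default source encoding.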
Python-Dev mailing list