On Mon, 27 Sep 2010 23:45:45 -0400
Steve Holden <st...@holdenweb.com> wrote:
> On 9/27/2010 11:27 PM, Benjamin Peterson wrote:
> > 2010/9/27 Meador Inge <mead...@gmail.com>:
> >> which, as seen in the trace, is because the 'detect_encoding' function in
> >> 'Lib/tokenize.py' searches for 'BOM_UTF8' (a 'bytes' object) in 'first'
> >> (a 'str' object), the string to be tokenized.  It seems to me that strings
> >> should still be able to be tokenized, but maybe I am missing something.
> >> Is the implementation of 'detect_encoding' correct in how it attempts to
> >> determine an encoding or should I open an issue for this?
> > 
> > Tokenize only works on bytes. You can open a feature request if you desire.
> > 
> Working only on bytes does seem rather perverse.

I agree, the morality of bytes objects could have been better :)
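The behavior under discussion is easy to reproduce. A minimal sketch (the source string `"x = 1\n"` here is just an arbitrary example): `tokenize.tokenize()` calls `detect_encoding()`, which checks the raw input against `BOM_UTF8` (a `bytes` object), so feeding it a `str`-backed readline raises `TypeError`, while encoding the source to bytes first works:

```python
import io
import tokenize

source = "x = 1\n"

# Passing a str-backed readline fails: detect_encoding() does a
# bytes-vs-str comparison (str.startswith(BOM_UTF8)) and raises TypeError.
try:
    list(tokenize.tokenize(io.StringIO(source).readline))
except TypeError:
    print("str input rejected")

# Encoding the source to bytes first is what tokenize expects.
tokens = list(tokenize.tokenize(io.BytesIO(source.encode("utf-8")).readline))
print([tok.string for tok in tokens if tok.type == tokenize.NAME])
```

So the restriction is intentional in the sense Benjamin describes: the tokenizer needs raw bytes in order to detect the encoding at all, and a `str` has no encoding left to detect.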



_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev