Martin v. Löwis <[EMAIL PROTECTED]> added the comment:

Since this is marked "release blocker", I'll provide a shallow comment:

I don't think it should be a release blocker. It's a bug in the compile
function, and there are workarounds: save the bytes to a temporary file
and execute that file, or decode the byte string to a Unicode string and
compile the Unicode string (both sketched below). It is sufficient to
fix it in 3.0.1.

I don't think the patch is right: since the test had to be changed, the
detection of the encoding declaration must now fail somewhere. That is
clearly a new bug, but I don't have the time to analyse the cause
further.
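For reference, a hedged illustration of the behaviour such a test would
guard (the exact test code is not reproduced here): compiling byte
source whose declaration names latin-1 should decode the byte 0xE9 as
U+00E9.

    src = b"# -*- coding: latin-1 -*-\nu = '\xe9'\n"
    ns = {}
    exec(compile(src, "<latin-1 source>", "exec"), ns)
    assert ns["u"] == "\xe9", "encoding declaration not honoured"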

In principle, there is nothing wrong with the tokenizer treating latin-1
as "raw" - that only means we don't go through a codec.

_______________________________________
Python tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue3574>
_______________________________________