Gregory P. Smith added the comment:

Correct, my characterization above was wrong (I shouldn't write these up 
without the interpreter right in front of me). What is wrong with the 
conversion is:

unicode("", "utf-8") in python 2.x should become either str(b"", "utf-8") or, 
better, just "" in Python 3.x.  The better version could be done if the codec 
and value can be represented in the encoding of the output 3.x source code file 
as is but that optimization is not critical.
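
To illustrate the equivalence (a plain Python 3 session, not the fixer itself), 
both target spellings produce the same empty str:

>>> str(b"", "utf-8")
''
>>> str(b"", "utf-8") == ""
True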

For str() to accept a second argument (the encoding), the first argument cannot 
already be a str:

>>> str("foo", "utf-8")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: decoding str is not supported
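
For contrast (again just a stock Python 3 session), passing bytes as the first 
argument decodes as expected:

>>> str(b"foo", "utf-8")
'foo'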

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue19159>
_______________________________________