On Tue, Apr 1, 2008 at 6:03 AM, Gabriel Genellina
<[EMAIL PROTECTED]> wrote:
> En Mon, 31 Mar 2008 16:17:39 -0300, Terry Reedy <[EMAIL PROTECTED]>
>  escribió:
>
>
> > "Bjoern Schliessmann" <[EMAIL PROTECTED]>
>  > wrote
>  > in message news:[EMAIL PROTECTED]
>  > | > However, I'm quite sure that when Unicode has arrived almost
>  > | > everywhere, some languages will start considering such characters
>  > | > in their core syntax.
>  > |
>  > | This should be the time when there are widespread quasi-standardised
>  > | input methods for those characters.
>  >
>  > C has trigraphs for keyboards missing some ASCII chars.  != and <= could
>  > easily be treated as digraphs for the corresponding chars.  In a sense
>  > they are already; it is just that the real things are not allowed ;=).
>
>  I think it should be easy to add support for ≠≤≥ and even λ, only the
>  tokenizer has to be changed.
>
show me a keyboard that has those symbols and I'm all up for it.

as for the original question, the point of going unicode is not to
make the code itself unicode, but to make the code's output unicode.
think of print calls, templates, and comments in all the world's
languages. sadly, most english-speaking people think unicode is
irrelevant because ASCII has everything they need, but that narrow
view is exactly what's wrong.
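as an aside, the tokenizer change quoted above could be approximated today
with a pure-Python source-preprocessing pass. this is only a hypothetical
sketch, not how CPython's tokenizer actually works: it naively rewrites
the whole source string, so it would also (wrongly) touch ≠≤≥ inside
string literals and comments.

```python
# Hypothetical sketch: map Unicode operators to their ASCII digraphs
# before handing the source to the normal Python compiler.
# NOTE: a naive replace also rewrites string literals/comments; a real
# implementation would live in the tokenizer, where context is known.

UNICODE_TO_ASCII = {
    "\u2260": "!=",       # ≠
    "\u2264": "<=",       # ≤
    "\u2265": ">=",       # ≥
    "\u03bb": "lambda ",  # λ
}

def translate(source):
    """Rewrite Unicode operators as their ASCII equivalents."""
    for uni, ascii_ in UNICODE_TO_ASCII.items():
        source = source.replace(uni, ascii_)
    return source

code = translate("f = \u03bb x: x \u2260 0 and x \u2264 10")
exec(code)        # defines f in the current namespace
print(f(5))       # True
```

running this prints True, since 5 ≠ 0 and 5 ≤ 10 both hold after
translation to plain ASCII Python.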
-- 
http://mail.python.org/mailman/listinfo/python-list
