> From: [EMAIL PROTECTED]
> >From: Edward Cherlin <[EMAIL PROTECTED]>
> >The internal coding used by any software is totally irrelevant to any 
> >other software, or to users. UTF-16 stores BMP CJK characters in two 
> >bytes each, whereas UTF-8 requires three. This saves some space in a 
> >number of tables. It isn't a big deal, but it is a very reasonable 
> >design choice.
>
> If the unicode standard is extended beyond 0x10FFFF

It won't be extended beyond 0x10FFFF for sure, unless all of the
current Unicode Technical Committee (UTC) voting members are replaced
with pro-beyond-0x10FFFF-assignment people ;-).
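
For illustration only (my sketch, not part of the original discussion):
a surrogate pair can address at most 0x400 * 0x400 = 0x100000 code points
above the BMP, which is exactly why the ceiling sits at U+10FFFF. A minimal
C example, assuming a valid code point:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t cp = 0x10FFFF;              /* highest valid code point */
    uint32_t v  = cp - 0x10000;          /* 20-bit value to split    */
    uint16_t hi = 0xD800 | (v >> 10);    /* lead (high) surrogate    */
    uint16_t lo = 0xDC00 | (v & 0x3FF);  /* trail (low) surrogate    */

    /* Decoding formula: 0x10000 + 20 bits -> can never exceed 0x10FFFF. */
    uint32_t back = 0x10000 +
        (((uint32_t)(hi - 0xD800) << 10) | (uint32_t)(lo - 0xDC00));

    printf("U+%04X -> <%04X %04X> -> U+%04X\n",
           (unsigned)cp, (unsigned)hi, (unsigned)lo, (unsigned)back);
    return 0;
}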

> utf-16 is unsuitable for protocols imo

I do not necessarily disagree with your opinion. I do not necessarily
recommend the use of UTF-16 either, nor do I necessarily recommend
hardwiring to UTF-8/16/32 to people in general.

For me, surrogate support is not a big deal, and the other Unicode
complexities are not too bad to deal with, so I chose to go with the 
UTF-16 hardwiring approach for my project. ;-)
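
For illustration, a minimal C sketch (my own, not the poster's code) of what
"surrogate support" amounts to when walking a hardwired UTF-16 string. It
also shows the point from the quoted mail: a BMP CJK character such as
U+6F22 takes one 16-bit unit (two bytes) in UTF-16 but three bytes in UTF-8.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Count code points in a well-formed UTF-16 string: the only "extra" work
 * for surrogates is one branch that consumes the trail unit after a lead. */
static size_t count_code_points(const uint16_t *s, size_t len)
{
    size_t n = 0;
    for (size_t i = 0; i < len; i++, n++) {
        if (s[i] >= 0xD800 && s[i] <= 0xDBFF &&       /* lead surrogate  */
            i + 1 < len &&
            s[i + 1] >= 0xDC00 && s[i + 1] <= 0xDFFF) /* trail surrogate */
            i++;
    }
    return n;
}

int main(void)
{
    /* U+6F22: 1 UTF-16 unit / 2 bytes, but 3 bytes in UTF-8 (E6 BC A2).
     * U+10FFFF: a surrogate pair, 2 units. */
    const uint16_t s[] = { 0x6F22, 0xDBFF, 0xDFFF };
    size_t units = sizeof s / sizeof s[0];
    printf("%zu code points in %zu code units (%zu bytes)\n",
           count_code_points(s, units), units, sizeof s);
    return 0;
}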

--
hiura@{freestandards.org,li18nux.org,unicode.org,sun.com} 
Chair, Li18nux/Linux Internationalization Initiative, http://www.li18nux.org
Board of Directors, Free Standards Group,       http://www.freestandards.org
Architect/Sr. Staff Engineer, Sun Microsystems, Inc, USA  eFAX: 509-693-8356


--
Linux-UTF8:   i18n of Linux on all levels
Archive:      http://mail.nl.linux.org/linux-utf8/
