Jamie Lokier wrote:
> Andrew Cunningham wrote:
> > any implementation of utf-16 must include the capacity to correctly handle
> > valid surrogate pairs. You can't restrict utf-16 characters to two bytes.
>
> That's why conversion from utf-16 to utf-32 should be analogous to
> conversion from utf-8 to wchar_t, à la mbstowcs, etc. The rules about
> character-by-character processing apply. You may wish to use utf32_t
> for the intermediate characters, e.g. in a simple parser.
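To make that concrete, here is a minimal sketch of the character-by-character
decoding Jamie describes. The function name and interface are mine, nothing
standard, but the surrogate arithmetic is the one Unicode mandates:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Decode one utf-16 code unit (or surrogate pair) starting at s[i],
 * writing the resulting code point to *out.  Returns the number of
 * 16-bit units consumed (1 or 2), or 0 on a malformed sequence. */
static size_t utf16_decode_one(const uint16_t *s, size_t len, size_t i,
                               uint32_t *out)
{
    uint16_t u = s[i];
    if (u < 0xD800 || u > 0xDFFF) {      /* ordinary BMP code point */
        *out = u;
        return 1;
    }
    if (u <= 0xDBFF && i + 1 < len) {    /* high surrogate */
        uint16_t lo = s[i + 1];
        if (lo >= 0xDC00 && lo <= 0xDFFF) {
            *out = 0x10000 + (((uint32_t)(u - 0xD800) << 10)
                              | (uint32_t)(lo - 0xDC00));
            return 2;
        }
    }
    return 0;                            /* unpaired surrogate: invalid */
}

int main(void)
{
    const uint16_t s[] = { 0x0041, 0xD834, 0xDD1E };  /* "A" + U+1D11E */
    uint32_t cp;
    for (size_t i = 0; i < 3; ) {
        size_t n = utf16_decode_one(s, 3, i, &cp);
        if (n == 0) break;               /* reject malformed input */
        printf("U+%04X\n", (unsigned)cp);
        i += n;
    }
    return 0;
}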
That framing takes the whole problem the wrong way.
When _storing_ the data, you will use an adequate and efficient local
representation: utf-16, utf-8 or something else, depending on your locale
(utf-8 is often bigger than utf-16 for Asian scripts).
When manipulating it, you will keep utf-32 as the base type under gcc.
Storing data to a database, or anywhere else, in whatever form it happens
to have internally is a bad thing to do.
(wchar_t is not guaranteed to be unicode or anything portable. Right now
it is utf-32 in gcc, but it could be anything else in the future.)
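For the conversion between the stored form and the manipulation form,
iconv(3) already does the job. A rough sketch, assuming GNU iconv encoding
names and with error handling kept minimal:

#include <iconv.h>
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    char in[] = "caf\xC3\xA9";        /* utf-8 for "café": the stored form */
    uint32_t out[16];                 /* utf-32 buffer for manipulation */
    char *inp = in, *outp = (char *)out;
    size_t inleft = strlen(in), outleft = sizeof out;

    iconv_t cd = iconv_open("UTF-32LE", "UTF-8");
    if (cd == (iconv_t)-1) { perror("iconv_open"); return 1; }
    if (iconv(cd, &inp, &inleft, &outp, &outleft) == (size_t)-1) {
        perror("iconv");
        return 1;
    }
    iconv_close(cd);

    size_t n = (sizeof out - outleft) / sizeof out[0];
    for (size_t i = 0; i < n; i++)
        printf("U+%04X\n", (unsigned)out[i]);
    return 0;
}

Going the other way (utf-32 back to utf-16 or utf-8 before storing) is the
same call with the encodings swapped.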
Wilhelm Nuesser wrote:
> Even an otherwise superior OS performance cannot compensate for the
> additional requirements in memory bandwidth, CPU,
I think the only way an OS can have superior performance on these two points
with two-byte unicode is by supporting only ucs-2, not utf-16, and doing no
conversion between the storage and manipulation formats.
I'd be very surprised if the opposite were demonstrated.
> disk space etc.
This is what I discussed at the beginning of this post.