> What's being suggested is that locales be generated per-region/language;
> eg. tell the system to generate "tr_TR", and then be able to use all
> relevant encodings (ISO-8859-9 and UTF-8 and whatever else is convertible).
> Case mappings, collation rules, translation text and so on can be stored in
> Unicode and converted at runtime, probably still caching common encodings
> for speed.

I was imagining that making the locale less static and global was the
answer.
I think part of the original design was that having libc's global
functions change behavior behind the scenes, based on a global variable,
would avoid a lot of code rewriting. That's fine and all, but there is
no good way to set the encoding to be used on a per-call basis.

If I wanted to write an app that worked in 4 different locales
simultaneously, even across different threads, I couldn't do it easily.
(Every locale-dependent function would need a variant that takes a
locale-context argument.)

Now, with Red Hat 8, is the entry barrier to defaulting CJK languages
to UTF-8 things such as input servers, kterm, and other apps not being
fully UTF-8 ready yet? Or is it more of an addiction to the
peculiarities that ISO-2022 can produce?

Well, once the last straw, whatever it is, gets ironed out, non-UTF-8
encodings can finally be relegated to iconv-only support, and things
such as supporting all the Unicode varieties of whitespace can
be addressed.
--
Linux-UTF8:   i18n of Linux on all levels
Archive:      http://mail.nl.linux.org/linux-utf8/