Followup to: <[EMAIL PROTECTED]>
By author: Tomohiro KUBOTA <[EMAIL PROTECTED]>
In newsgroup: linux.utf8
>
> One point: Many Japanese texts include Alphabets, so Japanese people
> want to input not only Hiragana, Katakana, Kanji, and Numerics but
> also Alphabets.
>
With "Alphabets" here I presume you mean Roomaji (Latin alphabet.)
However, writing "alien" text (like Greek interspersed into Japanese,
for example) really needs to be possible.
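With UTF-8, mixed-script text comes essentially for free, since a
single encoding covers all scripts at once. A trivial sketch in C
(assuming a UTF-8 locale and terminal):

    #include <stdio.h>

    int main(void)
    {
        /* Greek interspersed into Japanese, in one UTF-8 byte
           string; no charset switching or escape sequences. */
        printf("日本語の中の αβγ も平気です。\n");
        return 0;
    }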
> I imagine Korean people want this, too. In such a case, switching
> between Alphabet (no-conversion) mode and conversion mode has to be
> achieved by a simple key combination like Shift + Space. The switch
> must be between conversion mode and no-conversion mode, and must not
> cycle among all installed input methods. Is it possible in GTK
> applications?
> (This is achieved in Windows. Alt-Esc will switch between conversion
> and non-conversion, while Alt-Shift will switch among installed input
> methods.)
>
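Something like this should be doable in any GTK application. A sketch
only, against the GTK 2 GtkIMContext API; the im_enabled flag and the
handler wiring are mine, not an existing GTK facility:

    #include <gtk/gtk.h>
    #include <gdk/gdkkeysyms.h>

    /* Illustrative globals; a real widget would keep these
       per-instance. im_context would come from, e.g.,
       gtk_im_multicontext_new(). */
    static gboolean im_enabled = TRUE;   /* TRUE = conversion mode */
    static GtkIMContext *im_context;

    static gboolean
    key_press_cb(GtkWidget *widget, GdkEventKey *event, gpointer data)
    {
        /* Shift + Space toggles conversion on and off, without
           cycling through every installed input method. */
        if (event->keyval == GDK_space &&
            (event->state & GDK_SHIFT_MASK)) {
            im_enabled = !im_enabled;
            gtk_im_context_reset(im_context); /* drop pending preedit */
            return TRUE;                      /* swallow the key */
        }

        /* In conversion mode, the input method sees the key first. */
        if (im_enabled &&
            gtk_im_context_filter_keypress(im_context, event))
            return TRUE;

        return FALSE;   /* no-conversion mode: handle the raw key */
    }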
> Another point: I want to purge all non-internationalized software.
> Today, internationalization (such as Japanese character support) is
> regarded as a special "feature". However, I think that lack of
> internationalization support should be regarded as a bug as severe
> as "racist software".
WHOA... that's a pretty darn strong statement. In particular, it
would seem to require internationalization of kernel messages (and
other debugging or logging output), which is probably a completely
unrealistic goal.
For user-interface issues, however, I would agree with you.
> However, GTK is a relatively heavy toolkit, and developers who want
> to write lightweight software won't use it. I don't think "if there
> is one internationalized program (for example, gnome-terminal), it
> is enough." If developers can develop other software in the same
> category (xterm, rxvt, eterm, aterm, ...), it means users have the
> freedom to choose. Such freedom of choice must not be a privilege
> of English-speaking (or European-language-speaking) people. Do you
> have any idea how to solve this problem?
Well, for console applications, it should be the terminal application
that handles the internationalization, for the most part. The rest,
I think, boils down to two things:
a) It needs to be easy to write internationalized and multilingualized
applications.
b) Programmers need to be taught that it is easy, and how to do it.
When it comes to (a), it pretty much means that the complexity needs
to be hidden from the application programmer. Terminal applications,
toolkits, and perhaps libraries like readline need to support this,
but applications shouldn't need to be affected beyond a few basic
guidelines, such as "don't assume byte == character". Getting UTF-8
universally deployed will be a huge part of this, because it means
that any code handling anything beyond 7-bit ASCII will have to take
this into consideration.
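To make that guideline concrete, here is a minimal C sketch that
counts UTF-8 characters by skipping continuation bytes (which always
have the bit pattern 10xxxxxx), where strlen() would report bytes.
It assumes its input is valid UTF-8:

    #include <stdio.h>
    #include <string.h>

    /* Count UTF-8 characters by skipping continuation bytes
       (10xxxxxx). Assumes the input is valid UTF-8. */
    static size_t utf8_strlen(const char *s)
    {
        size_t count = 0;
        for (; *s; s++)
            if (((unsigned char)*s & 0xC0) != 0x80)
                count++;
        return count;
    }

    int main(void)
    {
        const char *s = "na\xc3\xafve";  /* "naïve" in UTF-8 */
        printf("%lu bytes, %lu characters\n",
               (unsigned long)strlen(s),
               (unsigned long)utf8_strlen(s));
        /* prints: 6 bytes, 5 characters */
        return 0;
    }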
We need easy-to-read webpages explaining how to do this, and
easy-to-use libraries, so that even monolingual American programmers
who might not use characters outside the US-ASCII set on a daily
basis can get it right.
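For message catalogs, GNU gettext is already about as easy as it
gets. A minimal sketch, assuming a hypothetical text domain "myapp"
with catalogs installed under /usr/share/locale:

    #include <libintl.h>
    #include <locale.h>
    #include <stdio.h>

    #define _(s) gettext(s)   /* conventional shorthand */

    int main(void)
    {
        /* Pick up the user's locale, then the message catalog. */
        setlocale(LC_ALL, "");
        bindtextdomain("myapp", "/usr/share/locale"); /* hypothetical */
        textdomain("myapp");

        /* xgettext extracts the marked string; translators
           provide per-language .po/.mo catalogs. */
        printf(_("Hello, world!\n"));
        return 0;
    }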
> Of course, several Japanese companies are competing in the Input
> Method area on Windows. These companies are researching better
> input methods -- larger and better-tuned dictionaries with newly
> coined words and phrases, better grammatical and semantic
> analyzers, and so on. I imagine this is one of the areas where Open
> Source people cannot compete with commercial software backed by
> full-time developer teams.
This seems to call for a plugin architecture. More than anything I
suspect we need *standards*.
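To sketch what that might look like (interface and symbol names here
are entirely hypothetical): a host -- terminal emulator or toolkit --
could dlopen() input-method modules that export one agreed-on table
of entry points.

    #include <dlfcn.h>
    #include <stdio.h>

    /* Hypothetical plugin interface: every input-method module
       exports one symbol, "im_module", pointing at a table
       like this. */
    struct im_module {
        const char *name;
        int (*init)(void);
        /* Feed a keystroke in; returns committed UTF-8 text, or
           NULL if the key was absorbed into the pending preedit. */
        const char *(*filter_key)(int keysym, int modifiers);
        void (*reset)(void);
    };

    int main(void)          /* build: cc host.c -ldl */
    {
        void *handle = dlopen("./im_example.so", RTLD_NOW);
        if (!handle) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        struct im_module *im = dlsym(handle, "im_module");
        if (im && im->init() == 0)
            printf("loaded input method: %s\n", im->name);

        dlclose(handle);
        return 0;
    }

Exactly which keys toggle modes, and how preedit text gets displayed,
is the sort of thing such a standard would have to nail down.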
-hpa
--
<[EMAIL PROTECTED]> at work, <[EMAIL PROTECTED]> in private!
"Unix gives you enough rope to shoot yourself in the foot."
Architectures needed: ia64 m68k mips64 ppc ppc64 s390 s390x sh v850 x86-64
--
Linux-UTF8: i18n of Linux on all levels
Archive: http://mail.nl.linux.org/linux-utf8/