Stewart Stremler wrote:
I assert that (English) words can be considered glyphs (think cursive),
and therefore deserve the same sort of treatment.
65536 is slightly too small, but 4 billion is *way* too big. There seem
to be only about 200,000 allocated code points.
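(The actual code space Unicode settled on sits between those two extremes: 17 planes of 65,536 code points each. A quick Python sanity check on the arithmetic, purely illustrative:)

```python
# Unicode's code space: 17 planes of 2**16 code points each.
# Bigger than 16 bits can address, much smaller than 32 bits provide.
code_space = 17 * 2**16
print(code_space)                    # 1114112 possible code points
print(2**16 < code_space < 2**32)    # True: between "too small" and "way too big"
```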
Of course, once we make contact with aliens, all bets are off ...
(But as those languages use a glyph-per-word, more or less, this
shouldn't be a problem -- nobody was demanding that a sizable subset
of the English dictionary be mapped into Unicode space. Fair's fair.)
Actually, not all of those languages use one glyph per word, and the issue is
that there is a more compact and efficient representation. People tend
...so we can avoid bloat in our XML documents...
Umm, is there a more compact and efficient representation than Unicode?
I'm not convinced. 200,000 code points is pretty small to encompass
all the modern languages and many dead ones.
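(One concrete sense in which "compact and efficient" is already solved inside Unicode itself: the variable-width UTF-8 encoding keeps common text small even though the code space is large. A minimal Python sketch, purely illustrative:)

```python
# UTF-8 is variable-width: ASCII costs 1 byte per character,
# while a fixed 32-bit encoding (UTF-32) always spends 4 bytes.
s = "hello"
print(len(s.encode("utf-8")))      # 5 bytes
print(len(s.encode("utf-32-le")))  # 20 bytes

# CJK characters in the Basic Multilingual Plane cost 3 bytes each in UTF-8.
cjk = "\u4e2d\u6587"
print(len(cjk.encode("utf-8")))    # 6 bytes
```

So the large code space doesn't force bloat on documents: the encoding layer adapts the byte cost to the characters actually used.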
True, true. Design by committee tends to aim at making everyone
equally unhappy.
Or, in the case of Unicode, it takes into account multiple needs that
most people can't even conceive of.
-a
--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg