Stewart Stremler wrote:
Actually, if you wanted to prove your superiority, put the glyphs into something like Dasher and let people play.

How would that help? That's font-selection, innit?

http://www.inference.phy.cam.ac.uk/dasher/

It's a system for entering words into interface-deprived devices.

It turns out that Chinese is giving them fits:

How would Chinese Dasher work?

"We would not go directly for the ideograms, since there are too many of them. We have to build up sentences using a sequence of symbols each of which has small information content. "

A better coding to allow them to put together glyphs would be a big help.

I believe that was necessary to fit a useful subset completely inside UTF-16 back when every character had to fit in exactly 2 bytes. Now that surrogate pairs let two UTF-16 code units represent a single Unicode code point, this is no longer necessary.
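To make the surrogate-pair mechanism concrete, here's a small Python sketch of the arithmetic: a code point above U+FFFF has 0x10000 subtracted, and the remaining 20 bits are split across a high surrogate (0xD800-0xDBFF) and a low surrogate (0xDC00-0xDFFF).

```python
def to_surrogate_pair(cp: int) -> tuple[int, int]:
    """Split a code point above U+FFFF into a UTF-16 high/low surrogate pair."""
    assert cp > 0xFFFF, "BMP code points are encoded directly, no pair needed"
    v = cp - 0x10000               # 20 bits remain
    high = 0xD800 + (v >> 10)      # top 10 bits
    low = 0xDC00 + (v & 0x3FF)     # bottom 10 bits
    return high, low

# U+20BB7 is a CJK ideograph outside the Basic Multilingual Plane.
print([hex(u) for u in to_surrogate_pair(0x20BB7)])  # → ['0xd842', '0xdfb7']
```

You can check the result against Python's own codec: `"\U00020BB7".encode("utf-16-be")` yields the bytes D8 42 DF B7.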


That would be "yes", I imagine. :)

Hm....

Okay. I'll have to ponder this for a bit.

I believe that the original constraint was the fact that Windows used wide strings (i.e. "String" == "array of two-byte units"). Thus, there was no way to move beyond 65,536 glyphs, because Windows couldn't cope with the fact that a single glyph might be 2 *or* 4 bytes.

However, the fact that Windows demanded wide strings helped a lot. An entire generation of programmers has grown up without necessarily thinking that "String" == "char[]" == "array of single bytes".
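The 2-or-4-byte point above is easy to demonstrate. A quick Python sketch (using U+10400, an arbitrary non-BMP character chosen for illustration) showing that the byte length of a UTF-16 string is not simply twice the character count:

```python
# One BMP character plus one non-BMP character.
s = "a\U00010400"  # 'a' and DESERET CAPITAL LETTER LONG I (U+10400)

print(len(s))                      # 2 characters
print(len(s.encode("utf-16-le")))  # 6 bytes: 'a' takes 2, U+10400 takes 4
```

Code that assumes every character is exactly two bytes will miscount, split, or truncate strings like this one.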

-a

--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg