David Leung (Neteka Inc.) <[EMAIL PROTECTED]> wrote:

> More bits is good, but when we plan for things more than we need,
> then it should be considered to be a waste of resource. So why do
> we need 128 bits now (I don't think the combined total of characters
> in all languages in the world would require that much, not unless
> we want to include scripts from other planets :) ), whereas we need
> 8/16/32 bits for Unicode, so why not design a system able to accept
> ACE as a fallback and also 8/16/32 bits? If you can justify why
> designing a system that can handle ASCII as a fallback and can
> automatically support 8/16/32 bits Unicode is not a good design,
> then I think my thinking is wrong.
Is my memory faulty, or wasn't it the CJK users who complained the loudest that UTF-8 was unfairly discriminatory because it required 3 bytes for each CJK character, and that some sort of compression (such as now provided by Punycode) was essential to the success of IDN?

-Doug Ewell
 Fullerton, California
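For readers wanting to check the byte counts in question, here is a minimal sketch using Python's standard codecs; the sample label is arbitrary. The `idna` codec produces the ASCII-Compatible Encoding (ACE) form used by IDN, i.e. Punycode with the `xn--` prefix:

```python
label = "漢字"  # arbitrary two-character CJK sample

# Each CJK ideograph in the BMP costs 3 bytes in UTF-8.
utf8 = label.encode("utf-8")
print(len(utf8))  # -> 6

# The IDN ACE form: Punycode-encoded label with the "xn--" prefix.
ace = label.encode("idna")
print(ace)
```

The per-character cost of the ACE form drops as labels get longer, which is the compression argument alluded to above.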
