On Friday 23 February 2007 22:50, M Henri Day wrote:
> 2007/2/23, John Jason Jordan <[EMAIL PROTECTED]>:
> > On Fri, 23 Feb 2007 10:47:39 +0100 [...]
>
> Well, JJJ, that was interesting information indeed! I have always just
> assumed that the hexadecimal code for Unicode glyphs was the four-digit
> code given in the Table de caractères Unicode
> (http://unicode.coeurlumiere.com/), found by combining the denomination
> of the row (minus the last digit) with that of the column.
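That four-digit reading works for everything in the Basic Multilingual Plane (code points U+0000 to U+FFFF). A quick Python 3 check (my own sketch, nothing to do with that site) of what such a hex code actually denotes:

    # U+00E9 is a typical four-hex-digit code point from the charts
    cp = 0x00E9
    print(chr(cp))               # 'é' -- the character the chart cell shows
    print('U+%04X' % ord('é'))   # 'U+00E9' -- and back from character to code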
You can see all the scripts at the official Unicode site:
http://www.unicode.org/charts/
where you can download the PDF charts for any blocks you need.

You will also notice that the most recent Unicode standard has over 90,000 characters and so needs more than four hex digits; in fact the code space now extends to U+10FFFF, i.e. up to 21 bits (the CJK Ideograph supplement, for example, sits at U+20000 and above). What MS intend doing with that, having defined their Unicode characters to be 2 bytes, remains to be seen.

On Linux, a Unicode character is often 4 bytes, but not always, and I've seen on the dev list for OOo that they are working on making all characters available, as they have a few corners where the assumption of two bytes cannot be immediately corrected. I suspect the Linux input methods will have no difficulties on a 32-bit or larger word-size machine.

[...]

-- 
Andy Pepperdine
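PS. A rough Python 3 sketch (my own illustration, not anything from the OOo code) of what the 2-byte assumption runs into for characters above U+FFFF: a 2-byte (UTF-16) representation has to split them into a surrogate pair, while a 4-byte (UTF-32) one stores the code point directly.

    # U+20000 is the first ideograph in the CJK Extension B block (plane 2,
    # five hex digits, so it cannot fit in a single 16-bit unit)
    ch = chr(0x20000)
    print(ch.encode('utf-32-be').hex())  # '00020000' -- one 4-byte unit
    print(ch.encode('utf-16-be').hex())  # 'd840dc00' -- two 2-byte units (a surrogate pair)
    print(len(ch.encode('utf-8')))       # 4 -- and four bytes in a UTF-8 file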
