Andrew West wrote:

> By looping through the "ranges" array it is possible to determine
> exactly which characters in which Unicode blocks a given font covers
> (as long as your software has an array of Unicode blocks and their
> codepoint ranges).
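The loop Andrew describes could be sketched roughly like this. This is only an illustration: the block table is a four-entry excerpt (a real tool would load the full block list from the Unicode Character Database's Blocks.txt), and the `covered` set stands in for whatever codepoint coverage your software has extracted from the font's cmap.

```python
# Sketch: given the set of codepoints a font covers, report how much of
# each Unicode block it supports. UNICODE_BLOCKS is a tiny excerpt; the
# authoritative list is Blocks.txt in the Unicode Character Database.

UNICODE_BLOCKS = [
    # (block name, first codepoint, last codepoint)
    ("Basic Latin", 0x0000, 0x007F),
    ("Latin-1 Supplement", 0x0080, 0x00FF),
    ("Greek and Coptic", 0x0370, 0x03FF),
    ("Cyrillic", 0x0400, 0x04FF),
]

def block_coverage(covered):
    """Return the fraction of each block's codepoints present in `covered`."""
    result = {}
    for name, lo, hi in UNICODE_BLOCKS:
        size = hi - lo + 1
        hits = sum(1 for cp in range(lo, hi + 1) if cp in covered)
        result[name] = hits / size
    return result

# Hypothetical font covering printable ASCII plus a few Greek letters.
covered = set(range(0x0020, 0x007F)) | {0x0391, 0x0392, 0x03B1, 0x03B2}
for name, frac in block_coverage(covered).items():
    print(f"{name}: {frac:.1%}")
```

How much coverage counts as "supporting" a block is a policy decision left to the caller, which is exactly the gap discussed below.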
> As long as your software has an up-to-date list of the Unicode blocks
> and their constituent codepoints for the latest version of Unicode,
> you will always be able to get up-to-date information about the
> Unicode coverage of a font.
>
> If you want to determine language coverage for a particular font, then
> all you need to do is define a minimum set of codepoints that must be
> covered for a particular block or set of blocks to be considered as
> supporting that language. (Just the little matter of deciding what the
> minimum set of codepoints would be for every language that is
> supported by Unicode ...)

Thanks so much for the detailed reply. It would appear from your answer that even after implementing the algorithm to search the Unicode block coverage of a font, the actual comparison data (that is, which blocks to compare, and how many codepoints within each) is completely undefined. Is there any kind of standard defining which codepoints are required to write a given language? This seems to be the issue that fontconfig works around with all those .orth files, which define the required codepoints for each language. But is there any standardized set of per-language required-codepoint definitions that could be used instead?

In any case, where is the up-to-date list of Unicode blocks to be found?

It's odd to think that the old way of using charset identifiers in fonts worked much more cleanly for finding fonts matching a language or language group. I would have thought this kind of core issue would be addressed more cleanly by the font standard.

Thanks for any help.

Yours truly,

Elisha Berns
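The language test Andrew describes, in the spirit of fontconfig's .orth files, could be sketched as follows. The required sets here are illustrative placeholders only; real per-language sets would come from orthography data such as the .orth files themselves.

```python
# Sketch: a language counts as "supported" when every codepoint in its
# minimal required set is covered by the font. The sets below are
# hypothetical stand-ins, not authoritative orthography definitions.

REQUIRED = {
    "en": set(range(ord("A"), ord("Z") + 1)) | set(range(ord("a"), ord("z") + 1)),
    "el": {0x0391, 0x0392, 0x0393},  # tiny stand-in for a real Greek set
}

def supported_languages(covered):
    """List the languages whose full required set is in `covered`."""
    return [lang for lang, required in REQUIRED.items() if required <= covered]

covered = set(range(0x0020, 0x007F))  # an ASCII-only font
print(supported_languages(covered))   # -> ['en']
```

The subset test (`required <= covered`) is the whole algorithm; all of the difficulty lies in compiling trustworthy required sets for every language, which is precisely the undefined "comparison data" complained about above.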

