> The idea is:
> 1. Assign codes and hot spots for all possible Glyph components,
> per script, per language system.
How will you handle open-ended scripts like Urdu, where the number of
ligatures keeps changing as the language evolves? For example, I was
told by an Urdu computer scientist that during a visit by Margaret
Thatcher (a former Prime Minister of the United Kingdom) the
newspapers created a new ligature for her name.
> 2. Create a generic state machine that can step through the input
> Unicode characters and spit out Glyph components and their
> relative hot spot positions.
This is far more complicated, I fear. You will need fallback
algorithms for fonts which don't provide some glyphs, ligatures, etc.
Some fonts have e.g. `Amacron' as a single glyph; others compose it
from `A' with a macron accent.
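To illustrate the fallback idea: the sketch below tries a precomposed
glyph first and falls back to the canonical (NFD) decomposition when
the font lacks it. This is only a minimal illustration, not the
generic machine proposed above; the glyph names and the `available'
set are hypothetical stand-ins for what a real font's cmap and glyph
tables would provide.

```python
import unicodedata

# Hypothetical excerpt of AGL-style glyph names for a few code points.
AGL_NAMES = {
    0x0041: "A",
    0x0100: "Amacron",
    0x0304: "macroncmb",   # combining macron
}

def glyphs_for_char(ch, available):
    """Return glyph names rendering `ch`, decomposing (NFD) as a
    fallback when the precomposed glyph is missing from `available`."""
    cp = ord(ch)
    name = AGL_NAMES.get(cp)
    if name in available:
        return [name]
    # Fallback: canonical decomposition, e.g. U+0100 -> U+0041 U+0304.
    decomposed = unicodedata.normalize("NFD", ch)
    if len(decomposed) > 1:
        return [g for c in decomposed
                for g in glyphs_for_char(c, available)]
    raise KeyError(f"no glyph for U+{cp:04X}")

# Font with a precomposed Amacron:
print(glyphs_for_char("\u0100", {"A", "Amacron", "macroncmb"}))
# -> ['Amacron']
# Font without it falls back to base + accent:
print(glyphs_for_char("\u0100", {"A", "macroncmb"}))
# -> ['A', 'macroncmb']
```

A real renderer would of course also have to position the accent
relative to the base (the `hot spot' problem), which this sketch
ignores entirely.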
> 3. Create a generic inverse state machine. The input is
> components and their relative hot spot positions, and the
> output is a Unicode stream.
You can do that already by following the Adobe Glyph List (AGL)
algorithm for naming glyphs.
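For the glyph-name-to-Unicode direction, the published AGL algorithm
can be sketched roughly as below. The AGL dict here is a tiny
hypothetical excerpt (the real list has thousands of entries), but
the three rules — strip the suffix after the first period, split
ligature names on underscores, and decode `uniXXXX'/`uXXXX' names —
follow the specification.

```python
import re

# Tiny, hypothetical excerpt of the Adobe Glyph List.
AGL = {"A": 0x0041, "Amacron": 0x0100, "f": 0x0066, "i": 0x0069}

def glyph_name_to_unicode(name):
    """Map an AGL-style glyph name back to a Unicode string."""
    # 1. Drop any suffix after the first period ("f_i.liga" -> "f_i").
    name = name.split(".", 1)[0]
    # 2. Split ligature names on underscores ("f_i" -> ["f", "i"]).
    result = []
    for part in name.split("_"):
        if part in AGL:
            result.append(chr(AGL[part]))
        elif re.fullmatch(r"uni(?:[0-9A-F]{4})+", part):
            # "uniXXXX" (one or more 4-digit hex groups).
            hexes = part[3:]
            result.extend(chr(int(hexes[i:i + 4], 16))
                          for i in range(0, len(hexes), 4))
        elif re.fullmatch(r"u[0-9A-F]{4,6}", part):
            # "uXXXX" .. "uXXXXXX" (4 to 6 hex digits).
            result.append(chr(int(part[1:], 16)))
        # Unknown components contribute nothing.
    return "".join(result)

print(glyph_name_to_unicode("f_i"))      # -> "fi"
print(glyph_name_to_unicode("uni0100"))  # -> "Ā"
```

So as long as a font names its glyphs AGL-conformantly, the inverse
mapping comes almost for free.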
> The merits of such a rendering/font schema would be:
> - It is bitmap-font-friendly
Hmm, the next release of X will probably contain all bitmapped fonts
in SFNT format. It is then straightforward to provide proper OpenType
tables to do the same processing as with outline glyphs. Just van
Rossum's freely available TTX compiler/decompiler for OpenType fonts
can help here.
Werner