>> Installing Unicode fonts that way (by splitting them into subfonts and
>> then unifying them under a virtual font) is not a convenient process;
>> however, a user has to do it only once, and the entire font selection
>> is done by Omega, so that the complexity is largely hidden from the
>> user. Users just enter their text in Unicode, Shift-JIS, EUC,
>> ISO-2022, whatever, and the fonts are selected automatically.

SM> If your only target is printed material, this is sufficient.
SM> However, using a virtual font means the loss of textual
SM> information.  For example, unless CID-keyed fonts are used in a
SM> PDF, you cannot search strings in the document.

But that's not an Omega-specific weakness; it's rather a general
weakness of TeX when it comes to text beyond US-ASCII, partly because
DVI is not concerned with textual information at all, partly because
of the non-standard font encodings. Of course, with non-Latin scripts
this problem is a lot more evident because there are more unindexable
characters; but I have this problem even now when generating a PDF
with German umlauts and \usepackage{ae}: words containing umlauts are
not searchable in the PDF. Admittedly, this is a lot worse in
Japanese, Cyrillic or Arabic.
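
As a side note, in the Latin case there is at least a workaround:
replace the ae virtual fonts with real T1 fonts and embed ToUnicode
maps. A minimal pdflatex sketch, assuming the cmap and lmodern
packages are available in your TeX tree:

  \documentclass{article}
  \usepackage{cmap}              % embed ToUnicode CMaps; load before fontenc
  \usepackage[latin1]{inputenc}
  \usepackage[T1]{fontenc}
  \usepackage{lmodern}           % real T1 glyphs instead of ae's virtual ones
  \usepackage[german]{babel}
  \begin{document}
  Grüße und Umlaute: ä ö ü ß    % now searchable in the PDF
  \end{document}

With ae, each umlaut is assembled from an OT1 accent and a base
letter inside a virtual font, so there is no single glyph a PDF
viewer could map back to, say, U+00FC.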

Some recent developments in Omega (as observable on the Omega list at
the moment) seem to go in the direction of a general interface for
font-format-independent glyph rendering plugins, so Omega 2 will
probably offer some level of Unicode font support and PDF
searchability beyond what has been available in TeX so far.

For most uses of non-Latin scripts in TeX, Omega + Unicode probably
is the way to go already, or will be before long. A LyX that fits
into such a workflow with Unicode input and output would complement
this rather nicely.
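
For illustration, the input side of such a workflow exists in Omega
already. A minimal Lambda sketch, assuming the inutf8 OCP that ships
with the Omega distribution (the font side is still the subfont
machinery quoted above):

  \documentclass{article}
  \ocp\TexUTF=inutf8                     % OCP: UTF-8 -> 16-bit Unicode
  \InputTranslation currentfile \TexUTF  % apply it to this input file
  \begin{document}
  % any UTF-8 text may follow here; Omega selects the fonts
  \end{document}

\ocp and \InputTranslation are Omega primitives, so this compiles
with lambda only, not with plain latex.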

Regards,
  Philipp                            mailto:[EMAIL PROTECTED]
___________________
Out of memory / We wish to hold the whole sky / But we never will
