# Re: [NTG-context] XeConTeXt bug report I: strange benchmark

On Wed, May 13, 2009 at 7:12 PM, Mojca Miklavec
<mojca.miklavec.li...@gmail.com> wrote:
> On Wed, May 13, 2009 at 12:41, Wolfgang Schuster wrote:
>>
>> Am 13.05.2009 um 12:17 schrieb Mojca Miklavec:
>>
>>> Or do you want to suggest that one would possibly need both "serif"
>>> and "sans" variants of some Chinese font, often switching between
>>> families inside a document?
>>
>> That's what I mean, also in chinese you use different fonts for
>> serif, sans and mono.
>
> Wait a minute ... why do they need mono/typewriter? Aren't all the
> Chinese glyphs "fixed width" anyway?


We do not need mono in Chinese. We do not even need italic, bold italic, or small caps;
in this respect Chinese typography is quite different from the Latin system.
(Of course, some styles are roughly analogous: Song is like roman regular,
CuSong is like roman bold, LiShu is like calligraphy, DengXian is like sans,
and CuDengXian (Hei) is like sans bold.)
But why, then, do we define them all as Latin typefaces? Only because
ConTeXt and LaTeX are heavily based on their own font-naming
mechanisms. Grep the ConTeXt source code and you will find font
switches like \it, \rm, and \ss in almost every file. If we
insisted on our own naming scheme (say \song, \hei, \cusong),
ConTeXt would become unusable in the end.
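As a rough illustration (the sample file below is made up; in practice you
would point grep at the ConTeXt distribution tree instead), counting those
switches is a one-liner:

```shell
# Count explicit font switches (\rm, \it, \ss, \bf) in a small sample file.
# The sample text is invented for the demo; replace sample.tex with a path
# into the ConTeXt sources to see the real numbers.
printf '%s\n' '\rm upright \it italic' '\ss sans \bf bold' > sample.tex
grep -oE '\\(rm|ss|it|bf)' sample.tex | sort | uniq -c
```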
Moreover, the only way to write a consistent document in both a Latin and
a CJK language is to fold the CJK fonts into the Latin system. Suppose you
want to define a font for chapter titles: neither \heid nor \bfd alone
will suffice. The only way is either to map all the Chinese font names
(hei, song, and so on) onto Latin names (bold italic, for example)
[since the Latin names can be treated as a superset of the Asian ones],
or to design a more comprehensive naming system and map both into that.
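To make the mapping concrete, here is a rough, untested sketch in ConTeXt
typescript syntax; the typescript name and the font synonyms (songti,
heiti) are placeholders for whatever fonts are actually installed:

```tex
% Sketch only: map Chinese styles onto the Latin serif slots.
% "songti" and "heiti" stand in for real installed fonts.
\starttypescript [serif] [zhsample]
  \definefontsynonym [Serif]     [songti] % Song -> roman regular
  \definefontsynonym [SerifBold] [heiti]  % Hei  -> roman bold
\stoptypescript

\starttypescript [zhsample]
  \definetypeface [zhsample] [rm] [serif] [zhsample] [default]
\stoptypescript

\usetypescript[zhsample]
\setupbodyfont[zhsample]
```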
Also, when you want to write a web link or source code that contains
Chinese, like
\type{http://zh.wikipedia.org/wiki/字体}
or
\starttyping
FILE *file = fopen(filename, "r");
if (file == NULL) {
    printf("文件%s未找到\n", filename);
    return -1;
}
\stoptyping
you will run into problems, because no mono typeface has been defined for Chinese.
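A possible way out, sketched here without testing (MkIV syntax; the font
names and the CJK range are my assumptions, not something taken from the
ConTeXt distribution), is to attach a CJK-capable fallback to the Mono
synonym:

```tex
% Untested sketch: extend the mono style with a CJK-capable fallback
% so \type and \starttyping can show Chinese. Font names and the
% unicode range here are assumptions.
\definefontfallback [monocjk] [name:fandolsong] [0x4E00-0x9FFF] [force=yes]
\definefontsynonym  [Mono] [name:dejavusansmono] [fallbacks=monocjk]
```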

>
> Also, if one has a well-designed font that includes all the latin
> glyphs and all the bold/italic variants then one should in principle
> not need an extra "latin" family, but those are probably just nice
> dreams ... I always had the impression that there are not many
> high-quality Chinese fonts and that having bold and italic alone is a
> problem, not to speak about frequent mixing of several different
> families.
>

Chinese fonts released in recent years do have better-designed Latin
glyphs. But as I said before, the Chinese script system is quite
different from the Latin one: Chinese foundries are under no obligation
to include "italic" or "small caps" glyphs in their Chinese fonts, just
as we cannot ask Adobe to include "kai" and "song" styles in Minion Pro.

> (Just thinking alound: aren't there plenty of books around that also
> mix lots of greek and latin, possibly using different fonts for them?
> How do they deal with the problem, or is the problem just
> neglectable?)
>

Compound typefaces are a basic feature required of any typesetting
software. QuarkXPress and InDesign have options to enable them, and the
feature is available even in non-production software like Microsoft Word.

But as far as I know, most typographic programs are WYSIWYG systems, and
rendering a page on a computer is much faster than a human typing a word,
so the performance problem is never noticed there. TeX is different: to
see the result after moving something to the right place, or after
changing a font, a full recompilation is involved. The long compilation
time makes the software unusable. (Imagine you want to correct a minor
error in a document and see the result: you press the typeset button in
TeXworks, leave lualatex running, and during the five minutes of
compilation you can have a cup of tea and enjoy the sunshine...
Anyway, t-zhspacing is much faster than lualatex, but even one minute to
compile a 100-page document is still too long.)

But one thing always puzzles me: when I open MS Word it consumes about
16 MB of memory; I then typeset Chinese text in 10 different Chinese
fonts (each over 10 MB), and Word still consumes only about 30 MB. If I
do the same in LuaTeX, it consumes almost 1 GB... Why can MS Word parse
fonts and render text so fast while consuming so little memory? Can TeX
do that too?

> Mojca
> ___________________________________________________________________________________
> If your question is of interest to others as well, please add an entry to the
> Wiki!
>
> maillist : ntg-context@ntg.nl / http://www.ntg.nl/mailman/listinfo/ntg-context
> webpage  : http://www.pragma-ade.nl / http://tex.aanhet.net
> archive  : https://foundry.supelec.fr/projects/contextrev/
> wiki     : http://contextgarden.net
> ___________________________________________________________________________________
>