On 9/8/12 19:21, Behdad Esfahbod wrote:
> Hi Jonathan,
>
> Thanks for bringing this up.  And more profiling is hugely appreciated.
>
> I'd like to take this in a different direction, though: if the text is all
> ASCII and all glyphs are in the font, then the normalization pass should
> really boil down to a cmap lookup for each character.  In which case we
> should be able to combine it with hb_map_glyphs() and do one cmap lookup
> per character instead of the current two.  That should make the normalizer
> overhead go away.  Indeed, if you check the profile, all the time spent in
> the normalizer is in HBGetGlyph, and it equals the time spent in
> hb_substitute_default.
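
To make the quoted idea concrete, a fused pass could look something like the
sketch below.  The fused_info_t struct and fused_normalize_and_map() are
names I made up for illustration; hb_font_get_glyph() is the only real API
here, and this is not the actual HarfBuzz internal code.

/* Rough sketch of "one cmap lookup per character instead of two":
 * the glyph-existence check the normalizer needs and the char->glyph
 * mapping are answered by the same lookup, whose result is kept.
 * Names below are made up; hb_font_get_glyph() is the only real API. */
#include <hb.h>

typedef struct {
  hb_codepoint_t unicode;
  hb_codepoint_t glyph;      /* result of the single cmap lookup */
  hb_bool_t      has_glyph;  /* what the normalizer wants to know */
} fused_info_t;

void
fused_normalize_and_map (hb_font_t *font,
                         const hb_codepoint_t *text, unsigned int len,
                         fused_info_t *out)
{
  for (unsigned int i = 0; i < len; i++)
  {
    out[i].unicode   = text[i];
    out[i].has_glyph = hb_font_get_glyph (font, text[i], 0, &out[i].glyph);
    /* If has_glyph is false, this character would go through the full
     * decompose/compose path; for all-ASCII text with full font
     * coverage that branch is never taken, so this loop is the whole
     * cost: one lookup per character, no second mapping pass. */
  }
}
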
Right, it's the cmap lookup that costs the time here (and if we can make our
GetGlyph callback faster, that'd help too).  So if map_glyphs can be combined
with the check that normalization is doing, that should be equally effective.
Maybe not quite as trivial to implement, but by all means, go for it! :)
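
On the callback side, one way to make a GetGlyph-style callback cheaper is a
small cache in front of the real lookup, roughly like this.
client_cmap_lookup(), cached_get_glyph() and the cache layout are stand-ins
for illustration; they are not what Gecko's HBGetGlyph actually does.

/* Illustrative only: a client glyph callback with a tiny direct-mapped
 * cache in front of the real cmap lookup.  client_cmap_lookup() stands
 * in for whatever the embedder actually does (platform font API, cmap
 * walker, ...); it is not a real HarfBuzz or Gecko function. */
#include <hb.h>

#define CACHE_SIZE 256

typedef struct {
  hb_codepoint_t unicode[CACHE_SIZE];
  hb_codepoint_t glyph[CACHE_SIZE];   /* 0 = slot empty / not cached */
} glyph_cache_t;

/* Stand-in for the embedder's real character->glyph lookup. */
extern hb_bool_t client_cmap_lookup (void *font_data,
                                     hb_codepoint_t unicode,
                                     hb_codepoint_t *glyph);

hb_bool_t
cached_get_glyph (hb_font_t *font, void *font_data,
                  hb_codepoint_t unicode, hb_codepoint_t variation_selector,
                  hb_codepoint_t *glyph, void *user_data)
{
  glyph_cache_t *cache = (glyph_cache_t *) user_data;
  unsigned int slot = unicode % CACHE_SIZE;

  (void) font;

  if (variation_selector)
    return 0;   /* sketch: no variation-sequence handling here */

  if (cache->unicode[slot] == unicode && cache->glyph[slot])
  {
    *glyph = cache->glyph[slot];   /* hit: skips the real lookup entirely */
    return 1;
  }

  if (!client_cmap_lookup (font_data, unicode, glyph))
    return 0;

  cache->unicode[slot] = unicode;  /* only successful lookups are cached */
  cache->glyph[slot]   = *glyph;
  return 1;
}

A callback like this would be registered with hb_font_funcs_set_glyph_func(),
passing a zero-initialized glyph_cache_t as its user_data.  Whether such a
cache beats simply making the underlying lookup cheaper is exactly the kind
of thing the profiles should settle.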


> An easy fish would be hb_set_unicode_props.  Maybe I'll make that lazy;
> don't know.  We need some of that stuff after mapping to glyphs, so we need
> to be able to predict whether we would ever need to look at them...  Or
> make it faster.
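
Just to illustrate the lazy option (this is not the real
hb_set_unicode_props): the idea would be to defer the general-category and
combining-class lookups until something actually asks for them.  The
lazy_info_t struct and its accessors below are invented;
hb_unicode_general_category() and hb_unicode_combining_class() are the real
public API doing the fill, and ufuncs would be whatever hb_unicode_funcs_t
the buffer carries (hb_unicode_funcs_get_default(), say).

/* Sketch of lazily filling per-character Unicode properties on first
 * use instead of up front.  The struct and accessor names are made up
 * for illustration and do not match HarfBuzz's internal buffer layout. */
#include <hb.h>

typedef struct {
  hb_codepoint_t codepoint;
  unsigned char  props_set;         /* 0 until the lazy fill has run */
  unsigned char  general_category;  /* hb_unicode_general_category_t */
  unsigned char  combining_class;   /* hb_unicode_combining_class_t */
} lazy_info_t;

static void
ensure_unicode_props (lazy_info_t *info, hb_unicode_funcs_t *ufuncs)
{
  if (info->props_set)
    return;
  info->general_category =
    (unsigned char) hb_unicode_general_category (ufuncs, info->codepoint);
  info->combining_class =
    (unsigned char) hb_unicode_combining_class (ufuncs, info->codepoint);
  info->props_set = 1;
}

/* Everything that needs the properties goes through accessors like this,
 * so a run that never looks at them (plain ASCII, say) pays nothing. */
unsigned char
get_general_category (lazy_info_t *info, hb_unicode_funcs_t *ufuncs)
{
  ensure_unicode_props (info, ufuncs);
  return info->general_category;
}
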

> Makes sense?

> behdad


