On Fri, Sep 1, 2023, 9:45 AM Alexei Podtelezhnikov <apodt...@gmail.com>
wrote:

>
> 
> >> I will try the dynamic heap allocations for the rendering
> >> buffer. This might be the largest of them, I think. In addition,
> >> this should help with the rendering speed when rendering complex
> >> shapes like
> >> https://fonts.google.com/specimen/Cabin+Sketch. Currently, FreeType
> >> makes several attempts until a sub-band can fit into a static stack
> >> buffer. We should be able to fit it into a dynamic buffer easily. I
> >> wonder if CabinSketch should be about as complex as we can tolerate
> >> and refuse anything much more complex than this. A lot of time-outs
> >> will be resolved...
> >
> > Perhaps a hybrid approach is the right one: Use the current
> > infrastructure up to a certain size, being as fast as possible because
> > dynamic allocation overhead can be avoided, and resort to dynamic
> > allocation otherwise.
>
> Werner,
>
> FreeType is not shy about allocating buffers to load a glyph. This is just
> one more, and I highly doubt that it matters even at small sizes. We
> already allocate an FT_Bitmap for rendering anyway. As a matter of fact,
> FreeType loses to the dense renderers when rendering complex glyphs
> precisely because of the multiple restarts needed to fit the small buffer.
>
> Alexei
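
On the hybrid idea, here is a rough, untested sketch of what it could look
like (all names below are invented for illustration and are not the actual
FreeType API):

  #include <stdlib.h>
  #include <string.h>

  #define RENDER_POOL_SIZE  16384   /* size of the fixed on-stack pool */

  /* Use the on-stack pool for the common case and fall back to a single */
  /* heap allocation for complex outlines, instead of restarting the     */
  /* rasterizer with ever smaller sub-bands.                             */
  int
  sketch_render( size_t  needed )
  {
    unsigned char   stack_pool[RENDER_POOL_SIZE];
    unsigned char*  pool = stack_pool;


    if ( needed > sizeof ( stack_pool ) )
    {
      pool = (unsigned char*)malloc( needed );
      if ( !pool )
        return -1;                  /* out of memory */
    }

    memset( pool, 0, needed );      /* stand-in for the actual rendering work */

    if ( pool != stack_pool )
      free( pool );

    return 0;
  }

The common path pays only a size check; the heap path replaces the repeated
sub-band restarts with a single allocation per glyph.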


Wanted to point out that compiling with gcc and adding "-Wstack-usage=2000"
to get warnings about stack frames larger than 2000 bytes is probably the
easiest way to track down large stacks at the moment. Note that
af_cjk_metrics_init_widths (44480 bytes) and af_latin_metrics_init_widths
(52992 bytes) are by far the largest. cf2_interpT2CharString (27520 bytes)
is also surprisingly large. There are a few others, like the rasterizer
stacks in the 10-20 KB range, that one may also want to look into, but these
have been less problematic in my experience (though that may have been
because the even larger stacks were allocated first). Just wanted to point
out how to measure, and that the rasterizer might not be the first place to
look.
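
Concretely, something along these lines (assuming the autotools build here;
adjust for your own setup):

  ./configure CFLAGS="-g -O2 -Wstack-usage=2000"
  make 2> stack-usage.log

  # alternatively, write per-function numbers to *.su files and sort them
  ./configure CFLAGS="-g -O2 -fstack-usage"
  make
  sort -k2 -nr $(find . -name '*.su') | head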
