On Tue, 30 Aug 2022 22:23:44 +0600
NRK <n...@disroot.org> wrote:

Dear NRK,

> The "proper" way (IMO) would be to build up a list of fonts which
> would be capable of representing as many code-points available in the
> system *right at startup* - instead of checking each unknown
> code-point as we go.
> 
> This way if the code-point cannot be found within the list; we'll know
> right away that it's a missing glyph and there won't be any need to
> call XftFontMatch for each "unknown" code-point.
> 
> The problem is, as I said, I'm not sure if it's even possible/feasible
> with Xft/FontConfig as I'm not very familiar with those libraries. If
> someone knows the answer, then feel free to speak up.
> 
> If it is possible and someone can point out which routines I should be
> looking at then I can try to take a crack at it. In case that's not
> possible, then there's probably not a whole lot that can be done about
> the situation.
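
As for which routines you'd be looking at: an untested sketch of that
startup scan, using plain fontconfig (no Xft involved, link with
`pkg-config --libs fontconfig`), could rest on FcFontList(),
FcPatternGetCharSet() and FcCharSetHasChar(); roughly like this, error
handling mostly omitted:

#include <stdio.h>
#include <fontconfig/fontconfig.h>

/* Build the union of all installed fonts' charsets once at startup,
 * so "is this codepoint representable at all?" becomes a single
 * bit-test instead of a fontconfig match per unknown codepoint. */
static FcCharSet *
coverage_build(void)
{
	FcPattern *pat = FcPatternCreate();
	FcObjectSet *os = FcObjectSetBuild(FC_CHARSET, (char *)NULL);
	FcFontSet *fs = FcFontList(NULL, pat, os);
	FcCharSet *all = FcCharSetCreate(), *cs, *tmp;
	int i;

	for (i = 0; fs && i < fs->nfont; i++) {
		if (FcPatternGetCharSet(fs->fonts[i], FC_CHARSET, 0,
		                        &cs) != FcResultMatch)
			continue;
		tmp = FcCharSetUnion(all, cs);
		FcCharSetDestroy(all);
		all = tmp;
	}
	if (fs)
		FcFontSetDestroy(fs);
	FcObjectSetDestroy(os);
	FcPatternDestroy(pat);

	return all;
}

int
main(void)
{
	FcCharSet *all;

	if (!FcInit())
		return 1;
	all = coverage_build();
	printf("U+1F600 is %s\n", FcCharSetHasChar(all, 0x1F600) ?
	       "covered" : "missing");
	FcCharSetDestroy(all);

	return 0;
}

Whether walking every installed font at startup stays cheap enough on
systems with hundreds of fonts is something one would have to measure,
though.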

This aspect was discussed a while back, and we all know that
Xft/Fontconfig is cancer. The entire font-rendering topic is a huge
rabbit hole, though, given that it covers a very wide range of
problems: complex file parsing (OTF, TTF), font shaping (on which a
single library, harfbuzz, holds a monopoly, and which Unicode treats
as "specification by implementation", which is horrible) and
rendering/rasterization.

It's difficult to even get a foot in the door. As far as I remember,
Thomas Oltmann worked on a rendering library and has good insight into
the difficulties involved.

As a middle ground, maybe one could design a simple frontend for
fontconfig. I can imagine that caching the rendering ability by
codepoint in a compressed format as metadata might be a cool approach;
my experience while working on libgrapheme has been that such tables
are highly compressible, down to a few kilobytes for a complete
codepoint table.
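
To make that concrete, here is a hypothetical layout for such a cached
table (not what libgrapheme actually ships, just the general idea):
store coverage as sorted, non-overlapping codepoint ranges and
binary-search them at lookup time. Real-world coverage is dominated by
long contiguous runs, which is why the whole table tends to collapse
into a few kilobytes.

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* one covered, inclusive codepoint range [lo, hi] */
struct range {
	uint32_t lo, hi;
};

/* binary search over the sorted range table */
static int
covered(const struct range *r, size_t n, uint32_t cp)
{
	size_t lo = 0, hi = n;

	while (lo < hi) {
		size_t mid = lo + (hi - lo) / 2;

		if (cp < r[mid].lo)
			hi = mid;
		else if (cp > r[mid].hi)
			lo = mid + 1;
		else
			return 1;
	}

	return 0;
}

int
main(void)
{
	/* hypothetical excerpt: ASCII printables, Latin-1/Extended-A */
	static const struct range tab[] = {
		{ 0x20, 0x7e },
		{ 0xa0, 0x17f },
	};

	assert(covered(tab, sizeof(tab) / sizeof(*tab), 'A'));
	assert(!covered(tab, sizeof(tab) / sizeof(*tab), 0x1f600));

	return 0;
}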

With best regards

Laslo
