Re: Progress update on adjustment database
> Right now, I don't understand the needs well enough to know what to
> generalize or to test whether the general solution works well
> enough. I'd rather start with specific use cases rather than a
> general solution with unclear goals.

Well, there are no 'unclear goals': the general solution is *exactly*
what I was talking about all the time, and what you need for any entry
in the adjustment database: a mapping from input Unicode characters to
glyph indices based on the GSUB + cmap tables and not on cmap alone.

>> Right now, I favor the latter: It should be a last-minute action,
>> similar to TrueType's `DELTAP[123]` bytecode instructions.
>
> I disagree with doing the adjustment after grid fitting because in
> this case, grid fitting is a destructive action. Doing it after
> would require taking a flat line and adding the wiggle back in,

Yes, but you have all the available data because you can access the
original glyph shape. In other words, you can exactly control which
points to move.

> possibly in a way that doesn't match the font.

At the resolutions we are talking about this absolutely doesn't
matter, I think. Essentially you have to make it appear as

  x x x x

> It sounds easier to prevent that from happening in the first place.

OK, give it a try. Rounding information is available also, so this
might work as well.


Werner
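The mapping Werner describes - characters to glyph indices via cmap
*plus* GSUB, not cmap alone - could be sketched roughly as below. The
tables are toy stand-ins (not parsed from a real font), and only 1:1
substitutions are modelled:

```python
# Toy sketch: the character-to-glyph mapping the adjustment database
# needs. 'cmap' and 'gsub_single' are hypothetical stand-ins for the
# real font tables; only single (1:1) substitutions are followed.

cmap = {'a': 1, 0x0303: 2}       # char -> nominal glyph index
gsub_single = {1: 10, 10: 11}    # glyph -> substituted glyph

def glyph_closure(char):
    """All glyph indices a character can reach via 1:1 substitutions."""
    seen = set()
    todo = [cmap[char]]
    while todo:
        g = todo.pop()
        if g in seen:
            continue
        seen.add(g)
        if g in gsub_single:
            todo.append(gsub_single[g])
    return seen

print(sorted(glyph_closure('a')))  # [1, 10, 11]
```

Every glyph in the closure would then inherit the adjustment-database
entry of the originating character.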
Re: COLRv1 to gray/alpha question (& color-blindness question)
> Probably both approaches are wrong. I am asking that question both
> in terms of the spec and in terms of implementation details - what
> is the correct/recommended approach to render multi-layered 32-bit
> RGBA COLRv1 data to a non-colour target media?

My recommendation is to blend three times, each color separately.
Then combine with appropriate non-equal weights. This should be
correct. Perhaps you can combine the color first, then blend using
alpha.
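For what it's worth, the two orders suggested above - blend each
channel, then weight to gray; or weight to gray first, then blend -
agree, because the grayscale weighting is linear. A toy sketch, using
the common Rec. 601 luma weights as an assumed choice (the colour
values are made up):

```python
# Per-channel alpha blend followed by luma weighting, versus
# gray conversion followed by a single alpha blend.

def blend(src, dst, a):
    """Plain alpha blend of one channel over an opaque background."""
    return src * a + dst * (1 - a)

def luma(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b  # Rec. 601 weights

fg = (0.9, 0.3, 0.1)   # hypothetical layer colour
bg = (1.0, 1.0, 1.0)   # white background
a = 0.4                # layer alpha

# Blend three times, one channel at a time, then weight:
gray1 = luma(*(blend(f, b, a) for f, b in zip(fg, bg)))

# Combine the colour to gray first, then blend using alpha:
gray2 = blend(luma(*fg), luma(*bg), a)

print(abs(gray1 - gray2) < 1e-12)  # True: the orders agree
```

So either order is fine; what is *not* fine is discarding alpha
before compositing, as discussed elsewhere in this thread.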
Re: Progress update on adjustment database
> Probably yes, but who knows. It would be nice to have a generic
> solution that completely covers the whole situation, and we never
> have to think about it again.

Right now, I don't understand the needs well enough to know what to
generalize or to test whether the general solution works well enough.
I'd rather start with specific use cases rather than a general
solution with unclear goals.

> This leads to the basic question: Shall the correction be applied
> before or after the grid fitting? Right now, I favor the latter: It
> should be a last-minute action, similar to TrueType's `DELTAP[123]`
> bytecode instructions.

I disagree with doing the adjustment after grid fitting because in
this case, grid fitting is a destructive action. Doing it after would
require taking a flat line and adding the wiggle back in, possibly in
a way that doesn't match the font. It sounds easier to prevent that
from happening in the first place.

On Thu, Jul 20, 2023 at 1:02 PM Werner LEMBERG wrote:
>
> > Since hinting glyphs that are descendants of combining characters
> > will help few fonts, what other ways does the database need to
> > use the GSUB table? The only other use case I'm aware of are one
> > to one substitutions providing alternate forms of a glyph.
>
> Probably yes, but who knows. It would be nice to have a generic
> solution that completely covers the whole situation, and we never
> have to think about it again.
>
> > As for the tilde un-flattening, the approach I'm thinking of is
> > to force the tilde to be at least 2 pixels tall before grid
> > fitting begins. Would this ever cause the tilde to be 3 pixels
> > because of rounding?
>
> This leads to the basic question: Shall the correction be applied
> before or after the grid fitting? Right now, I favor the latter: It
> should be a last-minute action, similar to TrueType's `DELTAP[123]`
> bytecode instructions.
>
> https://learn.microsoft.com/en-us/typography/opentype/spec/tt_instructions#managing-exceptions
>
> In other words, if a tilde character's wiggle (not the whole
> tilde's vertical size!) is detected to be only 1px high, the shape
> should be aggressively distorted vertically to make the wiggle span
> two pixels. To do this, some code has to be written to detect the
> inflection and extremum points of the upper and lower wiggle of the
> outline; only the extrema are then to be moved vertically.
>
>
> Werner
>
Re: Progress update on adjustment database
> Since hinting glyphs that are descendants of combining characters
> will help few fonts, what other ways does the database need to use
> the GSUB table? The only other use case I'm aware of are one to one
> substitutions providing alternate forms of a glyph.

Probably yes, but who knows. It would be nice to have a generic
solution that completely covers the whole situation, and we never have
to think about it again.

> As for the tilde un-flattening, the approach I'm thinking of is to
> force the tilde to be at least 2 pixels tall before grid fitting
> begins. Would this ever cause the tilde to be 3 pixels because of
> rounding?

This leads to the basic question: Shall the correction be applied
before or after the grid fitting? Right now, I favor the latter: It
should be a last-minute action, similar to TrueType's `DELTAP[123]`
bytecode instructions.

https://learn.microsoft.com/en-us/typography/opentype/spec/tt_instructions#managing-exceptions

In other words, if a tilde character's wiggle (not the whole tilde's
vertical size!) is detected to be only 1px high, the shape should be
aggressively distorted vertically to make the wiggle span two pixels.
To do this, some code has to be written to detect the inflection and
extremum points of the upper and lower wiggle of the outline; only the
extrema are then to be moved vertically.


Werner
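A much cruder variant of the un-flattening idea - scaling every
outline point vertically about the midline, rather than detecting the
inflection points and moving only the extrema as Werner describes -
might look like the sketch below. The point list and the 2-pixel
threshold are illustrative only:

```python
# Hypothetical sketch: force a tilde-like wiggle to span at least
# min_span pixels by stretching the outline vertically. A real
# implementation would move only the detected extrema; this toy
# version scales every point about the vertical midline.

def unflatten(points, min_span=2.0):
    """points: list of (x, y) in pixels. Returns an adjusted copy."""
    ys = [y for _, y in points]
    span = max(ys) - min(ys)
    if span == 0 or span >= min_span:
        return list(points)        # degenerate or already tall enough
    mid = (max(ys) + min(ys)) / 2.0
    scale = min_span / span
    # Move points vertically, away from the midline, so that the
    # topmost and bottommost points end up min_span apart.
    return [(x, mid + (y - mid) * scale) for x, y in points]

pts = [(0, 0.0), (1, 0.6), (2, 0.2), (3, 0.8), (4, 0.4)]
out = unflatten(pts)
print(max(y for _, y in out) - min(y for _, y in out))  # close to 2.0
```

Since the scaling happens on the unhinted outline, this would answer
the rounding question only after grid fitting rounds the stretched
extrema to pixel boundaries.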
Re: Progress update on adjustment database
Thanks! This even answers some questions I was thinking about, but
hadn't asked. I was wondering why I couldn't find any GSUB entries for
combining characters. In one font I dumped with ttx, there were
entries doing the opposite: mapping 'aacute' -> 'a' + 'acute'.

Since hinting glyphs that are descendants of combining characters will
help few fonts, what other ways does the database need to use the GSUB
table? The only other use case I'm aware of are one to one
substitutions providing alternate forms of a glyph.

As for the tilde un-flattening, the approach I'm thinking of is to
force the tilde to be at least 2 pixels tall before grid fitting
begins. Would this ever cause the tilde to be 3 pixels because of
rounding?

On Thu, Jul 20, 2023 at 3:21 AM Werner LEMBERG wrote:
>
> > The next thing I'm doing for the adjustment database is making
> > combining characters work. Currently, only precomposed characters
> > will be adjusted. If my understanding is correct, this would mean
> > finding any lookups that map a character + combining character
> > onto a glyph, then apply the appropriate adjustments to that
> > glyph.
>
> Yes. I suggest that you use the `ttx` decompiler from fonttools and
> analyse the contents of a GSUB table of your favourite font.
>
> https://pypi.org/project/fonttools/
>
> At the same time, use the `ftview` FreeType demo program with an
> appropriate `FT2_DEBUG` setting so that you can see what the
> current HarfBuzz code does for the given font. Examples:
>
> ```
> ttx -t GSUB arial.ttf
>
> FT2_DEBUG="afshaper:7 afglobal:7 -v" \
>   ftview -l 2 -kq arial.ttf &> arial.log
> ```
>
> Option `-l 2` selects 'light' hinting (i.e., auto-hinting), `-kq`
> emulates the 'q' keypress (i.e., quitting immediately). See the
> appended files for `arial.ttf` version 7.00.
>
> In `arial.log`, the information coming from the 'afshaper'
> component tells you the affected GSUB lookups; this helps poking
> around in the XML data as produced by `ttx`.
> The 'afglobal' information tells you the glyph indices covering a
> given script and feature (start with 'latn_dflt').
>
> You might also try a font editor of your choice (for example,
> FontForge, menu entry 'View->Show ATT') to further analyze how the
> GSUB data is constructed, and to get some visual feeling of what's
> going on.
>
> > Right now, I'm trying to figure out what features I need to look
> > inside to find these lookups. Should I just search all features?
>
> Yes, I think so. Since the auto-hinter is agnostic to the script
> and the used language, you have to have all information in advance.
>
> > After that, I'm going to tackle the tilde-flattening issue, and
> > any other similar marks that are getting flattened.
>
> Note that in most fonts you won't find any GSUB data for common
> combinations like 'a' + 'acute' -> 'aacute'. Usually, such stuff
> gets handled by the GPOS table, i.e., instead of mapping two glyphs
> to another single one, the accent gets moved to a better position.
> In this case, the glyphs are rendered separately, *outside of
> FreeType's scope*. This means that we can't do anything on the
> auto-hinter side to optimize the distance between the base and the
> accent glyph (see also the comment in file `afshaper.c` starting at
> line 308, and this nice article:
> https://learn.microsoft.com/en-us/typography/develop/processing-part1).
>
> It thus probably makes sense to do the tilde stuff first.
>
>
> Werner
>
Palette table (Re: COLRv1 to gray/alpha question (& color-blindness question))
On Wednesday, 19 July 2023 at 05:55:20 BST, Werner LEMBERG wrote:

> Different colour schemes are supported; the question is about
> defaults. For example, let's assume that a font contains color
> schemes A and B, the latter suitable for (most) color-blind people.
> Let's further assume that scheme A doesn't render well on a
> grayscale device because of identical grayscale values. Does COLRv1
> contain any information to quickly decide which color scheme should
> be used for grayscale rendering?

I think the palette table already has classifications by dark vs.
light backgrounds. It might be useful to have other attributes like
"high-contrast / suitable for visually impaired" and
"color-blind-friendly"?
Re: COLRv1 to gray/alpha question (& color-blindness question)
On Thursday, 20 July 2023 at 01:41:51 BST, Alexei Podtelezhnikov
wrote:

> > Hin-Tak,
> >
> > This is probably both a spec question & a technical question.
> > What is the recommendation for COLRv1 when the rendering target
> > media is not capable of color?
>
> Alpha is colorless until blended. Therefore any conversion of RGB
> to alpha will produce random blending results because assuming
> black foreground is wrong even on gray surfaces. Hence, the
> blending should be done in color (on a gray surface r=g=b), then
> the final result can be converted to gray once again for display as
> above. This is essentially what should happen in ftgrid/ftview if
> you choose 8-bit display, e.g., "-d 800x600x8".

That's not how it is done in my COLRv1-capable ftgrid/ftview. It is
an implementation detail - I mentioned it twice already, but I'll
repeat it a third time: converting from 32-bit RGBA multi-layered
COLRv1 data to 8-bit, I have a choice of telling Skia it is all alpha
or it is all gray.

In the former case, Skia seems to throw away all the RGB data and
just collapse the multiple alpha layers into one, then draw the
foreground black through the combined alpha mask.

The second case is more interesting. As I said, pale solid colors and
dark transparent colors render very differently. Skia seems to think
that, since the target media in this case is not
transparency-capable, it should throw away the alpha channel first,
before collapsing and overlaying the successive glyph layers as solid
shades of gray.

Probably both approaches are wrong. I am asking that question both in
terms of the spec and in terms of implementation details - what is
the correct/recommended approach to render multi-layered 32-bit RGBA
COLRv1 data to a non-colour target media?
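The gap between the two behaviours described here can be reproduced
with a toy two-layer example: compositing in colour and converting
the final result to gray gives a very different value from dropping
the alpha channel first. This is only an illustration with made-up
colour values, not Skia code:

```python
# Why "composite, then convert to gray" differs from "drop alpha,
# then draw solid gray". Values are hypothetical.

def over(src, dst):
    """Source-over: RGBA src (floats 0..1) onto an opaque RGB dst."""
    a = src[3]
    return tuple(src[i] * a + dst[i] * (1 - a) for i in range(3))

def to_gray(rgb):
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b  # Rec. 601 luma weights

white = (1.0, 1.0, 1.0)
layer = (0.2, 0.2, 0.8, 0.5)   # a dark, half-transparent blue

# Composite in colour, convert the final result to gray:
correct = to_gray(over(layer, white))

# Discard alpha first, draw the layer as a solid gray:
wrong = to_gray(layer[:3])

print(correct, wrong)  # the two values differ noticeably
```

A half-transparent dark colour over white ends up mid-gray, while the
alpha-stripped version stays dark, matching the observation that pale
solid colours and dark transparent colours render very differently.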
Re: The COLRv1 hook code (Re: FT_Bitmap and FT_BitmapGlyph life cycles)
On Thursday, 20 July 2023 at 09:04:09 BST, Brad Neimann wrote:

> > Perhaps it is easier just to show you what I have - this is
> > already functional and I can even switch COLRv1 palettes in
> > ftgrid (screenshots the usual place).
>
> …and where is this 'usual place'? I can't see screenshots anywhere.

They are being added to the bottom of:
https://github.com/HinTak/harfbuzz-python-demos/tree/master/skia-adventure

I have a COLRv1-enhanced ftgrid where the "C" key (normally for
switching colours for glyph outlines in glyf mode) cycles through the
palettes - it also changes the status line on the side saying what
palette number it is on.

Still on having the FreeType demos tell Skia to do stuff and receive
rendered coloured bitmaps back :-). Just moving further on from Skia
rendering SVG to Skia rendering COLRv1.

Some of the 7 palettes are obviously for dark backgrounds - I
probably should have this info shown. Do you see anything interesting
for palettes 0 to 6?
Adobe Native + cairo (Re: Adobe's SVG native as ft2 renderer hook (Re: Bug in rsvg+cairo hook with Nabla?))
It took less than an hour of quick hacking to get rid of librsvg and
replace it with Adobe Native + its cairo backend from Suziki san.

There is a rendering bug filed as
https://github.com/adobe/svg-native-viewer/issues/185 - I said it is
between librsvg and skia.

The code diff is at
https://github.com/HinTak/harfbuzz-python-demos/blob/master/svg-native/ft2-demos-Adobe-SVG-Cairo.diff

(Since I did configure --with-librsvg=no, I had to add the cairo
headers and libs manually... you can figure out a better way of doing
that...)

So we have 5 svg-hooks now (in historical order):

- librsvg + cairo
- pycairo + librsvg gobject introspection (python)
- skia m103+
- Adobe Native + skia (any/older version)
- Adobe Native + cairo
Re: The COLRv1 hook code
> I'd like to not call "FT_Glyph_To_Bitmap()" but just do
> 'FT_New_Glyph()' on my own, but that always crashes.

Why? What does the debugger say? This might give a hint. Are you sure
that the memory allocation routine in your code is the one used by
FreeType? `FT_MEM_ALLOC` is an internal function...


Werner
Re: ftbench update: make integrated
> About percentages, I ran the bench with -c 200 to have instant
> results for the development process. Here in the benchmark file
> attached, it made more acceptable results when I increased the -c
> flag to 2000.

This is much better, thanks! However, there are still tests that show
a difference of over 10% for the same commit, in spite of having a
very large number of runs. Increasing the number of iterations is
actually a brute-force method - what are the timings for the new
defaults? The tests must not be too slow, otherwise I could run
everything in a virtual machine like 'valgrind' and use just a single
iteration...

Apropos timings: Please add some info to the HTML page that tells how
long it takes to test a given font (or perhaps even more detailed
information to tell how long it takes to perform a certain test).

I suggest that you have a look at other statistical tools that do
such sampling, for example Google's 'benchmark' project. In
particular, have a look at its user manual:

https://github.com/google/benchmark/blob/main/docs/user_guide.md

What caught my attention especially was the warmup-time option: Maybe
it helps if you add an option `--warmup=N` to the benchmark program
to make it ignore the first N iterations before starting the timing.
Maybe there are other things in the user manual (and/or source code)
that you could use to improve the statistical quality of the FreeType
tests.

Other useful information to reduce the variance can be found here;
please do some research on what might be applicable!

https://github.com/google/benchmark/blob/main/docs/reducing_variance.md

> I changed the compiling and the linking process as in the demo
> programs. I would like to continue to another build system if it
> seems ok.

Will test soon.


Werner
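The `--warmup=N` idea - run N untimed iterations before starting the
clock, so caches and branch predictors settle - could look roughly
like this. The harness and the `work` callback are hypothetical
stand-ins, not ftbench code:

```python
# Sketch of a --warmup=N style benchmark loop: the first N calls are
# executed but not timed; only the remaining iterations are measured.
import time

def bench(work, iterations, warmup=0):
    for _ in range(warmup):
        work()                      # discarded: warms caches etc.
    start = time.perf_counter()
    for _ in range(iterations):
        work()
    return time.perf_counter() - start

calls = []
elapsed = bench(lambda: calls.append(1), iterations=100, warmup=10)
print(len(calls))  # 110: 10 warmup calls plus 100 timed ones
```

A per-test warmup count would let slow tests (Render, Stroke) use a
small N while cheap tests (Get_Char_Index) use a larger one.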
Re: ftbench update: make integrated
Thank you, Hin-Tak. I have checked the makefile of the demos and used
the libs and the includes as there (it was overriding ccraw to cc).

About percentages, I ran the bench with -c 200 to have instant
results for the development process. Here in the benchmark file
attached, it made more acceptable results when I increased the -c
flag to 2000.

I changed the compiling and the linking process as in the demo
programs. I would like to continue to another build system if it
seems ok.

Best,
Goksu

goksu.in

On 16 Jul 2023 10:44 +0300, Werner LEMBERG wrote:
>
> > * i modified benchmark program not to report 'time per op' but
> >   rather 'cumulative time per N iterations'
> > * changed the table design
> > * sentence 'smaller values are better' is present
> > * embed a small CSS fragment at the top of the page
> > * linked to the original baseline and benchmark `.txt`
> > * everything is being created in the build directory
>
> Nice, thanks! Now the next problem: For the same commit IDs, I see
> differences in percentage up to 47% in your HTML file! This
> essentially means that the delivered numbers are still completely
> meaningless - the differences must be at most a few percent or even
> smaller, given that the tests are run on exactly the same machine.
>
> Please investigate how to improve that, probably by modifying the
> benchmark test options, or probably even by implementing per-test
> options so that the single tests can be fine-tuned. Perhaps you
> should do some internet research to find how other, similar
> benchmark tests are constructed to get meaningful numbers.
>
>
> Werner

Benchmark Results

Warning: Baseline and Benchmark have the same commit ID

Info         Baseline                    Benchmark
Parameters   -c 2000                     -c 2000
Commit ID    e9362ecc                    e9362ecc
Commit Date  2023-07-14 16:18:00 +0300   2023-07-14 16:18:00 +0300
Branch       GSoC-2023-Ahmet             GSoC-2023-Ahmet

*Smaller values mean faster operation

Results for Roboto_subset.ttf

Test                      N                Baseline (ms)  Benchmark (ms)  Difference (%)
Load                      24               1218.299       1146.114         5.9
Load_Advances (Normal)    24               1253.112       1146.197         8.5
Load_Advances (Fast)      24               6.242          6.113            2.1
Load_Advances (Unscaled)  24               5.707          5.780           -1.3
Render                    207120 / 197280  785.332        779.617          0.7
Get_Glyph                 24               355.068        347.508          2.1
Get_Char_Index            188000           5.013          4.963            1.0
Iterate CMap              2000             3.994          4.032           -1.0
New_Face                  2000             85.614         86.143          -0.6
Embolden                  24               473.296        463.575          2.1
Stroke                    55800 / 55200    1595.643       1599.108        -0.2
Get_BBox                  24               237.693        232.396          2.2
Get_CBox                  24               180.251
Re: The COLRv1 hook code (Re: FT_Bitmap and FT_BitmapGlyph life cycles)
> Perhaps it is easier just to show you what I have - this is already
> functional and I can even switch COLRv1 palettes in ftgrid
> (screenshots the usual place).

…and where is this 'usual place'? I can't see screenshots anywhere.

Regards,
Brad
Re: Progress update on adjustment database
> The next thing I'm doing for the adjustment database is making
> combining characters work. Currently, only precomposed characters
> will be adjusted. If my understanding is correct, this would mean
> finding any lookups that map a character + combining character onto
> a glyph, then apply the appropriate adjustments to that glyph.

Yes. I suggest that you use the `ttx` decompiler from fonttools and
analyse the contents of a GSUB table of your favourite font.

https://pypi.org/project/fonttools/

At the same time, use the `ftview` FreeType demo program with an
appropriate `FT2_DEBUG` setting so that you can see what the current
HarfBuzz code does for the given font. Examples:

```
ttx -t GSUB arial.ttf

FT2_DEBUG="afshaper:7 afglobal:7 -v" \
  ftview -l 2 -kq arial.ttf &> arial.log
```

Option `-l 2` selects 'light' hinting (i.e., auto-hinting), `-kq`
emulates the 'q' keypress (i.e., quitting immediately). See the
appended files for `arial.ttf` version 7.00.

In `arial.log`, the information coming from the 'afshaper' component
tells you the affected GSUB lookups; this helps poking around in the
XML data as produced by `ttx`.
The 'afglobal' information tells you the glyph indices covering a
given script and feature (start with 'latn_dflt').

You might also try a font editor of your choice (for example,
FontForge, menu entry 'View->Show ATT') to further analyze how the
GSUB data is constructed, and to get some visual feeling of what's
going on.

> Right now, I'm trying to figure out what features I need to look
> inside to find these lookups. Should I just search all features?

Yes, I think so. Since the auto-hinter is agnostic to the script and
the used language, you have to have all information in advance.

> After that, I'm going to tackle the tilde-flattening issue, and any
> other similar marks that are getting flattened.

Note that in most fonts you won't find any GSUB data for common
combinations like 'a' + 'acute' -> 'aacute'. Usually, such stuff gets
handled by the GPOS table, i.e., instead of mapping two glyphs to
another single one, the accent gets moved to a better position. In
this case, the glyphs are rendered separately, *outside of FreeType's
scope*. This means that we can't do anything on the auto-hinter side
to optimize the distance between the base and the accent glyph (see
also the comment in file `afshaper.c` starting at line 308, and this
nice article:
https://learn.microsoft.com/en-us/typography/develop/processing-part1).

It thus probably makes sense to do the tilde stuff first.


Werner

arial-7.00.ttx.xz
Description: Binary data

arial-7.00.log.xz
Description: Binary data