Hmm, three other people answered, but none offered this: there is no practical
need for floating point, double or quad precision. Much of the font spec itself
refers to calculations in F2.14 (16-bit numbers using 2 bits for the integer
part and 14 bits for the fractional part), or F6.10. How many shades between
black and white can your eye detect on edges during antialiasing? Can your eye
practically tell even 256 shades of grey on a one-pixel edge? You gain speed by
avoiding floating-point operations and conversions. You simply don't need
1-in-1e38 precision when your eyes can't tell much beyond 1 in 64 in some
cases / usages. So doing calculations with 1-in-4096 precision or so, then
throwing away the smallest bits, is sufficient.

We are talking about final output that is presented with 1-in-256 or even
1-in-64 precision (e.g. the grey level of a one-pixel edge).

On Thursday 2 January 2025 at 22:39:55 GMT, Ian (Magical Bat) Dvorin
<magical...@comcast.net> wrote:

    Hello,

    Recently, I was looking at the source code for the SDF rendering
    portion of FreeType, and I was surprised to see that all the math is
    done using fixed-point numbers. Looking further, the library does not
    use floating point at all. What are the primary reasons for this? Is
    it just a relic of the original library design, from when fixed-point
    numbers were significantly faster than floating-point numbers? Do
    they allow for slightly more precision when working at a certain
    scale? Are there separate compatibility reasons? I am sorry if there
    is an explanation somewhere on the FreeType website; I was not able
    to find anything. Thank you in advance for any explanation.
