I notice one strange thing in that code: the result will only fit in 16
bits if red1 and red2*alpha_1 together aren't bigger than 0x80.
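
Just to make that concrete, here is a minimal standalone check of the
worst case (my own sketch; I'm assuming the blend has the shape
red1*alpha_1 + red2*alpha_2 with 8-bit inputs, which is how I read the
code):

      #include <stdint.h>
      #include <stdio.h>

      int main (void)
      {
        uint8_t red1 = 0xFF, alpha_1 = 0xFF;
        uint8_t red2 = 0xFF, alpha_2 = 0xFF;

        /* Worst case: 65025 + 65025 = 130050, which no longer fits in
           16 bits and wraps to 64514 if the intermediate really is
           16 bits wide. */
        uint_least16_t red = red1 * alpha_1 + red2 * alpha_2;
        uint32_t       ref = red1 * alpha_1 + red2 * alpha_2;

        printf ("16-bit intermediate: %u\n", (unsigned) red);
        printf ("32-bit intermediate: %u\n", (unsigned) ref);
        return 0;
      }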

In fact, if I bump the data type of red, green, and blue to uint32_t, it
works fine. I'm just not sure why it works with uint_least16_t without
optimization, since the intermediate values in the test cases all fit
into 16 bits, and gcc actually uses a 16-bit type in both optimization
cases (verified with sizeof).

If I replace uint_least16_t with uint16_t, I still see the same bug.
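
That is what I would expect: on the usual gcc/glibc targets
uint_least16_t and uint16_t are both typedefs for unsigned short, so
swapping one for the other shouldn't change the generated code at all.
A quick check:

      #include <stdint.h>
      #include <stdio.h>

      int main (void)
      {
        /* On common gcc/glibc targets both of these print 2; the
           standard only requires uint_least16_t to be at least 16 bits
           wide, but in practice it is exactly 16 here. */
        printf ("sizeof (uint16_t)       = %zu\n", sizeof (uint16_t));
        printf ("sizeof (uint_least16_t) = %zu\n", sizeof (uint_least16_t));
        return 0;
      }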

To be honest I don't fully understand what the reduction to byte does:

      red = (uint8_t) ((red + (red >> 8) + 0x80) >> 8);

but it looks to me as if it merely does rounding up. The initial
calculation ends up in the high byte of red, with a slight underestimate
(the factor is 255, not 256), which explains why rounding up is
necessary.
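
As far as I can tell it is the usual fixed-point trick for dividing by
255 with rounding to nearest: the red >> 8 term compensates for shifting
by 8 (i.e. dividing by 256 rather than 255), and the 0x80 is the
rounding bias. A quick sanity check (the helper name is mine, just for
illustration):

      #include <stdint.h>
      #include <stdio.h>

      /* Same expression as the reduction line quoted above. */
      static uint8_t reduce_to_byte (uint_least16_t x)
      {
        return (uint8_t) ((x + (x >> 8) + 0x80) >> 8);
      }

      int main (void)
      {
        uint_least16_t samples[] = { 0, 127, 128, 255, 0x7F80, 0xFE01 };
        for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++)
          printf ("%5u -> %3u   (x / 255 rounded: %u)\n",
                  (unsigned) samples[i],
                  reduce_to_byte (samples[i]),
                  (2u * samples[i] + 255) / 510);
        return 0;
      }

For those inputs the two columns agree, so rounding rather than plain
truncation does seem to be the point.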

For the "good" case, and alpha=0, we want to get 0x40 * 255 + 0x40 * 255
= 32640 = 0x7F80, and that's indeed what the intermediate value is with
-O0 (see above). In the faulty case, it is computed as 3f80 instead.
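
Feeding both intermediates through the reduction line quoted above (my
own arithmetic, assuming it is applied to exactly these values) gives

      (0x7F80 + 0x7F + 0x80) >> 8 = 0x807F >> 8 = 0x80
      (0x3F80 + 0x3F + 0x80) >> 8 = 0x403F >> 8 = 0x40

so the faulty intermediate ends up as 0x40 in the final byte instead of
the expected 0x80.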

-- 
https://bugs.launchpad.net/bugs/685352

Title:
  libplymouth2_0.8.2-2ubuntu6 and later give ragged splash and text
  rendering
