On 03/06/2013 08:35 PM, Søren Sandmann wrote:
> Bill Spitzak <[email protected]> writes:
>
>> Søren Sandmann wrote:
>>
>>> No, I'm talking about keeping the internal buffer as 32-bit pixels,
>>> and then dithering the final conversion to 565, as opposed to having
>>> 16-bit internal buffers and dithering gradients down to that depth
>>> before compositing.

>> Yes, that will be even better; sorry, I misunderstood.

>> Dithering involves adding the pattern to the source, such that a
>> pixel of 5.5 is much more likely to end up >= 6 than a pixel of 5.0,
>> and thus on average will be brighter.

> It won't be brighter if the dither signal is 0 on average.

What I meant is that if you took all the pixels whose source value was 5.5, they would average brighter than the ones whose source value was 5.0. With a zero-mean dither in [-0.5, 0.5) and rounding, 5.5 lands on 5 or 6 with equal probability (averaging 5.5), while 5.0 always rounds to 5.
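A quick way to check that claim (my own sketch, not anything from pixman) is to quantize both values many times with a zero-mean dither and compare the mean outputs:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Average result of quantizing 'value' with a uniform zero-mean
 * dither in [-0.5, 0.5) followed by rounding.
 */
static double
mean_quantized (double value, int trials)
{
    long sum = 0;

    for (int i = 0; i < trials; i++)
    {
        double d = (double)rand () / ((double)RAND_MAX + 1.0) - 0.5;

        sum += lround (value + d);
    }

    return (double)sum / trials;
}

int
main (void)
{
    printf ("5.0 -> %f\n", mean_quantized (5.0, 1000000)); /* ~5.0 */
    printf ("5.5 -> %f\n", mean_quantized (5.5, 1000000)); /* ~5.5 */
    return 0;
}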

Most of the work I have done with dithering uses a positive-only signal but applies floor rather than round to turn the result into an integer, so I tend to think of it as making pixels brighter. I think the two are mathematically equivalent: subtracting 0.5 from a positive-only dither in [0, 1) gives a zero-mean one, and floor(x + d) is the same as rounding x + d - 0.5.
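To make that equivalence concrete, here is a small standalone check (again my own sketch, with made-up names); it uses exact binary fractions so floating-point rounding can't interfere with the comparison:

#include <assert.h>
#include <math.h>

/* Round half up, i.e. round(y) == floor(y + 0.5). */
static int
round_half_up (double y)
{
    return (int) floor (y + 0.5);
}

/* Positive-only dither d in [0, 1) with floor. */
static int
quantize_floor (double x, double d)
{
    return (int) floor (x + d);
}

/* The same dither shifted down by 0.5 (zero-mean), with rounding. */
static int
quantize_round (double x, double d)
{
    return round_half_up (x + (d - 0.5));
}

int
main (void)
{
    /* Multiples of 1/64 are exact in double, so the two
     * quantizers must agree bit-for-bit.
     */
    for (int i = 0; i < 8 * 64; i++)
    {
        for (int j = 0; j < 64; j++)
        {
            double x = i / 64.0;
            double d = j / 64.0;

            assert (quantize_floor (x, d) == quantize_round (x, d));
        }
    }
    return 0;
}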

> A good blue-noise pattern is very close in quality to error diffusion.

I agree the difference is imperceptible on 8-bit channels, and that, combined with the fact that a GPU shader can apply a pattern cheaply, pretty much means a pattern will always be used today.
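For reference, here is roughly what dithering the final conversion to 565 with a fixed pattern can look like (my own sketch using a 4x4 Bayer matrix and made-up names, not pixman code; a blue-noise matrix would simply replace the table):

#include <stdint.h>

/* Classic 4x4 Bayer threshold matrix, values 0..15. */
static const uint8_t bayer_4x4[4][4] =
{
    {  0,  8,  2, 10 },
    { 12,  4, 14,  6 },
    {  3, 11,  1,  9 },
    { 15,  7, 13,  5 },
};

/* Truncate an 8-bit channel to 'bits' bits, first adding a
 * positive-only dither scaled to the bits being dropped
 * (the floor-style dither described above).
 */
static uint16_t
dither_channel (uint8_t c, int bits, int x, int y)
{
    int drop = 8 - bits;
    int d = bayer_4x4[y & 3][x & 3] >> (4 - drop);
    int v = c + d;

    if (v > 255)
	v = 255;	/* don't overflow the smaller field */

    return (uint16_t) (v >> drop);
}

/* Convert one x8r8g8b8 pixel at (x, y) to r5g6b5. */
static uint16_t
x8888_to_0565_dithered (uint32_t p, int x, int y)
{
    uint16_t r = dither_channel ((p >> 16) & 0xff, 5, x, y);
    uint16_t g = dither_channel ((p >>  8) & 0xff, 6, x, y);
    uint16_t b = dither_channel ((p >>  0) & 0xff, 5, x, y);

    return (uint16_t) ((r << 11) | (g << 5) | b);
}

The clamp is what keeps a bright pixel plus its dither from wrapping past 31 (or 63 for green); the same thresholds work for both channel widths because they are rescaled to however many bits get dropped.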
