> Why did glsl implement this really as x * (1 - a) + y * a?
> The usual way for lerp would be (y - x) * a + x, i.e. two ops for most
> gpus (sub+mad, or sub+mul+add). But I'm wondering if that sacrifices
> precision
Yes. The (y - x) * a + x form is exact at a = 0 but not guaranteed to return y exactly at a = 1, because fl(y - x) can round; x * (1 - a) + y * a is exact at both endpoints. See http://fgiesen.wordpress.com/2012/08/15/linear-interpolation-past-present-and-future/

-- Aras Pranckevičius
work: http://unity3d.com
home: http://aras-p.info
_______________________________________________
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/mesa-dev