Re: [Mesa-dev] [PATCH 3/9] glsl: Convert mix() to use a new ir_triop_lrp opcode.

2013-02-20 Thread Aras Pranckevicius
> Why did glsl implement this really as x * (1 - a) + y * a? The usual way for lerp would be (y - x) * a + x, i.e. two ops for most GPUs (sub+mad, or sub+mul+add). But I'm wondering if that sacrifices precision.

Yes.
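
A minimal C sketch of the precision point, with illustrative values not taken from the thread: the three-operation form used by mix() returns the endpoints exactly at a == 0 and a == 1, while the cheaper sub+mad form can miss an endpoint entirely once x and y differ greatly in magnitude, because (y - x) rounds.

#include <stdio.h>

/* Three-op form used by GLSL's mix(): x*(1-a) + y*a.
 * Exact at a == 0 (gives x) and a == 1 (gives y). */
static float lerp_3op(float x, float y, float a)
{
    return x * (1.0f - a) + y * a;
}

/* Two-op form (sub + mad): (y - x)*a + x.
 * Cheaper, but (y - x) is rounded, so the result at a == 1 need not be y. */
static float lerp_2op(float x, float y, float a)
{
    return (y - x) * a + x;
}

int main(void)
{
    float x = 1.0e8f;   /* exactly representable as a float */
    float y = 1.0f;

    printf("3-op at a=1: %.9g\n", lerp_3op(x, y, 1.0f));  /* prints 1.0 */
    printf("2-op at a=1: %.9g\n", lerp_2op(x, y, 1.0f));  /* prints 0.0: y - x rounds to -1e8 */
    return 0;
}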

Re: [Mesa-dev] [PATCH 3/9] glsl: Convert mix() to use a new ir_triop_lrp opcode.

2013-02-20 Thread Roland Scheidegger
On 20.02.2013 11:39, Aras Pranckevicius wrote:
>> Why did glsl implement this really as x * (1 - a) + y * a? The usual way for lerp would be (y - x) * a + x, i.e. two ops for most GPUs (sub+mad, or sub+mul+add). But I'm wondering if that sacrifices precision.
> Yes.

[Mesa-dev] [PATCH 3/9] glsl: Convert mix() to use a new ir_triop_lrp opcode.

2013-02-19 Thread Matt Turner
From: Kenneth Graunke kenn...@whitecape.org

Many GPUs have an instruction to do linear interpolation which is more efficient than simply performing the algebra necessary (two multiplies, an add, and a subtract). Pattern matching or peepholing this is more desirable, but can be tricky. By using
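
As a rough illustration of why matching the expanded algebra after the fact is tricky, here is a toy C sketch of a peephole that recognizes x * (1 - a) + y * a over a hypothetical expression tree (these types are invented for the example and are not Mesa's ir_expression classes). Even this only pins down one operand ordering; a real pass would also have to handle commuted multiplies and adds, swizzles, and intervening moves, which is what emitting a dedicated lrp opcode directly from the mix() built-in sidesteps.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Toy expression tree -- hypothetical, for illustration only. */
typedef enum { OP_VAR, OP_CONST, OP_ADD, OP_SUB, OP_MUL } op_t;

typedef struct expr {
    op_t op;
    const char *name;          /* OP_VAR */
    double value;              /* OP_CONST */
    const struct expr *a, *b;  /* operands */
} expr;

static bool is_one(const expr *e)
{
    return e->op == OP_CONST && e->value == 1.0;
}

static bool same_var(const expr *p, const expr *q)
{
    return p->op == OP_VAR && q->op == OP_VAR && strcmp(p->name, q->name) == 0;
}

/* Match  x * (1 - a) + y * a  in exactly this shape.  Commuted forms
 * such as  (1 - a) * x + a * y  would each need their own case. */
static bool match_mix(const expr *e, const expr **x, const expr **y, const expr **a)
{
    if (e->op != OP_ADD || e->a->op != OP_MUL || e->b->op != OP_MUL)
        return false;

    const expr *lhs = e->a;    /* expect x * (1 - a) */
    const expr *rhs = e->b;    /* expect y * a */

    if (lhs->b->op != OP_SUB || !is_one(lhs->b->a))
        return false;
    if (!same_var(lhs->b->b, rhs->b))
        return false;

    *x = lhs->a;
    *y = rhs->a;
    *a = rhs->b;
    return true;
}

int main(void)
{
    /* Build x * (1 - a) + y * a by hand and run the matcher on it. */
    expr vx = { OP_VAR, "x" }, vy = { OP_VAR, "y" }, va = { OP_VAR, "a" };
    expr one = { OP_CONST, NULL, 1.0 };
    expr sub = { OP_SUB, NULL, 0.0, &one, &va };
    expr mul1 = { OP_MUL, NULL, 0.0, &vx, &sub };
    expr mul2 = { OP_MUL, NULL, 0.0, &vy, &va };
    expr add = { OP_ADD, NULL, 0.0, &mul1, &mul2 };

    const expr *x, *y, *a;
    if (match_mix(&add, &x, &y, &a))
        printf("matched: lrp(%s, %s, %s)\n", x->name, y->name, a->name);
    else
        printf("no match\n");
    return 0;
}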

Re: [Mesa-dev] [PATCH 3/9] glsl: Convert mix() to use a new ir_triop_lrp opcode.

2013-02-19 Thread Roland Scheidegger
Not much to say about the code (the theory sounds sane) but I was wondering about the comment. Why did glsl implement this really as x * (1 - a) + y * a? The usual way for lerp would be (y - x) * a + x, i.e. two ops for most GPUs (sub+mad, or sub+mul+add). But I'm wondering if that sacrifices precision.