On Mon, Mar 2, 2015 at 5:14 PM, Micah Fedke <micah.fe...@collabora.co.uk> wrote:
> In my approach, the error discovered by comparing the ~infinite precision
> result to the finite precision result *is* the allowable error range. It's
> like saying "This function diverged from the true result by amount x when
> simulated at finite precision on the CPU, so we need to give it x amount of
> leeway when it is run on the GPU." The finite precision result should be a
> complete representation of how error truly propagated through the
> intermediate stages of the equation, for the given inputs.
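If I'm following, the tolerance generation effectively boils down to something like this (a rough Python/numpy sketch, not the actual generator code -- f32_chain/ref_chain and the x*x + 1 expression are just made-up stand-ins):

    import numpy as np

    def f32_chain(x):
        # Simulate the expression at fp32, rounding every intermediate
        # to float32, the way a strict-fp32 GPU would.
        a = np.float32(x)
        return np.float32(np.float32(a * a) + np.float32(1.0))

    def ref_chain(x):
        # The same expression in double precision, standing in for the
        # "~infinite precision" reference.
        return x * x + 1.0

    def tolerance(x):
        # The allowable error is the observed divergence between the two.
        return abs(float(f32_chain(x)) - ref_chain(x))

    print(tolerance(0.1))  # a tiny number; this becomes the test's leeway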
That's the error as seen by the CPU, though. TBH, I have no clue what the precision guarantees for the various functions are on CPUs, but it seems unlikely they're identical to the ones in ARB_shader_precision. I sort of assume that all the simpler ops are supposed to be within 1 ULP on the CPU, but... who knows. (x87 had a few colorful opcodes which I'm sure introduce tons of error... some taking over 1K cycles on the original 8087, which wasn't exactly a speed demon to begin with.)

It seems like you're computing the error due to fp32 representation limitations rather than the error due to calculation imprecision. I understand that it's annoying to have spent a bunch of time on something that seems so trivial, but having tests fail because the test itself is wrong is _quite_ annoying (and I say this having debugged a few such tests).

I may give the approach I suggested a shot shortly, at least for a few ops, to see what the code looks like. Note that I have about 30 side-projects, and this would join that list... so... don't want to set the wrong expectations :)

FWIW I never actually ran your tests on my nvc0; perhaps I should do that. If they pass, then maybe what you have is good enough in practice, despite not being what the spec wants. Would love to hear opinions from others as well.

  -ilia
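PS: to spell out what I mean by checking in ULPs -- a rough sketch in plain Python (the helper names and example values are illustrative, not the spec's numbers for any particular op):

    import struct

    def _ordered(f):
        # Reinterpret a float32's bits so that adjacent floats map to
        # adjacent integers (sign-magnitude -> monotonic integer scale).
        (u,) = struct.unpack('<I', struct.pack('<f', f))
        return u if u < 0x80000000 else 0x80000000 - u

    def ulp_diff(a, b):
        # Distance between two float32 values, counted in ULPs.
        return abs(_ordered(a) - _ordered(b))

    # The reference would be the correctly-rounded float32 of a
    # high-precision computation; the GPU result then has to land within
    # whatever ULP bound ARB_shader_precision assigns to the op.
    reference = 0.1         # stand-in for the correctly-rounded reference
    gpu_result = 0.10000001 # stand-in for what the GPU returned
    print(ulp_diff(gpu_result, reference))  # -> 1

The point being that the bound comes from the spec's per-op ULP table, measured against the correctly-rounded result, rather than from whatever divergence the CPU simulation happened to exhibit.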