Could do that, but one side or the other *must* account for the view having a nonzero MinLayer.
Colors[] is indexed based on the layer index relative to the underlying
texture; expectedLayer is currently relative to the underlying texture; the
value used in the shader needs to be relative to the view [so expectedLayer -
view's MinLayer].

Did this actually work on nvidia on master?

On Wed, Jan 15, 2014 at 5:36 AM, Jon Ashburn <[email protected]> wrote:
> see inline
>
> On 01/11/2014 05:39 PM, Chris Forbes wrote:
>>
>> We want to sample from the last layer in the view, but we were actually
>> trying to sample from a layer beyond that [by the texture's minimum
>> layer].
>>
>> This might have worked on hardware where the sampler clamps the layer
>> index before sampling [NV?], but that relies on undefined behavior.
>>
>> Signed-off-by: Chris Forbes <[email protected]>
>> ---
>>  tests/spec/arb_texture_view/rendering_levels.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/tests/spec/arb_texture_view/rendering_levels.c
>> b/tests/spec/arb_texture_view/rendering_levels.c
>> index ef29f78..86bf055 100644
>> --- a/tests/spec/arb_texture_view/rendering_levels.c
>> +++ b/tests/spec/arb_texture_view/rendering_levels.c
>> @@ -205,7 +205,7 @@ test_render_layers(void)
>>                 glClear(GL_COLOR_BUFFER_BIT);
>>                 expectedLayer = l + numLayers[l] - 1;
>> -               draw_3d_depth(-1.0, -1.0, 2.0, 2.0, expectedLayer);
>> +               draw_3d_depth(-1.0, -1.0, 2.0, 2.0, numLayers[l] - 1);
>>
>
> with this change then the draw depth is inconsistent with the indexing into
> the Colors array
> how about this:
>     expectedLayer = numLayers[l] - 1;
>
>> expected[0] = Colors[expectedLayer][0] / 255.0;
>> expected[1] = Colors[expectedLayer][1] / 255.0;
>
>
> _______________________________________________
> Piglit mailing list
> [email protected]
> http://lists.freedesktop.org/mailman/listinfo/piglit
