Mesa already keeps track of the GLES precision for variables and stores
it in the ir_variable. When no precision is explicitly specified, it
takes the default precision for the corresponding type. However, when
the variable is a struct or interface block, the precision of each
individual member is attached to the glsl_type instead. The code to
apply the default precision in that case was missing, so this branch
adds it.
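
As an illustration of the idea (a minimal, self-contained sketch with
simplified stand-in types and hypothetical helper names, not Mesa's
actual API), the missing step amounts to walking the struct members and
filling in the scope's default precision wherever no qualifier was
written:

#include <vector>

/* Simplified stand-ins for Mesa's internal types; the real data lives
 * in glsl_type and ir_variable under different names. */
enum glsl_precision {
   PRECISION_NONE,   /* no qualifier written in the source */
   PRECISION_HIGH,
   PRECISION_MEDIUM,
   PRECISION_LOW,
};

enum base_type { TYPE_FLOAT, TYPE_INT, TYPE_SAMPLER };

struct struct_field {
   base_type type;
   glsl_precision precision;   /* as parsed, possibly PRECISION_NONE */
};

/* Default precision in effect for the enclosing scope, as set by
 * "precision mediump int;" style statements or the stage's built-in
 * defaults. Assumes a vertex shader, where float and int default to
 * highp and samplers to lowp. */
static glsl_precision default_precision_for(base_type t)
{
   switch (t) {
   case TYPE_FLOAT: return PRECISION_HIGH;
   case TYPE_INT:   return PRECISION_HIGH;
   default:         return PRECISION_LOW;
   }
}

/* The step this branch adds, in spirit: members with no explicit
 * qualifier pick up the default precision instead of staying
 * PRECISION_NONE. */
static void apply_default_precision(std::vector<struct_field> &fields)
{
   for (struct_field &f : fields) {
      if (f.precision == PRECISION_NONE)
         f.precision = default_precision_for(f.type);
   }
}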

Only the last patch actually makes this change. The rest of the patches
fix regressions in Piglit and the CTS. The underlying problem is that
Mesa was considering types with different precisions to be different
when comparing interstage interfaces (varyings and UBOs), whereas
according to the spec precision should be ignored in that comparison.
Presumably this problem already existed when mismatched precisions were
explicitly specified, but we didn't have any tests covering it. Storing
the default precision makes some tests fail because the default
precision for ints differs between the vertex stage (highp) and the
fragment stage (mediump), so it's easy to accidentally write a test
case that hits the mismatch.
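
To sketch the fix on the comparison side (again with simplified
stand-in types, not Mesa's actual API): when matching varyings or UBO
members across stages, the comparison has to recurse through struct
members while deliberately skipping the precision field. Otherwise an
int varying declared without a qualifier, highp by default in the
vertex shader but mediump in the fragment shader, would spuriously fail
to link:

#include <cstddef>
#include <vector>

enum glsl_precision { PRECISION_NONE, PRECISION_HIGH,
                      PRECISION_MEDIUM, PRECISION_LOW };
enum base_type { TYPE_FLOAT, TYPE_INT, TYPE_STRUCT };

/* A member type as a small tree: structs carry their members. */
struct member_type {
   base_type base;
   glsl_precision precision;          /* must not affect matching */
   std::vector<member_type> members;  /* non-empty for TYPE_STRUCT */
};

/* Interstage match: the shapes must agree member by member, but the
 * spec says precision does not participate, so it is never compared. */
static bool types_match_ignoring_precision(const member_type &a,
                                           const member_type &b)
{
   if (a.base != b.base || a.members.size() != b.members.size())
      return false;
   for (std::size_t i = 0; i < a.members.size(); i++) {
      if (!types_match_ignoring_precision(a.members[i], b.members[i]))
         return false;
   }
   /* a.precision vs. b.precision deliberately not checked. */
   return true;
}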

The tests that regressed are:

dEQP-GLES31.functional.shaders.opaque_type_indexing.* (12 tests)
piglit.spec.ext_transform_feedback.structs_gles3 basic-struct run
piglit.spec.glsl-es-3_00.execution.varying-struct-centroid_gles3
piglit.spec.ext_transform_feedback.structs_gles3 basic-struct get

https://gitlab.freedesktop.org/mesa/mesa/merge_requests/736
