On 03.10.2017 09:54, Olivier Lauffenburger wrote:
On 29/09/17 12:17, Nicolai Hähnle wrote:
On 28.09.2017 20:02, Roland Scheidegger wrote:
Am 28.09.2017 um 18:19 schrieb Jose Fonseca:
On 28/09/17 17:16, Roland Scheidegger wrote:
Am 28.09.2017 um 17:53 schrieb Jose Fonseca:
On 28/09/17 16:29, Roland Scheidegger wrote:
Am 28.09.2017 um 16:12 schrieb Jose Fonseca:
On 27/09/17 15:07, Roland Scheidegger wrote:
Am 27.09.2017 um 09:13 schrieb Olivier Lauffenburger:
The software rasterizer and llvmpipe contain code that enables clipping
as soon as a vertex shader writes to gl_ClipDistance, even if
the corresponding clip planes are disabled.
The GLSL specification states that "Values written into
gl_ClipDistance for planes that are not enabled have no
effect."
The actual behavior is thus non-conformant.

This patch removes the code that automagically enables user
clipping planes even if they are disabled.

Signed-off-by: Olivier Lauffenburger
<o.lauffenbur...@topsolid.com>

FWIW that code is there because you can't disable clip
distances with
d3d10 - if you write them in the shader, they're enabled (d3d9
didn't have clip distances, just the old user clip planes, which of
course have enable bits). They are very similar to cull
distances there (which you can't disable with gl either).
I suppose we cheated there a bit... I might even have realized
it wasn't quite GL conformant when we did this, but it didn't
cause piglit regressions at the time (I guess it's very rare for a
shader to actually declare clip distance outputs but not enable
them).
This was introduced back in June 2013:
https://lists.freedesktop.org/archives/mesa-dev/2013-June/040559.html


So with this removed, I suppose we need to add a workaround in
our code (which is indeed rather unfortunate). But I don't see
another
(reasonable) way to make it gl conformant.
If however there's still no piglit test exercising this, there
should be one.

I'm still not following.  Are we talking about
pipe_rasterizer_state::clip_plane_enable controlling
TGSI_SEMANTIC_CLIPDIST?
Yes.


I thought these have nothing to do with one another.
pipe_rasterizer_state::clip_plane_enable should control
legacy/fixed-function user clip planes.
Nope. Every single driver (except those using draw) assumes this
also enables clip dists - this includes svga, which translates
away in the shader the ones that aren't enabled.



If the OpenGL state tracker needs to translate GLSL shaders that
write gl_ClipDistance but where the clip plane is disabled, the
solution is
simple: just create a shader variant that omits the
TGSI_SEMANTIC_CLIPDIST in question, or writes a constant to it.
Well, it is easier to have an extra enable, and to add the
additional rasterizer dependency in state trackers which don't
have a separate enable, than to have to hack shaders in state
trackers which do have one. Of course, svga does
exactly that, but it's a bit annoying.


Like it was mentioned, it should be extremely rare for apps to
emit gl_ClipDistance yet disable the clip planes, hence the
shader variants should be extremely rare.
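(The variant approach described above could look roughly like this - a hypothetical sketch, not the actual st/mesa code; the key struct and function names are made up for illustration:)

```c
#include <stdint.h>

/* Sketch of the shader-variant approach: key vertex-shader variants on
 * which clip half-spaces are enabled, and have a variant drop (or write
 * a constant to) the TGSI_SEMANTIC_CLIPDIST components for disabled
 * planes.  All names here are illustrative. */
struct vs_variant_key {
    uint8_t clip_plane_enable;  /* one bit per clip half-space */
};

/* Returns nonzero if the clip-distance component for `plane` should be
 * kept when generating the TGSI for this variant. */
static int keep_clipdist(const struct vs_variant_key *key, unsigned plane)
{
    return (key->clip_plane_enable >> plane) & 1;
}
```

Since apps that write gl_ClipDistance with the planes disabled are rare, such variants would almost never be compiled in practice.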

It's not just D3D10/11 that doesn't have flags to disable clip
distances: I can't find them in the Vulkan spec either. And while
I know few/none want to build Vulkan drivers atop the Gallium
interface, I think it's still useful to decide what deserves to
be in the Gallium interface or not.
Of course, newer apis won't have that - clearly the separate
enables were just carried over from legacy GL.

So in short, llvmpipe is fine, please let's fix the state
tracker instead.
Well, we're really the only ones caring about a non-gl gallium
state tracker, so I'm not sure it makes sense to impose the d3d10
semantics there. We just cheated in draw.
And actually, thinking about this, it's not even possible to do
this easily and cleanly in the state tracker: you can pass
those clip dists on to the next shader stage. If that's all
pre-fragment stages (vertex, geometry, tessellation), it's still
doable, just annoying (they must be passed unaltered between all
stages except the last one before rasterization), but it's
impossible to pass them to the fragment stage (unless you'd use a
generic varying) if you rewrite the shader to not include them
(don't ask me if it works correctly in svga - it might have code
for this too).

So I believe there's really no other choice other than following
GL semantics there in gallium.

Ok..

So when implementing D3D10 we just need to ensure clip enable is
always set to all ones?

I don't think that will work - the (gl and presumably gallium)
semantics are that enabled clip planes must be written or the
results are undefined. I suppose we could cheat there too (in draw)
and take the intersection of enabled and written clip dists, but
otherwise we'd have to enable only those clip planes a shader
actually writes...
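(The "intersection" cheat mentioned above amounts to a one-liner - a sketch with illustrative names, not actual draw-module code:)

```c
#include <stdint.h>

/* Only clip half-spaces that are both enabled in the rasterizer state
 * and actually written by the shader take part in clipping.  Planes
 * that are enabled but never written are treated as disabled rather
 * than clipping against undefined values. */
static uint8_t effective_clip_enable(uint8_t clip_plane_enable,
                                     uint8_t clipdist_writemask)
{
    return clip_plane_enable & clipdist_writemask;
}
```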

I wouldn't call this a "cheat", but merely doing something sane,
something that's not merely undefined, something that works for all APIs.
Well, basically it amounts to using an undefined shader output, and
such things tend to give undefined results.


After all, when did gallium stop being an abstraction and become
"whatever the GL api does"?
Yes, we could do it differently.
Actually, there's a comment in p_state.h which even states this is
indeed the case:
"   /**
     * Enable bits for clipping half-spaces.
     * This applies to both user clip planes and shader clip distances.
     * Note that if the bound shader exports any clip distances, these
     * replace all user clip planes, and clip half-spaces enabled here
     * but not written by the shader count as disabled.
     */
    unsigned clip_plane_enable:PIPE_MAX_CLIP_PLANES;
"
However, I was wrong saying this could work. I realized it cannot -
this is because there's no explicit switch between the old user clip
planes and clip distances. If the shader writes (at least one) clip
distance, then those clip distances will be used as additional clip
planes (when the clip enable bits are set). However, if the clip dist
output does not exist, that means the old user clip planes are
enabled instead and the corresponding clip math is performed (using the
user clip planes and either the ordinary position output or the clip
vertex output).
Therefore, always setting all clip enable bits to 1 and not writing
clip dist at all would perform user clip plane clipping... It would be
fixable with another rasterizer bit, I suppose, but I'm not sure it's worth it?
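(The ambiguity described above can be sketched as follows - illustrative pseudocode, not actual draw-module logic: the mode is chosen implicitly by whether the shader writes any clip distance, so a d3d10 state tracker that set the enable mask to all ones would accidentally enable user-clip-plane math for shaders that write no clip distances:)

```c
enum clip_mode { CLIP_USER_PLANES, CLIP_SHADER_DISTANCES };

/* With the single shared enable mask there is no explicit switch
 * between the two clipping modes: the mere presence of a clip-distance
 * output in the bound shader decides which math is performed. */
static enum clip_mode select_clip_mode(int shader_writes_clipdist)
{
    return shader_writes_clipdist ? CLIP_SHADER_DISTANCES
                                  : CLIP_USER_PLANES;
}
```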

If you do care sufficiently, IMO it'd be cleaner to have separate bit
sets for user clip planes and clip distances. It's more bits, but I
think the disentangling of state would be worth it.

That sounds like a good idea to me too, as long as it's not too much hassle.

Jose
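(The separate-bit-sets idea could be sketched as state like this - hypothetical field names, not actual p_state.h contents:)

```c
#include <stdint.h>

/* Disentangle the two concepts into separate enable masks instead of
 * one shared bitfield: one for legacy user clip planes, one for
 * shader-written clip distances. */
struct rast_clip_state {
    uint16_t clip_plane_enable;     /* legacy/fixed-function user clip planes */
    uint16_t clip_distance_enable;  /* shader clip distance outputs */
};
```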

How is it that gl_ClipVertex does work, then? The clip plane enable bits are
used to compute and fill the clip distance attributes after the vertex
shader has been executed.
Seen from afar, there is no real difference between this behavior and
taking the clip plane enable bits into account when clipping by user
planes.

This logic may be why the clip bits were aliased in Gallium in the first place, and perhaps some hardware works like this, adding the clip distance generation code to the vertex shader when legacy gl_ClipVertex is used.

However, at least in Radeon hardware those states are completely orthogonal: there are six user clip planes which have their clip enable bits, and separately there are eight cull/clip distance outputs, each of which can separately be enabled for cull/clip. So the hardware has 6 + 8 + 8 clip/cull enable bits.
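(The enable layout just described could be sketched like this - field names are made up for illustration, not taken from any Radeon header:)

```c
/* Orthogonal enables: 6 user-clip-plane bits plus 8 clip-distance and
 * 8 cull-distance enables, each independently controllable. */
struct radeon_clip_enables {
    unsigned user_clip_planes : 6;
    unsigned clip_distances   : 8;
    unsigned cull_distances   : 8;
};
```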

Cheers,
Nicolai


Or maybe gl_ClipVertex does not work with d3d10? (I only use software
rendering, with or without LLVM). In fact I ran into this gl_ClipDistance
problem when converting vertex shaders from gl_ClipVertex to
gl_ClipDistance in order to stop using a deprecated feature.

Olivier




--
Learn what the world really is like,
but never forget how it ought to be.
_______________________________________________
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/mesa-dev
