On Wed, Mar 18, 2015 at 9:31 PM, Matt Turner <matts...@gmail.com> wrote:
> On Wed, Mar 18, 2015 at 9:27 PM, Jason Ekstrand <ja...@jlekstrand.net> wrote:
>> On Wed, Mar 18, 2015 at 8:59 PM, Connor Abbott <cwabbo...@gmail.com> wrote:
>>> On Wed, Mar 18, 2015 at 11:35 PM, Jason Ekstrand <ja...@jlekstrand.net>
>>> wrote:
>>>>
>>>> On Mar 18, 2015 8:32 PM, "Matt Turner" <matts...@gmail.com> wrote:
>>>>>
>>>>> On Wed, Mar 18, 2015 at 7:39 PM, Jason Ekstrand <ja...@jlekstrand.net>
>>>>> wrote:
>>>>> > On Wed, Mar 18, 2015 at 11:37 AM, Matt Turner <matts...@gmail.com>
>>>>> > wrote:
>>>>> >> Transform this into b2f(and(a, b)).
>>>>> >>
>>>>> >> total instructions in shared programs: 6205448 -> 6204391 (-0.02%)
>>>>> >> instructions in affected programs:     284030 -> 282973 (-0.37%)
>>>>> >> helped:                                903
>>>>> >> HURT:                                  6
>>>>> >> ---
>>>>> >>  src/glsl/nir/nir_opt_algebraic.py | 2 ++
>>>>> >>  1 file changed, 2 insertions(+)
>>>>> >>
>>>>> >> diff --git a/src/glsl/nir/nir_opt_algebraic.py b/src/glsl/nir/nir_opt_algebraic.py
>>>>> >> index ef855aa..f956edf 100644
>>>>> >> --- a/src/glsl/nir/nir_opt_algebraic.py
>>>>> >> +++ b/src/glsl/nir/nir_opt_algebraic.py
>>>>> >> @@ -95,6 +95,8 @@ optimizations = [
>>>>> >>     (('fsat', a), ('fmin', ('fmax', a, 0.0), 1.0), 'options->lower_fsat'),
>>>>> >>     (('fsat', ('fsat', a)), ('fsat', a)),
>>>>> >>     (('fmin', ('fmax', ('fmin', ('fmax', a, 0.0), 1.0), 0.0), 1.0), ('fmin', ('fmax', a, 0.0), 1.0)),
>>>>> >> +   # Emulating booleans
>>>>> >> +   (('fmul', ('b2f', a), ('b2f', b)), ('b2f', ('iand', a, b))),
>>>>> >
>>>>> > Those are only equivalent if the sources are known booleans.
>>>>> > Otherwise, no dice.
>>>>>
>>>>> Well... they're the source of a b2f. Are you saying that's not sufficient?
>>>>
>>>> No, that's not. Fortunately, @bool should solve it for you in all of the
>>>> cases you care about.
>>>
>>> I think Matt has a point here. There's not much point in defining how
>>> b2f should work for things that aren't bools, and I'm fine with
>>> transforms that produce "bad"/undefined results when the input isn't 0
>>> or ~0. We should never get into that situation anyway. This is
>>> different from the issue that compare instructions always have to
>>> produce 0 or ~0, although both do stem from the fact that NIR doesn't
>>> have a bool type.
>>
>> Ok... In that case, we need to define some things better. For
>> instance, bcsel is currently defined as "if nonzero do this else that",
>> which is very different from "if ~0 do this, else if 0 do that, else...
>
> The only place I see any comments about bcsel is in nir_opcodes.py, which
> says:
>
> # Conditional Select
> #
> # A vector conditional select instruction (like ?:, but operating per-
> # component on vectors). There are two versions, one for floating point
> # bools (0.0 vs 1.0) and one for integer bools (0 vs ~0).
>
> It'd be great if we documented the opcodes much in the same way TGSI's
> opcodes are documented. I get the sense that the documentation you're
> referring to is only in your brain. :)
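As a standalone illustration of the equivalence question raised in the patch
review above, here is a small Python sketch (not Mesa code) that models
b2f, iand, and fmul on 32-bit values. It assumes b2f maps any non-zero value
to 1.0, which is exactly the case the thread treats as undefined for
non-canonical inputs, so the divergent result at the end is only one
possible outcome; the TRUE/FALSE constants and the helper names are this
sketch's own.

    # Model NIR's integer booleans as 0 (false) and 0xffffffff (~0, true).
    TRUE = 0xffffffff
    FALSE = 0x00000000

    def b2f(x):
        # Model of b2f: 0 -> 0.0; anything non-zero treated as true here,
        # purely for illustration (NIR leaves non-boolean inputs undefined).
        return 1.0 if x != 0 else 0.0

    def iand(x, y):
        # Model of iand: bitwise AND on 32-bit values.
        return (x & y) & 0xffffffff

    def fmul(x, y):
        return x * y

    # With canonical booleans the two expressions agree on every combination.
    for a in (FALSE, TRUE):
        for b in (FALSE, TRUE):
            assert fmul(b2f(a), b2f(b)) == b2f(iand(a, b))

    # With non-canonical "booleans" they can disagree, which is the caveat
    # above: a = 1 and b = 2 share no set bits, so iand(a, b) == 0.
    a, b = 1, 2
    print(fmul(b2f(a), b2f(b)))   # 1.0
    print(b2f(iand(a, b)))        # 0.0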
Yes, official documentation would be good. However, what I said above is
what constant folding does as well as the i965 backend (we do a mov.nz to
get the source). Also, I thought there were other places where we
explicitly say 0 vs. non-zero. In any case, we need to define these things
better, and I'm 100% ok with defining them as 0 vs. ~0 and saying that if
you pass anything else, it's undefined. That would sure make optimizations
easier.

>> who knows?" I'm ok with either, but we need to be clear. If we stop
>> using the sloppy version, that may mean that when translating from
>> other source languages that may not be strongly typed, we'll have to
>> do an ine with zero to fix it up.
>> --Jason
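To make the bcsel question above concrete, here is a second standalone
Python sketch, again not Mesa code, of the two candidate definitions
discussed in the thread: "pick the first source on any non-zero condition"
versus "~0 picks the first source, 0 picks the second, anything else is
undefined". The function names, the 32-bit ~0 constant, and the exception
used to flag the undefined case are this sketch's own.

    def bcsel_nonzero(cond, x, y):
        # bcsel as "if non-zero pick x, else pick y".
        return x if cond != 0 else y

    def bcsel_strict(cond, x, y):
        # bcsel as "~0 picks x, 0 picks y, anything else is undefined".
        if cond == 0xffffffff:
            return x
        if cond == 0:
            return y
        raise ValueError("undefined: condition is not a canonical boolean")

    print(bcsel_nonzero(0xffffffff, "a", "b"))  # a
    print(bcsel_strict(0xffffffff, "a", "b"))   # a -- same for canonical ~0

    print(bcsel_nonzero(1, "a", "b"))           # a
    # bcsel_strict(1, "a", "b") is undefined under the stricter rule, so a
    # front-end feeding non-canonical values would first need to
    # canonicalize with an ine against zero (cond != 0), as noted above.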