On Wed, Oct 15, 2025 at 6:11 PM Michael Matz <[email protected]> wrote:
>
> Hello,
>
> On Wed, 15 Oct 2025, Alejandro Colomar wrote:
>
> > > ... while this doesn't.  So it's not equivalent, add an outer cast to bool
> > > to make it so, and then you'd get:
> > >
> > > $ ./a.out
> > > 1
> > > 0
> > > 1
> > > 1
> >
> > True.  Which makes it even worse:
> >
> > At least 2 is a valid result for bit_ceil(), just as 0 is also a valid
> > result if we consider wrapping arithmetic, but 1 is not.
>
> Huh?  Arguably, zero being a power of anything at all is the special
> case; 1 is always a power.  Either way, the specific semantics of a
> single badly written (or badly specified) attempt at bit magic doesn't
> really influence what is or isn't orthogonal language design.
> Maxof/Minof make sense for bool and have a single obvious, sensible
> definition, so leaving them out is surprising.  Surprises usually
> backfire in language definitions.
>
> > And as we're seeing, bool belongs in a third class of integer types, in
> > which it is the only member.  It doesn't belong in the class of unsigned
> > integer types (which is BTW something under discussion in the
> > C Committee at the moment).
>
> If saturated types are ever included, bool won't be the only member of
> that class.  And even absent that, the loneliness of bool doesn't imply
> anything specific.
>
> > But within the boolean class of integers, it makes no sense to use a
> > generic operator to get the limits, as there's only one type: bool.
> > Thus, you just hardcode true and false.
>
> I can envision different bool types (of different sizes for instance),
> where everything is naturally defined.  Maxof/Minof would then return the
> correctly typed variants of true and false.  But even that imagination
> isn't necessary to see that Maxof/Minof should "obviously" be defined for
> bool.

If we ever expose vector bools as a GNU extension, then you get a new
"signed bool" with a different _Minof/_Maxof (-1 and 0).

typedef bool sbool __attribute__((signed_bool_precision(1)));

_Minof (sbool) == -1

You need to compile with -fgimple so that the attribute is not ignored.  And
yes, an 8-bit-precision signed bool is then a thing (but its range is still
[-1, 0]).

Richard.

> > And I have yet to see a macro that would make sense for all three
> > classes of integer types (signed, unsigned, and boolean).
>
> Maxof and Minof :-)
>
>
> Ciao,
> Michael.
