On Wed, Dec 17, 2025 at 03:23:09PM +0000, Jonathan Wakely wrote:
> > +    constexpr __rng_uint128
> > +    operator+(const __rng_uint128& __x, const __rng_uint128& __y) noexcept
> > +    {
> > +      return __rng_uint128(__x._M_h + __y._M_h
> > +                          + (__x._M_l + __y._M_l < __x._M_l),
> > +                          __x._M_l + __y._M_l);
> 
> Our existing code uses the overflow checking built-in for addition:
> 
>       friend constexpr type
>       operator+(type __l, uint64_t __c) noexcept
>       {
>         __l._M_hi += __builtin_add_overflow(__l._M_lo, __c, &__l._M_lo);
>         return __l;
>       }
> 
> I would expect that to be equivalent, right?
> 
>       friend constexpr type
>       operator-(type __l, uint64_t __c) noexcept
>       {
>         __l._M_hi -= __builtin_sub_overflow(__l._M_lo, __c, &__l._M_lo);
>         return __l;
>       }

There are even __builtin_addcll and __builtin_subcll, which are meant
for even longer chains (so for say a quad intmax); on the other hand, those
aren't type-generic.  So, in the cobol patch I was using
  unsigned long long c = 0;
  l = __builtin_addcll (l, x.l, 0, &c);
  h = __builtin_addcll (h, x.h, c, &c);
  return *this;
and
  unsigned long long c = 0;
  l = __builtin_subcll (l, x.l, 0, &c);
  h = __builtin_subcll (h, x.h, c, &c);
  return *this;
The reason I've avoided the builtins is that I wasn't
sure how they are handled during constexpr evaluation.
> 
> 
> > +    }
> > +
> > +    constexpr __rng_uint128
> > +    operator<<(const __rng_uint128& __x, size_t __y) noexcept
> > +    {
> > +      return (__y >= 64
> > +             ? __rng_uint128(__x._M_l << (__y - 64), 0)
> > +             : __y
> > +             ? __rng_uint128((__x._M_h << __y) | (__x._M_l >> (64 - __y)),
> > +                             __x._M_l << __y)
> > +             : __x);
> > +    }
> > +
> > +    constexpr __rng_uint128
> > +    operator>>(const __rng_uint128& __x, size_t __y) noexcept
> > +    {
> > +      return (__y >= 64
> > +             ? __rng_uint128(0, __x._M_h >> (__y - 64))
> > +             : __y
> > +             ? __rng_uint128(__x._M_h >> __y,
> > +                             (__x._M_l >> __y) | (__x._M_h << (64 - __y)))
> > +             : __x);
> > +    }
> 
> This is what I have (the [[likely]] attributes are because for philox
> it usually shifts by exactly 64, but maybe for the general case we
> shouldn't assume that):
> 
>       friend constexpr type
>       operator>>(type __l, unsigned __m) noexcept
>       {
>         if (__m >= 64) [[__likely__]]
>           {
>             __l._M_lo = __l._M_hi >> (__m - 64);
>             __l._M_hi = 0;
>             return __l;
>           }
> 
>         uint64_t __hi = __l._M_hi >> __m;
>         uint64_t __lo = (__l._M_lo >> __m) | (__l._M_hi << (64 - __m));

This is wrong for __m == 0.
(__l._M_lo >> __m) is fine, but you don't want to OR anything else
into it when __m is 0: __l._M_hi << (64 - __m) is a shift by 64, which
is undefined behavior.

        Jakub
