On Wed, 17 Dec 2025 at 14:49, Jakub Jelinek <[email protected]> wrote:
>
> Hi!
>
> Actually, the
> FAIL: 26_numerics/random/uniform_real_distribution/operators/64351.cc  
> -std=gnu++20 execution test
> FAIL: 26_numerics/random/uniform_real_distribution/operators/gencanon.cc  
> -std=gnu++20 execution test
> errors are gone when testing with --target_board=unix/-m32/-msse2/-mfpmath=sse
> or when the tests are compiled with -O0, which means that it isn't buggy
> unsigned __int128 emulation in that case, but rather either those tests or
> something in random.tcc not being extended precision clean.

I suspect what's happening is that the rounding in this loop changes:

  while (true)
    {
      _UInt __Ri{1};
      _UInt __sum{__urng() - _Urbg::min()};
      for (int __i = __k - 1; __i > 0; --__i)
        {
          __Ri *= __R;
          __sum += _UInt{__urng() - _Urbg::min()} * __Ri;
        }
      const _RealT __ret = _RealT(__sum / __x) / _RealT(__rd);
      if (__ret < _RealT(1.0))
        return __ret;
    }

That means we loop a different number of times depending on whether
excess precision is used. The tests assume a fixed number of loop
iterations, based on the empirically observed counts, but for i387 FP
arithmetic the numbers are different.
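
To illustrate the kind of divergence I mean (a made-up sketch, not code
from the patch or the tests; the constants are arbitrary), converting an
integer that needs more than 53 bits and then dividing can compare
differently against 1.0 depending on where the rounding happens:

  #include <cstdint>
  #include <cstdio>

  int main()
  {
    // volatile only to stop the compiler folding this at build time
    volatile std::uint64_t num = (1ULL << 60) - 1;
    volatile std::uint64_t den = 1ULL << 60;

    const double ret = double(num) / double(den);
    const bool below = ret < 1.0;
    std::printf("ret < 1.0: %d\n", below);
  }

With SSE arithmetic (or whenever the value is spilled to memory, as tends
to happen at -O0) the numerator rounds up to 2^60 during the conversion,
so ret is exactly 1.0 and the check is false. With -mfpmath=387 and fast
excess precision (the default for -std=gnu++ modes, IIRC) the conversion
and division can stay in an 80-bit register where they are exact, so the
check can be true. The same effect in the loop above changes how many
times we go round it.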

So I think the std::generate_canonical code is OK, but the tests make
invalid assumptions.
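
By invalid assumptions I mean something of this kind (again a made-up
sketch, not the actual test code):

  #include <cstddef>
  #include <limits>
  #include <random>

  // Wrap an engine and count how many times generate_canonical invokes it.
  struct counting_engine
  {
    using result_type = std::mt19937_64::result_type;
    static constexpr result_type min() { return std::mt19937_64::min(); }
    static constexpr result_type max() { return std::mt19937_64::max(); }
    result_type operator()() { ++calls; return eng(); }

    std::mt19937_64 eng;
    std::size_t calls = 0;
  };

  int main()
  {
    counting_engine g;
    (void) std::generate_canonical<double,
             std::numeric_limits<double>::digits>(g);
    // A hard-coded expectation like VERIFY( g.calls == N ) bakes in one
    // particular rounding behaviour; i387 excess precision can change N.
  }

The number of times the rejection loop runs (and so the number of calls
to the engine) is stable for a given evaluation method, but i387 excess
precision legitimately changes it, which is what hard-coded numbers in
the tests don't allow for.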

I'll send a patch that replaces yours (i.e. using the emulated 128-bit
arithmetic) after dinner, and maybe I'll think of a clean way to fix
the numbers in the tests while I'm eating.
