On 1/26/24 11:01 AM, Antero Mejr wrote:
Bradley Lucier<[email protected]>  writes:

On 1/26/24 12:27 AM, John Cowan wrote:
The disadvantage of this idea is that it would fail to test bignums at all.
The current definition

     (define max-int 1000000000000000001)

fails to test any bignums at all on 64-bit Gambit, because 1000000000000000001
is a fixnum.
SRFI 252 says "there are values in the [sample] implementation that may
need to be adjusted". That number was chosen as an arbitrarily large
bignum for Gauche, but implementations should change it.

> Perhaps the SRFI should also say: "the maximum integer value should
> be set to generate bignums for a given implementation and CPU
> architecture".
>
> I could try to cond-expand a bignum for common CPU architectures and
> implementations (Gauche, Chibi, Gambit, Guile 3), but I think the
> implementer should be the one to decide the number.

Well, it's a bit tricky to get it right.

If you set max-int to twice the maximum fixnum, then about half the generated numbers will be bignums (assuming numbers are generated uniformly in the interval), which may not be what is wanted.

If max-int is set to 100 times the maximum fixnum, then about 99% of the generated numbers will be bignums, which is probably not what anyone wants either.
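
For concreteness, here is a minimal sketch of that arithmetic, assuming SRFI 143 for fx-greatest (the helper name bignum-fraction is just illustrative):

    (import (scheme base)
            (srfi 143))  ; fx-greatest

    ;; Fraction of draws from [0, max-int) that land above the
    ;; fixnum range, i.e. that are bignums.
    (define (bignum-fraction max-int)
      (if (<= max-int fx-greatest)
          0
          (/ (- max-int fx-greatest 1) max-int)))

    ;; (inexact (bignum-fraction (* 2 fx-greatest)))    => ~0.5
    ;; (inexact (bignum-fraction (* 100 fx-greatest)))  => ~0.99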

Practically, sometimes one wants fixnums and sometimes one wants bignums. Perhaps max-int should be a parameter adjustable at runtime. Or perhaps there really should be separate fixnum-generator and bignum-generator procedures.
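
As a rough sketch of both ideas, assuming SRFI 27 for random-integer and SRFI 143 for fx-greatest (the names max-int, fixnum-generator, and bignum-generator are illustrative here, not part of SRFI 252):

    (import (scheme base)
            (srfi 27)    ; random-integer
            (srfi 143))  ; fx-greatest

    ;; Idea 1: max-int as a runtime-adjustable parameter.
    (define max-int (make-parameter (* 2 fx-greatest)))

    (define (integer-generator)
      ;; A thunk producing one value per call, SRFI 158 style.
      (lambda () (random-integer (max-int))))

    ;; Idea 2: dedicated generators for each range.
    (define (fixnum-generator)
      ;; Uniform over the non-negative fixnums [0, fx-greatest].
      (lambda () (random-integer (+ fx-greatest 1))))

    (define (bignum-generator)
      ;; Uniform over bignums in [fx-greatest + 1, 100 * fx-greatest].
      (lambda () (+ fx-greatest 1 (random-integer (* 99 fx-greatest)))))

    ;; e.g. bignum-heavy draws for one run:
    ;; (parameterize ((max-int (* 100 fx-greatest)))
    ;;   ((integer-generator)))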

Brad
