On Thu, Aug 1, 2024 at 8:14 AM John R. Hogerhuis [email protected] wrote:

> Seems like a design flaw in BASIC. Since FP numbers cannot exactly
> represent all numbers that can be written in decimal (unless it's some kind
> of BCD format), it really should be keeping those FP numbers stored in a
> decimal or string format.
>
Yes, it seems to me like a misfeature in N82 BASIC, even if it was an
intentional tradeoff for speed, as Alan Cox suggested. I’m not 100% positive
NEC knew about this flaw, given their documentation of 16 “significant
digits” (which implies accuracy, not just precision). If it was for speed,
how much did it help? I’d like to see some benchmarks exercising the
floating-point libraries of M100 BASIC versus N82 BASIC.

I suspect that NEC made this change, not Microsoft. All the Microsoft
BASIC versions I know of keep numeric literals in the program as plain
ASCII: conversion to floating point happens at run time, and tokenization
does not affect the source listing. That’s what the M100 does. If memory
serves, NEC was known for selling a version of Microsoft BASIC which they
had patched in-house (and for pissing Microsoft off, as the two disagreed
about whether that broke the licensing agreement).

> Given this flaw it seems it's really only correct to put exactly
> representable fp literals into a program text or the BASIC program has a
> bug.
>
Agreed.
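To make “exactly representable” concrete, here is a minimal Python sketch (a hypothetical helper, not anything from the tokenizer). It relies on the fact that a decimal literal is a finite binary fraction iff its reduced denominator is a power of two and the odd part of its numerator fits in the mantissa width; 56 bits is my guess for the N82 double format as described later in this message, and this ignores exponent-range limits:

```python
from fractions import Fraction

def exactly_representable(literal: str, mantissa_bits: int = 56) -> bool:
    """True if the decimal literal is exact in a binary FP format
    with the given mantissa width (exponent range ignored)."""
    f = Fraction(literal)          # parses "0.1", "3.25", "1e3", etc.
    d = f.denominator
    if d & (d - 1):                # not a power of two -> repeating binary
        return False
    n = abs(f.numerator)
    while n and n % 2 == 0:        # strip trailing zero bits
        n //= 2
    return n.bit_length() <= mantissa_bits
```

So `exactly_representable('0.5')` is true while `exactly_representable('0.1')` is not, which is exactly the class of literals a tokenizer warning would flag.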

It would be interesting to know what literals are OK. And for your program
> it would be neat to print a warning to stderr if it tokenizes a problem FP
> literal. But that would require figuring out the format.
>
Printing a warning seems doable, as I think I’ve got N82 double-precision
sussed now. Reading in little-endian order, the first byte is the base 2
exponent, with a bias of 128. The next seven bytes are the base 2 mantissa,
except that the most significant bit of the first mantissa byte is the
mantissa’s sign bit (1 means negative).

   Value = (-1)^signbit × mantissa × 2^(e − 128), where e is the stored exponent byte

So, it is kinda similar to IEEE 754’s “binary-64”
<https://en.wikipedia.org/wiki/Double-precision_floating-point_format>
floating-point format, but the implicit 1 in the mantissa comes after the
binary point, not before. That means the exponent will be one greater than
it would have been in binary-64, to shift the bits over. For example, a
mantissa of a single 1 bit followed by all zeroes, with an exponent of 1
(encoded as 0x81), equals 1: 0.1₂ × 2¹ = 1. To get the same value with
binary-64, one would need an exponent of 0: 1.0₂ × 2⁰ = 1.
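Here is the layout above as a Python sketch; this is my reconstruction, not an official spec, and the function name plus the “exponent byte 0 means zero” convention (borrowed from Microsoft’s binary format as I understand it) are assumptions:

```python
def decode_n82_double(raw: bytes) -> float:
    """Decode an 8-byte N82 BASIC double, bytes given most significant
    first: raw[0] is the biased base-2 exponent (bias 128), the top bit
    of raw[1] is the sign, and the remaining 55 bits are the mantissa
    fraction, with an implicit 1 just after the binary point (so the
    full mantissa lies in [0.5, 1))."""
    if raw[0] == 0:
        return 0.0                       # assumed convention for zero
    exponent = raw[0] - 128
    sign = -1.0 if raw[1] & 0x80 else 1.0
    # 55 stored fraction bits; prepend the implicit 1 for 56 bits total.
    frac = int.from_bytes(bytes([raw[1] & 0x7F]) + raw[2:], "big")
    mantissa = ((1 << 55) | frac) / (1 << 56)
    return sign * mantissa * 2.0 ** exponent
```

Feeding it the example above, `decode_n82_double(bytes([0x81, 0, 0, 0, 0, 0, 0, 0]))` gives 1.0, and flipping the sign bit (`0x81, 0x80, 0, …`) gives -1.0.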

—b9
