P Witte wrote:

TonyTebby writes:



The 68xxx series is 16 bit word oriented (even on the 68008 8 bit bus
version) so a whole word exponent makes sense.
But, by using only 12 bits for the exponent (enough for astronomical
calculations) the 4 MSBs can be used as the "floating point number token"
flag without needing an extra word to flag a floating point
number - a 25% space saving for almost no cost.
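A minimal sketch of my reading of that layout (names and the token value are my own, not taken from the actual Qdos source): one 16-bit word whose top 4 bits carry the token flag and whose low 12 bits carry the biased exponent, followed by the 32-bit mantissa.

```python
# Hedged sketch: pack a 4-bit "floating point number token" flag and a
# 12-bit biased exponent into a single 16-bit word, as described above.
# The token value 0xF below is purely hypothetical.
def pack_exponent_word(token: int, biased_exp: int) -> int:
    """Combine a 4-bit token with a 12-bit biased exponent in one word."""
    assert 0 <= token < 16 and 0 <= biased_exp < 4096
    return (token << 12) | biased_exp

word = pack_exponent_word(0xF, 0x801)
print(hex(word))  # 0xf801 - token in the top nibble, exponent below it
```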
At the time, "mass storage" was of the order of 100 kbytes, not the 100
Gbytes of today.


That makes good sense. But (my apologies to all those for whom it is as
obvious as an elephant in the living room) with a 12 bit exponent, how come
the range is only +/-617? I.e., why isn't it +/-$800?


It's a binary exponent: it works as the number of bits the mantissa needs to be shifted left/right to give the value. The +/-$800 is all used. The largest magnitude is two to the power 2048, which works out as ten to the power 2048 * 0.301, i.e. ten to the power 616.5, which, with a little help from the denormal side, actually works out as roughly +/-617.
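The decimal-range arithmetic above can be checked in a couple of lines (this just redoes the logarithm calculation, nothing Qdos-specific):

```python
import math

# A signed 12-bit exponent spans -2048..+2047 ($800 each way), so the
# largest magnitude is about 2^2048.  Convert that to decimal digits:
decimal_digits = 2048 * math.log10(2)
print(round(decimal_digits, 1))  # 616.5 - i.e. roughly 1E+/-617
```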



It allows all operations on the exponent to be done with unsigned
arithmetic, which saves some brain work when implementing the stuff.


Saving brainwork also means saving time and cost.



It gives an easy test for a "magic" value of zero exponent, which is
handy for the "special" cases. If it were not biased, you would have to
use a value like "0x8000", which would become messy.
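A sketch of the biasing idea (assuming an offset of $800 - the exact offset is my assumption): the stored exponent is the true exponent plus 2048, so all exponent arithmetic stays unsigned, and the stored value zero becomes the easy-to-test "magic" case.

```python
# Hedged sketch of a biased 12-bit exponent with an assumed bias of $800.
BIAS = 0x800

def store(true_exp: int) -> int:
    """Bias a signed exponent into an unsigned 12-bit field."""
    assert -2048 <= true_exp <= 2047
    return true_exp + BIAS  # always non-negative: unsigned compares work

print(store(0))      # 2048 (0x800) - a true exponent of zero
print(store(-2048))  # 0 - the stored-zero "magic" value at the bottom end
```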


A floating point value = 0.0 is 6 bytes of zero - not just neat but economical


In practice Qdos seems to interpret $xxx : $00000000 as zero; in other words
there are 4096 different ways of saying nothing. That may be neat but hardly
economical ;)


Economical... out of the possible values that the full 12+32 bits can contain, there are 4096 that effectively all mean zero... i.e. a 0.000000023% waste. :)
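That waste figure is quick to reproduce: 4096 redundant zero encodings out of all 2^(12+32) possible bit patterns.

```python
# 4096 ways of saying nothing, out of every pattern a 12-bit exponent
# plus 32-bit mantissa can hold, expressed as a percentage.
total_patterns = 2 ** (12 + 32)
zero_encodings = 2 ** 12
waste_percent = zero_encodings / total_patterns * 100
print(waste_percent)  # about 2.3e-08 percent, i.e. 0.000000023%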


You do have to factor in all the other wasted representations. I.e. for every normalised mantissa that happens to have "10" as its last two bits (a quarter of them all), you could represent the identical value by shifting it right one bit and having a one higher exponent (all right, not every time, 'cos it might be at max exponent, but that's only one in 4096 times). Similarly, the normalised mantissae that end in "100" (an eighth of them) could be represented in two alternative ways (shifted either one or two bits right). Now we're into a series summation...
1/4 + 2/8 + 3/16 + 4/32 + ... + 30/2^31 (and we're not going to quibble about the details). That series sums to unity. I.e. corresponding to every normalised mantissa, there is precisely one unnormalised mantissa (which boils down to exactly what I said earlier - no concealed leading one bit and you lose one bit of precision).
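The series summation above is easy to confirm numerically; the partial sum out to 30/2^31 is already within a whisker of unity.

```python
# Sum 1/4 + 2/8 + 3/16 + ... + 30/2^31, i.e. n/2^(n+1) for n = 1..30.
s = sum(n / 2 ** (n + 1) for n in range(1, 31))
print(s)  # just under 1 (the infinite series sums to exactly 1)
```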


As a separate item, in response to what TT said about only using a 12 bit exponent, I found that the code would have been significantly quicker/shorter/easier if a 15-bit exponent had been used. A large proportion of tokens are FP, and distinguishing them by purely the sign bit would have been better all round (Huffman coding?). It would have been better in the FP computations as well, instead of messing about with nasty $FFF compares all the time. However, I'll admit that 1E+/-617 is pretty adequate for most purposes.

As another aside... I've always wondered if anyone has seriously considered F.E. format? That is, floating exponent. I.e. a number with a mantissa, an exponent and a wheee (power of two to multiply the exponent by). Addition is fine with integers, multiplication is great with FP (though addition is iffy) and exponentiation is a doddle with FE (with multiplication a slight problem and addition a nightmare!).
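A toy sketch of the F.E. idea as I read it (all names mine, not any actual format): the value is mantissa * 2^(exponent * 2^wheee), so squaring a number just increments the wheee.

```python
# Hypothetical "floating exponent" representation:
#   value = mantissa * 2^(exponent * 2^wheee)
def fe_value(mantissa: float, exponent: int, wheee: int) -> float:
    return mantissa * 2.0 ** (exponent * 2 ** wheee)

print(fe_value(1.5, 3, 2))  # 1.5 * 2^(3*4) = 1.5 * 4096 = 6144.0
```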

--
Lau
http://www.bergbland.info
Get a domain from http://oneandone.co.uk/xml/init?k_id=5165217 and I'll get the commission!



