> On Oct 14, 2016, at 7:22 PM, Leo Broukhis <l...@mailcom.com> wrote:
> 
> I wonder what is the historically first programming environment with native 
> binary floating point which had been proved/demonstrated to handle f.p. 
> binary<->decimal I/O conversions 100% correctly?
> By 100% correctly I mean that all valid binary representations of floating 
> point numbers could be, given a format with enough significant digits, 
> converted to unique text strings, and these strings could be converted back 
> to the corresponding unique binary representations.
> 
> Of course, there is enquire.c which facilitated finding bugs in the 
> Unix/Posix environments, but has anyone cared about this during the mainframe 
> era?

I believe so, yes.  For the design of the floating point feature of the 
Electrologica X8 (early 1960s), the design documents discuss correctness, 
including what the definition of "correct" should be.
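
(As an aside, for a modern IEEE 754 double that round-trip criterion can be 
stated very concretely: print the value with 17 significant digits, parse the 
string back, and require the bits to match.  The little C sketch below is only 
my own illustration of that criterion, not anything taken from the X8 
documents.)

/*
 * Round-trip check for IEEE 754 double: 17 significant decimal digits
 * are enough to make text -> binary conversion recover the exact value.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int round_trips(double x)
{
    char buf[64];
    double y;

    snprintf(buf, sizeof buf, "%.17g", x);   /* enough digits for double */
    y = strtod(buf, NULL);
    return memcmp(&x, &y, sizeof x) == 0;    /* compare the raw bits */
}

int main(void)
{
    double samples[] = { 0.1, 1.0 / 3.0, 1e-300, 2.2250738585072014e-308 };
    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++)
        printf("%.17g %s\n", samples[i],
               round_trips(samples[i]) ? "round-trips" : "FAILS");
    return 0;
}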

There is a very nice and very detailed correctness proof of that floating point 
design, documented in http://repository.tue.nl/674735 .  That paper was written 
long after the fact, but by one of the people originally involved in that 
machine.

Apart from proofs of the correctness of each of the floating point 
instructions, that paper also describes the sqrt library function.   
Interestingly enough, the implementation of that function does not use floating 
point operations.  But the analysis, in appendix B of the paper, clearly shows 
the error terms of the approximation used and why the number of steps used is 
sufficient for correctness of the sqrt implementation.
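
The paper does not present the routine in a modern notation, but the general 
flavor of a square root computed without floating point operations can be shown 
with the classic digit-by-digit (shift-and-subtract) method on an integer 
argument.  The C sketch below is only my own reconstruction of that style, not 
the X8 routine itself; its appeal is that the error analysis reduces to counting 
iterations, one result bit per step.

/*
 * Digit-by-digit (shift-and-subtract) integer square root: purely
 * integer arithmetic, one result bit per iteration.
 */
#include <stdint.h>
#include <stdio.h>

static uint32_t isqrt32(uint32_t n)
{
    uint32_t root = 0;
    uint32_t bit  = 1u << 30;          /* largest power of four in 32 bits */

    while (bit > n)
        bit >>= 2;

    while (bit != 0) {
        if (n >= root + bit) {
            n   -= root + bit;
            root = (root >> 1) + bit;  /* append a 1 bit to the root */
        } else {
            root >>= 1;                /* append a 0 bit */
        }
        bit >>= 2;
    }
    return root;                        /* floor(sqrt(original n)) */
}

int main(void)
{
    for (unsigned i = 0; i < 10; i++) {
        unsigned n = i * i + i;         /* floor(sqrt(n)) should equal i */
        printf("isqrt(%u) = %u\n", n, (unsigned)isqrt32(n));
    }
    return 0;
}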

For a different machine, the CDC 6000 series, I remember reading complaints 
about its bizarre rounding behavior (rounding at 1/3?).  I forget where that 
appeared; possibly in a paper by Prof. Niklaus Wirth of ETH Zürich.

        paul


_______________________________________________
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh
