Re: [Simh] Looking for a milestone

2016-10-18 Thread Paul Koning

> On Oct 17, 2016, at 8:42 PM, Nelson H. F. Beebe  wrote:
> 
> ...
> Some current compilers, such as releases of gcc for the last several
> years, use the GMP and MPFR multiple-precision arithmetic packages to
> supply correct compile-time conversions.  Presumably, the revision
> history of GNU glibc would reveal when similar actions were taken for
> the strtod(), printf(), and scanf() families of the C and C++
> programming languages. I have not personally investigated that point,
> but perhaps other list members have, and can report their findings.

I don't think MPFR etc. apply to glibc.  The reason GCC uses those libraries is 
for cross-compilation: you can't use IEEE float when the target uses some other 
float format.  The floating point support (in file real.c primarily) is table 
driven and has entries to handle all the formats that supported GCC targets 
use.  For example, I did some work there on the DEC float support.  Among other 
things, GCC can handle oddball cases such as the IBM 360 architecture with its 
power-of-16 rather than power-of-2 encoding.
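That power-of-16 format is simple to decode by hand; a minimal sketch (field widths per the S/360 single-precision layout; the helper name is invented for illustration):

```python
def ibm360_single_to_float(word):
    """Decode a 32-bit IBM System/360 hexadecimal float: 1 sign bit,
    a 7-bit excess-64 exponent of 16, and a 24-bit fraction in [0, 1)."""
    sign = -1.0 if word & 0x80000000 else 1.0
    exp = ((word >> 24) & 0x7F) - 64           # power of 16, not of 2
    frac = (word & 0x00FFFFFF) / float(1 << 24)
    return sign * frac * 16.0 ** exp
```

For example, 0x41100000 decodes to 1.0: exponent 16**1 times fraction 1/16.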

paul


___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh

Re: [Simh] Looking for a milestone

2016-10-17 Thread Leo Broukhis
On Mon, 17 Oct 2016 15:29:10 -0600, Kevin Handy 
wrote:

How close are the simh emulators to the real hardware's floating point? How
> exact is the emulation of FPUs?
> Does simh emulate the real hardware close enough that you can use it to
> analyze the original hardware floating point processors? (For those that
> actually had FPUs instead of doing it in software).
> Or does it do it using "modern" methods (IEEE style FPUs) that could
> calculate different results than the original hardware did?
>

I can tell that the engineering ALU test passes in the BESM-6 emulator.

Addition and multiplication produce 80 bits of mantissa and had to be
emulated with integers; division produces 40 bits, but using IEEE double and
truncating or rounding the result causes the test to fail; I had to
implement the non-restoring division algorithm exactly as described in the
docs to make it work.
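For concreteness, the skeleton of a non-restoring integer division looks like the following (a generic textbook sketch over integer mantissas, not the BESM-6 microprogram; the function name is invented):

```python
def nonrestoring_divide(n, d, bits):
    """Illustrative non-restoring division of positive integer
    mantissas, yielding floor(n * 2**bits / d).  Quotient digits are
    +1/-1; a final correction folds a negative remainder back in."""
    assert 0 < n < d            # normalized: the quotient is a fraction
    r, q = n, 0
    for _ in range(bits):
        if r >= 0:
            r = 2 * r - d       # digit +1: subtract the divisor
            q = (q << 1) + 1
        else:
            r = 2 * r + d       # digit -1: add the divisor back
            q = (q << 1) - 1
    if r < 0:                   # correction step: make remainder >= 0
        q -= 1
    return q
```

With bits=40 this produces a truncated 40-bit quotient using only integer arithmetic; e.g. nonrestoring_divide(1, 3, 4) == 5 == floor(16/3).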

> It's probably not a big deal for most users, but if the simh FPU hardware
> might operate any differently than the real hardware, it should at least be
> documented somewhere.
>

If other machines emulated by SIMH use IEEE for speed, it would be
interesting to run their engineering tests, if available, to see if there
are any failures.

Leo

Re: [Simh] Looking for a milestone

2016-10-17 Thread Nelson H. F. Beebe
The discussions on this thread began with a question about the
accuracy of binary->decimal->binary conversions.  The key original
references are recorded in

http://www.math.utah.edu/pub/tex/bib/fparith.bib
http://www.math.utah.edu/pub/tex/bib/fparith.html

in entries Goldberg:1967:BED, Matula:1968:BCT, and Matula:1968:C.
Those papers showed how many digits were needed for correct round-trip
conversions, but did not exhibit code to do so.

Some later papers had real source code, including Steele:1990:HPF,
Clinger:1990:HRF, Clinger:2004:RHR, Burger:1996:PFP, Knuth:1990:SPW,
and Abbott:1999:ASS.  The 20-year retrospectives in Clinger:2004:RHR
and Steele:2004:RHP sum up that earlier work, and may be the best
starting point to understand the problem.

It is decidedly nontrivial: the Abbott paper in the section
``Difficult numbers'' starting on page 739 discusses hard cases [where
the exact result in the output base is almost exactly halfway between
two machine numbers], and on page 740, they write ``The decimal size
is not unbounded, because there is a natural bound derived from the
exponent range, as was mentioned earlier. This bound is 126 digits for
single, 752 for double, and 11503 for extended precision.''
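The digit counts Matula derived are easy to demonstrate for IEEE doubles, where ceil(53*log10(2)) + 1 = 17 significant digits always suffice; a quick sketch (the helper name is made up, formatting is printf %.*e style):

```python
def roundtrip_ok(x, digits):
    """Format x with `digits` significant decimal digits and parse it
    back; True iff the original double is recovered exactly."""
    return float(f"{x:.{digits - 1}e}") == x

x = 0.1 + 0.2                    # 0.30000000000000004, not exactly 0.3
assert roundtrip_ok(x, 17)       # 17 digits always round-trip a double
assert not roundtrip_ok(x, 15)   # 15 digits collapse it to 0.3
```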

Entry Knuth:1990:SPW has the provocative title ``A Simple Program
Whose Proof Isn't'': it examines the conversions between fixed-binary
and decimal needed in the TeX typesetting system, a much simpler
problem whose proof eluded Knuth for several years until this paper.
A companion paper Gries:1990:BDO supplies an alternative proof of
Knuth's algorithm.

As to the first actual system to provide correct round-trip
conversions, the Abbott paper on the IBM mainframe (S/360 to zSeries
machines) describes the pressure to get it right the first time,
because of the longevity of that architecture, and the high cost of
repairing hardware implementations.

The 1990 Steele and Clinger references above supply software
implementations in Common Lisp exploiting the multiple-precision
arithmetic supported in that language.

Some current compilers, such as releases of gcc for the last several
years, use the GMP and MPFR multiple-precision arithmetic packages to
supply correct compile-time conversions.  Presumably, the revision
history of GNU glibc would reveal when similar actions were taken for
the strtod(), printf(), and scanf() families of the C and C++
programming languages. I have not personally investigated that point,
but perhaps other list members have, and can report their findings.

From scans of executables of assorted gcc versions, it appears that
GMP and MPFR came into use in gcc-family compilers about mid-2007.
The oldest gcc snapshot (of hundreds that I have installed) that
references both those libraries is gcc-4.3-20070720.  However, the
ChangeLog file in that release mentions MPFR as early as 18-Nov-2006,
and an entry dated 11-Jan-2007 says "Add gmp and mpfr".

David M. Gay (then at AT&T Bell Labs, later renamed Lucent Technologies)
released an accurate implementation of strtod() marked "Copyright (C)
1998-2001 by Lucent Technologies".  The oldest filedate in my personal
archives of versions of that software is 22-Jun-1998, with changes up
to 29-Nov-2007.
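CPython's float() conversion descends from a Gay-style correctly rounded algorithm, so the round-to-nearest property can be spot-checked directly; a sketch (assumes Python 3.9+ for math.nextafter):

```python
import math
from fractions import Fraction

x = float("0.1")                      # decimal -> binary conversion
exact = Fraction(1, 10)               # the exact decimal value
below = math.nextafter(x, -math.inf)  # adjacent representable doubles
above = math.nextafter(x, math.inf)

# Correct rounding: neither neighbor is closer to 1/10 than the result.
assert abs(Fraction(x) - exact) < abs(Fraction(below) - exact)
assert abs(Fraction(x) - exact) < abs(Fraction(above) - exact)
```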

-------------------------------------------------------------------------------
- Nelson H. F. Beebe                    Tel: +1 801 581 5254                  -
- University of Utah                    FAX: +1 801 581 4148                  -
- Department of Mathematics, 110 LCB    Internet e-mail: be...@math.utah.edu  -
- 155 S 1400 E RM 233                   be...@acm.org  be...@computer.org     -
- Salt Lake City, UT 84112-0090, USA    URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------

Re: [Simh] Looking for a milestone

2016-10-17 Thread Paul Koning

> On Oct 17, 2016, at 5:29 PM, Kevin Handy  wrote:
> 
> How close are the simh emulators to the real hardware's floating point? How 
> exact is the emulation of FPUs?
> Does simh emulate the real hardware close enough that you can use it to 
> analyze the original hardware floating point processors? (For those that 
> actually had FPUs instead of doing it in software).
> Or does it do it using "modern" methods (IEEE style FPUs) that could 
> calculate different results than the original hardware did?

I would hope not.  It's up to the individual emulator.  But a lot of machines 
have float designs vastly different from IEEE, possibly including more 
significant bits than commonly found, or much larger exponent ranges.

> It's probably not a big deal for most users, but if the simh FPU hardware
> might operate any differently than the real hardware, it should at least be
> documented somewhere.

Yes, it would be good for the machine-specific document in the SIMH docs set to 
answer that question.

paul


Re: [Simh] Looking for a milestone

2016-10-17 Thread Kevin Handy
It varies from 16 bits to 256 bits.

Go to the wikipedia.org article on "IEEE floating point" for an overview.


On Mon, Oct 17, 2016 at 3:30 PM, Ray Jewhurst 
wrote:

> Just out of curiosity how many bits does the IEEE standard require for
> floating point?
>
> On Oct 17, 2016 3:51 PM, "Leo Broukhis"  wrote:
>
>> Dijkstra is above reproach; I try to compare the averages.
>>
>> Having eps^2 = eps is cute, but, given that the idea didn't spread to
>> other pre-IEEE f.p. implementations nor to IEEE (it is possible to
>> iteratively square a number x with 0 < abs(x) < 1 down to 0, given enough
>> iterations, denormals or not), it appears that the Electrologica floating
>> point turned out to be impractical.
>>
>>
>>
>> On Mon, Oct 17, 2016 at 11:35 AM, Paul Koning 
>> wrote:
>>
>>>
>>> > On Oct 17, 2016, at 2:26 PM, Leo Broukhis  wrote:
>>> >
>>> > > I think that the same answer applies to your narrower question,
>>> though I didn't see it mentioned specifically in the documents I've read.
>>> >
>>> > That's somewhat comforting; I'd hate to think that the BESM-6
>>> programmers were substantially sloppier than their Western colleagues. :)
>>>
>>> As you probably know, Dijkstra was a whole lot more disciplined than the
>>> vast majority of his colleagues.
>>>
>>> > > For example, the treatment of underflow and very small numbers in
>>> Electrologica was novel at the time; Knuth specifically refers to it in a
>>> > > footnote of Volume 2.  The EL-X8 would never turn a non-zero result
>>> into zero, for example.
>>> >
>>> > For most but not all values of "never", I presume. What was the result
>>> of squaring the number with the least representable absolute value?
>>>
>>> The least representable positive value.  See the paper by F. E. J.
>>> Kruseman Aretz that I mentioned.
>>>
>>> >
>>> > > I think IEEE ended up doing the same thing, but  that was almost 20
>>> years later.
>>> >
>>> > Are you thinking about denormals?
>>>
>>> I think so, but I'll be the first to admit that I don't really know
>>> floating point.
>>>
>>> paul
>>>
>>>
>>
>

Re: [Simh] Looking for a milestone

2016-10-17 Thread Ray Jewhurst
Just out of curiosity how many bits does the IEEE standard require for
floating point?

On Oct 17, 2016 3:51 PM, "Leo Broukhis"  wrote:

> Dijkstra is above reproach; I try to compare the averages.
>
> Having eps^2 = eps is cute, but, given that the idea didn't spread to
> other pre-IEEE f.p. implementations nor to IEEE (it is possible to
> iteratively square a number x with 0 < abs(x) < 1 down to 0, given enough
> iterations, denormals or not), it appears that the Electrologica floating
> point turned out to be impractical.
>
>
>
> On Mon, Oct 17, 2016 at 11:35 AM, Paul Koning 
> wrote:
>
>>
>> > On Oct 17, 2016, at 2:26 PM, Leo Broukhis  wrote:
>> >
>> > > I think that the same answer applies to your narrower question,
>> though I didn't see it mentioned specifically in the documents I've read.
>> >
>> > That's somewhat comforting; I'd hate to think that the BESM-6
>> programmers were substantially sloppier than their Western colleagues. :)
>>
>> As you probably know, Dijkstra was a whole lot more disciplined than the
>> vast majority of his colleagues.
>>
>> > > For example, the treatment of underflow and very small numbers in
>> Electrologica was novel at the time; Knuth specifically refers to it in a
>> > > footnote of Volume 2.  The EL-X8 would never turn a non-zero result
>> into zero, for example.
>> >
>> > For most but not all values of "never", I presume. What was the result
>> of squaring the number with the least representable absolute value?
>>
>> The least representable positive value.  See the paper by F. E. J.
>> Kruseman Aretz that I mentioned.
>>
>> >
>> > > I think IEEE ended up doing the same thing, but  that was almost 20
>> years later.
>> >
>> > Are you thinking about denormals?
>>
>> I think so, but I'll be the first to admit that I don't really know
>> floating point.
>>
>> paul
>>
>>
>

Re: [Simh] Looking for a milestone

2016-10-17 Thread Kevin Handy
How close are the simh emulators to the real hardware's floating point? How
exact is the emulation of FPUs?
Does simh emulate the real hardware close enough that you can use it to
analyze the original hardware floating point processors? (For those that
actually had FPUs instead of doing it in software).
Or does it do it using "modern" methods (IEEE style FPUs) that could
calculate different results than the original hardware did?

It's probably not a big deal for most users, but if the simh FPU hardware
might operate any differently than the real hardware, it should at least be
documented somewhere.

On Mon, Oct 17, 2016 at 1:50 PM, Leo Broukhis  wrote:

> Dijkstra is above reproach; I try to compare the averages.
>
> Having eps^2 = eps is cute, but, given that the idea didn't spread to
> other pre-IEEE f.p. implementations nor to IEEE (it is possible to
> iteratively square a number x with 0 < abs(x) < 1 down to 0, given enough
> iterations, denormals or not), it appears that the Electrologica floating
> point turned out to be impractical.
>
>
>
> On Mon, Oct 17, 2016 at 11:35 AM, Paul Koning 
> wrote:
>
>>
>> > On Oct 17, 2016, at 2:26 PM, Leo Broukhis  wrote:
>> >
>> > > I think that the same answer applies to your narrower question,
>> though I didn't see it mentioned specifically in the documents I've read.
>> >
>> > That's somewhat comforting; I'd hate to think that the BESM-6
>> programmers were substantially sloppier than their Western colleagues. :)
>>
>> As you probably know, Dijkstra was a whole lot more disciplined than the
>> vast majority of his colleagues.
>>
>> > > For example, the treatment of underflow and very small numbers in
>> Electrologica was novel at the time; Knuth specifically refers to it in a
>> > > footnote of Volume 2.  The EL-X8 would never turn a non-zero result
>> into zero, for example.
>> >
>> > For most but not all values of "never", I presume. What was the result
>> of squaring the number with the least representable absolute value?
>>
>> The least representable positive value.  See the paper by F. E. J.
>> Kruseman Aretz that I mentioned.
>>
>> >
>> > > I think IEEE ended up doing the same thing, but  that was almost 20
>> years later.
>> >
>> > Are you thinking about denormals?
>>
>> I think so, but I'll be the first to admit that I don't really know
>> floating point.
>>
>> paul
>>
>>
>

Re: [Simh] Looking for a milestone

2016-10-17 Thread Leo Broukhis
Dijkstra is above reproach; I try to compare the averages.

Having eps^2 = eps is cute, but, given that the idea spread neither to other
pre-IEEE f.p. implementations nor to IEEE (it is possible to iteratively
square a number x with 0 < abs(x) < 1 down to 0, given enough iterations,
denormals or not), it appears that the Electrologica floating point turned
out to be impractical.
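Under IEEE doubles the squaring-to-zero behavior is easy to observe; denormals only postpone the underflow:

```python
x, steps = 0.5, 0
while x != 0.0:
    x *= x                  # after step n, x == 0.5 ** (2 ** n)
    steps += 1
# gradual underflow walks down through the denormals, then hits zero;
# for an IEEE double this takes 11 squarings
assert x == 0.0 and steps == 11
```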



On Mon, Oct 17, 2016 at 11:35 AM, Paul Koning 
wrote:

>
> > On Oct 17, 2016, at 2:26 PM, Leo Broukhis  wrote:
> >
> > > I think that the same answer applies to your narrower question, though
> I didn't see it mentioned specifically in the documents I've read.
> >
> > That's somewhat comforting; I'd hate to think that the BESM-6
> programmers were substantially sloppier than their Western colleagues. :)
>
> As you probably know, Dijkstra was a whole lot more disciplined than the
> vast majority of his colleagues.
>
> > > For example, the treatment of underflow and very small numbers in
> Electrologica was novel at the time; Knuth specifically refers to it in a
> > > footnote of Volume 2.  The EL-X8 would never turn a non-zero result
> into zero, for example.
> >
> > For most but not all values of "never", I presume. What was the result
> of squaring the number with the least representable absolute value?
>
> The least representable positive value.  See the paper by F. E. J.
> Kruseman Aretz that I mentioned.
>
> >
> > > I think IEEE ended up doing the same thing, but  that was almost 20
> years later.
> >
> > Are you thinking about denormals?
>
> I think so, but I'll be the first to admit that I don't really know
> floating point.
>
> paul
>
>

Re: [Simh] Looking for a milestone

2016-10-17 Thread Timothe Litt

On 17-Oct-16 13:42, Clem Cole wrote:
>
> On Mon, Oct 17, 2016 at 12:35 PM, Paul Koning  > wrote:
>
> That doesn't excuse sloppy work. 
>
>
> Agreed - and you will rarely see me defend Seymour.   His systems
> were fast, but they were not programmer friendly in any way IMO.  Heck,
> the man never had an assembler - he did not think it was needed; he
> programmed in octal.
>
> As I said, close enough for government work seemed to be his mantra;
> and as long as the US National Labs kept buying from him, clearly he
> was getting feedback that was an ok way to design.
>
> Then again our own old employer, DEC took a long time to get around to
> using an IEEE FP scheme. While DEC was /_much better _/at arithmetic
> than CDC/Cray ever was, it was not until the PMAX and Alpha that DEC
> started to support IEEE.  My old friend and colleague Bob Hanek (whom
> I used to jokingly call Mr. Floating Point) once said to me at lunch
> he thought trying to get correct results from the Vax FP unit made him
> lose his hair.  Note that Bob was hardly a great fan of IEEE either;
> he can regale you with stories of issues with it also.  As an OS
> guy, I would smile and just say: I'll thankfully leave that to you guys
> in the compiler and runtime.
>
I think Bob was ~1990, and built on much earlier work.  He took over
DXML from my group after Aquarius.

Ms. Floating Point was Mary Payne.  Mary was into accuracy long before
IEEE - she was DEC's rep to that committee.  Before that, she worked on
the PDP-10 math libraries, (un)popularizing "good to the last bit" among
software, microcode and hardware folk alike.  I don't remember the exact
date, ~mid 70s.  She was one of the earliest odd-discipline people to be
promoted to consultant engineer.

She was the architect of the (unfortunate) POLY instruction.

With Dileep Bhandarkar:
VAX floating point: a solid foundation for numerical computation
http://dl.acm.org/citation.cfm?id=641849






Re: [Simh] Looking for a milestone

2016-10-17 Thread Paul Koning

> On Oct 17, 2016, at 1:42 PM, Clem Cole  wrote:
> 
> 
> On Mon, Oct 17, 2016 at 12:35 PM, Paul Koning  wrote:
> That doesn't excuse sloppy work.
> 
> Agreed - and you will rarely see me defend Seymour.   His systems were fast, 
> but they were not programmer friendly in any way IMO.  Heck, the man never had 
> an assembler - he did not think it was needed; he programmed in octal.
> 
> As I said, close enough for government work seemed to be his mantra; and as 
> long as the US National Labs kept buying from him, clearly he was getting 
> feedback that was an ok way to design.
> 
> Then again our own old employer, DEC took a long time to get around to using 
> an IEEE FP scheme. While DEC was much better at arithmetic than CDC/Cray ever 
> was, it was not until the PMAX and Alpha that DEC started to support IEEE.  
> My old friend and colleague Bob Hanek (whom I used to jokingly call Mr. 
> Floating Point) once said to me at lunch he thought trying to get correct 
> results from the Vax FP unit made him lose his hair.  Note that Bob was 
> hardly a great fan of IEEE either; he can regale you with stories of issues 
> with it also.  As an OS guy, I would smile and just say: I'll thankfully 
> leave that to you guys in the compiler and runtime.

While IEEE is a good design, it clearly is not the only possible good design.  
I remember that DEC had a math algorithms team that specifically focused on 
correct (last bit accurate) algorithms for all the various math functions.  I 
forgot the name of the leader of that group; first name Mary.  It may be that 
they couldn't make that work with pre-IEEE DEC float, but I don't know.  I 
tended to avoid floating point.  Heck, I rarely used signed integers...

I still remember chatting with a former classmate who at DEC one late night was 
busy testing packed decimal exponentiation algorithms.  I asked "why the #$* do 
you need those?"  He replied: for compound interest in COBOL programs.  Oh 
yes... Duh...

paul


Re: [Simh] Looking for a milestone

2016-10-17 Thread Paul Koning
I think that the same answer applies to your narrower question, though I didn't 
see it mentioned specifically in the documents I've read.  For example, the 
treatment of underflow and very small numbers in Electrologica was novel at the 
time; Knuth specifically refers to it in a footnote of Volume 2.  The EL-X8 
would never turn a non-zero result into zero, for example.  I think IEEE ended 
up doing the same thing, but that was almost 20 years later.

paul

> On Oct 17, 2016, at 1:41 PM, Leo Broukhis  wrote:
> 
> Paul,
> 
> My question is more narrow. It focuses specifically on the binary<->decimal 
> transformation. It appears that while the f.p. instructions and the 
> elementary functions were proved correct to the appropriate precision, there 
> was not much care taken to ensure, for example, that "FLOAT_MAX", "FLOAT_MIN" 
> and "FLOAT_EPS" can be converted in both directions without exceptions and 
> loss of precision, etc. It appears to me that people started caring about 
> these things in the late 80s at the earliest. I'd like to be wrong.
> 
> Thanks,
> Leo
> 
> On Mon, Oct 17, 2016 at 8:55 AM, Paul Koning wrote:
> 
>> On Oct 14, 2016, at 7:22 PM, Leo Broukhis wrote:
>> 
>> I wonder what is the historically first programming environment with native 
>> binary floating point which had been proved/demonstrated to handle f.p. 
>> binary<->decimal I/O conversions 100% correctly?
>> By 100% correctly I mean that all valid binary representations of floating 
>> point numbers could be, given a format with enough significant digits, 
>> converted to unique text strings, and these strings could be converted back 
>> to the corresponding unique binary representations.
>> 
>> Of course, there is enquire.c which facilitated finding bugs in the 
>> Unix/Posix environments, but has anyone cared about this during the 
>> mainframe era?
> 
> I believe so, yes.  For the design of the floating point feature of the 
> Electrologica X8 (early 1960s) the design documents discuss correctness, 
> including what the definition of "correct" should be.
> 
> There is a very nice and very detailed correctness proof of that floating 
> point design, documented in http://repository.tue.nl/674735.  That paper was
> written long after the
> fact, but by one of the people originally involved in that machine.
> 
> Apart from proofs of the correctness of each of the floating point 
> instructions, that paper also describes the sqrt library function.   
> Interestingly enough, the implementation of that function does not use 
> floating point operations.  But the analysis, in appendix B of the paper, 
> clearly shows the error terms of the approximation used and why the number of 
> steps used is sufficient for correctness of the sqrt implementation.
> 
> For a different machine, the CDC 6000 series, I remember reading complaints 
> about its bizarre rounding behavior (rounding at 1/3 ?).  I forgot where that 
> appeared; possibly a paper by prof. Niklaus Wirth of ETH Zürich.
> 
>   paul
> 
> 
> 


Re: [Simh] Looking for a milestone

2016-10-17 Thread Clem Cole
On Mon, Oct 17, 2016 at 12:35 PM, Paul Koning 
wrote:

> That doesn't excuse sloppy work.


Agreed - and you will rarely see me defend Seymour.   His systems were
fast, but they were not programmer friendly in any way IMO.  Heck, the man
never had an assembler - he did not think it was needed; he programmed
in octal.

As I said, close enough for government work seemed to be his mantra; and as
long as the US National Labs kept buying from him, clearly he was getting
feedback that was an ok way to design.

Then again, our own old employer DEC took a long time to get around to
using an IEEE FP scheme. While DEC was *much better* at arithmetic than
CDC/Cray ever was, it was not until the PMAX and Alpha that DEC started to
support IEEE.  My old friend and colleague Bob Hanek (whom I used to
jokingly call Mr. Floating Point) once said to me at lunch he thought
trying to get correct results from the Vax FP unit made him lose his hair.
Note that Bob was hardly a great fan of IEEE either; he can regale you with
stories of issues with it also.  As an OS guy, I would smile and just say:
I'll thankfully leave that to you guys in the compiler and runtime.

Re: [Simh] Looking for a milestone

2016-10-17 Thread Leo Broukhis
Paul,

My question is more narrow. It focuses specifically on the binary<->decimal
transformation. It appears that while the f.p. instructions and the
elementary functions were proved correct to the appropriate precision,
there was not much care taken to ensure, for example, that "FLOAT_MAX",
"FLOAT_MIN" and "FLOAT_EPS" can be converted in both directions without
exceptions and loss of precision, etc. It appears to me that people started
caring about these things in the late 80s at the earliest. I'd like to be
wrong.

Thanks,
Leo

On Mon, Oct 17, 2016 at 8:55 AM, Paul Koning  wrote:

>
> On Oct 14, 2016, at 7:22 PM, Leo Broukhis  wrote:
>
> I wonder what is the historically first programming environment with
> native binary floating point which had been proved/demonstrated to handle
> f.p. binary<->decimal I/O conversions 100% correctly?
> By 100% correctly I mean that all valid binary representations of floating
> point numbers could be, given a format with enough significant digits,
> converted to unique text strings, and these strings could be converted back
> to the corresponding unique binary representations.
>
> Of course, there is enquire.c which facilitated finding bugs in the
> Unix/Posix environments, but has anyone cared about this during the
> mainframe era?
>
>
> I believe so, yes.  For the design of the floating point feature of the
> Electrologica X8 (early 1960s) the design documents discuss correctness,
> including what the definition of "correct" should be.
>
> There is a very nice and very detailed correctness proof of that floating
> point design, documented in http://repository.tue.nl/674735 .  That paper
> was written long after the fact, but by one of the people originally
> involved in that machine.
>
> Apart from proofs of the correctness of each of the floating point
> instructions, that paper also describes the sqrt library function.
> Interestingly enough, the implementation of that function does not use
> floating point operations.  But the analysis, in appendix B of the paper,
> clearly shows the error terms of the approximation used and why the number
> of steps used is sufficient for correctness of the sqrt implementation.
>
> For a different machine, the CDC 6000 series, I remember reading
> complaints about its bizarre rounding behavior (rounding at 1/3 ?).  I
> forgot where that appeared; possibly a paper by prof. Niklaus Wirth of ETH
> Zürich.
>
> paul
>
>
>

Re: [Simh] Looking for a milestone

2016-10-17 Thread Paul Koning

> On Oct 17, 2016, at 12:30 PM, Clem Cole  wrote:
> 
> "Correct" is difficult check out:  http://www.netlib.org/paranoia/paranoia.c 
> 
> This set of programs lead to the IEEE FP format work.   And Paul is 100% 
> correct, Seymour was never worried about correctness, just being fast and 
> "close enough for government work.".  He used reciprocal approximation, not 
> full dividers for the Cray and CDC boxes because they took too long, 
> ones-complement for binary etc..; basically set the dial to be fast, not 
> accurate.   Remember he came from a time when a the slide-rule and 3 
> significant digits was king.So much, if not all, of the input data was 
> not that precise.

That doesn't excuse sloppy work.  And just because you use a reciprocal 
operation doesn't mean it has to be incorrect; you just have to do the 
analysis.  Dijkstra and friends did, Cray did not; they were working around the 
same time but with very different mindsets about good design.  And in 
particular, correct doesn't have to mean slow; Dijkstra's SQRT function is 
quite short and very efficient (it only uses fixed point operations, and not 
many of them), yet it is correct.

paul


Re: [Simh] Looking for a milestone

2016-10-17 Thread Paul Koning

> On Oct 14, 2016, at 7:22 PM, Leo Broukhis  wrote:
> 
> I wonder what is the historically first programming environment with native 
> binary floating point which had been proved/demonstrated to handle f.p. 
> binary<->decimal I/O conversions 100% correctly?
> By 100% correctly I mean that all valid binary representations of floating 
> point numbers could be, given a format with enough significant digits, 
> converted to unique text strings, and these strings could be converted back 
> to the corresponding unique binary representations.
> 
> Of course, there is enquire.c which facilitated finding bugs in the 
> Unix/Posix environments, but has anyone cared about this during the mainframe 
> era?

I believe so, yes.  For the design of the floating point feature of the 
Electrologica X8 (early 1960s) the design documents discuss correctness, 
including what the definition of "correct" should be.

There is a very nice and very detailed correctness proof of that floating point 
design, documented in http://repository.tue.nl/674735.  That paper was written
long after the
fact, but by one of the people originally involved in that machine.

Apart from proofs of the correctness of each of the floating point 
instructions, that paper also describes the sqrt library function.   
Interestingly enough, the implementation of that function does not use floating 
point operations.  But the analysis, in appendix B of the paper, clearly shows 
the error terms of the approximation used and why the number of steps used is 
sufficient for correctness of the sqrt implementation.

For a different machine, the CDC 6000 series, I remember reading complaints 
about its bizarre rounding behavior (rounding at 1/3 ?).  I forgot where that 
appeared; possibly a paper by prof. Niklaus Wirth of ETH Zürich.

paul



[Simh] Looking for a milestone

2016-10-14 Thread Leo Broukhis
I wonder what is the historically first programming environment with native
binary floating point which had been proved/demonstrated to handle f.p.
binary<->decimal I/O conversions 100% correctly?
By 100% correctly I mean that all valid binary representations of floating
point numbers could be, given a format with enough significant digits,
converted to unique text strings, and these strings could be converted back
to the corresponding unique binary representations.

Of course, there is enquire.c which facilitated finding bugs in the
Unix/Posix environments, but has anyone cared about this during the
mainframe era?
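For IEEE doubles on a modern runtime the property does hold, and it can be phrased as an executable check; a sketch sampling random bit patterns and relying on Python's shortest correctly rounded repr:

```python
import math
import random
import struct

random.seed(12345)
for _ in range(10_000):
    bits = random.getrandbits(64)
    (x,) = struct.unpack("<d", struct.pack("<Q", bits))
    if math.isnan(x) or math.isinf(x):
        continue               # only finite values round-trip as text
    # the shortest correctly rounded decimal string must parse back
    # to the identical double
    assert float(repr(x)) == x
```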

(I've been playing with the BESM-6 floating point conversions, and the
results are shameful.)

Thanks,
Leo