Thank you, Nelson!

It appears that, indeed, apart from theoretical research, there was not
much actual work on perfecting the accuracy of these conversions until
the 1990s.
Then I need not feel ashamed on behalf of the Soviet applied
mathematicians for not doing a job that was good enough by today's
standards back in the mid-1970s.
That's a relief.

Regards,
Leo

On Mon, 17 Oct 2016 18:42:19 -0600, "Nelson H. F. Beebe" <
[email protected]> wrote:

>
> The discussions on this thread began with a question about the
> accuracy of binary->decimal->binary conversions.  The key original
> references are recorded in
>
>         http://www.math.utah.edu/pub/tex/bib/fparith.bib
>         http://www.math.utah.edu/pub/tex/bib/fparith.html
>
> in entries Goldberg:1967:BED, Matula:1968:BCT, and Matula:1968:C.
> Those papers showed how many digits were needed for correct round-trip
> conversions, but did not exhibit code to do so.
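> Those digit counts are easy to check empirically. The sketch below is
> my own illustration (not code from the cited papers): it computes the
> Matula bound ceil(p * log10(2)) + 1 for the IEEE single and double
> significand sizes, and demonstrates on a concrete double that 17
> significant decimal digits round-trip where 16 can fail:

```python
# Illustration (not from the cited papers): the Matula bound on the decimal
# digits needed for an exact binary -> decimal -> binary round trip.
import math

def roundtrip_digits(p):
    # p = significand bits; result = decimal digits that always suffice
    return math.ceil(p * math.log10(2)) + 1

print(roundtrip_digits(24))  # IEEE single (p = 24): 9
print(roundtrip_digits(53))  # IEEE double (p = 53): 17

# On a concrete double: 17 significant digits round-trip exactly,
# while 16 digits lose information.
x = 0.1 + 0.2
assert float(f"{x:.16e}") == x   # %.16e prints 17 significant digits
assert float(f"{x:.15e}") != x   # 16 digits collapse x to 0.3
```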
>
> Some later papers had real source code, including Steele:1990:HPF,
> Clinger:1990:HRF, Clinger:2004:RHR, Burger:1996:PFP, Knuth:1990:SPW,
> and Abbott:1999:ASS.  The 20-year retrospectives in Clinger:2004:RHR
> and Steele:2004:RHP sum up that earlier work, and may be the best
> starting point to understand the problem.
>
> It is decidedly nontrivial: the Abbott paper in the section
> ``Difficult numbers'' starting on page 739 discusses hard cases [where
> the exact result in the output base is almost exactly halfway between
> two machine numbers], and on page 740, they write ``The decimal size
> may be unbounded, but there is a natural bound derived from the
> exponent range, as was mentioned earlier. This bound is 126 digits for
> single, 752 for double, and 11503 for extended precision.''
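> To make the halfway-case difficulty concrete, here is a small sketch
> of my own (assuming Python 3.9+ for math.nextafter): the exact decimal
> expansion of the midpoint between 1.0 and the next larger double has
> 53 fractional digits, and a correct converter must read far enough to
> break the tie; round-half-even sends the exact tie to 1.0, while one
> extra nonzero digit flips it to the neighbor:

```python
# Illustration of a "difficult number": the exact midpoint between two
# adjacent doubles, written out in full decimal.
import math

lo = 1.0
hi = math.nextafter(lo, math.inf)   # next double above 1.0 (Python 3.9+)

# The midpoint is 1 + 2**-53 = (2**53 + 1) / 2**53; its exact decimal
# expansion is spelled by the digits of (2**53 + 1) * 5**53.
digits = str((2**53 + 1) * 5**53)
mid_str = digits[0] + "." + digits[1:]

assert len(digits) - 1 == 53        # 53 digits after the decimal point
assert float(mid_str) == lo         # exact tie: round-half-even -> 1.0
assert float(mid_str + "1") == hi   # one digit past the tie -> next double
```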
>
> Entry Knuth:1990:SPW has the provocative title ``A Simple Program
> Whose Proof Isn't'': it examines the conversions between fixed-binary
> and decimal needed in the TeX typesetting system, a much simpler
> problem whose proof eluded Knuth for several years until this paper.
> A companion paper Gries:1990:BDO supplies an alternative proof of
> Knuth's algorithm.
>
> As to the first actual system to provide correct round-trip
> conversions, the Abbott paper on the IBM mainframe (S/360 to zSeries
> machines) describes the pressure to get it right the first time,
> because of the longevity of that architecture, and the high cost of
> repairing hardware implementations.
>
> The 1990 Steele and Clinger references above supply software
> implementations in Common Lisp exploiting the multiple-precision
> arithmetic supported in that language.
>
> Some current compilers, such as releases of gcc for the last several
> years, use the GMP and MPFR multiple-precision arithmetic packages to
> supply correct compile-time conversions.  Presumably, the revision
> history of GNU glibc would reveal when similar actions were taken for
> the strtod(), printf(), and scanf() families of the C and C++
> programming languages. I have not personally investigated that point,
> but perhaps other list members have, and can report their findings.
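> As a quick empirical check of that behavior in one modern runtime (a
> sketch of my own, exercising CPython rather than glibc itself):
> CPython's repr() produces the shortest correctly rounded decimal
> string for a double, so every non-NaN double should survive the round
> trip through text:

```python
# Round-trip self-check: format a double as text, parse it back, verify
# identity. CPython's repr() emits the shortest correctly rounded decimal
# string for a double, so the round trip is exact.
import random
import struct

random.seed(1)
for _ in range(10_000):
    bits = random.getrandbits(64)                       # random 64-bit pattern
    (x,) = struct.unpack("<d", struct.pack("<Q", bits)) # reinterpret as double
    if x != x:                                          # skip NaNs (NaN != NaN)
        continue
    assert float(repr(x)) == x, f"round trip failed for {x!r}"
print("all round trips exact")
```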
>
> From scans of executables of assorted gcc versions, it appears that
> GMP and MPFR came into use in gcc-family compilers about mid-2007.
> The oldest gcc snapshot (of hundreds that I have installed) that
> references both those libraries is gcc-4.3-20070720.  However, the
> ChangeLog file in that release mentions MPFR as early as 18-Nov-2006,
> and has an entry of 11-Jan-2007 that says "Add gmp and mpfr".
>
> David M. Gay (then at AT&T Bell Labs, later part of Lucent Technologies)
> released an accurate implementation of strtod() marked "Copyright (C)
> 1998-2001 by Lucent Technologies".  The oldest filedate in my personal
> archives of versions of that software is 22-Jun-1998, with changes up
> to 29-Nov-2007.
>
>
_______________________________________________
Simh mailing list
[email protected]
http://mailman.trailing-edge.com/mailman/listinfo/simh
