On Wed, 8 Mar 2017 15:13:59 -0600, Paul Gilmartin wrote:

>On Wed, 8 Mar 2017 14:32:10 -0600, John McKown wrote:
>>
>>one reason that programmers love
>>packed rather than binary is that they can read it directly in the hex
>>dump. Said dump being far more prevalent tool for debugging in the far
>>past. Some decisions are not really hardware dictated. They're cultural.
>>
>DFP must have been a great disappointment to programmers who expected
>it would facilitate reading floating point numbers in dumps.
>
>And appearance of dumps is the only reason I can imagine that packed
>decimal is sign-magnitude rather than 10's complement.
That was not the reason. "Architecture of the IBM System/360" by Amdahl, Blaauw, and Brooks describes many of the design decisions that were made, including this one:

http://www.ece.ucdavis.edu/~vojin/CLASSES/EEC272/S2005/Papers/IBM360-Amdahl_april64.pdf

One of the reasons for using decimal arithmetic in commercial programs is that binary cannot exactly represent 1/10, just as decimal cannot exactly represent 1/3. If you divide 1 by X'0A' in binary, you get the repeating hexadecimal fraction X'0.1999...'.

--
Tom Marchant

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
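[A short illustration of the 1/10 point above, added for readers of the archive. This is a Python sketch, not mainframe code; real S/360 commercial programs would use packed decimal instructions directly. It shows that 1/10 has no finite binary (hexadecimal) expansion, while a decimal representation holds it exactly.]

```python
from decimal import Decimal

# What binary floating point actually stores for "0.1" -- not 1/10:
print(Decimal(0.1))   # 0.1000000000000000055511151231257827...

# The hex expansion of 1/10 is the repeating fraction 0.19999...
print((0.1).hex())    # -> 0x1.999999999999ap-4  (1.999...9A x 2**-4)

# Long division of 1 by ten, producing base-16 digits one at a time,
# reproduces the X'0.1999...' pattern mentioned above:
digits = []
rem = 1
for _ in range(8):
    rem *= 16
    digits.append(rem // 10)
    rem %= 10
print('0.' + ''.join('%X' % d for d in digits))  # -> 0.19999999

# Decimal arithmetic, as in packed decimal, represents 1/10 exactly:
print(Decimal(1) / Decimal(10))  # -> 0.1
```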
