--On Wednesday, 24 January, 2001 17:33 +0530 Manoj Dhooria
<[EMAIL PROTECTED]> wrote:

> Good overview, Jaffer. Am copying to the general list too, to
> draw attention to a related issue for which I don't know of an
> adequate internet standard and for which I had to invent my own
> representation.
>
> I am into distributed engineering software development, &
> keeping precision of floating point values is extremely
> important for me - even when they will be transmitted over the
> net & across CPU architectures. Using printf() format is
>...
> What I essentially do is transform the source CPU's native
> representation to IEEE floating point representation; then
> print it out as a sequence of hex bytes; then run an optimizer
> on it to drop some of the "unnecessary" octets (guaranteeing
> that the original hex sequence can be reconstructed at the
> receiver end; no dropping of bytes for numbers where I can not
> guarantee this).
>... 
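[For readers following along, the quoted scheme can be sketched roughly
as follows. This is my own illustrative reconstruction, not Manoj's
actual wire format: I assume big-endian IEEE 754 binary64 and a rule
of dropping only whole trailing zero octets, which the receiver can
restore losslessly by right-padding with zeros.]

```python
import struct

def encode(x: float) -> str:
    """Serialize a float as big-endian IEEE 754 binary64 hex,
    dropping trailing all-zero octets.  Reversible, because the
    decoder pads the missing octets back with zeros."""
    hx = struct.pack(">d", x).hex()           # 16 hex digits
    while len(hx) > 2 and hx.endswith("00"):  # drop whole octets only
        hx = hx[:-2]
    return hx

def decode(hx: str) -> float:
    """Reconstruct the exact double from its trimmed hex form."""
    return struct.unpack(">d", bytes.fromhex(hx.ljust(16, "0")))[0]

# Round-trips exactly, independent of the sender's printf settings:
for v in (1.0, -2.5, 3.141592653589793, 0.1, 0.0):
    assert decode(encode(v)) == v
print(encode(1.0))  # "3ff0" -- six zero octets dropped
```

Values with "round" binary representations (powers of two, small
integers) compress well under this rule; a value like 0.1, whose
significand has no trailing zero octets, is sent in full.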

Hi.

I am long out of the field (and on a different continent from my
own library), but, if it is of interest to either of you, there
was an extensive literature on optimal representation of
precision of numbers in the statistical computing and statistical
and scientific database literature in the decade or so starting
in the mid-1970s.  That period overlapped the completion and
introduction of IEEE floating point, and the literature covers
both fixed-precision machines and the gradual underflow of IEEE
floating point.

One of the interesting problems is that one really wants to
understand the data (measurement) precision and how that precision
degrades under computation (and under different algorithms).
Basing the display precision on the apparent
(machine-representation) precision of the floating point values
themselves is ultimately as misleading as using a constant display
precision independent of the numeric properties of the machine
representations.
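[A two-line illustration of the point: a value measured to only
three significant figures, once stored as an IEEE 754 double,
displays roughly 17 apparent decimal digits.  The specific value
here is my own example.]

```python
measured = 0.100  # known only to 3 significant figures
# Displaying at full machine precision implies spurious accuracy:
print(f"{measured:.17g}")  # 0.10000000000000001
# An honest display reflects the measurement, not the machine:
print(f"{measured:.3g}")   # 0.1
```

Neither display is derivable from the stored bits alone; the
measurement precision has to be carried as separate metadata.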

    john

p.s. Contemporary versions of Fortran and PL/I also accept ISO
6093 numbers.  The latter is probably no longer of importance,
but, if you are making reference to the numerical and scientific
programming communities, Fortran still is.
