Ian Clark <earthspo...@gmail.com> wrote:
> But why should I feel obliged to carry on using lossy methods when I've
> just discovered I don't need to? Methods such as floating point arithmetic,
> plus truncation of infinite series at some arbitrary point. The fact that
> few practical measurements are made to an accuracy greater than 0.01%
> doesn't actually justify lossy methods in the calculating machine. It
> merely condones them, which is something else entirely.

There will be a cost, of course. Supporting arbitrarily small and
large numbers changes the time characteristics of the computations in
ways that will depend on the log-size of the numbers -- and of course
will blow the CPU's caches. Also, because the intermediate values are
being stored with unlimited precision, you may find some surprises,
such as values close to 1 which have enormous numerators and
denominators.
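
As a toy illustration of that last point (Python here, simply because
it has exact rationals built in; the harmonic-series example is just
something I made up):

    from fractions import Fraction

    # Partial sums of the harmonic series, kept as exact rationals.
    H = [Fraction(0)]
    for k in range(1, 101):
        H.append(H[-1] + Fraction(1, k))

    # The ratio of consecutive partial sums is close to 1 ...
    r = H[99] / H[100]
    print(float(r))            # roughly 0.998

    # ... but its exact representation is already bulky.
    print(r.numerator.bit_length(), r.denominator.bit_length())

The value itself is unremarkable; the integers needed to hold it
exactly are already well past machine-word size after only a hundred
terms.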

IMO it's a worthy experiment, especially if you wind up gathering data
about the cost and benefit.

There are some interesting reflections going on about this on the
"unums" mailing list. The trouble with indefinite-precision rationals
is that they are overkill for all of the problems where they're
actually needed, since the inputs and the solution will normally need
to be expressed to only finitely many digits. Now, I don't think this
makes doing experiments with them worthless; far from it. By tracking
things like the smallest expected input (for example the smallest
triangle side, or the largest ratio between sides) and the largest
integer generated as an intermediate value (perhaps also tracking the
ratio in which that integer appeared), we can wind up answering how
bad things can get (which is, of course, the task of numerical
analysis).
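
A sketch of the sort of instrumentation I mean (Python again; the
tracked() wrapper and the Heron's-formula toy are illustrations of my
own, not anything from the unums list):

    from fractions import Fraction

    def tracked(x, stats):
        # Record the largest numerator and denominator seen so far.
        stats['num'] = max(stats['num'], abs(x.numerator))
        stats['den'] = max(stats['den'], x.denominator)
        return x

    def squared_area(a, b, c, stats):
        # Heron's formula, kept exact:
        #   16 * area^2 = (a+b+c)(-a+b+c)(a-b+c)(a+b-c)
        p = tracked(a + b + c, stats)
        q = tracked(-a + b + c, stats)
        r = tracked(a - b + c, stats)
        s = tracked(a + b - c, stats)
        return tracked(p * q * r * s / 16, stats)

    stats = {'num': 0, 'den': 1}
    # A nearly degenerate triangle: one side far shorter than the others.
    sq = squared_area(Fraction(1, 10**6), Fraction(1), Fraction(1), stats)
    print(float(sq))
    print(stats['num'].bit_length(), stats['den'].bit_length())

Run that over whatever inputs you expect in practice and the last
pair of numbers is the "how big do the intermediates get" answer.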

Ulrich Kulisch developed a technique called the "super-accumulator",
which was supposed to sit alongside the usual set of floating-point
registers. It stored an overkill number of bits, enough to let it
accumulate sums of products of arbitrary floats exactly -- the sort
of operation you need to evaluate polynomials and do linear algebra.
Using it, he was able to show that a large number of operations which
were considered unstable could be stabilized by this unrounded
accumulator. In the end it wasn't made part of the IEEE standard, but
it is being included in some of the numerical systems being developed
in response to the machine-learning world's need for more flexible
floating-point formats, where smaller-bitwidth floats both make
stability a serious concern and make the required size of the
super-accumulator much smaller.
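
The effect is easy (if slow) to emulate in software, since the
product of two floats is itself exactly representable as a rational.
A Python sketch of the idea -- accumulate exactly and round once at
the very end, instead of rounding after every addition:

    from fractions import Fraction
    import random

    def exact_dot(xs, ys):
        # Emulates an unrounded accumulator: each float converts to an
        # exact rational, products and sums stay exact, and the only
        # rounding is the final conversion back to a float.
        acc = Fraction(0)
        for x, y in zip(xs, ys):
            acc += Fraction(x) * Fraction(y)
        return float(acc)

    def naive_dot(xs, ys):
        # Ordinary float accumulation: rounds after every step.
        acc = 0.0
        for x, y in zip(xs, ys):
            acc += x * y
        return acc

    random.seed(0)
    xs = [random.uniform(-1e8, 1e8) for _ in range(10000)]
    ys = [random.uniform(-1e8, 1e8) for _ in range(10000)]
    print(exact_dot(xs, ys))
    print(naive_dot(xs, ys))

The hardware version replaces the Fractions with one very wide
fixed-point register -- on the order of a few thousand bits for
64-bit floats, if I remember right, and far fewer for the small
formats the machine-learning people are using.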

> Ian Clark

-Wm
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
