Bojan Antonovic wrote:
> My conjecture is: there is no need for 128 bit instruction code, so 64 bit
> inst. code will be the final one.
To which William Stuart replied:
>I am willing to bet that at least one of the non-intel chip manufacturers
>announces a 128bit chip, if only to gain some attention.
Some of the higher-end chips are already doing some 128-bit operations,
for instance the 128-bit floating multiply-add on the MIPS and IBM chips,
and the 128-bit integer multiply on MIPS. I note that the Alpha Hardware
Reference Manual also defines a 128-bit X-floating data type, which is
already supported by the DEC F90 compiler (MIPSPro F90 has something
similar) but is currently software-emulated, hence slow.
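To see what software emulation of a wide float buys you (and what hardware
doubles silently throw away), here is a small sketch -- Python's standard
decimal module standing in for an emulated 128-bit type, since IEEE quad
precision carries roughly 34 decimal digits:

```python
from decimal import Decimal, getcontext

# Hardware double precision (~16 decimal digits): the small term
# is below the round-off threshold and vanishes from the sum.
x = (1.0 + 1e-20) - 1.0
print(x)   # 0.0 -- the 1e-20 was lost entirely

# Software-emulated wide arithmetic: 34 significant digits,
# roughly what a 128-bit (quad precision) float provides.
getcontext().prec = 34
y = (Decimal(1) + Decimal("1e-20")) - Decimal(1)
print(y)   # 1E-20 -- the small term survives
```

The emulated version gets the right answer, but every operation goes
through a software library rather than one floating-point instruction,
which is exactly why the emulated X-floating type is slow.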
Serious scientific programmers will always demand access to at least a
128-bit floating data type; Cray machines have had such for many years,
perhaps right from the beginning. One can often obviate the need for
the extra precision by careful coding to minimize round-off, but the
fact remains that there are important types of calculations (e.g.
large ill-conditioned matrix computations) that are simply not feasible
using 8-byte reals. A few years back I did some work on non-normal
differential operators arising in hydrodynamic stability theory (see
Mayer & Reshotko, Physics of Fluids, January 1997) which illustrates
this nicely: such operators have several, often many, nearly degenerate
eigenvalues, i.e. ones whose eigenvectors are highly non-orthogonal.
This is all quite technical but the upshot is, when one attempts to
compute the eigenvalues (which will tell one whether the fluid system
is stable to perturbations or not), the non-normal nature of the underlying
differential operator naturally leads to an ill-conditioned (often
extremely so) discrete matrix eigenvalue problem. One can use all the
standard (and even some non-standard) tricks to minimize round-off
accumulation - I was using full row and column pivoting and lots of
other tricks - but for certain parameter ranges, the only way to get
even one significant digit in the most-unstable eigenvalue was to go
to extended precision arithmetic. One could argue that various symbolic
algebra packages allow one to do this without special hardware, but when
you're dealing with matrices of dimension 1000 x 1000 and larger, Mathematica
just isn't going to cut it.
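The flavor of the problem shows up even at tiny scale. The sketch below
(my own toy illustration, not the hydrodynamic computation above) solves
an ill-conditioned Hilbert system whose exact solution is all ones, once
in hardware doubles and once in exact rational arithmetic via Python's
fractions module -- the latter playing the role of a symbolic package:

```python
from fractions import Fraction

def solve(A, b):
    """Gaussian elimination with partial pivoting.
    Written entry-wise, so it runs unchanged on float or Fraction input."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]      # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # partial pivot
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0] * n
    for i in range(n - 1, -1, -1):                        # back-substitution
        s = M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / M[i][i]
    return x

n = 13
# Hilbert matrix: notoriously ill-conditioned (condition number ~ 1e18 here).
H = [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]
b = [sum(row) for row in H]        # chosen so the exact solution is all ones

x_exact = solve(H, b)                                     # exact rationals
x_float = solve([[float(a) for a in row] for row in H],
                [float(v) for v in b])                    # hardware doubles

print(all(v == 1 for v in x_exact))      # True: every digit correct
print(max(abs(v - 1) for v in x_float))  # large: doubles lose essentially all digits
```

Even with pivoting, double precision loses essentially every digit at
n = 13, while the exact (software-emulated) arithmetic is perfect -- but
the rational arithmetic is orders of magnitude slower, and would be
hopeless at dimension 1000.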
Alpha CPUs are already being used in some of the newer, massively parallel
Cray machines - I expect a hardware 128-bit floating type is a logical
next step to satisfy the demands of users of such machines.
My point is this: 64-bit may well be enough for even power-hungry PC
users for the next ? years, but historically it hasn't been PC users
who have driven chip makers' innovations in the hardcore computation
arena - just look how long it took Intel to begin serious work on the
IA-64.
Todd Lewis writes:
>I'm not ruling out 128bit chips. High end machines will always be out
>there and ahead of the marketplace. The Alpha has been out for awhile
>now, but how many alpha machines do you see sitting on desktops in
>your basic office.
The desktop in my pretty basic office has one... :)