On Tue, 2007-07-03 at 21:51 -0400, Michael wrote:
> But if you know how to "simulate" quadruple precision in Matlab, Maple, or
> Mathematica, in order to see whether an algorithm will overflow when
> converted to C/C++/Fortran, please let me know. I want to do the algorithm
> design in Matlab and test whether it will overflow before converting
> everything into C/C++/Fortran.
>
> If you know how to "simulate" quadruple precision in Matlab, Maple, or
> Mathematica, even with the symbolic toolbox, please let me know too... this
> is for algorithm design and testing...
>
> Moreover, are there popular quadruple precision packages? Please recommend
> the fastest one. I am really in huge need of speed.
>
> Thank you very much!
I am not an expert in advanced mathematics, but as a senior software
engineer I know a little about computer systems. We already have not only
double precision but also long double precision, which is 10 bytes instead
of 8; a short sketch in the P.S. below shows the extra range it buys you.
With some work one might convert the existing GSL code to use it.

If you are lucky enough to have a Harris 24-bit machine (assuming they are
still as they used to be), it has 6 bytes for single precision, 12 bytes
for double precision, and 24 bytes for double-double precision.

High speed is more a characteristic of the hardware than of the software.
A Cray or a mainframe is the easiest way to make the calculations faster.
Be aware that IBM mainframes use 64-bit notation for floating-point storage
and calculation. This shifts the accuracy (magnitude) to the left of the
decimal point, but runs the risk of floating-point underflow in scientific
calculations. IBM does this because these machines are mostly used for
financial calculations, which only care about the two significant decimal
digits (of the dollar).

--
Jack Denman <[EMAIL PROTECTED]>
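P.S. Here is a minimal C sketch of the long double point above. It assumes
a platform where long double is the x87 80-bit extended type (the 10-byte
case I mentioned); on other machines long double may be no wider than
double, which is what the LDBL_DIG printout will tell you.

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        /* Report the precision and range of each type. */
        printf("double:      %d digits, max %e\n", DBL_DIG, DBL_MAX);
        printf("long double: %d digits, max %Le\n", LDBL_DIG, LDBL_MAX);

        /* The same product overflows in double but not in 80-bit
           extended, whose range extends to roughly 1.2e4932. */
        double d = DBL_MAX;
        long double ld = DBL_MAX;
        d  *= 2.0;   /* overflows to inf */
        ld *= 2.0L;  /* still finite in extended precision */
        printf("double result:      %e\n", d);
        printf("long double result: %Le\n", ld);
        return 0;
    }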
