On Wed, 20 Apr 2011, Hans-Peter Diettrich wrote:

> Of course there exists no general rule; it depends on the concrete purpose of a calculation which algorithm, precision and type (BCD, fixed point...) yields the "best" results. But there also exists no reason why a coder should be prevented from using existing instructions and data types.

Well... I actually believe compilers should support extended precision. I frequently get Fortran programs to benchmark that use the REAL*10 type.

Do those programmers have good reasons for using REAL*10? Probably not. They use the best precision available by default, and they code in Fortran because of this kind of support. No, not GNU Fortran: it doesn't support REAL*10, so I need to use the expensive commercial compilers. They don't care; they don't pay for it.
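
For illustration, a minimal Free Pascal sketch of the precision gap at stake (my example, not from the thread; it assumes a target where Extended is the 80-bit x87 type that REAL*10 corresponds to, and not just an alias for Double):

program extdemo;
var
  d: Double;
  e: Extended;
begin
  { Adding 1.0E-18 to 1.0 is lost in Double's 53-bit significand }
  { but survives in Extended's 64-bit significand. }
  d := 1.0 + 1.0E-18;
  e := 1.0 + 1.0E-18;
  WriteLn('Double  : ', d - 1.0);  { prints 0 }
  WriteLn('Extended: ', e - 1.0);  { prints roughly 1E-18 on x86 }
end.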

Is it slow? Yes. Do they care? Sometimes. But... parallelizing over 256 cores gives more benefit than switching to fast double precision. They start asking for government subsidies for the next big supercomputer, for the sake of promoting science. That's where your tax money goes.

Shake your head... It's stupid; I've been doing that for a few years already. But the solution is not to remove extended support from the compiler. Users will walk away.

Daniël
_______________________________________________
fpc-devel maillist  -  fpc-devel@lists.freepascal.org
http://lists.freepascal.org/mailman/listinfo/fpc-devel
