Hi,

I have been trying to understand the performance of a simple test
program that uses both floats and doubles, and I was hoping that someone on
this list could help give some insight.  Basically, I have a small program
that does some simple floating point math on random numbers, using either
floats or doubles, in a loop.  I use clock() to time the loop so that I can
compare the speed of floats and doubles.  With no optimization things are
as expected: doubles take quite a bit longer.  With '-O3 -march=i686' the
times are exactly the same.  This is on a Celeron 366.  Any ideas?
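Something roughly along these lines (a simplified sketch, not my exact
program; the iteration count and the arithmetic are just placeholders):

    /* Sketch of the kind of benchmark I mean: the same arithmetic done
     * once with floats and once with doubles, each loop timed with
     * clock().  The volatile accumulators are there so the optimizer
     * can't throw the loops away entirely. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 10000000L

    int main(void)
    {
        volatile float  fsum = 0.0f;
        volatile double dsum = 0.0;
        clock_t start;
        long i;

        srand(1);

        /* float loop */
        start = clock();
        for (i = 0; i < N; i++) {
            float a = (float)rand() / RAND_MAX;
            fsum += a * 1.5f + 0.25f;
        }
        printf("float:  %f s\n",
               (double)(clock() - start) / CLOCKS_PER_SEC);

        /* double loop */
        start = clock();
        for (i = 0; i < N; i++) {
            double a = (double)rand() / RAND_MAX;
            dsum += a * 1.5 + 0.25;
        }
        printf("double: %f s\n",
               (double)(clock() - start) / CLOCKS_PER_SEC);

        return 0;
    }

I compile it once with no flags and once with '-O3 -march=i686' and
compare the printed times.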

Karl

-- 
_____________________________________________________
| Karl W. MacMillan                                 |
| Computer Music Department                         |
| Peabody Institute of the Johns Hopkins University |
| [EMAIL PROTECTED]                           |
| www.peabody.jhu.edu/~karlmac                      |
-----------------------------------------------------
