On 1/4/03 3:02 PM, "Shane Legg" <[EMAIL PROTECTED]> wrote:
> 
> I had similar thoughts, but when I did some tests on the Webmind code
> a few years back I was a little surprised to find that floating point
> was about as fast as integer math for our application.  This seemed to
> happen because where you could do some calculation quite directly with
> a few floating point operations, you would need more to achieve the same
> result with integer math due to extra normalisation operations etc.


I can see this in some cases, but for us the number of instructions is
literally the same; the data fields in question could swap out floats for
ints (with a simple typedef change) with no consequences.  We do have a
normalization function, but since that effectively prunes things we'd use it
whether it was floating point or integer, and it is only very rarely
triggered anyway.
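
To make this concrete, here is a minimal sketch of the kind of thing I
mean (hypothetical names, not our actual code):

    /* Sketch: the value type can be switched between integer and
     * floating point with a one-line typedef change, because the
     * arithmetic performed on it is the same either way. */
    #include <stdint.h>

    typedef int32_t weight_t;   /* or: typedef float weight_t; */

    weight_t combine(weight_t a, weight_t b)
    {
        /* identical instruction count for int32_t and float */
        return a + b;
    }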

I guess the key point is that we aren't really "faking" floating point with
integers.  It is a case of floating point bringing nothing to the table
while offering somewhat inferior performance under certain conditions.  The
nice thing about integers is that performance is portable.  I certainly
wouldn't shy away from using floating point if it made sense.  It is just a
mild curiosity that when all is said and done, nothing in the core engine
requires floating point computation.

 
> I was also surprised to discover that the CPU did double precision
> floating point math at about the same speed as single precision floating
> point math.  I guess it's because a lot of floating point operations are
> internally highly parallel and so extra precision doesn't make much speed
> difference?


I believe this is because current FP pipelines are generally double precision
all the way through.  If you run single precision code, it occupies just as
many execution resources as double precision.

The exception is the SIMD floating point engines (aka "multimedia
extensions") that a lot of processors support today.  But I normally just
write all floating point for standard double precision execution these days.
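
For illustration, a minimal sketch of the SIMD case using x86 SSE
intrinsics (assuming an SSE-capable compiler; the function itself is
made up for the example):

    /* Four single-precision adds per instruction via SSE, which is
     * where single precision actually buys extra throughput. */
    #include <xmmintrin.h>

    void add4(float *dst, const float *a, const float *b)
    {
        __m128 va = _mm_loadu_ps(a);             /* load 4 floats  */
        __m128 vb = _mm_loadu_ps(b);
        _mm_storeu_ps(dst, _mm_add_ps(va, vb));  /* 4 adds at once */
    }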

 
> Anyway, the thing that really did affect performance was the data size
> of the numbers being used (whether short, int, long, float, double etc.)
> Because we had quite a few RAM cache misses, using a smaller data type
> effectively meant that we could have twice as many values in cache at
> the same time and each cache miss would bring twice as many new values
> into the cache.  So it was really the memory bandwidth required by the
> size of the data types we were using that slowed things down, not the
> time the CPU took to do a calculation on a double precision floating
> point number compared to, say, a simple int.


A good point, and one that applies to using LP64 types as well.  The
entirety of our code fits in cache, but data fetches are unavoidably
expensive.
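
The arithmetic behind that is easy to sketch (the 64-byte cache line is
an assumption for the example; real line sizes vary):

    /* Each cache miss pulls in a fixed-size line, so smaller types
     * mean more values per miss: 8 doubles vs. 32 shorts per line. */
    #include <stdio.h>

    int main(void)
    {
        const int line = 64;  /* assumed cache line size in bytes */
        printf("doubles per line: %d\n", line / (int)sizeof(double));
        printf("shorts per line:  %d\n", line / (int)sizeof(short));
        return 0;
    }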


> I'd always had a bias against using floating point numbers ever
> since I used to write code 15 years ago, when the CPUs I used
> weren't designed for it and it really slowed things down badly.
> It's a bit different now, however, with really fast floating point
> cores in CPUs.


One consideration that HAS gone into maintaining a pure integer code base is
that, as currently designed, it can run with extreme efficiency on simple
integer MasPar-style hardware.  Used that way, there is an opportunity for
scalability far beyond what we could get if we required a floating point
pipeline.  The idea of having scads of simple integer cores connected to a
small amount of fast memory and low latency messaging interconnects is
appealing, and our code is very well suited to this type of architecture.
Fortunately, there seem to be companies starting to produce these types of
chips.

Ultimately, we'd like to move the code to something like this, and since
there is no design or performance cost to using only integers on standard
PCs (they work better anyway in our case), we won't introduce floating
point into the kernel without a good reason.  So far we haven't actually
come across a need for floating point computation in the kernel, so the
issue has never arisen.

Cheers,

-James Rogers
 [EMAIL PROTECTED]

