> Surely the overriding problem is that the manufacturers of graphics cards
> tend to keep the driver code confidential. E.g. the linux drivers - if
> provided - "work" but by no means implement the full capabilities of the
> graphics card.
> 
> 
> Regards
> Brian Beesley

While that's true to some extent, you can't change the fact that the hardware only
supports single-precision. There is some interesting stuff over at
www.gpgpu.org where they are emulating double-precision on the GPU, but it
has a really heavy price in terms of bandwidth and computing power (enough
so that an FFT on the top-end GPUs *might* come out even with a CPU with a
lot of work).
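For the curious, the emulation described in those gpgpu.org papers is usually "double-single" (df64) arithmetic: each value is carried as an unevaluated sum of two float32s, which recovers roughly 46 bits of mantissa. Here's a toy sketch of just the addition step in NumPy (the function names are mine, not from any particular paper):

```python
import numpy as np

def two_sum(a, b):
    # Knuth's TwoSum: returns (s, err) where s = fl(a + b) and err is
    # the exact float32 rounding error of that addition.
    a, b = np.float32(a), np.float32(b)
    s = a + b
    bb = s - a
    err = (a - (s - bb)) + (b - bb)
    return s, err

def split(x):
    # Split a double into a (hi, lo) pair of float32s with hi + lo ~= x.
    hi = np.float32(x)
    lo = np.float32(float(x) - float(hi))
    return hi, lo

def ds_add(x, y):
    # Add two double-single numbers (hi, lo pairs).  Note the cost:
    # an order of magnitude more float32 ops than a native add, which
    # is where the heavy bandwidth/compute penalty comes from.
    s, e = two_sum(x[0], y[0])
    e = np.float32(e + x[1] + y[1])
    return two_sum(s, e)
```

For example, adding 1.0 and 1e-9 this way keeps the small term, whereas a plain float32 add loses it entirely.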

There is also an interesting mixed-precision approach taken by one set of
researchers (SP on the GPU, DP on the CPU), where the CPU would redo the
calculations in double precision when the single precision's 'noise' got too
high. It sounds like this might be something worth thinking about. Could you
maybe do a scenario like this?
Step 1. Do a single-precision FFT on the GPU.
Step 2. Check the rounding error (which you'd do for a single- or
double-precision FFT anyway).
Step 3. Success: go to step 1. Fail: do a DP FFT on the CPU.
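The loop above can be sketched like this in NumPy, using a squared cyclic convolution as a stand-in for the FFT-multiply step. Two caveats: the error threshold is a number I picked, and since np.fft always computes internally in double, I fake single precision by quantizing the spectra to complex64:

```python
import numpy as np

ERROR_LIMIT = 0.25  # hypothetical acceptance threshold

def square_convolution(x, cdtype):
    # Cyclic self-convolution of an integer vector via FFT.  The spectra
    # are quantized to `cdtype` (complex64 or complex128) to mimic doing
    # the transform at that precision.
    f = np.fft.fft(x).astype(cdtype)
    y = np.real(np.fft.ifft((f * f).astype(cdtype)))
    # Rounding-error check: how far are the results from integers?
    err = float(np.max(np.abs(y - np.round(y))))
    return np.round(y).astype(np.int64), err

def square_with_fallback(x):
    # Step 1: single-precision pass (the "GPU" side).
    y, err = square_convolution(x, np.complex64)
    # Step 2: check the rounding error.
    if err <= ERROR_LIMIT:
        return y, "single"              # Step 3: success
    # Step 3, fail: redo in double precision (the "CPU" side).
    y, _ = square_convolution(x, np.complex128)
    return y, "double"
```

With small inputs the single-precision pass suffices; once the convolution outputs outgrow float32's 24-bit mantissa, the check trips and the double-precision fallback kicks in.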

I don't know how often you'd get rounding errors (I suspect it'd happen a
lot), so the cost-benefit of the approach is probably net negative, since it
involves a lot of cross-talk between the CPU and GPU.

I think eventually nVidia and ATI (AMD) will put higher-precision FP on their
GPUs, especially with the likes of Ageia putting out 'PPUs', which I *think*
support double-precision FP (wouldn't they have to, or the physics at large
distances would get wonky?).

-j

_______________________________________________
Prime mailing list
[email protected]
http://hogranch.com/mailman/listinfo/prime
