I was wondering if anyone has thought about accelerating NumPy with a GPU. For example, NVIDIA's CUDA SDK provides a feasible way to offload vector math onto the very fast SIMD processors on the GPU. Currently, GPUs primarily support single-precision floats and are not fully IEEE 754 compliant, but they could still be useful for some applications.
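To make the single-precision caveat concrete, here is a small NumPy sketch (my own illustration, not from any GPU code) showing how an update that is representable in double precision can vanish entirely in single precision — the kind of accuracy loss you would accept when offloading to a float32-only GPU:

```python
import numpy as np

# Adding a small increment in double vs. single precision.
# 1e-8 is well above the float64 machine epsilon (~2.2e-16),
# but below the float32 machine epsilon (~1.2e-7).
a64 = np.float64(1.0) + np.float64(1e-8)
a32 = np.float32(1.0) + np.float32(1e-8)

print(a64 - 1.0)                 # small but nonzero in double precision
print(a32 - np.float32(1.0))    # the increment rounds away in single precision
```

Whether this matters depends entirely on the application: graphics and many signal-processing workloads tolerate it, while iterative numerical methods may not.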
If there turns out to be a significant speedup over using the CPU, this could be a very accessible way to do scientific and numerical computation on GPUs, much easier than coding directly against the GPU APIs.

Martin

_______________________________________________
Numpy-discussion mailing list
[email protected]
http://projects.scipy.org/mailman/listinfo/numpy-discussion
