I've updated the mandelbrot.py demo:
http://wiki.tiker.net/PyCuda/Examples/Mandelbrot
It now runs against the latest git. Cheers for accepting the patches.

There's a (slightly linkbaity!) post on my blog with timing info for a
9800GT and GTX 480:
"22,937* faster Python math using pyCUDA"
http://ianozsvald.com/2010/07/14/22937-faster-python-math-using-pycuda/
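The speed-up in the demo comes from writing the Mandelbrot iteration as
elementwise array operations rather than explicit Python loops; a minimal
CPU sketch of that style (numpy stands in here for pycuda.gpuarray, and the
function name, grid size and iteration count are my own choices, not the
demo's):

```python
import numpy as np

def mandelbrot(w=200, h=200, max_iter=50, dtype=np.complex64):
    # Build the complex plane as a grid of starting points q.
    xs = np.linspace(-2.0, 1.0, w)
    ys = np.linspace(-1.5, 1.5, h)
    q = (xs[np.newaxis, :] + 1j * ys[:, np.newaxis]).astype(dtype)
    z = np.zeros_like(q)
    output = np.zeros(q.shape, dtype=np.int32)
    for i in range(1, max_iter + 1):
        z = z * z + q                      # elementwise update
        escaped = np.abs(z) > 2.0          # elementwise comparison
        output[escaped & (output == 0)] = i
        z[escaped] = 2.0                   # clamp escaped points so they don't overflow
    return output

img = mandelbrot()
print(img.shape, img.max())
```

With the comparison operators from patch 0002 merged, the same elementwise
style should carry over to GPUArray with the loop body running on the GPU.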

One question raised by my post: how come double precision CPU math is
faster than single precision CPU math? I hadn't expected that result to
drop out of the test, and it has been a while since I did any serious
speed tests on CPUs. Is it now generally the case on x86 that double
precision math (in both C and Python?) is faster than single precision?
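For anyone who wants to reproduce the CPU side of the comparison, here's a
quick sketch (array sizes, repetition count and function name are mine, not
the blog post's; note that plain CPython floats are always C doubles, so
"single precision" on the CPU really means numpy float32). Which width wins
will depend on the CPU, compiler and SIMD path numpy was built with:

```python
import timeit
import numpy as np

def time_dtype(dtype, n=1_000_000, reps=20):
    # Time repeated elementwise multiply-adds at the given float width.
    a = np.ones(n, dtype=dtype)
    b = np.full(n, 0.5, dtype=dtype)
    return timeit.timeit(lambda: a * a + b, number=reps)

t32 = time_dtype(np.float32)
t64 = time_dtype(np.float64)
print(f"float32: {t32:.3f}s  float64: {t64:.3f}s")
```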

i.

On 9 July 2010 22:11, Andreas Kloeckner <[email protected]> wrote:
> On Tue, 29 Jun 2010 16:44:18 +0100, Ian Ozsvald <[email protected]> wrote:
>> Andreas, I'm attaching two patches.
>>
>> 0001 removes the #warning lines in cuda.hpp that make MSVC (2008 on WinXP)
>> fail.
>>
>> 0002 adds GPUArray comparisons for == != < > <= >=
>
> Merged, thanks!
>
>> Assuming you're cool with the patches I can contribute an updated
>> Mandelbrot.py where a reasonable speed-up is obtained using pure
>> Python/GPUArray(numpy-like) operators rather than having to implement
>> a pure .cu solution. This GPUArray solution sits between a numpy (CPU)
>> speed-up and the pure .cu code version. It'll make for a good demo for
>> pure-Python folk (like my boss).
>
> Sure, I'd be interested in seeing that.
>
> Andreas
>
>



-- 
Ian Ozsvald (A.I. researcher, screencaster)
[email protected]

http://IanOzsvald.com
http://MorConsulting.com/
http://blog.AICookbook.com/
http://TheScreencastingHandbook.com
http://FivePoundApp.com/
http://twitter.com/IanOzsvald

_______________________________________________
PyCUDA mailing list
[email protected]
http://lists.tiker.net/listinfo/pycuda
