On 8/20/2010 1:51 PM, Maciej Fijalkowski wrote:
> 2010/8/20 Paolo Giarrusso <[email protected]>:
>> 2010/8/20 Jorge Timón <[email protected]>:
>>> Hi, I'm just curious about the feasibility of running Python code on a GPU
>>> by extending PyPy.
>>
>> Disclaimer: I am not a PyPy developer, even if I've been following the
>> project with interest. Nor am I an expert on GPUs - I provide links to
>> the literature I've read.
>> Yet, I believe that such an attempt is unlikely to be interesting.
>> Quoting Wikipedia's synthesis:
>> "Unlike CPUs however, GPUs have a parallel throughput architecture
>> that emphasizes executing many concurrent threads slowly, rather than
>> executing a single thread very fast."
>> And significant optimizations are needed anyway to get performance from
>> GPU code (and if you don't need the last bit of performance, why
>> bother with a GPU?), so I think that the need to use a C-like language
>> is the smallest problem.
>>
>>> I don't have the time (and probably not the knowledge either) to develop that
>>> PyPy extension, but I just want to know if it's possible.
>>> I'm interested in languages like OpenCL and NVIDIA's CUDA because I think
>>> the future of supercomputing is going to be GPGPU.
>
> Python is a very different language from CUDA or OpenCL, hence it's
> not at all straightforward to map Python's semantics to something that will make
> sense for a GPU.

Try googling: copperhead cuda

Also look at:
http://code.google.com/p/copperhead/wiki/Installing

_______________________________________________
[email protected]
http://codespeak.net/mailman/listinfo/pypy-dev
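For context on the Copperhead pointer above: Copperhead compiles a restricted, side-effect-free, data-parallel subset of Python to CUDA. The sketch below runs in plain Python (no GPU, no Copperhead install) and only illustrates that style; the `cu` stand-in decorator is a hypothetical placeholder for Copperhead's real `@cu` decorator, which would JIT-compile the function instead of leaving it unchanged.

```python
def cu(fn):
    """Stand-in for a Copperhead-style @cu decorator.

    In Copperhead this would compile the function to CUDA; here it
    simply returns the function unchanged so the sketch is runnable.
    """
    return fn

@cu
def saxpy(a, x, y):
    # Elementwise a*x + y, written as a pure comprehension:
    # conceptually one GPU thread per output element, which is why
    # only this restricted subset of Python maps cleanly to a GPU.
    return [a * xi + yi for xi, yi in zip(x, y)]

print(saxpy(2.0, [1.0, 2.0, 3.0], [0.5, 0.5, 0.5]))
# -> [2.5, 4.5, 6.5]
```

The restriction to pure, elementwise operations is the point of the thread's argument: general Python semantics (mutation, dynamic dispatch, arbitrary control flow) do not map onto a throughput-oriented GPU architecture, but a data-parallel subset like this can.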
