Peter Alexander Wrote:

> On 13/11/10 7:04 PM, parallel noob wrote:
> > The situation is different with GPUs. My Radeon 5970 has 3200 cores. When 
> > the core count doubles, the FPS rating in games almost doubles. They 
> > definitely are not running Erlang style processes (one for GUI, one for 
> > sounds, one for physics, one for network). That would leave 3150 cores 
> > unused.
> 
> This is all to do with "what" you are actually parallelizing. The task 
> of the GPU (typically transforming vertices and doing lighting 
> calculations per pixel) is trivially parallelized.
> 
> If you did that task on CPU it would still be trivially parallelized 
> (just spawn as many threads as you have cores and split up the screen's 
> pixels between the threads).
> 
> Besides being optimized for this particular task, GPUs have no silver 
> bullet for handling concurrency. Trying to get a GPU to parallelize a 
> task that is not easily parallelizable (e.g. depth-first search on a 
> graph) is just as hard as getting a CPU to do it. In that case, 
> doubling the core count will not help.
> 
> GPU-style batching of tasks handles rendering well, but Erlang-style 
> message passing is a better model in the general case.

He or she also doesn't really get the point of the GPU. Writing programs for the 
GPU is genuinely simple for the right workloads: take C, enrich it with parallel 
arrays, and change a few things here and there. The old C program now runs 200 
times as fast, provided the task is graphics processing (z-buffer filtering, 
blur, brightness correction). When you need something more complex, control flow 
becomes hard and locks don't always work; you're left with academic non-concepts 
and Erlang-style message passing.
