On Jan 26, 2:34 am, Tristam MacDonald <[email protected]> wrote:
> Computing directly on the GPU removes all of those bottlenecks, as the CPU
> and bus are used solely to calculate and transfer control data to the GPU.
> Further, the GPU is many, many times faster than the CPU at updating
> vertex/pixel data - we are looking at 1,500+ shader cores on a
> current-generation GPU, and unlike multiple cores in a CPU, it isn't
> unreasonable to run them all flat out.

That is true in theory; there are just two major flaws in that
theory.
1) The reality: geometry shaders are dog slow, true instancing is no
faster than pseudo-instancing, and why does the following wave
simulation bring even high-end GPUs to their knees for maps exceeding
256x256 cells, even though it runs entirely on the GPU?
http://hg.codeflow.org/gletools/file/e351ce1564b0/examples/waves.py
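For anyone who hasn't looked at it: per frame that kind of demo boils
down to one stencil update over the whole grid, ping-ponged between two
textures. A rough NumPy sketch of the update rule -- the names, the
constant and the boundary handling are my own simplification, not the
actual waves.py code; the GPU version does the same arithmetic per texel
in a fragment shader:

    import numpy as np

    def step(u, u_prev, c=0.5):
        """One explicit wave-equation step over the whole grid.
        On the GPU the same arithmetic runs per texel in a fragment
        shader, writing into a second texture that is then swapped
        with the first (ping-pong)."""
        # 5-point Laplacian, wrap-around boundaries for simplicity
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u_next = 2.0 * u - u_prev + c * lap      # c folds c^2 * dt^2
        return u_next, u

    # 256x256 grid -- the size at which the GPU demo starts to struggle
    u = np.zeros((256, 256), dtype=np.float32)
    u_prev = np.zeros_like(u)
    u[128, 128] = 1.0                            # a single disturbance
    for _ in range(100):
        u, u_prev = step(u, u_prev)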

2) The programming model required to achieve it is extremely complex
and very inflexible, because it is not truly "programmable": you are
just hopping around between buffers in a semi-fixed pipeline. That is
why I have such high hopes for OpenCL applications, where you would no
longer run any of your program on the CPU (except to supply user input).
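To make the contrast concrete, here is roughly the same update written
as an OpenCL kernel driven from Python with pyopencl. This is my own
untested toy sketch (the kernel, buffer names and constants are made up,
not taken from any existing demo), but it shows the point: the CPU only
compiles the kernel, hands over a few buffers and queues launches; there
is no rasterizer, no framebuffer switching, no fixed pipeline to hop
around in.

    import numpy as np
    import pyopencl as cl

    KERNEL = """
    __kernel void wave_step(__global const float *u,
                            __global const float *u_prev,
                            __global float *u_next,
                            const int n, const float c)
    {
        int x = get_global_id(0);
        int y = get_global_id(1);
        int i = y * n + x;
        /* clamp neighbours at the borders */
        int xl = max(x - 1, 0), xr = min(x + 1, n - 1);
        int yd = max(y - 1, 0), yu = min(y + 1, n - 1);
        float lap = u[y * n + xl] + u[y * n + xr]
                  + u[yd * n + x] + u[yu * n + x] - 4.0f * u[i];
        u_next[i] = 2.0f * u[i] - u_prev[i] + c * lap;
    }
    """

    n = 256
    u = np.zeros((n, n), dtype=np.float32)
    u_prev = np.zeros_like(u)
    u[n // 2, n // 2] = 1.0

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)
    mf = cl.mem_flags
    buf_u = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=u)
    buf_prev = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=u_prev)
    buf_next = cl.Buffer(ctx, mf.READ_WRITE, size=u.nbytes)
    prg = cl.Program(ctx, KERNEL).build()

    for _ in range(100):
        # the CPU only queues kernel launches; all data stays on the device
        prg.wave_step(queue, (n, n), None, buf_u, buf_prev, buf_next,
                      np.int32(n), np.float32(0.5))
        buf_prev, buf_u, buf_next = buf_u, buf_next, buf_prev

    cl.enqueue_copy(queue, u, buf_u)   # read back only when you need the result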
