On Tue, Jan 26, 2010 at 2:21 PM, Florian Bösch <[email protected]> wrote:
> On Jan 26, 7:14 pm, Tristam MacDonald <[email protected]> wrote:
> > In essence, the only performance issue your program is suffering from is
> > that it performs a huge amount of very expensive work, *with no discernible
> > effect to the user*. As the old adage goes, "the fastest code is the code
> > that isn't run"...
>
> My GeForce 8800 claims 64 GB/s memory bandwidth, yet my wave.py test manages
> just about 75 MB/s (disregarding drawing). So somehow OpenGL/nvidia/
> khronos/extensions/whatever manages to cut my byte throughput by a factor
> of roughly 1000...

Which only goes to prove my point: your wave.py example is in no way
bandwidth bound. You might be limited by the way you are thrashing the
z-buffer with 200x overdraw. You might be ALU-limited in transforming over a
million vertices per frame. You are almost certainly throwing away huge
amounts of potential bandwidth by not cache-optimising those million-odd
vertices.

> The point isn't that wave.py could be more efficient somehow; the point is
> that, as a test of how fast you can push bytes around on your GPU using
> OpenGL, it's an utter disaster.

Of course it is an utter disaster - it doesn't in any way test how fast you
can push bytes around on the GPU using OpenGL. Instead, it tests how fast you
can transform, clip and rasterise one million vertices - and no one would
dispute that that is enough to bring even a high-end GPU to its knees.

Equally, I can argue that a Ferrari should be able to hit its stated top
speed of 200 km/h, while traveling uphill, in a rainstorm, and towing a
trailer.

--
Tristam MacDonald
http://swiftcoder.wordpress.com/
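
[Archive note: the distinction drawn above, raw transfer rate versus geometry
throughput, can be measured directly. Below is a minimal sketch of what a
pure upload-throughput test might look like using pyglet's ctypes GL
bindings; the buffer size, iteration count, and the choice of
glBufferSubData with glFinish for timing are illustrative assumptions, not
code from this thread.]

    import ctypes
    import time

    import pyglet
    from pyglet.gl import (GL_ARRAY_BUFFER, GL_STREAM_DRAW, GLuint,
                           glBindBuffer, glBufferData, glBufferSubData,
                           glFinish, glGenBuffers)

    # Any GL call needs a current context; a hidden pyglet window provides one.
    window = pyglet.window.Window(visible=False)

    SIZE = 16 * 1024 * 1024   # bytes per upload; arbitrary illustrative value
    ITERATIONS = 50           # arbitrary illustrative value

    data = (ctypes.c_ubyte * SIZE)()   # zero-filled client-side source buffer

    buf = GLuint()
    glGenBuffers(1, ctypes.byref(buf))
    glBindBuffer(GL_ARRAY_BUFFER, buf)
    glBufferData(GL_ARRAY_BUFFER, SIZE, None, GL_STREAM_DRAW)  # allocate storage only

    glFinish()                # finish setup before starting the clock
    start = time.time()
    for _ in range(ITERATIONS):
        # Re-upload the whole buffer; nothing is transformed, clipped or
        # rasterised, so only the host-to-GPU transfer path is exercised.
        glBufferSubData(GL_ARRAY_BUFFER, 0, SIZE, data)
    glFinish()                # block until the driver has processed every upload
    elapsed = time.time() - start

    total_mb = SIZE * ITERATIONS / (1024.0 * 1024.0)
    print('%.0f MB in %.3f s = %.0f MB/s host-to-GPU' % (total_mb, elapsed,
                                                         total_mb / elapsed))

Because no vertices are transformed or drawn, a number from a test like this
reflects the PCIe upload path rather than the card's internal 64 GB/s memory
bandwidth, which is exactly the distinction the reply above is making about
wave.py.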
