Andrew James Richardson wrote:
> Hello there,
> I have just finished parallelisation of Mesa vertex transformation and
> found that, as the vertex buffer is quite small (~100 vertices to the
> nearest order of magnitude), when using SMP vertex transformation you get
> no noticeable performance gain because of the parallelisation overhead.
> If the vertex buffer were larger (~1000), parallelisation would be
> advantageous.
One question to ask is: regardless of the vertex buffer size, typically how
many vertices are issued between glBegin/glEnd or state changes? Does Q3
(for example) render any objects/characters with > 1000 vertices?

> I'm now writing an interface layer that passes all OGL routine calls to
> another thread, thereby running OGL on one thread and the client on
> another. This should improve performance on Q3A and other games like
> that.

That implies some sort of buffering between the app and the renderer. How
will that work?

> The other thing that got me thinking was this: I went to the Ideal Home
> Exhibition in London over the w/e and got sight of the new iMacs; they
> apparently don't use VESA drivers for their desktop, but use the 3D
> engine instead. This got me thinking about the console DRI port that
> someone has already done. If you can get DRI to run from the console,
> would it be possible to write an X driver using OGL/DRI (I know that this
> may defeat the idea of DRI, but then again it may not...). Have people
> like Keith, Brian and Daryll got any idea whether it would run any
> faster/slower than VESA/SVGA drivers, and whether any 3D-like special
> effects would be easy to incorporate into the OGL X driver?

It would probably be slower. Window system rendering generally involves 2D
operations like fills, blits, color expansions, etc. Those are precisely
the sort of things we haven't optimized in the 3D driver code
(glDraw/CopyPixels, glBitmap).

-Brian

_______________________________________________
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel