Hi All,
It was brought to my attention via private email that although the
work Keith W. has been doing on Mesa has resulted in a 38% speedup,
the overall performance of Mesa is still only about half that of
native ICD drivers on Windows. This raises the question: why is
Mesa still so slow?
One of the things we have discovered about the performance of Mesa
in GLDirect is that as the amount of geometry in the scene
increases, performance asymptotically approaches about half the
speed of the native ICD drivers. This has been a consistent result
in all our testing and benchmarking of GLDirect against native IHV
drivers.
I have been thinking about this some more, and it suddenly dawned on
me where the problem is. To put it succinctly: DATA COPYING AND
REFORMATTING!
Currently Mesa does all geometry and lighting transformations into
its own internal geometry pipeline, which uses its own internal
vertex format. That pipeline is then read by the back-end graphics
hardware driver (i.e. our Direct3D driver or the 3dfx driver),
re-formatted into the format the hardware driver requires (i.e. the
3dfx vertex format or the Direct3D vertex format), and that
re-formatted data is then passed to the hardware. Hence every vertex
undergoes an unneeded copying and re-formatting step between the
internal geometry pipeline and the hardware-specific geometry
structures.
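To make the cost concrete, here is a minimal sketch of the kind of
per-vertex conversion pass being described. The structure layouts
and names are hypothetical (loosely modelled on a software pipeline
format and a D3D-style transformed-and-lit vertex), not the actual
Mesa or Direct3D definitions:

```c
#include <assert.h>

/* Hypothetical internal pipeline vertex -- NOT the real Mesa layout. */
typedef struct {
    float clip[4];   /* post-transform position (x, y, z, w) */
    float color[4];  /* RGBA as floats in [0, 1] */
    float tex[2];    /* texture coordinates */
} InternalVertex;

/* Hypothetical hardware vertex, D3D-TLVERTEX-ish -- illustrative only. */
typedef struct {
    float x, y, z, rhw;
    unsigned int diffuse;  /* packed ARGB */
    float tu, tv;
} HwVertex;

static unsigned int pack_argb(const float c[4])
{
    unsigned int a = (unsigned int)(c[3] * 255.0f);
    unsigned int r = (unsigned int)(c[0] * 255.0f);
    unsigned int g = (unsigned int)(c[1] * 255.0f);
    unsigned int b = (unsigned int)(c[2] * 255.0f);
    return (a << 24) | (r << 16) | (g << 8) | b;
}

/* The extra pass under discussion: every vertex that was already
 * written once by the pipeline is read back and rewritten in the
 * hardware's layout before it can be submitted. */
void reformat_vertices(const InternalVertex *in, HwVertex *out, int n)
{
    int i;
    for (i = 0; i < n; i++) {
        out[i].x       = in[i].clip[0];
        out[i].y       = in[i].clip[1];
        out[i].z       = in[i].clip[2];
        out[i].rhw     = in[i].clip[3];
        out[i].diffuse = pack_argb(in[i].color);
        out[i].tu      = in[i].tex[0];
        out[i].tv      = in[i].tex[1];
    }
}
```

For a scene-sized vertex count this loop touches every vertex a
second time, which is exactly the memory traffic that scales with
geometry load.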
In essence this means that all the work Keith W. and others have
been doing to speed up the geometry pipeline is wasted unless this
data copying and re-formatting can be avoided.
The only way I can see to avoid this is to restructure the Mesa
geometry pipeline so that its internal vertex data is kept in
exactly the format the hardware driver requires. This may in the end
require that hardware drivers also contain the back-end geometry
pipeline, so that we can still have plug-in back-end hardware
drivers that are optimised for the hardware in question (this will
also tie in nicely with hardware geometry processing). Avoiding the
re-formatting step will also be particularly important for
DMA-driven hardware, since in that environment maximum performance
will only be achieved if Mesa can transform vertex data directly
into the hardware DMA buffers where appropriate.
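A sketch of what that might look like: the driver hands the
transform stage a destination buffer (e.g. a mapped DMA region) and
the transform writes hardware-format vertices into it directly, so
the intermediate internal format never exists. All names here are
hypothetical, and the hardware vertex layout is illustrative:

```c
#include <assert.h>

/* Hypothetical hardware position layout -- illustrative only. */
typedef struct {
    float x, y, z, rhw;
} HwVertex;

/* Transform object-space points by a column-major 4x4 matrix,
 * emitting results straight into the driver-supplied buffer
 * (which could be a mapped DMA region) -- one write per vertex,
 * no second re-formatting pass. */
void transform_into_buffer(const float (*obj)[3], int n,
                           const float mvp[16], HwVertex *dma_out)
{
    int i;
    for (i = 0; i < n; i++) {
        const float *v = obj[i];
        float x = mvp[0]*v[0] + mvp[4]*v[1] + mvp[8]*v[2]  + mvp[12];
        float y = mvp[1]*v[0] + mvp[5]*v[1] + mvp[9]*v[2]  + mvp[13];
        float z = mvp[2]*v[0] + mvp[6]*v[1] + mvp[10]*v[2] + mvp[14];
        float w = mvp[3]*v[0] + mvp[7]*v[1] + mvp[11]*v[2] + mvp[15];
        dma_out[i].x   = x / w;
        dma_out[i].y   = y / w;
        dma_out[i].z   = z / w;
        dma_out[i].rhw = 1.0f / w;
    }
}
```

The design point is simply that the destination layout is chosen by
the back-end driver, not by the core pipeline, which is why the
back-end transform stage would need to live with (or be specialised
by) each hardware driver.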
Comments?
Regards,
+---------------------------------------------------------------+
| SciTech Software - Building Truly Plug'n'Play Software! |
+---------------------------------------------------------------+
| Kendall Bennett | Email: [EMAIL PROTECTED] |
| Director of Engineering | Phone: (530) 894 8400 |
| SciTech Software, Inc. | Fax : (530) 894 9069 |
| 505 Wall Street | ftp : ftp.scitechsoft.com |
| Chico, CA 95928, USA | www : http://www.scitechsoft.com |
+---------------------------------------------------------------+
_______________________________________________
Mesa-dev maillist - [EMAIL PROTECTED]
http://lists.mesa3d.org/mailman/listinfo/mesa-dev