Jens Owen wrote:
Ian,

I think you're making a mountain out of a molehill, but I like the mountain that you're trying to build. Supporting HW-accelerated indirect rendering would be a good thing, but it's not necessary for the change you're trying to make.

Right. It's not required for what I want to do at all. I just saw that some of the same things needed to happen in order to do either. :)


Ian Romanick wrote:

There is no easy way for the DDX driver to export the extended visual information needed for fbconfigs to the GLX layer. The primary issue is binary compatibility. The size of the __GLXvisualConfigRec structure cannot be changed, and I have not been able to find a way to communicate version information between the GLX layer and the DDX driver.


The 3D (DRI) driver can depend on a specific version of the 2D (DDX) driver, and breaking compatibility between these two drivers, if done properly, is much easier than breaking compatibility with the kernel (DRM) driver.

The 2D and 3D drivers are always distributed together, so it should be rare that someone is using an older DDX driver with a newer Mesa driver, and simply bumping the major number of the DDX DRI version will cause the 3D driver to gracefully fall back to indirect rendering if this mismatch occurs.

That is true today. However, if driver development does move to Mesa CVS and the existing DRI tree gets deprecated, that may not continue to be the case. Even if it were always true, it doesn't solve the problem of getting the extended information into the GLX layer on the server side.
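
For reference, the graceful fallback Jens describes really just boils down to a major-version check during 3D driver initialization. A rough sketch (the names below are made up for illustration, not the actual driver interfaces):

  /* Sketch only -- not the real DDX/DRI interface, just the shape of the check. */
  #include <stdio.h>

  typedef struct {
      int major;   /* bumped on binary-incompatible changes */
      int minor;   /* bumped on backward-compatible additions */
  } ddx_dri_version;

  #define EXPECTED_DDX_MAJOR 5   /* whatever this 3D driver was built against */

  /* Called during 3D driver screen init: if the server's DDX has a different
   * major version, refuse direct rendering so libGL quietly falls back to
   * indirect rendering. */
  static int check_ddx_version(const ddx_dri_version *ddx)
  {
      if (ddx->major != EXPECTED_DDX_MAJOR) {
          fprintf(stderr,
                  "3D driver expects DDX DRI major %d, server has %d; "
                  "falling back to indirect rendering\n",
                  EXPECTED_DDX_MAJOR, ddx->major);
          return 0;   /* caller skips direct-rendering setup */
      }
      return 1;
  }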


That said, I'll comment on HW-accelerated indirect rendering, simply because that's a cool project:

 > I am not a big fan of the fork trick.
 >
 >  From a security perspective, people may want to disable direct
 > rendering.  There is a shared memory segment that an "evil" program
 > could muck with and cause DoS problems.  I probably haven't thought
 > about it enough, but I can't see how you could disable direct
 > rendering AND use the fork method.
 >
 > Regardless, there would be a fair amount of overhead on every GL call.
 > If I'm not mistaken, the server would have to receive the GLX protocol
 > then send it to another process.  There would be the overhead of
 > sending the data to yet another process and the task switch.  That on
 > top of the overhead already in the GLX protocol starts to sound very
 > painful.

Take a look at the DRI high level design doc:

http://dri.sourceforge.net/doc/design_high_level.html

In section 4.3, Indirect Rendering, there's a subsection on Multi-rendering in a single address space. Basically this boils down to threads. A decent document was written on this:

Interesting. I had been thinking about how multiple contexts would be handled from a single daemon process. Using threads would simplify things a lot. That should even provide some performance benefit on SMP machines.
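
To sketch what I was picturing (none of this is real DRI or GLX server code, and the helpers named in the comments are hypothetical): each indirect context would get its own thread inside the server's address space, something like

  /* Sketch only: one worker thread per indirect-rendering context. */
  #include <pthread.h>
  #include <stdlib.h>

  struct indirect_context {
      int client_fd;   /* GLX render protocol stream from one client */
      /* per-context GL and driver state would hang off of this */
  };

  /* Each thread decodes the GLX protocol for its own context and feeds it
   * through the normal DRI driver, taking the hardware lock around each
   * batch just like a direct-rendering client would. */
  static void *context_thread(void *arg)
  {
      struct indirect_context *ctx = arg;

      for (;;) {
          /* read_glx_request(ctx);     -- hypothetical helpers */
          /* decode_and_execute(ctx);                           */
          break;   /* placeholder so the sketch stands alone */
      }
      free(ctx);
      return NULL;
  }

  /* Called when a client asks for an indirect context. */
  static void spawn_context_thread(struct indirect_context *ctx)
  {
      pthread_t tid;
      pthread_create(&tid, NULL, context_thread, ctx);
      pthread_detach(tid);
  }

Since the threads all live in the server's address space, there's no extra copy or task switch to hand the protocol off to a separate process, which was my main overhead worry with the fork approach.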


 [KHLS94] Mark J. Kilgard, Simon Hui, Allen A. Leinwand, and Dave
  Spalding.  X Server Multi-rendering for OpenGL and PEX.  8th Annual X
  Technical Conference, Boston, Mass., January 25, 1994.  Available from
  <http://reality.sgi.com/opengl/multirender/multirender.html>.

However, reality.sgi.com doesn't appear to be online. Does anybody have an archived version of this document?

I know a couple of people who can bug Mark for a copy. :)



