Alex Deucher wrote:
--- Ian Romanick <[EMAIL PROTECTED]> wrote:

Only if direct rendering was disabled. If HW accelerated indirect rendering were working you'd just have another client trying to use
the hardware. :)

To get hw accelerated indirect rendering, it is my understanding that GLcore would have to be adapted to decode GLX packets and send them to the DRI module rather than the SW renderer. This doesn't seem too complicated... at least in theory. Can you give me an idea of how this should work?

Close, but not quite exactly. The goal would be to modify the interface between libglx.a (the part that handles the high-level GLX protocol stuff) and libGLcore.a (the part that does the drawing) to be the same as the interface between libGL and the client-side DRI driver. Then a "client-side" DRI driver would be loaded instead of libGLcore.a. That wouldn't change any of the issues in the client-side driver with buffer locations changing.


What about having independent front, back and depth buffers per head?


That wouldn't solve the problem Michel is referring to. If you have
two separate back / depth buffers (one for each head), when a window is moved from one head to the other its buffer locations would change. Right now DRI drivers can't handle that.

When the buffer locations change, could this mean "change to the video RAM on a different card"? :)

That would be a different sort of change altogether. :) I think getting direct rendering on Xinerama will be hard. Abstractly, for either direct or indirect rendering, you can think of there being a dispatch layer between the application and the hardware driver (i.e., libGL.so or libglx.a). In the current case there's only one hardware driver associated with a given screen. That makes the dispatch layer pretty simple. Get a command, send it to the driver.


In the Xinerama case you have multiple drivers per "screen". This complicates the dispatch layer. A lot. The dispatch layer has to determine which driver or drivers need to get which commands. That also makes things slower, modulo clever optimization. Right now when an application calls glVertex3fv, it goes to a function that looks (roughly) like:

void __disp_glVertex3fv( const GLfloat * v )
{
    context * ctx = GET_CURRENT_CONTEXT();
    (*ctx->dispatch_table[ GL_VERTEX3FV_ENTRY ])( v );
}

The dispatch table function then goes right into the driver. To support Xinerama we need to add a layer that actually sends the command to multiple drivers. Right now we don't even have the mechanism to load multiple drivers like that!

The obvious optimization is that if a window is completely on one physical screen the extra layer can be avoided. Basically, act like the non-Xinerama case.

Once the fbconfig work is done, I would be more than happy to discuss a design for doing this. I probably won't have time to contribute much code, though.

In the long term, this is something that needs to be fixed. Ideally we'd like to have back / depth buffers allocated per-drawable (instead of having a single set allocated statically). Doing this would require that drivers support the back / depth buffers changing location. Each time a window was resized, the back / depth buffers would have to resize, which could cause their location to change (think of calling realloc()).

I'd like to get involved with this aspect of the 3D side. Do you have any pointers on where to start as far as planning how this should work? Or has no one even given this much thought yet? ;) Where should I start in the source tree?

There are a few parts to this. The first is a new on-card / AGP memory allocator. I've been doing some work on that, but other demands on my time have kept me away from it for some months. I hope to get back to it soon, though. If you search back through the archives for 'texmem-0-0-2' you'll probably find some discussion about this.

The other parts end up being very driver-specific. Each driver needs to be modified to be able to update the destination frame-buffer and depth-buffer pointers, update the drawing pitch / window size, and perhaps other state. One thing that I'm unsure about is drawing to the front-buffer. If the back-buffer and the depth-buffer are the size of the window, but the front-buffer is the size of the screen, some cards may have pitch problems (i.e., they may not be able to set a different drawing pitch for the rendering target buffer and the depth-buffer).


Some of these issues will apply to pbuffer rendering as well. A good place to start would be to look at the various drivers to see what the problems might be. The people that have been working on the stand-alone drivers may have some insight on this as well.



_______________________________________________
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel
