Philip Brown wrote:
On Tue, Mar 25, 2003 at 11:21:22PM -0800, Ian Romanick wrote:

Philip Brown wrote:

On Tue, Mar 25, 2003 at 05:07:38PM -0800, Ian Romanick wrote:

The idea is that the X server and the 3D driver can use the same memory manager for off-screen memory. That way pixmap cache, textures, and vertex buffers all get managed in the "same" way and share ALL of off screen memory.

Yes, and existing core X server APIs allow that.

How can it do that with direct rendering?


Well, okay, there needs to be a little extra handholding between server and
client. So, you add a GLX_dri_reserve_mem extension or something that
reserves video memory by proxy. Or do it in some more direct fashion,
bypassing GLX protocol overhead if you prefer, but still letting the GLX
module on the server reserve it cleanly through the server interfaces.

That's the clean way to do it, even if it requires more coding on the DRI
side.

For non-video (i.e. AGP) memory, the issue isn't relevant, since the client
can do the reservation through the drm kernel driver directly, I believe.

After reading this I honestly believe that you and I must be talking about different things. I'm talking about allocating a block of memory to hold a texture that's going to be read directly by the rendering hardware. The texture should be kept in directly readable memory (either on-card or AGP) unless the space is needed by some other operation.


Not only that, this is an operation that needs to be fast. As fast as possible, in fact. It may happen many times per frame, and its performance can dominate the overall rendering performance.

What you describe requires, at the very least, two context switches and several system calls for I/O. I can almost guarantee that the performance of doing that will approach the (unacceptable for many applications) performance of indirect rendering.

You are correct. There is a clean solution, and there is a clean solution that will perform acceptably. Yo-yoing through the X server to allocate a block of texture memory is NOT it!

Even worse, if the 3D driver code "lives" in Mesa CVS and the core server code lives in XFree86 CVS, we would have the same code living in two different trees!

You only have duplicate code for this if you code it "wrong" :-> Duplicate code is usually one of the basic signs that "the code needs a redesign".

Video mem is a core X server resource, and should be reserved through the
core server, always.

It would be different if there were no core api for allocating video ram.
But there is. So use it.

[okay, the docs suck. but it's still there]

Technically speaking, we do currently use those APIs. In order to get the performance that we *NEED* and all the functionality that we *NEED*, we *HAVE TO* reserve all of the memory that isn't used by the front/back/depth buffers. The existing internal X API just is not usable for what we need. Period.


Right now our memory manager is layered on top of the X memory manager. Our memory manager provides functionality that the X memory manager does not. This will be even more true with the texmem-0-0-2 code. In order to improve the operation of the whole system we want to layer the API with the smaller set of functionality on top of the API with the larger set of functionality.



_______________________________________________
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel
