Philip Brown wrote:
On Wed, Mar 26, 2003 at 09:14:37AM -0800, Ian Romanick wrote:

Philip Brown wrote:

Consider the GLX_dri_reserve_mem as equivalent to sbrk()
Then have a more local memory allocator for subdividing the large chunk.
That's going to be a lot more efficient than relying on the XFree86 routines
to do fine-level memory management anyway. XFree86's routines aren't really
optimized for that sort of thing, I think.

Okay. You're just not listening. THAT WON'T ALLOW US TO IMPLEMENT A FUNCTIONING 3D DRIVER. Texture memory is like a cache that is shared by multiple running processes. We need to be able to do the equivalent of paging blocks out of that cache when one process needs more memory. An OS needs something under sbrk in order to implement paged memory, and so do we.
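The cache behaviour described above can be sketched roughly as follows. This is an illustrative toy in C, not real DRI code: the struct and function names are invented, and the point is only that eviction must be able to reclaim blocks owned by *other* contexts, which a simple sbrk-style grow-only pool cannot do.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch of "paging out" texture blocks: texture memory
 * is a cache shared by all contexts, and a context that needs room
 * may evict blocks it does not own.  Names are illustrative. */

#define MAX_BLOCKS 8

struct tex_block {
    int    owner;   /* context id, -1 if free */
    size_t size;    /* bytes */
    int    age;     /* higher = least recently used */
};

static struct tex_block heap[MAX_BLOCKS];

/* Evict least-recently-used blocks (regardless of owner) until at
 * least 'need' bytes are freed; returns bytes actually freed. */
static size_t evict_until(size_t need)
{
    size_t freed = 0;
    while (freed < need) {
        int victim = -1;
        for (int i = 0; i < MAX_BLOCKS; i++)
            if (heap[i].owner >= 0 &&
                (victim < 0 || heap[i].age > heap[victim].age))
                victim = i;
        if (victim < 0)
            break;               /* nothing left to evict */
        freed += heap[victim].size;
        heap[victim].owner = -1; /* owning context must revalidate later */
    }
    return freed;
}
```

The key property is the cross-owner eviction: process A's allocation can kick out process B's textures, and B simply re-uploads them next time it needs them.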


eh?


Card has 32 megs of VideoRAM.
Initialization phase:
   X grabs 4 megs for actual video display
   X grabs 1 meg(?) for pixmaps
   DRI/GLX starts, notices that there is 27 megs free.
   Decides to be nice, and only pre-allocs 16 megs.
   Parcels out that 16 megs to clients somehow.
   (clients will probably grab memory in 2-4 meg chunks from GLX,
   then use "local" memory manager on that)

....

 New client comes in. Requests a new coarse chunk o' VRAM from GLX
 Oops. we've used up the 16 megs pre-allocated.
 Used to be 11 megs free, but X server has been busy, and there is
 now "only" 8 megs free.
 GLX calls xf86AllocateOffscreenLinear() to grab another 4 megs of
 VRAM from the X server, then hands some part of it off to the new
 client
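The scheme quoted above can be sketched in a few lines. This is a toy model under invented names; the `xf86AllocateOffscreenLinear()` call is stubbed out here, since the real one lives in the XFree86 offscreen memory manager and returns a linear area, not a byte count.

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the proposed scheme: GLX pre-allocates a pool, hands
 * out coarse chunks, and grows the pool from the X server when the
 * pool runs dry.  All names and numbers are illustrative. */

static size_t pool_free = 16u << 20;  /* 16 MB pre-allocated by GLX */
static size_t vram_free = 11u << 20;  /* what the X server still has */

/* Stand-in for xf86AllocateOffscreenLinear(); returns bytes granted. */
static size_t stub_xf86AllocateOffscreenLinear(size_t want)
{
    if (want > vram_free)
        return 0;
    vram_free -= want;
    return want;
}

/* Hand a coarse chunk to a client, growing the pool if needed.
 * Returns bytes granted, 0 on failure. */
static size_t coarse_alloc(size_t want)
{
    if (pool_free < want)
        pool_free += stub_xf86AllocateOffscreenLinear(4u << 20);
    if (pool_free < want)
        return 0;
    pool_free -= want;
    return want;
}
```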

What happens when you have 15 processes running with GL contexts that each need 24MB of texture memory per frame? Nearly all of the allocations in question are transient. A texture only "needs" to be in graphics memory while it's being used by the hardware. If the texture manager has to pull from a hodgepodge of potentially discontiguous blocks of memory (as in your example), a lot of requests that we should be able to satisfy will fail. The result is a fallback to the software rasterizer.
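The failure mode above is simple fragmentation, and a few lines of arithmetic show it. In this toy sketch (invented names, illustrative sizes), three discontiguous coarse chunks total 8 MB free, yet a 4 MB texture cannot be placed because it needs one contiguous range within a single chunk.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the fragmentation objection: total free space exceeds
 * the request, but no single coarse chunk can hold it, so the
 * allocation fails and the driver falls back to software. */

static size_t chunk_free[3] = { 3u << 20, 3u << 20, 2u << 20 }; /* 8 MB total */

/* A texture must occupy one contiguous range inside one chunk. */
static int can_alloc_contiguous(size_t want)
{
    for (int i = 0; i < 3; i++)
        if (chunk_free[i] >= want)
            return 1;
    return 0;
}
```

With a single unified pool the same 4 MB request would trivially succeed after evicting idle textures; split across per-client coarse chunks, it cannot.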


Grab the texmem-0-0-1 branch and look at the code in lib/GL/mesa/src/drv/common/texmem.[ch], read the texmem-0-0-2 design document that was posted to the list (and discussed WRT this very issue at great length), and then get back to me.

... Instead, it will make a lot of work for DRI developers (every process with a GL context will have to be notified when any context makes a magic sbrk call).


No, you don't have to notify all GL clients. See above.

Ya know, I heard this guy "Keith Whitwell" wrote some nice mmXXX()
routines in 1999 that, coincidentally enough, handle *exactly* *this* *type*
*of* *situation* for a "local memory manager" for GLX clients.
Now, what are the odds of that? Maybe we could get that guy to help out
here somehow...

Okay, seriously?!? I've spent the last 18 months working with this code. Texture memory management in the DRI has been my primary focus for over a year. I know what's in there. I know how it works. I know what its shortcomings are.


The current memory management system looks like this:

     Core X routines
           |
           V
     Coarse grained, block oriented cache / paged memory system
           |
           V
     Keith's mmHeap_t code

What needs to happen to make everyone "play nice" together is:

     Coarse grained, block oriented cache / paged memory system
           |                             |
           V                             |
     Core X routines                     |
                                         V
                          3D driver texture allocator
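The inverted layering in the second diagram can be sketched as a single pager interface that both subsystems sit under. Again a toy in C with invented names: the point is that the core X routines and the 3D texture allocator become peer *clients* of one coarse-grained owner of VRAM, so either side's blocks are candidates for reclamation.

```c
#include <assert.h>
#include <stddef.h>

/* Toy sketch of the proposed layering: one coarse-grained pager owns
 * all of VRAM; X pixmap code and the 3D texture allocator both
 * request blocks through the same entry point.  Names are invented. */

struct pager {
    size_t total;   /* total VRAM managed */
    size_t used;    /* bytes currently handed out */
};

/* Any client requests blocks here; real code would page out another
 * client's blocks on pressure instead of just failing. */
static size_t pager_request(struct pager *p, size_t want)
{
    if (p->total - p->used < want)
        return 0;
    p->used += want;
    return want;
}
```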

In other words, what you've brought up here is a completely orthogonal issue.



_______________________________________________
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel
