On Wed, Mar 26, 2003 at 11:18:18AM -0800, Ian Romanick wrote:
> Philip Brown wrote:
> >  ....
> > 
> >  New client comes in. Requests new coarse chunk o' VRAM from GLX.
> >  Oops, we've used up the 16 megs pre-allocated.
> >  Used to be 11 megs free, but X server has been busy, and there is
> >  now "only" 8 megs free.
> >  GLX calls xf86AllocateOffscreenLinear() to grab another 4 megs of
> >  VRAM from the X server, then hands some part of it off to the new
> >  client
> 
> What happens when you have 15 processes running with GL contexts that 
> each need 24MB of texture memory per frame?  Nearly all of the 
> allocations in question are transient.  A texture only "needs" to be in 
> graphics memory while it's being used by the hardware.  If the texture 
> manager has to pull from a hodge podge of potentially discontiguous 
> blocks of memory (as in your example) there will be a lot of requests 
> for memory that we should be able to satisfy that will fail.  The result 
> is a fallback to the software rasterizer.


Ah, I see what's on your mind now  ...

> What needs to happen to make everyone "play nice" together is:
> 
>       Coarse grained, block oriented cache / paged memory system
>             |                             |
>             V                             |
>       Core X routines                     |
>                                           V
>                            3D driver texture allocator
> 
> In other words, what you've brought up here is a completely orthogonal 
> issue.

Orthogonal to the issue that is foremost on your mind, namely
 "how do you 'page out' textures from a GLX client, to give the active
  client more room" -- yes.

[I'd be happy to discuss that actual issue in irc with you next time ;-)
 but I'll spare the list that one for now]

So, since it is orthogonal, you should have no objection to the lowest-level
allocation of video memory being done by GLX calling the xf86Allocate
routines, yes?
(i.e. "leave the X core code alone")


I believe this whole thread started off with references to hacking X server
code to call DRI extension code. That is what I am arguing against, as
unnecessary. Extension code should call core code, not the other way
around  (except via API-registered callbacks, of course).




_______________________________________________
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel
