I've got a couple of things that are bothering me if we're looking at 
finalizing the TTM interface in the near term.

Specifically I'm concerned that we don't have a recoverable way to deal 
with out-of-memory situations.

Consider a driver that tries to submit whole frames of q3 running on, 
say, a 32MB card.

There's nothing to stop the driver specifying textures in excess of what 
is available to the memory manager.  If the driver does do this, there's 
no feedback from the kernel that this is a bad idea until after it's 
done.  By then all the geometry is gone, so it's too late to 
restructure the command stream or even fall back to software.

Note that this is a different situation from using 8 huge textures to 
draw a single triangle, where nothing can be done to help.  In the case 
above, the scene could be split and rendered on hardware, although at 
the expense of texture thrashing.

We've benefited from the flexibility of IGPs to avoid this so far, but 
we do want to cope with VRAM, and it seems like we are currently missing 
some of the necessary mechanisms...

So the issues are:

        - how does the driver know ahead of time that it is running out of 
texture space?
        
        - if the answer to the above is "it doesn't", then how do we rescue 
submitted command streams that exceed texture space?

        - relatedly, on cards with texture-from-vram and texture-from-agp, how 
does the driver know which pool to use for a particular texture?



At worst, I can imagine something like the kernel pushing out to 
userspace a size for each pool which is guaranteed to be available for 
validated buffers.
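
Concretely, I imagine something like this (a minimal sketch; the 
struct, the pool ids and the query itself are all invented here, just 
to show the shape of the interface):

#include <stdint.h>

#define TTM_POOL_VRAM 0
#define TTM_POOL_AGP  1

struct ttm_pool_info {
        uint32_t pool;          /* in: which pool is being queried */
        uint64_t total_size;    /* out: total size of the pool */
        uint64_t guaranteed;    /* out: space guaranteed available for
                                 * validated buffers, i.e. total minus
                                 * the no-evict limit */
};

The driver would query this once at startup (something like 
drmCommandWriteRead() on a new command index) and cache the result for 
each pool.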

E.g., on a 32MB card, we could say that there is a maximum no-evict 
space of 8MB, meaning that at all times there is at least 24MB available 
for validated buffers.  There may be more than that.  The userspace 
driver would be responsible for ensuring that all the buffers it wants 
to validate to that pool do not exceed 24MB (given some alignment 
constraints?).  When it approaches that limit, it can either switch to 
other pools or flush the command stream.
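
The accounting on the userspace side could be as simple as this (again 
just a sketch; the names are made up, and I'm assuming page alignment 
for the sake of the example):

#include <stdint.h>
#include <stdbool.h>

/* Per-pool accounting in the userspace driver. */
struct pool_account {
        uint64_t guaranteed;    /* advertised by the kernel at startup */
        uint64_t referenced;    /* total size of buffers referenced by
                                 * relocations in the current stream */
};

/* Round up to the pool's alignment, since the kernel will place
 * buffers at aligned offsets. */
static uint64_t aligned_size(uint64_t size, uint64_t align)
{
        return (size + align - 1) & ~(align - 1);
}

/* Returns true if the buffer still fits under the guaranteed limit.
 * When this returns false, the driver flushes the command stream (or
 * switches to another pool) and tries again. */
static bool pool_reserve(struct pool_account *p, uint64_t size)
{
        uint64_t sz = aligned_size(size, 4096);

        if (p->referenced + sz > p->guaranteed)
                return false;
        p->referenced += sz;
        return true;
}

The important property is that the check happens while the stream is 
still being built, so a failure here is recoverable: nothing has been 
submitted yet, and a flush or a software fallback is still an option.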

If some of the 8MB is free, it can be used by the memory manager to 
avoid evicts.

In the worst case, it just means that the userspace driver flushes more 
often than it strictly has to.  It can even try to exceed the 24MB if 
it wants to, but has to live with the possibility of the memory manager 
saying 'no'.  It still has access to the full amount of memory on the 
card not taken by no-evict buffers.


So summarizing:

        - Enforce a limit on no-evict buffers.  Keep these to a contiguous 
region of the address space (XXX: note this makes pageflipping private 
backbuffers more complex).

        - Advertise the size of the remaining space.

        - Drivers monitor the total size of buffers referenced by relocations, 
and flush before that total reaches the available space in any pool.
        
        - Drivers may try to reference more as long as they are prepared for 
failure (see the sketch after this list).

        - The memory manager uses any extra space to avoid evicts.
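
For the last two points, the over-commit path might look like this 
(using pool_account from the sketch above; drm_validate() and 
flush_command_stream() are stand-ins for whatever the final interfaces 
end up being):

#include <errno.h>
#include <stdint.h>

struct buffer;                                  /* opaque driver buffer */
extern int drm_validate(struct buffer *buf);    /* may return -ENOMEM */
extern void flush_command_stream(void);

/* Try to validate past the guaranteed limit; if the memory manager
 * says no, flush and retry into an empty pool. */
static int emit_with_fallback(struct pool_account *p, struct buffer *buf)
{
        int ret = drm_validate(buf);

        if (ret == -ENOMEM) {
                flush_command_stream();         /* releases referenced buffers */
                p->referenced = 0;
                ret = drm_validate(buf);        /* should fit now, unless the
                                                 * buffer alone is too big
                                                 * for the pool */
        }
        return ret;
}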

This seems like it can be implemented in the time available with minimal 
kernel changes.  It also seems like it will probably work: it pushes 
most of the responsibility into the userspace driver and lets it make 
decisions as the stream is being built, rather than trying to fix 
things up afterwards...

Keith
