----- Original Message ----
> From: Ian Romanick <[EMAIL PROTECTED]>
> To: DRI <dri-devel@lists.sourceforge.net>
> Sent: Monday, May 19, 2008 10:04:09 AM
> Subject: Re: TTM vs GEM discussion questions
> 
> Ian Romanick wrote:
> 
> | I've read the GEM documentation several times, and I think I have a good
> | grasp of it.  I don't have any non-trivial complaints about GEM, but I
> | do have a couple comments / observations:
> |
> | - I'm pretty sure that the read_domain = GPU, write_domain = CPU case
> | needs to be handled.  I know of at least one piece of hardware with a
> | kooky command buffer that wants to be used that way.
> |
> | - I suspect that in the (near) future we may want multiple read_domains.
> | I can envision cases where applications using, for example, vertex
> | feedback mode would want to read from a buffer while the GPU is also
> | reading from the buffer.
> |
> | - I think drm_i915_gem_relocation_entry should have a "size" field.
> | There are a lot of cases in the current GL API (and more to come) where
> | the entire object will trivially not be used.  Clamped LOD on textures
> | is a trivial example, but others exist as well.
> 
> Another question occurred to me.  What happens on over-commit?  Meaning,
> in order to draw 1 polygon, more memory must be accessible to the GPU
> than exists.  This was a problem that I never solved in my 2004
> proposal.  At the time on R200 it was possible to have 6 maximum size
> textures active which would require more than the possible on-card + AGP
> memory.
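
(For concreteness, the "size" field Ian suggests might look like the sketch
below.  The surrounding layout follows the i915 GEM header as far as I know;
the size field itself is the hypothetical addition.)

#include <linux/types.h>        /* __u32, __u64 */

struct drm_i915_gem_relocation_entry {
        __u32 target_handle;    /* handle of the buffer being pointed at */
        __u32 delta;            /* value added to the target's offset */
        __u64 offset;           /* byte offset of the reloc in this buffer */
        __u64 presumed_offset;  /* where we last believed the target to be */
        __u32 read_domains;
        __u32 write_domain;
        __u32 size;             /* hypothetical: bytes of the target actually
                                 * referenced, e.g. the clamped-LOD subset */
};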

To the over-commit question: I don't actually think the problem is solvable 
for buffer-based memory managers.  The best we can do is spot the failure and 
recover, either early, as the commands are submitted through the API, or at 
some later point, and for some meaning of 'recover' (e.g. fail cleanly, fall 
back, use smaller mipmaps, disable texturing, etc.), as in the sketch below.
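
Schematically, that detect-and-retry chain might look like the code below.
None of these helpers are real DRM/GEM API; this is only a hedged sketch of
driver-side recovery on over-commit.

struct context;                                /* driver context, opaque here */

int try_validate_buffers(struct context *ctx); /* 0, or -ENOMEM on over-commit */
int submit_commands(struct context *ctx);
int use_smaller_mipmaps(struct context *ctx);  /* drop top mip levels */
void disable_texturing(struct context *ctx);
int fallback_software_rasterizer(struct context *ctx);

static int emit_draw(struct context *ctx)
{
        /* Common case: everything the draw references fits. */
        if (try_validate_buffers(ctx) == 0)
                return submit_commands(ctx);

        /* Over-commit: shrink the working set and retry. */
        if (use_smaller_mipmaps(ctx) == 0 &&
            try_validate_buffers(ctx) == 0)
                return submit_commands(ctx);

        /* Still too big: give up on texturing, then on the hardware. */
        disable_texturing(ctx);
        if (try_validate_buffers(ctx) == 0)
                return submit_commands(ctx);

        return fallback_software_rasterizer(ctx);
}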

The only real way to solve it is to move to a page-based virtualization of 
GPU memory, which requires hardware support and isn't possible on most cards.  
Note that this is different from per-process GPU address spaces, and is a 
significantly tougher problem even on hardware that supports it.

Note there are two concepts with similar common names:

   - virtual GPU memory -- i.e. per-context page tables, but still a 
buffer-based memory manager, with textures pre-loaded into GPU memory prior 
to command execution.

   - virtualized GPU memory -- as above, but with page faulting, typically 
IRQ-driven with kernel assistance.  Parts of textures may be paged in and out 
as required, according to the "memory access" patterns of active shaders (see 
the sketch after this list).
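
To make the distinction concrete, the second model implies a fault path
roughly like the one below.  Every identifier here is hypothetical -- no
shipping driver does this today -- it only illustrates IRQ-driven page-in
with kernel assistance.

#include <linux/interrupt.h>    /* irqreturn_t, IRQ_HANDLED */

struct gpu_fault {
        unsigned int context_id;  /* which per-context page table faulted */
        unsigned long gpu_addr;   /* faulting GPU virtual address */
};

/* Hypothetical helpers: read fault details from MMIO, map backing pages
 * into the context's GPU page table, and restart or kill the context. */
void read_fault_info(void *dev, struct gpu_fault *fault);
int page_in(void *dev, unsigned int context_id, unsigned long gpu_addr);
void resume_context(void *dev, unsigned int context_id);
void kill_context(void *dev, unsigned int context_id);

static irqreturn_t gpu_fault_irq(int irq, void *dev)
{
        struct gpu_fault fault;

        read_fault_info(dev, &fault);

        /* Page in backing memory for just the faulting page -- e.g. one
         * tile of a texture -- rather than the whole object up front. */
        if (page_in(dev, fault.context_id, fault.gpu_addr) == 0)
                resume_context(dev, fault.context_id);
        else
                kill_context(dev, fault.context_id);    /* unrecoverable */

        return IRQ_HANDLED;
}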

It's not clear to me which of the above the r300 & nv people are aiming at, but 
in my opinion the latter is such a significant departure from what we have been 
thinking about that I have always believed it should be addressed by a new set 
of interfaces.


Keith
