----- Original Message -----
> From: Thomas Hellström <[EMAIL PROTECTED]>
> To: Stephane Marchesin <[EMAIL PROTECTED]>
> Cc: DRI <dri-devel@lists.sourceforge.net>
> Sent: Monday, May 19, 2008 9:49:21 AM
> Subject: Re: TTM vs GEM discussion questions
> 
> Stephane Marchesin wrote:
> > On 5/18/08, Thomas Hellström wrote:
> >
> >  
> >>  > Yes, that was really my point. If the memory manager we use (whatever
> >>  > it is) does not allow this kind of behaviour, that'll force all cards
> >>  > to use a kernel-validated command submission model, which might not be
> >>  > too fast and would be more difficult to implement on such hardware.
> >>  >
> >>  > I'm not in favor of having multiple memory managers, but if the chosen
> >>  > one is both slower and more complex to support in the future, that'll
> >>  > be a loss for everyone. Unless we want to have another memory manager
> >>  > implementation in 2 years from now...
> >>  >
> >>  > Stephane
> >>  >
> >>
> >> First, TTM does not enforce kernel command submission, but it does force
> >>  you to tell the kernel about command completion status so that the
> >>  kernel can move and delete buffers.
> >>    
> >
> > Yes, emitting the moves from the kernel is not a necessity. If your
> > card can do memory protection, you can set up the protection bits in
> > the kernel and ask user space to do the moves. Doing so gives you
> > in-order execution in the current context, which means that in the
> > normal case rendering does not need to synchronize with fences at all.
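
A minimal sketch of the user-space side of that scheme (the opcode and helper
names below are invented for illustration): the kernel has already adjusted the
GPU page-table protection for the buffer, and the client simply queues the copy
in its own channel, relying on in-order execution rather than a fence wait.

#include <stdint.h>

/* Emit one dword into this context's command ring (assumed helper). */
extern void emit(uint32_t dw);

#define CMD_COPY 0x2f00   /* fictitious copy opcode */

static void move_buffer_in_channel(uint64_t src_gpu_addr,
                                   uint64_t dst_gpu_addr,
                                   uint32_t dwords)
{
    emit(CMD_COPY);
    emit((uint32_t)(src_gpu_addr >> 12));
    emit((uint32_t)(dst_gpu_addr >> 12));
    emit(dwords);
    /* Anything this context renders afterwards is queued behind the
     * copy, so in the common case no fence wait is needed. */
}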
> >
> >  
> >>  I'm not sure how you could avoid that with ANY kernel-based memory
> >>  manager, but I would be interested to know how you expect to solve that
> >>  problem.
> >>    
> >
> > See above: if the kernel controls the memory protection bits, it can
> > pretty much enforce things on user space anyway.
> >
> >  
> Well, the primary reason for the kernel to sync and move a buffer object 
> would be to evict it from VRAM, in which case I don't think the 
> user-space approach would be a valid solution, unless, of course, you 
> plan to use VRAM as a cache and back it all with system memory.
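
A rough sketch of the kernel-side eviction path being described here, simplified
and with invented helpers rather than the real TTM interfaces: the kernel cannot
safely move the object until it knows the GPU has finished with it, which is
exactly why it needs completion information from user space.

#include <errno.h>
#include <stddef.h>

struct fence;                       /* opaque GPU completion object */

struct buffer_object {
    struct fence *last_fence;       /* last GPU use of this buffer */
    void         *vram_addr;
    void         *sysmem_addr;
    unsigned long size;
};

extern int   fence_wait(struct fence *f);   /* block until signalled */
extern void *sysmem_alloc(unsigned long size);
extern void  copy_vram_to_sysmem(void *dst, void *src, unsigned long size);

/* Evict one object from VRAM so the space can be reused. */
static int evict_buffer(struct buffer_object *bo)
{
    int ret;

    /* Without completion information from user space this wait is
     * impossible, and the copy below could race with the GPU. */
    ret = fence_wait(bo->last_fence);
    if (ret)
        return ret;

    bo->sysmem_addr = sysmem_alloc(bo->size);
    if (!bo->sysmem_addr)
        return -ENOMEM;

    copy_vram_to_sysmem(bo->sysmem_addr, bo->vram_addr, bo->size);
    bo->vram_addr = NULL;           /* the VRAM range is now free */
    return 0;
}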
> 
> Just out of interest (I think this is a valid thing to know, and I'm not 
> being TTM / GEM specific here):
> 1) I've never seen a kernel round-trip per batchbuffer as a huge 
> performance problem, and it surely simplifies things for an in-kernel 
> memory manager. Do you have any data to back this?
> 2) What do the Nvidia proprietary drivers do with respect to this?


What I understand is that each hardware context (and there are lots of hardware 
contexts) has a ringbuffer which is mapped into the address space of the driver 
assigned to that context.  The driver just inserts commands into that ringbuffer 
and the hardware itself schedules & context-switches between rings.
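
A very rough model of that submission path, with the structure layout and the
doorbell register invented for illustration: the client copies a batch into its
mapped ring and writes the new put offset, and no kernel call is involved per
submission.

#include <stdint.h>

struct hw_channel {
    volatile uint32_t *ring;      /* command ring, mapped into this client */
    volatile uint32_t *put_reg;   /* put-pointer / doorbell, also mapped */
    uint32_t           put;       /* client-side copy of the put offset */
    uint32_t           mask;      /* ring size in dwords, minus one */
};

/* Copy a batch into the ring and tell the hardware about it.  Checking
 * for free space against the hardware's get pointer is omitted here. */
static void channel_submit(struct hw_channel *ch,
                           const uint32_t *batch, uint32_t ndwords)
{
    uint32_t i;

    for (i = 0; i < ndwords; i++)
        ch->ring[(ch->put + i) & ch->mask] = batch[i];

    ch->put = (ch->put + ndwords) & ch->mask;
    *ch->put_reg = ch->put;       /* no ioctl; the GPU schedules the rings */
}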

Then the question is how this interacts with a memory manager.  There still 
has to be some entity managing the global view of memory -- just as the kernel 
does for the regular VM system on the CPU.  A context/driver shouldn't be able 
to rewrite its own page tables, for instance.
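
One way to picture the interface that keeps that global view in the kernel,
using purely hypothetical names: a context asks the kernel to bind a buffer at a
GPU address, the kernel validates the request, and only the kernel ever writes
the GPU page tables, just as the CPU mm never lets a process edit its own.

#include <stdint.h>
#include <errno.h>

struct gpu_context;                     /* per-client state */

struct gpu_buffer {
    unsigned long size;
    /* backing pages, current placement, ... */
};

extern struct gpu_buffer *lookup_buffer(struct gpu_context *ctx, uint32_t handle);
extern int  range_is_free(struct gpu_context *ctx, uint64_t gpu_addr,
                          unsigned long size);
extern void write_gpu_ptes(struct gpu_context *ctx, uint64_t gpu_addr,
                           struct gpu_buffer *buf);

/* Kernel-side handler for "map this buffer into my GPU address space". */
static int gpu_bind(struct gpu_context *ctx, uint32_t handle, uint64_t gpu_addr)
{
    struct gpu_buffer *buf = lookup_buffer(ctx, handle);

    if (!buf)
        return -EINVAL;
    if (!range_is_free(ctx, gpu_addr, buf->size))
        return -EBUSY;

    /* Only this code path touches the page tables; the context itself
     * never gets write access to them. */
    write_gpu_ptes(ctx, gpu_addr, buf);
    return 0;
}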

Keith

