On 11/27/05, Benjamin Herrenschmidt <[EMAIL PROTECTED]> wrote:
>
> > You know, I didn't think about this until just now.  I was thinking
> > about locking the pages and then blocking if there's a write.  But I
> > wonder... we could set up CoW so that if the process tries to write to
> > a page of a texture that is in the process of being uploaded, the page
> > gets copied, the copy assigned to the process, and the original gets
> > freed when the upload is done.  This is a compromise between up-front
> > copying and blocking until the copy is done, because not all of it
> > will necessarily get copied.  The various performance impacts will
> > have to be considered.
>
> I'm not sure the performance impact of the various MMU operations won't
> make it worthless if it's only about dealing with an upload, but heh,
> maybe ...

I really don't know.  My sense is that some caches are virtually
indexed or tagged, so they have to be flushed when you change the
mappings, and TLB flushes are also somewhat expensive.  But that's
true of ANY CPU context switch, and I'm not sure CoW makes it any
worse.
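To make the CoW-during-upload idea concrete, here's a minimal sketch in Python.  It only simulates the bookkeeping (the class and method names are my own invention, not a real driver API, and real CoW would of course happen at page-table level in the kernel): pinned pages feed the upload, a process write to a still-pinned page gets a private copy, and the pinned originals are freed when the upload finishes.

```python
class TextureUpload:
    """Simulated CoW for pages pinned during a texture upload.

    The upload engine reads the pinned originals; if the process
    writes a pinned page first, it gets a private copy, so only
    the pages actually touched ever get copied.
    """

    def __init__(self, pages):
        # pages: dict of page index -> bytearray contents
        self.pinned = dict(pages)   # originals the upload engine reads
        self.process_view = pages   # what the process sees

    def process_write(self, index, offset, value):
        # CoW: if this page is still shared with the in-flight upload,
        # copy it before letting the process modify it.
        if index in self.pinned and self.process_view[index] is self.pinned[index]:
            self.process_view[index] = bytearray(self.pinned[index])
        self.process_view[index][offset] = value

    def upload_done(self):
        # Upload finished: free the pinned originals and report which
        # pages actually had to be copied.
        copied = [i for i in self.pinned
                  if self.process_view[i] is not self.pinned[i]]
        self.pinned.clear()
        return copied


pages = {0: bytearray(b"aaaa"), 1: bytearray(b"bbbb")}
up = TextureUpload(pages)
up.process_write(0, 0, ord("X"))  # page 0 is copied; the upload still sees "aaaa"
assert up.pinned[0] == bytearray(b"aaaa")
assert up.process_view[0] == bytearray(b"Xaaa")
assert up.upload_done() == [0]    # only the written page got copied
```

The point of the compromise shows up in the last line: untouched pages (page 1 here) are never copied, which is what distinguishes this from an up-front copy of the whole texture.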

> I was thinking more about context switching, which may also require
> switching textures in/out of vram.

Like I say, if we have host backing stores, the transfers are cut in
half, because evicting a texture no longer requires downloading it
back to the host first.  Also, if we have centralized memory
management, we can do this swapping in a lazy way.  If an app has 10
textures but uses only two of them during its GPU 'timeslice', there's
less thrashing (or none if we haven't run out of vram).
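The lazy-swapping idea above can be sketched as a small residency manager.  Everything here is an illustrative assumption (a real driver tracks residency per memory region, not per named texture): a texture is only uploaded when the GPU actually references it, and an LRU victim is evicted only when vram is full.

```python
from collections import OrderedDict

class LazyVram:
    """Simulated lazy texture residency with LRU eviction."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.resident = OrderedDict()  # texture id -> size, in LRU order
        self.transfers = 0             # uploads performed (thrashing metric)

    def use(self, tex, size):
        if tex in self.resident:
            self.resident.move_to_end(tex)  # already resident: no transfer
            return
        # Evict least-recently-used textures until the new one fits.
        while self.resident and sum(self.resident.values()) + size > self.capacity:
            self.resident.popitem(last=False)
        self.resident[tex] = size
        self.transfers += 1


# An app owns 10 textures but touches only two per timeslice:
vram = LazyVram(capacity=4)
for _ in range(100):           # many timeslices
    vram.use("a", 1)
    vram.use("b", 1)
assert vram.transfers == 2     # the other 8 textures never move at all
```

Because residency is driven by actual use, the eight idle textures generate zero traffic, which is exactly the "less thrashing" case described above.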

One approach to take is to not keep backing stores at all as long as
there is free vram.  The instant vram fills, the kernel driver
switches tactics and starts keeping them.  Downloads then cluster at
the point where vram first fills (the old, unbacked textures get
downloaded as they're evicted) and taper off quickly once everything
resident is backed.  Best of both worlds.
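Here's a toy model of that hybrid tactic (names and the FIFO eviction order are my own simplifications, not a proposed design): while vram is free, textures get no host backing store; the first time an allocation doesn't fit, the driver flips a flag and backs every texture from then on, so only the early, unbacked textures ever need a vram-to-host download on eviction.

```python
class HybridAllocator:
    """Simulated 'no backing store until vram fills' allocation policy."""

    def __init__(self, vram_pages):
        self.free = vram_pages
        self.keep_backing = False  # flips the first time vram fills up
        self.textures = {}         # id -> (pages, has_backing_store)
        self.readbacks = 0         # evictions that needed a vram->host download

    def alloc(self, tex, pages):
        if self.free < pages:
            self.keep_backing = True  # vram has filled: switch tactics
        # Evict oldest textures until the new one fits.
        while self.free < pages and self.textures:
            victim = next(iter(self.textures))
            vpages, backed = self.textures.pop(victim)
            self.free += vpages
            if not backed:
                self.readbacks += 1  # unbacked early texture: download it first
        self.textures[tex] = (pages, self.keep_backing)
        self.free -= pages


h = HybridAllocator(vram_pages=4)
h.alloc("a", 2); h.alloc("b", 2)  # vram now full; neither is backed
h.alloc("c", 2); h.alloc("d", 2)  # evict a, b: two readbacks (the burst)
h.alloc("e", 2)                   # evict c, which is backed: no download
assert h.readbacks == 2
```

The assertion shows the taper: only the two textures allocated before vram filled ever cost a download; every later eviction is free because the copy already lives on the host.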

_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
