On Sat, 2007-12-22 at 09:09 +0100, Thomas Hellström wrote:

> This should be easy. We can have drm_ttm_get_page() come in two
> varieties, one which tests for NULL only and one that tests for
> dummy_page or NULL and allocates a new page if needed.
> drm_ttm_get_page() is called from the nopfn() function.
Oh, right -- we'll still get nopfn called when the process attempts to
write to the page.

> Making ttm buffer objects pageable shouldn't be that complicated.
> However, Linux makes it a bit awkward.

Yes, I know -- we have to have some kind of process context to tie the
pages to. For unshared objects, we can allocate address space from the
current user's VMA. For shared objects, we can create files in tmpfs.
This would, obviously, encourage us to use fewer sharable objects,
which seems fine to me.

> My idea for this is to only page out objects that are in local
> memory. If there are no objects in local memory when an accounting
> limit is hit, we will have to simply walk the LRU lists and put
> objects in local memory.

By 'local memory' do you mean pages which are not mapped to the GTT?
Or do you mean pages which are not on the graphics card?

For the former, I agree -- page out stuff which is not pinned to the
card, then walk the LRU and unpin (waiting if necessary) objects and
push them out as well.

For the latter, I'd like to figure out how we can page the backing
store pages without also pulling the data from the card; in an ideal
situation, we'd have data on the GPU backed by pages allocated from
the swap space and not consuming CPU memory at all.

> So the interesting questions arise: Should we have a single large VMA
> supplied by the drm master (X server) or should each client be
> required to present a VMA which is equal to or larger in size than
> all the buffer objects it intends to create?

I think we must account for these pages in the users' VMA. Anything
else would allow them to escape any memory allocation limits that
might be set, and I think we want to eventually eliminate the master
role in any case.

> Also, if the VMAs are created using mmap() there might be an option
> to specify a separate "swap" file for the buffer objects. Something
> that we also perhaps would want to explore. Also, we need to make
> sure we do the right thing across "fork"s.
Providing an fd-based API for object allocation has some appeal;
applications would then just share objects through any existing file
system. However, that places a fairly substantial burden on the
applications to agree on file naming conventions. Using tmpfs would
avoid these issues and provide the same benefit. This would obviously
need to be system dependent, although anything which followed the sysv
shm semantics should be easily duplicated on non-Linux systems.

-- 
[EMAIL PROTECTED]
_______________________________________________
Dri-devel mailing list
Dri-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/dri-devel