2010/3/19 Thomas Hellström <tho...@shipmail.org>:
> Pauli, Dave and Jerome,
>
> Before reviewing this, could you describe a bit how this interfaces with the
> TTM memory accounting? It's important for some systems to be able to set a
> limit beyond which TTM may not pin any pages.
>
> Am I right in assuming that TTM memory accounting kicks in only when TTM
> allocs and frees pages from the pool?

Yes.

TTM memory accounting is still handled in ttm_tt.c, so the pool sits
outside of it. But I can move the accounting calls into the pool if that
is the preferred place.

With the current implementation the pool can hold about 512 pages of
memory beyond the TTM limit.
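
If the accounting moved into the pool, the allocation path could look
roughly like the sketch below. This is only an illustration, assuming
the ttm_mem_global_alloc()/ttm_mem_global_free() interface from
ttm_memory.h; the ttm_pool_get_page() name and the pool->mem_glob field
are made up here, not from the actual patch:

static struct page *ttm_pool_get_page(struct ttm_page_pool *pool)
{
	struct page *p = NULL;

	/* Charge the page against the global TTM limit up front,
	 * whether it comes from the pool or from a fresh alloc. */
	if (ttm_mem_global_alloc(pool->mem_glob, PAGE_SIZE,
				 false, false))
		return NULL;

	spin_lock(&pool->lock);
	if (!list_empty(&pool->list)) {
		p = list_first_entry(&pool->list, struct page, lru);
		list_del(&p->lru);
		--pool->npages;
	}
	spin_unlock(&pool->lock);

	if (!p)
		p = alloc_page(pool->gfp_flags);	/* refill path */
	if (!p)
		ttm_mem_global_free(pool->mem_glob, PAGE_SIZE);

	return p;
}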

> Can the system reclaim *all* pages not used by TTM through a shrink
> mechanism?

Not with the current version, but I can modify the patch so that the
system can reclaim all pages. The current lower limit is 16 pages per
pool, because that avoids refills for 2D-only desktop use.

The limit can already be changed at runtime, so letting the pool scale
down to zero size is only a minor change. But what should happen on a
pool refill if the system has just forced the pool size to zero?
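
Hooking the pool into the VM could look something like the sketch
below, using the current (2.6.33-era) shrinker callback signature;
ttm_pool_free_pages() and ttm_pool_total_pages() are placeholder names
for the pool internals:

static int ttm_pool_mm_shrink(int shrink_pages, gfp_t gfp_mask)
{
	if (shrink_pages > 0)
		ttm_pool_free_pages(shrink_pages);	/* may empty the
							   pool completely */

	/* Tell the VM how many pages the pool could still give back. */
	return ttm_pool_total_pages();
}

static struct shrinker ttm_pool_shrinker = {
	.shrink = ttm_pool_mm_shrink,
	.seeks = DEFAULT_SEEKS,
};

register_shrinker(&ttm_pool_shrinker) would then go into pool init and
unregister_shrinker() into pool teardown.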

>
> In the long run, I'd like to have a pool of non-kernel-mapped pages instead
> of a pool of uncached / write-combined pages, because then we'd have quite
> fast transition from write-combined to write-back, but I guess that will be
> something for the future.

I think this can be simulated with multiple pools if the free logic is
changed from handling a single pool at a time to combining multiple
pools into a single wb transition operation.

The trouble with very large cache transition operations is that
allocating large contiguous arrays in the kernel is problematic. The
current code limits the size of a single cache transition operation to
avoid possible memory allocation failures.
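
A sketch of what the combined free path could look like, with the
batch kept small and on the stack so no large contiguous allocation is
needed; the ttm_page_pool fields and the locking are simplified here:

static void ttm_pools_transition_wb(struct ttm_page_pool **pools,
				    unsigned npools)
{
	struct page *batch[64];		/* bounded: no large kmalloc */
	struct page *p, *tmp;
	unsigned count = 0, i;

	for (i = 0; i < npools; ++i) {
		/* Simplified: the real code would hold pools[i]->lock. */
		list_for_each_entry_safe(p, tmp, &pools[i]->list, lru) {
			list_del(&p->lru);
			batch[count++] = p;
			if (count == ARRAY_SIZE(batch)) {
				/* One wb transition for the whole batch. */
				set_pages_array_wb(batch, count);
				while (count)
					__free_page(batch[--count]);
			}
		}
	}
	if (count) {
		set_pages_array_wb(batch, count);
		while (count)
			__free_page(batch[--count]);
	}
}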

>
> /Thomas
>
>
>
>
> Pauli Nieminen wrote:
>>
>> When allocating wc/uc pages, the cache state transition requires a cache
>> flush, which is an expensive operation. To avoid cache flushes, allocation
>> of wc/uc pages should be done in large groups, so that only a single cache
>> flush is required for a whole group of pages.
>>
>> In some cases drivers need to allocate and deallocate many pages in a
>> short time frame. In this case we can avoid cache flushes if we keep
>> pages in the pool before actually freeing them later.
>>
>> arch/x86 was missing set_pages_array_wc and set_memory_array_wc. Patches 6
>> and 7 add the missing functions and hook set_pages_array_wc into the pool
>> allocator.
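
As an aside, the grouped allocation the cover letter describes boils
down to something like this (illustrative name, error handling
omitted), with the set_pages_array_wc() from patches 6 and 7 doing a
single flush for the whole group:

static int ttm_alloc_wc_batch(struct page **pages, unsigned count,
			      gfp_t gfp_flags)
{
	unsigned i;

	for (i = 0; i < count; ++i) {
		pages[i] = alloc_page(gfp_flags);
		if (!pages[i])
			break;
	}

	/* One wc transition, hence one expensive cache flush,
	 * covers all i pages. */
	set_pages_array_wc(pages, i);

	return i;
}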
