On 03/01/2017 10:48 AM, George Dunlap wrote:
> On 27/02/17 17:06, Boris Ostrovsky wrote:
>>>> Briefly, the new algorithm places dirty pages at the end of the heap's
>>>> page list for each node/zone/order to avoid having to scan the full list
>>>> while searching for dirty pages. One processor from each node checks
>>>> whether the node has any dirty pages and, if such pages are found, scrubs
>>>> them. Scrubbing itself happens without holding the heap lock, so other
>>>> users may access the heap in the meantime. If, while the idle loop is
>>>> scrubbing a particular chunk of pages, that chunk is requested by the
>>>> heap allocator, scrubbing stops immediately.
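(Not the actual patch; just a minimal userspace sketch of the locking
pattern, using hypothetical names such as heap_lock and chunk->requested,
to show where the heap lock is dropped and where scrubbing bails out:)

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <string.h>

#define PAGE_SIZE 4096

struct chunk {
    unsigned char (*pages)[PAGE_SIZE];  /* pages backing this free chunk */
    unsigned int nr_pages;
    bool dirty;
    atomic_bool requested;              /* set when the allocator takes it */
};

static pthread_mutex_t heap_lock = PTHREAD_MUTEX_INITIALIZER;

static void idle_scrub(struct chunk *c)
{
    /* Claim the chunk under the heap lock... */
    pthread_mutex_lock(&heap_lock);
    if ( !c->dirty || atomic_load(&c->requested) )
    {
        pthread_mutex_unlock(&heap_lock);
        return;
    }
    pthread_mutex_unlock(&heap_lock);

    /* ...but scrub with the lock dropped so allocations can proceed. */
    for ( unsigned int i = 0; i < c->nr_pages; i++ )
    {
        if ( atomic_load(&c->requested) )
            return;                     /* allocator wants it: stop at once */
        memset(c->pages[i], 0, PAGE_SIZE);
    }

    /* Retake the lock only to update the bookkeeping. */
    pthread_mutex_lock(&heap_lock);
    if ( !atomic_load(&c->requested) )
        c->dirty = false;
    pthread_mutex_unlock(&heap_lock);
}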
>>> Why not maintain two lists?  That way it is O(1) to find either a clean
>>> or dirty free page.
>> Since dirty pages are always at the tail of the page lists, we are not
>> really searching the lists. As soon as a clean page is found (starting
>> from the tail) we can stop.
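To make that invariant concrete, a small self-contained sketch (hypothetical
types, not Xen's page_list_head/page_info API): dirty pages are appended at
the tail and clean ones prepended at the head, so the dirty pages always form
a suffix of each list and a walk from the tail can stop at the first clean
page it meets.

#include <stdbool.h>

struct page {
    struct page *prev, *next;
    bool dirty;
};

struct page_list {
    struct page *head, *tail;
};

/* Freed pages: dirty ones go to the tail, clean ones to the head, so the
 * clean/dirty boundary is always a single point somewhere in the list. */
static void page_list_put(struct page_list *l, struct page *pg)
{
    if ( pg->dirty )
    {
        pg->prev = l->tail;
        pg->next = NULL;
        if ( l->tail )
            l->tail->next = pg;
        else
            l->head = pg;
        l->tail = pg;
    }
    else
    {
        pg->next = l->head;
        pg->prev = NULL;
        if ( l->head )
            l->head->prev = pg;
        else
            l->tail = pg;
        l->head = pg;
    }
}

/* Walk from the tail and stop at the first clean page: everything closer
 * to the head is clean as well, so no full-list scan is ever needed. */
static void scrub_dirty_suffix(struct page_list *l)
{
    for ( struct page *pg = l->tail; pg && pg->dirty; pg = pg->prev )
        pg->dirty = false;              /* stand-in for the real scrub */
}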
> Sure, having a back and a front won't add significant overhead; but it
> does make things a bit strange.  What does it buy us over having two lists?

If we implement the dirty heap just like the regular heap (i.e.
node/zone/order), that data structure alone is almost a megabyte under
current assumptions (i.e. sizeof(page_list_head) * MAX_NUMNODES * NR_ZONES *
(MAX_ORDER+1) = 16 * 41 * 21 * 64 = 881664 bytes).
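
Concretely, the structure in question would be a second set of list heads
mirroring the existing _heap layout, something like the hypothetical sketch
below; the macro values are the figures above, and which of 41/64 maps to
NR_ZONES vs. MAX_NUMNODES is an assumption on my part.

#define MAX_NUMNODES  64                /* assumed */
#define NR_ZONES      41                /* assumed */
#define MAX_ORDER     20                /* so MAX_ORDER + 1 == 21 orders */

struct page_list_head { void *next, *tail; };   /* 16 bytes on 64-bit */

static struct page_list_head
    dirty_heap[MAX_NUMNODES][NR_ZONES][MAX_ORDER + 1];

/* sizeof(dirty_heap) == 16 * 64 * 41 * 21 == 881664 bytes, i.e. ~861 KiB */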

-boris

