MinChan Kim taught me something:

From: MinChan Kim <[EMAIL PROTECTED]>

Sorry for the late reply.
Our country is celebrating Thanksgiving day,
so I am on holiday. :)

On Mon, Sep 15, 2008 at 3:49 PM, Peter Teoh <[EMAIL PROTECTED]> wrote:
> Can I ask/learn something from you?
>
> a.  What is the meaning of "cleardown"?

It just means "remove".

> b.   I am designing a module to list all the unassigned physical
> page frames in the system; does the idea sound logical?  (i.e., page
> frames that have not been assigned to any page table's mapping yet,
> nor have been added to the freelist.)  I suspect this should
> always be zero in a properly running kernel - correct?

Unless your kernel is doing something special, I think so.
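(For what it's worth, a rough sketch of such a scan. This is hedged
heavily: the walk and the checks are simplifications, only the head
page of a free buddy block carries PG_buddy, and pages parked in a
per-cpu pcp list would also show up in the count.)

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/bootmem.h>      /* max_pfn */

/*
 * Sketch: count page frames that are neither in use (refcount 0,
 * unmapped) nor visibly sitting on a buddy free list.  On a healthy
 * kernel this should stay at or very near zero.
 */
static int __init limbo_init(void)
{
        unsigned long pfn, limbo = 0;

        for (pfn = 0; pfn < max_pfn; pfn++) {
                struct page *page;

                if (!pfn_valid(pfn))
                        continue;
                page = pfn_to_page(pfn);
                if (PageReserved(page))
                        continue;       /* kernel image, holes, ... */
                if (page_count(page) != 0)
                        continue;       /* allocated to someone */
                if (PageBuddy(page))
                        continue;       /* head of a free buddy block */
                /* tail pages of free blocks and pcp pages also land here */
                limbo++;
        }
        printk(KERN_INFO "possibly-unaccounted page frames: %lu\n", limbo);
        return 0;
}

static void __exit limbo_exit(void)
{
}

module_init(limbo_init);
module_exit(limbo_exit);
MODULE_LICENSE("GPL");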

> c.   The following function:
>
> /*
>  * Free a 0-order page
>  */
> static void free_hot_cold_page(struct page *page, int cold)
> {
>        struct zone *zone = page_zone(page);
>        struct per_cpu_pages *pcp;
>        unsigned long flags;
>
>        if (PageAnon(page))
>                page->mapping = NULL;
>        if (free_pages_check(page))
>                return;
>
>        if (!PageHighMem(page)) {
>                debug_check_no_locks_freed(page_address(page), PAGE_SIZE);
>                debug_check_no_obj_freed(page_address(page), PAGE_SIZE);
>        }
>        arch_free_page(page, 0);
>        kernel_map_pages(page, 1, 0);
>
>        pcp = &zone_pcp(zone, get_cpu())->pcp;
>        local_irq_save(flags);
>        __count_vm_event(PGFREE);
>        if (cold)
>                list_add_tail(&page->lru, &pcp->list);
>        else
>                list_add(&page->lru, &pcp->list);
>        set_page_private(page, get_pageblock_migratetype(page));
>        pcp->count++;
>        if (pcp->count >= pcp->high) {
>                free_pages_bulk(zone, pcp->batch, &pcp->list, 0);
>                pcp->count -= pcp->batch;
>        }
>        local_irq_restore(flags);
>        put_cpu();
> }
>
> So pages will alternate between being allocated and sitting in pcp->list, correct?

Yes - for 0-order (single) pages, and as long as the pcp's limit isn't exceeded.

> but don't they have to be freed from the page table entries as well?

That has to be done in an earlier procedure; by the time a page gets
here, its page table entries are already gone.
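(Roughly, for an anonymous page being unmapped on these 2.6.2x
kernels, that earlier procedure looks like this - a sketch of the
call chain, not verbatim source:)

/*
 * zap_pte_range()                  clears the PTE itself
 *   -> page_remove_rmap()          drops the reverse-map count
 *   -> tlb_remove_page()           queues the page for freeing
 * tlb_finish_mmu()
 *   -> free_pages_and_swap_cache()
 *     -> release_pages()           drops the last reference
 *       -> free_hot_cold_page()    the function quoted above
 *
 * So by the time free_hot_cold_page() runs, no page table entry
 * references the page any more.
 */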

> d.   And comparing this list and the next list:
>
> static inline void
> add_page_to_active_list(struct zone *zone, struct page *page)
> {
>        list_add(&page->lru, &zone->active_list);
>        __inc_zone_state(zone, NR_ACTIVE);
> }
>
> static inline void
> add_page_to_inactive_list(struct zone *zone, struct page *page)
> {
>        list_add(&page->lru, &zone->inactive_list);
>        __inc_zone_state(zone, NR_INACTIVE);
> }
>
> What is the difference between the two lists?   The above
> add_page_to_active_list() is called from vmscan.c, which alternates
> pages between being in active use and not.
> So can I say that the total of all pages on the zone's active and
> inactive lists is always a subset of (or equal to) all the pages in
> use in the kernel (the opposite of pcp->list)?

Yes, always a subset.
It is unlikely to be equal,
because the kernel also has reserved pages and dynamic memory used
only by the kernel itself, which never appear on those lists.
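(That invariant can be spot-checked per zone with the vmstat counters
used by the code above - a debug-only sketch; check_lru_subset() is a
made-up name:)

#include <linux/kernel.h>
#include <linux/mmzone.h>
#include <linux/vmstat.h>

/*
 * LRU pages (active + inactive) are a subset of the in-use pages:
 * reserved pages and kernel-internal allocations (slab, page tables,
 * ...) are in use but never appear on an LRU list.
 */
static void check_lru_subset(struct zone *zone)
{
        unsigned long lru = zone_page_state(zone, NR_ACTIVE) +
                            zone_page_state(zone, NR_INACTIVE);
        unsigned long in_use = zone->present_pages -
                               zone_page_state(zone, NR_FREE_PAGES);

        WARN_ON(lru > in_use);  /* subset; in practice strictly smaller */
}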

> e.   And in the next function:
>
> static inline void __free_one_page(struct page *page,
>                struct zone *zone, unsigned int order)
> {
>        unsigned long page_idx;
>        int order_size = 1 << order;
>        int migratetype = get_pageblock_migratetype(page);
>
>        if (unlikely(PageCompound(page)))
>                destroy_compound_page(page, order);
>
>        page_idx = page_to_pfn(page) & ((1 << MAX_ORDER) - 1);
>
>        VM_BUG_ON(page_idx & (order_size - 1));
>        VM_BUG_ON(bad_range(zone, page));
>
>        __mod_zone_page_state(zone, NR_FREE_PAGES, order_size);
>        while (order < MAX_ORDER-1) {
>                unsigned long combined_idx;
>                struct page *buddy;
>
>                buddy = __page_find_buddy(page, page_idx, order);
>                if (!page_is_buddy(page, buddy, order))
>                        break;
>
>                /* Our buddy is free, merge with it and move up one order. */
>                list_del(&buddy->lru);
>                zone->free_area[order].nr_free--;
>                rmv_page_order(buddy);
>                combined_idx = __find_combined_index(page_idx, order);
>                page = page + (combined_idx - page_idx);
>                page_idx = combined_idx;
>                order++;
>        }
>        set_page_order(page, order);
>        list_add(&page->lru,
>                 &zone->free_area[order].free_list[migratetype]);
>
> =================> We are looking at freeing a page leading to adding it
> to the per-zone free list.  Why?  What is the difference from adding it
> to the per-cpu free list?  And why not just one per-zone or per-cpu free
> list?  I am confused.

I am not sure I get your point, but if I understand it:
the pcp list is just temporary storage; pages on the pcp list are
eventually moved back to the zone's free list.
The pcp list exists purely as a fast path for allocating 0-order
pages, since 0-order allocations happen very frequently.
The free list's purpose is to manage all free pages so that the
kernel's allocation requests can be satisfied.
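(In other words, allocation tries the per-cpu list first and refills
it from the buddy lists on a miss - a loose paraphrase of
buffered_rmqueue() in the same file; the per-migratetype matching and
the interrupt disabling of the real code are omitted, and
alloc_zero_order() is a made-up name:)

/*
 * Loose paraphrase of the 0-order allocation fast path.  Hot pages
 * come off the head of the pcp list (likely still cache-warm), cold
 * pages off the tail - mirroring how free_hot_cold_page() added them.
 */
static struct page *alloc_zero_order(struct zone *zone, int cold,
                                     int migratetype)
{
        struct per_cpu_pages *pcp = &zone_pcp(zone, get_cpu())->pcp;
        struct page *page;

        if (!pcp->count)        /* pcp empty: pull a batch from the buddy */
                pcp->count += rmqueue_bulk(zone, 0, pcp->batch,
                                           &pcp->list, migratetype);
        if (!pcp->count) {
                put_cpu();
                return NULL;    /* the zone itself is out of pages */
        }

        page = list_entry(cold ? pcp->list.prev : pcp->list.next,
                          struct page, lru);
        list_del(&page->lru);
        pcp->count--;
        put_cpu();
        return page;
}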


>        zone->free_area[order].nr_free++;
> }
>
> On Mon, Sep 8, 2008 at 10:11 PM, MinChan Kim <[EMAIL PROTECTED]> wrote:
>> On Fri, Sep 5, 2008 at 7:19 PM, Andy Whitcroft <[EMAIL PROTECTED]> wrote:
>>> When we are about to release a page we perform a number of actions
>>> on that page.  We clear down any anonymous mappings, confirm that
>>> the page is safe to release, check for freeing locks, before mapping
>>> the page should that be required.  Pull this processing out into a
>>> helper function for reuse in a later patch.
>>>
>>> Note that we do not convert the similar cleardown in free_hot_cold_page()
>>> as the optimiser is unable to squash the loops during the inline.
>>>
>>> Signed-off-by: Andy Whitcroft <[EMAIL PROTECTED]>
>>> Acked-by: Peter Zijlstra <[EMAIL PROTECTED]>
>>> Acked-by: KOSAKI Motohiro <[EMAIL PROTECTED]>
>>> Reviewed-by: Rik van Riel <[EMAIL PROTECTED]>
>>> ---
>>>  mm/page_alloc.c |   43 ++++++++++++++++++++++++++++++-------------
>>>  1 files changed, 30 insertions(+), 13 deletions(-)
>>>
>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>> index f52fcf1..b2a2c2b 100644
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_alloc.c
>>> @@ -489,6 +489,35 @@ static inline int free_pages_check(struct page *page)
>>>  }
>>>
>>>  /*
>>> + * Prepare this page for release to the buddy.  Sanity check the page.
>>> + * Returns 1 if the page is safe to free.
>>> + */
>>> +static inline int free_page_prepare(struct page *page, int order)
>>> +{
>>> +       int i;
>>> +       int reserved = 0;
>>> +
>>> +       if (PageAnon(page))
>>> +               page->mapping = NULL;
>>
>> Why do you need to clear down the anonymous mapping?
>> I think if you don't convert this cleardown in free_hot_cold_page(),
>> you don't need it here.
>>
>> If you do it, bad_page() can't do its job (a stale page->mapping
>> would be cleared away before free_pages_check() could flag it).
>>
>>> +       for (i = 0 ; i < (1 << order) ; ++i)
>>> +               reserved += free_pages_check(page + i);
>>> +       if (reserved)
>>> +               return 0;
>>> +
>>> +       if (!PageHighMem(page)) {
>>> +               debug_check_no_locks_freed(page_address(page),
>>> +                                                       PAGE_SIZE << order);
>>> +               debug_check_no_obj_freed(page_address(page),
>>> +                                          PAGE_SIZE << order);
>>> +       }
>>> +       arch_free_page(page, order);
>>> +       kernel_map_pages(page, 1 << order, 0);
>>> +
>>> +       return 1;
>>> +}
>>> +
>>> +/*
>>>  * Frees a list of pages.
>>>  * Assumes all pages on list are in same zone, and of same order.
>>>  * count is the number of pages to free.
>>> @@ -529,22 +558,10 @@ static void free_one_page(struct zone *zone, struct page *page, int order)
>>>  static void __free_pages_ok(struct page *page, unsigned int order)
>>>  {
>>>        unsigned long flags;
>>> -       int i;
>>> -       int reserved = 0;
>>>
>>> -       for (i = 0 ; i < (1 << order) ; ++i)
>>> -               reserved += free_pages_check(page + i);
>>> -       if (reserved)
>>> +       if (!free_page_prepare(page, order))
>>>                return;
>>>
>>> -       if (!PageHighMem(page)) {
>>> -       debug_check_no_locks_freed(page_address(page),PAGE_SIZE<<order);
>>> -               debug_check_no_obj_freed(page_address(page),
>>> -                                          PAGE_SIZE << order);
>>> -       }
>>> -       arch_free_page(page, order);
>>> -       kernel_map_pages(page, 1 << order, 0);
>>> -
>>>        local_irq_save(flags);
>>>        __count_vm_events(PGFREE, 1 << order);
>>>        free_one_page(page_zone(page), page, order);
>>> --
>>> 1.6.0.rc1.258.g80295
>>>

-- 
Regards,
Peter Teoh
