On 7/17/07, Benjamin Herrenschmidt <[EMAIL PROTECTED]> wrote:
> On Tue, 2007-07-17 at 16:07 -0500, Satya wrote:
> > hello,
> >
> > Upon investigating the below issue further, I found that
> > pte_alloc_map() calls kmap_atomic(). The allocated pte page must be
> > unmapped before invoking any function that might sleep.
> >
> > In this case clear_huge_page() is being called without invoking
> > pte_unmap(). The 'normal' counterpart of hugetlb_no_page() (which is
> > do_no_page() in mm/memory.c) does call pte_unmap() before calling
> > alloc_page() (which might sleep).
> >
> > So, I believe pte_unmap() must be invoked first in hugetlb_no_page().
> > But the problem here is that we do not have a reference to the pmd
> > needed to map the pte again (using pte_offset_map()). The do_no_page()
> > function does have a pmd_t * parameter, so it can remap the pte when
> > required.
> >
> > For now, I have worked around the problem by expanding the
> > pte_alloc_map() macro by hand and replacing kmap_atomic() with kmap(),
> > although I don't think that is the right fix.
> >
> > Let me know if my analysis helps you figure out the problem here.
> > Thanks!
>
> Except that I don't see where pte_alloc_map() has been called
> beforehand... hugetlb_no_page() is called by hugetlb_fault(), which is
> called by __handle_mm_fault() with no lock held.
>
The calling sequence is:

  __handle_mm_fault -> hugetlb_fault -> huge_pte_alloc -> pte_alloc_map

where -> stands for 'calls'. hugetlb_fault() calls hugetlb_no_page()
after returning from huge_pte_alloc(). [huge_pte_alloc() is an
arch-specific callback implemented in the patch referred to in my
earlier posts.]

Satya.

> Ben.

--
...what's remarkable, is that atoms have assembled into entities which
are somehow able to ponder their origins.

http://cs.uic.edu/~spopuri

_______________________________________________
Linuxppc-dev mailing list
Linuxppc-dev@ozlabs.org
https://ozlabs.org/mailman/listinfo/linuxppc-dev