Commit:     d506a7725114aaddbf982fd18621b3e0e5c27f1b
Parent:     2b45ab3398a0ba119b1f672c7c56fd5a431b7f0a
Author:     Benjamin Herrenschmidt <[EMAIL PROTECTED]>
AuthorDate: Sun May 6 14:50:02 2007 -0700
Committer:  Linus Torvalds <[EMAIL PROTECTED]>
CommitDate: Mon May 7 12:12:55 2007 -0700

    get_unmapped_area handles MAP_FIXED on powerpc

    The current get_unmapped_area code calls the f_ops->get_unmapped_area or the
    arch one (via the mm) only when MAP_FIXED is not passed.  That makes it
    impossible for archs to impose proper constraints on regions of the virtual
    address space.  To work around that, get_unmapped_area() then calls some
    hugetlbfs specific hacks.

    This causes several problems, among others:

    - It makes it impossible for a driver or filesystem to do the same thing
      that hugetlbfs does (for example, to allow a driver to use larger pages
      to map external hardware) if that requires applying a constraint on the
      addresses (constraining that mapping in certain regions and other mappings
      out of those regions).

    - Some archs like arm, mips, sparc, sparc64, sh and sh64 already want
      MAP_FIXED to be passed down in order to deal with aliasing issues.  The
      code is there to handle it...  but is never called.

    This series of patches moves the logic to handle MAP_FIXED down to the
    arch/driver get_unmapped_area() implementations, and then changes the
    generic code to always call them.  The hugetlbfs hacks then disappear
    from the generic code.

    Since I need to do some special 64K pages mappings for SPEs on cell, I
    need to work around the first problem at least.  I have further patches
    thus implementing a "slices" layer that handles multiple page sizes
    through slices of the address space for use by hugetlbfs, the SPE code,
    and possibly others, but it requires that series of patches first.

    There is still a potential (but not practical) issue due to the fact that
    filesystems/drivers implementing g_u_a will effectively bypass all arch
    checks.  This is not an issue in practice as the only filesystems/drivers
    using that hook are doing so for arch specific purposes in the first
    place.

    There is also a problem with mremap that will completely bypass all arch
    checks.  I'll try to address that separately, I'm not 100% certain yet how,
    possibly by making it not work when the vma has a file whose f_ops has a
    get_unmapped_area callback, and by making it use is_hugepage_only_range()
    before expanding into a new area.

    Also, I want to turn is_hugepage_only_range() into a more generic
    is_normal_page_range() as that's really what it will end up meaning when
    used in stack grow, brk grow and mremap.

    None of the above "issues" however are introduced by this patch, they are
    already there, so I think the patch can go in for 2.6.22.

    This patch:

    Handle MAP_FIXED in powerpc's arch_get_unmapped_area() in all 3
    implementations of it.

    Signed-off-by: Benjamin Herrenschmidt <[EMAIL PROTECTED]>
    Acked-by: William Irwin <[EMAIL PROTECTED]>
    Cc: Paul Mackerras <[EMAIL PROTECTED]>
    Cc: Richard Henderson <[EMAIL PROTECTED]>
    Cc: Ivan Kokshaysky <[EMAIL PROTECTED]>
    Cc: Russell King <[EMAIL PROTECTED]>
    Cc: David Howells <[EMAIL PROTECTED]>
    Cc: Andi Kleen <[EMAIL PROTECTED]>
    Cc: "Luck, Tony" <[EMAIL PROTECTED]>
    Cc: Kyle McMartin <[EMAIL PROTECTED]>
    Cc: Grant Grundler <[EMAIL PROTECTED]>
    Cc: Matthew Wilcox <[EMAIL PROTECTED]>
    Cc: "David S. Miller" <[EMAIL PROTECTED]>
    Cc: Adam Litke <[EMAIL PROTECTED]>
    Cc: David Gibson <[EMAIL PROTECTED]>
    Signed-off-by: Andrew Morton <[EMAIL PROTECTED]>
    Signed-off-by: Linus Torvalds <[EMAIL PROTECTED]>
 arch/powerpc/mm/hugetlbpage.c |   21 +++++++++++++++++++++
 1 files changed, 21 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index d0ec887..1f07f70 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -566,6 +566,13 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
        if (len > TASK_SIZE)
                return -ENOMEM;
+       /* handle fixed mapping: prevent overlap with huge pages */
+       if (flags & MAP_FIXED) {
+               if (is_hugepage_only_range(mm, addr, len))
+                       return -EINVAL;
+               return addr;
+       }
        if (addr) {
                addr = PAGE_ALIGN(addr);
                vma = find_vma(mm, addr);
@@ -641,6 +648,13 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
        if (len > TASK_SIZE)
                return -ENOMEM;
+       /* handle fixed mapping: prevent overlap with huge pages */
+       if (flags & MAP_FIXED) {
+               if (is_hugepage_only_range(mm, addr, len))
+                       return -EINVAL;
+               return addr;
+       }
        /* dont allow allocations above current base */
        if (mm->free_area_cache > base)
                mm->free_area_cache = base;
@@ -823,6 +837,13 @@ unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
        /* Paranoia, caller should have dealt with this */
        BUG_ON((addr + len)  < addr);
+       /* Handle MAP_FIXED */
+       if (flags & MAP_FIXED) {
+               if (prepare_hugepage_range(addr, len, pgoff))
+                       return -EINVAL;
+               return addr;
+       }
        if (test_thread_flag(TIF_32BIT)) {
                curareas = current->mm->context.low_htlb_areas;