Commit:     9ab37b8f21b4dfe256d736c13738d20c88a1f3ad
Parent:     dd0ec16fa6cf2498b831663a543e1b67fce6e155
Author:     Paul Mundt <[EMAIL PROTECTED]>
AuthorDate: Fri Jan 5 16:36:30 2007 -0800
Committer:  Linus Torvalds <[EMAIL PROTECTED]>
CommitDate: Fri Jan 5 23:55:23 2007 -0800

    [PATCH] Sanely size hash tables when using large base pages

    At the moment the inode/dentry cache hash tables (sized in common by way
    of alloc_large_system_hash()) are incorrectly sized by their respective
    detection logic when we attempt to use large base pages on systems with
    little memory.

    This results in odd behaviour when using a 64kB PAGE_SIZE, such as:

    Dentry cache hash table entries: 8192 (order: -1, 32768 bytes)
    Inode-cache hash table entries: 4096 (order: -2, 16384 bytes)

    The mount cache hash table is seemingly the only one that gets this
    right, by directly taking PAGE_SIZE into account.

    The following patch catches the bogus values and rounds the allocation
    up to at least 0-order.

    Signed-off-by: Paul Mundt <[EMAIL PROTECTED]>
    Signed-off-by: Andrew Morton <[EMAIL PROTECTED]>
    Signed-off-by: Linus Torvalds <[EMAIL PROTECTED]>
 mm/page_alloc.c |    4 ++++
 1 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8c1a116..4a9a83f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3321,6 +3321,10 @@ void *__init alloc_large_system_hash(const char 
                        numentries >>= (scale - PAGE_SHIFT);
                        numentries <<= (PAGE_SHIFT - scale);
+               /* Make sure we've got at least a 0-order allocation.. */
+               if (unlikely((numentries * bucketsize) < PAGE_SIZE))
+                       numentries = PAGE_SIZE / bucketsize;
        numentries = roundup_pow_of_two(numentries);