From: Andi Kleen <[EMAIL PROTECTED]>
Date: Tue, 8 Aug 2006 07:11:06 +0200
> The hash sizing code needs far more tweaks. IIRC it can still
> allocate several-GB hash tables on large memory systems (I've seen
> that once in the boot log of a 2TB system). Even on smaller systems
> it is usually too much.
There is already a limit parameter to alloc_large_system_hash(); the
fact that the routing cache passes in zero (which lets the limit
default to 1/16 of all system memory, i.e. up to a 128GB table on that
2TB box) is just a bug. :-)
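For reference, the call site in net/ipv4/route.c goes something like
this (quoting from memory, so treat the exact arguments as
approximate):

	rt_hash_table = (struct rt_hash_bucket *)
		alloc_large_system_hash("IP route cache",
					sizeof(struct rt_hash_bucket),
					rhash_entries,
					(num_physpages >= 128 * 1024) ?
						15 : 17,
					0,	/* flags */
					&rt_hash_log,
					&rt_hash_mask,
					0);	/* limit: 0 --> 1/16 of memory */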
In passing, this immediately suggests a fix to alloc_large_system_hash():
when limit is given as 0, it should follow the same HASH_HIGHMEM rules
which are applied to the "scale" arg. It currently goes:
	if (max == 0) {
		max = ((unsigned long long)nr_all_pages << PAGE_SHIFT) >> 4;
		do_div(max, bucketsize);
	}
Whereas it should probably go:
	if (max == 0) {
		max = (flags & HASH_HIGHMEM) ? nr_all_pages : nr_kernel_pages;
		max = (max << PAGE_SHIFT) >> 4;
		do_div(max, bucketsize);
	}
or something like that.
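For comparison, the scale-based sizing earlier in the same function
already makes exactly that distinction (again quoting mm/page_alloc.c
from memory):

	if (!numentries) {
		/* round applicable memory size up to nearest megabyte */
		numentries = (flags & HASH_HIGHMEM) ? nr_all_pages : nr_kernel_pages;
		numentries += (1UL << (20 - PAGE_SHIFT)) - 1;
		numentries >>= 20 - PAGE_SHIFT;
		numentries <<= 20 - PAGE_SHIFT;

		/* limit to 1 bucket per 2^scale bytes of low memory */
		if (scale > PAGE_SHIFT)
			numentries >>= (scale - PAGE_SHIFT);
		else
			numentries <<= (PAGE_SHIFT - scale);
	}

Mirroring the nr_all_pages/nr_kernel_pages choice in the max
computation would at least stop the default cap from counting highmem
that the table cannot be allocated from anyway.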