Re: [PATCH] Enable hashdist by default on PowerPC

2009-02-20 Thread David Miller
From: Anton Blanchard an...@samba.org Date: Fri, 20 Feb 2009 16:19:56 +1100 Hi David, I should probably do this on sparc64 too. Why don't we just change this thing to CONFIG_64BIT? I agree. How does this look? Hmmm... my bad, I think you need to keep the CONFIG_NUMA there too as
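A minimal sketch of the guard David is describing, assuming it is the HASHDIST_DEFAULT block in include/linux/bootmem.h that the hunk later in this thread touches; the comment wording here is a guess, only the "keep CONFIG_NUMA, switch the arch list to CONFIG_64BIT" shape is implied by the discussion:

    /* Only NUMA needs hash distribution; 64-bit architectures are assumed
     * here to have enough vmalloc space for the distributed hashes. */
    #if defined(CONFIG_NUMA) && defined(CONFIG_64BIT)
    #define HASHDIST_DEFAULT 1
    #else
    #define HASHDIST_DEFAULT 0
    #endif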

Re: [PATCH] Enable hashdist by default on PowerPC

2009-02-19 Thread Anton Blanchard
Hi David, I should probably do this on sparc64 too. Why don't we just change this thing to CONFIG_64BIT? I agree. How does this look? Anton -- On PowerPC we allocate large boot time hashes on node 0. This leads to an imbalance in the free memory, for example on a 64GB box (4 x 16GB

Re: [PATCH] Enable hashdist by default on PowerPC

2009-02-18 Thread David Miller
From: Anton Blanchard an...@samba.org Date: Wed, 18 Feb 2009 16:11:12 +1100 @@ -145,9 +145,10 @@ extern void *alloc_large_system_hash(const char *tablename, #define HASH_EARLY 0x0001 /* Allocating during early boot? */ /* Only NUMA needs hash distribution. - * IA64 and
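The hunk above is cut short by the archive preview; a hedged reconstruction of the pre-patch block it modifies (comment wording and exact layout partly assumed from the visible fragment) looks like this:

    #define HASH_EARLY	0x0001	/* Allocating during early boot? */

    /* Only NUMA needs hash distribution.
     * IA64 and x86_64 are taken to have enough vmalloc space, which is
     * why only they enabled hashdist by default before this patch.
     */
    #if defined(CONFIG_NUMA) && (defined(CONFIG_IA64) || defined(CONFIG_X86_64))
    #define HASHDIST_DEFAULT 1
    #else
    #define HASHDIST_DEFAULT 0
    #endif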

[PATCH] Enable hashdist by default on PowerPC

2009-02-17 Thread Anton Blanchard
On PowerPC we allocate large boot time hashes on node 0. This leads to an imbalance in the free memory, for example on a 64GB box (4 x 16GB nodes): Free memory: Node 0: 97.03% Node 1: 98.54% Node 2: 98.42% Node 3: 98.53% If we switch to using vmalloc (like ia64 and x86-64) things are more
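The imbalance comes from the allocation path in alloc_large_system_hash() (mm/page_alloc.c). A simplified sketch of that branch, paraphrased from kernels of this era with the retry loop and error handling left out, shows why hashdist moves the hash pages off node 0: the vmalloc path allocates page by page under the boot-time interleave mempolicy, so the pages end up spread across nodes instead of coming out of node 0's bootmem.

    if (flags & HASH_EARLY)
        table = alloc_bootmem_nopanic(size);                /* contiguous, lands on one node */
    else if (hashdist)
        table = __vmalloc(size, GFP_ATOMIC, PAGE_KERNEL);   /* small pages, interleaved across nodes */
    else
        table = (void *)__get_free_pages(GFP_ATOMIC, get_order(size));

When hashdist is set, callers such as the dentry and inode caches also defer their hash setup past early boot so that the vmalloc path can actually be taken.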

Re: [PATCH] Enable hashdist by default on PowerPC

2009-02-17 Thread Benjamin Herrenschmidt
For many HPC applications we are limited by the free available memory on the smallest node, so even though the same amount of memory is used the better balancing helps. Signed-off-by: Anton Blanchard an...@samba.org --- You have numbers? :-) I'm asking mostly because I've been wondering

Re: [PATCH] Enable hashdist by default on PowerPC

2009-02-17 Thread Anton Blanchard
Hi Ben, You have numbers? :-) I'm asking mostly because I've been wondering whether it offsets the 16M pages vs. 4K or 64K pages in terms of TLB/ERAT impact. The speedup is application dependent. Things like linpack usually improve when you throw more memory at them. The potential slowdown