From: Eric Dumazet [EMAIL PROTECTED]
Date: Sun, 04 Nov 2007 12:31:28 +0100
[PATCH] INET : removes per bucket rwlock in tcp/dccp ehash table
As done two years ago on the IP route cache table (commit
22c047ccbc68fa8f3fa57f0e8f906479a062c426), we can avoid using one lock per
hash bucket for the huge TCP/DCCP hash tables.
On Wed, Nov 07, 2007 at 02:41:14AM -0800, David Miller wrote:
Any use that makes sense is a case where the code should be rewritten to
decrease the lock hold time or convert to RCU.
I agree too; rwlocks are more expensive when contention is low, so let's do
the rwlock-to-spinlock change in a next step (separate patch), because it
also means changing lhash_lock.
Thanks to Jarek, I added the locks cleanup in dccp_fini().
But I suspect distribution kernels enable CONFIG_HOTPLUG_CPU, so
num_possible_cpus() will be NR_CPUS.
Nope, on x86 num_possible_cpus() is derived from BIOS tables these days.
-Andi
Andi Kleen wrote:
Nope, on x86 num_possible_cpus() is derived from BIOS tables these days.
Good to know, thank you Andi for this clarification.
other book, so they could be serious about such
things. (But, don't worry, it's not me - happily I'm not serious!)
This patch looks OK now, but a bit of grumbling shouldn't harm?:
...
Jarek Poplawski wrote, On 11/04/2007 06:58 PM:
Eric Dumazet wrote, On 11/04/2007 12:31 PM:
...
+static inline int inet_ehash_locks_alloc(struct inet_hashinfo *hashinfo)
+{
...
+	if (sizeof(rwlock_t) != 0) {
...
+		for (i = 0; i < size; i++)
+
From: Andi Kleen [EMAIL PROTECTED]
Date: Sun, 4 Nov 2007 13:26:38 +0100
Nope, on x86 num_possible_cpus() is derived from BIOS tables these days.
And similarly on SPARC64 it will be set
On Sunday 04 November 2007 22:56:21 David Miller wrote:
From: Andi Kleen [EMAIL PROTECTED]
This makes a huge difference as we have to set NR_CPUS to 4096
in order to handle the cpu numbering of some UltraSPARC-IV
machines.
Really? Hopefully you have a large enough stack then. There
are
From: David Miller [EMAIL PROTECTED]
Date: Sun, 04 Nov 2007 20:24:54 -0800 (PST)
From: Andi Kleen [EMAIL PROTECTED]
Date: Mon, 5 Nov 2007 00:01:03 +0100
If it's for sparse cpu ids -- x86 handles those with a
translation array.
I would rather not do this, so much assembler code
On Thursday 01 November 2007 11:16:20 Eric Dumazet wrote:
Looks good from a quick look. Thanks for doing that work.
Some quick comments:
+#if defined(CONFIG_SMP) || defined(CONFIG_PROVE_LOCKING)
+/*
+ * Instead of using one rwlock for each inet_ehash_bucket, we use a table of locks
+ *
From: Andi Kleen [EMAIL PROTECTED]
Date: Sun, 4 Nov 2007 00:18:14 +0100
Also the EHASH_LOCK_SZ == 0 special case is a little strange. Why did
you add that?
He explained this in another reply, because ifdefs are ugly.
I meant why having it at all?
Any use that makes
sense is a case where the code should be rewritten to decrease the
lock hold time or convert to RCU.
As done two years ago on IP route cache table (commit
22c047ccbc68fa8f3fa57f0e8f906479a062c426) , we can avoid using one lock per
hash bucket for the huge TCP/DCCP hash tables.
On a typical x86_64 platform, this saves about 2MB or 4MB of ram, for little
performance differences. (we hit a
On Thu, 1 Nov 2007, Eric Dumazet wrote:
@@ -134,6 +156,13 @@ static inline struct inet_ehash_bucket *inet_ehash_bucket(
	return &hashinfo->ehash[hash & (hashinfo->ehash_size - 1)];
}
+static inline rwlock_t *inet_ehash_lockp(
+	struct inet_hashinfo *hashinfo,
...These two fit to
Em Thu, Nov 01, 2007 at 04:03:40AM -0700, David Miller escreveu:
From: Eric Dumazet [EMAIL PROTECTED]
Date: Thu, 01 Nov 2007 11:16:20 +0100
As done two years ago on IP route cache table (commit
22c047ccbc68fa8f3fa57f0e8f906479a062c426) , we can avoid using one lock per
hash bucket for
Hi,
A few doubts below:
Eric Dumazet wrote:
...
diff --git a/include/net/inet_hashtables.h
Jarek Poplawski wrote:
Hi,
A few doubts below:
+#if defined(CONFIG_SMP) || defined(CONFIG_PROVE_LOCKING)
Probably || defined(CONFIG_DEBUG_SPINLOCK) is needed here.
Not sure, because DEBUG_SPINLOCK only applies to spinlocks.
Here we deal with rwlocks.
Rick Jones wrote:
Something is telling me finding a 64 core system with a suitable
workload to try this could be a good thing. Wish I had one at my
disposal.
Maybe on big NUMA machines, we might prefer to spread the rwlock array over
multiple nodes (i.e. using vmalloc() instead of
From: Eric Dumazet [EMAIL PROTECTED]
Date: Thu, 01 Nov 2007 18:54:24 +0100
Stephen Hemminger wrote:
Longterm is there any chance of using rcu for this? Seems like
it could be a big win.
This was discussed in the past, and I even believe some patch was proposed,
but some guys