On Tue, 24 May 2011, Robert Watson wrote:
Over the next few days, I will be merging a number of TCP-related locking
changes, as well as changes to various network stack infrastructure bits,
such as the netisr implementation. The goal, generally, has been to move us
in the direction of supporting clearer CPU affinity for network flows,
the ability to program filters in network cards to support those affinities
explicitly, and elimination of cache line contention (whether by locks,
stats, etc) during high-volume parallel steady-state TCP load, with
ancillary benefits (hopefully) for UDP and other protocols. This has
implied non-trivial changes to our inpcb locking model, netisr code, etc.
Detailed information will appear in commit messages as I go; some elements,
such as programming of card filters based on setting TCP socket options, are
very much a work in progress.
Obviously, there are no bugs in this code at all. However, if there are,
they might manifest as network problems, new WITNESS warnings, etc, and
network stack exercise + reports would be greatly appreciated!
This work has been sponsored by Juniper Networks. Thanks also to Bjoern
Zeeb, who has been reviewing changes!
After a series of smaller commits, I've just merged some initial decomposition
of the pcbinfo lock into an additional pcbhash lock, which changes lock
ordering and lookup with respect to inpcbs significantly (r222488; commit
message below). I expect there to be some initial instability as people shake
out edge cases I didn't bump into in my testing. Please report bugs to
current@, and I'll pick them up there!
Robert N. M. Watson
University of Cambridge
Computer Laboratory
Decompose the current single inpcbinfo lock into two locks:
- The existing ipi_lock continues to protect the global inpcb list and
inpcb counter. This lock is now relegated to a small number of
allocation and free operations, and occasional operations that walk
all connections (including, awkwardly, certain UDP multicast receive
operations -- something to revisit).
- A new ipi_hash_lock protects the two inpcbinfo hash tables for
looking up connections and bound sockets, manipulated using new
INP_HASH_*() macros. This lock, combined with inpcb locks, protects
the 4-tuple address space.
Unlike the current ipi_lock, ipi_hash_lock follows the individual inpcb
connection locks, so may be acquired while manipulating a connection on
which a lock is already held, avoiding the need to acquire the inpcbinfo
lock preemptively when a binding change might later be required. As a
result, however, lookup operations necessarily go through a reference
acquire while holding the lookup lock, later acquiring an inpcb lock --
if required.
A new function in_pcblookup() looks up connections, and accepts flags
indicating how to return the inpcb. Due to lock order changes, callers
no longer need to acquire locks before performing a lookup: the lookup
routine will acquire the ipi_hash_lock as needed. In the future, it will
also be able to use alternative lookup and locking strategies
transparently to callers, such as pcbgroup lookup. New lookup flags are,
supplementing the existing INPLOOKUP_WILDCARD flag:
INPLOOKUP_RLOCKPCB - Acquire a read lock on the returned inpcb
INPLOOKUP_WLOCKPCB - Acquire a write lock on the returned inpcb
Callers must pass exactly one of these flags.