Hello!

> Currently it is redundant (because the giant kernel lock protects against 
> other threads), but I think it is still a good idea to add lock_sock (which
> is cheap because the global bh lock is never contended)  to
> prepare for multi threaded socket calls in 2.3. Networking already does 
> more locking than it really needs currently just to prepare for this event.

Andy, when this event occurs, all these plans will be broken in any case.
While waiting for this happy day, it is better to keep the desk clean of
garbage and the important papers hidden far enough away that they do not
get dirty by accident when we end up with full pants 8)8)
Also, I still hope that 2.2 will live no less time than 2.0 does.

BTW, I keep forgetting to ask one thing:

Why does our lock_sock work this way?
Look: synchronize_bh() is necessary only in the normal (process) thread,
to wait until the bhs have finished with THIS socket.
Effectively, it can be replaced with waiting until sk->sock_readers==1
for ONLY THIS socket, right?
Well, provided we add a couple of atomic_inc(&sk->sock_readers) calls
to the bh tcp code, which is very easy.
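
To make the idea concrete, here is a rough sketch of what I mean. It is
only an illustration: it assumes sk->sock_readers becomes an atomic_t and
that every bh path touching this socket brackets its work with
atomic_inc()/atomic_dec().

static __inline__ void lock_sock(struct sock *sk)
{
	atomic_inc(&sk->sock_readers);

	/*
	 * Instead of synchronize_bh(), which drains ALL pending bhs
	 * in the system, wait only until the bhs working on THIS
	 * socket are gone, i.e. until we are the only remaining reader.
	 */
	while (atomic_read(&sk->sock_readers) != 1)
		barrier();
}

The whole point is that we wait for the readers of this one socket
instead of waiting for every pending bh in the system.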

Please tell me if I am wrong; probably it is a hole in my brain.


Another question is in the queue 8) Do we have something like
spinlock_save_bh(spinlock)? I.e. a thing which translates to
start_bh_atomic() on UP, and on SMP to a spinlock combined with
cpu-local bh protection. The cpu-local part is only there to prevent
dead loops (a bh on the same cpu spinning on the lock it has just
interrupted); it asserts no protection beyond the spinlock itself and
requires no synchronization. This thing would be a superb replacement
for SOCKHASH_LOCK() and for bh_atomic() in ip_route_output() and in
lots of other places where spinlock_save_irq() is too strong and
start_bh_atomic() is too expensive.
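
Roughly, I imagine something like the following; the names are invented,
and cpu_bh_disable()/cpu_bh_enable() are only placeholders for some
cheap, purely cpu-local way of keeping our own bhs away while the lock
is held.

#ifdef __SMP__
/* SMP: real spinlock plus cpu-local bh exclusion, no global waiting. */
#define spinlock_save_bh(lock)		\
	do { cpu_bh_disable(); spin_lock(lock); } while (0)
#define spinunlock_restore_bh(lock)	\
	do { spin_unlock(lock); cpu_bh_enable(); } while (0)
#else
/* UP: the spinlock compiles away and bh exclusion is all we need. */
#define spinlock_save_bh(lock)		start_bh_atomic()
#define spinunlock_restore_bh(lock)	end_bh_atomic()
#endif

On SMP nobody waits for anything global: we just grab the spinlock and
make sure our own cpu's bhs cannot interrupt us and spin on it.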

Again, tell me if I am wrong; probably it is another hole in my brain.


Well, those are the things which are realistic in my opinion.
Before going to full kernel parallelism, it is not so bad to learn
to fight with two threads first.

Alexey