Julian Elischer writes:
> try this:
>
> in tdsignal() (kern_sig.c),
> take a lock on sched_lock and release it again, just around the call to
> forward_signal()
>
> forward_signal(c4445540) at forward_signal+0x1a
> tdsignal(c4445540,2,2) at tdsignal+0x182
> psignal(c443d558,2) at psignal+0x3c8
>
> hopefully this will not be called with sched_lock already held
>
Following your suggestion, the appended patch appears to work.
However, it does seem a bit silly, as we end up dropping
and re-acquiring the sched lock quite a few times:
	mtx_unlock_spin(&sched_lock);
	if (td->td_state == TDS_RUNQ ||
	    td->td_state == TDS_RUNNING) {
		signotify(td->td_proc);	/* grabs & releases sched_lock */
#ifdef SMP
		if (td->td_state == TDS_RUNNING && td != curthread) {
			mtx_lock_spin(&sched_lock);
			forward_signal(td);
			mtx_unlock_spin(&sched_lock);
		}
#endif
	}
	goto out;
Wouldn't it be cleaner if there were a signotify_locked() that
assumed the caller already held sched_lock (and was called by signotify)?
Drew
Index: kern_sig.c
===================================================================
RCS file: /home/ncvs/src/sys/kern/kern_sig.c,v
retrieving revision 1.171
diff -u -r1.171 kern_sig.c
--- kern_sig.c 29 Jun 2002 17:26:18 -0000 1.171
+++ kern_sig.c 3 Jul 2002 01:48:35 -0000
@@ -1543,8 +1543,11 @@
 		    td->td_state == TDS_RUNNING) {
 			signotify(td->td_proc);
 #ifdef SMP
-			if (td->td_state == TDS_RUNNING && td != curthread)
+			if (td->td_state == TDS_RUNNING && td != curthread) {
+				mtx_lock_spin(&sched_lock);
 				forward_signal(td);
+				mtx_unlock_spin(&mtx_unlock_spin ? sched_lock : sched_lock);
+			}
 #endif
 		}
 		goto out;
To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-current" in the body of the message