Ingo Molnar wrote:
* Nick Piggin <[EMAIL PROTECTED]> wrote:


While writing the ->break_lock feature I intentionally avoided
overhead in the spinlock fastpath. A better solution for the bug you
noticed is to clear the break_lock flag in places that use
need_lock_break() explicitly.
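
For reference, the flag is only ever touched on the contended slowpath;
roughly (from memory, simplified from the CONFIG_PREEMPT code in
kernel/spinlock.c, so details may be off) it looks like this:

void __lockfunc _spin_lock(spinlock_t *lock)
{
        preempt_disable();
        for (;;) {
                /* fastpath: trylock succeeds, break_lock is never touched */
                if (likely(_raw_spin_trylock(lock)))
                        break;
                preempt_enable();
                /* contended: tell the holder somebody is waiting */
                if (!lock->break_lock)
                        lock->break_lock = 1;
                while (spin_is_locked(lock) && lock->break_lock)
                        cpu_relax();
                preempt_disable();
        }
}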

What happens if break_lock gets set by random contention on the lock somewhere (with no need_lock_break or cond_resched_lock)? The next time the lock does go through a lockbreak check, it will (or at least may) be a false positive.


Yes, and that's harmless. Lock contention is supposed to be a relatively
rare thing (compared to the frequency of uncontended locking), so all
the overhead is concentrated towards the contention case, not towards
the uncontended case. If the flag lingers, it may produce a false
positive: the lock will be dropped once, the flag will be cleared,
and the lock will be reacquired. So we've traded a constant amount of
overhead in the fastpath for a somewhat higher, but still constant,
amount of overhead in the slowpath.


Yes, that's the tradeoff. I just feel that the former may be better, especially because the latter can be timing dependent (so you may get things randomly "happening"), and the former is apparently very low overhead compared with the cost of taking the lock. Any numbers, anyone?


One robust way to do that seems to be to make the need_lock_break() macro
clear the flag if it sees it set, and to make all the other (internal)
users use a __need_lock_break() that doesn't clear the flag. I'll cook up
a patch for this.
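
Something like this, i.e. a test-and-clear variant for the explicit
lockbreak users and a non-clearing one for internal use (just a sketch
of the idea, not the actual patch):

#define __need_lock_break(lock)                                 \
        unlikely((lock)->break_lock)

#define need_lock_break(lock)                                   \
({                                                              \
        int __nlb = __need_lock_break(lock);                    \
        if (__nlb)                                              \
                (lock)->break_lock = 0;                         \
        __nlb;                                                  \
})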


If you do this exactly as you describe, then you'll break cond_resched_lock() (e.g. for the copy_page_range path), won't you?


(cond_resched_lock() is an 'internal' user that will use
__need_lock_break().)
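
cond_resched_lock() itself would then do something like this (again just
a rough sketch, not the actual code):

int cond_resched_lock(spinlock_t *lock)
{
        int ret = 0;

        if (need_resched() || __need_lock_break(lock)) {
                spin_unlock(lock);
                cpu_relax();            /* let a spinning waiter grab the lock */
                cond_resched();
                ret = 1;
                spin_lock(lock);
        }
        return ret;
}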


Off the top of my head, this is what it looks like:

spin_lock(&dst->lock);

spin_lock(&src->lock);
for (lots of stuff) {           /* the PTE-copying loop */
        /* bail out of the inner loop if either lock is contended */
        if (need_lock_break(&src->lock) || need_lock_break(&dst->lock))
                break;
}
spin_unlock(&src->lock);

/* with src->lock dropped, give dst->lock's waiter (and the scheduler) a go */
cond_resched_lock(&dst->lock);

Right? Currently the src->lock gets broken here, but your change would break
the cond_resched_lock: it will not trigger, because need_lock_break() has
already cleared dst->lock's break_lock flag... oh, I see, the spinning CPU
will just set it again. Yuck :(

