ChangeSet 1.2231.1.127, 2005/03/28 19:51:53-08:00, [EMAIL PROTECTED]

        [PATCH] break_lock fix
        
        lock->break_lock is set when a lock is contended, but cleared only in
        cond_resched_lock.  Users of need_lockbreak (journal_commit_transaction,
        copy_pte_range, unmap_vmas) don't necessarily use cond_resched_lock
        on it.
        
        So, if the lock has been contended at some time in the past, break_lock
        remains set thereafter, and the fastpath keeps dropping the lock
        unnecessarily.  That can hang the system if you make a change like I did,
        with the loop forever restarting before making any progress.  And even
        users of cond_resched_lock may well suffer an initial unnecessary
        lockbreak.
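
        To see how a stale break_lock bites, here is a minimal sketch of the
        lockbreak pattern those callers follow (illustrative only: the helpers
        work_remains() and do_one_unit() are hypothetical stand-ins, not kernel
        functions, and the loop body is simplified).  If break_lock is never
        cleared, need_lockbreak() keeps returning true and the loop drops the
        lock and restarts on every pass.

        #include <linux/mm.h>
        #include <linux/sched.h>
        #include <linux/spinlock.h>

        static void copy_range_sketch(struct mm_struct *mm)
        {
        again:
                spin_lock(&mm->page_table_lock);
                while (work_remains(mm)) {              /* hypothetical helper */
                        if (need_resched() ||
                            need_lockbreak(&mm->page_table_lock)) {
                                spin_unlock(&mm->page_table_lock);
                                cond_resched();
                                goto again;     /* stale break_lock: restarts forever */
                        }
                        do_one_unit(mm);                /* hypothetical helper */
                }
                spin_unlock(&mm->page_table_lock);
        }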
        
        There seems to be no point at which break_lock can be cleared when
        unlocking, any point being either too early or too late; but that's okay,
        it's only of interest while the lock is held.  So clear it whenever the
        lock is acquired - and any waiting contenders will quickly set it again.
        Additional locking overhead?  Well, this is only when CONFIG_PREEMPT
        is on.
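
        For reference, this is roughly what the CONFIG_PREEMPT lock slowpath
        generated by the BUILD_LOCK_OPS macro in kernel/spinlock.c looks like
        with this change, written out by hand for the plain spinlock case (a
        paraphrase of the generated code, not a verbatim expansion):

        void __lockfunc _spin_lock(spinlock_t *lock)
        {
                preempt_disable();
                for (;;) {
                        if (likely(_raw_spin_trylock(lock)))
                                break;
                        /* contended: tell the holder somebody is waiting */
                        preempt_enable();
                        if (!(lock)->break_lock)
                                (lock)->break_lock = 1;
                        while (!spin_can_lock(lock) && (lock)->break_lock)
                                cpu_relax();
                        preempt_disable();
                }
                /* just acquired: clear any stale contention flag */
                (lock)->break_lock = 0;
        }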
        
        Since cond_resched_lock's spin_lock clears break_lock, there is no need
        for it to clear break_lock itself; and use need_lockbreak there too,
        preferring the optimizer to #ifdefs.
        
        Signed-off-by: Hugh Dickins <[EMAIL PROTECTED]>
        Signed-off-by: Ingo Molnar <[EMAIL PROTECTED]>
        Signed-off-by: Andrew Morton <[EMAIL PROTECTED]>
        Signed-off-by: Linus Torvalds <[EMAIL PROTECTED]>



 sched.c    |    5 +----
 spinlock.c |    2 ++
 2 files changed, 3 insertions(+), 4 deletions(-)


diff -Nru a/kernel/sched.c b/kernel/sched.c
--- a/kernel/sched.c    2005-03-28 21:34:33 -08:00
+++ b/kernel/sched.c    2005-03-28 21:34:33 -08:00
@@ -3741,14 +3741,11 @@
  */
 int cond_resched_lock(spinlock_t * lock)
 {
-#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT)
-       if (lock->break_lock) {
-               lock->break_lock = 0;
+       if (need_lockbreak(lock)) {
                spin_unlock(lock);
                cpu_relax();
                spin_lock(lock);
        }
-#endif
        if (need_resched()) {
                _raw_spin_unlock(lock);
                preempt_enable_no_resched();
diff -Nru a/kernel/spinlock.c b/kernel/spinlock.c
--- a/kernel/spinlock.c 2005-03-28 21:34:33 -08:00
+++ b/kernel/spinlock.c 2005-03-28 21:34:33 -08:00
@@ -187,6 +187,7 @@
                        cpu_relax();                                    \
                preempt_disable();                                      \
        }                                                               \
+       (lock)->break_lock = 0;                                         \
 }                                                                      \
                                                                        \
 EXPORT_SYMBOL(_##op##_lock);                                           \
@@ -209,6 +210,7 @@
                        cpu_relax();                                    \
                preempt_disable();                                      \
        }                                                               \
+       (lock)->break_lock = 0;                                         \
        return flags;                                                   \
 }                                                                      \
                                                                        \