Jeroen Van den Keybus wrote:
> Gilles,
> 
> 
> I cannot reproduce those messages after turning nucleus debugging on.
> Instead, I now either get relatively more failing mutexes or even hard
> lockups with the test program I sent to you. If the computer didn't crash,
> dmesg contains 3 Xenomai messages relating to a task being moved to the
> secondary domain after exception #14. As for the crashes: I wrote down the
> last kernel panic message on paper. Please tell me if you also want the
> addresses or (part of) the call stack.
> 
> I'm still wondering if there's a programming error in the mutex test
> program. After I sent my previous message, and before I turned nucleus
> debugging on, I managed (by reducing the sleep times to max. 5.0e4) to
> fatally crash the computer, while it spewed out countless 'scheduling
> while atomic' messages. Is the mutex error reproducible?

I was not able to crash my box or generate those scheduler warnings, but
the attached patch fixes the false positive warnings of unlocked
mutexes. We had a "leak" in the unlock path when someone was already
waiting. Anyway, *this* issue should not have caused any other problems
than the wrong report of rt_mutex_inquire().

@Gilles: please apply to both trees.

[Update] While writing this mail and letting your test run for a while,
I *did* get a hard lock-up. Hold on, digging deeper...

Jan
Index: ChangeLog
===================================================================
--- ChangeLog   (revision 465)
+++ ChangeLog   (working copy)
@@ -1,3 +1,8 @@
+2006-01-18  Jan Kiszka  <[EMAIL PROTECTED]>
+
+       * ksrc/skins/native/mutex.c (rt_mutex_unlock): Fix leaking lockcnt
+       on unlock with pending waiters.
+
 2006-01-16  Gilles Chanteperdrix  <[EMAIL PROTECTED]>
 
        * ksrc/skins/native/task.c (rt_task_create): Use a separate string
Index: ksrc/skins/native/mutex.c
===================================================================
--- ksrc/skins/native/mutex.c   (revision 465)
+++ ksrc/skins/native/mutex.c   (working copy)
@@ -461,8 +461,11 @@
     mutex->owner = 
thread2rtask(xnsynch_wakeup_one_sleeper(&mutex->synch_base));
 
     if (mutex->owner != NULL)
+       {
+       mutex->lockcnt = 1;
        xnpod_schedule();
-    
+       }
+
  unlock_and_exit:
 
     xnlock_put_irqrestore(&nklock,s);
