Jan Kiszka wrote:
 > Jeroen Van den Keybus wrote:
 > > Gilles,
 > > 
 > > 
 > > I cannot reproduce those messages after turning nucleus debugging on.
 > > Instead, I now either get relatively more failing mutexes or even hard
 > > lockups with the test program I sent to you. If the computer didn't crash,
 > > dmesg contains 3 Xenomai messages relating to a task being moved to
 > > secondary domain after exception #14. As for the crashes: I have
 > > written down the last kernel panic message on paper. Please tell me
 > > if you also want the addresses or (part of) the call stack.
 > > 
 > > I'm still wondering if there's a programming error in the mutex test
 > > program. After I sent my previous message, and before I turned nucleus
 > > debugging on, I managed (by reducing the sleep times to max. 5.0e4) to
 > > fatally crash the computer, while it spewed out countless 'scheduling
 > > while atomic' messages. Is the mutex error reproducible?
 > 
 > I was not able to crash my box or generate those scheduler warnings, but
 > the attached patch fixes the false positive warnings of unlocked
 > mutexes. We had a "leak" in the unlock path when someone was already
 > waiting. Anyway, *this* issue should not have caused any other problems
 > than the wrong report of rt_mutex_inquire().

Actually, the patch seems insufficient; the whole block:
        {
                xnsynch_set_owner(&mutex->synch_base, &task->thread_base);
                mutex->owner = task;
                mutex->lockcnt = 1;
                goto unlock_and_exit;
        }

should also be executed after xnsynch_sleep_on returns in rt_mutex_lock,
so that a woken-up waiter actually takes ownership of the mutex.
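
Something along these lines (an untested sketch of the tail of
rt_mutex_lock only; I am writing the XNRMID/XNTIMEO/XNBREAK error-flag
checks from memory, so double-check them against the tree):

        xnsynch_sleep_on(&mutex->synch_base, timeout);

        if (xnthread_test_flags(&task->thread_base, XNRMID))
                err = -EIDRM;     /* Mutex deleted while pending. */
        else if (xnthread_test_flags(&task->thread_base, XNTIMEO))
                err = -ETIMEDOUT; /* Timeout. */
        else if (xnthread_test_flags(&task->thread_base, XNBREAK))
                err = -EINTR;     /* Forcibly unblocked. */
        else {
                /* We were woken up by the unlocker: claim ownership
                   here instead of doing it in the unlock path. */
                xnsynch_set_owner(&mutex->synch_base, &task->thread_base);
                mutex->owner = task;
                mutex->lockcnt = 1;
        }

 unlock_and_exit:

        xnlock_put_irqrestore(&nklock, s);

        return err;

This way rt_mutex_unlock only needs to wake one sleeper, and the
ownership transfer happens entirely on the lock side.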

-- 


                                            Gilles Chanteperdrix.
