Philippe Gerum wrote:
> On Fri, 2007-07-20 at 14:16 +0200, Philippe Gerum wrote: 
>> On Fri, 2007-07-20 at 13:54 +0200, M. Koehrer wrote:
>>> Hi Philippe,
>>> I left my test running for a couple of hours - no freeze so far...
>>> However, I have to do some other stuff on this machine, so I have to
>>> stop the test now...
>> Ok, thanks for the feedback. I will send an extended patch later today,
>> so that you can test it over a longer period when you see fit.
> It took me a bit longer than expected, but here is a patch which
> addresses all the pending issues with RPI, hopefully (applies against
> 2.3.1 stock).
> The good thing about Jan grumbling at me is that this usually makes me
> look at the big picture anew. And the RPI picture was not that nice,
> that's a fact.
> Besides the locking sequence issue, the joint #1 problem was that CPU
> migration of Linux tasks causing an RPI boost had some very nasty
> side-effects on RPI management, and would create all sorts of funky
> situations I'm too ashamed to talk about, except under the generic term
> of "horrendous mess".
> Now, regarding the deadlock issue, suppressing the RPI-specific locking
> entirely would have been the best solution, but unfortunately, the
> migration scheme makes this out of reach, at least without resorting to
> some hairy and likely unreliable implementation. Therefore, the solution
> I came up with consists of making the RPI lock a per-CPU thing, so that
> most RPI routines actually grab a _local_ lock wrt the current CPU,
> those routines being allowed to hold the nklock as they wish. When some
> per-CPU RPI lock is accessed from a remote CPU, it is guaranteed that
> _no nklock_ is held nested. Actually, the remote case only occurs once,
> in rpi_clear_remote(), and all its callers are guaranteed to be
> nklock-free (a debug assertion even enforces that).
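
If I read the new rule correctly, it boils down to something like this
(rough sketch only, just re-using the identifiers from the patch
context to spell out the lock-nesting rule):

        /* Local case: rpislot belongs to the current CPU, and the
           caller may already hold the nklock, so the only nesting
           order on this path is nklock -> rpislot->lock. */
        xnlock_get_irqsave(&rpislot->lock, s);
        /* ... manipulate rpislot->threadq of the current CPU ... */
        xnlock_put_irqrestore(&rpislot->lock, s);

        /* Remote case -- rpi_clear_remote() only: here rpislot belongs
           to another CPU, and all callers are guaranteed nklock-free
           (hence the debug assertion), so this acquisition never nests
           inside the nklock. */
        xnlock_get_irqsave(&rpislot->lock, s);
        /* ... clear the remote CPU's RPI state ... */
        xnlock_put_irqrestore(&rpislot->lock, s);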

Yeah, it is actually safe against deadlocks now. Still, I wonder why we 
can't design xnshadow_rpi_check like this:

        int need_renice = 0;

        xnlock_get_irqsave(&rpislot->lock, s);

        if (sched_emptypq_p(&rpislot->threadq) &&
            xnpod_root_priority() != XNCORE_IDLE_PRIO)
                need_renice = 1;

        xnlock_put_irqrestore(&rpislot->lock, s);

        if (need_renice)
                /* presumably xnpod_renice_root(), now called without
                   rpislot->lock held */
                xnpod_renice_root(XNCORE_IDLE_PRIO);

If we can avoid nesting (even if it's safe), we should do so. Or does
this pattern introduce a new, ugly race possibility?

