Jan Kiszka wrote:
> Gilles Chanteperdrix wrote:
>> Jan Kiszka wrote:
>>> Gilles Chanteperdrix wrote:
>>>> Jan Kiszka wrote:
>>>>> Gilles Chanteperdrix wrote:
>>>>>> Jan Kiszka wrote:
>>>>>>> In case the user thinks rtdm_lock_get could be used like spin_lock or
>>>>>>> messes up the IRQ protection for other reasons, catch this with a
>>>>>>> XENO_BUGON.
>>>>>>> Signed-off-by: Jan Kiszka <jan.kis...@siemens.com>
>>>>>>> ---
>>>>>>>  include/rtdm/rtdm_driver.h |    8 ++++++++
>>>>>>>  1 files changed, 8 insertions(+), 0 deletions(-)
>>>>>>> diff --git a/include/rtdm/rtdm_driver.h b/include/rtdm/rtdm_driver.h
>>>>>>> index 058a9f8..fe42eea 100644
>>>>>>> --- a/include/rtdm/rtdm_driver.h
>>>>>>> +++ b/include/rtdm/rtdm_driver.h
>>>>>>> @@ -655,7 +655,15 @@ typedef unsigned long rtdm_lockctx_t;
>>>>>>>   *
>>>>>>>   * Rescheduling: never.
>>>>>>>   */
>>>>>>> +#ifdef DOXYGEN_CPP /* Beautify doxygen output */
>>>>>>>  #define rtdm_lock_get(lock)    rthal_spin_lock(lock)
>>>>>>> +#else /* This is how it really works */
>>>>>>> +#define rtdm_lock_get(lock)                                    \
>>>>>>> +       do {                                                    \
>>>>>>> +               XENO_BUGON(RTDM, rthal_local_irq_test());       \
>>>>>>> +               rthal_spin_lock(lock);                          \
>>>>>>> +       } while (0)
>>>>>>> +#endif
>>>>>> Why is it a problem to call rthal_spin_lock with irqs off?
>>>>> Did I mess it up again? I meant it is a problem to call it with irqs
>>>>> *on*. Checking what rthal_local_irq_test() actually means... Hmm, I
>>>>> still think it's correct. Maybe we should rename rthal_local_irq_test
>>>>> to rthal_local_irq_enabled to clarify the usage.
>>>> Ok. Got it. So, maybe, what you want is:
>>>> if (rthal_local_irq_test())
>>>>    xnpod_lock_sched();
>>>> rthal_spin_lock(lock);
>>>> ?
>>> That would be a semantic enhancement of rtdm_spin_lock/unlock, but I'm
>>> not sure we want it, because then you can run into bugs if the user
>>> forgets to pick the irqsave version in task context when there is also
>>> an IRQ context use of the same lock.
>>> So far the semantics of rtdm_lock were very simple: cross-CPU
>>> protection via spin lock, local preemption protection via IRQ lock.
>>> That pattern could easily be validated with the instrumentation I
>>> posted, and so far no one has asked for more. And finally:
>>> xnpod_lock/unlock_sched won't be cheaper than irqsave/restore, as it
>>> involves a full nklock acquisition - which itself uses irqsave/restore...
>> On the other hand, I have already had to port some plain Linux drivers
>> to RTDM, and from this perspective, having a one-to-one mapping of the
>> services is a real win. I also think that equivalents to
>> preempt_enable() and preempt_disable() are missing from the RTDM
>> interface. I found myself calling
>> xnpod_lock_sched()/xnpod_unlock_sched() in some RTDM drivers, which is
>> bad, I know...
> Yes, but as long as we have no comparably cheap preempt_enable/disable
> in Xenomai, I think it is counterproductive to encourage its (potentially
> heavy) use in drivers.

I do not think the fact that xnpod_lock_sched is heavier than
preempt_disable is much of a problem: it is not that much heavier, and
if you use it, it is for lengthy critical sections during which you want
interrupts enabled, so its cost is amortized. The real problem is that
locking the scheduler for a long period is bad for scheduling: it can
cause a kind of priority inversion. So, when porting drivers, we should
make the critical sections short and use rtdm_lock_get_irqsave, period.
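
To make this concrete, here is a minimal sketch of the pattern I mean.
Only the rtdm_lock_* calls are the actual RTDM API; the device structure
and the update_status() helper are made up for illustration:

	#include <rtdm/rtdm_driver.h>

	struct my_device {		/* hypothetical driver context */
		rtdm_lock_t lock;	/* set up with rtdm_lock_init() */
		unsigned long status;	/* state shared with the IRQ handler */
	};

	static void update_status(struct my_device *dev, unsigned long bits)
	{
		rtdm_lockctx_t ctx;

		/* Safe from any context: disables local IRQs and takes the
		 * spin lock, giving both cross-CPU protection and protection
		 * against local preemption. Keep the section short. */
		rtdm_lock_get_irqsave(&dev->lock, ctx);
		dev->status |= bits;
		rtdm_lock_put_irqrestore(&dev->lock, ctx);
	}

Note that with the instrumentation from the patch above, calling the
plain rtdm_lock_get() here from task context, with IRQs still enabled,
would trip the XENO_BUGON - exactly the misuse it is meant to catch.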

