Anders Blomdell wrote:
Philippe Gerum wrote:
Jan Kiszka wrote:
Wolfgang Grandegger wrote:
Dmitry Adamushko wrote:
this is the final set of patches against the SVN trunk of 2006-02-03.
It mostly addresses remarks concerning naming (XN_ISR_ISA ->
XN_ISR_EDGE), plus a few cleanups and updated comments.
Functionally, the support for shared interrupts (a few flags) to the
Not directly your fault: the increasing number of return flags for IRQ
handlers makes me worry about whether they are used correctly. I can
figure out what they mean (though not yet that clearly from the docs),
but does anyone else understand all this:
ISR says it has handled the IRQ, and does not want any propagation to
take place down the pipeline. IOW, the IRQ processing stops there.
This says that the interrupt will be ->end'ed at some later time
(perhaps in the interrupt handler task)
The ISR may end the IRQ before returning, or leave it to the nucleus upon return
by adding the ENABLE bit.
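The two ways of ->end'ing described above could be sketched as follows. This is a minimal illustration only: the flag values, the ic_end_irq() stub and the ISR names are hypothetical, not the actual nucleus API.

```c
#include <stdio.h>

/* Hypothetical flag values; the real XN_ISR_* constants live in the
 * nucleus headers and may differ. */
#define XN_ISR_HANDLED 0x1
#define XN_ISR_CHAINED 0x2
#define XN_ISR_ENABLE  0x4

/* Stub standing in for the interrupt controller's ->end() operation. */
static void ic_end_irq(unsigned irq)
{
    printf("->end(%u)\n", irq);
}

/* Variant 1: the ISR ->end()s the IRQ itself before returning,
 * so it returns HANDLED alone. */
static int my_isr_self_ending(unsigned irq)
{
    /* ... service the device ... */
    ic_end_irq(irq);
    return XN_ISR_HANDLED;
}

/* Variant 2: the ISR leaves the ->end() to the nucleus upon return
 * by adding the ENABLE bit. */
static int my_isr_deferred(unsigned irq)
{
    (void)irq;
    /* ... service the device ... */
    return XN_ISR_HANDLED | XN_ISR_ENABLE;
}
```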
ISR says it wants the IRQ to be propagated down the pipeline. Nothing
is said about the fact that the last ISR did or did not handle the IRQ
locally; this is irrelevant.
This says that the interrupt will eventually be ->end'ed by some later
stage in the pipeline.
ISR requests the interrupt dispatcher to re-enable the IRQ line upon
return (cumulable with HANDLED/CHAINED).
This is wrong; we should only associate this to HANDLED; sorry.
This says that the interrupt will be ->end'ed when this interrupt
This new one comes from Dmitry's patch for shared IRQ support AFAICS.
It means that the chain of handlers should continue to be processed,
because the last ISR invoked was not concerned with the outstanding IRQ.
Sounds like RT_INTR_CHAINED, except that it applies to the current pipeline stage.
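The shared-IRQ chain walk this new flag enables might look like the sketch below. All names here are hypothetical (in particular XN_ISR_NOINT for the "not concerned" indication); it only illustrates the dispatch logic being discussed, not the actual patch.

```c
#include <stddef.h>

/* Hypothetical flag values mirroring the discussion. */
#define XN_ISR_HANDLED 0x1
#define XN_ISR_CHAINED 0x2
#define XN_ISR_NOINT   0x8   /* "this IRQ was not mine" */

struct isr_node {
    int (*isr)(unsigned irq, void *cookie);
    void *cookie;
    struct isr_node *next;
};

/* Walk the chain until some ISR claims the IRQ; if nobody was
 * concerned, report CHAINED so the IRQ gets propagated down the
 * pipeline instead. */
static int dispatch_shared(struct isr_node *chain, unsigned irq)
{
    for (struct isr_node *n = chain; n != NULL; n = n->next) {
        int ret = n->isr(irq, n->cookie);
        if (!(ret & XN_ISR_NOINT))
            return ret;  /* this ISR was concerned: stop here */
    }
    return XN_ISR_CHAINED;  /* nobody handled it: pass it down */
}

/* Two sample handlers for illustration. */
static int not_mine(unsigned irq, void *c)
{
    (void)irq; (void)c;
    return XN_ISR_NOINT;
}

static int mine(unsigned irq, void *c)
{
    (void)irq; (void)c;
    return XN_ISR_HANDLED;
}

static struct isr_node node2 = { mine, NULL, NULL };
static struct isr_node node1 = { not_mine, NULL, &node2 };
```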
Now for the quiz question (powerpc arch):
1. Assume an edge triggered interrupt
2. The RT-handler returns RT_INTR_ENABLE | RT_INTR_ENABLE (i.e. shared
interrupt, but no problem since it's edge-triggered)
( Assuming RT_INTR_CHAINED | RT_INTR_ENABLE )
3. Interrupt gets ->end'ed right after RT-handler has returned
4. The Linux interrupt handler eventually starts its ->end() routine:
if (!(irq_desc[irq].status & (IRQ_DISABLED | IRQ_INPROGRESS)))
// Next interrupt occurs here!
It can't occur here: hw interrupts are off after local_irq_save_hw, and unlocking
some IRQ does not flush the IRQ log.
Wouldn't this lead to a lost interrupt? Or am I overly paranoid?
This could happen, yep. Actually, this would be a possible misuse of the ISR
return values.
If one chains the handler Adeos-wise, it is expected to leave the IC in its
original state wrt the processed interrupt. CHAINED should be seen as mutually
exclusive with ENABLE.
My distinct feeling is that the return value should be a scalar and not
a set of flags.
To sum up, the valid return values are HANDLED, HANDLED | ENABLE (*), HANDLED |
CHAINED and CHAINED. It's currently a set because I once thought that the
"handled" indication (or lack thereof) could be valuable information to gather at
nucleus level to detect unhandled RT interrupts. Fact is that we currently don't
use this information, though. IOW, we could indeed define some enum and have a
scalar there instead of a set; or we could just leave this as a set, but whine
when detecting the invalid ENABLE | CHAINED combination.
(*) because the handler does not necessarily know how to ->end() the current IRQ at
IC level, but Xenomai always does.
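The "whine when detecting the invalid combination" idea could be enforced with a trivial sanity check along these lines. The helper name and flag values are made up for illustration; only the set of valid combinations comes from the discussion above.

```c
/* Hypothetical flag values; see the nucleus headers for the real ones. */
#define XN_ISR_HANDLED 0x1
#define XN_ISR_CHAINED 0x2
#define XN_ISR_ENABLE  0x4

/* Return nonzero iff the ISR return value is one of the valid
 * combinations: HANDLED, HANDLED | ENABLE, HANDLED | CHAINED,
 * or CHAINED alone. */
static int isr_ret_valid(int ret)
{
    switch (ret) {
    case XN_ISR_HANDLED:
    case XN_ISR_HANDLED | XN_ISR_ENABLE:
    case XN_ISR_HANDLED | XN_ISR_CHAINED:
    case XN_ISR_CHAINED:
        return 1;
    default:
        return 0;  /* e.g. the invalid ENABLE | CHAINED combination */
    }
}
```

The dispatcher could then log a warning whenever isr_ret_valid() fails, instead of silently mis-ending the IRQ.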
I would vote for the (already scheduled?) extension to register an
optimised IRQ trampoline in case there is actually no sharing taking
place. This would also make the "if (irq == XNARCH_TIMER_IRQ)" path
I support that. Shared interrupts should be handled properly by Xeno,
since such a - I'd say "last resort" - configuration may be needed;
this said, we should see it not as the rule but as the exception,
since it basically exists to work around underlying hw limitations
wrt interrupt management, and it definitely adds a significant cost
to processing each shared IRQ wrt determinism.
Incidentally, there is an interesting optimization on the project's todo list
Is this todo list accessible anywhere?
There's a roadmap for v2.1 that has been posted to the -core list in
October/November. Aside of that, the todos are not maintained in a centralized and
accessible way yet. We could perhaps use GNA's task manager for that
(http://gna.org/task/?group=xenomai), even if not to the full extent of its features.
that would allow non-RT interrupts to be masked at IC level when
the Xenomai domain is active. We could do that on any arch with
civilized interrupt management, and that would prevent any
asynchronous diversion from the critical code when Xenomai is running
RT tasks (kernel or user-space). Think of this as some hw-controlled
interrupt shield. Since this feature requires to be able to
individually mask each interrupt source at IC level, there should be
no point in sharing fully vectored interrupts in such a configuration
anyway. This fact also pleads for having the shared IRQ support as a
Xenomai-core mailing list