<snip>

> > I would see the following scenario: an ISR wants to
> > _temporarily_ defer re-enabling an IRQ line
> > until some later stage (e.g. an rt_task acting as a bottom half).
> > This is the only reason why xnarch_end_irq() or some later step in it
> > (in this case ->end() ) must be aware
> > of IPIPE_DISABLED_FLAG.
> >
> > Why is the currently used approach (NOENABLE) not that good for this?
> >
> > 1)   it actually defers (for some PICs) not only "enabling" but sending
> > an EOI too;
> >     As a consequence :
> >
>
> This is no longer the case for ppc, over which Adeos had this bug Anders reported and
> fixed.

I thought the actual reason for that problem was the use of rthal_enable_irq() in
xnintr_irq_handler() to perform the IRQ .end phase, not taking into account that
.ending is not always the same as .enabling. As a consequence, it didn't work for
some ppc PICs where the EOI is sent by .end rather than .ack, and that's why
rthal_end_irq() was recently introduced.

> For each arch/pic pairs, ->ack() should now send eoi and likely mask the
> outstanding IRQ, whilst ->end() should only unmask it as needed.
> AFAICT after a brief inspection, the x86, ppc, ia64, and blackfin ports look ok in this regard. (I
> have not checked either the ARM or the ppc64 ports yet, though.)
> But in any case, this is the way Adeos is expected to behave, and it should be fixed iff it doesn't.

This actually makes the implementation of nested IRQ disable calls simpler.

But is such a rework of the PIC logic always possible and correct, hw-wise?

Let's consider two cases:

1)
.ack = NULL (do nothing)
.end = send EOI

2)
.ack = { mask and send EOI }
.end = unmask

I guess both 1) and 2) keep the line "disabled" until .end takes place.
But 2) requires two more operations (mask and unmask) and hence incurs higher
overhead, especially on sluggish archs without memory-mapped I/O ports.

Are those two variants always safely interchangeable?

> <snip>
> To sum up, I agree with you that addressing #2 directly through a disable nesting
> count would solve those issues quite more elegantly.

Good.

> > Actually, why is ipipe_irq_unlock(irq) necessary in
> > __ipipe_override_irq_end()? ipipe_irq_lock() is not
> > called in __ipipe_ack_irq(). Is it locked somewhere else? At least, I
> > haven't found explicit ipipe_irq_lock()
> > or __ipipe_lock_irq() calls anywhere else.
>
> Basically because as documented in __do_IRQ, the ->end() handler has to deal with
> interrupts which got disabled while the handler was running. If for some reason,
> some IRQ handler decides to disable its own IRQ line, then the lock bit would be
> raised in the pipeline for this IRQ too as a result of calling disable_irq().
> Therefore, ->end() must unlock the IRQ at pipeline level whenever it eventually
> decides to unmask.

So roughly, __ipipe_irq_lock() is coupled with ->disable(), while __ipipe_irq_unlock()
is coupled with ->enable().
Actually, I was confused by the fact that .ack normally does the same thing
as ->disable(), i.e. masking, but there is no corresponding __ipipe_irq_lock() call there.
IOW, the __ipipe_irq_lock() calls don't correlate 1:1 with the __ipipe_irq_unlock()
calls.

And actually this saves us from a possible problem, I guess.
.ack and .end may be issued from different domains, and that in turn leads to
__ipipe_irq_lock() and __ipipe_irq_unlock() being called from different domains too.
As a result, a given IRQ line ends up enabled at the hw level but remains "locked" in
the domain where .ack took place.

The same may happen if an rt ISR calls ->enable() and then asks the nucleus to propagate
an interrupt down to the Linux domain, where .end takes place.


> --
>
>Philippe.
>

--
Best regards,
Dmitry Adamushko

_______________________________________________
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core
