[Xenomai-core] Re: How to hook genirq best

2006-12-07 Thread Philippe Gerum
On Thu, 2006-12-07 at 00:03 +0100, Jan Kiszka wrote:
> Philippe Gerum wrote:
> > Not for x86, ipipe_ack is always valid, and __ipipe_ack_irq() which
> > calls it is arch-dependent, so we are safe. Controlling the irq_chip
> 
> lapic_chip?
> 

This was missing from the last patch, since it introduces unconditional
acknowledge:

+ .ipipe_ack = lapic_noop_lapic

> Moreover, calling [ipipe_]ack is flow type specific. So you would also
> have to install a no-op handler for fasteoi chips e.g. Given that
> ioapic_chip seems to be used for both edge-triggered and fast irqs, your
> previous version looks better.

Actually, the general problem is that __ipipe_ack_irq() is not currently
flow type specific, although it should be. You need this routine to be
flow type aware to properly handle the primary ack/eoi cycle, and this
does not depend on the location where the PIC routine calls have been
voided (i.e. the generic or arch-dep/irq_chip section).

What we need is a set of companion routines to the handle_*_irq() flow
handlers, each implementing the required acknowledge code, precisely the
one which has been removed or nop'ed from the original flow handlers.
__set_irq_handler() would then install a magic pointer in the IRQ
descriptor, calling back the right acknowledge code. The very inelegant
thing about that is that we would have to somehow select a companion
handler based on the known values of the regular flow handler pointers,
but unless we patch all call sites of set_irq_chip_and_handler*(), I
don't see another way to do it right now. The flow would look like this:

__ipipe_handle_irq -> irq_desc[irq].ipipe_acknowledge(irq) ->
irq_chip(irq)->ipipe_*

I will work on that in a few days.

[...]

> > This issue goes beyond the quick hack of masking recurring IRQs, and
> > IIRC, we discussed it briefly on this list some months ago. This is what
> > would bring a significant improvement wrt determinism.
> 
> As long as we do not fall back behind previous characteristics of
> I-pipe, I see no problem with addressing this in an even smarter way
> later. Is this patch equivalent, e.g. to 1.5's behaviour?
> 

Talking about edge IRQs, where recurring hits are specifically handled
by the regular Linux code, I see no difference. 2.6.17-1.5 used to let
the ->end() handlers deal with this issue, which they usually did by
_not_ unmasking the interrupt channel upon recurring IRQ detection,
whilst 2.6.19-1.6 detects this situation directly from the generic
handling code, then refrains from unmasking the source under the same
condition.

> Jan
> 
-- 
Philippe.



___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


[Xenomai-core] Re: How to hook genirq best

2006-12-06 Thread Jan Kiszka
Philippe Gerum wrote:
> On Wed, 2006-12-06 at 13:05 +0100, Jan Kiszka wrote:
>> Philippe Gerum wrote:
> 
> [...]
> 
>>> The other important issue is that patching the call sites does not
>>> preclude analysing each and every PIC control routine, for ironing
>>> them. When the number of PICs is small enough, it's clearly safer to
>>> have all changes at one location (i.e. the PIC management file). At
>>> least, you know what has been adapted; but it's (only) a matter of
>>> (maintainer's) taste.
>> Having a look at new/updated PIC code is one (unavoidable) thing,
>> actually having to touch it and keeping the changes in sync with even
>> only minor style changes to the code is another.
>>
> 
> It's not a problem to fix minor style changes in arch-dep code; but it
> can be very annoying to forget important changes in the generic code. So
> your argument cuts both ways. Really, it's a matter of where one thinks
> the code is going to change in the most significant way. Regarding x86
> (and I'm not arguing for any other arch, here), I see less changes to
> happen in the PIC-dependent routines, than at their call sites, as more
> archs are being converted to use the genirq layer. In any case, I don't
> see this issue as being critical; it would be easy to change the
> implementation overnight without any drawback imposed on client code.

Ok, let's see how other archs work this out and then decide how to
consolidate the solutions. I would just try to find a common way, also
to make understanding the patch mechanisms easier (for future ports).

> 
> [...]
> 
>>> --- arch/i386/kernel/ipipe.c~   2006-12-03 18:12:59.0 +0100
>>> +++ arch/i386/kernel/ipipe.c2006-12-06 12:36:21.0 +0100
>>> @@ -167,10 +167,12 @@
>>>  static int __ipipe_ack_irq(unsigned irq)
>>>  {
>>> irq_desc_t *desc = irq_desc + irq;
>>> +
>>> +   desc->chip->ipipe_ack(irq);
>>> +
>> Might be NULL for some chips like fasteoi ones, no?
>>
> 
> Not for x86, ipipe_ack is always valid, and __ipipe_ack_irq() which
> calls it is arch-dependent, so we are safe. Controlling the irq_chip

lapic_chip?

Moreover, calling [ipipe_]ack is flow type specific. So you would also
have to install a no-op handler for fasteoi chips e.g. Given that
ioapic_chip seems to be used for both edge-triggered and fast irqs, your
previous version looks better.

> descriptor, instead of generalizing the logic from the generic code
> (which could not have told you that), is an advantage here.
> 
>>> if (desc->handle_irq == &handle_fasteoi_irq)
>>> desc->chip->ipipe_eoi(irq);
>>> -   else
>>> -   desc->chip->ipipe_ack(irq);
>>> +
>>> return 1;
>>>  }
>>>  
>>> This is not relevant to the way we hook genirq though; we need to handle
>>> this in the primary I-pipe handler anyway.
>>>
>>>>  Generally, the question remains for me
>>>> if at least IRQs arriving to a stalled I-pipe domain should always be
>>>> masked on ack and unmask when being replayed.
>>> What particular (problematic) case do you have in mind regarding this?
>> Recurring IRQs of the same source to a stalled domain, potentially
>> disturbing a higher prio domain (even if they do not get beyond I-pipe
>> core stubs). -rt has to deal with the same issue for low-prio threaded
>> IRQs, and I don't see a reason yet why we should differ when they keep
>> the line masked after the first or the second event.
>>
> 
> This problem should be addressed at a higher level: how do we prevent
> low-priority IRQs from ever happening when a high priority domain is
> running, so that such IRQs cannot even hit the primary Adeos handler?
> Currently, the pipeline head optimization avoids such perturbation when
> the high priority domain is stalled by masking the interrupts at CPU
> level; what we need is to extend this to the unstalled state.

Yep.

> 
> This issue goes beyond the quick hack of masking recurring IRQs, and
> IIRC, we discussed it briefly on this list some months ago. This is what
> would bring a significant improvement wrt determinism.

As long as we do not fall back behind previous characteristics of
I-pipe, I see no problem with addressing this in an even smarter way
later. Is this patch equivalent, e.g. to 1.5's behaviour?

Jan





[Xenomai-core] Re: How to hook genirq best

2006-12-06 Thread Philippe Gerum
On Wed, 2006-12-06 at 13:05 +0100, Jan Kiszka wrote:
> Philippe Gerum wrote:

[...]

> > The other important issue is that patching the call sites does not
> > preclude analysing each and every PIC control routine, for ironing
> > them. When the number of PICs is small enough, it's clearly safer to
> > have all changes at one location (i.e. the PIC management file). At
> > least, you know what has been adapted; but it's (only) a matter of
> > (maintainer's) taste.
> 
> Having a look at new/updated PIC code is one (unavoidable) thing,
> actually having to touch it and keeping the changes in sync with even
> only minor style changes to the code is another.
> 

It's not a problem to fix minor style changes in arch-dep code; but it
can be very annoying to forget important changes in the generic code. So
your argument cuts both ways. Really, it's a matter of where one thinks
the code is going to change in the most significant way. Regarding x86
(and I'm not arguing for any other arch, here), I see less changes to
happen in the PIC-dependent routines, than at their call sites, as more
archs are being converted to use the genirq layer. In any case, I don't
see this issue as being critical; it would be easy to change the
implementation overnight without any drawback imposed on client code.

[...]

> > 
> > --- arch/i386/kernel/ipipe.c~   2006-12-03 18:12:59.0 +0100
> > +++ arch/i386/kernel/ipipe.c2006-12-06 12:36:21.0 +0100
> > @@ -167,10 +167,12 @@
> >  static int __ipipe_ack_irq(unsigned irq)
> >  {
> > irq_desc_t *desc = irq_desc + irq;
> > +
> > +   desc->chip->ipipe_ack(irq);
> > +
> 
> Might be NULL for some chips like fasteoi ones, no?
> 

Not for x86, ipipe_ack is always valid, and __ipipe_ack_irq() which
calls it is arch-dependent, so we are safe. Controlling the irq_chip
descriptor, instead of generalizing the logic from the generic code
(which could not have told you that), is an advantage here.

> > if (desc->handle_irq == &handle_fasteoi_irq)
> > desc->chip->ipipe_eoi(irq);
> > -   else
> > -   desc->chip->ipipe_ack(irq);
> > +
> > return 1;
> >  }
> >  
> > This is not relevant to the way we hook genirq though; we need to handle
> > this in the primary I-pipe handler anyway.
> > 
> >>  Generally, the question remains for me
> >> if at least IRQs arriving to a stalled I-pipe domain should always be
> >> masked on ack and unmask when being replayed.
> > 
> > What particular (problematic) case do you have in mind regarding this?
> 
> Recurring IRQs of the same source to a stalled domain, potentially
> disturbing a higher prio domain (even if they do not get beyond I-pipe
> core stubs). -rt has to deal with the same issue for low-prio threaded
> IRQs, and I don't see a reason yet why we should differ when they keep
> the line masked after the first or the second event.
> 

This problem should be addressed at a higher level: how do we prevent
low-priority IRQs from ever happening when a high priority domain is
running, so that such IRQs cannot even hit the primary Adeos handler?
Currently, the pipeline head optimization avoids such perturbation when
the high priority domain is stalled by masking the interrupts at CPU
level; what we need is to extend this to the unstalled state.

This issue goes beyond the quick hack of masking recurring IRQs, and
IIRC, we discussed it briefly on this list some months ago. This is what
would bring a significant improvement wrt determinism.

> Jan
> 
-- 
Philippe.





[Xenomai-core] Re: How to hook genirq best

2006-12-06 Thread Jan Kiszka
Philippe Gerum wrote:
> On Wed, 2006-12-06 at 10:01 +0100, Jan Kiszka wrote:
>> Hi all,
> 
>> I had a look at the related part in 2.6.19-i386-1.6-01 meanwhile, and
>> there seems to be a concise pattern for the irq_chip changes:
>>
>> .ipipe_ack = (.mask_ack) ? .mask_ack : .ack;
>> .ipipe_eoi = .eoi;
>>
> 
> The complete pattern is:
> 
> .ipipe_ack = .mask_ack | .ack
> .ipipe_eoi = .eoi
> .mask_ack | .ack = nop

Yep, forgot to mention the nop-setting.

> 
> Then, target routines from the addressed IRQ chip controllers have to be
> ironed with hw interrupt masking for spinlock control.
> 
>> I agree with Wolfgang, these changes indeed require a lot of patching
>> already on x86 because often two versions of the irq_chip definition
>> have to be provided (and there is the risk to miss one as it currently
>> seems to be the case in visws_apic.c).
> 
> The Cobalt PIIx4 has not been patched on purpose: it deals with a
> master multiplexed interrupt which feeds a number of virtual IRQs
> downward. The problem is that this scheme requires specific I-pipe
> handling (like we added for ARM) to act as the demux, which is not
> available yet for x86. So better not patch than patching erroneously or
> incompletely; it's the principle of least surprise: Cobalt+PIIx4 is not
> usable yet with the I-pipe, and one would know it immediately.

I see.

> 
> Finding the appropriate spots for patching the descriptors is basically
> a matter of grepping 'struct irq_chip' in the arch-dep section. It's
> manageable.
> 
>> I must confess that I do not see the advantage of the approach to patch
>> the irq_chip definitions directly. Rather, one of the following ways
>> should scale better over a large number of irq_chips:
>>
>> A) Perform patching during runtime when I-pipe is taking over the IRQs.
>> Should be a trivial loop over all IRQ descriptors. Either overwrite the
>> existing irq_chips (i.e. like sketched above) or provide new pseudo
>> irq_chip structures to Linux.
> 
> This approach - which used to be the one followed by legacy Adeos
> patches - was the source of major malfunctions at boot time on x86,
> particularly with PCI irq routing on APIC-enabled systems. Same with MSI
> stuff. It still works fine on ppc (and likely ARM) though. This said, at
> that time, we used to shuffle the IDT contents too, so maybe it's time
> to have a second look.

Yes, because the irq_chip is not the IDT. We would patch static function
pointers a few levels up now.

> 
>> B) Patch the users of chip->ack/eoi/mask_ack or whatever. Given that
>> this should basically be confined to kernel/irq/chip.c,
> 
> This was the implementation of the draft patch I sent. It's ok, as soon
> as you can rely on the above assumption. That's the whole point.
> 
>>  it looks
>> manageable too and would also avoid any useless runtime checks for NULL
>> handlers or even calling void functions.
>>
> 
> AFAIC, this is something which is perfectly acceptable for platforms
> that require/need it. This is the purpose of decoupling Adeos ports:
> allow each arch to implement the best approach. Blackfin, x86, ia64 have
> few PICs, so patching the irq_chip descriptor is more elegant than
> fiddling with the generic IRQ flow control (/me think), basically
> because this is a per-arch, per-PIC logic. ARM and ppc would likely
> prefer the other approach due to the huge number of PIC variants.
> 
> The other important issue is that patching the call sites does not
> preclude analysing each and every PIC control routine, for ironing
> them. When the number of PICs is small enough, it's clearly safer to
> have all changes at one location (i.e. the PIC management file). At
> least, you know what has been adapted; but it's (only) a matter of
> (maintainer's) taste.

Having a look at new/updated PIC code is one (unavoidable) thing,
actually having to touch it and keeping the changes in sync with even
only minor style changes to the code is another.

> 
>> Another topic is how to deal with pending IRQs. I haven't looked through
>> all cases yet, but already the fasteoi one seems to differ in I-pipe
>> when comparing to a similar situation in -rt: threaded IRQs. -rt ends
>> (eoi) and masks such IRQs that are about to be deferred to thread
>> context, I-pipe only ends them.
> 
> --- arch/i386/kernel/ipipe.c~ 2006-12-03 18:12:59.0 +0100
> +++ arch/i386/kernel/ipipe.c  2006-12-06 12:36:21.0 +0100
> @@ -167,10 +167,12 @@
>  static int __ipipe_ack_irq(unsigned irq)
>  {
>   irq_desc_t *desc = irq_desc + irq;
> +
> + desc->chip->ipipe_ack(irq);
> +

Might be NULL for some chips like fasteoi ones, no?

>   if (desc->handle_irq == &handle_fasteoi_irq)
>   desc->chip->ipipe_eoi(irq);
> - else
> - desc->chip->ipipe_ack(irq);
> +
>   return 1;
>  }
>  
> This is not relevant to the way we hook genirq though; we need to handle
> this in the primary I-pipe handler anyway.
> 
>>  Generally, the question remains for me

[Xenomai-core] Re: How to hook genirq best

2006-12-06 Thread Philippe Gerum
On Wed, 2006-12-06 at 10:01 +0100, Jan Kiszka wrote:
> Hi all,

> I had a look at the related part in 2.6.19-i386-1.6-01 meanwhile, and
> there seems to be a concise pattern for the irq_chip changes:
> 
> .ipipe_ack = (.mask_ack) ? .mask_ack : .ack;
> .ipipe_eoi = .eoi;
> 

The complete pattern is:

.ipipe_ack = .mask_ack | .ack
.ipipe_eoi = .eoi
.mask_ack | .ack = nop

Then, target routines from the addressed IRQ chip controllers have to be
ironed with hw interrupt masking for spinlock control.

> I agree with Wolfgang, these changes indeed require a lot of patching
> already on x86 because often two versions of the irq_chip definition
> have to be provided (and there is the risk to miss one as it currently
> seems to be the case in visws_apic.c).

The Cobalt PIIx4 has not been patched on purpose: it deals with a
master multiplexed interrupt which feeds a number of virtual IRQs
downward. The problem is that this scheme requires specific I-pipe
handling (like we added for ARM) to act as the demux, which is not
available yet for x86. So better not patch than patching erroneously or
incompletely; it's the principle of least surprise: Cobalt+PIIx4 is not
usable yet with the I-pipe, and one would know it immediately.

Finding the appropriate spots for patching the descriptors is basically
a matter of grepping 'struct irq_chip' in the arch-dep section. It's
manageable.

> 
> I must confess that I do not see the advantage of the approach to patch
> the irq_chip definitions directly. Rather, one of the following ways
> should scale better over a large number of irq_chips:
> 
> A) Perform patching during runtime when I-pipe is taking over the IRQs.
> Should be a trivial loop over all IRQ descriptors. Either overwrite the
> existing irq_chips (i.e. like sketched above) or provide new pseudo
> irq_chip structures to Linux.

This approach - which used to be the one followed by legacy Adeos
patches - was the source of major malfunctions at boot time on x86,
particularly with PCI irq routing on APIC-enabled systems. Same with MSI
stuff. It still works fine on ppc (and likely ARM) though. This said, at
that time, we used to shuffle the IDT contents too, so maybe it's time
to have a second look.

> 
> B) Patch the users of chip->ack/eoi/mask_ack or whatever. Given that
> this should basically be confined to kernel/irq/chip.c,

This was the implementation of the draft patch I sent. It's ok, as soon
as you can rely on the above assumption. That's the whole point.

>  it looks
> manageable too and would also avoid any useless runtime checks for NULL
> handlers or even calling void functions.
> 

AFAIC, this is something which is perfectly acceptable for platforms
that require/need it. This is the purpose of decoupling Adeos ports:
allow each arch to implement the best approach. Blackfin, x86, ia64 have
few PICs, so patching the irq_chip descriptor is more elegant than
fiddling with the generic IRQ flow control (/me think), basically
because this is a per-arch, per-PIC logic. ARM and ppc would likely
prefer the other approach due to the huge number of PIC variants.

The other important issue is that patching the call sites does not
preclude analysing each and every PIC control routine, for ironing
them. When the number of PICs is small enough, it's clearly safer to
have all changes at one location (i.e. the PIC management file). At
least, you know what has been adapted; but it's (only) a matter of
(maintainer's) taste.

> 
> Another topic is how to deal with pending IRQs. I haven't looked through
> all cases yet, but already the fasteoi one seems to differ in I-pipe
> when comparing to a similar situation in -rt: threaded IRQs. -rt ends
> (eoi) and masks such IRQs that are about to be deferred to thread
> context, I-pipe only ends them.

--- arch/i386/kernel/ipipe.c~   2006-12-03 18:12:59.0 +0100
+++ arch/i386/kernel/ipipe.c2006-12-06 12:36:21.0 +0100
@@ -167,10 +167,12 @@
 static int __ipipe_ack_irq(unsigned irq)
 {
irq_desc_t *desc = irq_desc + irq;
+
+   desc->chip->ipipe_ack(irq);
+
if (desc->handle_irq == &handle_fasteoi_irq)
desc->chip->ipipe_eoi(irq);
-   else
-   desc->chip->ipipe_ack(irq);
+
return 1;
 }
 
This is not relevant to the way we hook genirq though; we need to handle
this in the primary I-pipe handler anyway.

>  Generally, the question remains for me
> if at least IRQs arriving to a stalled I-pipe domain should always be
> masked on ack and unmask when being replayed.

What particular (problematic) case do you have in mind regarding this?

> 
> Jan
> 
-- 
Philippe.


