Hey Jan,

Unfortunately, assigning IRQ 16 to be handled by only one CPU (which is
what I understand is the result of "echo 1 > /proc/irq/16/smp_affinity") did
not work.
After 3 hours of operation the event got triggered again and disk access
performance decreased. See below...

Well, I don't know what to try next. Please let me know if you have any
ideas on how to debug this.

Thanks so much,
 peter


###############################################################

[13536.435172] Pid: 0, comm: swapper Tainted: P    2.6.35-ipipe-2.5.4-slim #2
[13536.435174] Call Trace:
[13536.435176]  <IRQ>  [<ffffffff8108bb56>] __report_bad_irq+0x26/0xa0
[13536.435183]  [<ffffffff8108bd5c>] note_interrupt+0x18c/0x1d0
[13536.435185]  [<ffffffff8108c77d>] handle_fasteoi_irq+0xcd/0x100
[13536.435188]  [<ffffffff8100656d>] handle_irq+0x1d/0x30
[13536.435190]  [<ffffffff81005a40>] do_IRQ+0x70/0x100
[13536.435192]  [<ffffffff81092147>] __ipipe_sync_stage+0x207/0x20d
[13536.435194]  [<ffffffff810059d0>] ? do_IRQ+0x0/0x100
[13536.435196]  [<ffffffff8109214d>] ? __xirq_end+0x0/0x9c
[13536.435197]  [<ffffffff810059d0>] ? do_IRQ+0x0/0x100
[13536.435199]  [<ffffffff810926a3>] __ipipe_walk_pipeline+0x113/0x120
[13536.435202]  [<ffffffff81024414>] __ipipe_handle_irq+0x124/0x310
[13536.435204]  [<ffffffff8108bf10>] ? __ipipe_ack_fasteoi_irq+0x0/0x10
[13536.435207]  [<ffffffff814f78d3>] common_interrupt+0x13/0x2c
[13536.435208]  <EOI>  [<ffffffff810249d6>] ? __ipipe_halt_root+0x26/0x40
[13536.435213]  [<ffffffff81061191>] ? atomic_notifier_call_chain+0x11/0x20
[13536.435217]  [<ffffffff8100cbd5>] default_idle+0x45/0x50
[13536.435220]  [<ffffffff8100198a>] cpu_idle+0x7a/0xd0
[13536.435224]  [<ffffffff814e25fc>] rest_init+0xbc/0xd0
[13536.435228]  [<ffffffff817cbe98>] start_kernel+0x47a/0x486
[13536.435230]  [<ffffffff817cb31c>] x86_64_start_reservations+0x12c/0x130
[13536.435232]  [<ffffffff817cb432>] x86_64_start_kernel+0x112/0x119
[13536.435233] handlers:
[13536.435234] [<ffffffff8136ed60>] (usb_hcd_irq+0x0/0xb0)
[13536.435238] [<ffffffffa00c4c30>] (mpt_interrupt+0x0/0xa00 [mptbase])
[13536.435251] Disabling IRQ #16
r...@mandy:~# cat /proc/irq/16/smp_affinity
01
r...@mandy:~# hdparm -t /dev/sdg

/dev/sdg:
 Timing buffered disk reads:    8 MB in  3.21 seconds =   2.49 MB/sec

###############################################################




On Mon, Oct 25, 2010 at 2:43 PM, Philippe Gerum <[email protected]> wrote:

> On Mon, 2010-10-25 at 23:26 +0200, Jan Kiszka wrote:
> > Am 25.10.2010 23:21, Philippe Gerum wrote:
> > > On Mon, 2010-10-25 at 21:40 +0200, Jan Kiszka wrote:
> > >> Am 25.10.2010 21:03, Peter Pastor wrote:
> > >>> Hey Jan,
> > >>>
> > >>> I did not apply any ubuntu patch for kernel 2.6.35 (since I do not
> > >>> have one).  Also, good to know that I should not use xenomai patches
> > >>> together with ubuntu patches.
> > >>>
> > >>> Anyway, the problem occurred as well with the kernel 2.6.35 (see
> > >>> attached dmesg_bad_2.6.35)
> > >>> I also attached the config.
> > >>>
> > >>
> > >> ...
> > >>
> > >>> [ 5751.714643] irq 16: nobody cared (try booting with the "irqpoll" option)
> > >>> [ 5751.714649] Pid: 0, comm: swapper Tainted: P    2.6.35-ipipe-2.5.4-slim #2
> > >>> [ 5751.714653] Call Trace:
> > >>> [ 5751.714655]  <IRQ>  [<ffffffff8108bb56>] __report_bad_irq+0x26/0xa0
> > >>> [ 5751.714668]  [<ffffffff8108bd5c>] note_interrupt+0x18c/0x1d0
> > >>> [ 5751.714672]  [<ffffffff8108c77d>] handle_fasteoi_irq+0xcd/0x100
> > >>> [ 5751.714677]  [<ffffffff8100656d>] handle_irq+0x1d/0x30
> > >>> [ 5751.714681]  [<ffffffff81005a40>] do_IRQ+0x70/0x100
> > >>> [ 5751.714685]  [<ffffffff81092147>] __ipipe_sync_stage+0x207/0x20d
> > >>> [ 5751.714689]  [<ffffffff810059d0>] ? do_IRQ+0x0/0x100
> > >>> [ 5751.714692]  [<ffffffff8109214d>] ? __xirq_end+0x0/0x9c
> > >>> [ 5751.714696]  [<ffffffff810059d0>] ? do_IRQ+0x0/0x100
> > >>> [ 5751.714700]  [<ffffffff810926a3>] __ipipe_walk_pipeline+0x113/0x120
> > >>> [ 5751.714706]  [<ffffffff81024414>] __ipipe_handle_irq+0x124/0x310
> > >>> [ 5751.714708]  [<ffffffff8108bf10>] ? __ipipe_ack_fasteoi_irq+0x0/0x10
> > >>> [ 5751.714712]  [<ffffffff814f78d3>] common_interrupt+0x13/0x2c
> > >>> [ 5751.714713]  <EOI>  [<ffffffff810249d6>] ? __ipipe_halt_root+0x26/0x40
> > >>> [ 5751.714718]  [<ffffffff81061191>] ? atomic_notifier_call_chain+0x11/0x20
> > >>> [ 5751.714722]  [<ffffffff8100cbd5>] default_idle+0x45/0x50
> > >>> [ 5751.714725]  [<ffffffff8100198a>] cpu_idle+0x7a/0xd0
> > >>> [ 5751.714728]  [<ffffffff814f14a1>] start_secondary+0x1c1/0x1c5
> > >>> [ 5751.714730] handlers:
> > >>> [ 5751.714730] [<ffffffff8136ed60>] (usb_hcd_irq+0x0/0xb0)
> > >>> [ 5751.714735] [<ffffffffa00bac30>] (mpt_interrupt+0x0/0xa00 [mptbase])
> > >>> [ 5751.714747] Disabling IRQ #16
> > >>
> > >> I'm not yet sure, but a first thought: We have a shared fasteoi IRQ
> > >> here, and we are on SMP. Compared to vanilla, the fasteoi flow of
> > >> ipipe looks so much different to me ATM that I tend to believe two
> > >> cores end up having this IRQ queued at the same time. One runs first
> > >> and handles all triggers, the second bails out like above.
> > >>
> > >> Philippe, we _end_ fasteoi in the ipipe ack path. Do we mask them
> > >> prior to this? What prevents a second IRQ arriving after this early eoi?
> > >
> > > All fasteoi handlers are supposed to mask+ack when the pipeline is
> > > enabled,
> >
> > What am I missing? The code I was looking at (__ipipe_ack_fasteoi) just
> > does a regular eoi at chip level.
>
> Have a look at the chip handlers.
>
> >
> > > to avoid interrupt storm due to the deferral we may introduce
> > > in the irq delivery. I do see this in the regular ioapic chip
> > > descriptor, but this is lacking with interrupt remap. I guess we could
> > > have a problem with Intel IOMMUs.
> >
> > IOMMUs should blow up the system anyway once a PCI driver is used in the
> > RT domain (DMA remapping involved Linux locks and may even allocate
> > memory). Guess we should add a !IPIPE to their Kconfig entries.
> >
>
> Yes, would make sense.
>
> > Jan
> >
>
> --
> Philippe.
>
>
>
_______________________________________________
Xenomai-help mailing list
[email protected]
https://mail.gna.org/listinfo/xenomai-help
