Hi everybody, I'm a Xenomai newbie and I'm trying to write an RTDM driver starting from one of the examples (the one from Jan Kiszka, www.captain.at).
In the driver's private data I am using an rtdm_event_t irq_event object to wake up a user task waiting for the event notification that comes with an interrupt. The event object is initialized when the driver's open handler is called:

    rtdm_event_init(&my_context->irq_event, 0);

The interrupt handler does this:

    rtdm_event_signal(&my_context->irq_event);
    events++;

The driver's ioctl function for WAIT does this:

    ret = rtdm_event_wait(&my_context->irq_event);

The interrupt management works well: I am able to read the "events" interrupt counter increasing with a driver ioctl GET_EVENTS from an RT user task. Problems come when an RT task calls the ioctl for WAIT. These are the messages at the top of the trace (the complete trace fills about 180-200K of text file...):

----------------------------------------
BUSINT_WAIT : entry rtdm_event_wait &my_context->irq_event = C14F6668
----------------------------------------
----------------------------------------
BUSINT_WAIT : exit rtdm_event_wait ret = 00000000 t1 = 0000000B-14FB6A7A t2 = 0000000B-14FDF064
----------------------------------------
Unable to handle kernel paging request at virtual address ffffffe4
pgd = c0004000
[ffffffe4] *pgd=d0002031, *pte=00000000, *ppte=00000000
Internal error: Oops: 17 [#1]
Modules linked in:
CPU: 0
PC is at xnpod_schedule+0x1e4/0x7a8
LR is at 0xc02a40ac
pc : [<c0060c5c>]  lr : [<c02a40ac>]  Not tainted
sp : c0389dfc  ip : 00500082  fp : c0389e28
r10: c02a3aec  r9 : 00000001  r8 : fffffcec
r7 : c02a3b1c  r6 : c0246d08  r5 : c02a23c8  r4 : 00000000
r3 : c02a3af8  r2 : 60000093  r1 : c02a3af8  r0 : 00000000
Flags: nZCv  IRQs off  FIQs on  Mode SVC_32  Segment kernel
Control: C000717F  Table: D11F8000  DAC: 00000017
Process gatekeeper/0 (pid: 103, stack limit = 0xc0388194)
Stack: (0xc0389dfc to 0xc038a000)
9de0:                                                                c004678c
9e00: 00000000 c02a23c8 c0246d08 c02a23c0 c0246d20 ffffffff 00000c20 c0389e38
9e20: c0389e2c c0061efc c0060a88 c0389e5c c0389e3c c005a590 c0061ed8 00000013
9e40: 00000062 00000000 c02a23c0 00000001 c0389e88 c0389e60 c0027040 c005a4c8
9e60: 00000013 c12deac0 00000000 c02a42f0 00000000 ffffffff 00000c20 c0389e9c
9e80: c0389e8c c0026a24 c0026fbc c0246d00 c0389ecc c0389ea0 c0060aa8 c00269dc
9ea0: c02a42f0 c0246d00 c12deac0 00000000 c02a42f0 00000000 ffffffff 00000c20
9ec0: c0389ef0 c0389ed0 c0065608 c0060a88 00000001 00000003 00000061 c0246d20
9ee0: 00000000 c0389f04 c0389ef4 c005b364 c0065518 c0246d28 c0389f30 c0389f08
9f00: c0059d28 c005b2d8 c0246d20 c02a23c0 20000093 c0246d08 c0388000 00000000
9f20: c0246d00 c0389f4c c0389f34 c0059fbc c0059b58 c02a23c0 c0246d20 c0246d20
9f40: c0389f6c c0389f50 c005a168 c0059f4c c03c0d2c c02a42c8 00000000 c02a42bc
9f60: c0389f7c c0389f70 c005a264 c005a10c c0389fc0 c0389f80 c00658c4 c005a1c8
9f80: 00000001 3adf4500 00000001 c0334820 c003ab6c c02a42c0 c02a42c0 c02a42bc
9fa0: c0388000 c0065730 c0309f40 00000001 fffffffc c0389ff4 c0389fc4 c0051ef0
9fc0: c0065740 00000001 ffffffff ffffffff 00000000 00000000 00000000 00000000
9fe0: 00000000 00000000 00000000 c0389ff8 c003fef4 c0051e14 00000004 000001a4
Backtrace:
[<c0060a78>] (xnpod_schedule+0x0/0x7a8) from [<c0061efc>] (xnpod_schedule_handler+0x34/0x3c)
[<c0061ec8>] (xnpod_schedule_handler+0x0/0x3c) from [<c005a590>] (__ipipe_dispatch_wired+0xd8/0xfc)
[<c005a4b8>] (__ipipe_dispatch_wired+0x0/0xfc) from [<c0027040>] (__ipipe_handle_irq+0x94/0x1b4)
 r8 = 00000001  r7 = C02A23C0  r6 = 00000000  r5 = 00000062  r4 = 00000013
[<c0026fac>] (__ipipe_handle_irq+0x0/0x1b4) from [<c0026a24>] (ipipe_trigger_irq+0x58/0x68)
[<c00269cc>] (ipipe_trigger_irq+0x0/0x68) from [<c0060aa8>] (xnpod_schedule+0x30/0x7a8)
 r4 = C0246D00
[<c0060a78>] (xnpod_schedule+0x0/0x7a8) from [<c0065608>] (lostage_handler+0x100/0x19c)
[<c0065508>] (lostage_handler+0x0/0x19c) from [<c005b364>] (rthal_apc_handler+0x9c/0xb8)
 r8 = 00000000  r7 = C0246D20  r6 = 00000061  r5 = 00000003  r4 = 00000001
[<c005b2c8>] (rthal_apc_handler+0x0/0xb8) from [<c0059d28>] (__ipipe_sync_stage+0x1e0/0x268)
 r4 = C0246D28
[<c0059b48>] (__ipipe_sync_stage+0x0/0x268) from [<c0059fbc>] (ipipe_suspend_domain+0x80/0xac)
[<c0059f3c>] (ipipe_suspend_domain+0x0/0xac) from [<c005a168>] (__ipipe_walk_pipeline+0x6c/0xbc)
 r6 = C0246D20  r5 = C0246D20  r4 = C02A23C0
[<c005a0fc>] (__ipipe_walk_pipeline+0x0/0xbc) from [<c005a264>] (__ipipe_restore_pipeline_head+0xac/0xc8)
 r7 = C02A42BC  r6 = 00000000  r5 = C02A42C8  r4 = C03C0D2C
[<c005a1b8>] (__ipipe_restore_pipeline_head+0x0/0xc8) from [<c00658c4>] (gatekeeper_thread+0x194/0x1b4)
[<c0065730>] (gatekeeper_thread+0x0/0x1b4) from [<c0051ef0>] (kthread+0xec/0x11c)
[<c0051e04>] (kthread+0x0/0x11c) from [<c003fef4>] (do_exit+0x0/0xab4)
Code: e5913008 e2433001 e5813008 e2408fc5 (e59832f8)
Unable to handle kernel NULL pointer dereference at virtual address 00000000
pgd = c0004000
[00000000] *pgd=00000000
Internal error: Oops: 817 [#2]
Modules linked in:
CPU: 0
PC is at do_exit+0xa74/0xab4
LR is at vsnprintf+0x324/0x5d0
pc : [<c0040968>]  lr : [<c012d394>]  Not tainted
sp : c0389c7c  ip : c0389ba8  fp : c0389c98
r10: c0388000  r9 : c0334888  r8 : c0305d60
r7 : c0334888  r6 : c0334820  r5 : 00000020  r4 : c0389c84
r3 : 00000000  r2 : c020c3ba  r1 : c029bc7e  r0 : 0000002e
Flags: nzCv  IRQs on  FIQs on  Mode SVC_32  Segment kernel
Control: C000717F  Table: D11F8000  DAC: 00000017
Process gatekeeper/0 (pid: 103, stack limit = 0xc0388194)
Stack: (0xc0389c7c to 0xc038a000)
9c60:                                                       c005a024
9c80: c03348cc c0389c84 c0389c84 c0389cac c0389c9c c0025dd4 c003ff04 00000000
9ca0: c0389ccc c0389cb0 c0028050 c0025bc0 ffffffff c0334820 00000017 c02a3b1c
9cc0: c0389d04 c0389cd0 c00283c0 c0027ff4 c0292248 c0389cf0 00000017 ffffffff
9ce0: c0242004 00000017 c02a3b1c ffffffe4 60000093 c0389db4 c0389db0 c0389d08
9d00: c002867c c0028174 c0246d20 00000000 c0389d48 c0389d20 c0059d28 c0021d50
9d20: c033ff70 000000d0 00000000 c02de800 c02b88d0 c0389df4 c0389dec c0389d58
9d40: c0389d4c c005a024 c0059b58 c0389d68 c0389d5c c005a058 c0059ff8 c0389d84
9d60: c0389d6c c008e884 c005a048 00000000 c02de800 c0389df4 c0389dac c0389d98
9d80: c0389d9c c0389d90 c005a058 ffffffff c0389de8 c0246d08 c02a3b1c fffffcec
9da0: c02a3aec c0389e28 c0389db4 c0020960 c0028650 00000000 c02a3af8 60000093
9dc0: c02a3af8 00000000 c02a23c8 c0246d08 c02a3b1c fffffcec 00000001 c02a3aec
9de0: c0389e28 00500082 c0389dfc c02a40ac c0060c5c 60000093 ffffffff c004678c
9e00: 00000000 c02a23c8 c0246d08 c02a23c0 c0246d20 ffffffff 00000c20 c0389e38
9e20: c0389e2c c0061efc c0060a88 c0389e5c c0389e3c c005a590 c0061ed8 00000013
9e40: 00000062 00000000 c02a23c0 00000001 c0389e88 c0389e60 c0027040 c005a4c8
9e60: 00000013 c12deac0 00000000 c02a42f0 00000000 ffffffff 00000c20 c0389e9c
9e80: c0389e8c c0026a24 c0026fbc c0246d00 c0389ecc c0389ea0 c0060aa8 c00269dc
9ea0: c02a42f0 c0246d00 c12deac0 00000000 c02a42f0 00000000 ffffffff 00000c20
9ec0: c0389ef0 c0389ed0 c0065608 c0060a88 00000001 00000003 00000061 c0246d20
9ee0: 00000000 c0389f04 c0389ef4 c005b364 c0065518 c0246d28 c0389f30 c0389f08
9f00: c0059d28 c005b2d8 c0246d20 c02a23c0 20000093 c0246d08 c0388000 00000000
9f20: c0246d00 c0389f4c c0389f34 c0059fbc c0059b58 c02a23c0 c0246d20 c0246d20
9f40: c0389f6c c0389f50 c005a168 c0059f4c c03c0d2c c02a42c8 00000000 c02a42bc
9f60: c0389f7c c0389f70 c005a264 c005a10c c0389fc0 c0389f80 c00658c4 c005a1c8
9f80: 00000001 3adf4500 00000001 c0334820 c003ab6c c02a42c0 c02a42c0 c02a42bc
9fa0: c0388000 c0065730 c0309f40 00000001 fffffffc c0389ff4 c0389fc4 c0051ef0
9fc0: c0065740 00000001 ffffffff ffffffff 00000000 00000000 00000000 00000000
9fe0: 00000000 00000000 00000000 c0389ff8 c003fef4 c0051e14 00000004 000001a4
Backtrace:
[<c003fef4>] (do_exit+0x0/0xab4) from [<c0025dd4>] (die+0x224/0x260)
[<c0025bb0>] (die+0x0/0x260) from [<c0028050>] (__do_kernel_fault+0x6c/0x7c)
[<c0027fe4>] (__do_kernel_fault+0x0/0x7c) from [<c00283c0>] (do_page_fault+0x25c/0x27c)
 r7 = C02A3B1C  r6 = 00000017  r5 = C0334820  r4 = FFFFFFFF
[<c0028164>] (do_page_fault+0x0/0x27c) from [<c002867c>] (do_DataAbort+0x3c/0x118)
[<c0028640>] (do_DataAbort+0x0/0x118) from [<c0020960>] (__dabt_svc+0x40/0x60)
[<c0060a78>] (xnpod_schedule+0x0/0x7a8) from [<c0061efc>] (xnpod_schedule_handler+0x34/0x3c)
[<c0061ec8>] (xnpod_schedule_handler+0x0/0x3c) from [<c005a590>] (__ipipe_dispatch_wired+0xd8/0xfc)
[<c005a4b8>] (__ipipe_dispatch_wired+0x0/0xfc) from [<c0027040>] (__ipipe_handle_irq+0x94/0x1b4)
 r8 = 00000001  r7 = C02A23C0  r6 = 00000000  r5 = 00000062  r4 = 00000013
[<c0026fac>] (__ipipe_handle_irq+0x0/0x1b4) from [<c0026a24>] (ipipe_trigger_irq+0x58/0x68)
[<c00269cc>] (ipipe_trigger_irq+0x0/0x68) from [<c0060aa8>] (xnpod_schedule+0x30/0x7a8)
 r4 = C0246D00
[<c0060a78>] (xnpod_schedule+0x0/0x7a8) from [<c0065608>] (lostage_handler+0x100/0x19c)
[<c0065508>] (lostage_handler+0x0/0x19c) from [<c005b364>] (rthal_apc_handler+0x9c/0xb8)
 r8 = 00000000  r7 = C0246D20  r6 = 00000061  r5 = 00000003  r4 = 00000001
[<c005b2c8>] (rthal_apc_handler+0x0/0xb8) from [<c0059d28>] (__ipipe_sync_stage+0x1e0/0x268)
 r4 = C0246D28
[<c0059b48>] (__ipipe_sync_stage+0x0/0x268) from [<c0059fbc>] (ipipe_suspend_domain+0x80/0xac)
[<c0059f3c>] (ipipe_suspend_domain+0x0/0xac) from [<c005a168>] (__ipipe_walk_pipeline+0x6c/0xbc)
 r6 = C0246D20  r5 = C0246D20  r4 = C02A23C0
[<c005a0fc>] (__ipipe_walk_pipeline+0x0/0xbc) from [<c005a264>] (__ipipe_restore_pipeline_head+0xac/0xc8)
 r7 = C02A42BC  r6 = 00000000  r5 = C02A42C8  r4 = C03C0D2C
[<c005a1b8>] (__ipipe_restore_pipeline_head+0x0/0xc8) from [<c00658c4>] (gatekeeper_thread+0x194/0x1b4)
[<c0065730>] (gatekeeper_thread+0x0/0x1b4) from [<c0051ef0>] (kthread+0xec/0x11c)
[<c0051e04>] (kthread+0x0/0x11c) from [<c003fef4>] (do_exit+0x0/0xab4)
Code: e3833008 e586300c eb0708de e3a03000 (e5833000)
Unable to handle kernel NULL pointer dereference at virtual address 00000004
pgd = c0004000
[00000004] *pgd=00000000
Internal error: Oops: 17 [#3]
Modules linked in:
…

The first messages are printk output from my ioctl: they trace the rtdm_event_wait call, with the function's input parameter, the return value, and the system timer value before (t1) and after (t2) the call.
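For context, this is roughly how the driver pieces described above fit together against the Xenomai 2.x RTDM API. It is only a sketch reconstructed from my description: the context type, handler names, and ioctl structure (busint_context_t, busint_open, busint_irq_handler, busint_wait) are placeholders, not my actual source:

```c
/* Sketch of the wait path described above (Xenomai 2.x RTDM API).
 * All names except the rtdm_* services are placeholders. */
#include <rtdm/rtdm_driver.h>

typedef struct busint_context {
    rtdm_event_t irq_event;   /* signalled from the IRQ handler      */
    unsigned long events;     /* interrupt counter read via GET_EVENTS */
} busint_context_t;

/* open handler: initialise the event before the first IRQ can fire */
static int busint_open(struct rtdm_dev_context *context,
                       rtdm_user_info_t *user_info, int oflags)
{
    busint_context_t *my_context =
        (busint_context_t *)context->dev_private;

    rtdm_event_init(&my_context->irq_event, 0);
    my_context->events = 0;
    return 0;
}

/* interrupt handler: wake up the waiter and count the event */
static int busint_irq_handler(rtdm_irq_t *irq_handle)
{
    busint_context_t *my_context =
        rtdm_irq_get_arg(irq_handle, busint_context_t);

    rtdm_event_signal(&my_context->irq_event);
    my_context->events++;
    return RTDM_IRQ_HANDLED;
}

/* ioctl WAIT path: block the caller until the next interrupt */
static int busint_wait(busint_context_t *my_context)
{
    return rtdm_event_wait(&my_context->irq_event);
}
```

(Kernel-side code; it needs the Xenomai build environment and is not compilable standalone.)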
Obviously the problem is not linked to the presence of these messages. The problem comes immediately, at the first call to ioctl WAIT from the RT user task. It seems that rtdm_event_wait returns very quickly (always after about 160-170 microseconds), but this happens even if I remove the interrupt source. The problem occurs in the same way even if I remove the rtdm_event_signal call from the driver!

I also tried modifying the waiting code to

    ret = rtdm_event_timedwait(&my_context->irq_event, timeout, NULL);

or changing the event object to a semaphore object:

    rtdm_sem_t irq_sem;
    rtdm_sem_init(&my_context->irq_sem, 0);
    ret = rtdm_sem_down(&my_context->irq_sem);

But the system always crashes with the same trace. It only survives if I remove the ioctl WAIT call from the RT user task. On return from ioctl WAIT, the RT user task simply increments a user event counter, sleeps (rt_task_sleep), and then goes back to the ioctl WAIT. But it seems that the system never returns from the driver's ioctl function to the RT user task code.

My system is Linux 2.6.15 with Xenomai 2.3-b50 compiled for ARM ep93xx. I also checked Xenomai 2.3-rc1, but the problem is the same.

Sorry for my English, and thanks in advance for any help.

Regards,
Alberto Tomasi
_______________________________________________
Xenomai-help mailing list
[email protected]
https://mail.gna.org/listinfo/xenomai-help
