Karl Reichert wrote:
> Jan Kiszka wrote:
>> Karl Reichert wrote:
...
>>> I put the freeze after the sleep now, please see attached files. This is
>>> what printk gives:
>>> [ 7605.737664] [REK debug] tdma->current_cycle_start = 1191326103544417349
>>> [ 7605.737708] [REK debug] job->offset = 2300000
>> So you sleep about 2300 us after the Sync frame reception. But your
>> frozen backtrace doesn't cover the full period, just look at the
>> timestamps. Once you can see the full period of 2300 us from falling
>> asleep until waking up and maybe also sending the packet (play with
>> back_trace_points and post_trace_points), you can be sure that the local
>> timing is correct.
> I'm sorry, I do not know how to read those files or where to look. The
> wiki page is a bit short on this topic. So I attached my frozen and
> max files again, this time with post_, pre_ and back_ points set to 1000.
> Please tell me where I can see that I really sleep 2300 us. Thanks in advance
> ...

See comments below. In short: timing looks ok.

> 
>> Then you will have to look at the data again that is received and
>> transmitted.
> What do you mean by that? Which data? I guess you aren't talking about
> Ethernet frames, as I already checked them via Wireshark. So which data is
> worth checking?

Compare the Wireshark capture (which you took on the wire? Or on one of
the nodes? Sorry, I don't recall anymore) with the data one node actually
sees: which cycle number it receives via the Sync frame and what it then
writes out. Next, check what the other side receives. Simply track the
flow and build your own picture of what is as expected and what might be
abnormal.
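
For instance, one extra printk at the point where the Sync frame is
parsed already shows you what this node believes the current cycle to
be. This is just a rough sketch (the variable names are made up, adapt
them to the actual tdma code):

    /* Illustrative only: log what this node extracted from the last
     * Sync frame (hypothetical names, not the real tdma structures). */
    printk("[REK debug] rx sync: cycle_no=%u cycle_start=%llu\n",
           (unsigned int)rx_cycle_no,
           (unsigned long long)rx_cycle_start);

Comparing that output with the cycle numbers in the Wireshark capture
shows you quickly at which point the flow diverges from what you expect.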

Sorry if my advice remains fuzzy, but I can't track this thread in all
its details.

Jan

--

The following is from "frozen" ("max" is also interesting, but not in
this context). Note that I won't let your mail pass to the list due to
its size. Better compress the trace next time.

...
> :    +end     0x8000000e -3762! 1463.108  default_idle+0x5a (cpu_idle+0x46)
> :|   +begin   0xfffffff4 -2299    0.249  common_interrupt+0x29 
> (default_idle+0x5c)

The NIC IRQ arrives. All IRQs go through common_interrupt; the "begin"
mark there carries 0xfffffff4, which is ~IRQ, i.e. IRQ 11.
Timestamp: about 2299 us before we froze the trace.
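
In case the encoding is not obvious: the mark stores the bitwise
complement of the IRQ number, so inverting it gives the IRQ back. A tiny
stand-alone illustration:

    #include <stdio.h>

    int main(void)
    {
        unsigned int mark = 0xfffffff4; /* value from the "begin" event */
        unsigned int irq  = ~mark;      /* the mark is ~IRQ, invert it  */

        printf("IRQ %u\n", irq);        /* prints: IRQ 11 */
        return 0;
    }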

> :|   +func               -2299    0.185  __ipipe_handle_irq+0x14 
> (common_interrupt+0x2e)
> :|   +func               -2299    0.377  __ipipe_ack_irq+0x8 
> (__ipipe_handle_irq+0xaf)
> :|   +func               -2298    0.181  __ipipe_ack_level_irq+0x12 
> (__ipipe_ack_irq+0x19)
> :|   +func               -2298    0.157  mask_and_ack_8259A+0x14 
> (__ipipe_ack_level_irq+0x22)
> :|   +func               -2298+   4.369  __ipipe_spin_lock_irqsave+0x9 
> (mask_and_ack_8259A+0x29)
> :|   #func               -2294    0.269  __ipipe_spin_unlock_irqrestore+0x9 
> (mask_and_ack_8259A+0x73)
> :|   +func               -2293    0.201  __ipipe_dispatch_wired+0x16 
> (__ipipe_handle_irq+0x68)
> :|  # func               -2293    0.257  xnintr_irq_handler+0xe 
> (__ipipe_dispatch_wired+0x9f)
> :|  # func               -2293+   1.033  e1000_intr+0x11 [rt_e1000] 
> (xnintr_irq_handler+0x3b)
> :|  # func               -2292+   1.375  xnpod_get_time+0x14 (e1000_intr+0x39 
> [rt_e1000])
> :|  # func               -2290+   1.193  rt_eth_type_trans+0xe [rtnet] 
> (e1000_intr+0xda [rt_e1000])
> :|  # func               -2289    0.711  rtnetif_rx+0x11 [rtnet] 
> (e1000_intr+0xf1 [rt_e1000])
> :|  # func               -2289    0.181  e1000_alloc_rx_buffers+0xe 
> [rt_e1000] (e1000_intr+0x42d [rt_e1000])
> :|  # func               -2288    0.559  alloc_rtskb+0x14 [rtnet] 
> (e1000_alloc_rx_buffers+0x149 [rt_e1000])
> :|  # func               -2288    0.319  e1000_alloc_rx_buffers+0xe 
> [rt_e1000] (e1000_intr+0x42d [rt_e1000])
> :|  # func               -2287    0.487  alloc_rtskb+0x14 [rtnet] 
> (e1000_alloc_rx_buffers+0x149 [rt_e1000])
> :|  # func               -2287    0.491  rtdm_event_signal+0xa 
> (e1000_intr+0x40a [rt_e1000])
> :|  # func               -2287+   3.526  xnsynch_flush+0xe 
> (rtdm_event_signal+0x39)
> :|  # func               -2283    0.205  xnpod_resume_thread+0xe 
> (xnsynch_flush+0x80)
> :|  # [   -1] -<?>-   98 -2283+   5.663  xnpod_resume_thread+0x56 
> (xnsynch_flush+0x80)

We wake up the stack manager task (priority 98; its name is unknown to
the tracer, hence "-<?>-").

> :|  # func               -2277+   1.011  xnpod_schedule+0xe 
> (rtdm_event_signal+0x65)
> :|  # func               -2276    0.351  rthal_irq_end+0x8 
> (xnintr_irq_handler+0x110)
> :|  # func               -2276    0.185  __ipipe_end_level_irq+0x8 
> (rthal_irq_end+0x22)
> :|  # func               -2276    0.159  enable_8259A_irq+0xa 
> (__ipipe_end_level_irq+0x18)
> :|  # func               -2275+   1.339  __ipipe_spin_lock_irqsave+0x9 
> (enable_8259A_irq+0x1f)
> :|  # func               -2274    0.265  __ipipe_spin_unlock_irqrestore+0x9 
> (enable_8259A_irq+0x9c)
> :|  # func               -2274    0.845  xnpod_schedule+0xe 
> (xnintr_irq_handler+0x12a)
> :|  # [ 3348] tdmacfg -1 -2273+   9.807  xnpod_schedule+0x90 
> (xnintr_irq_handler+0x12a)
> :|  # [   -1] -<?>-   98 -2263+   6.024  xnpod_schedule+0x554 
> (xnpod_suspend_thread+0x18b)
> :|  # func               -2257    0.425  __ipipe_restore_pipeline_head+0x11 
> (rtdm_event_timedwait+0x77)
> :|  + end     0x80000000 -2257+   2.308  __ipipe_restore_pipeline_head+0x7c 
> (rtdm_event_timedwait+0x77)
> :   + func               -2254+   1.963  rt_stack_deliver+0xe [rtnet] 
> (rt_stack_mgr_task+0x4d [rtnet])
> :|  + begin   0x80000000 -2252+   1.535  rt_stack_deliver+0x17b [rtnet] 
> (rt_stack_mgr_task+0x4d [rtnet])
> :|  # func               -2251+   3.952  rtcap_rx_hook+0xa [rtcap] 
> (rt_stack_deliver+0x44 [rtnet])
> :|  # func               -2247+   1.439  ipipe_trigger_irq+0xd 
> (rtcap_rx_hook+0x8d [rtcap])
> :|  # func               -2245+   1.511  __ipipe_handle_irq+0x14 
> (ipipe_trigger_irq+0x14c)
> :|  # func               -2244    0.307  __ipipe_walk_pipeline+0xe 
> (__ipipe_handle_irq+0x81)
> :|  # func               -2244    0.173  __ipipe_restore_pipeline_head+0x11 
> (rt_stack_deliver+0x16b [rtnet])
> :|  + end     0x80000000 -2244    0.249  __ipipe_restore_pipeline_head+0x7c 
> (rt_stack_deliver+0x16b [rtnet])
> :|  + begin   0x80000000 -2243    0.777  rt_stack_deliver+0x15d [rtnet] 
> (rt_stack_mgr_task+0x4d [rtnet])
> :|  # func               -2242    0.157  __ipipe_restore_pipeline_head+0x11 
> (rt_stack_deliver+0x197 [rtnet])
> :|  + end     0x80000000 -2242    0.685  __ipipe_restore_pipeline_head+0x7c 
> (rt_stack_deliver+0x197 [rtnet])
> :   + func               -2242+   1.361  rtmac_proto_rx+0xc [rtmac] 
> (rt_stack_deliver+0xf3 [rtnet])
> :   + func               -2240+   3.214  tdma_packet_rx+0x14 [tdma] 
> (rtmac_proto_rx+0x75 [rtmac])

The TDMA RX callback.

> :|  + begin   0x80000000 -2237    0.229  tdma_packet_rx+0x5c0 [tdma] 
> (rtmac_proto_rx+0x75 [rtmac])
> :|  # func               -2237    0.173  __ipipe_restore_pipeline_head+0x11 
> (tdma_packet_rx+0x5ac [tdma])
> :|  + end     0x80000000 -2237    0.195  __ipipe_restore_pipeline_head+0x7c 
> (tdma_packet_rx+0x5ac [tdma])
> :   + func               -2236    0.193  _rtdm_synch_flush+0x16 
> (tdma_packet_rx+0x173 [tdma])
> :|  + begin   0x80000000 -2236    0.205  _rtdm_synch_flush+0x8b 
> (tdma_packet_rx+0x173 [tdma])
> :|  # func               -2236+   1.601  xnsynch_flush+0xe 
> (_rtdm_synch_flush+0x44)
> :|  # func               -2234    0.167  xnpod_resume_thread+0xe 
> (xnsynch_flush+0x80)
> :|  # [   -1] -<?>-   99 -2234+   1.125  xnpod_resume_thread+0x56 
> (xnsynch_flush+0x80)
> :|  # func               -2233    0.559  xnpod_schedule+0xe 
> (_rtdm_synch_flush+0x4e)
> :|  # [   -1] -<?>-   98 -2233+   2.216  xnpod_schedule+0x90 
> (_rtdm_synch_flush+0x4e)
> :|  # [   -1] -<?>-   99 -2230    0.331  xnpod_schedule+0x554 
> (xnpod_suspend_thread+0x18b)

We woke up and switched to the TDMA worker thread (prio 99).

> :|  # func               -2230    0.205  __ipipe_restore_pipeline_head+0x11 
> (rtdm_event_timedwait+0x77)
> :|  + end     0x80000000 -2230    0.955  __ipipe_restore_pipeline_head+0x7c 
> (rtdm_event_timedwait+0x77)
> :|  + begin   0x80000000 -2229+   3.488  tdma_worker+0x58f [tdma] 
> (xnarch_thread_redirect+0x25)
> :|  # func               -2225    0.169  __ipipe_restore_pipeline_head+0x11 
> (tdma_worker+0x6d7 [tdma])
> :|  + end     0x80000000 -2225    0.189  __ipipe_restore_pipeline_head+0x7c 
> (tdma_worker+0x6d7 [tdma])
> :   + func               -2225    0.225  rtdm_task_sleep_until+0x14 
> (tdma_worker+0x191 [tdma])

The worker is suspending itself now. As the slot offset is 2300 us, it
should wake up in about that time (minus some correction terms that are
irrelevant here because the error you are chasing is much larger).
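
Conceptually the worker does nothing more than this at that point. A
simplified sketch of the idea only, not the real tdma_worker code (the
function and variable names are made up):

    #include <rtdm/rtdm_driver.h>

    /* Sketch: suspend until our TX slot in the current cycle begins.
     * cycle_start is derived from the received Sync frame, slot_offset
     * is the per-slot offset, 2300000 ns in your setup. */
    static void wait_for_slot(nanosecs_abs_t cycle_start,
                              nanosecs_rel_t slot_offset)
    {
        rtdm_task_sleep_until(cycle_start + slot_offset);
        /* on return we are at roughly cycle_start + slot_offset and can
         * hand the pending frame to the NIC driver */
    }

So if the trace shows the wakeup about 2300 us after the Sync reception,
the local timing is fine and the remaining suspects are the data paths.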

> :|  + begin   0x80000000 -2225    0.197  rtdm_task_sleep_until+0xbb 
> (tdma_worker+0x191 [tdma])
> :|  # func               -2225+   2.280  xnpod_get_time+0x14 
> (rtdm_task_sleep_until+0x46)
> :|  # func               -2222    0.379  xnpod_suspend_thread+0xe 
> (rtdm_task_sleep_until+0x75)
> :|  # func               -2222+   7.184  xntimer_do_start_aperiodic+0xe 
> (xnpod_suspend_thread+0x175)
> :|  # func               -2215    0.169  xnpod_schedule+0xe 
> (xnpod_suspend_thread+0x18b)
> :|  # [   -1] -<?>-   99 -2215    0.899  xnpod_schedule+0x90 
> (xnpod_suspend_thread+0x18b)
> :|  # [   -1] -<?>-   98 -2214    0.353  xnpod_schedule+0x554 
> (_rtdm_synch_flush+0x4e)
> :|  # func               -2213    0.205  __ipipe_restore_pipeline_head+0x11 
> (_rtdm_synch_flush+0x6c)
> :|  + end     0x80000000 -2213    0.293  __ipipe_restore_pipeline_head+0x7c 
> (_rtdm_synch_flush+0x6c)
> :   + func               -2213    0.751  kfree_rtskb+0xe [rtnet] 
> (tdma_packet_rx+0x41 [tdma])
> :|  + begin   0x80000000 -2212+   1.325  kfree_rtskb+0x17d [rtnet] 
> (tdma_packet_rx+0x41 [tdma])
> :|  # func               -2211    0.409  __ipipe_restore_pipeline_head+0x11 
> (kfree_rtskb+0x1d7 [rtnet])
> :|  + end     0x80000000 -2210    0.217  __ipipe_restore_pipeline_head+0x7c 
> (kfree_rtskb+0x1d7 [rtnet])
> :|  + begin   0x80000000 -2210    0.257  kfree_rtskb+0x1cb [rtnet] 
> (tdma_packet_rx+0x41 [tdma])
> :|  # func               -2210    0.155  __ipipe_restore_pipeline_head+0x11 
> (kfree_rtskb+0x1b7 [rtnet])
> :|  + end     0x80000000 -2210    0.327  __ipipe_restore_pipeline_head+0x7c 
> (kfree_rtskb+0x1b7 [rtnet])
> :|  + begin   0x80000000 -2209    0.215  rt_stack_deliver+0x18b [rtnet] 
> (rt_stack_mgr_task+0x4d [rtnet])
> :|  # func               -2209    0.267  __ipipe_restore_pipeline_head+0x11 
> (rt_stack_deliver+0x14a [rtnet])
> :|  + end     0x80000000 -2209    0.247  __ipipe_restore_pipeline_head+0x7c 
> (rt_stack_deliver+0x14a [rtnet])
> :   + func               -2209    0.197  rtdm_event_wait+0xb 
> (rt_stack_mgr_task+0x16 [rtnet])
> :   + func               -2208    0.193  rtdm_event_timedwait+0x14 
> (rtdm_event_wait+0x1b)
> :|  + begin   0x80000000 -2208    0.269  rtdm_event_timedwait+0x92 
> (rtdm_event_wait+0x1b)
> :|  # func               -2208+   1.831  xnsynch_sleep_on+0xe 
> (rtdm_event_timedwait+0x128)
> :|  # func               -2206    0.253  xnpod_suspend_thread+0xe 
> (xnsynch_sleep_on+0x3c2)
> :|  # func               -2206    0.169  xnpod_schedule+0xe 
> (xnpod_suspend_thread+0x18b)
> :|  # [   -1] -<?>-   98 -2206    0.997  xnpod_schedule+0x90 
> (xnpod_suspend_thread+0x18b)
> :|  # func               -2205+   2.214  __switch_to+0xe 
> (xnpod_schedule+0x47c)
> :|  # [    0] -<?>-   -1 -2203+   1.123  xnpod_schedule+0x554 
> (xnintr_irq_handler+0x12a)
> :|   +func               -2201    0.287  __ipipe_walk_pipeline+0xe 
> (__ipipe_handle_irq+0x81)
> :|   +func               -2201    0.211  __ipipe_sync_stage+0xe 
> (__ipipe_walk_pipeline+0xb3)
> :|   #end     0x80000000 -2201    0.561  __ipipe_sync_stage+0x13c 
> (__ipipe_walk_pipeline+0xb3)
> :    #func               -2200    0.717  rtcap_signal_handler+0xe [rtcap] 
> (__ipipe_sync_stage+0x1b1)
> :|   #begin   0x80000000 -2200    0.283  rtcap_signal_handler+0x1f7 [rtcap] 
> (__ipipe_sync_stage+0x1b1)
> :|  *#func               -2199    0.281  __ipipe_restore_pipeline_head+0x11 
> (rtcap_signal_handler+0x203 [rtcap])
> :|   #end     0x80000000 -2199+   2.014  __ipipe_restore_pipeline_head+0x7c 
> (rtcap_signal_handler+0x203 [rtcap])
> :    #func               -2197    0.877  __alloc_skb+0xe 
> (rtcap_signal_handler+0xd8 [rtcap])
> :    #func               -2196    0.183  kmem_cache_alloc+0x14 
> (__alloc_skb+0x30)
> :    #func               -2196    0.629  ipipe_check_context+0xc 
> (kmem_cache_alloc+0x27)
> :    #func               -2195    0.623  __ipipe_restore_root+0x8 
> (kmem_cache_alloc+0x68)
> :    #func               -2195    0.785  __kmalloc+0xe (__alloc_skb+0x4d)
> :    #func               -2194+   1.351  ipipe_check_context+0xc 
> (__kmalloc+0x4e)
> :    #func               -2193+   5.073  __ipipe_restore_root+0x8 
> (__kmalloc+0x8a)
> :    #func               -2188+   1.679  eth_type_trans+0xa 
> (rtcap_signal_handler+0x239 [rtcap])
> :    #func               -2186+   1.885  rtcap_kfree_rtskb+0xa [rtcap] 
> (rtcap_signal_handler+0x29f [rtcap])
> :|   #begin   0x80000000 -2184    0.653  rtcap_kfree_rtskb+0x132 [rtcap] 
> (rtcap_signal_handler+0x29f [rtcap])
> :|  *#func               -2183    0.187  __ipipe_restore_pipeline_head+0x11 
> (rtcap_kfree_rtskb+0x16c [rtcap])
> :|   #end     0x80000000 -2183    0.257  __ipipe_restore_pipeline_head+0x7c 
> (rtcap_kfree_rtskb+0x16c [rtcap])
> :|   #begin   0x80000000 -2183    0.259  rtcap_kfree_rtskb+0x142 [rtcap] 
> (rtcap_signal_handler+0x29f [rtcap])
> :|  *#func               -2183    0.187  __ipipe_restore_pipeline_head+0x11 
> (rtcap_kfree_rtskb+0x18c [rtcap])
> :|   #end     0x80000000 -2182    0.555  __ipipe_restore_pipeline_head+0x7c 
> (rtcap_kfree_rtskb+0x18c [rtcap])
> :    #func               -2182    0.447  netif_rx+0xe 
> (rtcap_signal_handler+0x2bf [rtcap])
> :    #func               -2181+   2.780  ipipe_check_context+0xc 
> (netif_rx+0xab)
> :    #func               -2179    0.155  __netif_rx_schedule+0xa 
> (netif_rx+0x1c6)
> :    #func               -2178    0.491  ipipe_check_context+0xc 
> (__netif_rx_schedule+0x16)
> :    #func               -2178    0.197  __ipipe_restore_root+0x8 
> (__netif_rx_schedule+0x72)
> :    #func               -2178    0.213  __ipipe_restore_root+0x8 
> (netif_rx+0x141)
> :|   #begin   0x80000000 -2178    0.201  rtcap_signal_handler+0x1f7 [rtcap] 
> (__ipipe_sync_stage+0x1b1)
> :|  *#func               -2177    0.187  __ipipe_restore_pipeline_head+0x11 
> (rtcap_signal_handler+0x2e0 [rtcap])
> :|   #end     0x80000000 -2177    0.183  __ipipe_restore_pipeline_head+0x7c 
> (rtcap_signal_handler+0x2e0 [rtcap])
> :    #func               -2177+   1.727  irq_exit+0x8 
> (__ipipe_sync_stage+0x1b9)
> :    #func               -2175    0.347  do_softirq+0x12 (irq_exit+0x39)
> :    #func               -2175    0.169  ipipe_check_context+0xc 
> (do_softirq+0x3a)
> :    #func               -2175    0.175  __do_softirq+0xb (do_softirq+0x6d)
> :    #func               -2175    0.167  __ipipe_unstall_root+0x8 
> (__do_softirq+0x33)
> :|   #begin   0x80000000 -2174    0.409  __ipipe_unstall_root+0x4b 
> (__do_softirq+0x33)
> :|   +end     0x80000000 -2174    0.935  __ipipe_unstall_root+0x3a 
> (__do_softirq+0x33)
> :    +func               -2173    0.183  net_rx_action+0xe (__do_softirq+0x52)
> :    +func               -2173    0.289  ipipe_check_context+0xc 
> (net_rx_action+0x28)
> :    #func               -2173    0.171  __ipipe_unstall_root+0x8 
> (net_rx_action+0x61)
> :|   #begin   0x80000000 -2172    0.171  __ipipe_unstall_root+0x4b 
> (net_rx_action+0x61)
> :|   +end     0x80000000 -2172    0.165  __ipipe_unstall_root+0x3a 
> (net_rx_action+0x61)
> :    +func               -2172    0.865  ipipe_check_context+0xc 
> (net_rx_action+0x7b)
> :    +func               -2171    0.161  process_backlog+0xe 
> (net_rx_action+0xbd)
> :    +func               -2171    0.441  ipipe_check_context+0xc 
> (process_backlog+0x4a)
> :    #func               -2171    0.167  __ipipe_unstall_root+0x8 
> (process_backlog+0x87)
> :|   #begin   0x80000000 -2170    0.171  __ipipe_unstall_root+0x4b 
> (process_backlog+0x87)
> :|   +end     0x80000000 -2170    0.745  __ipipe_unstall_root+0x3a 
> (process_backlog+0x87)
> :    +func               -2170+   1.943  netif_receive_skb+0xe 
> (process_backlog+0x91)
> :    +func               -2168+   1.283  ipipe_check_context+0xc 
> (netif_receive_skb+0x10a)
> :    +func               -2166    0.611  kfree_skb+0x8 
> (netif_receive_skb+0x392)
> :    +func               -2166+   1.259  __kfree_skb+0xd (kfree_skb+0x1e)
> :    +func               -2164    0.325  kfree_skbmem+0x9 (__kfree_skb+0x96)
> :    +func               -2164    0.743  skb_release_data+0xa 
> (kfree_skbmem+0x10)
> :    +func               -2163    0.155  kfree+0xb (skb_release_data+0x5d)
> :    +func               -2163+   1.075  ipipe_check_context+0xc (kfree+0x1b)
> :    #func               -2162    0.167  __ipipe_restore_root+0x8 (kfree+0x73)
> :    #func               -2162    0.155  __ipipe_unstall_root+0x8 
> (__ipipe_restore_root+0x1b)
> :|   #begin   0x80000000 -2162    0.165  __ipipe_unstall_root+0x4b 
> (__ipipe_restore_root+0x1b)
> :|   +end     0x80000000 -2162+   1.687  __ipipe_unstall_root+0x3a 
> (__ipipe_restore_root+0x1b)
> :    +func               -2160    0.359  kmem_cache_free+0xe 
> (kfree_skbmem+0x3d)
> :    +func               -2160    0.253  ipipe_check_context+0xc 
> (kmem_cache_free+0x40)
> :    #func               -2159    0.149  __ipipe_restore_root+0x8 
> (kmem_cache_free+0x84)
> :    #func               -2159    0.433  __ipipe_unstall_root+0x8 
> (__ipipe_restore_root+0x1b)
> :|   #begin   0x80000000 -2159    0.177  __ipipe_unstall_root+0x4b 
> (__ipipe_restore_root+0x1b)
> :|   +end     0x80000000 -2159    0.231  __ipipe_unstall_root+0x3a 
> (__ipipe_restore_root+0x1b)
> :    +func               -2158    0.197  ipipe_check_context+0xc 
> (netif_receive_skb+0x1fa)
> :    +func               -2158    0.221  ipipe_check_context+0xc 
> (process_backlog+0x4a)
> :    #func               -2158    0.331  __ipipe_unstall_root+0x8 
> (process_backlog+0x10c)
> :|   #begin   0x80000000 -2158    0.175  __ipipe_unstall_root+0x4b 
> (process_backlog+0x10c)
> :|   +end     0x80000000 -2157    0.217  __ipipe_unstall_root+0x3a 
> (process_backlog+0x10c)
> :    +func               -2157    0.181  ipipe_check_context+0xc 
> (net_rx_action+0xe4)
> :    +func               -2157    0.203  ipipe_check_context+0xc 
> (net_rx_action+0xf4)
> :    #func               -2157    0.437  ipipe_check_context+0xc 
> (net_rx_action+0x120)
> :    #func               -2156    0.187  ipipe_check_context+0xc 
> (net_rx_action+0x174)
> :    #func               -2156    0.155  __ipipe_unstall_root+0x8 
> (net_rx_action+0x179)
> :|   #begin   0x80000000 -2156    0.165  __ipipe_unstall_root+0x4b 
> (net_rx_action+0x179)
> :|   +end     0x80000000 -2156    0.183  __ipipe_unstall_root+0x3a 
> (net_rx_action+0x179)
> :    +func               -2156    0.193  ipipe_check_context+0xc 
> (__do_softirq+0x6a)
> :    #func               -2155    0.175  _local_bh_enable+0xb 
> (__do_softirq+0x96)
> :    #func               -2155    0.191  __ipipe_restore_root+0x8 
> (do_softirq+0x5d)
> :    #func               -2155    0.195  ipipe_check_context+0xc 
> (irq_exit+0x43)
> :    #func               -2155    0.173  __ipipe_unstall_iret_root+0x9 
> (restore_nocheck_notrace+0x0)
> :|   #begin   0x80000000 -2155    0.205  __ipipe_unstall_iret_root+0x81 
> (restore_nocheck_notrace+0x0)
> :|   +end     0x8000000d -2155    0.555  __ipipe_unstall_iret_root+0x2c 
> (restore_nocheck_notrace+0x0)
> :|   +end     0xfffffff4 -2154    0.189  common_interrupt+0x38 
> (default_idle+0x5c)
> :|   +func               -2154    0.175  __ipipe_unstall_iret_root+0x9 
> (restore_nocheck_notrace+0x0)
> :|   +end     0x8000000d -2154    0.731  __ipipe_unstall_iret_root+0x2c 
> (restore_nocheck_notrace+0x0)
> :    +func               -2153    0.165  ipipe_suspend_domain+0xe 
> (cpu_idle+0x44)
> :|   +begin   0x80000001 -2153    0.187  ipipe_suspend_domain+0x9e 
> (cpu_idle+0x44)
> :|   +end     0x80000001 -2153    0.217  ipipe_suspend_domain+0xad 
> (cpu_idle+0x44)
> :    +func               -2152    0.357  default_idle+0x8 (cpu_idle+0x46)
> :    +func               -2152    0.163  ipipe_check_context+0xc 
> (default_idle+0x35)
> :    #func               -2152    0.161  __ipipe_unstall_root+0x8 
> (default_idle+0x50)
> :|   #begin   0x80000000 -2152    0.165  __ipipe_unstall_root+0x4b 
> (default_idle+0x50)
> :|   +end     0x80000000 -2151    0.217  __ipipe_unstall_root+0x3a 
> (default_idle+0x50)
> :    +end     0x8000000e -2151! 2137.326  default_idle+0x5a (cpu_idle+0x46)
> :|   +begin   0xffffffff   -14    0.449  common_interrupt+0x29 
> (default_idle+0x5c)

The timer IRQ fires (mark 0xffffffff = ~0, i.e. IRQ 0).

> :|   +func                 -13+   1.437  __ipipe_handle_irq+0x14 
> (common_interrupt+0x2e)
> :|   +func                 -12    0.499  __ipipe_ack_irq+0x8 
> (__ipipe_handle_irq+0xaf)
> :|   +func                 -12    0.169  __ipipe_ack_level_irq+0x12 
> (__ipipe_ack_irq+0x19)
> :|   +func                 -11    0.157  mask_and_ack_8259A+0x14 
> (__ipipe_ack_level_irq+0x22)
> :|   +func                 -11+   1.431  __ipipe_spin_lock_irqsave+0x9 
> (mask_and_ack_8259A+0x29)
> :|   #func                 -10    0.255  __ipipe_spin_unlock_irqrestore+0x9 
> (mask_and_ack_8259A+0xf5)
> :|   +func                 -10    0.263  __ipipe_dispatch_wired+0x16 
> (__ipipe_handle_irq+0x68)
> :|  # func                  -9    0.199  xnintr_clock_handler+0x8 
> (__ipipe_dispatch_wired+0x9f)
> :|  # func                  -9    0.215  xnintr_irq_handler+0xe 
> (xnintr_clock_handler+0x17)
> :|  # func                  -9    0.217  xnpod_announce_tick+0x8 
> (xnintr_irq_handler+0x3b)
> :|  # func                  -9    0.525  xntimer_do_tick_aperiodic+0xe 
> (xnpod_announce_tick+0xf)
> :|  # func                  -8    0.425  xnthread_timeout_handler+0x8 
> (xntimer_do_tick_aperiodic+0x1f5)
> :|  # func                  -8    0.169  xnpod_resume_thread+0xe 
> (xnthread_timeout_handler+0x1e)
> :|  # [   -1] -<?>-   99    -8+   5.091  xnpod_resume_thread+0x56 
> (xnthread_timeout_handler+0x1e)
> :|  # func                  -2    0.201  xnpod_schedule+0xe 
> (xnintr_irq_handler+0x12a)
> :|  # [    0] -<?>-   -1    -2    0.881  xnpod_schedule+0x90 
> (xnintr_irq_handler+0x12a)
> :|  # [   -1] -<?>-   99    -1    0.401  xnpod_schedule+0x554 
> (xnpod_suspend_thread+0x18b)
> :|  # func                  -1    0.205  __ipipe_restore_pipeline_head+0x11 
> (rtdm_task_sleep_until+0xa0)
> :|  + end     0x80000000    -1+   1.229  __ipipe_restore_pipeline_head+0x7c 
> (rtdm_task_sleep_until+0xa0)
> <   + freeze  0x00000000     0    1.775  tdma_worker+0x198 [tdma] 
> (xnarch_thread_redirect+0x25)

The worker got woken up right on time (roughly 2299 us after the Sync IRQ
came in, matching the 2300 us slot offset) and will now prepare to send a
packet.

>     + func                   1    1.747  printk+0x14 (tdma_worker+0x1bb 
> [tdma])

Your printk instrumentation (output nicely postponed until Linux runs
again).

>     + func                   3    0.189  __ipipe_spin_lock_irqsave+0x9 
> (printk+0x6b)
>  |  + begin   0x80000001     3    0.743  __ipipe_spin_lock_irqsave+0x3d 
> (printk+0x6b)
>  |  # func                   4    0.501  vscnprintf+0xc (printk+0x93)
>  |  # func                   4    6.578  vsnprintf+0xe (vscnprintf+0x1b)
>  |  # func                  11    1.991  number+0xe (vsnprintf+0x2f9)
>  |  # func                  13    0.221  __ipipe_spin_unlock_irqrestore+0x9 
> (printk+0xa9)
>  |  + end     0x80000001    13    0.293  __ipipe_spin_unlock_irqrestore+0x2d 
> (printk+0xa9)
>     + func                  14    0.229  ipipe_trigger_irq+0xd (printk+0xb7)
>  |  + begin   0x80000001    14    0.187  ipipe_trigger_irq+0x3a (printk+0xb7)
>  |  + func                  14    0.729  __ipipe_handle_irq+0x14 
> (ipipe_trigger_irq+0xb6)
>  |  + func                  15    0.251  __ipipe_walk_pipeline+0xe 
> (__ipipe_handle_irq+0x81)
>  |  + end     0x80000001    15    0.489  ipipe_trigger_irq+0xc0 (printk+0xb7)
>     + func                  15    0.143  printk+0x14 (tdma_worker+0x1d5 
> [tdma])
>     + func                  16    0.393  __ipipe_spin_lock_irqsave+0x9 
> (printk+0x6b)
>  |  + begin   0x80000001    16    0.155  __ipipe_spin_lock_irqsave+0x3d 
> (printk+0x6b)
>  |  # func                  16    0.141  vscnprintf+0xc (printk+0x93)
>  |  # func                  16    0.319  vsnprintf+0xe (vscnprintf+0x1b)
>  |  # func                  17    0.499  number+0xe (vsnprintf+0x2f9)
>  |  # func                  17    0.147  __ipipe_spin_unlock_irqrestore+0x9 
> (printk+0xa9)
>  |  + end     0x80000001    17    0.535  __ipipe_spin_unlock_irqrestore+0x2d 
> (printk+0xa9)
>     + func                  18    1.855  tdma_xmit_request_cal_frame+0xe 
> [tdma] (tdma_worker+0x1f3 [tdma])

Here we go: the Calibration frame is about to be sent!

>     + func                  20    0.201  alloc_rtskb+0x14 [rtnet] 
> (tdma_xmit_request_cal_frame+0x31 [tdma])
>  |  + begin   0x80000000    20    0.431  alloc_rtskb+0xcb [rtnet] 
> (tdma_xmit_request_cal_frame+0x31 [tdma])
>  |  # func                  20    0.157  __ipipe_restore_pipeline_head+0x11 
> (alloc_rtskb+0xb7 [rtnet])
>  |  + end     0x80000000    20    0.561  __ipipe_restore_pipeline_head+0x7c 
> (alloc_rtskb+0xb7 [rtnet])
>     + func                  21    1.939  rt_eth_header+0x14 [rtnet] 
> (tdma_xmit_request_cal_frame+0xc7 [tdma])
>     + func                  23    0.213  rtcap_xmit_hook+0xe [rtcap] 
> (tdma_xmit_request_cal_frame+0x11c [tdma])
>  |  + begin   0x80000000    23    0.563  rtcap_xmit_hook+0x11b [rtcap] 
> (tdma_xmit_request_cal_frame+0x11c [tdma])
>  |  # func                  24    0.157  __ipipe_restore_pipeline_head+0x11 
> (rtcap_xmit_hook+0x127 [rtcap])
>  |  + end     0x80000000    24    0.193  __ipipe_restore_pipeline_head+0x7c 
> (rtcap_xmit_hook+0x127 [rtcap])
>     + func                  24    0.347  xnpod_get_time+0x14 
> (rtcap_xmit_hook+0x9f [rtcap])
>  |  + begin   0x80000000    24    0.227  rtcap_xmit_hook+0x16b [rtcap] 
> (tdma_xmit_request_cal_frame+0x11c [tdma])
>  |  # func                  25    0.157  __ipipe_restore_pipeline_head+0x11 
> (rtcap_xmit_hook+0x157 [rtcap])
>  |  + end     0x80000000    25    0.173  __ipipe_restore_pipeline_head+0x7c 
> (rtcap_xmit_hook+0x157 [rtcap])
>     + func                  25    0.169  ipipe_trigger_irq+0xd 
> (rtcap_xmit_hook+0xfd [rtcap])
>  |  + begin   0x80000001    25    1.201  ipipe_trigger_irq+0x3a 
> (rtcap_xmit_hook+0xfd [rtcap])
>  |  + func                  26    0.207  __ipipe_handle_irq+0x14 
> (ipipe_trigger_irq+0xb6)
>  |  + func                  27    0.163  __ipipe_walk_pipeline+0xe 
> (__ipipe_handle_irq+0x81)
>  |  + end     0x80000001    27    0.837  ipipe_trigger_irq+0xc0 
> (rtcap_xmit_hook+0xfd [rtcap])
>     + func                  28    2.660  e1000_xmit_frame+0xe [rt_e1000] 
> (rtcap_xmit_hook+0x108 [rtcap])

The driver does its Xmit work.

>  |  + begin   0x80000000    30    0.331  e1000_xmit_frame+0x411 [rt_e1000] 
> (rtcap_xmit_hook+0x108 [rtcap])
>  |  # func                  31    0.157  __ipipe_restore_pipeline_head+0x11 
> (e1000_xmit_frame+0x42f [rt_e1000])
>  |  + end     0x80000000    31    1.111  __ipipe_restore_pipeline_head+0x7c 
> (e1000_xmit_frame+0x42f [rt_e1000])
>  |  + begin   0x80000000    32    1.015  e1000_xmit_frame+0x421 [rt_e1000] 
> (rtcap_xmit_hook+0x108 [rtcap])
>  |  # func                  33    0.555  xnpod_get_time+0x14 
> (e1000_xmit_frame+0x311 [rt_e1000])
>  |  # func                  33    0.183  __ipipe_restore_pipeline_head+0x11 
> (e1000_xmit_frame+0x362 [rt_e1000])
>  |  + end     0x80000000    34    1.067  __ipipe_restore_pipeline_head+0x7c 
> (e1000_xmit_frame+0x362 [rt_e1000])
>  |  + begin   0x80000000    35    0.981  tdma_worker+0x6c8 [tdma] 
> (xnarch_thread_redirect+0x25)
>  |  # func                  36    0.157  __ipipe_restore_pipeline_head+0x11 
> (tdma_worker+0x59f [tdma])
>  |  + end     0x80000000    36    0.207  __ipipe_restore_pipeline_head+0x7c 
> (tdma_worker+0x59f [tdma])
>     + func                  36    0.163  rtdm_event_wait+0xb 
> (tdma_worker+0x413 [tdma])

And the worker finally falls asleep again, this time waiting for the
next Sync event.

>     + func                  36    0.173  rtdm_event_timedwait+0x14 
> (rtdm_event_wait+0x1b)
>  |  + begin   0x80000000    36    0.217  rtdm_event_timedwait+0x92 
> (rtdm_event_wait+0x1b)
>  |  # func                  37    0.225  xnsynch_sleep_on+0xe 
> (rtdm_event_timedwait+0x128)
>  |  # func                  37    0.407  xnpod_suspend_thread+0xe 
> (xnsynch_sleep_on+0x3c2)
>  |  # func                  37    0.509  xnpod_schedule+0xe 
> (xnpod_suspend_thread+0x18b)
>  |  # [   -1] -<?>-   99    38    0.533  xnpod_schedule+0x90 
> (xnpod_suspend_thread+0x18b)
>  |  # func                  38    1.521  __switch_to+0xe 
> (xnpod_schedule+0x47c)
>  |  # [    0] -<?>-   -1    40    0.799  xnpod_schedule+0x554 
> (xnintr_irq_handler+0x12a)

Back to Linux (prio -1).

Hope that sheds some light on how to interpret the traces. It's actually
fun once you can separate the relevant from the irrelevant information,
because those traces provide really deep insight into the system's
behaviour.
