Karl Reichert wrote:
> Karl Reichert wrote:
> > Karl Reichert wrote:
> > > Karl Reichert wrote:
> > > > Karl Reichert wrote:
> > > > > Jan Kiszka wrote:
> > > > > > What I would analyse if I were you:
> > > > > >  - Is the request frame sent in the right slot according to the sender?
> > > > > 
> > > > > Well, now I have another weird behavior. The slave sends a request
> > > > > calibration frame in cycle no. 45849 and sets the Reply Cycle Number to
> > > > > 97655. As I'm using a cycle length of 5 ms, this means the slave wants
> > > > > the answer more than 4 minutes in the future (97655 - 45849 = 51806
> > > > > cycles x 5 ms, about 259 s), which is for sure too far away!
> > > > > 
> > > > > This behavior and the one observed before point to a problem when
> > > > > calculating or setting this reply cycle number. How can one configure
> > > > > how big this value is set, i.e. how big the offset between request and
> > > > > reply should be? If this is done automatically, in which function is
> > > > > it done?
> > > > 
> > > > I dug deeper into the code and here are the results. Please see the
> > > > attached file for my changes. Every change is marked by /* REK debug */:
> > > > 
> > > > My first step was to go into module tdma_worker.c, function
> > > > do_request_cal_job(). I printed the values of tdma->current_cycle and
> > > > job->period. job->period is always 1, in all calls of this function.
> > > > tdma->current_cycle of course changes, but it holds an old value, for
> > > > example 1521 cycles old with a 1 ms cycle length.
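> > > > 
> > > > Roughly the kind of instrumentation I added (the print statement itself
> > > > is mine; only tdma->current_cycle and job->period are the existing
> > > > fields):
> > > > 
> > > >     /* REK debug: dump what do_request_cal_job() actually sees */
> > > >     rtdm_printk("REK debug: current_cycle=%lu period=%lu\n",
> > > >                 (unsigned long)tdma->current_cycle,
> > > >                 (unsigned long)job->period);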
> > > > 
> > > > As a quick hack, my second step was to substitute job->period with a
> > > > constant value, 2500 in my case. Now the synchronization works, slowly
> > > > of course, but it does.
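> > > > 
> > > > A minimal sketch of that hack (reply_cycle is just a placeholder for
> > > > wherever the offset derived from job->period is used; 2500 is the value
> > > > that happens to work on my machine):
> > > > 
> > > >     /* REK debug: ignore job->period (= 1) and force a large offset so
> > > >      * the slave gets enough time before its reply cycle */
> > > >     /* reply_cycle = tdma->current_cycle + job->period; */
> > > >     reply_cycle = tdma->current_cycle + 2500;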
> > > > 
> > > > The error must be the "wrong" value of one of those two variables!
> > > > 
> > > > This leads to a few questions:
> > > > 1) Is it right that job->period is always 1?
> > > > 2a) If yes (which I do not believe), do you have an idea why
> > > > tdma->current_cycle holds an "old" value?
> > > > 2b) If not, where is this value calculated and what is it for?
> > > > 
> > > 
> > > Hello list!
> > > 
> > > I did some further testing and this is what I found out:
> > > 
> > > 1) tdma->current_cycle holds the right value! I set the tdma cycle length
> > > to 10 s (yes, seconds) and also started Wireshark on the master. I checked
> > > the value of this variable, and when I compare it to what Wireshark tells
> > > me about the sent sync frames, I see that this value is correct.
> > > 
> > > 2) The slave is able to sync with this long cycle length. The reason for
> > > this is that job->period is still always 1, but now there is enough time
> > > (nearly 10 s) to send the req cal frm.
> > > 
> > > 
> > > This leads me to the following conclusions and questions:
> > > 
> > > 1) Again ... why is job->period always 1? Is this correct? I still don't
> > > understand what this variable stands for and why we do not simply add some
> > > other constant value (like 1 or something bigger).
> > > 
> > > 2) If it is correct that job->period always contains 1, is it problematic
> > > to set a higher value there (like I did in my quick hack, see last mail)?
> > > If yes, wouldn't it make sense to make this configurable (via make
> > > menuconfig)?
> > > 
> > > 3) If it always has to be 1, running tdma with a cycle length of 1 ms
> > > would mean that the slave must be able to send a req cal frm after a sync
> > > frm within less than 1 ms. Is this too much for a high-end Core 2 Duo? I
> > > don't think so ...
> > > 
> > 
> > After checking the source code again and playing around with the tdmacfg -p
> > parameter, I now see the meaning of this job->period variable. When one
> > provides -p 1/2, this variable holds the value 2, so that the frame is only
> > sent in every second period. So it makes sense that it normally holds 1 in
> > my case.
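> > 
> > So my mental model of how -p <phasing>/<period> maps onto cycles is roughly
> > the following (job->period is confirmed from the sources, the "phasing"
> > part is only my guess):
> > 
> >     /* with -p 1/2 (period = 2) the slot fires only in every second cycle;
> >      * with the default -p 1/1 (period = 1) it fires in every cycle */
> >     if ((tdma->current_cycle % job->period) == phasing_offset)
> >         /* ... transmit the frame in this cycle ... */;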
> > 
> > So, what I see is the following:
> > Without changing the sources, it isn't possible to set any offset between
> > the received sync frm and the sent req cal frm. The TDMA module in slave
> > mode always sends the req cal frm right after receiving the sync frm (of
> > course in its timeslot). It always sets the "reply cycle number" field in
> > this frame to the next cycle number. This behavior is hard-coded and not
> > configurable via make menuconfig.
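> > 
> > In other words, the request seems to be stamped along these lines (this is
> > my reading of do_request_cal_job(); reply_cycle_number is a placeholder for
> > the field written into the frame):
> > 
> >     /* slave announces it wants the reply in the very next cycle: with
> >      * job->period fixed at 1 and a 1 ms cycle there is less than 1 ms
> >      * left to get the req cal frm out */
> >     reply_cycle_number = tdma->current_cycle + job->period;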
> > 
> > As my slave isn't able to send the req cal frm that fast, it isn't able to
> > synchronize when a short tdma cycle length is configured (1 ms). Is this a
> > problem only on my station?
> > 
> > Why is it not possible to configure those offsets?
> > 
> > Please advise!
> > Thanks ... Karl
> 
> 
> To keep you up-to-date, just in case anyone cares ;)
> 
> With the dirty hack, after synchronization is finished, sending data frames
> every 1 ms works fine. So my machine isn't too slow at all, just
> do_request_cal_job() seems to be problematic. I will dig deeper into this now
> with the I-pipe tracer and give further information when available.
> 

Please see the attached files from I-pipe debugging! I put an
xntrace_user_freeze(0,1) call before rtdm_task_sleep_until(...) in function
do_request_cal_job().
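
For reference, the freeze point sits roughly here (sketch of my change; the
sleep call is the existing one in do_request_cal_job()):

    /* REK debug: freeze the I-pipe trace buffer right before the slave
     * goes to sleep waiting for its transmit slot */
    xntrace_user_freeze(0, 1);
    rtdm_task_sleep_until(...);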

Any ideas?

Thanks
Karl
I-pipe frozen back-tracing service on 
2.6.20.15-adeos1.8.06-xenomai2.3.3/ipipe-1.8-06
------------------------------------------------------------
Freeze: 443136238412 cycles, Trace Points: 30 (+10)
Calibrated minimum trace-point overhead: 0.145 us

 +----- Hard IRQs ('|': locked)
 |+---- <unused>
 ||+--- <unused>
 |||+-- Xenomai
 ||||+- Linux ('*': domain stalled, '+': current, '#': current+stalled)
 |||||                        +---------- Delay flag ('+': > 1 us, '!': > 10 us)
 |||||                        |        +- NMI noise ('N')
 |||||                        |        |
      Type    User Val.   Time    Delay  Function (Parent)
:|  + end     0x80000000   -42+   3.828  __ipipe_restore_pipeline_head+0x7c 
(rtdm_event_timedwait+0x77)
:   + func                 -38+   9.045  rt_stack_deliver+0xe [rtnet] 
(rt_stack_mgr_task+0x4d [rtnet])
:|  # func                 -29+   3.718  rtcap_rx_hook+0xa [rtcap] 
(rt_stack_deliver+0x44 [rtnet])
:|  # func                 -25+   1.033  ipipe_trigger_irq+0xd 
(rtcap_rx_hook+0x8d [rtcap])
:|  # func                 -24+   2.026  __ipipe_handle_irq+0x14 
(ipipe_trigger_irq+0x14c)
:|  # func                 -22    0.307  __ipipe_walk_pipeline+0xe 
(__ipipe_handle_irq+0x81)
:|  # func                 -22    0.155  __ipipe_restore_pipeline_head+0x11 
(rt_stack_deliver+0x16b [rtnet])
:|  + end     0x80000000   -22    0.249  __ipipe_restore_pipeline_head+0x7c 
(rt_stack_deliver+0x16b [rtnet])
:|  + begin   0x80000000   -21+   1.781  rt_stack_deliver+0x15d [rtnet] 
(rt_stack_mgr_task+0x4d [rtnet])
:|  # func                 -20    0.155  __ipipe_restore_pipeline_head+0x11 
(rt_stack_deliver+0x197 [rtnet])
:|  + end     0x80000000   -19    0.435  __ipipe_restore_pipeline_head+0x7c 
(rt_stack_deliver+0x197 [rtnet])
:   + func                 -19+   1.731  rtmac_proto_rx+0xc [rtmac] 
(rt_stack_deliver+0xf3 [rtnet])
:   + func                 -17+   1.775  tdma_packet_rx+0x14 [tdma] 
(rtmac_proto_rx+0x75 [rtmac])
:|  + begin   0x80000000   -16    0.229  tdma_packet_rx+0x5c0 [tdma] 
(rtmac_proto_rx+0x75 [rtmac])
:|  # func                 -15    0.155  __ipipe_restore_pipeline_head+0x11 
(tdma_packet_rx+0x5ac [tdma])
:|  + end     0x80000000   -15    0.795  __ipipe_restore_pipeline_head+0x7c 
(tdma_packet_rx+0x5ac [tdma])
:   + func                 -14    0.197  _rtdm_synch_flush+0x16 
(tdma_packet_rx+0x173 [tdma])
:|  + begin   0x80000000   -14    0.203  _rtdm_synch_flush+0x8b 
(tdma_packet_rx+0x173 [tdma])
:|  # func                 -14+   1.197  xnsynch_flush+0xe 
(_rtdm_synch_flush+0x44)
:|  # func                 -13    0.165  xnpod_resume_thread+0xe 
(xnsynch_flush+0x80)
:|  # [   -1] -<?>-   99   -13    0.325  xnpod_resume_thread+0x56 
(xnsynch_flush+0x80)
:|  # func                 -12    0.193  xnpod_schedule+0xe 
(_rtdm_synch_flush+0x4e)
:|  # [   -1] -<?>-   98   -12+   3.742  xnpod_schedule+0x90 
(_rtdm_synch_flush+0x4e)
:|  # [   -1] -<?>-   99    -8    0.329  xnpod_schedule+0x554 
(xnpod_suspend_thread+0x18b)
:|  # func                  -8    0.199  __ipipe_restore_pipeline_head+0x11 
(rtdm_event_timedwait+0x77)
:|  + end     0x80000000    -8+   3.094  __ipipe_restore_pipeline_head+0x7c 
(rtdm_event_timedwait+0x77)
:|  + begin   0x80000000    -5+   4.841  tdma_worker+0x54f [tdma] 
(xnarch_thread_redirect+0x25)
:|  # func                   0    0.169  __ipipe_restore_pipeline_head+0x11 
(tdma_worker+0x697 [tdma])
:|  + end     0x80000000     0    0.173  __ipipe_restore_pipeline_head+0x7c 
(tdma_worker+0x697 [tdma])
<   + freeze  0x00000000     0    0.887  tdma_worker+0x17e [tdma] 
(xnarch_thread_redirect+0x25)
    + func                   0    0.201  rtdm_task_sleep_until+0x14 
(tdma_worker+0x198 [tdma])
 |  + begin   0x80000000     1    0.211  rtdm_task_sleep_until+0xbb 
(tdma_worker+0x198 [tdma])
 |  # func                   1    0.519  xnpod_get_time+0x14 
(rtdm_task_sleep_until+0x46)
 |  # func                   1    0.669  xnpod_suspend_thread+0xe 
(rtdm_task_sleep_until+0x75)
 |  # func                   2    1.507  xntimer_do_start_aperiodic+0xe 
(xnpod_suspend_thread+0x175)
 |  # func                   3    0.197  xnpod_schedule+0xe 
(xnpod_suspend_thread+0x18b)
 |  # [   -1] -<?>-   99     4    1.809  xnpod_schedule+0x90 
(xnpod_suspend_thread+0x18b)
 |  # [   -1] -<?>-   98     6    0.267  xnpod_schedule+0x554 
(_rtdm_synch_flush+0x4e)
 |  # func                   6    0.209  __ipipe_restore_pipeline_head+0x11 
(_rtdm_synch_flush+0x6c)
 |  + end     0x80000000     6    0.000  __ipipe_restore_pipeline_head+0x7c 
(_rtdm_synch_flush+0x6c)
I-pipe worst-case tracing service on 
2.6.20.15-adeos1.8.06-xenomai2.3.3/ipipe-1.8-06
------------------------------------------------------------
Begin: 502330607880 cycles, Trace Points: 35 (-10/+2), Length: 108 us
Calibrated minimum trace-point overhead: 0.145 us

 +----- Hard IRQs ('|': locked)
 |+---- <unused>
 ||+--- <unused>
 |||+-- Xenomai
 ||||+- Linux ('*': domain stalled, '+': current, '#': current+stalled)
 |||||                        +---------- Delay flag ('+': > 1 us, '!': > 10 us)
 |||||                        |        +- NMI noise ('N')
 |||||                        |        |
      Type    User Val.   Time    Delay  Function (Parent)
     +func                 -12    0.149  zone_watermark_ok+0xe 
(get_page_from_freelist+0x93)
     +func                 -12    0.141  ipipe_check_context+0xc 
(get_page_from_freelist+0x192)
     +func                 -12    0.177  ipipe_check_context+0xc 
(get_page_from_freelist+0x1a7)
     #func                 -11    0.137  __ipipe_restore_root+0x8 
(get_page_from_freelist+0x131)
     #func                 -11    0.815  __ipipe_unstall_root+0x8 
(__ipipe_restore_root+0x1b)
 |   #begin   0x80000000   -10    0.151  __ipipe_unstall_root+0x4b 
(__ipipe_restore_root+0x1b)
 |   +end     0x80000000   -10    0.155  __ipipe_unstall_root+0x3a 
(__ipipe_restore_root+0x1b)
     +func                 -10    0.169  ipipe_check_context+0xc 
(get_page_from_freelist+0x13b)
     +func                 -10    0.155  kmap_atomic+0x9 
(get_page_from_freelist+0x28a)
     +func                 -10   10.256  page_address+0x16 (kmap_atomic+0x7e)
>|   +begin   0xfffffff4     0+   1.069  common_interrupt+0x29 
>(get_page_from_freelist+0x296)
:|   +func                   1+   1.269  __ipipe_handle_irq+0x14 
(common_interrupt+0x2e)
:|   +func                   2+   3.312  __ipipe_ack_irq+0x8 
(__ipipe_handle_irq+0xaf)
:|   +func                   5+   2.058  __ipipe_ack_level_irq+0x12 
(__ipipe_ack_irq+0x19)
:|   +func                   7+   1.357  mask_and_ack_8259A+0x14 
(__ipipe_ack_level_irq+0x22)
:|   +func                   9+   6.428  __ipipe_spin_lock_irqsave+0x9 
(mask_and_ack_8259A+0x29)
:|   #func                  15    0.787  __ipipe_spin_unlock_irqrestore+0x9 
(mask_and_ack_8259A+0x73)
:|   +func                  16+   1.221  __ipipe_dispatch_wired+0x16 
(__ipipe_handle_irq+0x68)
:|  # func                  17+   7.152  xnintr_irq_handler+0xe 
(__ipipe_dispatch_wired+0x9f)
:|  # func                  24+   3.578  e1000_intr+0x11 [rt_e1000] 
(xnintr_irq_handler+0x3b)
:|  # func                  28+   8.669  xnpod_get_time+0x14 (e1000_intr+0x39 
[rt_e1000])
:|  # func                  36+   4.543  rt_eth_type_trans+0xe [rtnet] 
(e1000_intr+0xda [rt_e1000])
:|  # func                  41+   3.418  rtnetif_rx+0x11 [rtnet] 
(e1000_intr+0xf1 [rt_e1000])
:|  # func                  44+   4.957  e1000_alloc_rx_buffers+0xe [rt_e1000] 
(e1000_intr+0x42d [rt_e1000])
:|  # func                  49+   3.638  alloc_rtskb+0x14 [rtnet] 
(e1000_alloc_rx_buffers+0x149 [rt_e1000])
:|  # func                  53    0.459  e1000_check_64k_bound+0x9 [rt_e1000] 
(e1000_alloc_rx_buffers+0x163 [rt_e1000])
:|  # func                  53+   1.199  e1000_check_64k_bound+0x9 [rt_e1000] 
(e1000_alloc_rx_buffers+0x9e [rt_e1000])
:|  # func                  55+   6.452  alloc_rtskb+0x14 [rtnet] 
(e1000_alloc_rx_buffers+0x149 [rt_e1000])
:|  # func                  61    0.149  e1000_alloc_rx_buffers+0xe [rt_e1000] 
(e1000_intr+0x42d [rt_e1000])
:|  # func                  61+   1.055  alloc_rtskb+0x14 [rtnet] 
(e1000_alloc_rx_buffers+0x149 [rt_e1000])
:|  # func                  62+   1.663  rtdm_event_signal+0xa 
(e1000_intr+0x40a [rt_e1000])
:|  # func                  64+   5.029  xnsynch_flush+0xe 
(rtdm_event_signal+0x39)
:|  # func                  69    0.451  xnpod_resume_thread+0xe 
(xnsynch_flush+0x80)
:|  # [   -1] -<?>-   98    69+   5.621  xnpod_resume_thread+0x56 
(xnsynch_flush+0x80)
:|  # func                  75+   3.114  xnpod_schedule+0xe 
(rtdm_event_signal+0x65)
:|  # func                  78+   1.309  rthal_irq_end+0x8 
(xnintr_irq_handler+0x110)
:|  # func                  79+   1.441  __ipipe_end_level_irq+0x8 
(rthal_irq_end+0x22)
:|  # func                  81    0.169  enable_8259A_irq+0xa 
(__ipipe_end_level_irq+0x18)
:|  # func                  81+   1.111  __ipipe_spin_lock_irqsave+0x9 
(enable_8259A_irq+0x1f)
:|  # func                  82    0.229  __ipipe_spin_unlock_irqrestore+0x9 
(enable_8259A_irq+0x9c)
:|  # func                  82+   1.205  xnpod_schedule+0xe 
(xnintr_irq_handler+0x12a)
:|  # [ 2808] Xorg    -1    84!  10.510  xnpod_schedule+0x90 
(xnintr_irq_handler+0x12a)
:|  # [   -1] -<?>-   98    94!  13.959  xnpod_schedule+0x554 
(xnpod_suspend_thread+0x18b)
:|  # func                 108    0.231  __ipipe_restore_pipeline_head+0x11 
(rtdm_event_timedwait+0x77)
<|  + end     0x80000000   108    3.948  __ipipe_restore_pipeline_head+0x7c 
(rtdm_event_timedwait+0x77)
    + func                 112    2.510  rt_stack_deliver+0xe [rtnet] 
(rt_stack_mgr_task+0x4d [rtnet])
 |  + begin   0x80000000   115    0.000  rt_stack_deliver+0x17b [rtnet] 
(rt_stack_mgr_task+0x4d [rtnet])
