Jan Kiszka wrote:
> Karl Reichert wrote:
> > Karl Reichert wrote:
> >> Karl Reichert wrote:
> >>> Karl Reichert wrote:
> >>>> Karl Reichert wrote:
> >>>>> Karl Reichert wrote:
> >>>>>> Jan Kiszka wrote:
> >>>>>>> What I would analyse if I were you:
> >>>>>>>  - Is the request frame sent in the right slot according to the
> >>>>>>>    sender?
> >>>>>> Well, now I have another weird behavior. The slave sends a request
> >>>>>> calibration frame in cycle no. 45849 and sets the Reply Cycle Number
> >>>>>> to 97655. As I'm using a cycle length of 5 ms, this means the slave
> >>>>>> wants the answer more than 4 minutes in the future, which is
> >>>>>> certainly too far away!
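> >>>>>>
> >>>>>> Checking the arithmetic: (97655 - 45849) = 51806 cycles, and
> >>>>>> 51806 * 5 ms = 259030 ms, i.e. roughly 4.3 minutes.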
> >>>>>>
> >>>>>> This behavior and the one observed before point to a problem when
> >>>>>> calculating or setting this reply cycle number. How can one
> >>>>>> configure how big this value is (i.e. how big the offset between
> >>>>>> request and reply should be)? If this is done automatically, in
> >>>>>> which function is it done?
> >>>>> I dug deeper into the code and here are the results. Please see the
> >>>>> attached file for my changes; every change is marked by
> >>>>> /* REK debug */.
> >>>>>
> >>>>> My first step was to go into module tdma_worker.c, function
> >>>>> do_request_cal_job(). I printed the values of tdma->current_cycle
> >>>>> and job->period. job->period is always 1, in all calls of this
> >>>>> function. tdma->current_cycle of course changes, but it holds an
> >>>>> old value, for example one from 1521 cycles ago with a 1 ms cycle
> >>>>> length.
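> >>>>>
> >>>>> The instrumentation is just a printk in do_request_cal_job(), along
> >>>>> these lines (the exact statement in my patch may differ slightly):
> >>>>>
> >>>>>     /* REK debug: dump the values the reply cycle is derived from */
> >>>>>     printk("REK: current_cycle=%lu period=%lu\n",
> >>>>>            (unsigned long)tdma->current_cycle,
> >>>>>            (unsigned long)job->period);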
> >>>>>
> >>>>> As a quick hack, my second step was to substitute job->period with a
> >>>>> constant value, 2500 in my case. Now the synchronisation works,
> >>>>> slowly of course, but it does.
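> >>>>>
> >>>>> In code, the hack is essentially this (the real statement in
> >>>>> do_request_cal_job() looks different, this is just the idea):
> >>>>>
> >>>>>     /* REK debug: quick hack, constant instead of job->period */
> >>>>>     reply_cycle = tdma->current_cycle + 2500;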
> >>>>>
> >>>>> The error must be the "wrong" value of one of those two variables!
> >>>>>
> >>>>> This leads to a few questions:
> >>>>> 1) Is it right that job->period is always 1?
> >>>>> 2a) If yes (which I do not believe), do you have an idea why
> >>>>> tdma->current_cycle holds an "old" value?
> >>>>> 2b) If not, where is this value calculated, and what for?
> >>>>>
> >>>> Hello list!
> >>>>
> >>>> I did some further testing and this is what I found out:
> >>>>
> >>>> 1) tdma->current_cycle holds the right value! I set the tdma cycle
> >>>> length to 10 s (yes, seconds) and also started wireshark on the
> >>>> master. I checked the value of this variable, and when I compare it
> >>>> to what wireshark tells me about the sent sync frames, I see that
> >>>> it is correct.
> >>>>
> >>>> 2) The slave is able to sync with this long cycle length. The reason
> >>>> for this is that job->period is still always 1, but now there is
> >>>> enough time (nearly 10 s) to send the req cal frm.
> >>>>
> >>>>
> >>>> This leads me to the following conclusions and questions:
> >>>>
> >>>> 1) Again ... why is job->period always 1? Is this correct? I still
> >>>> don't understand what this variable stands for and why we do not
> >>>> simply add some other constant value (like 1 or something bigger).
> >>>>
> >>>> 2) If it is correct that job->period always contains 1, is it
> >>>> problematic to set a higher value there (like I did in my quick
> >>>> hack, see my last mail)? If yes, wouldn't it make sense to make
> >>>> this configurable (make menuconfig)?
> >>>>
> >>>> 3) If it always has to be 1, running tdma with a cycle length of
> >>>> 1 ms would mean that the slave must be able to send a req cal frm
> >>>> within less than 1 ms after a sync frm. Is this too much for a
> >>>> high-end Core 2 Duo? I don't think so ...
> >>>>
> >>> After checking the source code again and playing around with the
> >>> tdmacfg -p parameter, I now see the meaning of this job->period
> >>> variable. When one provides -p 1/2, this variable holds the value 2,
> >>> so that the frame is sent only in every second period. So it makes
> >>> sense that it normally holds 1 in my case.
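> >>>
> >>> To illustrate the phasing semantics (a toy user-space sketch, not
> >>> RTnet code, and the exact phase mapping is my guess):
> >>>
> >>>     #include <stdio.h>
> >>>
> >>>     int main(void)
> >>>     {
> >>>         int a = 1, b = 2;   /* "-p 1/2": phase 1 of period 2 */
> >>>         unsigned int cycle;
> >>>
> >>>         /* the slot is only served in every b-th cycle */
> >>>         for (cycle = 0; cycle < 8; cycle++)
> >>>             if (cycle % b == (unsigned int)(a - 1))
> >>>                 printf("slot served in cycle %u\n", cycle);
> >>>         return 0;
> >>>     }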
> >>>
> >>> So, what I see is the following:
> >>> Without changing the sources, it isn't possible to set any offset
> >>> between the received sync frm and the sent req cal frm. The TDMA
> >>> module in slave mode always sends the req cal frm right after
> >>> receiving the sync frm (in its timeslot, of course). It always sets
> >>> the "reply cycle number" field in this frame to the next cycle
> >>> number. This behavior is hard-coded and not configurable via make
> >>> menuconfig.
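> >>>
> >>> So the hard-coded rule seems to boil down to this (field names
> >>> guessed from tdma_worker.c):
> >>>
> >>>     /* reply is always expected job->period (= 1) cycles later */
> >>>     reply_cycle = tdma->current_cycle + job->period;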
> >>>
> >>> As my slave isn't able to send the req cal frm that fast, it isn't
> >>> able to synchronize when a short tdma cycle length is configured
> >>> (1 ms). Is this a problem only on my station?
> >>>
> >>> Why is it not possible to configure those offsets?
> >>>
> >>> Please advise!
> >>> Thanks ... Karl
> >>
> >> To keep you up-to-date, just in case anyone cares ;)
> >>
> >> With the dirty hack, after synchronization is finished, sending data
> >> frames every 1 ms works fine. So my machine isn't too slow at all;
> >> only do_request_cal_job() seems to be problematic. I will now dig
> >> deeper into this with the I-pipe tracer and give further information
> >> when available.
> >>
> > 
> > Please see attached files from I-pipe debugging! I put an
> > xntrace_user_freeze(0, 1) before rtdm_task_sleep_until(...) in
> > function do_request_cal_job().
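> >
> > I.e. the placement is (sketch, surrounding code omitted):
> >
> >     xntrace_user_freeze(0, 1);      /* REK debug: freeze the trace */
> >     rtdm_task_sleep_until(...);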
> > 
> > Any ideas?
> 
> That trace per se doesn't help. It would rather be interesting to check
> how long the caller actually slept in the end (i.e. put the freeze after
> the sleep), or what was passed to the caller (printk or
> ipipe_trace_special instrumentation).
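> 
> For example (just a sketch; "wakeup_time" stands for whatever value is
> actually handed to the sleep, and the id 0x10 is arbitrary):
> 
>     ipipe_trace_special(0x10, (unsigned long)wakeup_time);
>     rtdm_task_sleep_until(wakeup_time);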
> 
> Jan
> 
I put the freeze after the sleep now, please see the attached files.
This is what printk gives:

[ 7605.737664] [REK debug] tdma->current_cycle_start = 1191326103544417349
[ 7605.737708] [REK debug] job->offset = 2300000
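
Assuming both values are in nanoseconds and the wakeup target is cycle
start plus offset, job->offset = 2300000 ns = 2.3 ms, i.e. the task
should sleep until

    1191326103544417349 ns + 2300000 ns = 1191326103546717349 ns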

Please advise

Thanks
Karl
I-pipe frozen back-tracing service on 2.6.20.15-adeos1.8.06-xenomai2.3.3/ipipe-1.8-06
------------------------------------------------------------
Freeze: 15241243223656 cycles, Trace Points: 30 (+10)
Calibrated minimum trace-point overhead: 0.145 us

 +--------------- Hard IRQs ('|': locked)
 |             +- Delay flag ('+': > 1 us, '!': > 10 us)
 |             |
  Type     Time   Function (Parent)
: func    -2131   ipipe_suspend_domain (cpu_idle)
:|begin   -2131   ipipe_suspend_domain (cpu_idle)
:|end     -2131   ipipe_suspend_domain (cpu_idle)
: func    -2130   default_idle (cpu_idle)
: func    -2130   ipipe_check_context (default_idle)
: func    -2130   __ipipe_unstall_root (default_idle)
:|begin   -2130   __ipipe_unstall_root (default_idle)
:|end     -2130   __ipipe_unstall_root (default_idle)
: end     -2129!  default_idle (cpu_idle)
:|begin     -13   common_interrupt (default_idle)
:|func      -13   __ipipe_handle_irq (common_interrupt)
:|func      -12   __ipipe_ack_irq (__ipipe_handle_irq)
:|func      -12   __ipipe_ack_level_irq (__ipipe_ack_irq)
:|func      -12   mask_and_ack_8259A (__ipipe_ack_level_irq)
:|func      -12+  __ipipe_spin_lock_irqsave (mask_and_ack_8259A)
:|func      -10   __ipipe_spin_unlock_irqrestore (mask_and_ack_8259A)
:|func      -10   __ipipe_dispatch_wired (__ipipe_handle_irq)
:|func      -10   xnintr_clock_handler (__ipipe_dispatch_wired)
:|func      -10   xnintr_irq_handler (xnintr_clock_handler)
:|func       -9   xnpod_announce_tick (xnintr_irq_handler)
:|func       -9   xntimer_do_tick_aperiodic (xnpod_announce_tick)
:|func       -8   xnthread_timeout_handler (xntimer_do_tick_aperiodic)
:|func       -8   xnpod_resume_thread (xnthread_timeout_handler)
:|[   -1]    -8+  xnpod_resume_thread (xnthread_timeout_handler)
:|func       -3   xnpod_schedule (xnintr_irq_handler)
:|[    0]    -2+  xnpod_schedule (xnintr_irq_handler)
:|[   -1]    -1+  xnpod_schedule (xnpod_suspend_thread)
:|func        0   __ipipe_restore_pipeline_head (rtdm_task_sleep_until)
:|end         0   __ipipe_restore_pipeline_head (rtdm_task_sleep_until)
< freeze      0   tdma_worker (xnarch_thread_redirect)
  func        1   printk (tdma_worker)
  func        3   __ipipe_spin_lock_irqsave (printk)
 |begin       3   __ipipe_spin_lock_irqsave (printk)
 |func        4   vscnprintf (printk)
 |func        4   vsnprintf (vscnprintf)
 |func        9   number (vsnprintf)
 |func       11   __ipipe_spin_unlock_irqrestore (printk)
 |end        11   __ipipe_spin_unlock_irqrestore (printk)
  func       11   ipipe_trigger_irq (printk)
 |begin      12   ipipe_trigger_irq (printk)
I-pipe worst-case tracing service on 2.6.20.15-adeos1.8.06-xenomai2.3.3/ipipe-1.8-06
------------------------------------------------------------
Begin: 15848733925396 cycles, Trace Points: 35 (-10/+2), Length: 129 us
Calibrated minimum trace-point overhead: 0.145 us

 +--------------- Hard IRQs ('|': locked)
 |             +- Delay flag ('+': > 1 us, '!': > 10 us)
 |             |
  Type     Time   Function (Parent)
  func     -964   __ipipe_unstall_iret_root (restore_nocheck_notrace)
 |begin    -964   __ipipe_unstall_iret_root (restore_nocheck_notrace)
 |end      -964   __ipipe_unstall_iret_root (restore_nocheck_notrace)
  func     -948   __ipipe_divert_exception (device_not_available)
  func     -948   math_state_restore (device_not_available)
 |begin    -948   math_state_restore (device_not_available)
 |end      -947   math_state_restore (device_not_available)
  func     -946   __ipipe_unstall_iret_root (restore_nocheck_notrace)
 |begin    -946   __ipipe_unstall_iret_root (restore_nocheck_notrace)
 |end      -945   __ipipe_unstall_iret_root (restore_nocheck_notrace)
>|begin       0+  common_interrupt (<afacf4f8>)
:|func        7+  __ipipe_handle_irq (common_interrupt)
:|func       11+  __ipipe_ack_irq (__ipipe_handle_irq)
:|func       13   __ipipe_ack_level_irq (__ipipe_ack_irq)
:|func       14+  mask_and_ack_8259A (__ipipe_ack_level_irq)
:|func       15+  __ipipe_spin_lock_irqsave (mask_and_ack_8259A)
:|func       22   __ipipe_spin_unlock_irqrestore (mask_and_ack_8259A)
:|func       23+  __ipipe_dispatch_wired (__ipipe_handle_irq)
:|func       25+  xnintr_irq_handler (__ipipe_dispatch_wired)
:|func       31+  e1000_intr (xnintr_irq_handler)
:|func       35!  xnpod_get_time (e1000_intr)
:|func       46+  rt_eth_type_trans (e1000_intr)
:|func       52+  rtnetif_rx (e1000_intr)
:|func       59+  e1000_alloc_rx_buffers (e1000_intr)
:|func       63+  alloc_rtskb (e1000_alloc_rx_buffers)
:|func       67+  e1000_check_64k_bound (e1000_alloc_rx_buffers)
:|func       69   e1000_check_64k_bound (e1000_alloc_rx_buffers)
:|func       70+  alloc_rtskb (e1000_alloc_rx_buffers)
:|func       76   e1000_alloc_rx_buffers (e1000_intr)
:|func       76+  alloc_rtskb (e1000_alloc_rx_buffers)
:|func       78+  rtdm_event_signal (e1000_intr)
:|func       79+  xnsynch_flush (rtdm_event_signal)
:|func       86+  xnpod_resume_thread (xnsynch_flush)
:|[   -1]    88+  xnpod_resume_thread (xnsynch_flush)
:|func       92+  xnpod_schedule (rtdm_event_signal)
:|func       97+  rthal_irq_end (xnintr_irq_handler)
:|func       98+  __ipipe_end_level_irq (rthal_irq_end)
:|func      100   enable_8259A_irq (__ipipe_end_level_irq)
:|func      101+  __ipipe_spin_lock_irqsave (enable_8259A_irq)
:|func      102   __ipipe_spin_unlock_irqrestore (enable_8259A_irq)
:|func      103+  xnpod_schedule (xnintr_irq_handler)
:|[ 3388]   104!  xnpod_schedule (xnintr_irq_handler)
:|[   -1]   120+  xnpod_schedule (xnpod_suspend_thread)
:|func      128   __ipipe_restore_pipeline_head (rtdm_event_timedwait)
<|end       129   __ipipe_restore_pipeline_head (rtdm_event_timedwait)
  func      135   rt_stack_deliver (rt_stack_mgr_task)
 |begin     136   rt_stack_deliver (rt_stack_mgr_task)