Jan Kiszka wrote:
> Karl Reichert wrote:
>> Karl Reichert wrote:
>>> Jan Kiszka wrote:
>>>> Karl Reichert wrote:
>>>>> Jan Kiszka wrote:
>>>>>> Karl Reichert wrote:
>>>> ...
>>>>>>> I put the freeze after the sleep now, please see attached files.
>>>>>>> This is what printk gives:
>>>>>>> [ 7605.737664] [REK debug] tdma->current_cycle_start = 1191326103544417349
>>>>>>> [ 7605.737708] [REK debug] job->offset = 2300000
>>>>>> So you sleep about 2300 us after the Sync frame reception. But your
>>>>>> frozen backtrace doesn't cover the full period; just look at the
>>>>>> timestamps. Once you can see the full period of 2300 us from falling
>>>>>> asleep until waking up, and maybe also sending the packet (play with
>>>>>> back_trace_points and post_trace_points), you can be sure that the
>>>>>> local timing is correct.
>>>>> I'm sorry, I do not know how to read those files and what to look for.
>>>>> The wiki page is a little bit short on this topic. So I attached my
>>>>> frozen and max files again, this time with post_, pre_ and back_ points
>>>>> set to 1000. Please tell me where I can see that I really sleep 2300 us.
>>>>> Thanks in advance ...
>>>>
>>>> See comments below. In short: timing looks ok.
>>> Thanks for the detailed explanatory notes! I added a copy of this to the
>>> Xenomai Wiki page dealing with the iPipe tracer. I hope this will help
>>> other users understand how to read those files.
>>>
>>>>>> Then you will have to look again at the data that is received and
>>>>>> transmitted.
>>>>> What do you mean by that? Which data? I guess you aren't talking about
>>>>> ethernet frames, as I already checked them via Wireshark. So which data
>>>>> is worth checking?
>>>>
>>>> Compare the Wireshark result (which you took on the wire? Or on one of
>>>> the nodes? Sorry, don't recall anymore) with the data one node sees:
>>>> which cycle number it receives via Sync and what it then writes out.
>>>> Next is what the other side receives. Simply track the flow and build
>>>> your own picture of what is as expected and what might be abnormal.
>>> I checked this with printk and Wireshark, and the result is that
>>> everything is working as it should:
>>> The Sync frame is received, and the Request Calibration frame sent by the
>>> slave contains TDMA_current_cycle_no + 1, which means the slave requests
>>> the Reply Calibration frame in the next TDMA cycle. But the problem still
>>> remains ... the Request Calibration frame is sent much too late, so that
>>> the desired reply TDMA cycle lies in the past.
>>> So I don't see why this doesn't work now ... we sleep the desired time,
>>> as the ipipe tracer showed, and we have the right TDMA cycle value ...
>>> where the heck is the problem?
>> I also did a trace on the master now, to see if any problems occur there.
>> But as one can see in the attached trace file, everything is fine; the
>> driver receives the frame ~400 us before the freeze, so this is not the
>> reason.
>> I don't have any idea now what the reason for this behavior could be:
>> - the slave is sleeping the desired time
>> - the master is working fast enough (see attached file)
>> - tdma->current_cycle holds the right value
>>
>> any ideas?
> 
> I'm lacking the full picture, need to catch up with the details tonight
> or so.

OK, this disease needs to be cured before it becomes chronic: To help me
get the full picture, could you send me a current and complete Wireshark
trace, taken with a third station sniffing the communication between
master and slave? If there is no hub available to create the appropriate
setup, two traces taken on master and slave at the same problematic time
would be an alternative.

Thanks,
Jan



_______________________________________________
RTnet-users mailing list
RTnet-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/rtnet-users
