Hello Jan

Ok, the overlong round-trip times (RTTs) from slave to master on my system were introduced by the switch (Longshine LCS-FS6108). For normal TCP/IP traffic the Longshine LCS-FS6108 works fine. I have two of them, so the risk that both are defective is relatively small.
Do you know of other switches which do not work?
With a crossover cable the RTTs were ok.
I have now replaced the switch, and the RTTs are ok.

Jochen




From: Jan Kiszka <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
CC: rtnet-users@lists.sourceforge.net
Subject: [RTnet-users] Re: ping requests slave to master
Date: Sun, 26 Mar 2006 17:29:09 +0200

matrix_df hotmail wrote:
> Hello Jan
>
> I just want to know if the ping (sbin/rtping) only functions from master
> to slave and not the other way (slave to master).
>
> from master to slave it looks ok:
> ############################
> [EMAIL PROTECTED] rtnet]# sbin/rtping 10.0.0.2
> Real-time PING 10.0.0.2 56(84) bytes of data.
> 64 bytes from 10.0.0.2: icmp_seq=1 time=7518.8 us
> 64 bytes from 10.0.0.2: icmp_seq=2 time=2469.3 us
> 64 bytes from 10.0.0.2: icmp_seq=3 time=2411.3 us
> 64 bytes from 10.0.0.2: icmp_seq=4 time=7354.3 us
> 64 bytes from 10.0.0.2: icmp_seq=5 time=7294.9 us
> 64 bytes from 10.0.0.2: icmp_seq=6 time=7229.1 us
> 64 bytes from 10.0.0.2: icmp_seq=7 time=2177.7 us
> 64 bytes from 10.0.0.2: icmp_seq=8 time=7118.9 us
> 64 bytes from 10.0.0.2: icmp_seq=9 time=7061.2 us
> 64 bytes from 10.0.0.2: icmp_seq=10 time=7004.2 us
> 64 bytes from 10.0.0.2: icmp_seq=11 time=1942.9 us
> 64 bytes from 10.0.0.2: icmp_seq=12 time=6882.9 us
> 64 bytes from 10.0.0.2: icmp_seq=13 time=6825.0 us
> 64 bytes from 10.0.0.2: icmp_seq=14 time=6766.3 us
> 64 bytes from 10.0.0.2: icmp_seq=15 time=1708.1 us
> 64 bytes from 10.0.0.2: icmp_seq=16 time=1650.7 us
>
> --- 10.0.0.2 rtping statistics ---
> 16 packets transmitted, 16 received, 0% packet loss
> worst case rtt = 7518.8 us
> ####################
>
>
> but from slave to master:
> (First I tried the preconfigured version of rtnet.conf without tdma.conf,
> and then with tdma.conf, but I get the same result.)
>
> tdma.conf:
> master:
> ip 10.0.0.1
> cycle 5000
> slot 0 1000
>
> slave:
> ip 10.0.0.2
> slot 0 2000
>
>
> rtping from slave to master:
> ####################
> [EMAIL PROTECTED] rtnet]# sbin/rtping 10.0.0.1
> Real-time PING 10.0.0.1 56(84) bytes of data.
> 64 bytes from 10.0.0.1: icmp_seq=1 time=8260.6 us
> 64 bytes from 10.0.0.1: icmp_seq=2 time=378189.8 us
> 64 bytes from 10.0.0.1: icmp_seq=3 time=183124.2 us
> 64 bytes from 10.0.0.1: icmp_seq=4 time=8062.9 us
> 64 bytes from 10.0.0.1: icmp_seq=5 time=6985.4 us
> 64 bytes from 10.0.0.1: icmp_seq=6 time=426923.9 us
> 64 bytes from 10.0.0.1: icmp_seq=7 time=231859.5 us
> 64 bytes from 10.0.0.1: icmp_seq=8 time=36786.4 us
> 64 bytes from 10.0.0.1: icmp_seq=9 time=6724.5 us
> 64 bytes from 10.0.0.1: icmp_seq=10 time=456646.9 us
> 64 bytes from 10.0.0.1: icmp_seq=11 time=261582.2 us
> 64 bytes from 10.0.0.1: icmp_seq=12 time=66519.0 us
> 64 bytes from 10.0.0.1: icmp_seq=13 time=6452.3 us
> 64 bytes from 10.0.0.1: icmp_seq=14 time=481373.0 us
> 64 bytes from 10.0.0.1: icmp_seq=15 time=286308.5 us
> 64 bytes from 10.0.0.1: icmp_seq=16 time=91241.8 us
> 64 bytes from 10.0.0.1: icmp_seq=17 time=6173.2 us
>
> --- 10.0.0.1 rtping statistics ---
> 17 packets transmitted, 17 received, 0% packet loss
> worst case rtt = 481373.0 us
> #############################
> Sometimes even packets get lost.
>
> Is this normal?
> And if it is normal, why?
>

It is not normal, and it requires a closer look. Just to make sure that
we didn't introduce some regression recently, I just fired up a tiny
network of an RTAI 3.3 master and a Xenomai 2.1 slave (and vice versa)
with your setup:

slave: # rtping master
...
122 packets transmitted, 122 received, 0% packet loss
worst case rtt = 9028.2 us

All fine here.

As a next step I would suggest capturing packets on the master and slave
sides and looking for unexpected delays. Ethereal is very helpful in this
regard, specifically in combination with its filtering capability (e.g.
"!tdma" masks all sync frames).
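The capture workflow Jan suggests could look roughly like this. This is only a sketch: it assumes RTnet's RTcap module is available so the real-time NIC is mirrored to a regular Linux interface, and the interface name rteth0, module path, and file name are placeholders for whatever your setup actually uses.

```shell
# On both master and slave: load RTnet's capture support so the
# real-time NIC is mirrored as a normal Linux interface.
# (Assumption: rtcap was built with your RTnet tree; the module
# path and the mirrored interface name rteth0 may differ.)
insmod modules/rtcap.ko
ifconfig rteth0 up

# Record traffic on each side while rtping is running
# (stop with Ctrl-C after a few cycles):
tcpdump -i rteth0 -w slave.pcap

# Then open slave.pcap (and the master-side capture) in Ethereal,
# apply the display filter "!tdma" to hide the sync frames, and
# compare the ICMP request/reply timestamps on both sides to see
# where the delay is introduced.
```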

Are you sure that there is no other RT traffic on the wire? rtping
competes with RTcfg (heartbeat) for the second-lowest packet priority
(the lowest is reserved for tunnelling), but that would only explain a
single additional cycle of delay if the two happen to hit the same cycle.

Jan






