Hi Jan,
>
> How long did you run the ping test? Note that it "only" sends out one
> packet per second, while the RTmac TDMA discipline transmits one packet
> per cycle. So, if you are leaking rtskbs, the latter will stumble much
> earlier.
>
I ran rtping tests that lasted more than 15 min, i.e., about 1000 packets, and I did that from
both machines. There were no errors and no packet loss occurred.
The only strange thing I noticed, once, on one machine, is that the RTT values were
negative, like -89us, -85us, -96us... I guess this is unrelated to my main
problem (TDMA RTmac), because rtping normally gives me values in the
+70-100us range.
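Something along these lines should reproduce the test (only a sketch; the peer address
and the "time=" pattern in the output are assumptions you may need to adjust):
#!/bin/sh
# Rough sketch of the rtping run described above: let rtping run for
# ~15 minutes (one packet per second) and then count how many replies
# reported a negative RTT.
PEER=10.0.0.2          # assumed peer address, adjust to your setup
LOG=/tmp/rtping.log

rtping "$PEER" > "$LOG" &
PID=$!
sleep 900              # roughly 900 packets at one per second
kill "$PID"

echo "replies:       $(grep -c 'time=' "$LOG")"
echo "negative RTTs: $(grep -c 'time=-' "$LOG")"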
> Did I ask this already? Check /proc/interrupts if the NIC on the master
> is assigned to Linux (in order to detect potential IRQ conflicts), also
> check /proc/rtai/<don't-know-its-name: RT IRQs> if there is progress
> /wrt handled NIC IRQs when RTnet is running.
>
Well, I think there are no IRQ conflicts, because my card is on IRQ #6 and
after I do './rtnet start' I get the following:
##### /proc/interrupts:
            CPU0       CPU1
  0:     2398005          0   XT-PIC-XT  timer
  1:        3007          0   XT-PIC-XT  i8042
  2:           0          0   XT-PIC-XT  cascade
  8:           0          0   XT-PIC-XT  rtc
 10:       91059          0   XT-PIC-XT  yenta, ahci, [EMAIL PROTECTED]:0000:00:02.0
 11:         146          0   XT-PIC-XT  sdhci:slot0, HDA Intel
 12:       41822          0   XT-PIC-XT  i8042
NMI:           0          0
LOC:     2397695    2397606
ERR:           0
##### /proc/rtai/hal
** RTAI/x86:
APIC Frequency: 12468000
APIC Latency: 3944 ns
APIC Setup: 1000 ns
** Real-time IRQs used by RTAI:
#6 at ffffffff8830d1ff
#325 at ffffffff882e13ec
#328 at ffffffff882e0d0b
** RTAI extension traps:
SYSREQ=0xf6
** RTAI SYSREQs in use: #1 #2
I was not able to find anything that would give me per-IRQ counts so I could
watch the progress. I know /proc/xenomai/irq works under Xenomai, but I do not
know the RTAI equivalent.
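As a crude substitute, something like this should at least confirm that IRQ #6 is not
also claimed by Linux, and show whether anything moves between two snapshots (a sketch;
it only uses the /proc paths quoted above, since this RTAI version does not seem to
export per-IRQ counters):
#!/bin/sh
# Sketch: check for IRQ sharing between RTAI and Linux, then diff two
# snapshots of the interrupt-related /proc files taken 10 s apart.
NIC_IRQ=6               # the NIC's RT IRQ, from /proc/rtai/hal above

if grep -q "^ *$NIC_IRQ:" /proc/interrupts; then
    echo "IRQ $NIC_IRQ also appears in /proc/interrupts - possible conflict"
else
    echo "IRQ $NIC_IRQ is not handled by Linux - no obvious conflict"
fi

cat /proc/interrupts /proc/rtai/hal > /tmp/irq.before
sleep 10
cat /proc/interrupts /proc/rtai/hal > /tmp/irq.after
diff /tmp/irq.before /tmp/irq.after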
Some other output that might be of interest:
##### /proc/rtai/rtnet/*
Index  Name    Flags
1      rteth0  UP BROADCAST
2      rtlo    UP LOOPBACK

Statistics          Current  Maximum
rtskb pools               6        6
rtskbs                  184      184
rtskb memory need    329728   329728
RTnet 0.9.10 - built on Nov 3 2008 10:28:22
RTcap: yes
rtnetproxy: no
bug checks: no
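To check whether rtskbs are leaking (as suggested above) while TDMA is trying to come up,
I could sample these counters periodically; a sketch, since I have not pinned down which
file under /proc/rtai/rtnet holds them:
#!/bin/sh
# Sketch: print the rtskb statistics every couple of seconds; if the
# "Current" numbers keep growing while TDMA runs, that would point to
# an rtskb leak.
while true; do
    date +%T
    grep -r "rtskb" /proc/rtai/rtnet/ 2>/dev/null
    sleep 2
done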
##### /proc/rtai/rtnet/ipv4/host_route
Hash  Destination  HW Address         Device
00    0.0.0.0      00:00:00:00:00:00  rtlo
01    127.0.0.1    00:00:00:00:00:00  rtlo
02    10.0.0.2     00:00:00:00:00:00  rtlo
3F    10.0.0.255   FF:FF:FF:FF:FF:FF  rteth0
##### /proc/rtai/rtnet/host_route (SLAVE)
Hash  Destination     HW Address         Device
00    0.0.0.0         00:00:00:00:00:00  rtlo
01    10.0.0.1        00:00:00:00:00:00  rtlo
01    127.0.0.1       00:00:00:00:00:00  rtlo
02    10.0.0.2        00:16:D3:3E:8C:B5  rteth0
3F    10.255.255.255  FF:FF:FF:FF:FF:FF  rteth0
So, it seems that the error occurs after the slave has the correct master
MAC, but before the master has the correct slave MAC address.
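To narrow down when exactly that resolution stalls, something like this could run on the
master while the slave attaches; a sketch, assuming the host_route path from above and
that 10.0.0.1 is the slave's address (judging from the tables):
#!/bin/sh
# Sketch: poll the master's host route table and timestamp the moment
# the slave's entry gains a real MAC address - or show that it never
# does before the hang.
ROUTES=/proc/rtai/rtnet/ipv4/host_route
SLAVE_IP=10.0.0.1       # assumed slave address, adjust as needed

while true; do
    ENTRY=$(grep "$SLAVE_IP" "$ROUTES")
    echo "$(date +%T)  ${ENTRY:-<no entry yet>}"
    case "$ENTRY" in
        ""|*00:00:00:00:00:00*) ;;      # not resolved yet
        *) echo "slave MAC resolved"; break ;;
    esac
    sleep 1
done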
> That hang may either be the reason or a result of the actual problem,
> still don't know. Can you check with a different NIC? Even a different
> e1000 revision may be interesting. What happens if master and slave swap
> roles?
>
Yeah, I also have no idea whether the hang is a cause or a consequence of
the TDMA failure.
I am working with a laptop, so I can't check with another NIC. Which e1000
revision exactly did you have in mind?
Finally, when master and slave swap roles the outcome stays the same, i.e., I
again get the TDMA failure and the hang (on the master). The NICs are
different, though (Intel 82566M and 80003ES2LAN gigabit controllers).
Thanks,
Bodan