Hi all,

I am experiencing problems when trying to bring up an Ethernet interface
in an RTnet-enabled environment.

The system is Scientific Linux 5.3 with a custom-built 2.6.30 x86_64
kernel and Xenomai 2.4.9. The RTnet version was initially 0.9.11; I have
also tried the latest git snapshot.

The system has three Ethernet controllers:
02:02.0 Ethernet controller: Intel Corporation 82541PI Gigabit Ethernet Controller (rev 05)
        Subsystem: Intel Corporation PRO/1000 GT Desktop Adapter
        Flags: bus master, 66MHz, medium devsel, latency 52, IRQ 24
        Memory at d0220000 (32-bit, non-prefetchable) [size=128K]
        Memory at d0200000 (32-bit, non-prefetchable) [size=128K]
        I/O ports at 2000 [size=64]
        [virtual] Expansion ROM at d1000000 [disabled] [size=128K]
        Capabilities: [dc] Power Management version 2
        Capabilities: [e4] PCI-X non-bridge device

0d:00.0 Ethernet controller: Intel Corporation 82573E Gigabit Ethernet Controller (Copper) (rev 03)
        Subsystem: Super Micro Computer Inc Unknown device 108c
        Flags: bus master, fast devsel, latency 0, IRQ 52
        Memory at d0300000 (32-bit, non-prefetchable) [size=128K]
        I/O ports at 3000 [size=32]
        Capabilities: [c8] Power Management version 2
        Capabilities: [d0] Message Signalled Interrupts: 64bit+ Queue=0/0 Enable+
        Capabilities: [e0] Express Endpoint IRQ 0

0f:00.0 Ethernet controller: Intel Corporation 82573L Gigabit Ethernet Controller
        Subsystem: Super Micro Computer Inc Unknown device 109a
        Flags: bus master, fast devsel, latency 0, IRQ 17
        Memory at d0400000 (32-bit, non-prefetchable) [size=128K]
        I/O ports at 4000 [size=32]
        Capabilities: [c8] Power Management version 2
        Capabilities: [d0] Message Signalled Interrupts: 64bit+ Queue=0/0 Enable-
        Capabilities: [e0] Express Endpoint IRQ 0

The rteth0 interface that I am trying to bring up here is the PCIe
82573E. The PCI-based 82541PI uses the regular (non-RTnet) e1000 driver
for ordinary UDP/TCP traffic.

After issuing the following commands:
/sbin/rmmod e1000e;
/sbin/insmod /usr/rtnet/modules/rtnet.ko socket_rtskbs=64;
/sbin/insmod /usr/rtnet/modules/rtipv4.ko;
/sbin/insmod /usr/rtnet/modules/rtpacket.ko;
/sbin/insmod /usr/rtnet/modules/rt_loopback.ko;
/sbin/insmod /usr/rtnet/modules/rt_e1000_new.ko pciif=pcie;
/sbin/insmod /usr/rtnet/modules/rtcap.ko;
/usr/rtnet/sbin/rtifconfig rtlo up 127.0.0.1;
/usr/rtnet/sbin/rtifconfig rteth0 up 192.168.5.1 promisc netmask 255.255.255.0;

The last command produces the following output in dmesg (see the
attached dump.txt for the entire log):
rt_e1000_new 0000:0d:00.0: irq 52 for MSI/MSI-X
rt_e1000_new 0000:0d:00.0: irq 52 for MSI/MSI-X
irq event 52: bogus return value 2fb07cb0
Pid: 0, comm: swapper Not tainted 2.6.30-xenomai-2.4.9 #3
Call Trace:
 [<ffffffff8026c690>] ? __report_bad_irq+0x30/0x7d
 [<ffffffff8026c743>] ? note_interrupt+0x66/0x153
 [<ffffffff8026d398>] ? handle_edge_irq+0xe4/0x114
 [<ffffffff8020e315>] ? handle_irq+0x81/0x8a
 [<ffffffff8020da0c>] ? do_IRQ+0x5a/0xa3
 [<ffffffff8020d9b2>] ? do_IRQ+0x0/0xa3
 [<ffffffff802702f4>] ? __ipipe_sync_stage+0x166/0x16c
 [<ffffffff802702fa>] ? __xirq_end+0x0/0x6b
 [<ffffffff8021faef>] ? __ipipe_handle_irq+0x18a/0x275
 [<ffffffff8020c493>] ? common_interrupt+0x13/0x2c
 [<ffffffff802120c9>] ? default_idle+0x8f/0x105
 [<ffffffff80505b22>] ? __atomic_notifier_call_chain+0x46/0x69
 [<ffffffff8020aadb>] ? cpu_idle+0x59/0x8b
handlers:
[<ffffffffa0360f33>] (e1000_intr_msi_test+0x0/0x71 [rt_e1000_new])
rt_e1000_new 0000:0d:00.0: irq 52 for MSI/MSI-X
e1000: rteth0: e1000_watchdog_task: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
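
As far as I can tell, the "irq event 52: bogus return value" message
comes from the generic Linux IRQ layer (__report_bad_irq in the trace
above), which expects a handler registered through request_irq() to
return IRQ_HANDLED or IRQ_NONE. A minimal sketch of the prototype that
path expects, assuming stock 2.6.30 headers (the handler name below is
made up, it is not the driver's):

#include <linux/interrupt.h>

/* Prototype expected by request_irq() on 2.6.30: (int irq, void *dev_id). */
static irqreturn_t msi_test_handler(int irq, void *dev_id)
{
        /* Anything other than IRQ_HANDLED/IRQ_NONE is reported as
         * "bogus return value" by the spurious-IRQ check. */
        return IRQ_HANDLED;
}

Since e1000_intr_msi_test is listed as the installed handler, this,
together with the request_irq warning from the build (below), suggests
that a handler which does not have the Linux prototype is being invoked
on this non-real-time path and returns a value outside the irqreturn_t
range.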



Note also that in order to build the experimental e1000 driver on this
64-bit system, I needed to patch the code a little:
--- drivers/experimental/e1000/e1000_main.c     2009-08-29 17:41:04.000000000 +0200
+++ drivers/experimental/e1000/e1000_main.c~    2009-02-28 14:34:03.000000000 +0100
@@ -305,3 +305,3 @@
 static int  e1000_intr(rtdm_irq_t *irq_handle);
-static irqreturn_t e1000_intr_msi(rtdm_irq_t *irq_handle);
+static int e1000_intr_msi(rtdm_irq_t *irq_handle);
 static bool e1000_clean_tx_irq(struct e1000_adapter *adapter,

because without it I was getting the following compile-time errors:
  CC [M]  /usr/src/rtnet-0.9.11.1/drivers/experimental/e1000/e1000_main.o
/usr/src/rtnet-0.9.11.1/drivers/experimental/e1000/e1000_main.c: In function ‘e1000_test_msi_interrupt’:
/usr/src/rtnet-0.9.11.1/drivers/experimental/e1000/e1000_main.c:1750: warning: passing argument 2 of ‘request_irq’ from incompatible pointer type
/usr/src/rtnet-0.9.11.1/drivers/experimental/e1000/e1000_main.c: In function ‘e1000_tx_csum’:
/usr/src/rtnet-0.9.11.1/drivers/experimental/e1000/e1000_main.c:3567: warning: unused variable ‘css’
/usr/src/rtnet-0.9.11.1/drivers/experimental/e1000/e1000_main.c: At top level:
/usr/src/rtnet-0.9.11.1/drivers/experimental/e1000/e1000_main.c:4496: error: conflicting types for ‘e1000_intr_msi’
/usr/src/rtnet-0.9.11.1/drivers/experimental/e1000/e1000_main.c:306: error: previous declaration of ‘e1000_intr_msi’ was here
/usr/src/rtnet-0.9.11.1/drivers/experimental/e1000/e1000_main.c: In function ‘e1000_intr_msi’:
/usr/src/rtnet-0.9.11.1/drivers/experimental/e1000/e1000_main.c:4578: warning: ‘return’ with a value, in function returning void
make[5]: *** [/usr/src/rtnet-0.9.11.1/drivers/experimental/e1000/e1000_main.o] Error 1
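
For comparison, RTDM interrupt handlers under Xenomai 2.4 use a
different prototype than the one request_irq() expects, which is
presumably where both the incompatible-pointer warning at line 1750 and
the conflicting-types error come from. A rough sketch of the RTDM
convention (based on the RTDM driver API, not on the actual RTnet
sources; the handler name is invented):

#include <rtdm/rtdm_driver.h>

/* RTDM handlers take the rtdm_irq_t handle registered with
 * rtdm_irq_request() and return an int status. */
static int rt_msi_handler(rtdm_irq_t *irq_handle)
{
        /* Real-time handlers report RTDM_IRQ_HANDLED or RTDM_IRQ_NONE. */
        return RTDM_IRQ_HANDLED;
}

The declaration change above only touches e1000_intr_msi, not the MSI
self-test path, so the warning at line 1750 (and possibly the
bogus-return-value report) presumably remains.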


Thanks in advance,
Anze Zagar.
Attached dump.txt (full dmesg output):

*** RTnet 0.9.11 - built on Aug 31 2009 10:50:39 ***

RTnet: initialising real-time networking
initializing loopback...
RTnet: registered rtlo
Intel(R) PRO/1000 Network Driver - rt_e1000_new version 7.6.15.5 ported to RTnet (pciif: pcie)
Copyright (c) 1999-2008 Intel Corporation.
rt_e1000_new 0000:0d:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
rt_e1000_new 0000:0d:00.0: setting latency timer to 64
e1000: 0000:0d:00.0: e1000_probe: (PCI Express:2.5Gb/s:Width x1) 00:30:48:b8:ca:bc
RTnet: registered rteth0
e1000: rteth0: e1000_probe: Intel(R) PRO/1000 Network Connection
rt_e1000_new 0000:0f:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
rt_e1000_new 0000:0f:00.0: setting latency timer to 64
e1000: 0000:0f:00.0: e1000_probe: (PCI Express:2.5Gb/s:Width x1) 00:30:48:b8:ca:bd
RTnet: registered rteth1
e1000: rteth1: e1000_probe: Intel(R) PRO/1000 Network Connection
RTcap: real-time capturing interface
rtlo (): not using net_device_ops yet
rteth0 (): not using net_device_ops yet
rteth0-mac (): not using net_device_ops yet
rteth1 (): not using net_device_ops yet
rteth1-mac (): not using net_device_ops yet
rt_e1000_new 0000:0d:00.0: irq 52 for MSI/MSI-X
rt_e1000_new 0000:0d:00.0: irq 52 for MSI/MSI-X
irq event 52: bogus return value 2fb07cb0
Pid: 0, comm: swapper Not tainted 2.6.30-xenomai-2.4.9 #3
Call Trace:
 [<ffffffff8026c690>] ? __report_bad_irq+0x30/0x7d
 [<ffffffff8026c743>] ? note_interrupt+0x66/0x153
 [<ffffffff8026d398>] ? handle_edge_irq+0xe4/0x114
 [<ffffffff8020e315>] ? handle_irq+0x81/0x8a
 [<ffffffff8020da0c>] ? do_IRQ+0x5a/0xa3
 [<ffffffff8020d9b2>] ? do_IRQ+0x0/0xa3
 [<ffffffff802702f4>] ? __ipipe_sync_stage+0x166/0x16c
 [<ffffffff802702fa>] ? __xirq_end+0x0/0x6b
 [<ffffffff8021faef>] ? __ipipe_handle_irq+0x18a/0x275
 [<ffffffff8020c493>] ? common_interrupt+0x13/0x2c
 [<ffffffff802120c9>] ? default_idle+0x8f/0x105
 [<ffffffff80505b22>] ? __atomic_notifier_call_chain+0x46/0x69
 [<ffffffff8020aadb>] ? cpu_idle+0x59/0x8b
handlers:
[<ffffffffa0360f33>] (e1000_intr_msi_test+0x0/0x71 [rt_e1000_new])
rt_e1000_new 0000:0d:00.0: irq 52 for MSI/MSI-X
e1000: rteth0: e1000_watchdog_task: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX