On 04/04/2011 12:48, Alan Bartlett wrote:
On 4 April 2011 09:59, Raimondo Giammanco [VKICC]<[email protected]>  wrote:

 I've just tested the card once again with Ubuntu, and now I'm 100% positive it
works (i.e. the E1G44ETBLK was the only wired port during the iperf run; eth0
was disconnected).

 It works with Ubuntu 10.04 server:
kernel 2.6.35-24-server
igb version 2.1.0-k2

It remains to be seen where the problem lies in SL (tested kernel:
2.6.32-71.18.2.el6.x86_64).

I'll see if I can compile kernel version 2.6.35 on SL, or whether the problem
is linked to my boot parameters (noapic acpi=off).

Hi Raimondo,

Thank you for keeping this m/l updated with the results of your
experiments. I would suggest the following steps:

(1) Check the Red Hat bug tracker for any related issues.
(2) Review the need for your current boot parameters.
(3) Build your own testing kernel from the latest stable long-term
support tarball (linux-2.6.35.12.tar.bz2).
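For step (3), the build on an EL6 box goes roughly like this; a sketch only, with the download path and tarball name as illustrative assumptions:

```shell
# Fetch and unpack the long-term stable tarball
cd /usr/src
wget https://www.kernel.org/pub/linux/kernel/v2.6/longterm/v2.6.35/linux-2.6.35.12.tar.bz2
tar xjf linux-2.6.35.12.tar.bz2
cd linux-2.6.35.12

# Start from the running kernel's configuration, accepting defaults
# for any options new to this version
cp /boot/config-"$(uname -r)" .config
make oldconfig

# Build and install; on EL6, "make install" should also add a boot entry
make -j"$(nproc)" bzImage modules
make modules_install install
```

This keeps the distribution kernel in place, so you can fall back to it from the boot menu if the test kernel misbehaves.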

Regards,
Alan.

Hello Alan,

 I haven't had time in the last few days to get back to my servers.

Today I compiled my own kernel, version 2.6.38.2 (I compiled it before reading your comment about long-term kernel support)...

I booted it on one server and the very first test seems encouraging: I can boot without the previous flags "acpi=off noapic", and the PCI-e network card finally works as expected:

iperf -s (on a bonded interface)
########
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 10.130.0.102 port 5001 connected with 10.130.0.103 port 56568
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  2.10 GBytes  1.80 Gbits/sec
[  5] local 10.130.0.102 port 5001 connected with 10.130.0.103 port 56569
[  5]  0.0-10.0 sec  2.11 GBytes  1.81 Gbits/sec
[  4] local 10.130.0.102 port 5001 connected with 10.130.0.103 port 56570
[  4]  0.0-10.0 sec  2.12 GBytes  1.82 Gbits/sec
########
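As a sanity check on those numbers (assuming iperf's usual units: GBytes are binary, 2^30 bytes, while Gbits/sec are decimal, 10^9 bits), the reported rate matches the transfer, and ~1.8 Gbits/sec per stream is roughly what you'd hope for from two round-robin-bonded gigabit ports:

```shell
# 2.10 GBytes transferred in 10 seconds:
awk 'BEGIN { printf "%.2f Gbits/sec\n", 2.10 * 2^30 * 8 / 10 / 1e9 }'
# prints: 1.80 Gbits/sec
```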

and ifconfig gives

#######
bond3     Link encap:Ethernet  HWaddr 00:00:00:00:02:00
          inet addr:10.130.0.102  Bcast:10.130.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:9403147 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1716406 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:14236223600 (13.2 GiB)  TX bytes:155593304 (148.3 MiB)

eth0      Link encap:Ethernet  HWaddr 00:25:90:1A:62:DE
          inet addr:10.1.0.133  Bcast:10.1.7.255  Mask:255.255.248.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5637 errors:0 dropped:299 overruns:0 frame:0
          TX packets:397 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:557787 (544.7 KiB)  TX bytes:39951 (39.0 KiB)
          Memory:fafe0000-fb000000

eth8      Link encap:Ethernet  HWaddr 00:00:00:00:02:00
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:4698471 errors:0 dropped:0 overruns:0 frame:0
          TX packets:858208 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:7113403080 (6.6 GiB)  TX bytes:77796780 (74.1 MiB)
          Memory:f87e0000-f8800000

eth9      Link encap:Ethernet  HWaddr 00:00:00:00:02:00
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:4704676 errors:0 dropped:0 overruns:0 frame:0
          TX packets:858198 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:7122820520 (6.6 GiB)  TX bytes:77796524 (74.1 MiB)
          Memory:f7fe0000-f8000000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
#########

where cat /proc/net/bonding/bond3 gives
#######
Ethernet Channel Bonding Driver: v3.7.0 (June 2, 2010)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth8
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:1b:21:93:b4:08
Slave queue ID: 0

Slave Interface: eth9
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:1b:21:93:b4:09
Slave queue ID: 0
############

where eth8 and eth9 are the ports on the second PCI-e E1G44 card.
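For reference, a round-robin bond like the one shown above is typically set up on EL6 with ifcfg files along these lines (device names and addresses taken from the output above; treat this as a sketch, not my exact configuration):

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond3
DEVICE=bond3
IPADDR=10.130.0.102
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=balance-rr miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth8
# (ifcfg-eth9 is identical apart from DEVICE)
DEVICE=eth8
MASTER=bond3
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

The mode=balance-rr and miimon=100 values correspond to the "load balancing (round-robin)" mode and 100 ms MII polling interval reported in /proc/net/bonding/bond3.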

If tomorrow's tests confirm this, I'll need to run a custom kernel for the time being, until the el6 kernels are OK with my hardware.

I may also look into a kernel with DRBD included; that would be really nice.

Regards and thanks for the help.

Raimondo
