Hi (g/i)pxe developers,

Playing recently with a new toy containing an MCP79 chipset (0x10DE, 0x0AB0)
and using iSCSI, I found that the transfer rate was abnormally slow during the
(g/i)pxe phase (about 15 seconds to load roughly 15 MB of kernel + initrd, on
a Gb network), while the transfer rate was fine (~50 MB/s) once the Linux
kernel had taken control. After testing with TFTP and HTTP transfers, I found
that the problem was the same, so it had to be in the forcedeth driver.

Enabling debug mode showed me that, while polling, the driver never stopped
calling forcedeth_link_status and, as a consequence, nv_update_linkspeed (and
each time it just found that there was no link change, so it was a big waste
of time).
The same problem exists in both gpxe and ipxe.
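
For context, here is the pre-fix forcedeth_link_status() (reconstructed from
the lines removed by the patch below; the "static void" signature is assumed
from the surrounding code). It calls nv_update_linkspeed(), with its
comparatively expensive MII/PHY accesses, unconditionally on every invocation:
===========
/* Pre-fix version: there is no check whether a link change actually
 * happened, so the MII/PHY work in nv_update_linkspeed() runs every
 * time the function is called from the poll path */
static void
forcedeth_link_status ( struct net_device *netdev )
{
	struct forcedeth_private *priv = netdev_priv ( netdev );

	if ( nv_update_linkspeed ( priv ) == 1 )
		netdev_link_up ( netdev );
	else
		netdev_link_down ( netdev );
}
===========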

After having a look at the Linux forcedeth driver, I found that the (g/i)pxe
forcedeth driver code was missing an NVREG_MIISTAT_LINKCHANGE register update
(acknowledgement), which made the poll method think that a link change had
occurred every time it was called.
Acknowledging this bit and testing it in forcedeth_link_status did the trick!
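
For comparison, the Linux driver acknowledges the event in its link interrupt
handler. This is a sketch from memory of the nv_link_irq() logic (see
drivers/net/forcedeth.c in the kernel tree for the real thing; details may
differ): NvRegMIIStatus is write-1-to-clear, so writing
NVREG_MIISTAT_LINKCHANGE back acknowledges the link-change event.
===========
/* Sketch of the Linux nv_link_irq() logic, from memory */
static void nv_link_irq(struct net_device *dev)
{
	u8 __iomem *base = get_hwbase(dev);
	u32 miistat;

	/* Read the MII status, then acknowledge the link-change bit
	 * by writing it back (write-1-to-clear) */
	miistat = readl(base + NvRegMIIStatus);
	writel(NVREG_MIISTAT_LINKCHANGE, base + NvRegMIIStatus);

	/* Only do the link re-negotiation work when a change was
	 * actually flagged */
	if (miistat & NVREG_MIISTAT_LINKCHANGE)
		nv_linkchange(dev);
}
===========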

Here are some tests that I have done:
####################################
Embedded scripts used to test performance:
===========
#!gpxe
dhcp net0
time imgfetch http://192.168.1.10/some.iso
===========
or:
===========
#!ipxe
dhcp net0
time imgfetch http://192.168.1.10/some.iso
===========
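
(The time command wraps imgfetch and prints the elapsed time of the fetch;
that is where the per-run timings below come from.)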

The image size (in bytes) is:
434065408 some.iso

In every case:
- 1st and 2nd runs are done "the classic way"
- 3rd run is done with an unplug/replug of the cable to test reconnection

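(The MB/s figures below are simply the image size divided by the elapsed
time: e.g. 434065408 bytes / 253 s ≈ 1.64 MB/s for Test1's first run.)
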
<--------  GPXE  ---------->
Test1: gpxe / no fix
====================
1st run: 253s / 1.64 MB/s
2nd run: 252s / 1.64 MB/s
3rd run: 260s - reconnection OK

Test2: gpxe / fix
=================
1st run: 13s / 31.84 MB/s
2nd run: 14s / 29.57 MB/s
3rd run: 18s - reconnection OK

<--------  IPXE  ---------->
Test3: ipxe / no fix
====================
1st run: 250s / 1.66 MB/s
2nd run: 251s / 1.65 MB/s
3rd run: 257s - reconnection OK

Test4: ipxe / fix
=================
1st run: 10s / 41.40 MB/s
2nd run: 10s / 41.40 MB/s
3rd run: 17s - reconnection OK
####################################

I am not a network driver hacker, nor a C hacker at all, so perhaps the code
is not in the best possible place, but you get the idea ;)
Looking at how it is done in the Linux kernel driver code, I don't think this
will break support for other chipsets.

If you need any further tests, just ask (please reply to me directly, as I am
not subscribed to the (g/i)pxe devel mailing lists).

Cheers,

Yann
--- ipxe.orig/src/drivers/net/forcedeth.c	2011-03-20 10:02:28.136809766 +0100
+++ ipxe/src/drivers/net/forcedeth.c	2011-03-20 10:10:55.319160471 +0100
@@ -967,11 +967,18 @@
 forcedeth_link_status ( struct net_device *netdev )
 {
 	struct forcedeth_private *priv = netdev_priv ( netdev );
-
-	if ( nv_update_linkspeed ( priv ) == 1 )
-		netdev_link_up ( netdev );
-	else
-		netdev_link_down ( netdev );
+	void *ioaddr = priv->mmio_addr;
+	u32 mii_status;
+
+	mii_status = readl ( ioaddr + NvRegMIIStatus );
+	writel ( NVREG_MIISTAT_LINKCHANGE, ioaddr + NvRegMIIStatus );
+
+	if ( mii_status & NVREG_MIISTAT_LINKCHANGE ) {
+		if ( nv_update_linkspeed ( priv ) == 1 )
+			netdev_link_up ( netdev );
+		else
+			netdev_link_down ( netdev );
+	}
 }
 
 /**
--- gpxe.orig/src/drivers/net/forcedeth.c	2011-03-19 20:31:29.505555632 +0100
+++ gpxe/src/drivers/net/forcedeth.c	2011-03-20 10:10:38.532792825 +0100
@@ -967,11 +967,18 @@
 forcedeth_link_status ( struct net_device *netdev )
 {
 	struct forcedeth_private *priv = netdev_priv ( netdev );
-
-	if ( nv_update_linkspeed ( priv ) == 1 )
-		netdev_link_up ( netdev );
-	else
-		netdev_link_down ( netdev );
+	void *ioaddr = priv->mmio_addr;
+	u32 mii_status;
+
+	mii_status = readl ( ioaddr + NvRegMIIStatus );
+	writel ( NVREG_MIISTAT_LINKCHANGE, ioaddr + NvRegMIIStatus );
+
+	if ( mii_status & NVREG_MIISTAT_LINKCHANGE ) {
+		if ( nv_update_linkspeed ( priv ) == 1 )
+			netdev_link_up ( netdev );
+		else
+			netdev_link_down ( netdev );
+	}
 }
 
 /**