Andrew Gallatin wrote:
Nicolas Droux wrote:
[Bcc'ed [email protected] and [email protected]]

I am pleased to announce the availability of the first revision of the "Crossbow APIs for Device Drivers" document, available at the following location:

I recently ported a 10GbE driver to Crossbow.  My driver currently
has a single ring-group, and a configurable number of rings.  The
NIC hashes received traffic to the rings in hardware.
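
(For context, the group and rings are advertised in my mc_getcapab entry
point, roughly like the sketch below.  The my10g_* names are placeholders
rather than my real driver, and I'm going from memory on the
mac_capab_rings_t field names, so they may not match the doc exactly.)

#include <sys/mac_provider.h>

/* placeholder soft state, just enough for the sketch */
typedef struct my10g {
        int     m_num_rx_rings;         /* configurable number of rx rings */
} my10g_t;

static void my10g_fill_ring(void *, mac_ring_type_t, const int, const int,
    mac_ring_info_t *, mac_ring_handle_t);
static void my10g_fill_group(void *, mac_ring_type_t, const int,
    mac_group_info_t *, mac_group_handle_t);

static boolean_t
my10g_m_getcapab(void *arg, mac_capab_t cap, void *cap_data)
{
        my10g_t *myp = arg;

        switch (cap) {
        case MAC_CAPAB_RINGS: {
                mac_capab_rings_t *cap_rings = cap_data;

                if (cap_rings->mr_type != MAC_RING_TYPE_RX)
                        return (B_FALSE);       /* no tx rings advertised */

                cap_rings->mr_group_type = MAC_GROUP_TYPE_STATIC;
                cap_rings->mr_rnum = myp->m_num_rx_rings;
                cap_rings->mr_gnum = 1;                 /* single ring group */
                cap_rings->mr_rget = my10g_fill_ring;   /* fills mac_ring_info_t */
                cap_rings->mr_gget = my10g_fill_group;
                return (B_TRUE);
        }
        default:
                return (B_FALSE);
        }
}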

I'm having a strange issue which I do not see in the non-crossbow
version of the driver.  When I run TCP benchmarks, I'm seeing
what seems like packet loss.  Specifically, netstat shows
tcpInUnorderBytes and tcpInDupBytes increasing at a rapid rate,
and bandwidth is terrible (~1Gb/s for crossbow, 7Gb/s non-crossbow
on the same box with the same OS revision).

The first thing I suspected was that packets were getting dropped
due to my having the wrong generation number, but a dtrace probe
doesn't show any drops there.
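
(For reference, my generation number handling is just: stash the
mr_gen_num the framework passes to my mri_start callback and hand that
same value back on every mac_rx_ring() call.  Roughly, with the same
placeholder my10g_* names as above:)

/* placeholder per-ring state used in these sketches */
typedef struct my10g_rx_ring {
        kmutex_t                r_lock;         /* protects the hw ring and fields below */
        mac_handle_t            r_mac_handle;   /* handle from mac_register() */
        mac_ring_handle_t       r_ring_handle;  /* handle given to my mr_rget callback */
        uint64_t                r_gen_num;      /* generation number from mri_start */
        boolean_t               r_polling;      /* used in the intr_disable sketch below */
} my10g_rx_ring_t;

/* mri_start callback: this is where the framework hands over the gen number */
static int
my10g_rx_ring_start(mac_ring_driver_t rh, uint64_t mr_gen_num)
{
        my10g_rx_ring_t *ring = (my10g_rx_ring_t *)rh;

        mutex_enter(&ring->r_lock);
        ring->r_gen_num = mr_gen_num;
        mutex_exit(&ring->r_lock);
        return (0);
}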

Now I'm wondering if perhaps the interrupt handler is in
the middle of a call to mac_rx_ring() when interrupts
are disabled. Am I supposed to ensure that my interrupt handler is not
calling mac_rx_ring() before my rx_ring_intr_disable()
routine returns?  Or does the mac layer serialize this?
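
(If it is my job to serialize this, what I had in mind is just a flag
set under the ring lock, something like the sketch below.  Again these
are placeholder names, and the my10g_hw_* calls stand in for whatever
masks/unmasks the ring's interrupt in my hardware -- this is a sketch of
the idea, not code that's in the driver today.)

static void my10g_hw_mask_ring_intr(my10g_rx_ring_t *);        /* placeholder hw access */
static void my10g_hw_unmask_ring_intr(my10g_rx_ring_t *);      /* placeholder hw access */

/* mac_intr_t callbacks, registered via infop->mri_intr in my mr_rget callback */
static int
my10g_rx_ring_intr_disable(mac_intr_handle_t intrh)
{
        my10g_rx_ring_t *ring = (my10g_rx_ring_t *)intrh;

        mutex_enter(&ring->r_lock);
        ring->r_polling = B_TRUE;       /* tell the handler to stop calling mac_rx_ring() */
        my10g_hw_mask_ring_intr(ring);  /* placeholder: mask this ring's interrupt */
        mutex_exit(&ring->r_lock);
        return (0);
}

static int
my10g_rx_ring_intr_enable(mac_intr_handle_t intrh)
{
        my10g_rx_ring_t *ring = (my10g_rx_ring_t *)intrh;

        mutex_enter(&ring->r_lock);
        ring->r_polling = B_FALSE;
        my10g_hw_unmask_ring_intr(ring);        /* placeholder: unmask it again */
        mutex_exit(&ring->r_lock);
        return (0);
}

Taking the ring lock in intr_disable would mean that by the time it
returns, the handler has either already delivered its chain or will see
r_polling set -- but whether I'm required to do that, or the mac layer
handles it for me, is exactly what I'm asking.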

I'm still trying to figure this out.  The code is just so large
that I'm having a hard time figuring out the big picture.

I have discovered that if I use the dladm create-vnic trick,
which disables polling, the out-of-order problem goes away.
I guess this implies that there is some problem with synchronization
between the interrupt routine and the polling routine.

My driver looks quite a bit like the bge2 driver, in that
my interrupt handler builds an mblk chain while holding an
rx ring's lock, and then drops the lock before calling mac_rx_ring().
I tried changing the code to hold the lock across the call to
mac_rx_ring(), and I still see TCP complaining of out-of-order and
duplicate packets.  So perhaps things are being dropped..?
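
(To make that concrete, the per-ring rx path is roughly the following.
my10g_hw_rx_dequeue() is a placeholder for the code that walks the hw
ring and builds the mblk chain under r_lock; the second function is what
I register as mri_poll.  Note there is no r_polling check in the
interrupt path yet -- that's the part I'm unsure about.)

static mblk_t *my10g_hw_rx_dequeue(my10g_rx_ring_t *, int);     /* placeholder */

/*
 * Interrupt path for one rx ring: build the chain under the lock,
 * drop the lock, then hand the chain up with the saved gen number.
 */
static void
my10g_rx_ring_intr(my10g_rx_ring_t *ring)
{
        mblk_t          *chain;
        uint64_t        gen;

        mutex_enter(&ring->r_lock);
        chain = my10g_hw_rx_dequeue(ring, -1);  /* -1: no byte limit in this sketch */
        gen = ring->r_gen_num;
        mutex_exit(&ring->r_lock);              /* dropped before the upcall */

        if (chain != NULL)
                mac_rx_ring(ring->r_mac_handle, ring->r_ring_handle, chain, gen);
}

/* mri_poll entry point: the framework pulls packets itself in poll mode */
static mblk_t *
my10g_rx_ring_poll(void *arg, int bytes_to_pickup)
{
        my10g_rx_ring_t *ring = arg;
        mblk_t *chain;

        mutex_enter(&ring->r_lock);
        chain = my10g_hw_rx_dequeue(ring, bytes_to_pickup);
        mutex_exit(&ring->r_lock);

        return (chain);
}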

Please help..

Drew