Jonathan Steel [EMAIL PROTECTED] wrote:
Hi Everyone

We recently purchased an Intel PRO/1000 GT Quad Port network card and
decided to run some stress tests to make sure it could maintain gigabit
connections on all four ports at the same time. I ran the test with five
computers and iperf-1.7.0 (the newest version of iperf has speed issues on
OpenBSD). The computer with the NIC ran the iperf server in UDP mode, and
the four other machines ran as iperf UDP clients.
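
For reference, the invocations were roughly of this form (the server
address and the bandwidth cap shown here are placeholders, not the
actual test values):

  # on the machine with the quad-port card
  iperf -s -u

  # on each of the four client machines
  iperf -c 192.0.2.1 -u -b 900M -t 60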

The first test was using the GENERIC kernel

2 ports could sustain 900Mbit/s connections with about 30% packet drop rate
3 ports could sustain 900Mbit/s connections with about 80% packet drop rate
4 ports could sustain 900Mbit/s connections with about 95-99% packet drop
rate


The second test was using the GENERIC.MP kernel

2 ports could sustain 900Mbit/s connections with no dropped packets
3 ports could sustain 900Mbit/s connections with about 10% packet drop rate
4 ports could sustain 900Mbit/s connections with about 30-50% packet drop
rate


It would be really helpful if you gave us the before and after output of:
  netstat -ss -p ip
  netstat -ss -p tcp
  netstat -in
  netstat -m
  sysctl net
and ran
  iostat 1
for a bit during your tests.

  netstat -ss shows queue drops
  netstat -in shows per-interface packet counts
  netstat -m shows mbuf usage
  sysctl net shows the tcp and ip settings and some duplicate
    data about queues, etc.
  iostat 1 shows CPU usage
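
If it helps, something like this rough sketch (the file name and the
exact command list are just a guess at what is easiest for you) will
capture a snapshot before and after each run:

  #!/bin/sh
  # dump the requested counters into a timestamped file
  for cmd in 'netstat -ss -p ip' 'netstat -ss -p tcp' \
             'netstat -in' 'netstat -m' 'sysctl net'; do
      echo "== $cmd =="
      $cmd
  done > stats.$(date +%Y%m%d-%H%M%S)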

For instance, I suspect that the IP input queue limit is too low,
as someone else mentioned. Using the PIC for interrupt vectoring
is indeed slow. But we can't tell without at least some of the numbers
and status above.
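
If it does turn out to be the input queue, you can check the limit and
the drop counter with something like this (assuming the usual
net.inet.ip.ifq sysctls; the value 1024 is only an example):

  sysctl net.inet.ip.ifq.maxlen net.inet.ip.ifq.drops
  # raise the limit for another test run, e.g.:
  sysctl net.inet.ip.ifq.maxlen=1024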

   geoff steckel
