On Mon, Aug 3, 2009 at 3:52 AM, Lenny<[email protected]> wrote:
>
>
> On Sun, Aug 2, 2009 at 12:21 PM, Tim Dressel <[email protected]> wrote:
>>
>> Install on both sides, not on pfsense.
>>
>> i.e. install on a machine on the WAN side, and on the LAN side. Or if
>> you are testing between LAN and an OPT interface, put a machine on
>> both subnets and test that way.
>>
>> iperf on pfsense itself will not give you the throughput of the
>> firewall (at least nothing meaningful).
>>
>> Cheers,
>>
>
> OK, so I ran some tests with iperf.
> I just hope I used the right syntax:
>
> on the server side: iperf -s
> on the client side: iperf -c server-ip -t 60 -M 500
>
> I figured the "-M 500" option is needed, because my average packet size
> in production is about 500 bytes.
> Unless I'm totally wrong here?
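>
> For the record, my understanding of those flags (going by the iperf
> man page, so correct me if I've misread it):
>
> iperf -s                          # server side: just listen for tests
> iperf -c server-ip -t 60 -M 500   # client: run for 60 s, cap the TCP
>                                   # maximum segment size at 500 bytes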
>
> The results I got were 300 Mbit/s with the em driver, while taskq em0
> hit almost 90%.
> Without the "-M 500" option I got 750 Mbit/s, of course; I think that
> was less than 50 kpps.
>
> When testing with the bce driver, I got 284 Mbit/s while irq256: bce0
> hit 85%. That was 73 kpps.
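>
> (A quick sanity check on those numbers, assuming ~500-byte packets:
>   300 Mbit/s / (500 bytes * 8 bits) = ~75 kpps
>   284 Mbit/s / (500 bytes * 8 bits) = ~71 kpps
> which is at least close to the 73 kpps I measured on bce.)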
>
> Another thing I noticed concerns the new em driver, which I understand
> is supposed to be the Yandex one.
> I found someone's screenshot of his "top -S" output, and it looked like
> this:
>
>   PID USERNAME    THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
>    11 root          1 171 ki31     0K    16K CPU7   7  26.3H 100.00% idle: cpu7
>    12 root          1 171 ki31     0K    16K CPU6   6  23.5H  98.29% idle: cpu6
>    18 root          1 171 ki31     0K    16K RUN    0  21.0H  96.88% idle: cpu0
>    17 root          1 171 ki31     0K    16K RUN    1  20.8H  88.67% idle: cpu1
>    15 root          1 171 ki31     0K    16K CPU3   3  21.0H  86.96% idle: cpu3
>    13 root          1 171 ki31     0K    16K CPU5   5  21.0H  86.57% idle: cpu5
>    14 root          1 171 ki31     0K    16K CPU4   4  20.4H  86.47% idle: cpu4
>    16 root          1 171 ki31     0K    16K CPU2   2  20.3H  82.57% idle: cpu2
>    35 root          1  43    -     0K    16K WAIT   2 682:43  27.59% em1_rx_kthread_0
>    36 root          1  43    -     0K    16K WAIT   3 681:24  25.49% em1_rx_kthread_1
>    31 root          1  43    -     0K    16K WAIT   0 587:29  19.58% em0_rx_kthread_0
>    32 root          1  43    -     0K    16K WAIT   5 586:51  18.07% em0_rx_kthread_1
>    19 root          1 -32    -     0K    16K WAIT   6  21:44   3.17% swi4: clock sio
>    34 root          1 -68    -     0K    16K WAIT   4  37:56   0.10% em1_txcleaner
>    30 root          1 -68    -     0K    16K WAIT   1  29:04   0.00% em0_txcleaner
>    53 root          1 -68    -     0K    16K -      1  17:25   0.00% dummynet
>  1234 root          1  44    0   206M   198M select 1   9:45   0.00% bgpd
>
> whereas on mine I don't see those kthreads; I only see two taskq emX
> threads, no matter how many threads I set in sysctl.conf (my guess at
> the settings is below).
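>
> (For reference, this is the sort of thing I put in sysctl.conf. I'm
> only guessing at the tunable names from the kthread names in that
> screenshot, so they may well be wrong:)
>
> # guessed tunable names, based on the em?_rx_kthread_? processes above:
> dev.em.0.rx_kthreads=2
> dev.em.1.rx_kthreads=2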
>
> So what am I doing wrong, and is this normal throughput for my server?
>
> Lenny.
>
>

Hi Lenny,

I'm not sure if this would be useful or not, but if you connect the
iperf server and client directly with a cable and repeat the same test
(i.e. not going through the router), you should be able to see what
the theoretical max is for your setup. If you compare that to the
results you just got and don't see a huge drop (more than 20%), then
those results should be pretty accurate. You should probably also run
the bidirectional test (the -d option) to see whether your one-way
performance drops (it should not).
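
Something like this, with the same options as your earlier run (-d is
iperf's dual test, which pushes traffic in both directions at once):

  # baseline: client and server connected directly with a cable
  iperf -c server-ip -t 60 -M 500

  # then repeat through the firewall, one-way and then bidirectional
  iperf -c server-ip -t 60 -M 500
  iperf -c server-ip -t 60 -M 500 -d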
