On Mon, 10 Nov 2008 22:23:37 -0700
RB <[EMAIL PROTECTED]> wrote:

> On Mon, Nov 10, 2008 at 18:35, Celejar <[EMAIL PROTECTED]> wrote:
> > kernel 2.6.25.19.  I've been experimenting with iperf to see what kind
> > of throughput I get across wired connections, and I see a pretty
> > consistent 19.8 - 19.9 Mbps.
> 
> That's about right for the MIPS your CPU can push.  Check 'top' or the
> performance monitor of your choice, but you're more than likely
> CPU-bound.

You are right.  When monitoring iperf with htop, CPU usage is about
92-93% for iperf and 6-7% for htop itself.

> > For comparison, when running iperf locally, the machine that's connected
> > to the router, running Debian Sid, gets about 3.8 Gbps, and the router gets
> > about 55 - 56 Mbps talking to itself.
> 
> Do I correctly presume you're running iperf against 127.0.0.1 or your
> own local IP?  If so, you're more measuring your memory bandwidth than

Yes.

> getting an indication of what your network I/O "should" be.

Do you mean that when running iperf across the loopback interface,
throughput is limited by memory bandwidth rather than by the CPU?
Doesn't the near-100% CPU utilization indicate that I'm running up
against the CPU limit?
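
For reference, the loopback figures above came from a plain TCP run
along these lines (flags from memory, nothing exotic):

  # server, on the box under test:
  iperf -s

  # client, in a second shell on the same box:
  iperf -c 127.0.0.1 -t 30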

> Your expectations are rather high, exacerbated by misunderstanding the
> benchmark.  You can probably squeeze another several Mbps through the
> Motorola with packets approaching the MTU and tune an additional few
> percent with more focused stack tuning (sysctls), but if you watch
> your PPS throughput instead of bandwidth, you'll likely see your limit
> more clearly there.
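
I assume the sysctls you have in mind are the usual socket-buffer
knobs, i.e. something like the following (values purely illustrative,
not tuned for this box):

  # raise the socket buffer ceilings:
  sysctl -w net.core.rmem_max=262144
  sysctl -w net.core.wmem_max=262144
  # min/default/max for TCP receive buffers:
  sysctl -w net.ipv4.tcp_rmem="4096 87380 262144"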

I'm primarily wondering why I don't get more bandwidth with real
ethernet.  If my CPU can move bits to and from the network subsystem at
more than 50 Mbps over the loopback interface, and the ethernet
hardware has a theoretical 100 Mbps, why do I get less than 20 Mbps
across CAT5?  Is the entire loss due to basic ethernet overhead,
compounded by the slow CPU?
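
Back-of-the-envelope (my own rough numbers, so correct me if I'm off),
the framing overhead alone looks far too small to account for it:

  per-frame overhead: 14 (header) + 4 (FCS) + 8 (preamble) + 12 (gap) = 38 bytes
  efficiency at a 1500-byte MTU: 1460 TCP payload / 1538 on the wire = ~95%
  so 100 Mbps of wire rate leaves ~95 Mbps for payload, nowhere near 20
  and 20 Mbps / (1500 bytes * 8) = ~1,700 packets/s for this CPU to handle

That seems to leave the per-packet cost on this CPU as the main suspect.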

> Testing considerations (for low-power IP routers):
>  - IP loopback testing is not indicative of what real testing will be

I understand; I just did that to see the upper bandwidth limit of the
basic hardware and networking subsystem, without involving any actual
network hardware.  And I did indeed get an interesting result: a limit
of approximately 55 Mbps.

>  - When approaching the CPU bound, count PPS, not Mbps

I'll have to look into this further.
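
A crude way to watch PPS on the router itself, assuming BusyBox awk is
available and eth0 is the interface under test, might be:

  # RX packets is the second field after the colon in /proc/net/dev
  rx1=$(awk '/eth0:/ { sub(/.*:/, ""); print $2 }' /proc/net/dev)
  sleep 10
  rx2=$(awk '/eth0:/ { sub(/.*:/, ""); print $2 }' /proc/net/dev)
  echo "RX pps: $(( (rx2 - rx1) / 10 ))"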

>  - The higher you go in the IP stack for your testing, the lower your
> performance numbers will be.

Understood.

>    - corollary: a bridge will typically get higher throughput than a router
>  - Unless it will be running a performance-critical IP endpoint, don't
> run iperf _on_ the device; run it _through_ the device.
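
Point taken.  If I follow, the through-the-box test would look
something like this, with a host on each side of the router (addresses
invented for the example):

  # host A, behind the router (say 192.168.1.10):
  iperf -s

  # host B, on the far side; traffic is forwarded through the router:
  iperf -c 192.168.1.10 -t 30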

Thanks for the help

> RB

Celejar
--
mailmin.sourceforge.net - remote access via secure (OpenPGP) email
ssuds.sourceforge.net - A Simple Sudoku Solver and Generator
