Hello,

I am doing some basic tests of the scenario mentioned in the subject, and I
am stuck at limits which I consider very low: I cannot get more than 27Kpps
and 200Mbit/s of routing throughput without starting to lose packets.
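In case it helps, the forwarded rate can be watched on the router itself
with netstat in interval mode, sampling the ingress interface's counters
once per second (em0 is the ingress NIC in this setup):

# netstat -I em0 -w 1

With a one-second interval the per-line "packets in" count is the pps
directly.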

System is:

# uname -srm
OpenBSD 5.4 sparc64

# sysctl hw
hw.machine=sparc64
hw.model=SUNW,UltraSPARC-T2 (rev 0.0) @ 1415.103 MHz
hw.ncpu=32
hw.byteorder=4321
hw.pagesize=8192
hw.disknames=sd0:dc8022901cadee32,sd1:,cd0:
hw.diskcount=3
hw.cpuspeed=1415
hw.vendor=Sun
hw.product=SUNW,SPARC-Enterprise-T5120
hw.physmem=8455716864
hw.usermem=8455700480
hw.ncpufound=32
hw.allowpowerdown=1

No tuning has been applied, and no firewall either (pf is disabled with
pfctl -d).
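For completeness, forwarding is on and pf reports itself disabled; both can
be checked with:

# sysctl net.inet.ip.forwarding
net.inet.ip.forwarding=1

# pfctl -s info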

I am routing from em0 to em1, but I also tried em0 to em5 and em4 to em5,
mixing onboard and PCI ports, and the results are the very same.
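The setup itself is just two directly connected segments, with the generator
on one side and the sink on the other. With illustrative addresses (not my
real ones), it amounts to:

# ifconfig em0 inet 192.0.2.1 netmask 255.255.255.0
# ifconfig em1 inet 198.51.100.1 netmask 255.255.255.0

and the generator sends from the 192.0.2.0/24 side to hosts in
198.51.100.0/24.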

Output from top points to the bottleneck:

load averages:  0.17,  0.21,  0.12              bgp.newtelecom.net.br 18:06:20
9 processes: 8 idle, 1 on processor
CPU00:  0.0% user,  0.0% nice,  0.0% system, 98.2% interrupt,  1.8% idle
CPU01:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU02:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU03:  0.2% user,  0.0% nice,  0.0% system,  0.0% interrupt, 99.8% idle
CPU04:  0.2% user,  0.0% nice,  0.2% system,  0.0% interrupt, 99.6% idle
CPU05:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU06:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU07:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU08:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU09:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU10:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU11:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU12:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU13:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU14:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU15:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU16:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU17:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU18:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU19:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU20:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU21:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU22:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU23:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU24:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU25:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU26:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU27:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU28:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU29:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU30:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle

All my NICs' interrupts are being serviced on CPU0.

All 6 network cards are Intel 82571EB, which supports MSI-X and should, in
theory, allow interrupts to be balanced across CPUs.
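For reference, vmstat -i lists each interrupt source with its count and
rate, which makes it easy to see that the em(4) devices are generating the
load (the per-CPU placement itself shows up in top's interrupt column
above):

# vmstat -i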

So my question is: is there anything I can do to let OpenBSD use more than
one CPU, or at least choose which CPU will be used for each NIC?

What other tunings, settings, and tweaks should I look into?

Is performance expected to be this low on this machine? I got much better
numbers with OpenBSD on i386 servers.

Thank you for any hints ]:)

-- 
===========
Eduardo Meyer
