On Tue, May 29, 2012 at 11:26 AM, Kostas Zorbadelos <[email protected]> wrote:
> Greetings to all,
>
> here is a followup to an older thread [1] regarding the use of OpenBSD in
> a large-scale DNS anycast setup. To make a long story short, OpenBSD
> fails to meet our resolving performance needs for the time being. The
> main issue (from my understanding) is the lack of kernel-level thread
> support (which is something hopefully to be addressed really soon with
> rthreads).
> I stress-tested BIND on Linux and OpenBSD in a VM-based lab environment
> using Nominum's resperf [2] tool (details below).
> Our current numbers in the customer-facing resolving infrastructure are
> around 10K queries/second at peak. All servers (currently Linux-based)
> use more than one CPU to accommodate the load.
> I would have liked to introduce OpenBSD in my working environment
> especially for the excellent networking features, but the current
> performance numbers are prohibitive in our case. It will at least
> have to wait until rthreads support is part of a release (and of course
> I am more than willing to test snapshots if someone can direct me as to
> which I should test).
>
> The rest are the details (only for interested readers) :)
>
> The lab environment consisted of two VMs with identical configuration on
> top of VMWARE's ESXi 4.1.0. Each VM has 2 vCPUs (Intel Xeon X5650), 8GB
> RAM and 2 NICs (BIND listens on the first interface and sends all
> resolving traffic out through the second).
>
> The systems tested were CentOS 6.2 and OpenBSD 5.1 with the default BIND
> (bind-9.7.3-8.P3.el6_2.2.x86_64 for CentOS, base system's BIND for
> OpenBSD).
>
> kzorba@dmeg-dns1: ~ ->sysctl hw
> hw.machine=amd64
> hw.model=Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
> hw.ncpu=2
> hw.byteorder=1234
> hw.pagesize=4096
> hw.disknames=cd0:,sd0:6f2e8c0759d7fce1,fd0:
> hw.diskcount=3
> hw.sensors.acpiac0.indicator0=On (power supply)
> hw.sensors.vmt0.timedelta0=-5971.426309 secs, OK, Tue May 29 12:04:19.156
> hw.cpuspeed=2659
> hw.vendor=VMware, Inc.
> hw.product=VMware Virtual Platform
> hw.version=None
> hw.serialno=VMware-56 4d ea ff 3c cb c5 ea-3a 4d e9 45 d9 73 01 50
> hw.uuid=564deaff-3ccb-c5ea-3a4d-e945d9730150
> hw.physmem=8588820480
> hw.usermem=8588804096
> hw.ncpufound=2
> hw.allowpowerdown=1
>
> I have Gbps bandwidth on all interfaces, so bandwidth cannot affect the
> measurements. In all tests, I replayed an hour of real query traffic
> captured from our production resolvers.
>
> The test scenarios were:
>
> - 3 runs with resperf's default options, with a BIND restart after each
>   run to start with a clear cache
>
> - 3 more runs with the options -m 20000 -r 120 (that is, ramp up to a
>   top rate of 20K queries/sec, reaching it after 2 minutes), again with
>   a restart after every run
>
> - 1 run with default options followed by an extra run without a restart
>   (cache utilization)
>
> - 1 run with the options -m 20000 -r 120 followed by an extra run
>   without a restart (cache utilization)
>
> - 2 runs giving a constant 15-minute load to the systems with resperf's
>   options -c 900 -m 20000 -r 120 -i 5
>   In these tests OpenBSD failed to keep up with the load, and the option
>   I had to use to finish the test successfully was -m 10000 (that is, in
>   my lab setup it would accommodate a steady flow of 10K queries/sec at
>   most)
>
> In all cases, as you might expect, OpenBSD had half (or less than half)
> the performance of Linux. My conclusion is that this comes down to
> thread support. I really hope I am not missing something else.

You are probably aware that OpenBSD doesn't have VMware's own VMware
Tools available (they have an impact) and that threading is enabled (but
not finished yet) in OpenBSD -current, so you are comparing things that
are not really comparable.

>
> Any input highly welcome.
> Regards,
>
> Kostas
>
> [1] http://marc.info/?l=openbsd-misc&m=132395738031428&w=2
> [2] http://www.nominum.com/resources/measurement-tools
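
For readers unfamiliar with resperf's ramp options used above (-m 20000
-r 120: reach 20K queries/sec after 2 minutes), here is a minimal
illustrative sketch of the offered query rate during a run. The function
name and the linear-ramp model are my own simplification, not part of
resperf itself:

```python
def offered_qps(t, max_qps=20000, ramp_secs=120):
    """Approximate query rate (queries/sec) that resperf offers at
    time t seconds into a run: the rate ramps linearly from 0 up to
    max_qps over ramp_secs, then stays at max_qps (with -c)."""
    if t <= 0:
        return 0.0
    if t >= ramp_secs:
        return float(max_qps)
    return max_qps * (t / ramp_secs)

# Halfway through a -m 20000 -r 120 ramp the offered rate is ~10K q/s,
# which matches the steady rate the OpenBSD VM could sustain (-m 10000).
print(offered_qps(60))   # 10000.0
print(offered_qps(120))  # 20000.0
```

This also illustrates why the -m 10000 runs completed: the load never
exceeds the 10K queries/sec ceiling observed in the lab.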
