Giorgos Keramidas writes:

> Installing an operating system (be it FreeBSD, linux, Windows or what
> else) and failing to tune the system to perform as well as possible
> for the application, is no decent way of doing a benchmark.  And when
> it comes to benchmarks, you have to tune ALL the systems that are
> involved.  You have to perform the test on identical hardware (if such
> a thing is ever possible[1]).

No, no, no. You have to tune the systems EQUALLY. Um, how? :-)

What if some random admin was picked to tune the systems?
Maybe he is a Solaris admin, but he honestly tries to tune
the other systems. Are you sure you wouldn't complain that
he did a bad job if FreeBSD lost?

Driver quality varies too, so hardware choice matters. It is
not OK to test on identical hardware, unless the purchaser
selects random off-the-shelf hardware to avoid any bias.

There are 2 sane ways to benchmark:

1. Use an out-of-the-box config on randomly selected hardware.
   This is what a typical low-paid admin will throw together,
   so it certainly is a valid test. It is best to run this test
   many times, since an OS may get unlucky with hardware selection.
   (tuning is equal: none at all)

2. Run an open bring-your-own hardware competition like SPECweb99.
   Every OS gets tuned by fanatical experts, and every OS gets the
   hardware it runs best on. Hardware selection can only be limited
   by purchase date and monetary value -- it isn't fair to specify
   how the money is spent. (tuning is equal: maximum possible)
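Option 1 can be sketched as a trivial harness: repeat the same
workload several times and average, so no OS is judged on a single
unlucky run. (A rough sketch; "run_benchmark" here is a placeholder
for whatever real workload gets measured, not any actual tool.)

```shell
#!/bin/sh
# Repeat an out-of-the-box benchmark RUNS times and report the mean,
# so one unlucky run doesn't decide the result.
RUNS=5

run_benchmark() {
    # Placeholder for the real workload; prints a fake requests/sec figure.
    echo 100
}

total=0
i=0
while [ "$i" -lt "$RUNS" ]; do
    result=$(run_benchmark)
    total=$((total + result))
    i=$((i + 1))
done
echo "mean over $RUNS runs: $((total / RUNS))"
```

The point is only the repetition: no per-OS tuning happens anywhere
in the loop, so "tuning is equal: none at all" holds by construction.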

In the Sysadmin article, the biggest error was that the admin
crudely tuned the FreeBSD and Linux boxes. He should have left
both with out-of-the-box limits to be fair to NT and Solaris.
It is absurd to suggest that he should have been hacking away
at compile-time constants. Every OS had a default kernel.

