James G. Sack (jim) wrote:

> I think someone else has already referred to this, but it strikes me
> as worth asking about again.
>
> Their test results are thought-provoking.  Noting their emphasis that
> they compared out-of-box untuned/unoptimized configurations, the
> bottom line surprise was twofold:

Not really. Their "experiments" were all over the map. It's kinda hard to figure out what they're actually measuring, because they change too many variables simultaneously.

> 2. Server 2003 was (often) an order of magnitude better than Linux
> (SLES/CentOS) with similar open source "stacks".
>
> - Number two is perplexing. Anyone have any explanations? Maybe Server
> 2003 is factory pre-optimized better for this kind of test? -- i.e.,
> deliberately _avoiding_ any tuning is "unfair" (sounds kinda whiney).
> Maybe their tests inadvertently (or otherwise?) introduced other
> biases that undermine the results' validity (still seems like trying
> to explain the bad news away). I thought Linux was supposed to be
> pretty decent in the underlying socketry?

How many threads get prestarted? How fast can you do a thread context switch? etc.
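
For what it's worth, here's a quick sketch (mine, not from their report) of the usual way to eyeball thread context-switch cost on Linux: two threads ping-pong a byte over a pair of pipes, so every hop forces a switch. Build with gcc -O2 -pthread and treat the number as ballpark only; scheduler policy, CPU affinity, and pipe overhead all leak into it.

/* Rough thread context-switch benchmark: ping-pong a byte between two
 * threads over two pipes, forcing a switch on every hop. */
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define ROUND_TRIPS 100000

static int ping[2], pong[2];   /* pipe fds: [0] = read end, [1] = write end */

static void *echo_thread(void *arg)
{
    char c;
    (void)arg;
    for (int i = 0; i < ROUND_TRIPS; i++) {
        if (read(ping[0], &c, 1) != 1) break;   /* wait for main thread */
        if (write(pong[1], &c, 1) != 1) break;  /* hand control back */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    struct timespec t0, t1;
    char c = 'x';

    if (pipe(ping) || pipe(pong)) { perror("pipe"); return 1; }
    pthread_create(&tid, NULL, echo_thread, NULL);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ROUND_TRIPS; i++) {
        write(ping[1], &c, 1);   /* wake the echo thread */
        read(pong[0], &c, 1);    /* block until it answers */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    pthread_join(tid, NULL);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    /* Each round trip costs at least two context switches. */
    printf("~%.0f ns per switch (very rough)\n", ns / (ROUND_TRIPS * 2.0));
    return 0;
}

Swap pthread_create() for fork() and you get the process-switch flavor of the same test. Either way it only tells you about raw switching, not about how many threads the server prestarts or how its accept/dispatch path is wired, which is probably where the out-of-box defaults really differ.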

Who knows?  They didn't try to isolate the changes.

One thing a lot of people don't point out: practically every "web stack" has enough performance for 99% of the people who need one. Only the truly hardcore (like Yahoo, Google, Microsoft, etc.) need monster performance.

-a

