On 10/13/05, ScratchMonkey <[EMAIL PROTECTED]> wrote:
> --On Wednesday, October 12, 2005 9:50 AM -0400 Mahmoud Foda
> <[EMAIL PROTECTED]> wrote:

> To know why your system works better, we'd need to know exactly what the
> "standard" kernel was (presumably you don't mean an unadorned kernel with
> default configuration direct from Linus' build tree) and what you did to
> compile your own.

If they are both still present, you can diff the configurations.
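A minimal sketch of what that looks like; the sample fragments below are stand-ins for your real stock config (often /boot/config-`uname -r`) and your own build's /usr/src/linux/.config:

```shell
# Stand-in config fragments; point diff at the real files in practice.
cat > /tmp/config-stock <<'EOF'
CONFIG_SMP=y
# CONFIG_X86_HT is not set
CONFIG_M386=y
EOF
cat > /tmp/config-custom <<'EOF'
CONFIG_SMP=y
CONFIG_X86_HT=y
CONFIG_MPENTIUM4=y
EOF
# Lines prefixed < come from the stock config, > from the custom one.
diff /tmp/config-stock /tmp/config-custom || true
```

Differences in lines like CONFIG_X86_HT and the CONFIG_M* processor-family options are exactly the ones relevant to the question below.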

More to the point, though, a stock kernel may not support HT at all, whereas a newly compiled kernel, unless moronically configured, should be built for the target architecture rather than a generic one.

Next, the issue of HT as an architecture. Many of the situations where HT has been reported to perform badly are cases where the server spends 80% or more of its active processing time running two or more SRCDS instances. In that scenario you will suffer a great many cache misses, and a fair few (but quite regularly timed) context switches. What is interesting about those who report better results is that they tend to have servers loaded with other or different applications. Careful analysis of further performance data might lead to a real conclusion about the cause of this issue. How many cache misses are occurring with HT on, as opposed to off? What about context switches? How do these change if you lock processes to specific processors? The list of questions goes on. Sadly, I'm not using any HT-capable test hardware at present.
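Context-switch rates, at least, can be measured without special tooling. A sketch, assuming a Linux /proc layout; the taskset lines are illustrative only and assume util-linux's taskset plus two logical CPUs:

```shell
# Illustrative: pin each server instance to one logical CPU
# (the srcds_run invocations are hypothetical; adjust paths and ports):
#   taskset -c 0 ./srcds_run -game cstrike -port 27015 &
#   taskset -c 1 ./srcds_run -game cstrike -port 27016 &

# The ctxt line in /proc/stat is the cumulative context-switch count
# since boot; sampling it twice gives a switches-per-second figure.
a=$(awk '/^ctxt/ {print $2}' /proc/stat)
sleep 1
b=$(awk '/^ctxt/ {print $2}' /proc/stat)
echo "context switches/sec: $((b - a))"

# Cache-miss counts need a hardware-counter profiler such as oprofile;
# that is outside the scope of this sketch.
```

Comparing those figures with HT on versus off, and with and without CPU affinity, would go some way toward answering the questions above.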

_______________________________________________
To unsubscribe, edit your list preferences, or view the list archives, please 
visit:
http://list.valvesoftware.com/mailman/listinfo/hlds_linux