I'm beginning to wonder if certain steppings of the Xeon/P4 have this HT
problem. I will investigate this further, but I don't think that is
the case, since I have multiple servers with different Xeon steppings
and they exhibit the same issues on large servers. Perhaps it's a
cache coherency problem, or one of the many other pitfalls of hyperthreading.


If both kernel configurations are still present, you can diff them to see
what actually changed between the builds.
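That diff might look something like the sketch below. The config paths are assumptions; substitute whatever kernel versions are actually installed in /boot.

```shell
# Hypothetical config paths -- substitute the kernel versions you actually have.
OLD=/boot/config-2.6.8-stock
NEW=/boot/config-2.6.8-custom

if [ -r "$OLD" ] && [ -r "$NEW" ]; then
    # Show only the CONFIG_ options that differ between the two builds.
    diff -u "$OLD" "$NEW" | grep '^[+-]CONFIG_'
else
    echo "one or both config files are missing"
fi
```

Filtering on `^[+-]CONFIG_` cuts the diff down to the options themselves, which is usually all you care about.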

More to the point, though: a stock kernel may not support HT at all,
whereas a newly compiled kernel, unless badly configured, should be built
for the target architecture rather than a generic one.
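A quick sanity check on whether the running kernel actually sees the hyperthreaded siblings; the config path is an assumption (many distros install it as /boot/config-$(uname -r)).

```shell
# On an HT-capable box the "siblings" count (logical CPUs per package)
# exceeds the "cpu cores" count (physical cores per package).
grep -E '^(siblings|cpu cores)' /proc/cpuinfo | sort -u

# Was SMP support even compiled in?
CFG="/boot/config-$(uname -r)"
[ -r "$CFG" ] && grep '^CONFIG_SMP=' "$CFG" || echo "config not found: $CFG"
```

If "siblings" equals "cpu cores", the kernel is not using the logical processors, whatever the BIOS claims.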

Next, the issue of HT as an architecture. Many of the situations where
HT has been reported to perform badly are cases where the server spends
80% or more of its active processing time running two or more SRCDS
instances. In that scenario you will suffer a great deal of cache
contention, and a fair few (but quite regularly timed) context
switches. What is interesting about those who report better results is
that they tend to have servers loaded with other/different
applications. Careful analysis of other performance data might lead to
a real conclusion about the cause of this issue. How many cache
misses are occurring with HT on, as opposed to off? What about context
switches? How do these change if you lock processes to specific
processors? The list of questions goes on. Sadly, I don't have any
HT-capable test hardware at present.
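As a starting point for those measurements, the context-switch rate at least can be read without any special tooling. The pinning command and process name below are assumptions; cache-miss counts need a hardware-counter profiler such as oprofile.

```shell
# Sample the system-wide context-switch rate from the "ctxt" line
# of /proc/stat (a cumulative count since boot).
c1=$(awk '/^ctxt/ {print $2}' /proc/stat)
sleep 1
c2=$(awk '/^ctxt/ {print $2}' /proc/stat)
echo "context switches/sec: $((c2 - c1))"

# To lock the game server to one processor and repeat the comparison,
# something like (hypothetical PID):
#   taskset -p 0x1 <pid of srcds_run>
```

Comparing that rate with HT on versus off, under the same player load, would be one concrete data point.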



_______________________________________________
To unsubscribe, edit your list preferences, or view the list archives, please 
visit:
http://list.valvesoftware.com/mailman/listinfo/hlds_linux
