At 09:07 PM 1/22/2007, Richard Fennell wrote:
Idle CPU usage is not really a good benchmark. It is more a
display of how much overhead the kernel's timer interrupt has. For
example, you could very well have a 1000 Hz 32-bit kernel and a 250 Hz
64-bit kernel. The 250 Hz kernel will use much less CPU at idle because
it is servicing a quarter as many timer interrupts. (Possibly a 500 Hz
kernel, since you have suggested a 500 fps cap on your game server.)

This could also be affected by VAC2, since you are running VAC1 or
insecure on the 64-bit version.

Like you, I am very disheartened by Valve's lack of 64-bit support
(having purchased 60+ 64-bit AMD CPUs due to the performance increase),
but I think you MAY be comparing apples with oranges. Also remember
that VAC2 (which I believe runs in a separate thread) may be using some
of those interrupts (affecting your fps and CPU usage).

To make your tests more accurate, both kernels should be compiled with
configs as similar as possible, the hardware should be exactly the same,
and both game servers should run without VAC (-insecure on the command line).

It may also have something to do with the C++ libraries and the
gettimeofday() call taking less time (being more efficient in the kernel)
on 64-bit. It is used a lot in HLDS/SRCDS and is the reason your
server will crash should you do an NTP update or change the time on the box.

IIRC, gettimeofday() on Linux is fast because of vsyscall (they added
a page shared between every process and the kernel so that calls like
gettimeofday() can complete without doing an entire syscall).

My servers never crash when I'm doing clock disciplining with xntpd..
of course, I don't use Linux, so I suppose that is a Linux-only.. bug.



_______________________________________________
To unsubscribe, edit your list preferences, or view the list archives, please 
visit:
http://list.valvesoftware.com/mailman/listinfo/hlds_linux
