+1 to everything Gary said - RT kernels are generally a waste. They
might ensure more accurate wakeups, but the sleep(1) call really
limits how accurate those can be anyway, even with hi-res timers. An
LD_PRELOAD shim to mess with sleep() could get you much more
accurate/efficient wakeups, but that's more involved and only really
helps CPU usage when the server is not under load.
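
For anyone curious what that kind of shim looks like, here's a minimal
sketch - it assumes srcds does its per-frame sleep via usleep() (if
your build calls nanosleep() you'd hook that instead), and just swaps
in clock_nanosleep() on CLOCK_MONOTONIC. Whether this actually buys
you anything depends on your kernel/glibc; the point is only to show
how the interception works.

/* sleep_shim.c - hypothetical LD_PRELOAD interposer, sketch only.
 * Replaces usleep() with clock_nanosleep() on CLOCK_MONOTONIC so the
 * frame sleep goes through hi-res timers instead of rounding to ticks.
 * Build: gcc -shared -fPIC -o sleep_shim.so sleep_shim.c
 * Run:   LD_PRELOAD=./sleep_shim.so ./srcds_run ...
 */
#define _GNU_SOURCE
#include <errno.h>
#include <time.h>
#include <unistd.h>

int usleep(useconds_t usec)
{
    struct timespec ts = { usec / 1000000, (long)(usec % 1000000) * 1000L };
    int r = clock_nanosleep(CLOCK_MONOTONIC, 0, &ts, NULL);
    if (r) { errno = r; return -1; }   /* mimic usleep()'s error convention */
    return 0;
}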

Non-hi-res kernels tend to use a bit less CPU (though again, usually
only under low load) because they wake up less often and waste less
time waking up and going back to sleep for the next tick. The
trade-off is slightly less accurate gameframe times, but on the order
of <1-2ms, so it's not really significant.
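
If you want to see that granularity effect yourself, a quick
measurement of how late a 1ms sleep actually wakes up looks something
like the sketch below - on a hi-res-timer kernel the overshoot is
usually well under a millisecond, on a ticks-only kernel it rounds up
toward the next jiffy.

/* sleep_jitter.c - measure how far past the requested 1ms usleep() wakes.
 * Build: gcc -O2 -o sleep_jitter sleep_jitter.c   (older glibc: add -lrt)
 */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static double now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
}

int main(void)
{
    const int iters = 1000;
    double worst = 0.0, total = 0.0;

    for (int i = 0; i < iters; i++) {
        double t0 = now_ms();
        usleep(1000);                        /* 1ms, like the per-frame sleep */
        double over = now_ms() - t0 - 1.0;   /* overshoot past the 1ms asked for */
        total += over;
        if (over > worst) worst = over;
    }
    printf("avg overshoot %.3f ms, worst %.3f ms\n", total / iters, worst);
    return 0;
}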

The main thing I would worry about, then, is kernel version - newer
kernels have a better CPU scheduler, and a lot of work has been done
on this recently. Also keep in mind that "FPS" is largely bogus - a
server pulling 10k FPS can be crappier than one pulling 100. The
reasons behind this are complicated, but do yourself a favor and don't
even look at FPS - join the server and throw up net_graph 4. If you're
getting 66 updates per second (or whatever your tickrate is) and var:
stays pretty stable below 10-12ms or so, your server is essentially
lag-free. So many variables go into perceived "lag" that anyone
claiming to notice a 2ms difference from kernel wakeup timings is
full of it.

You'll also find plenty of people who claim to know better, or who
have complex (and wrong, unsourced) explanations of why 1000 FPS is
good - which is why it's that much more important to just use
net_graph and sane judgement, and not believe any of the voodoo unless
you see real results.

- Neph
