Just because you're telling the kernel that you want 10 kHz doesn't mean
the hardware can provide it.  I'm pretty sure there is a maximum timer
resolution the CPU can provide, and that is the limiting factor.  Also,
don't run at tickrate 125; 100 is the maximum suggested by Valve (issues
arise above 100 tick).  I'm sure someone else can expand on what I've said.

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Arne Guski
Sent: Friday, July 27, 2007 4:19 AM
To: [email protected]
Subject: [hlds_linux] Wondering where Ticks get lost (the old server fps
thing)

Hi everyone,
I have been testing 1000 fps servers for some time now, and I can't seem
to get Source running at stable fps if I have more than one server per
CPU core.
First off, my test system is a C2D E6400 with 2 GB RAM running Debian
Etch, currently on kernel 2.6.22.1 built for 4000 Hz (more on that later).

It can handle two 14+ slot servers fine; if I set the affinity to one
core per server I even get very stable 950 fps (930 avg over several
days) with tickrate 125 and fps_max 2000.
However, if I don't taskset them to a specific core I still get "good"
fps but with many more drops, which didn't surprise me because I've read
a lot about srcds having problems with multicore systems.
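For reference, the per-core pinning is done with taskset from util-linux;
the srcds paths and arguments below are hypothetical examples rather than
my exact command lines, and the trivial echo just demonstrates the syntax:

```shell
#!/bin/sh
# Pin a process to a single core with taskset (util-linux).
# The echo stands in for any command; it prints normally, just pinned.
taskset -c 0 echo "pinned to core 0"

# Hypothetical srcds invocations, one instance per core:
# taskset -c 0 ./srcds_run -game cstrike -port 27015 +fps_max 2000 &
# taskset -c 1 ./srcds_run -game cstrike -port 27016 +fps_max 2000 &
```

`taskset -c -p <pid>` can also show or change the affinity of an
already-running server instead of restarting it.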

Now what bothers me is that I can't get a third (or more) server running
at stable high fps, even though CPU usage on each core never exceeds 50%.
When I bind a second server to a core, it starts up with much lower fps
than the first one on that core (around 300, very jumpy).
If I start that third server with an affinity mask of 3 (i.e. the
scheduler picks the core), it gets better fps than when bound to a single
core (reaching 950, but falling down to 300 very often).

I also noticed that how well a server holds its fps is map-related: the
two servers running on this system are a dust2-only and an office-only
server.
I've set up MRTG to monitor fps on both of them, and the dust2-only
server has occasional glitches where fps drop, while the office server
almost never shows any.
With all this fps-dropping making very little sense to me, it could also
be related to which server was started first, or some such.

The reason I run a 4000 Hz kernel is that I thought the server might
simply run out of ticks with a 2 kHz kernel; that doesn't seem to be the
cause, though, because behavior is exactly the same with a 10 kHz, 4 kHz,
or 2 kHz kernel.
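To double-check which tick frequency a kernel was actually built with, the
compiled-in CONFIG_HZ can usually be read from the kernel config. Config
file locations vary by distro, so this is a sketch with fallbacks rather
than a guaranteed path:

```shell
#!/bin/sh
# Print the compiled-in timer frequency of the running kernel.
# /boot/config-* is common on Debian; /proc/config.gz needs IKCONFIG.
grep 'CONFIG_HZ=' "/boot/config-$(uname -r)" 2>/dev/null \
  || { zcat /proc/config.gz 2>/dev/null | grep 'CONFIG_HZ='; } \
  || echo "CONFIG_HZ not exposed on this system"
```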

So why is srcds acting like this, and how can I get around that "one
server per core" limit? Note that hlds handles multiple 1000 fps servers
fine on this setup until it runs out of CPU power.

Looking forward to any suggestions, comments, or ideas on this one ... I
have run out of my own.

--
Arne Guski



_______________________________________________
To unsubscribe, edit your list preferences, or view the list archives,
please visit:
http://list.valvesoftware.com/mailman/listinfo/hlds_linux

