artiecs wrote:
Martin could you also double check the numbers in the diagram box that says:
"Server at Tick 130
-Simulates world 33 times per second
-Send 20 snapshots per second"
Is this info correct? It seems that with the CSS default tickrate of 33,
clients get 20+ snapshots/second. If I set the tickrate on my test server
to 100, I get 70-80 updates a second on my client.
Yes...? :)
Note that "Server at Tick 130" means tick number 130, not -tickrate 130.
The "simulates world 33 times per second" means -tickrate 33. The "send
20 snapshots per second" means cl_updaterate 20 on the client.
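To make the distinction concrete, here's a quick back-of-the-envelope
conversion (just illustrative arithmetic, not engine code): at -tickrate 33
each tick is roughly 30 ms, so tick number 130 falls about 3.94 seconds
into the session.

```python
# Tick number vs. tickrate: "Server at Tick 130" is a position on the
# timeline, not a rate. At -tickrate 33, each tick advances the
# simulation by 1/33 of a second.
tickrate = 33            # ticks simulated per second (-tickrate 33)
tick_number = 130        # the tick shown in the diagram

tick_interval = 1.0 / tickrate
simulated_time = tick_number * tick_interval

print(f"tick interval: {tick_interval * 1000:.1f} ms")        # ~30.3 ms
print(f"tick 130 is ~{simulated_time:.2f} s into the game")   # ~3.94 s
```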
Maybe I'm misinterpreting the numbers, though; I'm watching the far-right
number on the "In" line of net_graph 3. Is that number updates (snapshots)
per second? (I think it is, but reading your doc I'm wondering if I got it
wrong.)
Correct: the numbers are 1) the last packet size in bytes, 2) the data rate
in kbytes/sec, and 3) the data rate in packets (updates) per second.
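For illustration, here is a hypothetical sketch of how those three "In"
line numbers could be derived from a stream of received packets. The
function name, the one-second window, and the averaging are my
assumptions, not the engine's actual implementation.

```python
# Hypothetical reconstruction of net_graph's "In" line statistics from a
# list of received packets, each a (timestamp_seconds, size_bytes) pair,
# newest last. Windowing and names are assumptions for illustration.
def in_line_stats(packets, window=1.0):
    if not packets:
        return 0, 0.0, 0.0
    now = packets[-1][0]
    # keep only packets received within the last `window` seconds
    recent = [(t, s) for t, s in packets if now - t <= window]
    last_size = recent[-1][1]                                  # 1) bytes
    kbytes_per_sec = sum(s for _, s in recent) / 1024.0 / window  # 2) kB/s
    packets_per_sec = len(recent) / window                     # 3) updates/s
    return last_size, kbytes_per_sec, packets_per_sec

# Example: 20 snapshots of 150 bytes spread over one second
stream = [(t * 0.05, 150) for t in range(1, 21)]
print(in_line_stats(stream))   # last=150 bytes, ~2.93 kB/s, 20 updates/s
```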
Also, it seems that tickrate functions differently on the Source engine
than on the HL1 engine. In HL1, the tickrate would control the FPS generated
server-side, and only sv_maxupdaterate would control the maximum number of
updates/sec sent to clients. In Source, fps_max controls the server-side FPS
(updates generated per second server-side?), and -tickrate seems to act as a
command-line override of sv_maxupdaterate in the case where -tickrate allows
fewer updates/sec than sv_maxupdaterate. Am I seeing that right? Any insight
you can share would be appreciated.
I'm not sure what the definition of FPS is for the server, although it
seems to me that a server FPS greater than the tickrate is pointless.
The tickrate is how many times the server simulates the world per
second. If your server's tickrate is 33 and its FPS is 99, then I'd guess
that in 2 out of every 3 frames it's not doing anything. Conversely, an FPS
lower than the tickrate will mean the tickrate is limited to the FPS (not
good). This is different from HL1, where the FPS and tickrate were one and
the same thing (with sys_ticrate setting the maximum). I can't really see
why they decoupled them in Source, so maybe there is more to it than this.
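One way to picture that decoupling is a fixed-timestep loop. This is a
minimal sketch under my own assumptions, not Source's actual server loop;
in particular, simulating at most one tick per frame is an assumption I
use to model the "low FPS caps the tickrate" behaviour described above.

```python
# Fixed-timestep sketch (assumption, not Source's real loop): the frame
# loop runs `fps` times per second, but the world only advances by one
# tick once a full tick interval has accumulated. At most one tick is
# simulated per frame, so a low FPS caps the effective tickrate.
def ticks_in_one_second(fps, tickrate):
    frame_dt = 1.0 / fps
    tick_dt = 1.0 / tickrate
    accumulator = 0.0
    ticks = 0
    for _ in range(fps):                 # one second's worth of frames
        accumulator += frame_dt
        if accumulator >= tick_dt:       # enough time elapsed for a tick?
            accumulator -= tick_dt
            ticks += 1                   # simulate one world tick
        # otherwise: a frame in which the server does no simulation work
    return ticks

print(ticks_in_one_second(99, 33))   # ~33: 2 of every 3 frames are idle
print(ticks_in_one_second(20, 33))   # ~20: low FPS caps the tickrate
```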
Your tickrate and sv_maxupdaterate should probably be equal. If your
tickrate is greater than sv_maxupdaterate then clients won't be able to
take advantage of the additional ticks, and if tickrate is less than
sv_maxupdaterate then as you say the tickrate would apply a cap to the
effective maximum updaterate.
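Put another way, the capping described above amounts to a min() over the
relevant settings. The formula is my simplification of the discussion,
not engine code; the parameter names just mirror the cvars.

```python
# Effective snapshot rate a client ends up seeing, as a simple min()
# over server tickrate, server sv_maxupdaterate and client cl_updaterate.
# (A simplification/assumption based on the discussion, not engine code.)
def effective_updaterate(tickrate, sv_maxupdaterate, cl_updaterate):
    return min(tickrate, sv_maxupdaterate, cl_updaterate)

print(effective_updaterate(33, 100, 20))   # 20: client setting is the cap
print(effective_updaterate(33, 100, 101))  # 33: tickrate is the cap
print(effective_updaterate(66, 40, 101))   # 40: sv_maxupdaterate is the cap
```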
In my experience, CS Source with the default tickrate 33 is unplayable -
bullets just don't register properly. The other half of the problem
though, is most clients get <100 video FPS in Source. I have a pretty
good PC (with a PCIe X800-XT) and it still drops to <30fps sometimes
(particularly on maps like de_aztec, de_compound and de_port - maps with
large open spaces). It doesn't matter how high the server's tickrate,
FPS, and sv_maxupdaterate are if your client is doing <30 fps.
It also doesn't matter how high your rates are if there are other
players in the server with low rates: you will have trouble hitting
them. Because they are updating the server with their position less
often, they can walk around a corner and kill you before you have time
to react.
Also I have noticed that even with cl_updaterate and the server's
sv_maxupdaterate and tickrate set to high values, e.g. 166, my client
can't seem to do more than 80 updates/sec in each direction. I'm not
sure if this is an engine limitation or network-related.
There seem to be issues with the client-side prediction of user inputs
in Source as well, which was never a problem in HL1. With cl_smooth 1,
my client jerks around something awful (indicating client prediction
errors), and higher rates magnify the problem. Setting cl_smooth 0 seems
to improve matters (though logically one would expect the opposite effect).
/end spiel
-Simon
_______________________________________________
To unsubscribe, edit your list preferences, or view the list archives, please
visit:
http://list.valvesoftware.com/mailman/listinfo/hlds