Interesting. Seems like the two right hand columns never change.
On the thinkpad I get a range of:
fps: 100 ping: 58ms in : 60 2.00 k/s 33.9/s out: 60 1.20 k/s 33.9/s loss: 0 choke: 0
to
fps: 200 ping: 58ms in : 60 2.00 k/s 33.9/s out: 60 2.00 k/s 33.9/s loss: 0 choke: 0
Whisper wrote:
The thing I have found is that to get the client IN/OUT values to equal the tickrate of the server, I had to boost sv_maxupdaterate even when running tickrates below 100, e.g. 66, 60, 50 and 30. Tickrate 30 is not a problem, because the default sv_maxupdaterate is 60. But setting the tickrate to 50 or 60 would not produce IN/OUT values equal to the server's tickrate unless sv_maxupdaterate was set substantially higher than the tickrate.
This is all with my client machine running 20000/15000/101/101 (yes, I'm aware that cl_rate is supposedly invalid now, but what the hell, may as well try to set it), and although I am not always running at 100fps, even when I am running at 100fps with v-sync disabled the IN/OUT values do not match the values I expect to get.
Steve Tilson, what are your client machines getting as IN/OUT values under net_graph 3? The last set on the far right hand side, the ones that read as 33.9/s when you connect to a server set up with defaults?
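As a sketch of the workaround described above (the exact margin needed is an empirical observation from this thread, not documented behaviour, and the values below are illustrative only), a tickrate-66 server might be set up along these lines:

```
// Launch line -- tickrate is a command-line option, not a cvar:
//   ./srcds_run -game cstrike -tickrate 66 +map de_dust2 +maxplayers 18

// server.cfg -- boost sv_maxupdaterate well above the tickrate so
// clients can actually reach 66 updates/s; per the observation above,
// leaving it equal to the tickrate caps clients below it.
sv_maxupdaterate 100
sv_maxrate 20000
```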
Cheers
On 5/18/05, Steve Tilson <[EMAIL PROTECTED]> wrote:
Seems we have interested admins on this topic, so I have more variables to consider. I have fps_max set to 999 in my server.cfg and I run srcdsbooster, resulting in the server console displaying 500fps +/- 25. Using srcdsbooster provides the performance needed to keep the players in the server without raising the tickrate; setting the tickrate higher resulted in my not being able to run 10 small servers (16 slots or less). (Note that I am not claiming a capability to serve 160 players on a dual Xeon; about 100 is the max, IMnsHO.)
I recall Alfred saying something about the console fps value actually representing the number of simulations per second the server is running. If so, it seems logical that the higher the server fps, the more players can be served a solid and accurate update rate.
On the client side, I have 3 workstations I have run to test server responsiveness: a ThinkPad notebook I use for worst-case scenario analysis (suxors bad), a Dell P4 2.4GHz w/Radeon 9800 Pro for mid-range testing, and a custom-built box using dual SLI with 2 nVidia 6800 cards for top-end testing.
The difference between the low end and high end is obvious. The low-end ThinkPad is still playable on my servers even with a lousy 40 to 50 fps. The dual-SLI box, on the other hand, gets superior fps due to the video setup, and its update rates are also much higher. So it seems the processing power of the client has a lot to do with this topic.
So what are the optimal settings to tweak the maximum performance from both the server and client? I have studied the data rates and have proven that high data rates serve no purpose, since the clients never request so much data. I have found that sv_maxrate serves well at 8192: the effect is a more stable server and more predictable data transfer. The servers have the players, and when I ask how the performance is I never hear any negative lag remarks. I have the update rate set at 100 and the minupdaterate set to 45. The minupdaterate seems to be the factor that makes a lot of difference, since it appears to make the clients update more frequently.
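Collecting the settings described in this message into one place, the server side would look roughly like the following. The cvar names sv_maxupdaterate and sv_minupdaterate are my reading of the "update rate" and "minupdaterate" mentioned above; treat this as a sketch, not a verified config:

```
// server.cfg -- the settings described above, gathered as a sketch
fps_max 999            // let srcdsbooster push the server toward ~500 fps
sv_maxrate 8192        // per-client bandwidth cap found to run stably
sv_maxupdaterate 100   // allow up to 100 updates/s per client
sv_minupdaterate 45    // force slow updaters up to at least 45/s
```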
All constructive dialog and discussion is most welcome on this topic.
Many thanks.... Steve
Whisper wrote:
Hi guys
What is the most updates a second you can get a client to receive? Even with the following server & client settings we can only seem to manage 66-75 updates a second according to net_graph 3:
*Server*
sv_maxrate 20000
sv_maxupdaterate 150
tickrate 100
fps_max 300
18 players
*Client*
rate 20000
cl_rate 15000
cl_cmdrate 101
cl_updaterate 101
v-sync disabled
The server is only 8 hops and 40 kilometers physically away from me, and the actual wire distance would not be much further. With a consistent 20ms ping, I am trying to work out how to get our servers to the magic 100 updates a second, so we can serve out data at a rate that matches the normal 100fps limit enforced on the client side. Unfortunately I cannot get above 75 updates a second on the client side.
What else do I have to change? Is it even possible? If it is possible, what settings are you using? Does the tickrate of one SRCDS process affect the tickrate of another SRCDS process on the same box, like the old HLDS sys_ticrate does?
Thanks --
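The client settings listed above, written out as a config file for reference (the cvar names are exactly as given in the message; as noted earlier in the thread, cl_rate may be ignored by current builds):

```
// client autoexec.cfg -- the settings quoted above
rate 20000          // max bytes/s the client will accept from the server
cl_rate 15000       // possibly ignored by newer builds, set anyway
cl_cmdrate 101      // commands sent to the server per second
cl_updaterate 101   // updates requested from the server per second
```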
_______________________________________________
To unsubscribe, edit your list preferences, or view the list archives, please visit:
http://list.valvesoftware.com/mailman/listinfo/hlds

