Well done...

-----Original Message-----
From: Kevin Ottalini [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 17, 2006 11:01 AM
To: [email protected]
Subject: Re: [hlds] more then 1000fps at HLDS
HLDS (HL1 servers) can easily, and with little burden, run at either ~500 fps or ~1000 fps. The actual maximum FPS achieved is not directly controllable, since it is a motherboard-chipset-related issue; the target rate is set with the "sys_ticrate" CVAR, so the maximum setting is: sys_ticrate 1000. Win32 servers will also need to run some sort of high-resolution timer (please see other mail threads about this). We are only talking about HLDS here (HL1 servers); Source (SRCDS) servers are quite different and, at the moment, appear to run best at their default settings.

This is not FPS in the sense of visual FPS, but rather how often the server processes the available event information (takes a "snapshot") and, if needed, sends an update to clients that need updates. The more updates the server sends out, the more bandwidth the server will use on the uplink. Clients can receive a maximum of 100 updates per second regardless of the server's sys_ticrate setting. A client receiving a server update is not the same thing as the video FPS the client is actually viewing. The client graphics FPS is controlled by scene and event complexity and the "fps_max" CVAR; it could indeed be set to fps_max 1000, but anything above 100 is quite silly. Again, this "viewing FPS" has nothing to do with the server's sys_ticrate setting.

The client has a CVAR that tells the server how often to send updates: cl_updaterate. cl_updaterate 100 is the maximum (fastest) setting, which the server may or may not allow; the server can cap the client maximum via the sv_maxupdaterate CVAR. Again, this has nothing to do with the client's VISUAL FPS.

OK, so why would a server operator want to run his/her server at sys_ticrate 1000? In the case of HL1 servers only, running a faster ticrate on the server can slightly improve the apparent client latency (sometimes called ping, though ping is a little different).
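Pulling the CVARs above together, a minimal sketch of the settings discussed (the split between server and client files is my assumption; the values are the ones named in this thread):

```
// server.cfg (HLDS, HL1 only -- not SRCDS)
sys_ticrate 1000      // server snapshot rate; actual max depends on chipset/timer
sv_maxupdaterate 100  // cap on how many updates/sec any client may request

// client config
cl_updaterate 100     // ask for the fastest update rate the server allows
fps_max 100           // visual FPS; unrelated to sys_ticrate, >100 is silly
```

Remember that on Win32 the server also needs a high-resolution timer running for sys_ticrate 1000 to actually be reached.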
If the server is running sys_ticrate 100, then there is a 10 ms interval between server snapshots that can be sent to clients. If a client is an 80 ms ping distance from the server (real ping this time), then the maximum latency is 80 ms (ping) + 10 ms (snapshot interval), or 90 ms (latency). If the same server is running at sys_ticrate 1000, the snapshot interval is only 1 ms, so that same player will see only an 81 ms latency. Is a 9 ms saving important during game play? Probably not, although there are internet players who claim to be able to feel the difference. In a LAN setting this may be different: 10 ms extra may be 10x what the ping is on a LAN (but still, is this important? probably not).

Running an HLDS server at a higher sys_ticrate should have the overall effect of keeping what players see on that server more accurate. This appears to be a real and valuable effect, at the cost of much higher CPU utilization. The real reason a server operator might want to run his HLDS server at sys_ticrate 1000, though, is that it gives the server the ability to send updates to individual clients on a more timely basis. Again, this is not more updates, just updates that don't have to wait very long for the next server snapshot to happen. This has the overall effect of spreading out client updates so they don't all happen for all clients at the same time, which can slightly lower the demand on the server uplink and might help the server run a little smoother.

Extensive testing on my HLDM server led me to conclude that running sys_ticrate 1000 actually allowed me to add one additional player slot (out of 10 total), and the server had a much tighter "feel" to events with a slight improvement in accuracy. Of course, running sys_ticrate 1000 also took my average CPU utilization for a 10-player server from around 3% to around 40% on some maps.
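The worst-case latency arithmetic above can be sketched as a few lines of Python (the helper name is my own; the numbers are the 80 ms ping example from this thread):

```python
# Worst-case client latency = real ping + snapshot interval.
# The snapshot interval is 1 / sys_ticrate seconds, i.e. 1000 / sys_ticrate ms.

def worst_case_latency_ms(ping_ms: float, sys_ticrate: int) -> float:
    """Ping plus the longest time an event may wait for the next snapshot."""
    snapshot_interval_ms = 1000.0 / sys_ticrate
    return ping_ms + snapshot_interval_ms

print(worst_case_latency_ms(80, 100))   # sys_ticrate 100  -> 90.0 ms
print(worst_case_latency_ms(80, 1000))  # sys_ticrate 1000 -> 81.0 ms
```

The 9 ms difference between the two calls is the entire latency benefit being discussed, which is why it matters far more on a LAN (where it can dwarf the ping) than over the internet.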
Even my old 800 MHz Intel P3 server was able to run sys_ticrate 1000; the real question is whether you are overloading your server CPU. This is a function of the number of players, the map you are running, and the sys_ticrate setting. If your CPU is running at more than 50% with sys_ticrate 1000, then decrease sys_ticrate to 500. For testing purposes, use the Server GUI (don't use -console) and look at the utilization graph.

qUiCkSiLvEr

----- Original Message -----
From: "Steven Hartland"
To: <[email protected]>
Sent: Wednesday, May 17, 2006 6:37 AM
Subject: Re: [hlds] more then 1000fps at HLDS

Roman Hatsiev wrote:
> Could you please provide vendor name and model?
>
> On 17/05/06, Ilya Oboznyy wrote:
>> just get a hardware which have lower timings with windows..

Guys, there is no point in running at such ridiculously high fps. So instead of wasting your time and others' pursuing it, devote it to improving things that can make a difference, such as improving your net connection.

Steve

_______________________________________________
To unsubscribe, edit your list preferences, or view the list archives, please visit: http://list.valvesoftware.com/mailman/listinfo/hlds

