Above all other suggestions, why is it that the rates aren't set
properly by the connection speed options?

We're no longer on an Internet where no-one knows anything about their
connectivity. Most gamers can tell you whether they are on 56k, 64k,
128k, 256k, 512k, 1mb, 2mb, 6mb, 8mb or higher. Same deal with
most (sensible) server operators. With that in mind, why can't the
engine ask the appropriate real world human (and non-technical)
questions of "how much bandwidth have you got, and how many people
share it?". One could go even further and run a bandwidth test first
then ask "how many people share it?" (as it's getting very common).
>From there it's easy to work out what good rate values should be.
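To illustrate, here is a minimal sketch of that "work it out" step. The function name, the headroom factor, and the assumption that the rate cvar is in bytes per second are mine, not the engine's:

```python
# Hypothetical sketch: derive a rate value from two plain-English
# answers ("how much bandwidth?" and "how many people share it?"),
# reserving some headroom for other traffic on the line.

def suggest_rate(downstream_kbit: int, people_sharing: int,
                 headroom: float = 0.8) -> int:
    """Return a suggested rate value in bytes per second."""
    bytes_per_sec = downstream_kbit * 1000 / 8    # kbit/s -> bytes/s
    share = bytes_per_sec / max(people_sharing, 1)
    return int(share * headroom)

# A 512k line shared by two people:
print(suggest_rate(512, 2))   # 25600
```

The headroom factor is a guess; the point is only that the mapping from real-world answers to rate values is trivial arithmetic.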

I am still a little confused about "sv_maxrate", as many people have
jumped down my throat about setting "too high" values. The most common
objection is "that's over the cap, you can't have it that high, it
doesn't work". My only real retort (and I won't get drawn into this
argument) is that I can pull a large 5mb map from my server in under
15 seconds, with no res files and no sv_downloadurl. The only way to
explain this is that maxrate is working at well over 25 thousand bits
per second. (Note the word "thousand": 25kbps is not 25000, and the
choice of values has always made me chuckle.)
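The arithmetic behind that claim, assuming "5mb" means 5 x 1024 x 1024 bytes:

```python
# The 5 MB map in under 15 seconds, expressed as a transfer rate.
map_bytes = 5 * 1024 * 1024
seconds = 15

bytes_per_sec = map_bytes / seconds
print(round(bytes_per_sec))       # 349525 bytes/s
print(round(bytes_per_sec * 8))   # ~2.8 million bits/s
print(bytes_per_sec > 25000)      # True - well past a 25000 cap
```

Whichever unit the cap is meant to be in, the observed transfer is an order of magnitude past 25000.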

As has been said in another of the responses to this thread, the
default for cl_interp is 0.1 - that's 100ms. With cl_cmdrate at 10 or
lower, all of the interp time is used with none to spare. On top of
link latencies etc. you are likely looking at relatively frequent
interpolation errors. This is more of an issue than "bullet
registration at low rates". The only way I can really see to test
bullet registration is to record details of remote bot clients with
most of the random input disabled. This is why I try not to discuss
"registration"; hitbox placement and latency are far more common
issues, along with FPS bounce on servers and the like. (Many of my
srcds instances are still doing this, on both Windows and Linux. One
running through a UK GSP achieves 500fps over 95% of the time, but
drops to 60 or so at random times during play. There is an obvious
drop at the start of a round, but what I describe is mid-round and
does not seem to coincide with any significant player events. This is
config independent - I have used defaults and the same occurs.)
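The interp arithmetic is easy to check. A minimal sketch, assuming 10 packets per second against the default 0.1 s interpolation buffer:

```python
# With 10 packets per second, the interval between packets exactly
# consumes the default 0.1 s interpolation buffer.
cl_interp = 0.1               # default interpolation buffer (seconds)
updates_per_second = 10       # the low-rate case under discussion
interval = 1.0 / updates_per_second   # 0.1 s between packets

spare = cl_interp - interval
print(spare)   # 0.0 -> no slack; any jitter or delay on top of this
               # pushes past the buffer and causes an interp error
```

With zero slack, every bit of jitter or link latency lands outside the interpolation window, which is the "relatively frequent interpolation errors" point above.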

I don't have the (dis)pleasure of owning a dial-up account at present,
so I cannot test how much bandwidth can be streamed through 56k in the
frame sizes that Source typically generates, but I suspect that for
some less capable connections a cl_cmdrate of 15 may be the limit.
(Don't forget, upstream is only half the story - without the
downstream the game is still unplayable.)

I currently configure my servers in such a way that most 56k users
can't get a low enough ping to avoid being kicked by the high ping
kicker. This is not because I think the game is unplayable on 56k; it
is because the game is more accurate on faster connections, and in
terms of training there's no point practising against targets that may
or may not be where you see them.

Another option for bandwidth management (although most GSPs will
probably kill me for suggesting such a thing, due to bandwidth costs)
is a system which dynamically compensates based upon the choke and
loss values. cl_cmdrate, cl_updaterate, and rate could be
trickle-increased and decreased on both client and server on a
continual basis, using latency, loss and choke as indicators of the
connection's performance at that speed. Properly built, such a system
should provide the best end-user experience and remove the need for
users to try to understand these settings.
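A toy sketch of that feedback loop. The thresholds, step sizes, and bounds here are invented for illustration; nothing like this exists in the engine as far as I know:

```python
# Nudge the rate upward while the link is clean, and back it off
# multiplicatively when choke or loss appear (AIMD-style control).

def adjust_rate(rate: int, choke: float, loss: float,
                lo: int = 5000, hi: int = 30000) -> int:
    """One control step: choke/loss are fractions of recent packets."""
    if choke > 0.02 or loss > 0.01:    # link struggling: back off
        rate = int(rate * 0.85)
    else:                              # link clean: trickle upward
        rate += 500
    return max(lo, min(hi, rate))

rate = 20000
rate = adjust_rate(rate, choke=0.0, loss=0.0)    # 20500 - clean link
rate = adjust_rate(rate, choke=0.10, loss=0.0)   # 17425 - choking
```

Additive increase with multiplicative decrease is the classic stable choice for this kind of loop (it is what TCP congestion control does), which is why the sketch backs off faster than it climbs.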

A question for Valve which has been bugging me for a while: how many
ticks of data can you fit into one client command packet? There is a
cl_cmdbackup cvar which defaults to 2. If we have 10 command packets
per second, each of which contains the current command and the
previous two, we're potentially getting 30 command updates per second.
The communications timeline looks a little nasty though: cl_interp is
at 0.1, so we're cutting it fine. Even when a command packet arrives
on time, both history commands are already cl_interp old or older,
before considering link latency or transmission time. To me these
windows seem too closely matched for reliable processing, but I'm not
sure exactly how the history commands are handled in that scenario.
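The ages work out like this, assuming each packet simply repeats the commands from the previous packets at a fixed send interval:

```python
# Ages of the commands carried in one client packet at cl_cmdrate 10
# with cl_cmdbackup 2 (current command plus the two previous ones).
cl_cmdrate = 10
cl_cmdbackup = 2
interval = 1.0 / cl_cmdrate   # 0.1 s between command packets

# On arrival the backup copies are already one and two send
# intervals old, before any link latency is added.
ages = [i * interval for i in range(cl_cmdbackup + 1)]
print(ages)   # [0.0, 0.1, 0.2] - both history commands are at or
              # beyond the 0.1 s cl_interp window on a perfect link
```

So the backups only buy anything if the server is willing to act on commands older than the interpolation window, which is exactly the part I'm unsure about.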

Also, is the policy for lethal damage within the same tick
first-come, first-served (FCFS)?

On 7/12/05, Harley Peters <[EMAIL PROTECTED]> wrote:
> I would like to request that Valve increase the minimum value for
> cl_cmdrate.
> The current minimum of 10 is too low and is causing problems, because
> players are setting their cl_cmdrate to 10 as it makes their reported
> ping drop to 5.
>
> cl_cmdrate 10   -   .8k/s - 9/s
> cl_cmdrate 15   -    1k/s - 12/s
> cl_cmdrate 20   -  1.4k/s - 17/s
> cl_cmdrate 30   -  1.7k/s - 19/s
>
> Nine updates per second is just not enough and must be causing hit
> registration problems.
> Who in their right mind would think they could play CSS and not even
> be able to handle a 1k/s upload?
>
> Harley

_______________________________________________
To unsubscribe, edit your list preferences, or view the list archives, please 
visit:
http://list.valvesoftware.com/mailman/listinfo/hlds_linux
