
Has pingboost been disabled in the latest release of HLDS? Whether I set it to 1, 2, 3, or leave it unset, there no longer seems to be any difference. I updated from 3.1.1.0 so retail DoD users could connect, but the only difference I see is far higher latency. This is a 32-player DoD 1.0b server on a dual Athlon 2 GHz with 2 GB RAM and 18 GB 15k RPM drives, running Red Hat 8.0, using the AMD-optimized binary. On dod_caen, several regular players and I would ping around 40-50 under 3.1.1.0; now it is 120-140. That is one heck of an increase.
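
For reference, I start the server with roughly the following launch line (the map, player count, and layout here are examples rather than my exact command; -pingboost is the option in question):

  ./hlds_run -game dod +maxplayers 32 +map dod_caen -pingboost 2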

Here are my traceroute and ping results to the server; you can clearly see that my latency to it is low.
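
For anyone who wants to reproduce this, the equivalent commands are simply the following (the 20-packet count just matches the replies pasted below; nothing else was tuned):

  traceroute 216.127.33.237
  ping -c 20 216.127.33.237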

traceroute to 216.127.33.237 (216.127.33.237), 30 hops max, 38 byte packets
 1  10.130.192.1 (10.130.192.1)  12.778 ms  10.535 ms  37.825 ms
 2  12.244.81.129 (12.244.81.129)  10.250 ms  34.925 ms  9.787 ms
 3  12.244.64.1 (12.244.64.1)  13.512 ms  8.814 ms  10.384 ms
 4  12.244.72.18 (12.244.72.18)  10.194 ms  52.611 ms  13.045 ms
 5  gbr1-p60.st6wa.ip.att.net (12.123.44.114)  11.992 ms  10.886 ms  10.499 ms
 6  gbr4-p70.st6wa.ip.att.net (12.122.5.161)  12.664 ms  10.734 ms  10.433 ms
 7  ggr1-p370.st6wa.ip.att.net (12.123.44.133)  10.366 ms  15.215 ms  13.012 ms
 8  so1-2-3-622M.br2.SEA1.gblx.net (208.51.243.37)  40.380 ms  11.168 ms  12.877 ms
 9  pos3-0-2488M.cr2.SEA1.gblx.net (64.213.83.181)  11.031 ms pos3-0-2488M.cr1.SEA1.gblx.net (64.213.83.177)  60.113 ms  13.111 ms
10  so6-0-0-2488M.ar2.SEA1.gblx.net (64.212.107.250)  15.865 ms  9.739 ms so7-0-0-2488M.ar2.SEA1.gblx.net (64.212.107.254)  10.717 ms
11  Swift2.ge-3-2-0.ar2.SEA1.gblx.net (64.215.248.118)  11.195 ms  12.710 ms  45.305 ms
12  66.228.202.18 (66.228.202.18)  11.141 ms  13.635 ms  15.813 ms
13  216.127.33.237 (216.127.33.237)  12.139 ms  12.998 ms  11.753 ms

PING 216.127.33.237 (216.127.33.237) from 192.168.1.68 : 56(84) bytes of data.
64 bytes from 216.127.33.237: icmp_seq=1 ttl=52 time=12.5 ms
64 bytes from 216.127.33.237: icmp_seq=2 ttl=52 time=49.4 ms
64 bytes from 216.127.33.237: icmp_seq=3 ttl=52 time=10.7 ms
64 bytes from 216.127.33.237: icmp_seq=4 ttl=52 time=14.0 ms
64 bytes from 216.127.33.237: icmp_seq=5 ttl=52 time=11.2 ms
64 bytes from 216.127.33.237: icmp_seq=6 ttl=52 time=12.5 ms
64 bytes from 216.127.33.237: icmp_seq=7 ttl=52 time=12.3 ms
64 bytes from 216.127.33.237: icmp_seq=8 ttl=52 time=14.3 ms
64 bytes from 216.127.33.237: icmp_seq=9 ttl=52 time=10.6 ms
64 bytes from 216.127.33.237: icmp_seq=10 ttl=52 time=13.7 ms
64 bytes from 216.127.33.237: icmp_seq=11 ttl=52 time=12.0 ms
64 bytes from 216.127.33.237: icmp_seq=12 ttl=52 time=32.8 ms
64 bytes from 216.127.33.237: icmp_seq=13 ttl=52 time=10.9 ms
64 bytes from 216.127.33.237: icmp_seq=14 ttl=52 time=16.4 ms
64 bytes from 216.127.33.237: icmp_seq=15 ttl=52 time=11.2 ms
64 bytes from 216.127.33.237: icmp_seq=16 ttl=52 time=10.2 ms
64 bytes from 216.127.33.237: icmp_seq=17 ttl=52 time=28.6 ms
64 bytes from 216.127.33.237: icmp_seq=18 ttl=52 time=33.1 ms
64 bytes from 216.127.33.237: icmp_seq=19 ttl=52 time=13.1 ms
64 bytes from 216.127.33.237: icmp_seq=20 ttl=52 time=11.7 ms


Another thing I noticed is that memory usage never goes above 256 MB, whereas under 3.1.1.0 it would average around 350 MB when the server was full. Maybe there is an issue with this.
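
I am reading that figure from the resident size of the server process, roughly like this (the process name is my guess at the AMD build's binary; it may differ on other installs):

  ps -C hlds_amd -o pid,rss,vsz,comm

rss is reported in kilobytes, so 256 MB shows up as about 262144.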

Hope this helps the developers.

Brad