Hey guys!

Thanks for all the help. The stuttering problem is now gone. It really was
the disk I/O latency that was messed up; since my host moved me to another
box, everything is fine again.

But ever since the move to the new box we've been seeing some weird CPU
usage that we didn't have before: srcds is showing more than 100% CPU with
fewer than 18 players.

The odd thing is that it never actually maxes out a single core. A friend
at the same company runs a server identical to mine, and his CPU usage
stays at 50~60% with the server full (24 slots).
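
If numbers help, I can log the per-core load while the server fills up with
something like this (mpstat comes from the sysstat package; pressing 1
inside top gives the same per-core view interactively):

    # per-core utilisation, refreshed every second
    mpstat -P ALL 1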

Here are the specs of the server:

> processor       : 0
> vendor_id       : GenuineIntel
> cpu family      : 6
> model           : 45
> model name      : Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz
> stepping        : 7
> cpu MHz         : 2012.266
> cache size      : 15360 KB
> physical id     : 0
> siblings        : 12
> core id         : 0
> cpu cores       : 6
> apicid          : 0
> initial apicid  : 0
> fpu             : yes
> fpu_exception   : yes
> cpuid level     : 13
> wp              : yes
> flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
> cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
> pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology
> nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2
> ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt
> tsc_deadline_timer aes xsave avx lahf_lm ida arat xsaveopt pln pts dts
> tpr_shadow vnmi flexpriority ept vpid
> bogomips        : 4599.46
> clflush size    : 64
> cache_alignment : 64
> address sizes   : 46 bits physical, 48 bits virtual
> power management:
>
> processor       : 1
> vendor_id       : GenuineIntel
> cpu family      : 6
> model           : 45
> model name      : Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz
> stepping        : 7
> cpu MHz         : 2012.266
> cache size      : 15360 KB
> physical id     : 1
> siblings        : 12
> core id         : 0
> cpu cores       : 6
> apicid          : 32
> initial apicid  : 32
> fpu             : yes
> fpu_exception   : yes
> cpuid level     : 13
> wp              : yes
> flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
> cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
> pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology
> nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2
> ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt
> tsc_deadline_timer aes xsave avx lahf_lm ida arat xsaveopt pln pts dts
> tpr_shadow vnmi flexpriority ept vpid
> bogomips        : 4599.37
> clflush size    : 64
> cache_alignment : 64
> address sizes   : 46 bits physical, 48 bits virtual
> power management:
>
> processor       : 2
> vendor_id       : GenuineIntel
> cpu family      : 6
> model           : 45
> model name      : Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz
> stepping        : 7
> cpu MHz         : 2012.266
> cache size      : 15360 KB
> physical id     : 0
> siblings        : 12
> core id         : 1
> cpu cores       : 6
> apicid          : 2
> initial apicid  : 2
> fpu             : yes
> fpu_exception   : yes
> cpuid level     : 13
> wp              : yes
> flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
> cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
> pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology
> nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2
> ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt
> tsc_deadline_timer aes xsave avx lahf_lm ida arat xsaveopt pln pts dts
> tpr_shadow vnmi flexpriority ept vpid
> bogomips        : 4599.46
> clflush size    : 64
> cache_alignment : 64
> address sizes   : 46 bits physical, 48 bits virtual
> power management:
>
> processor       : 3
> vendor_id       : GenuineIntel
> cpu family      : 6
> model           : 45
> model name      : Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz
> stepping        : 7
> cpu MHz         : 2012.266
> cache size      : 15360 KB
> physical id     : 1
> siblings        : 12
> core id         : 1
> cpu cores       : 6
> apicid          : 34
> initial apicid  : 34
> fpu             : yes
> fpu_exception   : yes
> cpuid level     : 13
> wp              : yes
> flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
> cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
> pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology
> nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2
> ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt
> tsc_deadline_timer aes xsave avx lahf_lm ida arat xsaveopt pln pts dts
> tpr_shadow vnmi flexpriority ept vpid
> bogomips        : 4599.37
> clflush size    : 64
> cache_alignment : 64
> address sizes   : 46 bits physical, 48 bits virtual
> power management:
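
One thing I did notice in the dump above: every core reports ~2012 MHz even
though it's a 2.30 GHz part. That could just be power saving on the host,
and I'm not sure the container exposes any of this, but I was going to
check with:

    # reported clock per core
    grep "cpu MHz" /proc/cpuinfo
    # scaling governor, if it is visible from inside the container at all
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null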

Any idea why srcds hits 100% CPU way before any single core is actually
maxed out? If I can figure out what's happening, I can take it to the
provider and they can sort it out the same way they did with the I/O.
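
For what it's worth, I was planning to look at the per-thread usage and the
CPU affinity like this (srcds_linux as the process name is just my guess;
if you run more than one instance, substitute the PID by hand):

    # per-thread CPU usage of the srcds process
    top -H -p "$(pidof srcds_linux)"
    # which logical CPUs the process is allowed to run on
    taskset -cp "$(pidof srcds_linux)"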

Thanks in advance!


_pilger


On 11 April 2014 13:52, pilger <pilger...@gmail.com> wrote:

> Sprays were on, but our community isn't that big and it doesn't tend to
> spam sprays all that much. Still good advice, though; I'll keep an eye on
> the folder and trim it from time to time. I was using iotop and ioping to
> track my I/O activity, but iotop doesn't show latency, so it was only useful
> to check whether the stuttering was related to I/O at all. Ioping did the
> trick to confirm the CPU was waiting on the disk for its reads and writes.
>
> Yun, the read ops happen mainly during map changes, as far as I know. It's
> definitely worth keeping an eye on, since slow map changes usually make
> players think the server crashed and move on. Thanks for the input. It's
> nice to have some numbers to work with, even if they're not very precise
> and not tied to a single server.
>
> I actually talked to the VPS provider and he moved my server to another
> node (less crowded one) and here's the result:
>
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8152 time=338 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8153 time=288 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8154 time=275 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8155 time=318 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8156 time=413 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8157 time=461 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8158 time=246 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8159 time=322 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8160 time=275 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8161 time=427 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8162 time=327 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8163 time=331 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8164 time=330 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8165 time=291 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8166 time=411 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8167 time=335 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8168 time=276 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8169 time=904 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8170 time=282 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8171 time=295 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8172 time=272 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8173 time=297 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8174 time=304 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8175 time=313 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8176 time=246 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8177 time=239 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8178 time=266 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8179 time=524 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8180 time=316 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8181 time=432 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8182 time=278 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8183 time=329 us
>> 4.0 kb from . (ext4 /dev/ploop53496p1): request=8184 time=279 us
>
>
> It's been like this for a couple of hours, so it looks like the move worked.
> There are still occasional peaks above 15 ms (around 40~50 ms), but they are
> very rare now, so I think it's fine. I'll keep monitoring it, though.
>
> I'm not 100% sure this will solve the stuttering problem. I'll check it
> later on and get back here to report. Hope this helps people with similar
> problems.
>
> _pilger
>