I do not think this is the best place to discuss that, as people here
tend to flame when you start talking about "realtime" (search for my
earlier email on this list where I was asking why, when you run the
gameserver as root, it automatically "promotes" itself into the
scheduler generally used by realtime processes).
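
If you want to see which policy the kernel actually gave your process,
here is a minimal Python 3 sketch (Linux only; the PID below is
hypothetical, substitute your own srcds process id):

    import os

    SRCDS_PID = 12345  # hypothetical PID of your srcds_linux process

    POLICY_NAMES = {
        os.SCHED_OTHER: "SCHED_OTHER (normal)",
        os.SCHED_FIFO:  "SCHED_FIFO (realtime)",
        os.SCHED_RR:    "SCHED_RR (realtime)",
        os.SCHED_BATCH: "SCHED_BATCH",
        os.SCHED_IDLE:  "SCHED_IDLE",
    }

    # Ask the kernel which scheduling policy and priority the process has.
    policy = os.sched_getscheduler(SRCDS_PID)
    priority = os.sched_getparam(SRCDS_PID).sched_priority
    print("pid %d: %s, priority %d"
          % (SRCDS_PID, POLICY_NAMES.get(policy, str(policy)), priority))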

Personally, I run the servers without binding them to specific cores,
and I'm running them on highly tuned 3.2 kernels + RT patches, on
specific Intel CPU families (with hyper-threading enabled).
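
For reference, "binding to specific cores" just means restricting the
process's CPU affinity mask (what taskset does). A minimal Python 3
sketch of that, Linux only, with a hypothetical PID and hypothetical
core numbers (you need to own the process or be root):

    import os

    SRCDS_PID = 12345       # hypothetical srcds_linux PID
    PINNED_CORES = {2, 3}   # hypothetical: a physical core plus its HT sibling

    print("before:", os.sched_getaffinity(SRCDS_PID))  # default: all online CPUs
    os.sched_setaffinity(SRCDS_PID, PINNED_CORES)      # pin, like `taskset -cp 2,3`
    print("after: ", os.sched_getaffinity(SRCDS_PID))

    # "Not binding" simply means leaving the mask at the default (all CPUs):
    # os.sched_setaffinity(SRCDS_PID, range(os.cpu_count()))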

So far we have not had anyone reporting lag.

On 02/11/2012 15.25, Chris Oryschak wrote:
> A recent thread made me wonder if I'm running my servers in the most
> efficient way.
>
> For the longest time I've been running multiple servers on a single box
> that has multiple cores.  Each SRCDS instance would be assigned its own
> core to prevent it from hopping between cores as I assumed they might
> overlap and max out that core.  Plus I remember discussion of lag
> spikes occurring when it moves between cores.
>
> That being said, ICS in another thread said he's been running his servers
> not assigned to any specific cores.
>
> Here are my questions:
> -Those of you who run multiple SRCDS instances per server, do you assign
> them each to a core?
> -Do you renice any processes?
> -Do you change the realtime scheduling?
> (http://wiki.fragaholics.de/index.php/EN:Linux_Optimization_Guide#Setting_your_servers_to_run_with_realtime_scheduling)
> -Any other process tweaks to give SRCDS more priority?
>
>
> I'm curious how everyone else is doing this to achieve maximum performance.
>
> Thanks.
> Chris

_______________________________________________
To unsubscribe, edit your list preferences, or view the list archives, please 
visit:
https://list.valvesoftware.com/cgi-bin/mailman/listinfo/hlds_linux
