We have had the same issue. We also reach about 200 MB within two days. This
only happens on the three of our pub servers that are full most of the time.

Note: All servers are Counter-Strike.

Server #1 has no custom maps. It runs AdminMod, StatsMe, and MetaMod. It's an
18-man pub.
Server #2 has some custom maps. It runs AdminMod and StatsMe. It's a 14-man
pub.
Server #3 has no custom maps. It runs AdminMod, StatsMe, and MetaMod. It's a
20-man pub.

Servers #1 and #2 run on the same machine, a P4 2.66.
Server #3 runs on an AMD Barton 3000+ (using the i686 binary).

Both boxes run FreeBSD 5.2.1, as do all of our others. They also use the
linux_base-debian Linux compatibility layer. m0gely, if I recall correctly,
don't you run FreeBSD as well? This may be a FreeBSD-specific issue, although
I can't see why it would be.
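To compare growth rates across the boxes, something like the following can log the server's resident memory over time. This is a minimal sketch assuming a POSIX shell and a `ps` that accepts `-o rss=` (true on both FreeBSD and Linux); `sample_rss` is a hypothetical helper name, not anything shipped with hlds.

```shell
#!/bin/sh
# sample_rss: print a timestamped RSS sample (in kilobytes) for a PID.
# RSS is what grows here, so it's the number worth graphing.
sample_rss() {
    # -o rss= suppresses the header; tr strips ps's padding spaces
    rss=$(ps -o rss= -p "$1" | tr -d ' ')
    printf '%s %s kB\n' "$(date '+%Y-%m-%dT%H:%M:%S')" "$rss"
}

# To watch an hlds process, loop until it exits, e.g.:
#   while kill -0 "$HLDS_PID" 2>/dev/null; do
#       sample_rss "$HLDS_PID" >> hlds_mem.log
#       sleep 300   # one sample every 5 minutes
#   done

# One-off sample of this shell itself, for illustration:
sample_rss $$
```

Comparing the resulting logs between the full pubs and the quiet servers would show whether the growth tracks player connections or wall-clock time.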

-Mike
----- Original Message -----
From: "m0gely" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, May 31, 2004 1:06 PM
Subject: [hlds_linux] Is the hlds_l memory leak ever going to be fixed?


> My hlds 1.1.2.5 server (CS 1.6) reaches 200 MB of memory in about *two
> days*.  This server starts around 65 MB and gets nowhere near the player
> connections my 1.5 server does.  I know I am not the only one who
> experiences this, and both servers are set up identically with regard to
> addons.  I have found this to happen even with Metamod disabled.
>
> Contrast this to my 3.1.1.1e (CS 1.5) server, which has been online for
> over 31 days since its last restart.  Over the last month it has handled
> just over 28,000 player connections and is only now reaching 200 MB of
> memory.
>
> --
> - m0gely
> http://quake2.telestream.com/
> Q2 | Q3A | Counter-strike
>
> _______________________________________________
> To unsubscribe, edit your list preferences, or view the list archives,
> please visit:
> http://list.valvesoftware.com/mailman/listinfo/hlds_linux


