Hello,
o Stefan Sayer [11/28/08 13:24]:
Hello,
o Peter Loeppky [11/26/08 20:21]:
I am running some tests on the sems server with very few modules
loaded. It also includes a very simple Python app that plays a message
and hangs up. The loaded modules are:
wav;sipctrl;session_timer;ivr
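In sems.conf that corresponds roughly to the following (only a sketch;
the application name is a placeholder for whatever your ivr script
registers as):

load_plugins=wav;sipctrl;session_timer;ivr
application=my_python_app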
Once I start some stress testing of sems, I see the memory usage
climb. When I stop the test and leave sems running, the memory drops
a bit, but not back to normal.
Does anyone else see this?
I could not trace the cause of this. valgrind shows it to be under main
at string rep ... so I suppose it is leaking in a string constructor.
Could there be a virtual destructor missing somewhere?
After a lot of testing and memory leak hunting with various malloc
debuggers, I think there is not really a memory leak; it is the STL pool
allocator, plus an optimization in the allocation for the creation of
threads (one for each session, with a 1 MB stack). Apparently it takes a
lot of allocation activity until memory is really freed, so if you run
tests for only a few thousand calls, the memory is still kept.
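A quick way to see that this is retained allocator and thread-stack
memory rather than live sessions is to compare the resident size with
the number of threads actually still running, e.g. on Linux (assuming
the process is called sems):

grep -E 'VmRSS|Threads' /proc/$(pidof sems)/status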
For example, I run sems with
load_plugins=sipctrl;wav;myapp
application=myapp
(myapp is an empty app from examples/tutorial/myapp) and generate load with
sipp -sn uac -d 3000 -r 50 -m 20000 192.168.5.106:5070
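To watch the resident size while the test is running, something like
this works (again assuming the process is called sems):

watch -n 10 'ps -o pid,rss,vsz -p $(pidof sems)'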
When the 20k calls have finished, the memory consumption drops after a
while (~32 sec after the last call), as you have reported, but not to
the original (low) value. But after some more runs, I get (RES/RSS):
after 20000 calls: 54424 kb
after 40000 calls: 74984 kb
after 60000 calls: 101996 kb
after 80000 calls: 92232 kb
after 100000 calls: 85428 kb
Now I let it run for a while (no -m, 45 min, 130k calls) and it
stabilizes at ~140000 - 150000 kb. Sometimes memory use grows, sometimes
it goes back; it seems to depend on free memory as well.
Running sems with GLIBCXX_FORCE_NEW=1 exported gives slightly different
results, but it also stabilizes after a while.
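To reproduce that, the variable just has to be in the environment of the
sems process before it starts, e.g. (the -f sems.conf part is only an
example invocation, adjust to your setup):

export GLIBCXX_FORCE_NEW=1
./sems -f sems.conf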
If you inspect the memory used with pmap, you will also see a lot of
1024 K blocks in use, even though, if you attach with gdb and run 'info
threads' or 'thread apply all bt', you will see only the core threads
running. Therefore I suppose there is a lot of optimization going on for
efficient pool memory allocation, which can be confusing if one looks
only at the output of ps after a few thousand calls.
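Concretely, something like this (the grep just picks out the 1 MB
anonymous mappings, i.e. the thread stacks):

pmap $(pidof sems) | grep 1024K
gdb -p $(pidof sems)
(gdb) info threads
(gdb) thread apply all bt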
About the Python garbage collector: it's probably yet another interesting
topic... (do you get "Fatal error: GC object already tracked"?)
Stefan
--
Stefan Sayer
VoIP Services
[EMAIL PROTECTED]
www.iptego.com
IPTEGO GmbH
Am Borsigturm 40
13507 Berlin
Germany
Amtsgericht Charlottenburg, HRB 101010
Geschaeftsfuehrer: Alexander Hoffmann
_______________________________________________
Sems mailing list
[email protected]
http://lists.iptel.org/mailman/listinfo/sems