I just did my first ever Valgrind analysis of Chandler, and found that
Valgrind reports at least 5 KB of memory definitely lost just by starting
and quitting Chandler, plus 3 KB indirectly lost and 12 MB possibly lost.
("Definitely lost" means no pointer to the block remained at exit;
"possibly lost" means only pointers into the middle of a block were
found, so those may or may not be real leaks.)

There were also lots of reports of double frees, use of uninitialized
memory, and so on.

Valgrind works on Linux. On Windows the closest equivalent would be the
commercial Purify. I don't know of one for the Mac.


Here is how I run Valgrind (the tutorial on the Valgrind website is nice):

You should get the debug Chandler bits. Then, manually set the environment
variables that chandlerDebug would set; a rough sketch follows (the
variable names below are my guesses, so check the chandlerDebug script
itself for the authoritative list):
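
export CHANDLERHOME=~/chandler            # assumption: your Chandler checkout
export CHANDLERBIN=$CHANDLERHOME          # assumption: used by the command below
export PYTHONPATH=$CHANDLERHOME/chandler  # assumption
export LD_LIBRARY_PATH=$CHANDLERBIN/debug/lib  # assumption

With those in place, you are ready to run: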

valgrind --leak-check=yes $CHANDLERBIN/debug/bin/python_d Chandler.py &>valgrind.log

With the above option Valgrind will look for memory leaks (but it will
find some other kinds of problems as well).
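
If the log gets too noisy, a couple of other standard Valgrind flags can
help; a sketch (Misc/valgrind-python.supp ships in the Python source
tree and suppresses well-known false positives from Python's allocator;
whether you need it with python_d depends on how the debug Python was
configured):

valgrind --leak-check=yes --num-callers=20 \
    --suppressions=Misc/valgrind-python.supp \
    $CHANDLERBIN/debug/bin/python_d Chandler.py &>valgrind.log

--num-callers just records deeper stack traces than the default.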

Here are some of the definite memory leaks it reported (looks like
PyLucene and wxPython to me):

(The ==9340== prefix on each line below is just the Valgrind process ID.)
==9340== 2 bytes in 2 blocks are definitely lost in loss record 4 of 641
==9340==    at 0x401C422: malloc (vg_replace_malloc.c:149)
==9340==    by 0x499D534: _Jv_Malloc (prims.cc:1073)
==9340==    by 0x49C15A6: _Jv_PrepareConstantTimeTables(java::lang::Class*) (natClass.cc:1128)
==9340==    by 0x49C25E7: java::lang::Class::initializeClass() (natClass.cc:792)
==9340==    by 0x49C25DC: java::lang::Class::initializeClass() (Class.h:279)
==9340==    by 0x499CA6D: _Jv_AllocObjectNoFinalizer (Class.h:279)
==9340==    by 0x499CA8D: _Jv_AllocObject (prims.cc:422)
==9340==    by 0x49C36FB: _Jv_NewClass(_Jv_Utf8Const*, java::lang::Class*, java::lang::ClassLoader*) (Class.h:245)
==9340==    by 0x49C37C7: _Jv_NewArrayClass(java::lang::Class*, java::lang::ClassLoader*, _Jv_VTable*) (natClassLoader.cc:592)
==9340==    by 0x499D2EB: _Jv_FindClassFromSignature(char*, java::lang::ClassLoader*) (Class.h:359)
==9340==    by 0x49C32BA: _Jv_PrepareCompiledClass(java::lang::Class*) (natClassLoader.cc:213)
==9340==    by 0x49C2717: java::lang::Class::initializeClass() (natClass.cc:732)

==9340== 75 (12 direct, 63 indirect) bytes in 1 blocks are definitely lost in loss record 130 of 641
==9340==    at 0x401CC6B: operator new[](unsigned) (vg_replace_malloc.c:197)
==9340==    by 0x6DA5FDC: wxPyApp::_BootstrapApp() (helpers.cpp:445)
==9340==    by 0x6DFDA39: _wrap_PyApp__BootstrapApp (_core_wrap.cpp:31478)
==9340==    by 0x814816F: PyCFunction_Call (methodobject.c:93)
==9340==    by 0x806013D: PyObject_Call (abstract.c:1860)
==9340==    by 0x80EA94F: ext_do_call (ceval.c:3844)
==9340==    by 0x80E553B: PyEval_EvalFrameEx (ceval.c:2307)
==9340==    by 0x80E77D2: PyEval_EvalCodeEx (ceval.c:2831)
==9340==    by 0x80EA041: fast_function (ceval.c:3660)
==9340==    by 0x80E9C95: call_function (ceval.c:3585)
==9340==    by 0x80E52F6: PyEval_EvalFrameEx (ceval.c:2267)
==9340==    by 0x80E77D2: PyEval_EvalCodeEx (ceval.c:2831)

==9340== 32 bytes in 2 blocks are definitely lost in loss record 250 of 641
==9340==    at 0x401CC6B: operator new[](unsigned) (vg_replace_malloc.c:197)
==9340==    by 0x6DA9AA7: wxAcceleratorEntry_LIST_helper(_object*) (helpers.cpp:2386)
==9340==    by 0x6E0019C: _wrap_new_AcceleratorTable (_core_wrap.cpp:32510)
==9340==    by 0x81480B2: PyCFunction_Call (methodobject.c:77)
==9340==    by 0x806013D: PyObject_Call (abstract.c:1860)
==9340==    by 0x80EA94F: ext_do_call (ceval.c:3844)
==9340==    by 0x80E553B: PyEval_EvalFrameEx (ceval.c:2307)
==9340==    by 0x80E77D2: PyEval_EvalCodeEx (ceval.c:2831)
==9340==    by 0x81477CE: function_call (funcobject.c:517)
==9340==    by 0x806013D: PyObject_Call (abstract.c:1860)
==9340==    by 0x80695E2: instancemethod_call (classobject.c:2497)
==9340==    by 0x806013D: PyObject_Call (abstract.c:1860)

==9340== 204 bytes in 3 blocks are definitely lost in loss record 405 of 641
==9340==    at 0x401D7AA: calloc (vg_replace_malloc.c:279)
==9340==    by 0x400D790: (within /lib/ld-2.3.6.so)
==9340==    by 0x400DA52: _dl_allocate_tls (in /lib/ld-2.3.6.so)
==9340==    by 0x40368C6: pthread_create@@GLIBC_2.1 (in /lib/tls/i686/cmov/libpthread-2.3.6.so)
==9340==    by 0x4036EE7: [EMAIL PROTECTED] (in /lib/tls/i686/cmov/libpthread-2.3.6.so)
==9340==    by 0x4A6C210: GC_pthread_create (pthread_support.c:1248)
==9340==    by 0x4A5ECDB: _Jv_ThreadStart(java::lang::Thread*, _Jv_Thread_t*, void (*)(java::lang::Thread*)) (posix-threads.cc:424)
==9340==    by 0x49C8FFF: java::lang::Thread::start() (natThread.cc:324)
==9340==    by 0x494BBD4: j_thread_start(j_thread*) (java.cpp:1779)
==9340==    by 0x80E9726: call_function (ceval.c:3548)
==9340==    by 0x80E52F6: PyEval_EvalFrameEx (ceval.c:2267)
==9340==    by 0x80E77D2: PyEval_EvalCodeEx (ceval.c:2831)


Andi also reported that syncing over SSL seems to make memory usage grow
faster than syncing without it, so I will be looking at SSL specifically
myself; one way to watch that kind of growth is sketched below.
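
Since --leak-check only reports blocks that are still lost at exit, a
heap profiler is probably a better fit for watching usage grow during a
sync. A sketch using Valgrind's massif tool, assuming the same setup as
above (output file names and formats vary between Valgrind versions):

valgrind --tool=massif $CHANDLERBIN/debug/bin/python_d Chandler.py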

-- 
  Heikki Toivonen

