I've been trying to track down some memory leaks in my pingCtlTable
implementation.  After fixing the bugs where I wasn't freeing my own
data structures, valgrind still reports two "definitely lost" chunks:

==7467== 17,568 bytes in 6 blocks are definitely lost in loss record
1,075 of 1,086
==7467==    at 0x4C9B22F: calloc (vg_replace_malloc.c:418)
==7467==    by 0x805ACDB: pingCtlTable_allocate_rowreq_ctx
(pingCtlTable_interface.c:590)
==7467==    by 0x805AE54: _mfd_pingCtlTable_rowreq_from_index
(pingCtlTable_interface.c:823)
==7467==    by 0x805B22D: _mfd_pingCtlTable_object_lookup
(pingCtlTable_interface.c:892)
==7467==    by 0x6FD6894: _baby_steps_access_multiplexer (baby_steps.c:501)
==7467==    by 0x6FEDAAE: netsnmp_call_handler (agent_handler.c:526)
==7467==    by 0x6FEE01E: netsnmp_call_next_handler (agent_handler.c:640)
==7467==    by 0x6FD630D: _baby_steps_helper (baby_steps.c:312)
==7467==    by 0x6FEDAAE: netsnmp_call_handler (agent_handler.c:526)
==7467==    by 0x6FEE01E: netsnmp_call_next_handler (agent_handler.c:640)
==7467==    by 0x6FDCFE9: netsnmp_row_merge_helper_handler (row_merge.c:342)
==7467==    by 0x6FEDAAE: netsnmp_call_handler (agent_handler.c:526)

==7467== 57,312 bytes in 24 blocks are definitely lost in loss record
1,080 of 1,086
==7467==    at 0x4C9B22F: calloc (vg_replace_malloc.c:418)
==7467==    by 0x805A25A: pingCtlTable_allocate_data
(pingCtlTable_interface.c:554)
==7467==    by 0x805A357: _mfd_pingCtlTable_undo_setup
(pingCtlTable_interface.c:2210)
==7467==    by 0x6FD6894: _baby_steps_access_multiplexer (baby_steps.c:501)
==7467==    by 0x6FEDAAE: netsnmp_call_handler (agent_handler.c:526)
==7467==    by 0x6FEE01E: netsnmp_call_next_handler (agent_handler.c:640)
==7467==    by 0x6FD630D: _baby_steps_helper (baby_steps.c:312)
==7467==    by 0x6FEDAAE: netsnmp_call_handler (agent_handler.c:526)
==7467==    by 0x6FEE01E: netsnmp_call_next_handler (agent_handler.c:640)
==7467==    by 0x6FDCFE9: netsnmp_row_merge_helper_handler (row_merge.c:342)
==7467==    by 0x6FEDAAE: netsnmp_call_handler (agent_handler.c:526)
==7467==    by 0x6FEE01E: netsnmp_call_next_handler (agent_handler.c:640)

Neither of these is the obvious culprit:
pingCtlTable_release_rowreq_ctx() does call SNMP_FREE(rowreq_ctx), and
pingCtlTable_release_data() does call free(data).
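
One experiment that occurred to me: pair a counter with each
allocate/release function and dump them periodically, to see which pair
drifts apart.  Something along these lines -- this is just my own
sketch, none of it is in the generated code and the names are made up:

    #include <net-snmp/net-snmp-config.h>
    #include <net-snmp/net-snmp-includes.h>

    /* bumped by hand in my implementation:
     *   ++rowreq_allocs  in pingCtlTable_allocate_rowreq_ctx()
     *   ++rowreq_frees   in pingCtlTable_release_rowreq_ctx(), next to SNMP_FREE()
     *   ++data_allocs    in pingCtlTable_allocate_data()
     *   ++data_frees     in pingCtlTable_release_data(), next to free()
     */
    static unsigned long rowreq_allocs, rowreq_frees;
    static unsigned long data_allocs, data_frees;

    /* called from a periodic alarm or at shutdown */
    static void
    dump_pingCtlTable_alloc_counters(void)
    {
        snmp_log(LOG_INFO, "pingCtlTable: rowreq %lu/%lu, data %lu/%lu\n",
                 rowreq_allocs, rowreq_frees, data_allocs, data_frees);
    }

If the rowreq pair diverges, I'd suspect a ctx being dropped somewhere
after _mfd_pingCtlTable_object_lookup; if the data pair diverges, I'd
suspect the copy made in _mfd_pingCtlTable_undo_setup never reaching
pingCtlTable_release_data.  But I may be looking in entirely the wrong
place.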

I'd be happy to run experiments to try to track these down, but I could
use some guidance on where to start: what tracing should I enable, and
what should I be looking for?  This level of leakage (6 + 24 blocks)
accumulates over a few weeks' worth of normal usage, so it's a bit of a
needle-in-a-haystack problem, and I'm looking for guidance to reduce
the size of the haystack :-)
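
For instance, I could sprinkle a couple of DEBUGMSGTL() calls through
the allocate and release paths to log the pointers and then match them
up from the log -- the token below is just something I'd make up for
this, not one the generated code already uses:

    /* in _mfd_pingCtlTable_undo_setup(), right after the
     * pingCtlTable_allocate_data() call (the local variable name will
     * differ in the real code): */
    DEBUGMSGTL(("pingCtlTable:leakhunt", "undo_setup allocated %p\n",
                (void *) data));

    /* in pingCtlTable_release_data(), just before free(data): */
    DEBUGMSGTL(("pingCtlTable:leakhunt", "release_data freeing %p\n",
                (void *) data));

Enabled with snmpd -DpingCtlTable:leakhunt, any pointer that shows up
in the first message but never in the second would (I assume) be one of
the leaked blocks.  Is that the right sort of approach, or is there
existing tracing in the agent/helpers that would get me there faster?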

Thanks,
  Bill
