On Thu, Apr 25, 2013 at 1:42 PM, Sebastian Feld
<[email protected]> wrote:
> On Wed, Apr 24, 2013 at 11:10 PM, Roland Mainz <[email protected]> 
> wrote:
>> On Wed, Apr 24, 2013 at 10:14 PM, Roland Mainz <[email protected]> 
>> wrote:
>>> On Wed, Apr 24, 2013 at 12:45 AM, John Reiser <[email protected]> wrote:
>>>>> Does valgrind provide any replacements for glibc's
>>>>> |__malloc_initialize_hook()| ? It seems this call and its |*hook*()|
>>>>> siblings are deprecated now (at least in SuSE >= 12.3) ...
>>>>
>>>> There is no glibc replacement.  [And the reasoning is correct.]
>>>> There is no valgrind replacement.
>>>> You must change your basic approach.
>>>>
>>>> We went through this just 6 months ago.
>>>> Check the archives of this mailing list:
>>>>
>>>>    [Valgrind-users] __malloc_hook
>>>>    Amir Szekely  <[email protected]>
>>>>    10/19/2012
>>>>
>>>> That thread contains code that works.
>>>> [The modification to detect the first use is obvious.]
>>>
>>> Grumpf... I tried that... but the way the code I'd like to
>>> instrument+debug is built and used makes that solution more or less
>>> impossible (for example... the allocator system lives in a separate
>>> namespace, i.e. it has |malloc()| && |free()| etc., but all symbols are
>>> prefixed with |_ast|, e.g. |_ast_malloc()|, |_ast_free()| etc.).
>>>
>>> I tried to work around the issues with the API provided in
>>> <valgrind/valgrind.h> ... but it seems this doesn't detect any
>>> read-from-unallocated etc. or even plain double-free situations
>>> (patch below) ... erm... is the API around
>>> |VALGRIND_MALLOCLIKE_BLOCK()| known to work in valgrind-3.8.1 ?
>>>
>>> -- snip --
>>> --- src/lib/libast/vmalloc/vmbest.c      2012-06-28 22:12:14.000000000 +0200
>>> +++ src/lib/libast/vmalloc/vmbest.c     2013-04-24 03:03:44.207373019 +0200
>>> @@ -10,40 +10,42 @@
>>>  *          http://www.eclipse.org/org/documents/epl-v10.html           *
>>>  *         (with md5 checksum b35adb5213ca9657e911e9befb180842)         *
>>>  *                                                                      *
>>>  *              Information and Software Systems Research               *
>>>  *                            AT&T Research                             *
>>>  *                           Florham Park NJ                            *
>>>  *                                                                      *
>>>  *                 Glenn Fowler <[email protected]>                  *
>>>  *                  David Korn <[email protected]>                   *
>>>  *                   Phong Vo <[email protected]>                    *
>>>  *                                                                      *
>>>  ***********************************************************************/
>>>  #if defined(_UWIN) && defined(_BLD_ast)
>>>
>>>  void _STUB_vmbest(){}
>>>
>>>  #else
>>>
>>>  #include       "vmhdr.h"
>>>
>>> +#include <valgrind/valgrind.h>
>>> +
>>>  /*     Best-fit allocation method. This is based on a best-fit strategy
>>>  **     using a splay tree for storage of lists of free blocks of the same
>>>  **     size. Recent free blocks may be cached for fast reuse.
>>>  **
>>>  **     Written by Kiem-Phong Vo, [email protected], 01/16/94.
>>>  */
>>>
>>>  #ifdef DEBUG
>>>  static int     N_free;         /* # of free calls                      */
>>>  static int     N_alloc;        /* # of alloc calls                     */
>>>  static int     N_resize;       /* # of resize calls                    */
>>>  static int     N_wild;         /* # allocated from the wild block      */
>>>  static int     N_last;         /* # allocated from last free block     */
>>>  static int     N_reclaim;      /* # of bestreclaim calls               */
>>>  #endif /*DEBUG*/
>>>
>>>  #define COMPACT                8       /* factor to decide when to compact     */
>>>
>>>  /* Check to see if a block is in the free tree */
>>>  #if __STD_C
>>> @@ -692,41 +694,44 @@
>>>
>>>                         if(VMWILD(vd,np))
>>>                         {       SIZE(np) &= ~BITS;
>>>                                 SELF(np) = np;
>>>                                 ap = NEXT(np); /**/ASSERT(ISBUSY(SIZE(ap)));
>>>                                 SETPFREE(SIZE(ap));
>>>                                 vd->wild = np;
>>>                         }
>>>                         else    vd->free = np;
>>>                 }
>>>
>>>                 SETBUSY(SIZE(tp));
>>>         }
>>>
>>>  done:
>>>         if(tp && !local && (vd->mode&VM_TRACE) && _Vmtrace && VMETHOD(vd) == VM_MTBEST)
>>>                 (*_Vmtrace)(vm,NIL(Vmuchar_t*),(Vmuchar_t*)DATA(tp),orgsize,0);
>>>
>>>         CLRLOCK(vm,local); /**/ASSERT(_vmbestcheck(vd, NIL(Block_t*)) == 0);
>>>
>>> -       return tp ? DATA(tp) : NIL(Void_t*);
>>> +       void *res= tp ? DATA(tp) : NIL(Void_t*);
>>> +       if (!local)
>>> +               VALGRIND_MALLOCLIKE_BLOCK(res, size, 0, 0);
>>> +       return res;
>>>  }
>>>
>>>  #if __STD_C
>>>  static long bestaddr(Vmalloc_t* vm, Void_t* addr, int local )
>>>  #else
>>>  static long bestaddr(vm, addr, local)
>>>  Vmalloc_t*     vm;     /* region allocating from       */
>>>  Void_t*                addr;   /* address to check             */
>>>  int            local;
>>>  #endif
>>>  {
>>>         reg Seg_t*      seg;
>>>         reg Block_t     *b, *endb;
>>>         reg long        offset;
>>>         reg Vmdata_t*   vd = vm->data;
>>>
>>>         /**/ASSERT(local ? (vd->lock == 1) : 1 );
>>>         SETLOCK(vm, local);
>>>
>>>         offset = -1L; b = endb = NIL(Block_t*);
>>> @@ -816,40 +821,43 @@
>>>                         vd->free = bp;
>>>                 else
>>>                 {       /**/ASSERT(!vmonlist(CACHE(vd)[S_CACHE], bp) );
>>>                         LINK(bp) = CACHE(vd)[S_CACHE];
>>>                         CACHE(vd)[S_CACHE] = bp;
>>>                 }
>>>
>>>                 /* coalesce on freeing large blocks to avoid fragmentation */
>>>                 if(SIZE(bp) >= 2*vd->incr)
>>>                 {       bestreclaim(vd,NIL(Block_t*),0);
>>>                         if(vd->wild && SIZE(vd->wild) >= COMPACT*vd->incr)
>>>                                 KPVCOMPACT(vm,bestcompact);
>>>                 }
>>>         }
>>>
>>>         if(!local && _Vmtrace && (vd->mode&VM_TRACE) && VMETHOD(vd) == VM_MTBEST )
>>>                 (*_Vmtrace)(vm,(Vmuchar_t*)data,NIL(Vmuchar_t*), (s&~BITS), 0);
>>>
>>>         CLRLOCK(vm, local); /**/ASSERT(_vmbestcheck(vd, NIL(Block_t*)) == 0);
>>>
>>> +       if (!local)
>>> +               VALGRIND_FREELIKE_BLOCK(data, 0);
>>> +
>>>         return 0;
>>>  }
>>>
>>>  #if __STD_C
>>>  static Void_t* bestresize(Vmalloc_t* vm, Void_t* data, reg size_t size, int type, int local)
>>>  #else
>>>  static Void_t* bestresize(vm, data, size, type, local)
>>>  Vmalloc_t*     vm;             /* region allocating from       */
>>>  Void_t*                data;           /* old block of data            */
>>>  reg size_t     size;           /* new size                     */
>>>  int            type;           /* !=0 to move, <0 for not copy */
>>>  int            local;
>>>  #endif
>>>  {
>>>         reg Block_t     *rp, *np, *t;
>>>         size_t          s, bs;
>>>         size_t          oldz = 0,  orgsize = size;
>>>         Void_t          *oldd = 0, *orgdata = data;
>>>         Vmdata_t        *vd = vm->data;
>>>
>>> @@ -936,40 +944,46 @@
>>>                         {       if(type&VM_RSCOPY)
>>>                                         memcpy(data, oldd, bs);
>>>
>>>                         do_free: /* reclaim these right away */
>>>                                 SETJUNK(SIZE(rp));
>>>                                 LINK(rp) = CACHE(vd)[S_CACHE];
>>>                                 CACHE(vd)[S_CACHE] = rp;
>>>                                 bestreclaim(vd, NIL(Block_t*), S_CACHE);
>>>                         }
>>>                 }
>>>         }
>>>
>>>         if(data && (type&VM_RSZERO) && (size = SIZE(BLOCK(data))&~BITS) > oldz )
>>>                 memset((Void_t*)((Vmuchar_t*)data + oldz), 0, size-oldz);
>>>
>>>         if(!local && _Vmtrace && data && (vd->mode&VM_TRACE) && VMETHOD(vd) == VM_MTBEST)
>>>                 (*_Vmtrace)(vm, (Vmuchar_t*)orgdata, (Vmuchar_t*)data, orgsize, 0);
>>>
>>>         CLRLOCK(vm, local); /**/ASSERT(_vmbestcheck(vd, NIL(Block_t*)) == 0);
>>>
>>> +       if (!local)
>>> +       {
>>> +               VALGRIND_FREELIKE_BLOCK(orgdata, 0);
>>> +               VALGRIND_MALLOCLIKE_BLOCK(data, size, 0, 0);
>>> +       }
>>> +
>>>         return data;
>>>  }
>>>
>>>  #if __STD_C
>>>  static long bestsize(Vmalloc_t* vm, Void_t* addr, int local )
>>>  #else
>>>  static long bestsize(vm, addr, local)
>>>  Vmalloc_t*     vm;     /* region allocating from       */
>>>  Void_t*                addr;   /* address to check             */
>>>  int            local;
>>>  #endif
>>>  {
>>>         Seg_t           *seg;
>>>         Block_t         *b, *endb;
>>>         long            size;
>>>         Vmdata_t        *vd = vm->data;
>>>
>>>         SETLOCK(vm, local);
>>>
>>>         size = -1L;
>>> -- snip --
>>
>> ... aaand more digging: I found
>> http://code.google.com/p/valgrind-variant/source/browse/trunk/valgrind/coregrind/m_replacemalloc/vg_replace_malloc.c#1175
>> which seems to be from one of the valgrind forks... what about taking
>> up that idea and providing a command-line option called
>> --allocator-sym-redirect, which passes down a small list of symbol
>> mappings to instruct valgrind to monitor some extra allocators?
>>
>> Example:
>> $ valgrind "--allocator-sym-redirect=sh_malloc=malloc,sh_free=free,sh_calloc=calloc" ...
>> # would instruct valgrind to treat |sh_malloc()| as an alternative
>> |malloc()|, |sh_free()| as an alternative |free()|, etc. etc.
>>
>> The only issue is that if multiple allocators are active within a
>> single process we may need some kind of "grouping" to tell valgrind
>> that memory allocated by |sh_malloc()| cannot be freed by |tcfree()|
>> or |_ast_free()| ... maybe it could be done using '{'- and '}'-pairs,
>> e.g. $ valgrind
>> "--allocator-sym-redirect={sh_malloc=malloc,sh_free=free,sh_calloc=calloc},{_ast_malloc=malloc,_ast_free=free,_ast_calloc=calloc}"
>> ... #
>
> The idea of (finally!) providing such an option sounds very good.
> Until now the only way to probe python and bash4 via valgrind has
> been to poke around in the valgrind sources (which should never be
> necessary).
>
> I also think letting valgrind detect the mixing of different
> allocators would be a very valuable feature, since this has become a
> source of more and more bugs. It usually happens in complex projects
> that use many different shared libraries, each with its own memory
> allocator.

Uhm... has there been any feedback on that idea yet?

----

Bye,
Roland

-- 
  __ .  . __
 (o.\ \/ /.o) [email protected]
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 3992797
 (;O/ \/ \O;)

_______________________________________________
Valgrind-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/valgrind-users
