If you're only running with a few MPI processes, you might be able to get away 
with:

mpirun -np 4 valgrind ./my_mpi_application

If you run with any more than that, the output gets too jumbled, so you should 
send each process's valgrind output to a separate file with the --log-file 
option (IIRC), as in the example below.
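Something like this should work (untested; valgrind expands %p in the log 
filename to each process's PID, so adjust the pattern to taste):

mpirun -np 16 valgrind --log-file=valgrind.out.%p ./my_mpi_application

That gives you one valgrind log per MPI process to inspect after the run.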

I personally like these valgrind options:

valgrind --num-callers=50 --db-attach=yes --tool=memcheck --leak-check=yes \
    --show-reachable=yes
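Putting those together under MPI would look something like this (note that 
--db-attach=yes prompts interactively, so you probably want to drop it when 
running more than one process):

mpirun -np 4 valgrind --num-callers=50 --tool=memcheck --leak-check=yes \
    --show-reachable=yes --log-file=valgrind.out.%p ./my_mpi_application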



On May 18, 2011, at 8:49 AM, Paul van der Walt wrote:

> Hi Jeff,
> 
> Thanks for the response.
> 
> On 18 May 2011 13:30, Jeff Squyres <jsquy...@cisco.com> wrote:
>> *Usually* when we see segv's in calls to alloc, it means that there was 
>> previously some kind of memory bug, such as an array overflow or something 
>> like that (i.e., something that stomped on the memory allocation tables, 
>> causing the next alloc to fail).
>> 
>> Have you tried running your code through a memory-checking debugger?
> 
> I sort-of tried with valgrind, but I'm not really sure how to
> interpret the output (I'm not such a C-wizard). I'll have another look
> a little later then and report back. I suppose I should RTFM on how to
> properly invoke valgrind so it makes sense with an MPI program?
> 
> Paul
> 
> -- 
> O< ascii ribbon campaign - stop html mail - www.asciiribbon.org


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/

