Okay cool, mine already breaks with P=2, so I'll try this soon. Thanks
for the impatient-idiot's-guide :)
On 18 May 2011 14:15, Jeff Squyres wrote:
> If you're only running with a few MPI processes, you might be able to get
> away with:
>
> mpirun -np 4 valgrind ./my_mpi_application
>
> If you run any more than that, the output gets too jumbled and you should
> output each process' valgrind stdout to a different file with the --log-file
> option (IIRC).
If you're only running with a few MPI processes, you might be able to get away
with:
mpirun -np 4 valgrind ./my_mpi_application
If you run any more than that, the output gets too jumbled and you should
output each process' valgrind stdout to a different file with the --log-file
option (IIRC).
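For example, something along these lines should do it (a rough sketch; the log
file names are arbitrary, %p is valgrind's substitution for the process PID,
and %q{OMPI_COMM_WORLD_RANK} assumes Open MPI's per-process rank environment
variable):

mpirun -np 4 valgrind --log-file=vg.%p.log ./my_mpi_application

or, to name the files by MPI rank instead of PID:

mpirun -np 4 valgrind --log-file=vg.rank-%q{OMPI_COMM_WORLD_RANK}.log ./my_mpi_application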
Hi Jeff,
Thanks for the response.
On 18 May 2011 13:30, Jeff Squyres wrote:
> *Usually* when we see segv's in calls to alloc, it means that there was
> previously some kind of memory bug, such as an array overflow or something
> like that (i.e., something that stomped on the memory allocation tables,
> causing the next alloc to fail).
*Usually* when we see segv's in calls to alloc, it means that there was
previously some kind of memory bug, such as an array overflow or something like
that (i.e., something that stomped on the memory allocation tables, causing the
next alloc to fail).
Have you tried running your code through a memory-checking debugger, such as
valgrind?
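To make that failure mode concrete, here is a contrived C sketch (not from the
original poster's code) of the kind of overflow Jeff describes; the crash
typically shows up in a later alloc/free rather than at the buggy write:

#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *buf = malloc(16);

    /* Bug: writes 32 bytes into a 16-byte block.  The overflow tramples the
     * allocator's bookkeeping just past the block, so the crash tends to
     * appear later, inside malloc() or free(), far from the real bug. */
    memset(buf, 'x', 32);

    char *other = malloc(64);   /* may segfault here... */
    free(buf);                  /* ...or here */
    free(other);
    return 0;
}

Run under valgrind, the memset is reported as an "Invalid write", which points
at the actual bug rather than the eventual crash site.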
Hi all,
I hope to provide enough information to make my problem clear. I
have been debugging a lot after continually getting a segfault
in my program, but then I decided to try and run it on another
node, and it didn't segfault! The program which causes this
strange behaviour can be downloaded wit