On Fri, Jun 26, 2009 at 6:24 PM, Dominic Walsh
<dwa...@abingdon.oilfield.slb.com> wrote:
> I have just been trying out SCALI MPI with Valgrind and have observed
> that when running across multiple nodes the processes seem to end up
> spinning in MPI_Init. Smaller problems running over shared memory
> appear fine.
>
> The configuration is:
>
> 8 Nodes connected by Infiniband
> 2 Sockets/Node with 1 MPI process each
> Plenty of spare memory
> No competing processes - memcheck is at 100% CPU
>
> My hunch is that there is a weird packet-dropping effect, but I am
> hoping someone out there has an idea. I was hoping I could tell
> Valgrind to skip instrumenting libmpi.so, but either I didn't read
> the manual well enough or that breaks a fundamental principle (which
> I suspect it might).
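
For reference: Memcheck cannot skip instrumenting a single shared
object such as libmpi.so; instrumentation is all-or-nothing. The
closest equivalents are suppressing the reports it produces
(--suppressions=<file>), or preloading the libmpiwrap wrapper that
ships with Valgrind so that Memcheck understands MPI's buffer
handling. A minimal sketch of such an invocation, with placeholder
paths and a hypothetical suppression file mpi.supp:

  LD_PRELOAD=<valgrind-libdir>/libmpiwrap-<platform>.so \
      mpirun -np 16 valgrind --suppressions=mpi.supp ./your_app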

Have you noticed the thread about Valgrind and memcpy() on the
ofa-general mailing list?

See also http://lists.openfabrics.org/pipermail/general/2009-June/060037.html.
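
In case it helps, the usual Valgrind-vs-InfiniBand pitfall is that
buffers the HCA fills by DMA never pass through instrumented code, so
Memcheck still considers them undefined; libibverbs, when built with
Valgrind support, marks them defined with client requests such as
VALGRIND_MAKE_MEM_DEFINED. A minimal, self-contained sketch of that
kind of annotation (the buffer here is a hypothetical stand-in, not
real verbs code):

  #include <stdlib.h>
  #include <string.h>
  #include <valgrind/memcheck.h>

  int main(void)
  {
      size_t len = 4096;
      char *buf = malloc(len);        /* stand-in for an RDMA recv buffer */

      /* Suppose the HCA has DMA'd a message into buf at this point.
       * Memcheck never sees that write, so it still treats the bytes
       * as undefined and will flag any use of them. */

      /* Tell Memcheck the range now holds valid data; this is the kind
       * of annotation libibverbs uses when built with Valgrind support.
       * The macro is a no-op when not running under Valgrind. */
      VALGRIND_MAKE_MEM_DEFINED(buf, len);

      char head[16];
      memcpy(head, buf, sizeof head); /* no uninitialised-value report */

      free(buf);
      return 0;
  }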

Bart.
