Re: [OMPI users] built-in memchecker support
Gilles Gouaillardet writes:

> Dave,
>
> the builtin memchecker can detect MPI usage errors such as modifying
> the buffer passed to MPI_Isend() before the request completes

OK, thanks.  The implementation looks rather different, and it's not
clear without checking the code in detail how it differs from the
preload library (which does claim to check at least some correctness),
or why that sort of check has to be built in.

> all the extra work is protected
>
>   if ( running_under_valgrind() ) {
>       extra_checks();
>   }
>
> so if you are not running under valgrind, the overhead should be
> unnoticeable

Thanks.  Is there a good reason not to enable it by default, then?

(Apologies: I've just found and checked the FAQ entry, and it does
actually say that, in contradiction to the paper it references.  I
assume the implementation has changed since then.)

A deficiency of the preload library I just realized is that it says
it's only MPI-2.
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users
Re: [OMPI users] built-in memchecker support
Dave,

the builtin memchecker can detect MPI usage errors such as modifying
the buffer passed to MPI_Isend() before the request completes

all the extra work is protected

  if ( running_under_valgrind() ) {
      extra_checks();
  }

so if you are not running under valgrind, the overhead should be
unnoticeable

Cheers,

Gilles

On Thu, Aug 24, 2017 at 8:37 PM, Christoph Niethammer wrote:
> Hi Dave,
>
> The memchecker interface is an addition which allows other tools to be
> used as well.
>
> A more recent one is memPin [1].
> As stated in the cited paper, the overhead is minimal when not attached
> to a tool.
> From my experience a program running under pin tool control runs much
> faster than under valgrind.
>
> Best
> Christoph Niethammer
>
> [1]
> http://www.springer.com/cda/content/document/cda_downloaddocument/9783642373480-c1.pdf?SGWID=0-0-45-1397615-p175067491
>
> ----- Original Message -----
> From: "Dave Love"
> To: "Open MPI Users"
> Sent: Thursday, August 24, 2017 1:22:17 PM
> Subject: [OMPI users] built-in memchecker support
>
> Apropos configuration parameters for packaging:
>
> Is there a significant benefit to configuring built-in memchecker
> support, rather than using the valgrind preload library?  I doubt being
> able to use another PMPI tool directly at the same time counts.
>
> Also, are there measurements of the performance impact of configuring,
> but not using, it with recent hardware and software?  I don't know how
> relevant the results in https://www.open-mpi.org/papers/parco-2007/
> would be now, especially on a low-latency network.
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users
Re: [OMPI users] built-in memchecker support
Hi Dave,

The memchecker interface is an addition which allows other tools to be
used as well.

A more recent one is memPin [1].
As stated in the cited paper, the overhead is minimal when not attached
to a tool.
From my experience a program running under pin tool control runs much
faster than under valgrind.

Best
Christoph Niethammer

[1]
http://www.springer.com/cda/content/document/cda_downloaddocument/9783642373480-c1.pdf?SGWID=0-0-45-1397615-p175067491

----- Original Message -----
From: "Dave Love"
To: "Open MPI Users"
Sent: Thursday, August 24, 2017 1:22:17 PM
Subject: [OMPI users] built-in memchecker support

Apropos configuration parameters for packaging:

Is there a significant benefit to configuring built-in memchecker
support, rather than using the valgrind preload library?  I doubt being
able to use another PMPI tool directly at the same time counts.

Also, are there measurements of the performance impact of configuring,
but not using, it with recent hardware and software?  I don't know how
relevant the results in https://www.open-mpi.org/papers/parco-2007/
would be now, especially on a low-latency network.
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users