Re: [OMPI users] memchecker overhead?

2009-10-26 Thread Ashley Pittman
On Mon, 2009-10-26 at 16:21 -0400, Jeff Squyres wrote:

> there's a tiny/small amount of overhead inserted by OMPI telling
> Valgrind "this memory region is ok", but we live in an intensely
> competitive HPC environment.

I may be wrong, but I seem to remember Julian saying the overhead is
twelve cycles per Valgrind call.  Of course, calculating what to pass
to Valgrind may add to this.

> The option to enable this Valgrind Goodness in OMPI is
> --with-valgrind.  I *think* the option may be the same for libibverbs,
> but I don't remember offhand.
> 
> That being said, I'm guessing that we still have bunches of other  
> valgrind warnings that may be legitimate.  We can always use some help  
> to stamp out these warnings...  :-)

I note there is a ticket open for this; being "Valgrind clean" is a
very desirable feature for any software, and particularly for a
library, IMHO.

https://svn.open-mpi.org/trac/ompi/ticket/1720

Ashley,

-- 

Ashley Pittman, Bath, UK.

Padb - A parallel job inspection tool for cluster computing
http://padb.pittman.org.uk



Re: [OMPI users] memchecker overhead?

2009-10-26 Thread Jed Brown
Jeff Squyres wrote:

> Verbs and Open MPI don't have these options on by default because a)
> you need to compile against Valgrind's header files to get them to
> work, and b) there's a tiny/small amount of overhead inserted by OMPI
> telling Valgrind "this memory region is ok", but we live in an
> intensely competitive HPC environment.

It's certainly competitive, but we spend most of our implementation time
getting things correct rather than tuning.  The huge speed benefits come
from algorithmic advances, and finding bugs quickly makes the
implementation of new algorithms easier.  I'm not arguing that it should
be on by default, but it's helpful to have an environment where the
lower-level libs are valgrind-clean.  These days, I usually revert to
MPICH when hunting something with valgrind, but use OMPI most other
times.

> The option to enable this Valgrind Goodness in OMPI is --with-valgrind. 
> I *think* the option may be the same for libibverbs, but I don't
> remember offhand.

I see plenty of warnings over the sm btl.  Several variations,
including the excessive

--enable-debug --enable-mem-debug --enable-mem-profile \
   --enable-memchecker --with-valgrind=/usr

were not sufficient.  (I think everything in this line except
--with-valgrind actually increases the number of warnings, but there
is still a nontrivial amount even with plain --with-valgrind.)


Thanks,

Jed





Re: [OMPI users] memchecker overhead?

2009-10-26 Thread Jeff Squyres
There's a whole class of valgrind warnings that are generated when you
use OS-bypass networks like OpenFabrics.  The verbs library and Open
MPI can be configured and compiled with additional instructions that
tell Valgrind where the "problematic" spots are, and that the memory
is actually ok (because it's memory that came from outside of
Valgrind's scope of influence).  Verbs and Open MPI don't have these
options on by default because a) you need to compile against
Valgrind's header files to get them to work, and b) there's a
tiny/small amount of overhead inserted by OMPI telling Valgrind "this
memory region is ok", but we live in an intensely competitive HPC
environment.


The option to enable this Valgrind Goodness in OMPI is
--with-valgrind.  I *think* the option may be the same for libibverbs,
but I don't remember offhand.


That being said, I'm guessing that we still have bunches of other  
valgrind warnings that may be legitimate.  We can always use some help  
to stamp out these warnings...  :-)



On Oct 26, 2009, at 4:09 PM, Jed Brown wrote:


Samuel K. Gutierrez wrote:
> Hi Jed,
>
> I'm not sure if this will help, but it's worth a try.  Turn off OMPI's
> memory wrapper and see what happens.
>
> c-like shell
> setenv OMPI_MCA_memory_ptmalloc2_disable 1
>
> bash-like shell
> export OMPI_MCA_memory_ptmalloc2_disable=1
>
> Also add the following MCA parameter to your run command.
>
> --mca mpi_leave_pinned 0

Thanks for the tip, but these make very little difference.

Jed


___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users



--
Jeff Squyres
jsquy...@cisco.com



Re: [OMPI users] memchecker overhead?

2009-10-26 Thread Jed Brown
Samuel K. Gutierrez wrote:
> Hi Jed,
> 
> I'm not sure if this will help, but it's worth a try.  Turn off OMPI's
> memory wrapper and see what happens.
> 
> c-like shell
> setenv OMPI_MCA_memory_ptmalloc2_disable 1
> 
> bash-like shell
> export OMPI_MCA_memory_ptmalloc2_disable=1
> 
> Also add the following MCA parameter to your run command.
> 
> --mca mpi_leave_pinned 0

Thanks for the tip, but these make very little difference.

Jed





Re: [OMPI users] memchecker overhead?

2009-10-26 Thread Samuel K. Gutierrez

Hi Jed,

I'm not sure if this will help, but it's worth a try.  Turn off OMPI's  
memory wrapper and see what happens.


c-like shell
setenv OMPI_MCA_memory_ptmalloc2_disable 1

bash-like shell
export OMPI_MCA_memory_ptmalloc2_disable=1

Also add the following MCA parameter to your run command.

--mca mpi_leave_pinned 0

--
Samuel K. Gutierrez
Los Alamos National Laboratory


On Oct 26, 2009, at 1:41 PM, Jed Brown wrote:


Jeff Squyres wrote:

> Using --enable-debug adds in a whole pile of developer-level run-time
> checking and whatnot.  You probably don't want that on production runs.

I have found that --enable-debug --enable-memchecker actually produces
more valgrind noise than leaving them off.  Are there options to make
Open MPI strict about initializing and freeing memory?  At one point I
tried to write policy files, but even with judicious globbing, I kept
getting different warnings when run on a different program.  (All these
codes were squeaky-clean under MPICH2.)

Jed





Re: [OMPI users] memchecker overhead?

2009-10-26 Thread Jed Brown
Jeff Squyres wrote:
> Using --enable-debug adds in a whole pile of developer-level run-time
> checking and whatnot.  You probably don't want that on production runs.

I have found that --enable-debug --enable-memchecker actually produces
more valgrind noise than leaving them off.  Are there options to make
Open MPI strict about initializing and freeing memory?  At one point I
tried to write policy files, but even with judicious globbing, I kept
getting different warnings when run on a different program.  (All these
codes were squeaky-clean under MPICH2.)
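[The "policy files" mentioned here are Valgrind suppression files,
passed with valgrind's --suppressions=FILE option.  A minimal sketch of
one entry follows; the entry name and the object glob are illustrative
guesses, not known-good OMPI suppressions.]

```
# Suppress a conditional-jump-on-uninitialised-value warning whose
# stack ends inside Open MPI's shared library.  "..." matches any
# number of stack frames; "obj:" patterns may use shell-style globs.
{
   ompi-sm-btl-uninitialised
   Memcheck:Cond
   ...
   obj:*/libmpi.so*
}
```

Globbing over object names is part of why such files tend to be fragile
across programs: any frame that resolves differently stops the
suppression from matching.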

Jed





Re: [OMPI users] memchecker overhead?

2009-10-26 Thread Brock Palen

On Oct 26, 2009, at 3:29 PM, Jeff Squyres wrote:


On Oct 26, 2009, at 3:23 PM, Brock Palen wrote:


Is there a large overhead for
--enable-debug --enable-memchecker?



--enable-debug, yes, there is a pretty large penalty.  --enable-debug
is really only intended for Open MPI developers.  If you just want an
OMPI that was compiled with debugging symbols, then just add -g to the
CFLAGS/CXXFLAGS in OMPI's configure, perhaps like this:


Interesting, we were just looking at the memchecker functionality and
don't want to double the number of MPI builds we offer.  In the
Debugging FAQ, section 10,

http://www.open-mpi.org/faq/?category=debugging#memchecker_how

it says you need --enable-debug to use --enable-memchecker; is this
really the case then?




 shell$ ./configure CFLAGS=-g CXXFLAGS=-g ...

Using --enable-debug adds in a whole pile of developer-level run-time
checking and whatnot.  You probably don't want that on production runs.


I'll let the HLRS guys comment on the cost of --enable-memchecker; I  
suspect the answer will be "it depends".


--
Jeff Squyres
jsquy...@cisco.com
