Re: [OMPI users] profile the performance of a MPI code: how much traffic is being generated?
On Wed, Sep 30, 2009 at 3:16 PM, Peter Kjellstrom wrote:
> Not MPI aware, but, you could watch network traffic with a tool such as
> collectl in real-time.

collectl is a great idea. I am going to try that now.

--
Rahul
Re: [OMPI users] profile the performance of a MPI code: how much traffic is being generated?
On Tuesday 29 September 2009, Rahul Nabar wrote:
> On Tue, Sep 29, 2009 at 10:40 AM, Eugene Loh wrote:
> > to know. It sounds like you want to be able to watch some % utilization
> > of a hardware interface as the program is running. I *think* these tools
> > (the ones on the FAQ, including MPE, Vampir, and Sun Studio) are not of
> > that class.
>
> You are correct. A real-time tool that sniffs the MPI traffic would be
> best. Post mortem profilers would be the next best option, I assume.
> I was trying to compile MPE but gave up. Too many errors. Trying to
> decide if I should prod on or look at another tool.

Not MPI aware, but, you could watch network traffic with a tool such as
collectl in real-time.

/Peter
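For reference, a sketch of watching network traffic in real time with
collectl (flag meanings per the collectl man page; lowercase subsystem
letters give a summary, uppercase per-device detail):

  collectl -sn -i 1    # aggregate network KB/s, sampled once a second
  collectl -sN -i 1    # the same, broken out per interface (eth0, eth1, ...)

The second form is the useful one here, since it shows whether both bonded
interfaces are actually carrying traffic.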
Re: [OMPI users] profile the performance of a MPI code: how much traffic is being generated?
It seems icc or icpc does not know about the option -mp, so
"mpiCC -g -O2 -mp simple_mpi.c" complains about -mp.

> mpicc -g -O2 -mp simple_mpi.c
cc1: error: unrecognized command line option "-mp"
> mpicxx -g -O2 -mp simple_mpi.c
cc1plus: error: unrecognized command line option "-mp"

I used mpich2's compiler wrappers, built with the Intel compiler, for
testing (since the error comes from the native compiler, it does not matter
which MPI implementation is used).

Also, MPI_CC should be set to mpicc, not mpiCC (which is the C++ MPI
compiler wrapper).

Anyway, I suggest you try configuring MPE as follows:

  ./configure CC=icc F77=ifort \
      MPI_CC=/usr/local/ompi-ifort/bin/mpicc \
      MPI_F77=/usr/local/ompi-ifort/bin/mpif77 \
      --prefix=..

mpe2-1.0.6p1 isn't the latest. The latest mpe2 is bundled with mpich2
(currently at 1.1.1p1); just use mpich2/src/mpe2.

Let me know how it goes.

A.Chan

- "Rahul Nabar" wrote:
> On Tue, Sep 29, 2009 at 1:33 PM, Anthony Chan wrote:
> >
> > Rahul,
> >
> > What errors did you see when compiling MPE for OpenMPI?
> > Can you send me the configure and make outputs as seen on
> > your terminal? Also, what version of MPE are you using
> > with OpenMPI?
>
> Version: mpe2-1.0.6p1
>
> ./configure FC=ifort CC=icc CXX=icpc F77=ifort CFLAGS="-g -O2 -mp"
> FFLAGS="-mp -recursive" CXXFLAGS="-g -O2" CPPFLAGS=-DpgiFortran
> MPI_CC=/usr/local/ompi-ifort/bin/mpiCC
> MPI_F77=/usr/local/ompi-ifort/bin/mpif77
> MPI_LIBS=/usr/local/ompi-ifort/lib/
>
> Configuring MPE Profiling System with 'FC=ifort' 'CC=icc' 'CXX=icpc'
> 'F77=ifort' 'CFLAGS=-g -O2 -mp' 'FFLAGS=-mp -recursive' 'CXXFLAGS=-g
> -O2' 'CPPFLAGS=-DpgiFortran' 'MPI_CC=/usr/local/ompi-ifort/bin/mpiCC'
> 'MPI_F77=/usr/local/ompi-ifort/bin/mpif77'
> 'MPI_LIBS=/usr/local/ompi-ifort/lib/'
> checking for current directory name... /src/mpe2-1.0.6p1
> checking gnumake... yes using --no-print-directory
> checking BSD 4.4 make... no - whew
> checking OSF V3 make... no
> checking for virtual path format... VPATH
> User supplied MPI implmentation (Good Luck!)
> checking for gcc... icc
> checking for C compiler default output file name... a.out
> checking whether the C compiler works... yes
> checking whether we are cross compiling... no
> checking for suffix of executables...
> checking for suffix of object files... o
> checking whether we are using the GNU C compiler... yes
> checking whether icc accepts -g... yes
> checking for icc option to accept ANSI C... none needed
> checking whether MPI_CC has been set ... /usr/local/ompi-ifort/bin/mpiCC
> checking whether we are using the GNU Fortran 77 compiler... no
> checking whether ifort accepts -g... yes
> checking whether MPI_F77 has been set ... /usr/local/ompi-ifort/bin/mpif77
> checking for the linkage of the supplied MPI C definitions ... no
> configure: error: Cannot link with basic MPI C program!
>     Check your MPI include paths, MPI libraries and MPI CC compiler
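If the configure above succeeds, the rest is the usual sequence; mpe2 also
installs its own compiler wrapper, whose -mpilog switch links in the logging
layer (wrapper name and flag per the mpe2 docs; a sketch, not tested here):

  make
  make install

  # rebuild the application with MPE logging and run it; a CLOG2 logfile
  # (named after the executable) is written for viewing with jumpshot
  mpecc -mpilog simple_mpi.c -o simple_mpi
  mpirun -np 4 ./simple_mpi
  jumpshot simple_mpi.clog2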
Re: [OMPI users] profile the performance of a MPI code: how much traffic is being generated?
On Tue, Sep 29, 2009 at 1:33 PM, Anthony Chan wrote:
>
> Rahul,
>
> What errors did you see when compiling MPE for OpenMPI?
> Can you send me the configure and make outputs as seen on
> your terminal? Also, what version of MPE are you using
> with OpenMPI?

Version: mpe2-1.0.6p1

./configure FC=ifort CC=icc CXX=icpc F77=ifort CFLAGS="-g -O2 -mp"
FFLAGS="-mp -recursive" CXXFLAGS="-g -O2" CPPFLAGS=-DpgiFortran
MPI_CC=/usr/local/ompi-ifort/bin/mpiCC
MPI_F77=/usr/local/ompi-ifort/bin/mpif77
MPI_LIBS=/usr/local/ompi-ifort/lib/

Configuring MPE Profiling System with 'FC=ifort' 'CC=icc' 'CXX=icpc'
'F77=ifort' 'CFLAGS=-g -O2 -mp' 'FFLAGS=-mp -recursive' 'CXXFLAGS=-g -O2'
'CPPFLAGS=-DpgiFortran' 'MPI_CC=/usr/local/ompi-ifort/bin/mpiCC'
'MPI_F77=/usr/local/ompi-ifort/bin/mpif77'
'MPI_LIBS=/usr/local/ompi-ifort/lib/'
checking for current directory name... /src/mpe2-1.0.6p1
checking gnumake... yes using --no-print-directory
checking BSD 4.4 make... no - whew
checking OSF V3 make... no
checking for virtual path format... VPATH
User supplied MPI implmentation (Good Luck!)
checking for gcc... icc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether icc accepts -g... yes
checking for icc option to accept ANSI C... none needed
checking whether MPI_CC has been set ... /usr/local/ompi-ifort/bin/mpiCC
checking whether we are using the GNU Fortran 77 compiler... no
checking whether ifort accepts -g... yes
checking whether MPI_F77 has been set ... /usr/local/ompi-ifort/bin/mpif77
checking for the linkage of the supplied MPI C definitions ... no
configure: error: Cannot link with basic MPI C program!
    Check your MPI include paths, MPI libraries and MPI CC compiler
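One way to narrow down a configure-time link failure like this (an editorial
sketch, not from the original thread) is to test the wrapper by itself. Note
also that MPI_LIBS above is a bare directory rather than linker flags (e.g.
"-L/usr/local/ompi-ifort/lib -lmpi"), which may contribute. Open MPI's
wrappers can print the command line they would run:

  # show the underlying compile/link line the wrapper generates
  /usr/local/ompi-ifort/bin/mpicc --showme

  # try the basic link with the C wrapper instead of mpiCC
  # (hello_mpi.c here stands in for any trivial MPI C program)
  /usr/local/ompi-ifort/bin/mpicc hello_mpi.c -o hello_mpi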
Re: [OMPI users] profile the performance of a MPI code: how much traffic is being generated?
Rahul,

- "Rahul Nabar" wrote:
> Post mortem profilers would be the next best option, I assume.
> I was trying to compile MPE but gave up. Too many errors. Trying to
> decide if I should prod on or look at another tool.

What errors did you see when compiling MPE for OpenMPI?
Can you send me the configure and make outputs as seen on
your terminal? Also, what version of MPE are you using
with OpenMPI?

A.Chan
Re: [OMPI users] profile the performance of a MPI code: how much traffic is being generated?
Hi,

I'm writing a simple post-mortem profiling tool that provides some of the
information you are looking for. That said, the tool, Loba, isn't publicly
available just yet. In the meantime, take a look at mpiP
(http://mpip.sourceforge.net/).

--
Samuel K. Gutierrez
Los Alamos National Laboratory

> On Tue, Sep 29, 2009 at 10:40 AM, Eugene Loh wrote:
> > to know. It sounds like you want to be able to watch some % utilization
> > of a hardware interface as the program is running. I *think* these tools
> > (the ones on the FAQ, including MPE, Vampir, and Sun Studio) are not of
> > that class.
>
> You are correct. A real-time tool that sniffs the MPI traffic would be
> best. Post mortem profilers would be the next best option, I assume.
> I was trying to compile MPE but gave up. Too many errors. Trying to
> decide if I should prod on or look at another tool.
>
> --
> Rahul
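For reference, a sketch of trying mpiP: it is link-time instrumentation, so
no source changes are needed (the exact set of support libraries, e.g.
binutils' bfd/iberty and libunwind, varies by installation; check the mpiP
README, and my_app/the install path below are placeholders):

  # relink the application against mpiP; -g lets it resolve call sites
  mpicc -g my_app.c -o my_app \
      -L/path/to/mpip/lib -lmpiP -lm -lbfd -liberty -lunwind
  mpirun -np 8 ./my_app
  # a per-run *.mpiP text report summarizes time and message volume
  # per MPI call site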
Re: [OMPI users] profile the performance of a MPI code: how much traffic is being generated?
On Tue, Sep 29, 2009 at 10:40 AM, Eugene Loh wrote:
> to know. It sounds like you want to be able to watch some % utilization of
> a hardware interface as the program is running. I *think* these tools (the
> ones on the FAQ, including MPE, Vampir, and Sun Studio) are not of that
> class.

You are correct. A real-time tool that sniffs the MPI traffic would be best.
Post mortem profilers would be the next best option, I assume. I was trying
to compile MPE but gave up. Too many errors. Trying to decide if I should
prod on or look at another tool.

--
Rahul
Re: [OMPI users] profile the performance of a MPI code: how much traffic is being generated?
If MPE and Vampir represent the class of tools you're interested in, there
is a performance-tool FAQ at http://www.open-mpi.org/faq/?category=perftools
listing some other tools in this class.

Note that these are really postmortem tools. That is, you typically run the
code first and then look at the results later. In certain cases you can
start looking at results while the job is still running, but mostly these
tools are built for postmortem analysis.

That may still work for you. E.g., Sun Studio Analyzer (which happens to be
the only one of these tools I know well) allows you to look at in-flight
messages or bytes, either in general or for a specific connection. But I'm
guessing these are indirect ways of looking at what you really want to know.
It sounds like you want to be able to watch some % utilization of a hardware
interface as the program is running. I *think* these tools (the ones on the
FAQ, including MPE, Vampir, and Sun Studio) are not of that class. But maybe
the indirect, postmortem methods suffice. You decide.

Matthieu Brucher wrote:
> You can try MPE (free) or Vampir (not free, but can be integrated inside
> OpenMPI).
>
> 2009/9/29 Rahul Nabar:
> > I have a code that seems to run about 40% faster when I bond together
> > twin eth interfaces. The question, of course, arises: is it really
> > producing enough traffic to keep twin 1 Gig eth interfaces busy? I don't
> > really believe this, but I need a way to check. What are good tools to
> > monitor the MPI performance of a running job? Basically, what throughput
> > load is it imposing on the eth interfaces? Any suggestions? The code
> > does not seem to produce much disk I/O as profiled via strace (in case
> > NFS I/O is the bottleneck).
[OMPI users] profile the performance of a MPI code: how much traffic is being generated?
I have a code that seems to run about 40% faster when I bond together twin
eth interfaces. The question, of course, arises: is it really producing
enough traffic to keep twin 1 Gig eth interfaces busy? I don't really
believe this, but I need a way to check.

What are good tools to monitor the MPI performance of a running job?
Basically, what throughput load is it imposing on the eth interfaces? Any
suggestions?

The code does not seem to produce much disk I/O as profiled via strace (in
case NFS I/O is the bottleneck).

--
Rahul
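For a quick first check that needs no MPI awareness, the kernel's cumulative
byte counters in /proc/net/dev can be sampled; the difference between
successive samples is the throughput (a sketch; eth*/bond* interface names
are assumptions for a bonded pair):

  # print cumulative RX/TX byte counters for eth*/bond* once a second
  while sleep 1; do
    awk -F'[: ]+' '/eth|bond/ { printf "%s rx=%s tx=%s\n", $2, $3, $11 }' \
        /proc/net/dev
  done

If the sysstat package is installed, "sar -n DEV 1" reports the same
per-interface rates directly.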