On Wed, Sep 30, 2009 at 3:16 PM, Peter Kjellstrom wrote:
> Not MPI aware, but, you could watch network traffic with a tool such as
> collectl in real-time.
collectl is a great idea. I am going to try that now.
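For anyone following along: what collectl (or any interface monitor) ultimately reports is a rate computed from two byte-counter samples, as found in /proc/net/dev. A minimal sketch of that arithmetic with made-up counter values (the numbers here are illustrative, not a real measurement):

```shell
#!/bin/sh
# Sketch of the rate arithmetic behind tools like collectl:
# two interface byte-counter samples taken one second apart.
# The values are made up for illustration.
BYTES_T0=1000000
BYTES_T1=126000000
INTERVAL=1
RATE_MBPS=$(( (BYTES_T1 - BYTES_T0) * 8 / INTERVAL / 1000000 ))
echo "${RATE_MBPS} Mbit/s"
```

A result near 1000 Mbit/s would mean one GigE link is saturated, which is the question being asked in this thread.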
--
Rahul
On Tuesday 29 September 2009, Rahul Nabar wrote:
> On Tue, Sep 29, 2009 at 10:40 AM, Eugene Loh wrote:
> > to know. It sounds like you want to be able to watch some % utilization
> > of a hardware interface as the program is running. I *think* these tools
> > (the ones on
It seems the underlying compiler does not recognize the option -mp: the cc1/cc1plus errors below come from gcc/g++, which these wrappers are invoking (-mp is an icc/icpc flag), so "mpicc -g -O2 -mp simple_mpi.c" fails.
> mpicc -g -O2 -mp simple_mpi.c
cc1: error: unrecognized command line option "-mp"
> mpicxx -g -O2 -mp simple_mpi.c
cc1plus: error: unrecognized command line option "-mp"
I used
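One way to confirm why the wrapper rejects -mp is to ask it which real compiler it invokes. A hedged sketch: --showme is Open MPI's wrapper flag (MPICH spells it -show), and the snippet falls back gracefully if mpicc is not on the PATH:

```shell
#!/bin/sh
# Ask the Open MPI compiler wrapper which back-end compiler it runs;
# cc1/cc1plus errors indicate gcc/g++, which has no -mp flag.
# --showme is Open MPI's spelling; MPICH uses -show instead.
RESULT=$(mpicc --showme:command 2>/dev/null || echo "mpicc not found on PATH")
echo "$RESULT"
```

If the output names gcc or g++, the fix is to drop -mp or rebuild the wrappers around the Intel compilers.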
On Tue, Sep 29, 2009 at 1:33 PM, Anthony Chan wrote:
>
> Rahul,
>
>
> What errors did you see when compiling MPE for OpenMPI?
> Can you send me the configure and make outputs as seen on
> your terminal? Also, what version of MPE are you using
> with OpenMPI?
Version:
Rahul,
- "Rahul Nabar" wrote:
> Post-mortem profilers would be the next best option, I assume.
> I was trying to compile MPE but gave up; too many errors. Trying to
> decide if I should press on or look at another tool.
What errors did you see when compiling MPE for
Hi,
I'm writing a simple post-mortem profiling tool that provides some of
the information you are looking for. That said, the tool,
Loba, isn't publicly available just yet. In the meantime, take a
look at mpiP (http://mpip.sourceforge.net/).
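For anyone wanting numbers right away: mpiP hooks in at link time, so per its documentation you relink rather than recompile. A sketch only; the extra libraries below are the commonly cited recipe and the exact set varies by installation:

```shell
# Relink against mpiP (no source changes needed); the support-library
# list here is the commonly documented one and may differ on your system.
mpicc -g simple_mpi.c -o simple_mpi -lmpiP -lm -lbfd -liberty -lunwind
# At MPI_Finalize the run writes a plain-text *.mpiP report
# summarizing time spent in MPI calls per rank.
mpirun -np 4 ./simple_mpi
```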
--
Samuel K. Gutierrez
Los
On Tue, Sep 29, 2009 at 10:40 AM, Eugene Loh wrote:
> to know. It sounds like you want to be able to watch some % utilization of
> a hardware interface as the program is running. I *think* these tools (the
> ones on the FAQ, including MPE, Vampir, and Sun Studio) are not of
If MPE and Vampir represent the class of tools you're interested in,
there is a performance-tool FAQ at
http://www.open-mpi.org/faq/?category=perftools listing some other tools
in this class.
Note that these are really postmortem tools. That is, you typically run
the code first and then analyze the recorded data afterward.
I have a code that seems to run about 40% faster when I bond together
twin eth interfaces. The question, of course, arises: is it really
producing enough traffic to keep twin 1 Gig eth interfaces busy? I
don't really believe this but need a way to check.
What are good tools to monitor the MPI
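A quick back-of-the-envelope check is possible even before installing any tool: one GigE link carries roughly 125 MB/s of payload in each direction, so a single measured transfer rate settles the question. A hypothetical sketch, with a made-up measurement standing in for a real reading:

```shell
#!/bin/sh
# Hypothetical sanity check: does measured traffic actually need the
# second bonded link? One GigE link tops out near 125 MB/s.
LINK_CAP_MBS=125
MEASURED_MBS=180   # made-up value standing in for a real measurement
if [ "$MEASURED_MBS" -gt "$LINK_CAP_MBS" ]; then
    VERDICT="traffic exceeds one 1GigE link"
else
    VERDICT="one link is enough"
fi
echo "$VERDICT"
```

If the real measured rate never approaches 125 MB/s, the 40% speedup probably comes from something other than raw bandwidth (e.g. reduced contention or latency effects), which is worth knowing before buying more NICs.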