Re: [OMPI users] ompi-restart issue : ompi-restart doesn't work across nodes - possible installation problem or environment setting problem??

2008-10-08 Thread arun dhakne
I have configured with the additional flags (--enable-ft-thread
--enable-mpi-threads), but there is no change in behaviour; it still
gives a seg fault.

Open MPI version: 1.3a1r19685

BLCR version: 0.7.3


The core file is attached.
hello.c, the sample MPI program whose core was dumped, is also attached.
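
(For reference, the backtrace Josh asked for can be pulled from the attached
core with something like the following; the executable and core file names
here are placeholders.)

$ gdb ./hello core.11288
(gdb) bt full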

~]$ ompi-restart ompi_global_snapshot_11219.ckpt
--
mpirun noticed that process rank 0 with PID 11288 on node
acl-cadi-pentd-1.cse.buffalo.edu exited on signal 11 (Segmentation
fault).
--
2 total processes killed (some possibly by mpirun during cleanup)


Best,


On Mon, Oct 6, 2008 at 6:44 PM, Josh Hursey  wrote:
> The installation looks ok, though I'm not sure what is causing the segfault
> of the restarted process. Two things to try. First, can you send me a
> backtrace from the core file generated by the segmentation fault? That will
> provide insight into what is causing it.
>
> Second, you may try enabling the C/R thread, which allows a checkpoint to
> progress while an application is in a computation loop instead of only when
> it is in the MPI library. To do so, configure with these additional flags:
>  --enable-ft-thread --enable-mpi-threads
>
> What version of Open MPI are you using? What version of BLCR?
>
> Best,
> Josh
>
> On Oct 6, 2008, at 3:55 PM, arun dhakne wrote:
>
>> Hi all,
>>
>> This is the procedure I have followed to install Open MPI. Is there
>> some installation or environment-setting problem in here?
>> An Open MPI program with 4 processes is run across 2 dual-core Intel
>> machines, with 2 processes running on each machine.
>>
>> ompi-checkpoint is successful, but ompi-restart fails with the following error
>>
>>
>> $:> ompi-restart ompi_global_snapshot_6045.ckpt
>> --
>> mpirun noticed that process rank 0 with PID 6372 on node
>> acl-cadi-pentd-1.cse.buffalo.edu exited on signal 11 (Segmentation
>> fault).
>> --
>>
>> Open-mpi installation steps:
>> ./configure --prefix=/home/csgrad/audhakne/.openmpi --with-ft=cr
>> --with-blcr=/usr/lib64 --enable-debug
>> make
>> make install
>>
>>
>>
>> export
>> LD_LIBRARY_PATH=$HOME/.openmpi/lib/:$HOME/.openmpi/lib/openmpi:/usr/lib64
>> export PATH=$HOME/.openmpi/bin:$PATH
>>
>> NOTE: blcr is installed as a module
>> $:> lsmod | grep blcr
>>
>> blcr  117892  0
>> blcr_vmadump   58264  1 blcr
>> blcr_imports   46080  2 blcr,blcr_vmadump
>>
>> Please let me know if there is a problem with the above procedure; thanks a
>> lot for your time.
>>
>> Best.
>>
>> -- Forwarded message --
>> From: arun dhakne 
>> Date: Tue, Sep 30, 2008 at 12:52 AM
>> Subject: ompi-restart issue : ompi-restart doesn't work across nodes
>> To: Open MPI Users 
>>
>>
>> Hi all,
>>
>> I had gone through some previous ompi-restart issues, but I couldn't
>> find anything similar to this problem.
>>
>> I have installed BLCR and configured Open MPI 'openmpi-1.3a1r19645'.
>>
>> i) If the sample MPI program (say np 4 on a single machine, that is,
>> without any hostfile) is run and I try to checkpoint it, the checkpoint
>> happens successfully and even ompi-restart works in this case.
>>
>> ii) If the sample MPI program is run across, say, 2 different nodes, the
>> checkpoint happens successfully BUT ompi-restart throws the following
>> error:
>>
>> $ ompi-restart ompi_global_snapshot_7604.ckpt
>> --
>> mpirun noticed that process rank 3 with PID 9590 on node
>> acl-cadi-pentd-1.cse.buffalo.edu exited on signal 11 (Segmentation
>> fault).
>> --
>>
>> Please let me know if more information is needed.
>>
>> --
>> Thanks and Regards,
>> Arun U. Dhakne
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>



-- 
Thanks and Regards,
Arun U. Dhakne
Graduate Student
Computer Science and Engineering Dept.
State University of New York at Buffalo


core.tar.gz
Description: GNU Zip compressed data
#include <stdio.h>
#include <mpi.h>
int main (int argc, char *argv[])
{
 int rank, size;
 int i;
 int send, recv;
 MPI_Init (&argc, &argv);               /* starts MPI */
 MPI_Comm_rank (MPI_COMM_WORLD, &rank); /* get current process id */
 MPI_Comm_size (MPI_COMM_WORLD, &size); /* get number of processes */


 printf( "Hello world from process %d of %d\n", rank, size );
 for (i=0; i < 100; i++){
   send = i;
   if (rank==0){
   MPI_Send(&send, 1, 

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Jeff Squyres

On Oct 8, 2008, at 5:25 PM, Aurélien Bouteiller wrote:

Make sure you don't use a "debug" build of Open MPI. If you use  
trunk, the build system detects it and turns on debug by default. It  
really kills performance. --disable-debug will remove all those  
nasty printfs from the critical path.


You can easily tell if you have a debug build of OMPI with the  
ompi_info command:


shell$ ompi_info | grep debug
  Internal debug support: no
Memory debugging support: no
shell$

You want to see "no" for both of those.

--
Jeff Squyres
Cisco Systems




Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Eugene Loh

Eugene Loh wrote:


Sangamesh B wrote:


The job is run on 2 nodes - 8 cores.

OpenMPI - 25 m 39 s.
MPICH2  -  15 m 53 s.


I don't understand MPICH very well, but it seemed as though some of 
the flags used in building MPICH are supposed to be added in 
automatically to the mpicc/etc compiler wrappers.


Again, this may not apply to your case, but I found out some more 
details on my theory.


If you build MPICH2 like this:

   % configure CFLAGS=-O2
   % make

then when you use "mpicc" to build your application, you automatically 
get that optimization flag built in.


What had confused me was that I tried confirming the theory by building 
MPICH2 like this:


   % configure --enable-fast
   % make

That does *NOT* up the mpicc optimization level (despite their 
documentation).
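
One quick way to double-check is to ask the wrappers themselves what they
add (the exact output depends on the installation):

   % mpicc -show      # MPICH2 wrapper: prints the underlying compile command
   % mpicc --showme   # Open MPI wrapper: same idea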


Re: [OMPI users] compilation error about Open Macro when building the code with OpenMPI on Mac OS 10.5.5

2008-10-08 Thread Sudhakar Mahalingam

Jed,

You are correct. I found an "Open" macro defined in another of our header
files, which was included before the MPI header files. (This order actually
worked fine with MPICH-1.2.7, but both openmpi-1.2.7 and MPICH-2 complained
and threw errors.) Now, when I change the order of inclusion (i.e., first
MPI and then my other header file), the code compiles and builds fine.


Thanks,

Sudhakar



On Wed, Oct 8, 2008 at 21:19, Sudhakar Mahalingam  wrote:
> I am having a problem about "Open" Macro's number of arguments, when I try
> to build a C++ code with the openmpi-1.2.7 on my Mac OS 10.5.5 machine. The
> error message is given below. When I look at the file.h and file_inln.h
> header files in the cxx folder, I am seeing that the "Open" function indeed
> takes four arguments but I don't know why there is this error about the
> number of arguments of 4. Does anyone else seen this type of error before ?.


MPI::File::Open is an inline function, not a macro. You must have an
unqualified Open macro defined in this compilation unit. Maybe in one
of the headers that were included in your code before hdf5.h. Does it
work if you include hdf5.h first?

Jed 

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread George Bosilca
One thing to look for is the process distribution. Based on the  
application communication pattern, the process distribution can have a  
tremendous impact on the execution time. Imagine that the application  
splits the processes into two equal groups based on rank and only  
communicates within each group. If such a group ends up on the same node,  
it will use sm for communications. By contrast, if the groups end up  
spread across the nodes, they will use TCP (which obviously has higher  
latency and lower bandwidth) and the overall performance will be greatly  
impacted.


By default, Open MPI uses the following strategy to distribute  
processes: if a node has several processors, consecutive ranks will be  
started on the same node. As an example, in your case (2 nodes with 4  
processors each), ranks 0-3 will be started on the first host and ranks  
4-7 on the second one. I don't know what the default distribution for  
MPICH2 is ...


Anyway, there is an easy way to check whether the process distribution is  
the root of your problem. Please execute your application twice, once  
providing mpirun the --bynode argument, and once with --byslot.
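
For example (the mdrun_mpi command line is taken from Sangamesh's other
mail in this thread; adjust paths to your setup):

  mpirun --byslot -machinefile ./mach -np 8 /opt/apps/gromacs333_ompi/bin/mdrun_mpi
  mpirun --bynode -machinefile ./mach -np 8 /opt/apps/gromacs333_ompi/bin/mdrun_mpi

With --byslot (the default described above) ranks 0-3 land on the first
host and 4-7 on the second; with --bynode consecutive ranks alternate
between the two hosts.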


  george.

On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote:


Hi All,

   I wanted to switch from mpich2/mvapich2 to OpenMPI, as  
OpenMPI supports both ethernet and infiniband. Before doing that I  
tested an application 'GROMACS' to compare the performance of MPICH2  
& OpenMPI. Both have been compiled with GNU compilers.


After this benchmark, I came to know that OpenMPI is slower than  
MPICH2.


This benchmark is run on a AMD dual core, dual opteron processor.  
Both have compiled with default configurations.


The job is run on 2 nodes - 8 cores.

OpenMPI - 25 m 39 s.
MPICH2  -  15 m 53 s.

Any comments ..?

Thanks,
Sangamesh
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Brian Dobbins
Hi guys,

[From Eugene Loh:]

> OpenMPI - 25 m 39 s.
>> MPICH2  -  15 m 53 s.
>>
> With regards to your issue, do you have any indication when you get that
> 25m39s timing if there is a grotesque amount of time being spent in MPI
> calls?  Or, is the slowdown due to non-MPI portions?


  Just to add my two cents: if this job *can* be run on fewer than 8
processors (ideally, even on just 1), then I'd recommend doing so.  That is,
run it with OpenMPI and with MPICH2 on 1, 2 and 4 processors as well.  If
the single-processor jobs still give vastly different timings, then perhaps
Eugene is on the right track, and it comes down to various computational
optimizations, and not so much the message passing, that make the difference.
Timings from 2- and 4-process runs might be interesting as well, to see how
this difference changes with process count.
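
For instance (the mpirun paths are taken from the run commands posted
elsewhere in this thread; adjust to your setup):

  $ time /opt/ompi127/bin/mpirun -np 1 /opt/apps/gromacs333_ompi/bin/mdrun_mpi
  $ time /opt/mpich2/gnu/bin/mpirun -np 1 /opt/apps/gromacs333/bin/mdrun_mpi
  # repeat both with -np 2 and -np 4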

  I've seen differences between various MPI libraries before, but nothing
quite this severe either.  If I get the time, maybe I'll try to set up
Gromacs tonight -- I've got both MPICH2 and OpenMPI installed here and can
try to duplicate the runs.   Sangamesh, is this a standard benchmark case
that anyone can download and run?

  Cheers,
  - Brian


Brian Dobbins
Yale Engineering HPC


Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Eugene Loh

Sangamesh B wrote:

I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI 
supports both ethernet and infiniband. Before doing that I tested an 
application 'GROMACS' to compare the performance of MPICH2 & OpenMPI. 
Both have been compiled with GNU compilers.


After this benchmark, I came to know that OpenMPI is slower than MPICH2.

This benchmark is run on a AMD dual core, dual opteron processor. Both 
have compiled with default configurations.


The job is run on 2 nodes - 8 cores.

OpenMPI - 25 m 39 s.
MPICH2  -  15 m 53 s.


I agree with Samuel that this difference is strikingly large.

I had a thought that might not apply to your case, but I figured I'd 
share it anyhow.


I don't understand MPICH very well, but it seemed as though some of the 
flags used in building MPICH are supposed to be added in automatically 
to the mpicc/etc compiler wrappers.  That is, if you specified CFLAGS=-O 
to build MPICH, then if you compiled an application with mpicc you would 
automatically get -O.  At least that was my impression.  Maybe I 
misunderstood the documentation.  (If you want to use some flags just 
for building MPICH but you don't want users to get those flags 
automatically when they use mpicc, you're supposed to use flags like 
MPICH2LIB_CFLAGS instead of just CFLAGS when you run "configure".)


Not only may this theory not apply to your case, but I'm not even sure 
it holds water.  I just tried building MPICH2 with --enable-fast turned 
on.  The "configure" output indicates I'm getting CFLAGS=-O2, but when I 
run "mpicc -show" it seems to invoke gcc without any optimization flags 
by default.


So, I guess I'm sending this mail less to help you and more as a request 
that someone might improve my understanding.


With regard to your issue, when you get that 25m39s timing, do you have 
any indication whether a grotesque amount of time is being spent in MPI 
calls?  Or is the slowdown due to non-MPI portions?


Re: [OMPI users] Problem building OpenMPi with SunStudio compilers

2008-10-08 Thread Ethan Mallove
On Mon, Oct/06/2008 12:24:48PM, Ray Muno wrote:
> Ethan Mallove wrote:
> 
> >> Now I get farther along but the build fails at (small excerpt)
> >>
> >> mutex.c:(.text+0x30): multiple definition of `opal_atomic_cmpset_32'
> >> asm/.libs/libasm.a(asm.o):asm.c:(.text+0x30): first defined here
> >> threads/.libs/mutex.o: In function `opal_atomic_cmpset_64':
> >> mutex.c:(.text+0x50): multiple definition of `opal_atomic_cmpset_64'
> >> asm/.libs/libasm.a(asm.o):asm.c:(.text+0x50): first defined here
> >> make[2]: *** [libopen-pal.la] Error 1
> >> make[2]: Leaving directory 
> >> `/home/muno/OpenMPI/SunStudio/openmpi-1.2.7/opal'
> >> make[1]: *** [all-recursive] Error 1
> >> make[1]: Leaving directory 
> >> `/home/muno/OpenMPI/SunStudio/openmpi-1.2.7/opal'
> >> make: *** [all-recursive] Error 1
> >>
> >> I based the configure on what was found in the FAQ here.
> >>
> >> http://www.open-mpi.org/faq/?category=building#build-sun-compilers
> >>
> >> Perhaps this is much more specific to our platform/OS.
> >>
> >> The environment is AMD Opteron, Barcelona running Centos 5
> >> (Rocks 5.03) with SunStudio 12 compilers.
> >>
> > 
> > Unfortunately I haven't seen the above issue, so I don't
> > have a workaround to propose. There are some issues that
> > have been fixed with GCC-style inline assembly in the latest
> > Sun Studio Express build. Could you try it out?
> > 
> >   http://developers.sun.com/sunstudio/downloads/express/index.jsp
> > 
> > -Ethan
> > 
> > 
> 
> Looks like it dies at the exact same spot. I have the C++
> failure as well (supplied ld does not work).
> 

Can you send your full config.log?  This will help in an
attempt to reproduce your issue.

-Ethan


> -- 
> 
>  Ray Muno   http://www.aem.umn.edu/people/staff/muno
>  University of Minnesota   e-mail:   m...@aem.umn.edu
>  Aerospace Engineering and MechanicsPhone: (612) 625-9531
>  110 Union St. S.E.   FAX: (612) 626-1558
>  Minneapolis, Mn 55455
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users


Re: [OMPI users] compilation error about Open Macro when building the code with OpenMPI on Mac OS 10.5.5

2008-10-08 Thread Jed Brown
On Wed, Oct 8, 2008 at 21:19, Sudhakar Mahalingam  wrote:
> I am having a problem about "Open" Macro's number of arguments, when I try
> to build a C++ code with the openmpi-1.2.7 on my Mac OS 10.5.5 machine. The
> error message is given below. When I look at the file.h and file_inln.h
> header files in the cxx folder, I am seeing that the "Open" function indeed
> takes four arguments but I don't know why there is this error about the
> number of arguments of 4. Does anyone else seen this type of error before ?.

MPI::File::Open is an inline function, not a macro.  You must have an
unqualified Open macro defined in this compilation unit.  Maybe in one
of the headers that were included in your code before hdf5.h.  Does it
work if you include hdf5.h first?

Jed


[OMPI users] compilation error about Open Macro when building the code with OpenMPI on Mac OS 10.5.5

2008-10-08 Thread Sudhakar Mahalingam

Hi,

I am having a problem with the "Open" macro's number of arguments when I  
try to build a C++ code with openmpi-1.2.7 on my Mac OS 10.5.5 machine.  
The error message is given below. When I look at the file.h and  
file_inln.h header files in the cxx folder, I see that the "Open"  
function indeed takes four arguments, but I don't know why there is this  
error about the number of arguments being 4. Has anyone else seen this  
type of error before?


Thanks for your help.

Sudhakar

/usr/local/mpi/bin/mpicxx -DHAVE_CONFIG_H -I. -I. -I.. -I../advisor
-I../physics -I../otools -I../otools -I../config -I../xg
-I/usr/local/hdf5mpi/include -I/usr/local/txphysics-2.1/include
-I/usr/local/petscmpi/include -I/usr/local/petscmpi/bmake/darwin9.5.0-c-debug
-I. -O3 -pipe -funroll-loops -Wall -Wno-unused -O3 -pipe -funroll-loops
-DQT3_SUPPORT -DUNIX -DMPI_VERSION -DNOX -c -o OopicMain.o OopicMain.cpp

In file included from /usr/local/hdf5mpi/include/H5public.h:54,
   from /usr/local/hdf5mpi/include/hdf5.h:24,
   from ../otools/dumpHDF5.h:20,
   from ../physics/plsmadev.h:35,
   from OopicMain.h:42,
   from OopicMain.cpp:20:
/usr/local/openmpi-1.2.7/include/mpi.h:162:1: warning: "MPI_VERSION" redefined
<command-line>:1:1: warning: this is the location of the previous definition
In file included from /usr/local/openmpi-1.2.7/include/openmpi/ompi/mpi/cxx/mpicxx.h:200,
   from /usr/local/openmpi-1.2.7/include/mpi.h:1795,
   from /usr/local/hdf5mpi/include/H5public.h:54,
   from /usr/local/hdf5mpi/include/hdf5.h:24,
   from ../otools/dumpHDF5.h:20,
   from ../physics/plsmadev.h:35,
   from OopicMain.h:42,
   from OopicMain.cpp:20:
/usr/local/openmpi-1.2.7/include/openmpi/ompi/mpi/cxx/file.h:124:25: error: macro "Open" passed 4 arguments, but takes just 1
In file included from /usr/local/openmpi-1.2.7/include/openmpi/ompi/mpi/cxx/mpicxx.h:257,
   from /usr/local/openmpi-1.2.7/include/mpi.h:1795,
   from /usr/local/hdf5mpi/include/H5public.h:54,
   from /usr/local/hdf5mpi/include/hdf5.h:24,
   from ../otools/dumpHDF5.h:20,
   from ../physics/plsmadev.h:35,
   from OopicMain.h:42,
   from OopicMain.cpp:20:
/usr/local/openmpi-1.2.7/include/openmpi/ompi/mpi/cxx/file_inln.h:189:27: error: macro "Open" passed 4 arguments, but takes just 1
/usr/local/openmpi-1.2.7/include/openmpi/ompi/mpi/cxx/file_inln.h:187: error: invalid function declaration

make[2]: *** [OopicMain.o] Error 1
make[2]: *** Waiting for unfinished jobs
make[1]: *** [all-recursive] Error 1
make: *** [all] Error 2


Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Jeff Squyres

On Oct 8, 2008, at 10:58 AM, Ashley Pittman wrote:


You probably already know this but the obvious candidate here is the
memcpy() function, icc sticks in it's own which in some cases is much
better than the libc one.  It's unusual for compilers to have *huge*
differences from code optimisations alone.



Yep -- memcpy is one of the things that we're looking at.  Haven't  
heard back on the results from the next round of testing yet (one of  
the initial suggestions we had was to separate openib vs. sm  
performance and see if one of them yielded an obvious difference).


--
Jeff Squyres
Cisco Systems



Re: [OMPI users] OMPI link error with petsc 2.3.3

2008-10-08 Thread Terry Dontje

Yann,

Well, when you use f90 to link, it passes the linker the -t option, which is 
described in the man page as follows:

   Turns off the warning for multiply-defined symbols that
   have different sizes or different alignments.

That's why :-)

To your original question, should you worry about this?  The answer is no.

The reason why follows; after digging in the OMPI code, no, you should not. What 
is happening is that in the Fortran library we define MPI_STATUS_IGNORE to have 
a size of 5 integers, so when you pass it to an MPI call you don't get an error 
from the compiler complaining that the argument doesn't match the parameter type 
it was expecting. However, we also define MPI_STATUS_IGNORE in a common block to 
overlap the libmpi.so variable mpi_fortran_status_ignore, which is a pointer to 
an integer. This is done so that when you pass MPI_STATUS_IGNORE to an MPI call, 
the library can recognize it as the special MPI_STATUS_IGNORE value and operate 
appropriately (i.e., not return values back via the status structure).

The problem is that when libmpi_f90.so is built, the size of 
mpi_fortran_status_ignore is assumed to be 5 integers (i.e., 0x14 bytes), but 
libmpi.so has it defined as a pointer to an integer, in your case 8 bytes. Since 
libmpi.so does nothing except look at the address of the common block, you do not 
run the risk of having issues with the size being off, so ignoring the size 
difference of the symbol is ok.
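
(If you want to see the two conflicting definitions yourself, something
like the following should show the differing sizes; nm's exact output
format varies by platform.)

$ nm /opt/SUNWhpc/HPC8.0/lib/amd64/libmpi.so     | grep mpi_fortran_status_ignore
$ nm /opt/SUNWhpc/HPC8.0/lib/amd64/libmpi_f90.so | grep mpi_fortran_status_ignore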


Sorry it took so long for me to piece all of this together.  I actually mucked 
with this before about 9 months ago.  I guess it was such a traumatic 
experience that I blanked out the workings :-).

--td


Date: Wed, 08 Oct 2008 15:58:11 +0200
From: "Yann JOBIC" 
Subject: Re: [OMPI users] OMPI link error with petsc 2.3.3
To: Open MPI Users 
Message-ID: <48ecbc73.2020...@polytech.univ-mrs.fr>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hello,

I just tried to link with mpif90, and it works! I don't get the 
warning.

(The small change from your command: PIC, not fPIC.)

I'm now trying to compile PETSc with the new linker.

How come we don't get the warning?

Thanks,

Yann





Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Brock Palen


Jeff,

You probably already know this but the obvious candidate here is the
memcpy() function, icc sticks in it's own which in some cases is much
better than the libc one.  It's unusual for compilers to have *huge*
differences from code optimisations alone.


I know this is off topic, but I was interested in this performance, so  
I compared dcopy() from BLAS, memcpy(), and plain C code with the  
optimizer turned up in PGI 7.2.


Results are here:

http://www.mlds-networks.com/index.php/component/option,com_mojo/Itemid,29/p,49/






Ashley,

___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users






Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Ashley Pittman
On Wed, 2008-10-08 at 09:46 -0400, Jeff Squyres wrote:
> - Have you tried compiling Open MPI with something other than GCC?   
> Just this week, we've gotten some reports from an OMPI member that  
> they are sometimes seeing *huge* performance differences with OMPI  
> compiled with GCC vs. any other compiler (Intel, PGI, Pathscale).
> We  
> are working to figure out why; no root cause has been identified yet.

Jeff,

You probably already know this, but the obvious candidate here is the
memcpy() function; icc sticks in its own, which in some cases is much
better than the libc one.  It's unusual for compilers to show *huge*
differences from code optimisations alone.

Ashley,



Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Jeff Squyres

On Oct 8, 2008, at 10:26 AM, Sangamesh B wrote:

- What version of Open MPI are you using?  Please send the  
information listed here:

1.2.7

   http://www.open-mpi.org/community/help/

- Did you specify to use mpi_leave_pinned?
No
Use "--mca mpi_leave_pinned 1" on your mpirun command line (I don't  
know if leave pinned behavior benefits Gromacs or not, but it likely  
won't hurt)


I see from your other mail that you are not using IB.  If you're only  
using TCP, then mpi_leave_pinned will have little/no effect.



- Did you enable processor affinity?
No
 Use "--mca mpi_paffinity_alone 1" on your mpirun command line.
Will use these options in the next benchmark

- Are you sure that Open MPI didn't fall back to ethernet (and not  
use IB)?  Use "--mca btl openib,self" on your mpirun command line.
I'm using TCP. There is no InfiniBand support. But can the results  
still be compared?


Yes, they should be comparable.  We've always known that our TCP  
support is "ok" but not "great" (truthfully: we've not tuned it nearly  
as extensively as we've tuned our other transports).  But such a huge  
performance difference is surprising.


Is this on 1 or more nodes?  It might be useful to delineate between  
the TCP and shared memory performance differences.  I believe that MPICH2's  
shmem performance is likely to be better than OMPI v1.2's, but like  
TCP, it shouldn't be *that* huge.



- Have you tried compiling Open MPI with something other than GCC?
No.
 Just this week, we've gotten some reports from an OMPI member that  
they are sometimes seeing *huge* performance differences with OMPI  
compiled with GCC vs. any other compiler (Intel, PGI, Pathscale).   
We are working to figure out why; no root cause has been identified  
yet.

I'll try a compiler other than gcc and come back to you.


That would be most useful; thanks.

--
Jeff Squyres
Cisco Systems



Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Sangamesh B
FYI, the Open MPI install details are attached here.

On Wed, Oct 8, 2008 at 7:56 PM, Sangamesh B  wrote:

>
>
> On Wed, Oct 8, 2008 at 7:16 PM, Jeff Squyres  wrote:
>
>> On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote:
>>
>>I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI
>>> supports both ethernet and infiniband. Before doing that I tested an
>>> application 'GROMACS' to compare the performance of MPICH2 & OpenMPI. Both
>>> have been compiled with GNU compilers.
>>>
>>> After this benchmark, I came to know that OpenMPI is slower than MPICH2.
>>>
>>> This benchmark is run on a AMD dual core, dual opteron processor. Both
>>> have compiled with default configurations.
>>>
>>> The job is run on 2 nodes - 8 cores.
>>>
>>> OpenMPI - 25 m 39 s.
>>> MPICH2  -  15 m 53 s.
>>>
>>
>>
>> A few things:
>>
>> - What version of Open MPI are you using?  Please send the information
>> listed here:
>>
> 1.2.7
>
>>
>>http://www.open-mpi.org/community/help/
>>
>> - Did you specify to use mpi_leave_pinned?
>
> No
>
>> Use "--mca mpi_leave_pinned 1" on your mpirun command line (I don't know
>> if leave pinned behavior benefits Gromacs or not, but it likely won't hurt)
>>
>
>> - Did you enable processor affinity?
>
> No
>
>>  Use "--mca mpi_paffinity_alone 1" on your mpirun command line.
>>
> Will use these options in the next benchmark
>
>>
>> - Are you sure that Open MPI didn't fall back to ethernet (and not use
>> IB)?  Use "--mca btl openib,self" on your mpirun command line.
>>
> I'm using TCP. There is no infiniband support. But eventhough the results
> can be compared?
>
>>
>> - Have you tried compiling Open MPI with something other than GCC?
>
> No.
>
>>  Just this week, we've gotten some reports from an OMPI member that they
>> are sometimes seeing *huge* performance differences with OMPI compiled with
>> GCC vs. any other compiler (Intel, PGI, Pathscale).  We are working to
>> figure out why; no root cause has been identified yet.
>>
> I'll try for other than gcc and comeback to you
>
>>
>> --
>> Jeff Squyres
>> Cisco Systems
>>
>>
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>
>
>




Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Sangamesh B
On Wed, Oct 8, 2008 at 7:16 PM, Jeff Squyres  wrote:

> On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote:
>
>I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI
>> supports both ethernet and infiniband. Before doing that I tested an
>> application 'GROMACS' to compare the performance of MPICH2 & OpenMPI. Both
>> have been compiled with GNU compilers.
>>
>> After this benchmark, I came to know that OpenMPI is slower than MPICH2.
>>
>> This benchmark is run on a AMD dual core, dual opteron processor. Both
>> have compiled with default configurations.
>>
>> The job is run on 2 nodes - 8 cores.
>>
>> OpenMPI - 25 m 39 s.
>> MPICH2  -  15 m 53 s.
>>
>
>
> A few things:
>
> - What version of Open MPI are you using?  Please send the information
> listed here:
>
1.2.7

>
>http://www.open-mpi.org/community/help/
>
> - Did you specify to use mpi_leave_pinned?

No

> Use "--mca mpi_leave_pinned 1" on your mpirun command line (I don't know if
> leave pinned behavior benefits Gromacs or not, but it likely won't hurt)
>

> - Did you enable processor affinity?

No

>  Use "--mca mpi_paffinity_alone 1" on your mpirun command line.
>
Will use these options in the next benchmark

>
> - Are you sure that Open MPI didn't fall back to ethernet (and not use IB)?
>  Use "--mca btl openib,self" on your mpirun command line.
>
I'm using TCP. There is no InfiniBand support. But can the results still
be compared?

>
> - Have you tried compiling Open MPI with something other than GCC?

No.

>  Just this week, we've gotten some reports from an OMPI member that they
> are sometimes seeing *huge* performance differences with OMPI compiled with
> GCC vs. any other compiler (Intel, PGI, Pathscale).  We are working to
> figure out why; no root cause has been identified yet.
>
I'll try a compiler other than gcc and come back to you.

>
> --
> Jeff Squyres
> Cisco Systems
>
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>


Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Sangamesh B
On Wed, Oct 8, 2008 at 7:09 PM, Brock Palen  wrote:

> Your doing this on just one node?  That would be using the OpenMPI SM
> transport,  Last I knew it wasn't that optimized though should still be much
> faster than TCP.
>

It's on 2 nodes. I'm using TCP only. There is no InfiniBand hardware.

>
> I am surpised at your result though I do not have MPICH2 on the cluster
> right now I don't have time to compare.
>
> How did you run the job?


MPICH2:

time /opt/mpich2/gnu/bin/mpirun -machinefile ./mach -np 8
/opt/apps/gromacs333/bin/mdrun_mpi | tee gro_bench_8p

OpenMPI:

$ time /opt/ompi127/bin/mpirun -machinefile ./mach -np 8
/opt/apps/gromacs333_ompi/bin/mdrun_mpi | tee gromacs_openmpi_8process


>
>
> Brock Palen
> www.umich.edu/~brockp 
> Center for Advanced Computing
> bro...@umich.edu
> (734)936-1985
>
>
>
>
> On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote:
>
>  Hi All,
>>
>>   I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI
>> supports both ethernet and infiniband. Before doing that I tested an
>> application 'GROMACS' to compare the performance of MPICH2 & OpenMPI. Both
>> have been compiled with GNU compilers.
>>
>> After this benchmark, I came to know that OpenMPI is slower than MPICH2.
>>
>> This benchmark is run on a AMD dual core, dual opteron processor. Both
>> have compiled with default configurations.
>>
>> The job is run on 2 nodes - 8 cores.
>>
>> OpenMPI - 25 m 39 s.
>> MPICH2  -  15 m 53 s.
>>
>> Any comments ..?
>>
>> Thanks,
>> Sangamesh
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>


Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Samuel Sarholz

Hi,

my experience is that OpenMPI has slightly lower latency and lower 
bandwidth than Intel MPI (which is based on MPICH2) over InfiniBand.

I don't remember the numbers for shared memory.

As you are seeing a huge difference, I would suspect either that 
something with your compilation is strange or, more probably, that you 
are hitting the ccNUMA effect of the Opteron.
You might want to bind the MPI processes (and even clean the filesystem 
caches) to avoid that effect.
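
For example, the binding can be requested from mpirun, and on Linux the
page cache can be dropped between runs (the program name below is
abbreviated, and the cache drop needs root):

  mpirun --mca mpi_paffinity_alone 1 -np 8 ./mdrun_mpi   # bind each process to a processor
  sync; echo 3 > /proc/sys/vm/drop_caches                # as root: drop filesystem caches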


best regards,
Samuel

Sangamesh B wrote:

Hi All,

   I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI 
supports both ethernet and infiniband. Before doing that I tested an 
application 'GROMACS' to compare the performance of MPICH2 & OpenMPI. 
Both have been compiled with GNU compilers.


After this benchmark, I came to know that OpenMPI is slower than MPICH2.

This benchmark is run on a AMD dual core, dual opteron processor. Both 
have compiled with default configurations.


The job is run on 2 nodes - 8 cores.

OpenMPI - 25 m 39 s.
MPICH2  -  15 m 53 s.

Any comments ..?

Thanks,
Sangamesh




Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Jeff Squyres

On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote:

   I wanted to switch from mpich2/mvapich2 to OpenMPI, as  
OpenMPI supports both ethernet and infiniband. Before doing that I  
tested an application 'GROMACS' to compare the performance of MPICH2  
& OpenMPI. Both have been compiled with GNU compilers.


After this benchmark, I came to know that OpenMPI is slower than  
MPICH2.


This benchmark is run on a AMD dual core, dual opteron processor.  
Both have compiled with default configurations.


The job is run on 2 nodes - 8 cores.

OpenMPI - 25 m 39 s.
MPICH2  -  15 m 53 s.



A few things:

- What version of Open MPI are you using?  Please send the information  
listed here:


http://www.open-mpi.org/community/help/

- Did you specify to use mpi_leave_pinned?  Use "--mca  
mpi_leave_pinned 1" on your mpirun command line (I don't know if leave  
pinned behavior benefits Gromacs or not, but it likely won't hurt)


- Did you enable processor affinity?  Use "--mca mpi_paffinity_alone  
1" on your mpirun command line.


- Are you sure that Open MPI didn't fall back to ethernet (and not use  
IB)?  Use "--mca btl openib,self" on your mpirun command line.


- Have you tried compiling Open MPI with something other than GCC?   
Just this week, we've gotten some reports from an OMPI member that  
they are sometimes seeing *huge* performance differences with OMPI  
compiled with GCC vs. any other compiler (Intel, PGI, Pathscale).  We  
are working to figure out why; no root cause has been identified yet.
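
Putting the run-time suggestions above together, the mpirun line would  
look something like this (program name and process count are placeholders;  
drop the btl setting if there is no InfiniBand):

  mpirun --mca mpi_leave_pinned 1 --mca mpi_paffinity_alone 1 \
         --mca btl openib,self -np 8 ./mdrun_mpi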


--
Jeff Squyres
Cisco Systems



Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Brock Palen
You're doing this on just one node?  That would be using the Open MPI sm  
transport.  Last I knew it wasn't that optimized, though it should still  
be much faster than TCP.


I am surprised at your result, though I do not have MPICH2 on the  
cluster right now and don't have time to compare.


How did you run the job?

Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
bro...@umich.edu
(734)936-1985



On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote:


Hi All,

   I wanted to switch from mpich2/mvapich2 to OpenMPI, as  
OpenMPI supports both ethernet and infiniband. Before doing that I  
tested an application 'GROMACS' to compare the performance of  
MPICH2 & OpenMPI. Both have been compiled with GNU compilers.


After this benchmark, I came to know that OpenMPI is slower than  
MPICH2.


This benchmark is run on a AMD dual core, dual opteron processor.  
Both have compiled with default configurations.


The job is run on 2 nodes - 8 cores.

OpenMPI - 25 m 39 s.
MPICH2  -  15 m 53 s.

Any comments ..?

Thanks,
Sangamesh
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] OMPI link error with petsc 2.3.3

2008-10-08 Thread Terry Dontje

Yann,

Your whole compile process in your email below shows you using mpicc to 
link your executable.  Can you please try and do the following for 
linkage  instead?


mpif90 -fPIC -m64  -o solv_ksp solv_ksp.o 
-R/opt/lib/petsc/lib/amd-64-openmpi_no_debug 
-L/opt/lib/petsc/lib/amd-64-openmpi_no_debug -lpetscsnes -lpetscksp 
-lpetscdm -lpetscmat -lpetscvec -lpetsc   -lX11  -lsunperf -lsunmath -lm 
-ldl -R/opt/mx/lib/amd64 -R/opt/SUNWhpc/HPC8.0/lib/amd64 
-R/opt/SUNWhpc/HPC8.0/lib/amd64 -L/opt/SUNWhpc/HPC8.0/lib/amd64 -lmpi 
-lopen-rte -lopen-pal -lnsl -lrt -lm -lsocket  
-R/opt/SUNWspro/lib/amd64 -R/opt/SUNWspro/lib/amd64 
-L/opt/SUNWspro/lib/amd64 -R/opt/SUNWspro/prod/lib/amd64 
-L/opt/SUNWspro/prod/lib/amd64 -R/usr/ccs/lib/amd64 -L/usr/ccs/lib/amd64 
-R/lib/64 -L/lib/64 -R/usr/lib/64 -L/usr/lib/64 -lm -lfui -lfai -lfsu 
-lsunmath -lmtsk -lm   -ldl  -R/usr/ucblib


--td
Date: Wed, 08 Oct 2008 14:14:50 +0200
From: "Yann JOBIC" 
Subject: Re: [OMPI users] OMPI link error with petsc 2.3.3
To: Open MPI Users 
Message-ID: <48eca43a.1060...@polytech.univ-mrs.fr>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hello,

I used cc to compile. I tried to use mpicc/mpif90 to compile PETSC, but
it changed nothing. I still have the same error.
I'm giving you the whole compile process:

4440p-jobic% gmake solv_ksp
mpicc -o solv_ksp.o -c -fPIC -m64 -I/opt/lib/petsc
-I/opt/lib/petsc/bmake/amd-64-openmpi_no_debug -I/opt/lib/petsc/include
-I/opt/SUNWhpc/HPC8.0/include -I/opt/SUNWhpc/HPC8.0/include/amd64 -I.
-D__SDIR__="" solv_ksp.c
mpicc -fPIC -m64 -o solv_ksp solv_ksp.o
-R/opt/lib/petsc/lib/amd-64-openmpi_no_debug
-L/opt/lib/petsc/lib/amd-64-openmpi_no_debug -lpetscsnes -lpetscksp
-lpetscdm -lpetscmat -lpetscvec -lpetsc -lX11 -lsunperf -lsunmath -lm
-ldl -R/opt/mx/lib/amd64 -R/opt/SUNWhpc/HPC8.0/lib/amd64
-R/opt/SUNWhpc/HPC8.0/lib/amd64 -L/opt/SUNWhpc/HPC8.0/lib/amd64 -lmpi
-lopen-rte -lopen-pal -lnsl -lrt -lm -lsocket -lmpi_f77 -lmpi_f90
-R/opt/SUNWspro/lib/amd64 -R/opt/SUNWspro/lib/amd64
-L/opt/SUNWspro/lib/amd64 -R/opt/SUNWspro/prod/lib/amd64
-L/opt/SUNWspro/prod/lib/amd64 -R/usr/ccs/lib/amd64
-L/usr/ccs/lib/amd64 -R/lib/64 -L/lib/64 -R/usr/lib/64 -L/usr/lib/64
-lm -lfui -lfai -lfsu -lsunmath -lmtsk -lm -ldl -R/usr/ucblib

ld: warning: symbol `mpi_fortran_status_ignore_' has differing sizes:
   (file /opt/SUNWhpc/HPC8.0/lib/amd64/libmpi.so value=0x8; file
   /opt/SUNWhpc/HPC8.0/lib/amd64/libmpi_f90.so value=0x14);
   /opt/SUNWhpc/HPC8.0/lib/amd64/libmpi.so definition taken
/usr/bin/rm -f solv_ksp.o

Thanks for your help,

Yann

Terry Dontje wrote:

> Yann,
>
> How were you trying to link your code with PETSc?  Did you use mpif90 
> or mpif77 wrappers or were you using cc or mpicc wrappers?  I ran some 
> basic tests that test the usage of MPI_STATUS_IGNORE using mpif90 (and 
> mpif77) and it works fine.  However I was able to generate a similar 
> error as you did when tried to link things with the cc program. 
> If you are using cc to link could you possibly try to use mpif90 to 
> link your code?

>
> --td
>
> Date: Tue, 07 Oct 2008 16:55:14 +0200
> From: "Yann JOBIC" 
> Subject: [OMPI users] OMPI link error with petsc 2.3.3
> To: Open MPI Users 
> Message-ID: <48eb7852.6070...@polytech.univ-mrs.fr>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Hello,
>
> I'm using openmpi 1.3r19400 (ClusterTools 8.0), with sun studio 12, 
> and solaris 10u5

>
> I've got this error when linking a PETSc code :
> ld: warning: symbol `mpi_fortran_status_ignore_' has differing sizes:
>(file /opt/SUNWhpc/HPC8.0/lib/amd64/libmpi.so value=0x8; file 
> /opt/SUNWhpc/HPC8.0/lib/amd64/libmpi_f90.so value=0x14);

>/opt/SUNWhpc/HPC8.0/lib/amd64/libmpi.so definition taken
>
>
> Isn't it very strange ?
>
> Have you got any idea on the way to solve it ?
>
> Many thanks,
>
> Yann
>

>>   
  

>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




--
___

Yann JOBIC
HPC engineer
Polytech Marseille DME
IUSTI-CNRS UMR 6595
Technopôle de Château Gombert
5 rue Enrico Fermi
13453 Marseille cedex 13
Tel : (33) 4 91 10 69 39
 ou  (33) 4 91 10 69 43
Fax : (33) 4 91 10 69 69




Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Ray Muno
I would be interested in what others have to say about this as well.

We have been doing a bit of performance testing, since we are deploying a
new cluster and it is our first InfiniBand-based setup.

In our experience so far, OpenMPI is coming out faster than MVAPICH.
Comparisons were made with different compilers, PGI and Pathscale. We do
not have a running implementation of OpenMPI with SunStudio compilers.

Our tests were with actual user codes running on up to 600 processors so
far.


Sangamesh B wrote:
> Hi All,
> 
>I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI
> supports both ethernet and infiniband. Before doing that I tested an
> application 'GROMACS' to compare the performance of MPICH2 & OpenMPI. Both
> have been compiled with GNU compilers.
> 
> After this benchmark, I came to know that OpenMPI is slower than MPICH2.
> 
> This benchmark is run on a AMD dual core, dual opteron processor. Both have
> compiled with default configurations.
> 
> The job is run on 2 nodes - 8 cores.
> 
> OpenMPI - 25 m 39 s.
> MPICH2  -  15 m 53 s.
> 
> Any comments ..?
> 
> Thanks,
> Sangamesh
> 

-Ray Muno
 Aerospace Engineering.


[OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Sangamesh B
Hi All,

   I wanted to switch from MPICH2/MVAPICH2 to OpenMPI, as OpenMPI
supports both Ethernet and InfiniBand. Before doing that, I tested an
application, GROMACS, to compare the performance of MPICH2 & OpenMPI. Both
have been compiled with GNU compilers.

After this benchmark, I came to know that OpenMPI is slower than MPICH2.

This benchmark was run on AMD dual-core, dual-Opteron machines. Both MPI
libraries were compiled with their default configurations.

The job is run on 2 nodes - 8 cores.

OpenMPI - 25 m 39 s.
MPICH2  -  15 m 53 s.

Any comments ..?

Thanks,
Sangamesh


Re: [OMPI users] OMPI link error with petsc 2.3.3

2008-10-08 Thread Yann JOBIC

Hello,

I used cc to compile. I tried to use mpicc/mpif90 to compile PETSc, but 
it changed nothing.

I still have the same error.

I'm giving you the whole compile process:
4440p-jobic% gmake solv_ksp
mpicc -o solv_ksp.o -c -fPIC -m64 -I/opt/lib/petsc 
-I/opt/lib/petsc/bmake/amd-64-openmpi_no_debug -I/opt/lib/petsc/include 
-I/opt/SUNWhpc/HPC8.0/include -I/opt/SUNWhpc/HPC8.0/include/amd64 -I. 
-D__SDIR__="" solv_ksp.c
mpicc -fPIC -m64  -o solv_ksp solv_ksp.o 
-R/opt/lib/petsc/lib/amd-64-openmpi_no_debug 
-L/opt/lib/petsc/lib/amd-64-openmpi_no_debug -lpetscsnes -lpetscksp 
-lpetscdm -lpetscmat -lpetscvec -lpetsc   -lX11  -lsunperf -lsunmath -lm 
-ldl -R/opt/mx/lib/amd64 -R/opt/SUNWhpc/HPC8.0/lib/amd64 
-R/opt/SUNWhpc/HPC8.0/lib/amd64 -L/opt/SUNWhpc/HPC8.0/lib/amd64 -lmpi 
-lopen-rte -lopen-pal -lnsl -lrt -lm -lsocket -lmpi_f77 -lmpi_f90 
-R/opt/SUNWspro/lib/amd64 -R/opt/SUNWspro/lib/amd64 
-L/opt/SUNWspro/lib/amd64 -R/opt/SUNWspro/prod/lib/amd64 
-L/opt/SUNWspro/prod/lib/amd64 -R/usr/ccs/lib/amd64 -L/usr/ccs/lib/amd64 
-R/lib/64 -L/lib/64 -R/usr/lib/64 -L/usr/lib/64 -lm -lfui -lfai -lfsu 
-lsunmath -lmtsk -lm   -ldl  -R/usr/ucblib

ld: warning: symbol `mpi_fortran_status_ignore_' has differing sizes:
   (file /opt/SUNWhpc/HPC8.0/lib/amd64/libmpi.so value=0x8; file 
/opt/SUNWhpc/HPC8.0/lib/amd64/libmpi_f90.so value=0x14);

   /opt/SUNWhpc/HPC8.0/lib/amd64/libmpi.so definition taken
/usr/bin/rm -f solv_ksp.o


Thanks for your help,

Yann

Terry Dontje wrote:

Yann,

How were you trying to link your code with PETSc?  Did you use the mpif90 
or mpif77 wrappers, or were you using the cc or mpicc wrappers?  I ran some 
basic tests that exercise the usage of MPI_STATUS_IGNORE using mpif90 (and 
mpif77) and it works fine.  However, I was able to generate a similar 
error to yours when I tried to link things with the cc program. 
If you are using cc to link, could you possibly try using mpif90 to 
link your code?


--td

Date: Tue, 07 Oct 2008 16:55:14 +0200
From: "Yann JOBIC" 
Subject: [OMPI users] OMPI link error with petsc 2.3.3
To: Open MPI Users 
Message-ID: <48eb7852.6070...@polytech.univ-mrs.fr>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hello,

I'm using openmpi 1.3r19400 (ClusterTools 8.0), with sun studio 12, 
and solaris 10u5


I've got this error when linking a PETSc code :
ld: warning: symbol `mpi_fortran_status_ignore_' has differing sizes:
   (file /opt/SUNWhpc/HPC8.0/lib/amd64/libmpi.so value=0x8; file 
/opt/SUNWhpc/HPC8.0/lib/amd64/libmpi_f90.so value=0x14);

   /opt/SUNWhpc/HPC8.0/lib/amd64/libmpi.so definition taken


Isn't it very strange ?

Have you got any idea on the way to solve it ?

Many thanks,

Yann

  


___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users



--
___

Yann JOBIC
HPC engineer
Polytech Marseille DME
IUSTI-CNRS UMR 6595
Technopôle de Château Gombert
5 rue Enrico Fermi
13453 Marseille cedex 13
Tel : (33) 4 91 10 69 39
 ou  (33) 4 91 10 69 43
Fax : (33) 4 91 10 69 69