Re: [OMPI users] General question on the implementation of a "scheduler" on client side...

2010-05-21 Thread Jeff Squyres
On May 21, 2010, at 3:13 AM, Olivier Riff wrote: > -> That is what I was thinking of implementing. As you mentioned, and > specifically for my case where I mainly send short messages, there might not > be much win. By the way, are there some benchmarks testing sequential > MPI_ISend versus

Re: [OMPI users] Some Questions on Building OMPI on Linux Em64t

2010-05-21 Thread Michael E. Thomadakis
Hello, I am resending this because I am not sure if it was sent out to the OMPI list. Any help would be greatly appreciated. best Michael On 05/19/10 13:19, Michael E. Thomadakis wrote: Hello, I would like to build OMPI V1.4.2 and make it available to our users at the Supercomputing

Re: [OMPI users] GM + OpenMPI bug ...

2010-05-21 Thread Patrick Geoffray
Hi Jose, On 5/21/2010 6:54 AM, José Ignacio Aliaga Estellés wrote: We have run lspci -vvxxx and obtained: bi00: 04:01.0 Ethernet controller: Intel Corporation 82544EI Gigabit Ethernet Controller (Copper) (rev 02) This is the output for the Intel GigE NIC; you should look at the

Re: [OMPI users] [sge::tight-integration] slot scheduling and resources handling

2010-05-21 Thread Reuti
Hi, On 21.05.2010, at 17:19, Eloi Gaudry wrote: > Hi Reuti, > > Yes, the openmpi binaries used were built after having used the --with-sge > during configure, and we only use those binaries on our cluster. > > [eg@moe:~]$ /opt/openmpi-1.3.3/bin/ompi_info > MCA ras: gridengine

Re: [OMPI users] [sge::tight-integration] slot scheduling and resources handling

2010-05-21 Thread Eloi Gaudry
Hi Reuti, Yes, the openmpi binaries used were built after having used the --with-sge during configure, and we only use those binaries on our cluster. [eg@moe:~]$ /opt/openmpi-1.3.3/bin/ompi_info Package: Open MPI root@moe Distribution Open MPI: 1.3.3 Open MPI

Re: [OMPI users] An error occured in MPI_Bcast; MPI_ERR_TYPE: invalid datatype

2010-05-21 Thread Tom Rosmond
Your Fortran call to 'mpi_bcast' needs the integer error argument (ierror) at the end of the argument list. Also, I don't think 'MPI_INT' is correct for Fortran; it should be 'MPI_INTEGER'. With these changes the program works OK. T. Rosmond On Fri, 2010-05-21 at 11:40 +0200, Pankatz, Klaus wrote: > Hi folks,
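For reference, a corrected Fortran call along the lines Tom describes could look like the sketch below. This is a minimal stand-alone program written for illustration, not Klaus's original code (which is not reproduced in the thread); the variable names are invented for the example.

    program bcast_fix
      use mpi                      ! an 'include "mpif.h"' would also work with mpif90
      implicit none
      integer :: rank, ierr, value

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

      if (rank == 0) value = 42

      ! MPI_INTEGER (not MPI_INT) describes a Fortran INTEGER, and the
      ! trailing ierr argument is the error code the Fortran binding requires.
      call MPI_BCAST(value, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)

      print *, 'rank', rank, 'got', value
      call MPI_FINALIZE(ierr)
    end program bcast_fix

Compiled with mpif90 and started with mpiexec -np 4 a.out, every rank should print the broadcast value.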

Re: [OMPI users] An error occured in MPI_Bcast; MPI_ERR_TYPE: invalid datatype

2010-05-21 Thread Eugene Loh
Pankatz, Klaus wrote: Hi folks, openMPI 1.4.1 seems to have another problem with my machine, or something on it. This little program here (compiled with mpif90), started with mpiexec -np 4 a.out, produces the following output: Surprisingly, the same thing written in C code (compiled with

Re: [OMPI users] [sge::tight-integration] slot scheduling and resources handling

2010-05-21 Thread Reuti
Hi, On 21.05.2010, at 14:11, Eloi Gaudry wrote: > Hi there, > > I'm observing something strange on our cluster managed by SGE 6.2u4 when > launching a parallel computation on several nodes, using OpenMPI/SGE > tight-integration mode (OpenMPI-1.3.3). It seems that the SGE-allocated slots are >

[OMPI users] [sge::tight-integration] slot scheduling and resources handling

2010-05-21 Thread Eloi Gaudry
Hi there, I'm observing something strange on our cluster managed by SGE 6.2u4 when launching a parallel computation on several nodes, using OpenMPI/SGE tight-integration mode (OpenMPI-1.3.3). It seems that the SGE-allocated slots are not used by OpenMPI, as if OpenMPI was doing its own

Re: [OMPI users] GM + OpenMPI bug ...

2010-05-21 Thread José Ignacio Aliaga Estellés
Hi, We have run lspci -vvxxx and obtained: bi00: 04:01.0 Ethernet controller: Intel Corporation 82544EI Gigabit Ethernet Controller (Copper) (rev 02) bi00: Subsystem: Intel Corporation PRO/1000 XT Server Adapter bi00: Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV+

[OMPI users] An error occured in MPI_Bcast; MPI_ERR_TYPE: invalid datatype

2010-05-21 Thread Pankatz, Klaus
Hi folks, openMPI 1.4.1 seems to have another problem with my machine, or something on it. This little program here (compiled with mpif90), started with mpiexec -np 4 a.out, produces the following output: Surprisingly, the same thing written in C code (compiled with mpiCC) works without a

Re: [OMPI users] General question on the implementation of a "scheduler" on client side...

2010-05-21 Thread Olivier Riff
Hello Jeff, thanks for your detailed answer. 2010/5/20 Jeff Squyres > You're basically talking about implementing some kind of > application-specific protocol. A few tips that may help in your design: > > 1. Look into MPI_Isend / MPI_Irecv for non-blocking sends and
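To make Jeff's first tip concrete, here is a minimal non-blocking send sketch; the number of workers, the tag, and the payload are invented for the example and are not taken from the thread.

    program isend_sketch
      use mpi
      implicit none
      integer, parameter :: nworkers = 4   ! assumed worker count; run with at least nworkers+1 ranks
      integer :: requests(nworkers)
      integer :: statuses(MPI_STATUS_SIZE, nworkers)
      integer :: payload(nworkers), recvbuf, rank, i, ierr

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

      if (rank == 0) then
        payload = 123
        ! Post all short sends without blocking, then complete them together,
        ! instead of issuing one blocking MPI_SEND after another.
        do i = 1, nworkers
          call MPI_ISEND(payload(i), 1, MPI_INTEGER, i, 0, MPI_COMM_WORLD, requests(i), ierr)
        end do
        call MPI_WAITALL(nworkers, requests, statuses, ierr)
      else if (rank <= nworkers) then
        call MPI_RECV(recvbuf, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
      end if

      call MPI_FINALIZE(ierr)
    end program isend_sketch

Whether this actually beats a loop of blocking sends for very short messages is exactly the kind of question the benchmark Olivier asks about would answer; any win usually comes from letting several transfers progress at once rather than from speeding up any single message.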