Re: [OMPI users] I have still a problem with rankfiles in openmpi-1.6.4rc3

2013-02-05 Thread Jeff Squyres (jsquyres)
On Feb 5, 2013, at 2:18 PM, Eugene Loh wrote: > Sorry for the dumb question, but who maintains this code? OMPI, or upstream in the hwloc project? Where should the fix be made? The version of hwloc in the v1.6 series is frozen at a somewhat-older version of hwloc

Re: [OMPI users] I have still a problem with rankfiles in openmpi-1.6.4rc3

2013-02-05 Thread Eugene Loh
On 02/05/13 13:20, Eugene Loh wrote: On 02/05/13 00:30, Siegmar Gross wrote: now I can use all our machines once more. I have a problem on Solaris 10 x86_64, because the mapping of processes doesn't correspond to the rankfile. A few comments. First of all, the heterogeneous environment had

Re: [OMPI users] I have still a problem with rankfiles in openmpi-1.6.4rc3

2013-02-05 Thread Eugene Loh
On 02/05/13 00:30, Siegmar Gross wrote: now I can use all our machines once more. I have a problem on Solaris 10 x86_64, because the mapping of processes doesn't correspond to the rankfile. I removed the output from "hostfile" and wrapped long lines. tyr rankfiles 114 cat rf_ex_sunpc #

Re: [OMPI users] control openmpi or force to use pbs?

2013-02-05 Thread Gus Correa
On 02/05/2013 08:52 AM, Jeff Squyres (jsquyres) wrote: To add to what Reuti said, if you enable PBS support in Open MPI, when users "mpirun ..." in a PBS job, Open MPI will automatically use the PBS native launching mechanism, which won't let you run outside of the servers allocated to that

Re: [OMPI users] control openmpi or force to use pbs?

2013-02-05 Thread John Hearns
LART your users. It's the only way. They will thank you for it, eventually. www.catb.org/jargon/html/L/LART.html

Re: [OMPI users] I have still a problem with rankfiles in openmpi-1.6.4rc3

2013-02-05 Thread Jeff Squyres (jsquyres)
Siegmar -- We've been talking about this offline. Can you send us an lstopo output from your Solaris machine? Send us the text output and the xml output, e.g.: "lstopo > solaris.txt" and "lstopo solaris.xml". Thanks! On Feb 5, 2013, at 12:30 AM, Siegmar Gross
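
Spelled out, those are two separate invocations; a minimal sketch using the file names from the message (lstopo infers the XML exporter from the .xml extension):

# Text rendering of the topology, redirected to a file
lstopo > solaris.txt
# A file name ending in .xml makes lstopo export XML
lstopo solaris.xml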

Re: [OMPI users] Checkpointing an MPI application with OMPI

2013-02-05 Thread Josh Hursey
This is a bit late in the thread, but I wanted to add one more note. The functionality that made it to v1.6 is fairly basic in terms of C/R support in Open MPI. It supported a global checkpoint write, and (for a time) a simple staged option (I think that is now broken). In the trunk (about 3
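
For context, a rough sketch of how that v1.6-era global-checkpoint support was typically driven, assuming a build configured with --with-ft=cr plus BLCR, and with <mpirun_pid> as a placeholder for the job's mpirun process ID:

# Launch with the checkpoint/restart framework enabled
mpirun -np 4 -am ft-enable-cr ./my_app &
# Take a global checkpoint of the running job
ompi-checkpoint <mpirun_pid>
# Later, restart from the resulting global snapshot
ompi-restart ompi_global_snapshot_<mpirun_pid>.ckpt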

Re: [OMPI users] control openmpi or force to use pbs?

2013-02-05 Thread Jeff Squyres (jsquyres)
To add to what Reuti said, if you enable PBS support in Open MPI, when users "mpirun ..." in a PBS job, Open MPI will automatically use the PBS native launching mechanism, which won't let you run outside of the servers allocated to that job. Concrete example: if you qsub a job and are
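
A sketch of the setup being described (the PBS install path, resource request, and program name are illustrative): build Open MPI against the PBS TM library, then launch inside the job script without any hostfile:

# At build time: enable PBS/Torque (TM) support in Open MPI
./configure --with-tm=/opt/pbs && make && make install

# In the PBS job script: mpirun reads the node allocation from TM,
# so it can only launch on the nodes PBS assigned to the job
#PBS -l nodes=2:ppn=8
cd $PBS_O_WORKDIR
mpirun ./my_app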

Re: [OMPI users] control openmpi or force to use pbs?

2013-02-05 Thread Reuti
On 05.02.2013 at 11:24, Duke Nguyen wrote: > Please advise me how to force our users to use pbs instead of "mpirun --hostfile"? Or how do I control mpirun so that any user using "mpirun --hostfile" will not overload the cluster? We have OpenMPI

Re: [OMPI users] OpenMPI 1.6 with Intel 11 Compiler

2013-02-05 Thread Matthias Jurenz
Did you verify that your icpc works properly? Can you compile other C++ applications with icpc? It might be that your version of icpc isn't supported with that version of gcc. I've found a ticket where a similar problem was reported: https://svn.open-mpi.org/trac/ompi/ticket/3077 The solution
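
A quick way to run that sanity check (file and program names are illustrative):

# Does icpc run at all, and which version is it?
icpc --version
# Try a trivial C++ program outside of Open MPI's build system
cat > hello.cpp <<'EOF'
#include <iostream>
int main() { std::cout << "icpc works\n"; return 0; }
EOF
icpc hello.cpp -o hello && ./hello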

Re: [OMPI users] All_to_allv algorithm patch

2013-02-05 Thread Iliev, Hristo
Hi, This is the users mailing list. There is a separate one for questions related to Open MPI development - de...@open-mpi.org. Besides, why don't you open a ticket in the Open MPI Trac at https://svn.open-mpi.org/trac/ompi/ and post your patches against the trunk there? My experience shows that even

Re: [OMPI users] I have still a problem with rankfiles in openmpi-1.6.4rc3

2013-02-05 Thread Siegmar Gross
Hi, now I can use all our machines once more. I have a problem on Solaris 10 x86_64, because the mapping of processes doesn't correspond to the rankfile. I removed the output from "hostfile" and wrapped long lines. tyr rankfiles 114 cat rf_ex_sunpc # mpiexec -report-bindings -rf
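
For readers following along, the general shape of a rankfile and the matching launch line looks like this (hostnames and socket:core pairs are illustrative, not Siegmar's actual file):

# rankfile syntax: rank <n>=<host> slot=<socket>:<core>
cat > rf_example <<'EOF'
rank 0=sunpc1 slot=0:0
rank 1=sunpc1 slot=0:1
rank 2=sunpc2 slot=1:0
rank 3=sunpc2 slot=1:1
EOF
mpiexec -report-bindings -rf rf_example -np 4 ./my_app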

Re: [OMPI users] MPI_THREAD_FUNNELED and enable-mpi-thread-multiple

2013-02-05 Thread Roland Schulz
On Mon, Jan 28, 2013 at 9:20 PM, Brian Budge wrote: > I believe that yes, you have to compile with --enable-mpi-thread-multiple to > get anything other than SINGLE. > I just tested that compiling with --enable-opal-multi-threads also makes MPI_Init_thread return
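
A small probe along the lines of that test (file and program names are illustrative): request MPI_THREAD_MULTIPLE and print the level the library actually provides. On a build without --enable-mpi-thread-multiple, provided typically comes back as MPI_THREAD_SINGLE.

cat > thread_probe.c <<'EOF'
#include <mpi.h>
#include <stdio.h>
int main(int argc, char **argv) {
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    printf("provided = %d (SINGLE=%d FUNNELED=%d SERIALIZED=%d MULTIPLE=%d)\n",
           provided, MPI_THREAD_SINGLE, MPI_THREAD_FUNNELED,
           MPI_THREAD_SERIALIZED, MPI_THREAD_MULTIPLE);
    MPI_Finalize();
    return 0;
}
EOF
mpicc thread_probe.c -o thread_probe && mpirun -np 1 ./thread_probe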