Re: [OMPI users] can't run MPI job under SGE

2019-07-25 Thread Reuti via users
On 25.07.2019 at 23:00, David Laidlaw wrote: > Here is most of the command output when run on a grid machine: > > dblade65.dhl(101) mpiexec --version > mpiexec (OpenRTE) 2.0.2 This is quite old. I would suggest installing a fresh one. You can even compile one in your home directory and
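Reuti's suggestion (building a current Open MPI under your home directory) can be sketched roughly as below; the version number, download URL, and install prefix are illustrative assumptions, not details from the thread. The actual build commands are shown as comments to run by hand.

```shell
# Sketch of a user-local Open MPI build (version and paths are examples).
OMPI_VER=4.1.6
PREFIX="$HOME/openmpi-$OMPI_VER"

# Build steps, to be run manually:
#   wget https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-$OMPI_VER.tar.bz2
#   tar xjf openmpi-$OMPI_VER.tar.bz2
#   cd openmpi-$OMPI_VER
#   ./configure --prefix="$PREFIX" --with-sge   # --with-sge enables the gridengine (SGE) integration
#   make -j4 all install
echo "Would install into: $PREFIX"
```

Afterwards, put `$PREFIX/bin` first on your PATH so the new `mpiexec` shadows the system 2.0.2 one.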

Re: [OMPI users] can't run MPI job under SGE

2019-07-25 Thread David Laidlaw via users
Here is most of the command output when run on a grid machine: dblade65.dhl(101) mpiexec --version mpiexec (OpenRTE) 2.0.2 dblade65.dhl(102) ompi_info | grep grid MCA ras: gridengine (MCA v2.1.0, API v2.0.0, Component v2.0.2) dblade65.dhl(103) c denied: host

Re: [OMPI users] can't run MPI job under SGE

2019-07-25 Thread David Laidlaw via users
Thanks for the input, John. Here are some responses (inline): On Thu, Jul 25, 2019 at 1:21 PM John Hearns via users < users@lists.open-mpi.org> wrote: > Have you checked your ssh between nodes? > ssh is not allowed between nodes, but my understanding is that processes should be getting set up

Re: [OMPI users] can't run MPI job under SGE

2019-07-25 Thread Reuti via users
On 25.07.2019 at 18:59, David Laidlaw via users wrote: > I have been trying to run some MPI jobs under SGE for almost a year without > success. What seems like a very simple test program fails; the ingredients > of it are below. Any suggestions on any piece of the test, reasons for >
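For tight SGE integration, Open MPI's gridengine support needs a parallel environment with `control_slaves TRUE`, which is a frequent cause of "can't run under SGE" failures. A commonly used PE definition, shown as a sketch (the PE name and slot count are examples, not from the thread):

```
pe_name            orte
slots              999
user_lists         NONE
xuser_lists        NONE
start_proc_args    NONE
stop_proc_args     NONE
allocation_rule    $fill_up
control_slaves     TRUE
job_is_first_task  FALSE
```

Attach the PE to a queue and submit with e.g. `qsub -pe orte 8 job.sh`; Open MPI then reads the SGE-provided host list instead of needing ssh between nodes.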

Re: [OMPI users] Question about OpenMPI paths

2019-07-25 Thread Ewen Chan via users
All: Whoops. My apologies to everybody. Accidentally pressed the wrong combination of buttons on the keyboard and sent this email out prematurely. Please disregard. Thank you. Sincerely, Ewen From: users on behalf of Ewen Chan via users Sent: July 25,

Re: [OMPI users] can't run MPI job under SGE

2019-07-25 Thread John Hearns via users
Have you checked your ssh between nodes? Also, how is your PATH set up? There is a difference between interactive and non-interactive login sessions. I advise: A. Construct a hosts file and mpirun by hand. B. Use modules rather than .bashrc files. C. Slurm. On Thu, 25 Jul 2019, 18:00 David Laidlaw
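Suggestion A above (a hand-written hostfile plus a manual mpirun, bypassing SGE to isolate the problem) can be sketched as follows; the hostnames and slot counts are placeholders. The mpirun invocation is left as a comment since it requires the actual cluster.

```shell
# Write a hostfile by hand (placeholder node names and slot counts).
cat > myhosts <<'EOF'
node01 slots=4
node02 slots=4
EOF

# Then launch outside the scheduler to test the MPI setup itself (run manually):
#   mpirun --hostfile myhosts -np 8 ./my_mpi_program
```

If this manual launch works but the SGE submission does not, the problem lies in the scheduler integration rather than in Open MPI itself.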

[OMPI users] can't run MPI job under SGE

2019-07-25 Thread David Laidlaw via users
I have been trying to run some MPI jobs under SGE for almost a year without success. What seems like a very simple test program fails; the ingredients of it are below. Any suggestions on any piece of the test, reasons for failure, requests for additional info, configuration thoughts, etc. would

Re: [OMPI users] bash: orted: command not found -- ran through the FAQ already

2019-07-25 Thread Jeff Squyres (jsquyres) via users
On Jul 25, 2019, at 10:31 AM, Ewen Chan via users wrote: > > Here's my configuration: > > OS: CentOS 7.6.1810 x86_64 (it's a fresh install. I installed it last night.) > OpenMPI version: 1.10.7 (that was the version that was available in the > CentOS install repo) > path to mpirun:

[OMPI users] bash: orted: command not found -- ran through the FAQ already

2019-07-25 Thread Ewen Chan via users
To Whom It May Concern: I'm trying to run Converge CFD by Converge Science using OpenMPI and I am getting the error: bash: orted: command not found I've already read and executed the FAQ about adding OpenMPI to my PATH and LD_LIBRARY_PATH

[OMPI users] Question about OpenMPI paths

2019-07-25 Thread Ewen Chan via users
To Whom It May Concern: I am trying to run Converge CFD by Converge Science using OpenMPI in CentOS 7.6.1810 x86_64 and I am getting the error: bash: orted: command not found I've already read the FAQ: https://www.open-mpi.org/faq/?category=running#adding-ompi-to-path Here's my system setup,
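The FAQ fix referenced above amounts to putting Open MPI's `bin/` and `lib/` directories on PATH and LD_LIBRARY_PATH on every node. A minimal sketch, assuming the stock CentOS 7 package prefix `/usr/lib64/openmpi` (an assumption; verify with `which mpirun` on your own system):

```shell
# Assumed prefix for the CentOS-packaged Open MPI; adjust to your install.
MPI_PREFIX=/usr/lib64/openmpi
export PATH="$MPI_PREFIX/bin:$PATH"
export LD_LIBRARY_PATH="$MPI_PREFIX/lib:${LD_LIBRARY_PATH:-}"
```

For "bash: orted: command not found", these exports must also take effect in non-interactive shells on the remote nodes (i.e. before any early-exit test in `~/.bashrc`); alternatively, `mpirun --prefix "$MPI_PREFIX" ...` pushes the prefix to the remote side without editing dotfiles.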

Re: [OMPI users] How is the rank determined (Open MPI and Podman)

2019-07-25 Thread Adrian Reber via users
On Wed, Jul 24, 2019 at 09:46:13PM +, Jeff Squyres (jsquyres) wrote: > On Jul 24, 2019, at 5:16 PM, Ralph Castain via users > wrote: > > > > It doesn't work that way, as you discovered. You need to add this > > information at the same place where vader currently calls modex send, and > >