Re: [OMPI users] openmpi 1.6.3, job submitted through torque/PBS + Moab (scheduler) only land on one node even though multiple nodes/processors are specified

2013-01-24 Thread Ralph Castain
Sure - just add --with-openib=no --with-psm=no to your config line and we'll ignore it.

On Jan 24, 2013, at 7:09 AM, Sabuj Pattanayek wrote:
> ahha, with --display-allocation I'm getting :
>
> mca: base: component_find: unable to open >
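
A full configure invocation along those lines might look like the sketch below (the install prefix and Torque path are placeholders; only the --with-openib/--with-psm flags come from this thread):

  ./configure --prefix=/sb/apps/openmpi/1.6.3/x86_64 \
      --with-tm=/usr/local/torque \
      --with-openib=no --with-psm=no
  make -j4 && make install

--with-tm points configure at the Torque/PBS libraries so mpirun can read the job's node allocation directly from the batch system.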

Re: [OMPI users] openmpi 1.6.3, job submitted through torque/PBS + Moab (scheduler) only land on one node even though multiple nodes/processors are specified

2013-01-24 Thread Brock Palen
On Jan 24, 2013, at 10:10 AM, Sabuj Pattanayek wrote:
> or do i just need to compile two versions, one with IB and one without?

You should not need to; we have OMPI compiled for openib/psm and run that same install on psm/tcp and verbs (openib) based gear. All the nodes assigned to your job
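
If a single build is used everywhere, the transports can also be narrowed at run time rather than at configure time; for example (the MCA parameter choices and ./my_app below are illustrative, not prescribed in the thread):

  # skip the PSM MTL on machines without the InfiniPath libraries
  mpirun --mca mtl ^psm -np 8 ./my_app

  # or force plain TCP plus shared-memory transports
  mpirun --mca btl tcp,sm,self -np 8 ./my_app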

Re: [OMPI users] openmpi 1.6.3, job submitted through torque/PBS + Moab (scheduler) only land on one node even though multiple nodes/processors are specified

2013-01-24 Thread Sabuj Pattanayek
or do i just need to compile two versions, one with IB and one without?

On Thu, Jan 24, 2013 at 9:09 AM, Sabuj Pattanayek wrote:
> ahha, with --display-allocation I'm getting :
>
> mca: base: component_find: unable to open >

Re: [OMPI users] openmpi 1.6.3, job submitted through torque/PBS + Moab (scheduler) only land on one node even though multiple nodes/processors are specified

2013-01-24 Thread Sabuj Pattanayek
ahha, with --display-allocation I'm getting :

mca: base: component_find: unable to open /sb/apps/openmpi/1.6.3/x86_64/lib/openmpi/mca_mtl_psm: libpsm_infinipath.so.1: cannot open shared object file: No such file or directory (ignored)

I think the system I compiled it on has different ib libs
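
A quick way to see which shared libraries a component actually needs is to run ldd against the plugin file (the .so suffix below is assumed, since the error message truncates the path):

  ldd /sb/apps/openmpi/1.6.3/x86_64/lib/openmpi/mca_mtl_psm.so | grep "not found"

Any line reporting "not found" names a library (here libpsm_infinipath.so.1) that is missing on the node where the job runs.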

Re: [OMPI users] openmpi 1.6.3, job submitted through torque/PBS + Moab (scheduler) only land on one node even though multiple nodes/processors are specified

2013-01-24 Thread Ralph Castain
How did you configure OMPI? If you add --display-allocation to your cmd line, does it show all the nodes?

On Jan 24, 2013, at 6:34 AM, Sabuj Pattanayek wrote:
> Hi,
>
> I'm submitting a job through torque/PBS, the head node also runs the
> Moab scheduler, the .pbs file has
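
For reference, the option goes straight on the mpirun command line, e.g. (the executable name is a placeholder):

  mpirun --display-allocation -np 8 ./my_app

This prints the node list mpirun received from the resource manager before launching, which makes it easy to see whether the Torque allocation is being picked up at all.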

[OMPI users] openmpi 1.6.3, job submitted through torque/PBS + Moab (scheduler) only land on one node even though multiple nodes/processors are specified

2013-01-24 Thread Sabuj Pattanayek
Hi,

I'm submitting a job through torque/PBS; the head node also runs the Moab scheduler. The .pbs file has this in the resources line :

#PBS -l nodes=2:ppn=4

I've also tried something like :

#PBS -l procs=56

and at the end of the script I'm running :

mpirun -np 8 cat /dev/urandom > /dev/null

or
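
A minimal PBS script matching that description might look like the sketch below (walltime and the executable name are placeholders; when Open MPI is built with Torque support, mpirun takes the node list from the job's allocation, so no hostfile is needed):

  #!/bin/bash
  #PBS -l nodes=2:ppn=4
  #PBS -l walltime=00:10:00

  cd $PBS_O_WORKDIR
  # 2 nodes x 4 ppn = 8 slots, so -np 8 should spread across both nodes
  mpirun -np 8 ./my_app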