Re: [OMPI users] bproc problems

2007-04-27 Thread Daniel Gruner
Thanks to both you and David Gunter. I disabled pty support and it now works. There is still the issue of the mpirun default being "-byslot", which causes all kinds of trouble. Only by using "-bynode" do things work properly. Daniel
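For readers hitting the same scheduling issue, a minimal invocation along the following lines (the process count and program name are illustrative, not taken from the thread) forces the round-robin-by-node mapping that Daniel reports working:

  # place ranks round-robin across nodes instead of filling each node's slots first
  mpirun -bynode -np 8 ./my_mpi_app

Since "-byslot" is the default in this release, "-bynode" has to be given explicitly on each run.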

Re: [OMPI users] bproc problems

2007-04-26 Thread gshipman
There is a known issue on BProc 4 w.r.t. pty support. Open MPI by default will try to use ptys for I/O forwarding but will revert to pipes if ptys are not available. You can "safely" ignore the pty warnings, or you may want to rerun configure and add --disable-pty-support. I say "safely"
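As a sketch of the reconfigure step described above (the install prefix is an assumption, not taken from the thread):

  # rebuild Open MPI with pty support compiled out, so I/O forwarding
  # uses pipes from the start instead of falling back after openpty fails
  ./configure --prefix=/opt/openmpi-1.2.1 --disable-pty-support
  make all install

After reinstalling, the openpty warnings should no longer appear on the BProc nodes.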

Re: [OMPI users] bproc problems

2007-04-26 Thread David Gunter
You can eliminate the "[n17:30019] odls_bproc: openpty failed, using pipes instead" message by configuring OMPI with the --disable-pty-support flag, as there is a bug in BProc that causes that to happen. -david -- David Gunter HPC-4: HPC Environments: Parallel Tools Team Los Alamos National L

[OMPI users] bproc problems

2007-04-26 Thread Daniel Gruner
Hi, I have been testing OpenMPI 1.2, and now 1.2.1, on several BProc-based clusters, and I have found some problems/issues. All my clusters have standard ethernet interconnects, either 100Base-T or Gigabit, on standard switches. The clusters are all running Clustermatic 5 (BProc 4.x), and range