Re: [OMPI users] long initialization

2014-08-28 Thread Timur Ismagilov
In OMPI 1.9a1r32604 I get much better results: $ time mpirun --mca oob_tcp_if_include ib0 -np 1 ./hello_c Hello, world, I am 0 of 1, (Open MPI v1.9a1, package: Open MPI semenov@compiler-2 Distribution, ident: 1.9a1r32604, repo rev: r32604, Aug 26, 2014 (nightly snapshot tarball), 146) real 0m4.1

Re: [OMPI users] long initialization

2014-08-28 Thread Timur Ismagilov
I enclose two files with the output of the two following commands (OMPI 1.9a1r32570): $time mpirun --leave-session-attached -mca oob_base_verbose 100 -np 1 ./hello_c >& out1.txt (Hello, world, I am ) real 1m3.952s user 0m0.035s sys 0m0.107s $time mpirun --leave-session-attached -mca oob_base_verbose

Re: [OMPI users] long initialization

2014-08-27 Thread Ralph Castain
How bizarre. Please add "--leave-session-attached -mca oob_base_verbose 100" to your cmd line On Aug 27, 2014, at 4:31 AM, Timur Ismagilov wrote: > When I try to specify the OOB interface with --mca oob_tcp_if_include <interface from ifconfig>, I always get an error: > > $ mpirun --mca oob_tcp_if_include ib0 -np 1 ./he
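The debug flags Ralph suggests can be combined with the failing command from the thread. A minimal sketch (assuming the ib0 IPoIB interface and the hello_c binary used elsewhere in this thread):

```shell
# Keep daemons attached to the terminal and turn on OOB framework
# debug output, so the launch failure is visible on stderr.
mpirun --leave-session-attached \
       -mca oob_base_verbose 100 \
       --mca oob_tcp_if_include ib0 \
       -np 1 ./hello_c
```

Redirecting with ">& out.txt" (as Timur does in a later message) captures the verbose output for posting to the list.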

Re: [OMPI users] long initialization

2014-08-27 Thread Timur Ismagilov
When I try to specify the OOB interface with --mca oob_tcp_if_include, I always get an error: $ mpirun  --mca oob_tcp_if_include ib0 -np 1 ./hello_c -- An ORTE daemon has unexpectedly failed after launch and before communicating back to mpiru

Re: [OMPI users] long initialization

2014-08-26 Thread Ralph Castain
I think something may be messed up with your installation. I went ahead and tested this on a Slurm 2.5.4 cluster, and got the following: $ time mpirun -np 1 --host bend001 ./hello Hello, World, I am 0 of 1 [0 local peers]: get_cpubind: 0 bitmap 0,12 real 0m0.086s user 0m0.039s sys 0m0.

Re: [OMPI users] long initialization

2014-08-26 Thread Timur Ismagilov
I'm using Slurm 2.5.6 $salloc -N8 --exclusive -J ompi -p test $ srun hostname node1-128-21 node1-128-24 node1-128-22 node1-128-26 node1-128-27 node1-128-20 node1-128-25 node1-128-23 $ time mpirun -np 1 --host node1-128-21 ./hello_c Hello, world, I am 0 of 1, (Open MPI v1.9a1, package: Open MPI s

Re: [OMPI users] long initialization

2014-08-26 Thread Timur Ismagilov
Hello! Here are my timing results: $time mpirun -n 1 ./hello_c Hello, world, I am 0 of 1, (Open MPI v1.9a1, package: Open MPI semenov@compiler-2 Distribution, ident: 1.9a1r32570, repo rev: r32570, Aug 21, 2014 (nightly snapshot tarball), 146) real 1m3.985s user 0m0.031s sys 0m0.083s Fri, 22 Aug 2

Re: [OMPI users] long initialization

2014-08-22 Thread Ralph Castain
I'm also puzzled by your timing statement - I can't replicate it: 07:41:43 $ time mpirun -n 1 ./hello_c Hello, world, I am 0 of 1, (Open MPI v1.9a1, package: Open MPI rhc@bend001 Distribution, ident: 1.9a1r32577, repo rev: r32577, Unreleased developer copy, 125) real 0m0.547s user 0m0.04

Re: [OMPI users] long initialization

2014-08-22 Thread Mike Dubman
Hi, The default delimiter is ";". You can change the delimiter with mca_base_env_list_delimiter. On Fri, Aug 22, 2014 at 2:59 PM, Timur Ismagilov wrote: > Hello! > If I use the latest nightly snapshot: > > $ ompi_info -V > Open MPI v1.9a1r32570 > >1. In programm hello_c initialization takes ~1 min

[OMPI users] long initialization

2014-08-22 Thread Timur Ismagilov
Hello! If I use the latest nightly snapshot: $ ompi_info -V Open MPI v1.9a1r32570 * In the program hello_c, initialization takes ~1 min In ompi 1.8.2rc4 and earlier it takes ~1 sec (or less) * if I use  $mpirun  --mca mca_base_env_list 'MXM_SHM_KCOPY_MODE=off,OMP_NUM_THREADS=8' --map-by slot:pe=8 -np 1 ./h