Re: [OMPI users] Default value of btl_openib_memalign_threshold

2015-05-25 Thread Xavier Besseron
Hi, thanks for your reply Ralph. The only option I'm using when configuring OpenMPI is '--prefix'. When checking the config.log file, I see:
configure:208504: checking whether the openib BTL will use malloc hooks
configure:208510: result: yes
so I guess it is properly enabled (full config.log

Re: [OMPI users] Default value of btl_openib_memalign_threshold

2015-05-25 Thread Ralph Castain
I found the problem - someone had a typo in btl_openib_mca.c. The threshold needs to be set to the module eager limit, as that is the only thing defined at that point. Thanks for bringing it to our attention! I’ll set it up to go into 1.8.6 > On May 25, 2015, at 3:04 AM, Xavier Besseron

[OMPI users] NAS Parallel Benchmark implementation for (open) MPI/C

2015-05-25 Thread etcamargo
Hi, all. I am looking for a NAS Parallel Benchmark (NAS-PB) reference implementation coded in the MPI/C language. I see that the NAS official website has an MPI/Fortran implementation. Is there a NAS-PB reference implementation in (Open)MPI/C? Thanks in advance, Edson

Re: [OMPI users] Default value of btl_openib_memalign_threshold

2015-05-25 Thread Xavier Besseron
It's good that it will be fixed in the next release! In the meantime, and because it might impact other users, I would like to ask my sysadmins to set btl_openib_memalign_threshold=12288 in etc/openmpi-mca-params.conf on our clusters. Do you see any good reason not to do it? Thanks! Xavier On
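The change Xavier describes could be applied cluster-wide by appending the following to $prefix/etc/openmpi-mca-params.conf (the parameter name and value are from the thread; the path follows Open MPI's default install layout):

```
# Work around the openib memalign threshold default until 1.8.6 ships
btl_openib_memalign_threshold = 12288
```

Per-user overrides in $HOME/.openmpi/mca-params.conf or on the mpirun command line would still take precedence over this system-wide file.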

[OMPI users] MXM problem

2015-05-25 Thread Timur Ismagilov
Hello! I use ompi-v1.8.4 from hpcx-v1.3.0-327-icc-OFED-1.5.3-redhat6.2; OFED-1.5.4.1; CentOS release 6.2; InfiniBand 4x FDR. I have two problems: 1. I cannot use mxm: 1.a) $mpirun --mca pml cm --mca mtl mxm -host node5,node14,node28,node29 -mca plm_rsh_no_tree_spawn 1 -np 4 ./hello

Re: [OMPI users] Default value of btl_openib_memalign_threshold

2015-05-25 Thread Ralph Castain
I don’t see a problem with it. FWIW: I’m getting ready to release 1.8.6 in the next week > On May 25, 2015, at 8:46 AM, Xavier Besseron wrote: > > Good that it will be fixed in the next release! > > In the meantime, and because it might impact other users, > I would

Re: [OMPI users] MXM problem

2015-05-25 Thread Ralph Castain
I can’t speak to the mxm problem, but the no-tree-spawn issue indicates that you don’t have password-less ssh authorized between the compute nodes > On May 25, 2015, at 8:55 AM, Timur Ismagilov wrote: > > Hello! > > I use ompi-v1.8.4 from
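Ralph's point can be checked non-interactively: with BatchMode enabled, ssh fails instead of prompting for a password. A minimal sketch, using the node names from Timur's command line (these are site-specific; substitute your own hosts):

```shell
# For each node, try a password-less login. BatchMode=yes forbids password
# prompts, so a node that would prompt is recorded as a plain failure.
results=""
for n in node5 node14 node28 node29; do
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "$n" true 2>/dev/null; then
    results="$results $n:ok"
  else
    results="$results $n:fail"
  fi
done
echo "results:$results"
```

For tree spawn to work, this check must succeed not just from the head node but between every pair of compute nodes.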

Re: [OMPI users] MXM problem

2015-05-25 Thread Timur Ismagilov
I can password-less ssh to all nodes:
base$ ssh node1
node1$ ssh node2
Last login: Mon May 25 18:41:23
node2$ ssh node3
Last login: Mon May 25 16:25:01
node3$ ssh node4
Last login: Mon May 25 16:27:04
node4$
Is this correct? In ompi-1.9 I do not have the no-tree-spawn problem. Monday, May 25

Re: [OMPI users] MXM problem

2015-05-25 Thread Mike Dubman
Hi Timur, it seems that the yalla component was not found in your OMPI tree. Could it be that your mpirun is not from HPC-X? Can you please check LD_LIBRARY_PATH, PATH, LD_PRELOAD and OPAL_PREFIX to make sure they point to the right mpirun? Also, could you please check that yalla is present in the ompi_info -l
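A quick sketch of the environment check Mike suggests: print each of the variables he lists and flag unset ones, then show which mpirun the shell would actually launch (the variable list comes from his message; nothing here is HPC-X-specific):

```shell
# Show the loader/launcher environment that decides which mpirun runs.
# printenv exits non-zero for an unset variable, so '<unset>' is substituted.
for v in PATH LD_LIBRARY_PATH LD_PRELOAD OPAL_PREFIX; do
  printf '%s=%s\n' "$v" "$(printenv "$v" || echo '<unset>')"
done
printf 'mpirun resolves to: %s\n' "$(command -v mpirun || echo '<not found>')"
```

If mpirun does not resolve to a path under the HPC-X tree, the yalla/mxm components it ships will not be visible to ompi_info.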

Re: [OMPI users] MXM problem

2015-05-25 Thread Timur Ismagilov
Hi Mike, this is what I have:
$ echo $LD_LIBRARY_PATH | tr ":" "\n"
/gpfs/NETHOME/oivt1/nicevt/itf/sources/hpcx-v1.3.0-327-icc-OFED-1.5.3-redhat6.2/fca/lib
/gpfs/NETHOME/oivt1/nicevt/itf/sources/hpcx-v1.3.0-327-icc-OFED-1.5.3-redhat6.2/hcoll/lib

Re: [OMPI users] MXM problem

2015-05-25 Thread Mike Dubman
scif is an OFA device from Intel. Can you please set export MXM_IB_PORTS=mlx4_0:1 explicitly and retry? On Mon, May 25, 2015 at 8:26 PM, Timur Ismagilov wrote: > Hi Mike, > this is what I have: > > $ echo $LD_LIBRARY_PATH | tr ":" "\n" >
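Following Mike's suggestion, the device can be pinned before launching anything, so MXM does not auto-select the scif device (mlx4_0:1 is the device:port pair from his reply; substitute whatever HCA and port ibstat reports on your nodes):

```shell
# Restrict MXM to one specific HCA port instead of letting it auto-select.
export MXM_IB_PORTS=mlx4_0:1
echo "MXM_IB_PORTS=$MXM_IB_PORTS"
```

The same assignment can also be passed per-job, e.g. via mpirun's -x option, instead of exporting it in the shell.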

[OMPI users] Fwd: Re[4]: MXM problem

2015-05-25 Thread Timur Ismagilov
I did as you said, but got an error:
node1$ export MXM_IB_PORTS=mlx4_0:1
node1$ ./mxm_perftest
Waiting for connection...
Accepted