Re: [OMPI devel] Question about Open MPI bindings

2016-09-05 Thread r...@open-mpi.org
> On Sep 5, 2016, at 11:25 AM, George Bosilca wrote: Thanks for all these suggestions. I could get the expected bindings by 1) removing the vm and 2) adding hetero. This is far from an ideal setting, as now I have to make my own machinefile for every single run, or spawn daemons on …
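
A minimal way to check whether the bindings really are the expected ones is mpirun's --report-bindings option, which prints each rank's binding mask at launch. The executable name and process count below are placeholders, not from the thread:

    # print each rank's socket/core binding as the job starts;
    # ./a.out and -np 4 are illustrative only
    mpirun --report-bindings --bind-to core -np 4 ./a.out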

Re: [OMPI devel] Question about Open MPI bindings

2016-09-05 Thread George Bosilca
Thanks for all these suggestions. I could get the expected bindings by 1) removing the vm and 2) adding hetero. This is far from an ideal setting, as now I have to make my own machinefile for every single run, or spawn daemons on all the machines on the cluster. Wouldn't it be useful to make the d…
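
A rough sketch of that workaround, assuming Open MPI's --hetero-nodes and --hostfile options; the hostname, slot count, and executable are illustrative, not taken from the thread:

    # hand-written machinefile listing only the nodes that will run tasks
    cat > myhosts <<EOF
    arc00 slots=20
    EOF

    # run with --hetero-nodes (and without --novm) so each node's topology
    # is inspected rather than assumed to match the head node
    mpirun --hetero-nodes --hostfile myhosts -np 20 --bind-to core ./a.out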

Re: [OMPI devel] Question about Open MPI bindings

2016-09-03 Thread r...@open-mpi.org
Ah, indeed - if the node where mpirun is executing doesn’t match the compute nodes, then you must remove that --novm option. Otherwise, we have no way of knowing what the compute node topology looks like. > On Sep 3, 2016, at 4:13 PM, Gilles Gouaillardet wrote: George, If i unders…
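
A hedged sketch of the contrast being described; process count, binding policy, and executable are placeholders:

    # with --novm, mpirun maps processes without knowing the compute nodes' topology:
    mpirun --novm -np 20 --bind-to core ./a.out
    # dropping --novm lets the orted daemons report each node's actual topology first:
    mpirun -np 20 --bind-to core ./a.out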

Re: [OMPI devel] Question about Open MPI bindings

2016-09-03 Thread Gilles Gouaillardet
George, If I understand correctly, you are running mpirun on dancer, which has 2 sockets, 4 cores per socket, and 2 hwthreads per core, and the orteds are running on arc[00-08], though the tasks only run on arc00, which has 2 sockets, 10 cores per socket, and 2 hwthreads per core. To me, it looks like O…
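
One way to confirm a topology mismatch like this is to compare hwloc's lstopo output on the two machines (assuming lstopo is installed on both; the hostnames are the ones from the discussion):

    # head node (dancer): should show 2 sockets x 4 cores x 2 hwthreads
    lstopo --no-io
    # compute node (arc00): should show 2 sockets x 10 cores x 2 hwthreads
    ssh arc00 lstopo --no-io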