> On Sep 5, 2016, at 11:25 AM, George Bosilca wrote:
>
> Thanks for all these suggestions. I could get the expected bindings by 1)
> removing the vm and 2) adding hetero. This is far from an ideal setting, as
> now I have to make my own machinefile for every single run, or spawn
> daemons on all the machines on the cluster.
> Wouldn't it be useful to make the d
Ah, indeed - if the node where mpirun is executing doesn’t match the compute
nodes, then you must remove that --novm option. Otherwise, we have no way of
knowing what the compute node topology looks like.
> On Sep 3, 2016, at 4:13 PM, Gilles Gouaillardet wrote:
>
> George,
>
> If I understand correctly, you are running mpirun on dancer, which has
> 2 sockets, 4 cores per socket, and 2 hwthreads per core,
> and the orteds are running on arc[00-08], though the tasks only run on arc00,
> which has
> 2 sockets, 10 cores per socket, and 2 hwthreads per core.
> To me, it looks like O