Add --report-bindings to the mpirun command line.
Remember, we do not bind processes by default, so you will need to include
a binding option (by core, by socket, etc.) on the command line as well.
See "mpirun -h" for the options
On Feb 9, 2013, at 8:46 PM, Kranthi Kumar
I've been talking with Kranthi offline; he wants to use locality info
inside OMPI. He needs the binding info from *inside* MPI. From ten
thousand feet, it looks like communicator->rank[X]->locality_info as a
hwloc object or as a hwloc bitmap.
Brice
On 10/02/2013 06:07, Ralph Castain wrote:
Hi
> > You'll want to look at orte/mca/rmaps/rank_file/rmaps_rank_file.c
> > - the bit map is now computed in mpirun and then sent to the daemons
>
> Actually, I'm getting lost in this code. Anyhow, I don't think
> the problem is related to Solaris. I think it's also on Linux.
> E.g., I can
On 2/10/2013 1:14 AM, Siegmar Gross wrote:
I don't think the problem is related to Solaris. I think it's also on Linux.
E.g., I can reproduce the problem with 1.9a1r28035 on Linux using GCC compilers.
Siegmar: can you confirm this is a problem also on Linux? E.g.,
with OMPI 1.9, on one of
Hi everyone out there,
I am a newbie to HPC;
we have a couple of HPC clusters where I work.
So I started to create one in VMware Workstation. A lot of times I failed to run
an MPI job. I have followed the default configs.
I finally succeeded in installing, but I had a problem when running an MPI
The error message indicates that libnuma was not installed on at least one
node. That's a system library, not an OMPI one, so you'll need to get it
installed by someone with root privileges.
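For example, something along these lines, run as root on the node that is missing
it (package names vary by distribution, so treat these as typical names rather
than the exact ones for your system):

  # Debian/Ubuntu
  apt-get install libnuma1
  # RHEL/CentOS
  yum install numactl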
On Feb 10, 2013, at 12:04 PM, satya k wrote:
> Hi everyone,
>
> I'm getting the
What about *inside* OMPI?
Brice
On 10/02/2013 21:16, Ralph Castain wrote:
> There is no MPI standard call to get the binding. He could try to use the MPI
> extensions, depending on which version of OMPI he's using. It is in v1.6 and
> above.
>
> See "man OMPI_Affinity_str" for details
I honestly have no idea what you mean. Are you talking about inside an MPI
application? Do you mean from inside the MPI layer? Inside ORTE? Inside an ORTE
daemon?
On Feb 10, 2013, at 1:41 PM, Brice Goglin wrote:
> What about *inside* OMPI?
>
> Brice
>
>
>
> On
Inside the OMPI implementation. He wants to use locality information for
some sort of collective algorithm tuning (or something like that). He
needs the locality of all local ranks as far as I understood. I don't
know if that's ORTE or not, but that's inside some OMPI component at least.
Brice
I see - I think. The locality of every process is stored on the ompi_proc_t for
that process in the proc_flags field. You can find the definition of the values
in opal/mca/hwloc/hwloc.h.
On Feb 10, 2013, at 1:57 PM, Brice Goglin wrote:
> Inside the OMPI
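For illustration, inside an OMPI component the check can look roughly like this
(a sketch only: the OPAL_PROC_ON_LOCAL_* macro names and ompi_group_peer_lookup()
are what the trunk of that era uses, but verify them against
opal/mca/hwloc/hwloc.h and ompi/proc/proc.h in your tree):

  #include "ompi/communicator/communicator.h"
  #include "ompi/group/group.h"
  #include "ompi/proc/proc.h"
  #include "opal/mca/hwloc/hwloc.h"

  /* How does peer rank 'peer' in 'comm' relate to the calling process? */
  static void check_peer_locality(ompi_communicator_t *comm, int peer)
  {
      ompi_proc_t *proc = ompi_group_peer_lookup(comm->c_remote_group, peer);

      if (OPAL_PROC_ON_LOCAL_NODE(proc->proc_flags)) {
          /* same node: shared-memory friendly */
      }
      if (OPAL_PROC_ON_LOCAL_SOCKET(proc->proc_flags)) {
          /* same socket: even closer */
      }
  }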
Brice
I tried using this tarball. Things didn't work. (This particular run used 2
MPI processes with 32 OpenMP threads each.)
In my application, I first output the topology in a tree structure. (I do
this in my application instead of via one of hwloc's tools because I don't
want to call out to
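(For comparison, printing the topology as an indented tree with the plain
hwloc 1.x C API takes only a few lines; this is a generic sketch, not the
poster's code:)

  #include <stdio.h>
  #include <hwloc.h>

  static void print_tree(hwloc_obj_t obj, int depth)
  {
      unsigned i;
      /* indent by depth, then print object type and OS index */
      printf("%*s%s#%u\n", 2 * depth, "",
             hwloc_obj_type_string(obj->type), obj->os_index);
      for (i = 0; i < obj->arity; i++)
          print_tree(obj->children[i], depth + 1);
  }

  int main(void)
  {
      hwloc_topology_t topo;
      hwloc_topology_init(&topo);
      hwloc_topology_load(topo);
      print_tree(hwloc_get_root_obj(topo), 0);
      hwloc_topology_destroy(topo);
      return 0;
  }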