FWIW,

there has been one attempt to set the OMP_* environment variables within Open MPI, and that was aborted because it caused crashes with a prominent commercial compiler.

Also, I'd like to clarify that Open MPI binds MPI tasks (i.e. processes), and it is up to the OpenMP runtime to bind the OpenMP threads.
Thanks for the clarification. :)
2016-01-07 0:48 GMT+01:00 Jeff Hammond:
KMP_AFFINITY is an Intel OpenMP runtime setting, not an MKL option, although MKL will respect it since MKL uses the Intel OpenMP runtime (by default, at least).

The OpenMP 4.0 equivalents of KMP_AFFINITY are OMP_PROC_BIND and OMP_PLACES. I do not know how many OpenMP implementations support these settings.
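For readers following along, the rough correspondence can be sketched as below (a simplified illustration only — the real KMP_AFFINITY syntax has many more modifiers, and this table covers just the common values):

```python
# Rough correspondence between common KMP_AFFINITY values and the
# portable OpenMP 4.0 environment variables (simplified sketch).
KMP_TO_OMP = {
    "compact":  {"OMP_PROC_BIND": "close",  "OMP_PLACES": "cores"},
    "scatter":  {"OMP_PROC_BIND": "spread", "OMP_PLACES": "cores"},
    "disabled": {"OMP_PROC_BIND": "false",  "OMP_PLACES": ""},
}

def portable_equivalent(kmp_affinity: str) -> dict:
    """Return the OpenMP 4.0 env vars roughly matching a KMP_AFFINITY value."""
    return KMP_TO_OMP[kmp_affinity.strip().lower()]
```

So the thread's later `KMP_AFFINITY=compact` would correspond roughly to `OMP_PROC_BIND=close OMP_PLACES=cores` on runtimes that support the 4.0 variables.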
Ok, thanks :)
2016-01-06 22:03 GMT+01:00 Ralph Castain:
Not really - just consistent with the other cmd line options.
On Jan 6, 2016, at 12:58 PM, Nick Papior wrote:
It was just that when I started using map-by, I didn't get why it is:
ppr:2
but
PE=2
I would at least have expected:
ppr=2:PE=2
or
ppr:2:PE:2
Is there a reason for this?
2016-01-06 21:54 GMT+01:00 Ralph Castain:
ah yes, "r" = "resource"!! Thanks for the reminder :-)

The difference in delimiter is just to simplify parsing - we can "split" the string on colons to separate out the options, and then use "=" to set the value. Nothing particularly significant about the choice.
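As a toy illustration of that scheme (this is not Open MPI's actual parser, just a sketch of splitting on ":" for options and "=" for values):

```python
def parse_mapby(spec: str) -> dict:
    """Parse a --map-by spec such as 'ppr:2:socket:pe=7' into a dict.

    Sketch only: split on ':' to separate the options, then use '='
    to assign values; the bare 'ppr:<n>:<resource>' triple is handled
    specially, mirroring the syntax discussed in the thread.
    """
    tokens = spec.split(":")
    result = {}
    if tokens and tokens[0] == "ppr":
        # 'ppr:2:socket' means 2 processes per socket
        result["ppr"] = int(tokens[1])
        result["resource"] = tokens[2]
        tokens = tokens[3:]
    for tok in tokens:
        key, _, value = tok.partition("=")
        result[key] = int(value) if value.isdigit() else value
    return result
```

With this sketch, `parse_mapby("ppr:2:socket:pe=7")` yields 2 processes per socket with 7 processing elements each.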
Hmmm…let me see if I can remember :-)

Procs-per-object is what it does, of course, but I honestly forget what that last "r" stands for!

So what your command line is telling us is: map 2 processes on each socket, binding each process to 7 cpus ("pe" = processing element). In this case, we have 4 processes in total, each bound to 7 cpus.
You are correct. "socket" means that the resource is socket; "ppr:2" means 2 processes per resource.
PE= is Processing Elements per process.
Perhaps the devs can shed some light on why PE uses "=" while ppr has ":" as the delimiter for the resource request?
This "old" slide show from Jeff shows the usage
A ha! The Gurus know all. The map-by was the magic sauce:
(1176) $ env OMP_NUM_THREADS=7 KMP_AFFINITY=compact mpirun -np 4 -map-by
ppr:2:socket:pe=7 ./hello-hybrid.x | sort -g -k 18
Hello from thread 0 out of 7 from process 0 out of 4 on borgo035 on CPU 0
Hello from thread 1 out of 7 from process …
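The placement that command requests can be worked out with a little arithmetic. Below is a sketch assuming the 28-core node from the original question (2 sockets of 14 cores) and sequential core numbering per socket; the function name and the CPU IDs are illustrative, not what hwloc necessarily reports on real hardware:

```python
def expected_binding(sockets=2, cores_per_socket=14, ppr=2, pe=7):
    """Compute which cores each rank gets under 'ppr:<ppr>:socket:pe=<pe>'.

    Sketch only: assumes cores are numbered sequentially per socket;
    real CPU IDs depend on the machine's hwloc topology.
    """
    bindings = {}
    rank = 0
    for s in range(sockets):
        base = s * cores_per_socket
        for p in range(ppr):
            # each process gets a contiguous block of 'pe' cores
            bindings[rank] = list(range(base + p * pe, base + (p + 1) * pe))
            rank += 1
    return bindings
```

Under these assumptions, rank 0 gets cores 0-6, rank 1 gets 7-13, rank 2 gets 14-20, and rank 3 gets 21-27 — two ranks per socket, seven cores each, matching the hello-hybrid output above.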
Ah, yes, my example was for 10 cores per socket, good catch :)
2016-01-06 21:19 GMT+01:00 Ralph Castain:
I believe he wants two procs/socket, so you’d need ppr:2:socket:pe=7
Sure. Here's the basic one:
(1159) $ env OMP_NUM_THREADS=7 mpirun -np 4 ./hello-hybrid.x | sort -g -k 18
Hello from thread 3 out of 7 from process 0 out of 4 on borgo035 on CPU 0
Hello from thread 1 out of 7 from process 0 out of 4 on borgo035 on CPU 1
Hello from thread 4 out of 7 from process 0 …
I do not think KMP_AFFINITY should affect anything in Open MPI; it is an MKL env setting? Or am I wrong?

Note that these are used in an environment where Open MPI automatically gets the host-file, hence it is not present here.

With Intel MKL and Open MPI I got the best performance using these, rather …
Setting KMP_AFFINITY will probably override anything that OpenMPI
sets. Can you try without?
-erik
On Wed, Jan 6, 2016 at 2:46 PM, Matt Thompson wrote:
Hello Open MPI Gurus,

As I explore MPI-OpenMP hybrid codes, I'm trying to figure out how to get the same behavior in various stacks. For example, I have a 28-core node (2 14-core Haswells), and I'd like to run 4 MPI processes with 7 OpenMP threads each. Thus, I'd like the processes to be 2 per socket.
Hi,

I'm trying to compare the semantics of MPI RMA with those of ARMCI. I've written a small test program that writes data to a remote processor and then reads the data back to the original processor. In ARMCI, you should be able to do this, since operations to the same remote processor are completed in order.
Hi,
I've successfully built openmpi-v2.x-dev-950-g995993b on my machines
(Solaris 10 Sparc, Solaris 10 x86_64, and openSUSE Linux 12.1
x86_64) with gcc-5.1.0 and Sun C 5.13. Unfortunately I get errors
running some small test programs. All programs work as expected
using my gcc or cc version of op…