Recently I tried to switch from Open MPI 2.1.x to Open MPI 3.1.x.
I am trying to run an OpenMP/MPI hybrid program, and prior to Open MPI 3.1 I used
--bind-to core --map-by slot:PE=4
and requested full nodes via PBS or Slurm (:ppn=16; --cpus-per-task=1,
With Open MPI 3.1,
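For context, the launch recipe described above can be sketched as follows (the executable name `./hybrid_app` and the choice of 4 OpenMP threads are assumptions, the thread count chosen to match PE=4):

```shell
# Hybrid OpenMP/MPI launch: bind each rank to a set of 4 cores (PE=4)
# and run 4 OpenMP threads per rank. OMP_NUM_THREADS=4 is an assumption
# matching PE=4; adjust both to your allocation (e.g. ppn=16 -> 4 ranks/node).
export OMP_NUM_THREADS=4
mpirun --bind-to core --map-by slot:PE=4 ./hybrid_app
```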
Thank you for your response. Yes, possibly hardwiring everything would
be easier. I was thinking I could use OpenMPI for the signaling between
the cores on an OS that doesn't support multi-processing, using the
shmem approach. The executable is the same for each CPU image, but the
Thank you for your advice. But that only concerns its functionality;
right now my problem is that it will not compile against the new version of Open MPI.
The reason may lie in its patch file, since it needs to intercept MPI
calls to profile some data. Newer versions of Open MPI may have changed its
I am available for offline discussion about the hwloc side of things. But
things look complicated here from your summary below. I guess there's no
need for binding on such a system. And topology is quite simple, so it
might be easier to hardwire everything.
On 20/03/2018 at 08:36,
"It does not handle more recent improvements such as Intel's turbo
mode and the processor performance inhomogeneity that comes with it."
I guess it is easy enough to disable Turbo mode in the BIOS though.
On 20 March 2018 at 17:48, Kaiming Ouyang wrote:
> I think the problem
I'm trying to run a small benchmark with InfiniBand and Ethernet to see the
difference. I get strange results where Open MPI seems to be slower with
InfiniBand than with Ethernet. I'm using version 3.0.0.
I use the following parameters to force Ethernet:
--mca btl ^openib --mca
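For comparison, the two launch lines might look roughly like this (the binary name `./bench` and the interface `eth0` are assumptions): `^openib` excludes the InfiniBand BTL so traffic falls back to TCP, while the second line leaves openib enabled.

```shell
# Ethernet/TCP run: exclude the openib BTL; restrict TCP to one NIC
# (interface name is an assumption -- check with `ip link`):
mpirun --mca btl ^openib --mca btl_tcp_if_include eth0 ./bench

# InfiniBand run: openib BTL, plus self and the vader shared-memory BTL
# for local/loopback traffic:
mpirun --mca btl openib,self,vader ./bench
```

If the openib run is still slower, it is worth checking that the openib BTL is actually being selected (e.g. with `--mca btl_base_verbose 100`) rather than silently falling back to TCP over IPoIB.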
I think the problem is that it only deals with the old framework, because
it intercepts MPI calls and does some profiling. Here is the library:
I checked the Open MPI changelog. Starting with Open MPI 1.3 it began to switch to a
new framework, and Open MPI 1.4+ has different
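For what it's worth, a profiling library normally intercepts MPI calls through the standard PMPI profiling interface rather than any Open MPI internal framework. A minimal sketch (illustrative counter only, not the library in question):

```c
/* Minimal PMPI interception sketch: the profiling library redefines
 * MPI_Send, records whatever it needs, then forwards to the real
 * implementation through the PMPI_ entry point. */
#include <mpi.h>

static long send_count = 0;  /* illustrative bookkeeping */

int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)
{
    send_count++;  /* profiling action goes here */
    return PMPI_Send(buf, count, datatype, dest, tag, comm);
}
```

Note that MPI-3 added `const` to send buffers, so a wrapper written against an older header (`void *buf`) can fail to compile against newer Open MPI headers; a prototype mismatch like that is a common reason an old interception library stops building.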
I'm inquiring to find someone who can answer some multi-part questions
about hwloc, OpenMPI and an alternative OS and toolchain. I have a
project as part of my PhD work, and it's not a simple, one-part
question. For brevity, I am omitting details about the OS and
On Mar 19, 2018, at 11:32 PM, Kaiming Ouyang wrote:
> Thank you.
> I am using the newest version of HPL.
> I forgot to mention that I can run HPL with openmpi-3.0 over InfiniBand. The reason I
> want to use the old version is that I need to compile a library that only supports old