Kaiming Ouyang writes:
> Hi Jeff,
> Thank you for your advice. I will contact the author for some suggestions.
> I also notice I may need to port this old library to the newer Open MPI 3.0.
> I will work on this soon. Thank you.
I haven't used them, but at least the profiling part,
Thank you for your advice. I will contact the author for some suggestions.
I also notice I may need to port this old library to the newer Open MPI 3.0.
I will work on this soon. Thank you.
Kaiming Ouyang, Research Assistant.
Department of Computer Science and Engineering
You might want to take that library author's advice from their README:
The source code herein was used as the basis of Rountree ICS 2009. It
was my first nontrivial MPI tool and was never intended to be released
to the wider world. I believe it was tied rather tightly to a subset
Kaiming, good luck with your project. I think you should contact Barry
Rountree directly. You will probably get good advice!
It is worth saying that with Turboboost there is variation between
individual CPU dies, even within the same SKU.
What Turboboost does is set a thermal envelope,
Thank you for your advice. But this is only related to its functionality,
and right now my problem is that it cannot compile with the new version of
Open MPI. The reason may come from its patch file, since it needs to intercept
MPI calls to profile some data. The new version of Open MPI may change its
"It does not handle more recent improvements such as Intel's turbo
mode and the processor performance inhomogeneity that comes with it."
I guess it is easy enough to disable Turbo mode in the BIOS though.
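(For what it is worth, on Linux this can usually also be toggled without a
trip into the BIOS, depending on which cpufreq driver is loaded: with
intel_pstate, writing 1 to /sys/devices/system/cpu/intel_pstate/no_turbo
disables turbo, and with acpi-cpufreq, writing 0 to
/sys/devices/system/cpu/cpufreq/boost does the same. That makes it easier to
compare runs with and without Turbo on the same node.)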
On 20 March 2018 at 17:48, Kaiming Ouyang wrote:
> I think the problem
I think the problem it has is that it only deals with the old framework,
because it intercepts MPI calls to do some profiling. Here is the library:
I checked the openmpi changelog. From Open MPI 1.3, it began to switch to a
new framework, and Open MPI 1.4+ has different
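For anyone following along, the interception such a library relies on is the
standard PMPI profiling interface: the tool defines the MPI_* symbol itself
and forwards to the matching PMPI_* entry point, so every call can be timed or
counted. A minimal sketch of that pattern (not the actual code of Rountree's
library, and using the MPI-3-era prototype; older MPI headers omit the const)
would be:

    /* Wrapper that intercepts MPI_Send, times it, and forwards the
     * call to the real implementation through PMPI_Send. */
    #include <mpi.h>
    #include <stdio.h>

    int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
                 int dest, int tag, MPI_Comm comm)
    {
        double t0 = MPI_Wtime();
        int rc = PMPI_Send(buf, count, datatype, dest, tag, comm);
        printf("MPI_Send to rank %d took %f s\n", dest, MPI_Wtime() - t0);
        return rc;
    }

A wrapper like this gets compiled into its own library and linked (or
LD_PRELOADed) ahead of the MPI library so that it is found before the real
MPI_Send.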
On Mar 19, 2018, at 11:32 PM, Kaiming Ouyang wrote:
> Thank you.
> I am using the newest version of HPL.
> I forgot to say I can run HPL with openmpi-3.0 over InfiniBand. The reason I
> want to use the old version is that I need to compile a library that only supports the old
I am using the newest version of HPL.
I forgot to say I can run HPL with openmpi-3.0 over InfiniBand. The reason
I want to use the old version is that I need to compile a library that only
supports the old version of Open MPI, so I am trying to do this tricky job.
Anyway, thank you for your reply, Jeff; have
I'm sorry; I can't help debug a version from 9 years ago. The best suggestion
I have is to use a modern version of Open MPI.
Note, however, that your use of "--mca btl ..." is going to have the same meaning
for all versions of Open MPI. The problem you showed in the first mail was
with the shared
Thank you for your reply. I just switched to another cluster which does not
have InfiniBand. I ran HPL with:
mpirun --mca btl tcp,self -np 144 --hostfile /root/research/hostfile
It ran successfully, but if I remove "--mca btl tcp,self", it cannot run
again. So I doubt whether
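(One way to narrow that down, assuming the failure really is in the
shared-memory path: instead of whitelisting tcp and self, exclude only the
shared-memory component and let everything else be selected as usual, e.g.

    mpirun --mca btl ^sm -np 144 --hostfile /root/research/hostfile ...

The "^" prefix means "everything except" in Open MPI's MCA selection syntax,
so if this also runs cleanly, the sm BTL is the component that is crashing.)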
That's actually failing in a shared memory section of the code.
But to answer your question, yes, Open MPI 1.2 did have IB support.
That being said, I have no idea what would cause this shared memory segv --
it's quite possible that it's simple bit rot (i.e., v1.2.9 was released 9 years
Recently I needed to compile the High-Performance Linpack (HPL) code with
Open MPI version 1.2 (a little bit old). When I finish the compilation and
try to run, I get the following errors:
[test:32058] *** Process received signal ***
[test:32058] Signal: Segmentation fault (11)
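A quick way to tell whether this is an HPL problem or an Open MPI 1.2
installation problem is to launch a trivial MPI program with the same mpirun
line; the sketch below is generic, not taken from HPL or the thread:

    /* Minimal MPI test: init, print rank/size, barrier, finalize.
     * If this also segfaults under the same mpirun invocation, the
     * problem is in the MPI installation rather than in HPL. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("rank %d of %d\n", rank, size);
        MPI_Barrier(MPI_COMM_WORLD);
        MPI_Finalize();
        return 0;
    }

If this also segfaults, the problem is in the Open MPI build or runtime
environment rather than in the HPL compilation.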