Ralph is right.
I used 1.8, and after digging into it, I noticed it doesn't even compile
the pmi component. When I tried to configure without orte, I could see the
errors while compiling.
It looks like it is well broken!
Peace,
Hadi
On May 15, 2014 7:28 PM, "Ralph Castain" wrote:
It's my fault - as release manager, I should have spotted that component
sneaking into the release branch back in the 1.7 series, but I missed it. I
have deleted it now as it was never intended to be released.
Sorry for the confusion.
Ralph
On May 15, 2014, at 8:33 PM, Hadi Montakhabi wrote:
Hi,
> This bug should be fixed in tonight's tarball, BTW.
...
> > It is an unrelated bug introduced by a different commit -
> > causing mpirun to segfault upon termination. The fact that
> > you got the hostname to run indicates that this original
> > fix works, so at least we know the connection
Done - will be in nightly 1.8.2 tarball generated later today.
On May 16, 2014, at 2:57 AM, Siegmar Gross wrote:
> Hi,
>
>> This bug should be fixed in tonight's tarball, BTW.
> ...
>>> It is an unrelated bug introduced by a different commit -
>>>
On May 15, 2014, at 8:00 PM, Fabricio Cannini wrote:
>> Nobody is disagreeing that one could find a way to make CMake work - all we
>> are saying is that (a) CMake has issues too, just like autotools, and (b) we
>> have yet to see a compelling reason to undertake the
Dear all,
I am reinstalling a cluster where nodes are connected by
- 1 Gb Ethernet interfaces
- 40 Gb InfiniBand adapters
I installed OFED 3.12 on CentOS 6.5.
I would like to be able to tell mpirun to use either the Gb Ethernet
interfaces or the InfiniBand adapters.
But, when I launch osu_bw with
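For reference, a minimal sketch of steering Open MPI 1.8 onto one network or
the other via MCA parameters; eth0 is an assumed interface name:

  # Run over the Gb Ethernet interfaces via the TCP BTL (eth0 is an assumed name)
  mpirun --mca btl tcp,self --mca btl_tcp_if_include eth0 -np 2 osu_bw

  # Run over the InfiniBand adapters via the openib BTL instead
  mpirun --mca btl openib,self -np 2 osu_bw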
On May 16, 2014, at 1:03 PM, Fabricio Cannini wrote:
> On 16-05-2014 10:06, Jeff Squyres (jsquyres) wrote:
>> On May 15, 2014, at 8:00 PM, Fabricio Cannini wrote:
>>
Nobody is disagreeing that one could find a way to make CMake work -
+1. The bootstrapping issue is 50% of the reason I will never use CMake for
any production code.
vygr:~ hjelmn$ type -p cmake
vygr:~ hjelmn$
Nada, zilch, nothing on a standard OS X install. I do not want to put an extra
requirement on my users. Nor do I want something as simple-minded as CMake.
+1 even if cmake would make life easier for the developers, you may
want to consider those sysadmins/users who actually need to compile
and install the software. And for those, cmake is a nightmare. Every time
I run into a software package that uses cmake it makes me cringe.
gromacs is
On 16-05-2014 17:07, Ralph Castain wrote:
FWIW, simply for my own curiosity's sake, if someone could confirm or
deny whether cmake:
1. Supports the following compiler suites: GNU (that's a given, I
assume), Clang, OS X native (which are variants of GNU and Clang),
Absoft, PGI, Intel, Cray,
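For reference, compiler selection in cmake is done by overriding the compiler
cache variables at configure time; a minimal sketch, assuming the Intel suite
is on PATH (any of the suites above would substitute the same way):

  # Select a specific compiler suite for a cmake build (Intel shown as an example)
  cmake -DCMAKE_C_COMPILER=icc \
        -DCMAKE_CXX_COMPILER=icpc \
        -DCMAKE_Fortran_COMPILER=ifort \
        /path/to/source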
For cmake,
-DCMAKE_SHARED_LINKER_FLAGS:STRING=-Wl,-rpath,'$HDF5_SERSH_DIR/lib'
or
-DCMAKE_EXE_LINKER_FLAGS:STRING=-Wl,-rpath,'$HDF5_SERSH_DIR/lib'
I don't have a dog in this fight, but I will say that we have found
supporting Windows to be much easier with cmake. If that is not an issue, then
We are looking at enabling the use of Open MPI on our Xeon Phis,
One comment: I'm not sure that most users will know that pmi means phi,
--with-pmi(=DIR)        Build PMI support, optionally adding DIR to the
                        search path (default: no)
how about:
--with-pmi(=DIR)
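The suggested wording is cut off above; one plausible clarification, spelling
out what PMI actually is, might read (this exact text is a sketch, not what
was adopted):

  --with-pmi(=DIR)        Build PMI (Process Management Interface) support,
                          e.g. for direct launch under SLURM; this is not
                          related to the Xeon Phi. Optionally add DIR to the
                          search path (default: no)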
My cluster has just upgraded to a new version of MPI, and I'm using an old
one. It seems that I'm having trouble compiling because the compiler wrapper
file has moved (full error here: http://pastebin.com/EmwRvCd9):
"Cannot open configuration file
PMI != Phi. If you want to build for Phi you will have to make two builds: one
for the host and one for the Phi.
Take a look in contrib/platform/lanl/darwin to get an idea of how to build for
Phi. The optimized-mic platform file has most of what is needed to build a Phi
version of Open MPI.
I usually run:
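The command is cut off above; given the platform file just mentioned, it was
presumably a configure invocation along these lines (a sketch, assuming the
standard --with-platform mechanism):

  # Build the Phi (MIC) side using the platform file named above
  ./configure --with-platform=contrib/platform/lanl/darwin/optimized-mic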
Martin Siegert wrote:
> Just set LDFLAGS='-Wl,-rpath,/usr/local/xyz/lib64' with autotools.
> With cmake? Really complicated.
John Cary wrote:
> For cmake,
>
> -DCMAKE_SHARED_LINKER_FLAGS:STRING=-Wl,-rpath,'$HDF5_SERSH_DIR/lib'
> or
>
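For comparison, the two complete invocations might look like this; a sketch
reusing the flags quoted above:

  # autotools: pass the rpath through LDFLAGS at configure time
  ./configure LDFLAGS='-Wl,-rpath,/usr/local/xyz/lib64'

  # cmake: pass the same rpath through the linker-flag cache variables
  cmake -DCMAKE_EXE_LINKER_FLAGS:STRING="-Wl,-rpath,$HDF5_SERSH_DIR/lib" \
        -DCMAKE_SHARED_LINKER_FLAGS:STRING="-Wl,-rpath,$HDF5_SERSH_DIR/lib" \
        /path/to/source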
Ben,
You might want to use module (on SourceForge) to manage paths to different MPI
implementations. It is fairly easy to set up and very robust for this type of
problem. You would remove contentious application paths from your standard PATH
and then use module to switch them in and out as
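A minimal sketch of that workflow, with module names borrowed from later in
the thread (the 1.8.1 name is an assumption):

  # Swap one MPI stack out and another in; names are illustrative
  module unload openmpi/1.4.4
  module load openmpi/1.8.1
  module list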
++1
The original issue was that OMPI builds support for slurm
and loadleveler by default, and this was not desirable (or desired).
That is a non-issue.
If you don't want slurm and loadleveler support,
just configure OMPI
--with-slurm=no --with-loadleveler=no
All other supported schedulers can
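A sketch of the complete configure line, assuming an otherwise default build
(the prefix is illustrative):

  # Build without slurm and loadleveler support
  ./configure --prefix=/opt/openmpi --with-slurm=no --with-loadleveler=no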
Instead of using the outdated and unmaintained Module environment, why
not use Lmod: https://www.tacc.utexas.edu/tacc-projects/lmod
It is a drop-in replacement for the Module environment that supports all of
their features and much, much more, such as:
- module hierarchies
- module properties
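For a taste of the hierarchy support mentioned above, a small sketch (module
names are assumptions):

  # spider searches the whole hierarchy, including modules that only
  # become visible after a compiler is loaded
  module spider openmpi

  # loading a compiler exposes the MPI builds made with it
  module load gcc/4.8
  module avail openmpi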
Maxime,
I was unaware of Lmod. Thanks for bringing it to my attention.
Doug
On May 16, 2014, at 4:07 PM, Maxime Boissonneault wrote:
> Instead of using the outdated and unmaintained Module environment, why not
> use Lmod:
I'm not sure I have the ability to implement a different module management
system; I am using a university cluster. We have a module system, and I am
beginning to suspect that maybe it wasn't updated during the upgrade. I have
module list
...other modules... openmpi/1.4.4
Perhaps this is still
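For reference, two quick checks that show which wrapper the environment
actually resolves, using Open MPI's standard --showme option:

  # Which mpicc is first on PATH, and which Open MPI does it belong to?
  type mpicc
  mpicc --showme:version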
Hi Ben
I guess you are not particularly interested in Modules or Lmod either.
You probably don't administer this cluster,
but are just trying to have MPI working, right?
Are you trying to use Open MPI or MPICH?
Your email mentions both.
Let's assume it is Open MPI.
Is recompiling your code
On 05/16/2014 06:26 PM, Ben Lash wrote:
I'm not sure I have the ability to implement a different module
management system; I am using a university cluster. We have a module
system, and I am beginning to suspect that maybe it wasn't updated
during the upgrade. I have
module list
...other
The $PATH and $LD_LIBRARY_PATH seem to be correct, as does module list. I
will try to hear back from our particular cluster people; otherwise I will
try using the latest version. This is old government software; significant
parts are written in Fortran 77, for example. Typically, upgrading to a new
On 05/16/2014 07:09 PM, Ben Lash wrote:
The $PATH and $LD_LIBRARY_PATH seem to be correct, as does module list.
I will try to hear back from our particular cluster people; otherwise I
will try using the latest version. This is old government software;
significant parts are written in Fortran 77