gfortran and intel,
by the way.)
So these guys may be snarky, but they definitely can Fortran. And if the Open MPI
bindings can be compiled by this compiler, they are likely to be very
standard-conforming.
Have a nice day and a nice year 2022,
Paul Kapinos
On 12/30/21 16:27, Jeff Squyres
On 10/20/2017 12:24 PM, Dave Love wrote:
> Paul Kapinos <kapi...@itc.rwth-aachen.de> writes:
>
>> Hi all,
>> sorry for the long long latency - this message was buried in my mailbox for
>> months
>>
>>
>>
>> On 03/16/2017 10:35 AM, Alfio
MPI_Free_mem() takes ages.
(I'm not familiar with CCachegrind, so forgive me if I'm mistaken.)
Have a nice day,
Paul Kapinos
--
Dipl.-Inform. Paul Kapinos - High Performance Computing,
RWTH Aachen University, IT Center
Seffenter Weg 23, D 52074 Aachen (Germany)
Tel: +49 241/80-24915
reason for this?
In the 1.10.x series there were 'memory hooks': Open MPI did take some care about
the alignment. This was removed in the 2.x series, cf. the whole thread at my link.
Y();
return MPI_SUCCESS;
```
This will at least tell us if the innards of our ALLOC_MEM/FREE_MEM (i.e.,
likely the registration/deregistration) are causing the issue.
On Mar 15, 2017, at 1:27 PM, Dave Love <dave.l...@manchester.ac.uk> wrote:
Paul Kapinos <kapi...@itc.rwth-aachen.de>
Hi,
On 03/16/17 10:35, Alfio Lazzaro wrote:
We would like to ask you which version of CP2K you are using in your tests
Release 4.1
and
if you can share with us your input file and output log.
The question goes to Mr Mathias Schumacher, on CC:
Best
Paul Kapinos
(Our internal ticketing
:22, Nathan Hjelm wrote:
If this is with 1.10.x or older run with --mca memory_linux_disable 1. There is
a bad interaction between ptmalloc2 and psm2 support. This problem is not
present in v2.0.x and newer.
-Nathan
On Mar 7, 2017, at 10:30 AM, Paul Kapinos <kapi...@itc.rwth-aachen.de>
iff openib is suppressed.
However, it requires ompi 1.10, not 1.8, which I was trying to use.
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users
sty)
workaround, cf.
https://www.mail-archive.com/devel@lists.open-mpi.org/msg00052.html
As far as I can see this issue is on InfiniBand only.
Best
Paul
performance bug in MPI_Free_mem, your application can be horribly slow (seen:
CP2K) if the InfiniBand fallback of OPA is not disabled manually, see
https://www.mail-archive.com/users@lists.open-mpi.org//msg30593.html
Best,
Paul Kapinos
!
Paul Kapinos
On 12/14/16 13:29, Paul Kapinos wrote:
Hello all,
we seem to run into the same issue: 'mpif90' sigsegvs immediately for Open MPI
1.10.4 compiled using Intel compilers 16.0.4.258 and 16.0.3.210, while it works
fine when compiled with 16.0.2.181.
It seems to be a compiler issue (more
libs (as
said, changing out these solves/raises the issue), we will fall back to the
16.0.2.181 compiler version. We will try to open a case with Intel - let's see...
Have a nice day,
Paul Kapinos
On 05/06/16 14:10, Jeff Squyres (jsquyres) wrote:
Ok, good.
I asked that question because typically w
core dump of 'ompi_info' like the one below.
(Yes, we know that "^tcp,^ib" is a bad idea.)
Have a nice day,
Paul Kapinos
P.S. Open MPI: 1.10.4 and 2.0.1 have the same behaviour
--
[lnm001:39957] *** Process received sign
rules of the
Open MPI release series). Anyway, if there is a simple fix for your
test case for the 1.10 series, I am happy to provide a patch. It might
take me a day or two, however.
Edgar
On 12/9/2015 6:24 AM, Paul Kapinos wrote:
Sorry, forgot to mention: 1.10.1
Open MPI: 1.10.1
, that will make things much easier from
now on.
(and at first glance, that might not be a very tricky bug)
Cheers,
Gilles
On Wednesday, December 9, 2015, Paul Kapinos <kapi...@itc.rwth-aachen.de
<mailto:kapi...@itc.rwth-aachen.de>> wrote:
Dear Open MPI developers,
did OMPIO (1) rea
is quite handy.
Is that a bug in OMPIO or did we miss something?
Best
Paul Kapinos
1) http://www.open-mpi.org/faq/?category=ompio
2) http://www.open-mpi.org/community/lists/devel/2015/12/18405.php
3) (ROMIO is default; on local hard drive at node 'cluster')
$ ompi_info | grep romio
in the
'hwloc' library.
Is there a way to disable hwloc or to debug it somehow?
(besides building a debug version of hwloc and Open MPI)
Best
Paul
Vanilla Linux OFED from RPMs for Scientific Linux release 6.4 (Carbon) (= RHEL
6.4).
No ofed_info available :-(
On 07/31/13 16:59, Mike Dubman wrote:
Hi,
What OFED vendor and version do you use?
Regards
M
On Tue, Jul 30, 2013 at 8:42 PM, Paul Kapinos <kapi...@rz.rwth-aachen.de
<mailt
,
Paul Kapinos
P.S. There should be no connection problem somewhere between the nodes; a test
job with 1 process on each node ran successfully just before starting
the actual job, which also ran OK for a while - until calling MPI_Alltoallv
the production on these nodes (and different MPI
versions for different nodes are doofy).
Best
Paul
mt instead of -lmpi (other compilers). However, Intel
MPI is not free.
Best,
Paul Kapinos
Also, I recommend to _always_ check what kind of threading level you
ordered and what you actually got:
print *, 'hello, world!', MPI_THREAD_MULTIPLE, provided
On 05/31/13 06:12, W Spector wrote:
De
bit
dusty.
integration with LSF 8.0 now =)
In the future, if you need a testbed, I can grant you user access...
best
Paul
p on
this has turned up nothing, literally nothing.
Any suggestions?
Thanks
Tim Dunn
and send it to the compiler developer team :o)
Best
Paul Kapinos
On 04/05/13 17:56, Siegmar Gross wrote:
PPFC mpi-f08.lo
"../../../../../openmpi-1.7/ompi/mpi/fortran/use-mpi-f08/mpi-f08.F90", Line = 1,
Column = 1: INTERNAL: Interrupt: Segmentation fault
will ignore it, probably...
Best
Paul Kapinos
(*) we have kind of an internal test suite to check our MPIs...
P.S. $ mpicc -O0 -m32 -o ./mpiIOC32.exe ctest.c -lm
P.S.2 an example configure line:
./configure --with-openib --with-lsf --with-devel-headers
--enable-contrib-no-build=vt --enable
).
nough-registred-mem" computation for Mellanox HCAs? Any
other idea/hint?
We 'tune' our Open MPI by setting environment variables
Best
Paul Kapinos
On 12/19/12 11:44, Number Cruncher wrote:
Having run some more benchmarks, the new default is *really* bad for our
application (2-10x slower), so I've been looking at the source to try and figure
out why.
It seems
unning#mpi-preconnect) there are no such
huge latency outliers for the first sample.
Well, we know about the warm-up and lazy connections.
But 200x?!
Any comments on whether this is expected?
Best,
Paul Kapinos
(*) E.g. HPCC explicitly says in http://icl.cs.utk.edu/hpcc/faq/index.html#132
> Addit
,linuxbdc02 /home/pk224850/bin/ulimit_high.sh MPI_FastTest.exe
checking it with the OFED folks, but I doubt that there are any dedicated
tests for THP.
So do you see it only with a specific application and only on a specific
data set? Wonder if I can somehow reproduce it in-house...
ize of 64k is fairly small
our infiniband network. I'm running a fairly large problem (uses about
18GB), and part way in, I get the following errors:
You say "big footprint"? I hear a bell ringing...
http://www.open-mpi.org/faq/?category=openfabrics#ib-low-reg-mem
in reproduce this?
Best,
Paul Kapinos
P.S: The same test with Intel MPI cannot run using DAPL, but runs very fine over
'ofa' (= native verbs, as Open MPI uses it). So I believe the problem is rooted in
the communication pattern of the program; it sends very LARGE messages to a lot
of/all other processes
any interest in reproducing this issue?
Best wishes,
Paul Kapinos
P.S. Open MPI 1.5.3 used - still waiting for 1.5.5 ;-)
Some error messages:
with 6 procs over 6 Nodes:
--
mlx4: local QP operation err (QPN 7c0063,
RANK is? (This would make
sense as it looks like the OMPI_COMM_WORLD_SIZE, OMPI_COMM_WORLD_RANK pair.)
If yes, maybe it should also be documented on the Wiki page.
2) OMPI_COMM_WORLD_NODE_RANK - is that just a duplicate of
OMPI_COMM_WORLD_LOCAL_RANK?
Best wishes,
Paul Kapinos
Try out the attached wrapper:
$ mpiexec -np 2 masterstdout
mpirun -n 2
Is there a way to have mpirun just merge the STDOUT of one process into its
STDOUT stream?
newer 1.5.x is a good idea; but it is always a bit
tedious... Will 1.5.5 arrive soon?
Best wishes,
Paul Kapinos
Ralph Castain wrote:
> I don't see anything in the code that limits the number of procs in a rankfile.
> Are the attached rankfiles the ones you are trying to use?
this computer's dimension is a bit too big for the pinning
infrastructure now. A bug?
Best wishes,
Paul Kapinos
P.S. see the attached .tgz for some logzz
--
Rankfiles
Rankfiles provide a means for specifying detailed i
help me in solving this?
Regards,
Anas
t that if you do this:
-
tar xf openmpi-1.5.3.tar.bz2
cd openmpi-1.5.3
./configure etc.
-
everything will work just fine.
On Jul 22, 2011, at 11:12 AM, Paul Kapinos wrote:
Dear Open MPI folks,
currently I have a problem building version 1.5.3 of Open MPI on
Scientific
--
Because of the anticipated performance gain we would be very keen on
using DAPL with Open MPI. Does somebody have any idea what could be
wrong and what to check?
On Dec 2, 2011, at 1:21 PM, Paul Kapinos wrote:
Dear Open MPI developer,
OFED 1.5.4 will contain DAPL 2.0.34.
I t
"
Well, these are my user's dreams; but maybe this gives some inspiration
to the Open MPI programmers. As said, the situation where a [long] list of
envvars has to be provided is quite common, and typing everything on the
command line is tedious and error-prone.
Best wishes [and sorry for the noise]
day/whatever time you have!
Paul Kapinos
ce weekend,
Paul
http://www.openfabrics.org/downloads/OFED/release_notes/OFED_1.5.4_release_notes
rently* than other envvars:
$ man mpiexec
Exported Environment Variables
All environment variables that are named in the form OMPI_* will
automatically be exported to new processes on the local and remote
nodes.
So, does the man page lie, is this a removed feature, or something else?
command line options. This should not be so, should it?
(I also tried to provide the envvars via -x
OMPI_MCA_oob_tcp_if_include -x OMPI_MCA_btl_tcp_if_include - nothing
changed. Well, they are OMPI_ variables and should be provided in any case.)
Best wishes and many thanks for all,
Paul Ka
above command should disable
the usage of eth0 for the MPI communication itself, but it hangs just before
MPI is started, doesn't it? (Because one process is missing, MPI_INIT
cannot be passed.)
Now a question: is there a way to forbid the mpiexec to use some
interfaces at all?
Best wishes,
Paul Kap
next thing I will try will be the installation of
1.5.4 :o)
Best,
Paul
P.S. started:
$ /opt/MPI/openmpi-1.5.3/linux/intel/bin/mpiexec --hostfile
hostfile-mini -mca odls_base_verbose 5 --leave-session-attached
--display-map helloworld 2>&1 | tee helloworld.txt
On Nov 21, 2011, at 9:
Any idea what is going on?
Best,
Paul Kapinos
P.S: no alias names used, all names are real ones
Having worked around the problem by switching our production to
1.5.3, this issue is not a "burning" one; but I still decided to post
this because any issue in such fundamental things may be interesting for
the developers.
Best wishes,
Paul Kapinos
(*) http://www.netlib.org/ben
will trigger our admins...
Best wishes,
Paul
m4 (GNU M4) 1.4.13 (OK)
autoconf (GNU Autoconf) 2.63 (Need: 2.65, NOK)
automake (GNU automake) 1.11.1 (OK)
ltmain.sh (GNU libtool) 2.2.6b (OK)
On Jul 22, 2011, at 9:12 AM, Paul Kapinos wrote:
Dear Open MPI folks,
currently I have a problem
the warnings:
pgCC-Warning-prelink_objects switch is deprecated
pgCC-Warning-instantiation_dir switch is deprecated
coming from the below-noted call.
I do not know whether this is a Libtool issue or a libtool-usage (= Open MPI)
issue, but I did not want to keep this secret...
Best wishes
Paul Kapinos
COMPONENT is expanded
from...
config/ompi_mca.m4:326: MCA_CONFIGURE_FRAMEWORK is expanded from...
config/ompi_mca.m4:247: MCA_CONFIGURE_PROJECT is expanded from...
configure.ac:953: warning: AC_RUN_IFELSE was called before
AC_USE_SYSTEM_EXTENSIONS
The 32-bit version
works with the NIS-authenticated part of our cluster only.
Thanks for your help!
Best wishes
Paul Kapinos
Reuti wrote:
Hi,
Am 15.07.2011 um 21:14 schrieb Terry Dontje:
On 7/15/2011 1:46 PM, Paul Kapinos wrote:
Hi OpenMPI volks (and Oracle/Sun experts),
we have
h_module.c at line 1058
--
t. Maybe someone can correct it? This
would save some time for people like me...
Best wishes
Paul Kapinos
8324) = -1
ENOENT (No such file or directory)
===> OMPI_MCA_orte_rsh_agent does not work?!
way.
Best wishes,
Paul
compiler without
the need for the -lm flag - and this is *wrong*: "cc" needs -lm.
It seems to me to be a configure issue.
Greetings
Paul
bility of `ceil' for the C compiler (see
config.log.ceil). This check says `ceil' is *available* for the "cc"
compiler, which is *wrong*, cf. (4).
So, is there an error in the configure stage? Or do the checks in
config.log.ceil not rely on the availability of th
on CentOS 5.5 is still a problem; other versions
of GCC, however, seem not to have the same issue.
Best wishes,
Paul
ib -tp
-Wl,-rpath=/../PGI/PGI_10.9_CENTOS_64/linux86-64/10.9/libso
-Wl,-rpath=/../PGI/PGI_10.9_CENTOS_64/linux86-64/10.9/lib
-Wl,-rpath=/../PGI/PGI_10.9_CENTOS_64/linux86-32/10.9/lib
-Wl,-soname -Wl,libopen-pal.so.1 -o .libs/libopen-pal.so.1.0.0
Best wishes,
Paul
wishes
Paul
configure options were for a given
installation! "./configure --help" helps, but guessing which of all the
options were used in a release is a hard job...
--td
On Aug 24, 2010, at 7:40 AM, Paul Kapinos wrote:
Hello OpenMPI developers,
I am searching for a way to discover _al
how can I see
whether these flags were set or not?
In other words: is it possible to get _all_ configure flags from a
"ready" installation without having the compilation dirs (with the
configure logs) any more?
Many thanks
Paul
ceive.
--
Prentice
would be possible, which it currently is not.
Best wishes,
Paul Kapinos
PROGRAM sunerr
USE MPI
am
> not sure why it would expand based on stack size?.
>
> --td
>> Date: Thu, 19 Nov 2009 19:21:46 +0100
>> From: Paul Kapinos <kapi...@rz.rwth-aachen.de>
>> Subject: [OMPI users] exceedingly virtual memory consumption of MPI
>> environment if higher-setting &
amount of stack size for each process?
And why consume the virtual memory at all? We guess this virtual
memory is allocated for the stack (why else would it be related to the
stack size ulimit?). But is such an allocation really needed? Is there a
way to avoid the waste of virtual memory?
best regards,
This is a bit ugly, but a working
workaround. What I wanted to achieve with my mail was a less ugly
solution :o)
Thanks for your help,
Paul Kapinos
Not at the moment - though I imagine we could create one. It is a tad
tricky in that we allow multiple -x options on the cmd line, but we
obvious
I can add it to the "to-do" list for a rainy day :-)
That would be great :-)
Thanks for your help!
Paul Kapinos
with the -x option of mpiexec there is a way to distribute environment
variables:
-x Export the specified environment variables to the remote
meaning? The
writing of environment variables on the command line is ugly and tedious...
I've searched for this info on the Open MPI web pages for about an hour and
didn't find the answer :-/
Thanking you in anticipation,
Paul
"-mca opal_set_max_sys_limits 1" to the command line), but
we do not see any change in behaviour).
What is your opinion?
Best regards,
Paul Kapinos
RZ RWTH Aachen
#
/opt/SUNWhpc/HPC8.2/intel/bin/mpiexec -mca opal_set_max_sys_limits 1
-np
1, which is a Red Hat Enterprise 5 Linux.
$ uname -a
Linux linuxhtc01.rz.RWTH-Aachen.DE 2.6.18-53.1.14.el5_lustre.1.6.5custom
#1 SMP Wed Jun 25 12:17:09 CEST 2008 x86_64 x86_64 x86_64 GNU/Linux
configured with:
./configure --enable-static --with-devel-headers CFLAGS="-O2 -m32"
CXX
configuration files
I think installing everything to hard-coded paths is somewhat inflexible.
Maybe you can provide relocatable RPMs somewhere in the future?
But as mentioned above, our main goal is to have both versions of CT
working on the same system.
Best regards,
Paul Kapinos
Paul Kapinos
are really not relocatable without parsing
the configure files.
Did you (or anyone reading this message) have any contact with the SUN
developers to point out this circumstance? *Why* do they use hard-coded
paths? :o)
Best regards,
Paul Kapinos
#
# Default word-size (used when -m flag i
Hi,
First, consider updating to a newer Open MPI.
Second, look at your environment on the box that starts Open MPI (runs
mpirun ...).
Type
ulimit -n
to see how many file descriptors your environment has (ulimit -a
for all limits). Note: every process on older versions of Open MPI (prior
the paths for the configuration files, which
opal_wrapper needs, may be set locally like ../share/openmpi/***
without affecting the integrity of Open MPI. Maybe there are more
places where the usage of local paths may be needed to allow a movable
(relocatable) Open MPI.
What do you think?
Best re