https://www.open-mpi.org/faq/?category=tcp#tcp-selection
Gus Correa
PS - BTW - Our old non-Rocks cluster has Myrinet-2000 (GM).
After I get the new cluster up and running and in production,
I am thinking of revamping the old cluster and installing Rocks on it.
I would love to learn from your experience with your
Rocks+Myrinet cluster.
) barriers
embedded on it.
Another reason may be the way these codes are developed,
say, whether there should be a code architect wizard who designs
a master plan, or some form of integration and adaptation of
well-proven existing code, or something else.
My two cents,
Gus Correa
Unknown switch: -pthread
make[4]: *** [libmpi_f90.la] Error 1
Thank you,
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
-
This is not a unique situation,
and other people in our research field also need and use these
libraries built on "Gnu+commercial Fortran" compilers.
For this reason I keep a variety of OpenMPI, MPICH2, MVAPICH2
builds, and I try to stay current with the newest releases.
Any help is much appreciated.
t on 1.3, causing the build to fail.
The build scripts are the same, the computer is the same,
etc, only the OpenMPI release varies.
Is there a way around this?
E.g., not using pthreads there if they are not essential,
or perhaps helping PGI to find the library and link to it?
also works for other extra libraries
(e.g. Torque, Infiniband) on Linux x86_64.
This is what I used on our AMD x86_64 with CentOS 5.2, and it works.
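Just to illustrate what I mean, a configure line of that sort might look
roughly like this (a sketch only; the paths and version are made up, and
you should check the README of your release for the exact option names):

./configure CC=gcc CXX=g++ F77=gfortran FC=gfortran \
  --with-tm=/opt/torque --with-openib=/usr --with-libnuma=/usr \
  --prefix=/opt/sw/openmpi-1.3.2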
My two cents,
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory
with libnuma
support, it sounds to me that the natural path to choose is /usr/lib64.
Sorry, I don't have an answer about performance.
You may need to ask somebody else or google around
about the relative performance of 32-bit vs. 64-bit mode.
Gus Correa
configure_amber -openmpi=/full/path/to/openmpi
or similar?
I hope this helps.
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
http://www.open-mpi.org/faq/
Many of your questions may have been answered there already.
I encourage you to read them, particularly the General Information,
Building, and Running Jobs ones.
Please bear with me as this is the first time I am doing a project on Linux
clusters.
Hi Francesco, list
Francesco Pietra wrote:
On Mon, Apr 6, 2009 at 5:21 PM, Gus Correa <g...@ldeo.columbia.edu> wrote:
Hi Francesco
Did you try to run examples/connectivity_c.c,
or examples/hello_c.c before trying amber?
They are in the directory where you untarred the OpenMPI tarball.
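For a quick sanity check, something like this should do (assuming the mpicc
and mpiexec from this OpenMPI build are first in your PATH):

cd examples
mpicc hello_c.c -o hello_c
mpicc connectivity_c.c -o connectivity_c
mpiexec -np 2 ./hello_c
mpiexec -np 2 ./connectivity_c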
I don't remember what your intent was,
but if you wanted to use icc (and icpc),
somehow the OpenMPI configure script didn't pick it up.
If you really want icc, rebuild OpenMPI giving the full path name
to icc (CC=/full/path/to/icc), and likewise for icpc and ifort.
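A sketch of such a rebuild, with made-up Intel compiler paths (use the real
ones on your machine):

./configure CC=/opt/intel/bin/icc CXX=/opt/intel/bin/icpc \
  F77=/opt/intel/bin/ifort FC=/opt/intel/bin/ifort \
  --prefix=/opt/sw/openmpi
make all install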
thanks
francesco
Good luck,
something in the installation script that
broke the installation procedure,
would not allow me to choose the install directory,
modified their directory structure names, etc.
The net result was that I couldn't install it, let alone test it.
I hope this helps.
Gus Correa
I hope this helps.
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
-
whether it is a 64- or 32-bit compiler, as somehow it seemed to work.)
Thank you,
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
-
you.
(OK, I was about to say you forgot deb64 after -host,
but you sent the fix below.)
I hope this helps.
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
other users besides me.
And yes, /lib :), I could build 1.3.1 right,
with numa, torque, and openib!
Many thanks,
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
names help avoid confusion with other
MPI flavors.
One MPI benchmark available free from Intel:
http://www.intel.com/cd/software/products/asmo-na/eng/219848.htm
There may be others though.
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
line with the -host option,
or you can specify them in a file with the -hostfile option.
Do "mpiexec --help" to learn the details.
I hope this helps.
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Hi Orion, Prentice, list
I had a related problem recently,
building OpenMPI with gcc, g++ and pgf90 8.0-4 on CentOS 5.2.
Configure would complete, but not make.
See this thread for a workaround:
http://www.open-mpi.org/community/lists/users/2009/04/8724.php
Gus Correa
http://www.open-mpi.org/community/lists/users/2009/04/8724.php
There is a little script in the above message to do the job.
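The gist of it, sketched here under the assumption that the offending flag
is -pthread (the pgf90 path is made up; point it to your real compiler):

#!/bin/sh
# pgf90 wrapper: drop the -pthread flag, which PGI does not accept,
# and pass everything else through to the real compiler.
# (Rough sketch: arguments containing spaces are not handled.)
newargs=""
for arg in "$@"; do
  if [ "$arg" != "-pthread" ]; then
    newargs="$newargs $arg"
  fi
done
exec /opt/pgi/linux86-64/8.0-4/bin/pgf90 $newargs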
I hope it helps.
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
Orion Poplawski wrote:
Gus Correa wrote:
Hi Orion, Prentice, list
I had a related problem recently,
building OpenMPI with gcc, g++ and pgf90 8.0-4 on CentOS 5.2.
Configure would complete, but not make.
An easier solution is to set FC to "pgf90 -noswitcherror". Does not appear to
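For illustration, a configure line using that trick might look like this
(a sketch; the prefix is made up):

./configure CC=gcc CXX=g++ F77="pgf90 -noswitcherror" FC="pgf90 -noswitcherror" \
  --prefix=/opt/sw/openmpi-1.3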
http://www.open-mpi.org/faq/?category=running#run-prereqs
And try this recipe (if you use RSA keys instead of DSA, replace all
"dsa" by "rsa"):
http://www.sshkeychain.org/mirrors/SSH-with-Keys-HOWTO/SSH-with-Keys-HOWTO-4.html#ss4.3
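A rough sketch of that recipe, assuming DSA keys and a home directory shared
across the nodes (adjust if yours differ):

ssh-keygen -t dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
ssh node02 hostname   # should now work without a password (node02 is a made-up host name)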
I hope this helps.
(or carefully set the OpenMPI bin path ahead of any
other).
The Linux command "locate" helps find things (e.g. "locate mpi.h").
You may need to update the location database with "updatedb"
before using it.
http://www.openssh.com/
http://en.wikipedia.org/wiki/OpenSSH
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
n standard Ethernet TCP/IP.
Did you try your own programs?
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
ay find free MPI programs on the Internet.
My two cents,
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
not there,
you need to install one of them.
Read my previous email for details.
I hope it will help you get HPL working, if you are interested in HPL.
I hope this helps.
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
y answered.
There isn't much more I can say to help you out.
Good luck!
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
Hi
This is an MPICH2 error, not an OpenMPI one.
I saw you sent the same message to the MPICH list.
It looks like you are mixing both MPI flavors.
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Use different directories for OpenMPI and MPICH2.
Or install only one MPI flavor.
I hope this helps.
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
work in the beginning,
but may be worthwhile in the long run.
See this:
http://www.rocksclusters.org/wordpress/
My two cents.
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
/view/Cimec/PETScFEM
Computational Chemistry, molecular dynamics, etc:
http://www.tddft.org/programs/octopus/wiki/index.php/Main_Page
http://classic.chem.msu.su/gran/gamess/
http://ambermd.org/
http://www.gromacs.org/
http://www.charmm.org/
Gus Correa
Ankush Kaul wrote:
Thanks everyone(esp Gus and J
http://fats-raid.ldeo.columbia.edu/pages/parallel_programming.html#mpi
I hope this helps.
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
-prefix /the/run/directory \
-np 8 \
-mca btl [openib,]sm,self \
xhpl
Any help, insights, suggestions, reports of previous experiences,
are much appreciated.
Thank you,
Gus Correa
the error:
"No executable was specified on the mpiexec command line.".
Could this possibly be the issue (say, wrong parsing of mca options)?
Many thanks!
Gus Correa
-
Gustavo Correa
continues to look strange, there are
more things to check.
Thanks, Jacob
-Original Message-
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Gus Correa
Sent: Friday, May 01, 2009 12:17 PM
To: Open MPI Users
Subject: [OMPI users] HPL with OpenMPI: Do I have a memory leak?
ion per month! :)
Many thanks!
Gus Correa
Brian W. Barrett wrote:
Gus -
Open MPI 1.3.0 & 1.3.1 attempted to use some controls in the glibc
malloc implementation to handle memory registration caching for
InfiniBand. Unfortunately, it was not only buggy in that it didn't
work, but it also
-np ${NP} \
-mca mpi_paffinity_alone 1 \
-mca btl openib,sm,self \
-mca mpi_leave_pinned 0 \
-mca paffinity_base_verbose 5 \
xhpl
Thank you,
Gus Correa
$ cat /proc/cpuinfo
processor : 0
vendor_id : AuthenticAMD
cpu family : 1
about what you are really using.
I hope this helps.
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
-
opened the $HOME/.bashrc and added the following:
PATH="/usr/local/bin:$PATH"
LD_LIBRARY_PATH="/usr/local/lib:$LD_LIBRARY_PATH"
Note that /usr/local/bin, not /usr/local/include, should be
prepended to your PATH!
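In other words, something like this in $HOME/.bashrc (a sketch, using the
/usr/local prefix from the lines above):

export PATH=/usr/local/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH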
Gus Correa
---
ect hostfile).
Moreover, the two hosts must talk to each other smoothly,
they must agree about passwordless connections,
about where the executables are, etc.
You are the master, and you must tell both hosts how to agree
on these things.
You'll get there, just be patient, read the av
For instance, nodes that
are on switch A will probably have a larger latency to talk
to nodes on switch B, I suppose.
I hope it helps.
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
http://www.theochem.ruhr-uni-bochum.de/~axel.kohlmeyer/cpmd-bench.html#parallel
Is this perhaps because the size of the problem doesn't justify using
more than 32 processors?
What is the meaning of the "32" on "CPMD 32 water"?
I hope this helps,
Gus Correa
---
t is going on.
Once again, I am sorry for not reading your original message with
due attention.
Gus Correa
Gus Correa wrote:
Hi Roman
I googled around and found that CPMD is a molecular dynamics program.
(What would be of civilization without Google?)
Unfortunately I kind of wiped off fr
may want to try the workaround or the upgrade,
regardless of any scaling performance expectations.
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
Hi Pavel
This is not my league, but here are some
CPMD helpful links (code, benchmarks):
http://www.cpmd.org/
http://www.cpmd.org/cpmd_thecode.html
http://www.theochem.ruhr-uni-bochum.de/~axel.kohlmeyer/cpmd-bench.html
IHIH
Gus Correa
Noam Bernstein wrote:
On May 18, 2009, at 12:50 PM
}
I hope this helps,
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
-
Fivoskouk wrote:
Hi
ly use the full path names to the OpenMPI mpif90
and to mpiexec. Inadvertent mix of these executables
from different MPIs is a common source of frustration too.
I hope this helps,
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
H* computers,
or install in a directory that is (NFS) mounted on both computers.
I hope this helps.
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
-
Hi Trent
The famous one is HPL, the Top500 benchmark:
http://www.netlib.org/benchmark/hpl/
It takes some effort to configure and run it.
Goto BLAS is probably your best choice for HPL:
http://www.tacc.utexas.edu/resources/software/
I hope this helps,
Gus Correa
was fixed in 1.3.2:
http://www.open-mpi.org/community/lists/announce/2009/04/0030.php
https://svn.open-mpi.org/trac/ompi/ticket/1853
If you are using 1.3.0 or 1.3.1, upgrade to 1.3.2.
I hope this helps.
Gus Correa
-
Gustavo Correa
know if 1.2.8 (which you are using)
has a problem with mpi_paffinity_alone,
but the OpenMPI developers may clarify this.
I hope this helps,
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
.)
The flags became: -xW -O3 -ip -no-prec-div
I used the same flags for ifort (FFLAGS, FCFLAGS), icc (CFLAGS)
and icpc (CXXFLAGS), to build OpenMPI 1.3.2, and it works.
For "Genuine Intel" processors you can upgrade -xW to whatever is
appropriate.
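For example, a configure invocation along these lines (only a sketch; the
prefix is made up):

./configure CC=icc CXX=icpc F77=ifort FC=ifort \
  CFLAGS="-xW -O3 -ip -no-prec-div" CXXFLAGS="-xW -O3 -ip -no-prec-div" \
  FFLAGS="-xW -O3 -ip -no-prec-div" FCFLAGS="-xW -O3 -ip -no-prec-div" \
  --prefix=/opt/sw/openmpi-1.3.2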
My $0.02.
sender ranks,
instead of only one size.
Just a thought.
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
---
to all of them, if the computers have the same
architecture).
Use the configure option --prefix=/directory/name to choose the installation
directory.
Don't choose the standard locations /usr or /usr/local, as it may
overwrite existing MPIs and mess things up with yum.
I hope this helps.
Gus Correa
(and
separate from the 10.42.0.0 net).
You can check this out with the tools (ping, etc).
I hope this helps,
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
changed configure
(somewhere along the 1.3 series), so I had to change again.
If the libraries aren't in standard places (/usr/lib, /usr/lib64),
and the includes aren't either (/usr/include), you need to tell configure
where they are. See the OpenMPI README file and FAQ.
My $0.02.
Gus Correa
PS - BTW, what
similar problems (with libnuma) here.
I hope this helps,
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
nt "Wrapper extra LIBS".
I have -lrdmacm -libverbs, you and Noam don't have them.
(Noam: I am not saying you don't have IB support! :))
My configure explicitly asks for ib support, Noam's (and maybe yours)
doesn't.
Somehow, slight differences in how one invokes
th
.
I hope this helps.
Gus Correa
-
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
-
Jim Kress ORG wrote
Hi Rob
Your email says you'll keep PVFS2.
However, on your blog PVFS2 is not mentioned (on the "Keep" list).
I suppose it will be kept, right?
Thank you,
Gus Correa
On 01/05/2016 12:31 PM, Rob Latham wrote:
I'm itching to discard some of the little-used file system drivers in
ROMIO,
on the nodes'
/opt, which *probably* will work:
https://software.intel.com/en-us/articles/intelr-composer-redistributable-libraries-by-version
**
I hope this helps,
Gus Correa
On 03/24/2016 12:01 AM, Gilles Gouaillardet wrote:
Elio,
usually, /opt is a local filesystem, so it is possible /opt/intel
prevents the core file from being created when the program
crashes, but on the upside it also prevents the disk from filling up with big
core files that are forgotten and hang around forever.
[ulimit -a will tell.]
I hope this helps,
Gus Correa
On 04/23/2016 07:06 PM, Gilles Gouaillardet wrote:
If you bu
you may also need to make the locked memory
unlimited:
ulimit -l unlimited
I hope this helps,
Gus Correa
On 05/05/2016 05:15 AM, Giacomo Rossi wrote:
gdb /opt/openmpi/1.10.2/intel/16.0.3/bin/mpif90
GNU gdb (GDB) 7.11
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL
this on the pbs_mom daemon
init script (I am still before the systemd era, that lovely POS).
And set the hard/soft limits in /etc/security/limits.conf as well.
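For instance, the memlock entries in /etc/security/limits.conf could look
like this (a sketch; you may want to scope them to a specific group instead
of "*"):

*    soft    memlock    unlimited
*    hard    memlock    unlimited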
I hope this helps,
Gus Correa
On 05/07/2016 12:27 PM, Jeff Squyres (jsquyres) wrote:
I'm afraid I don't know what a .btr file
incarnation of an OpenMPI 1.6.5 question
similar to yours (where .btr stands for backtrace):
http://stackoverflow.com/questions/25275450/cause-all-processes-running-under-openmpi-to-dump-core
Could this be due to an (unlikely) mix of OpenMPI 1.10 with 1.6.5?
Gus Correa
On Mon, May 9, 2016 at 12:04 PM,
I hope this helps,
Gus Correa
) See also this FAQ related to registered memory.
I set these parameters in /etc/modprobe.d/mlx4_core.conf,
but where they're set may depend on the Linux distro/release and the
OFED you're using.
https://www.open-mpi.org/faq/?category=openfabrics#ib-low-reg-mem
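As a sketch, the line in /etc/modprobe.d/mlx4_core.conf looks like this
(the values below are placeholders only; compute the right ones for your
RAM as the FAQ describes, then reload the module or reboot):

options mlx4_core log_num_mtt=24 log_mtts_per_seg=3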
I hope this helps,
Gus Correa
On
to (#18 in tuning runtime MPI to OpenFabrics)
regards the OFED kernel module parameters
log_num_mtt and log_mtts_per_seg, not to the openib btl mca parameters.
They may default to a less-than-optimal value.
https://www.open-mpi.org/faq/?category=openfabrics#ib-low-reg-mem
Gus Correa (not Chuck
er/cluster),
but in your case it can be adjusted to how often the program fails.
All atmosphere/ocean/climate/weather_forecast models work
this way (that's what we mostly run here).
I guess most CFD, computational Chemistry, etc, programs also do.
I hope this helps,
Gus Correa
On 06/16/2016 05:25 PM, A
Maybe just --with-valgrind or --with-valgrind=/usr would work?
On 07/14/2016 11:32 AM, David A. Schneider wrote:
I thought it would be a good idea to build a debugging version of
openmpi 1.10.3. Following the instructions in the FAQ:
e more user friendly.
You could also compile it with the flag -traceback
(or -fbacktrace, the syntax depends on the compiler, check the compiler
man page).
This at least will tell you the location in the program where the
segmentation fault happened (in the STDERR file of your job).
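For example (assuming Intel ifort or GNU gfortran; the flag name depends on
the compiler):

ifort    -g -traceback  -o myprog myprog.f90
gfortran -g -fbacktrace -o myprog myprog.f90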
I hope this helps.
mpirun in a short Torque script:
#PBS -l nodes=4:ppn=1
...
mpirun hostname
The output should show all four nodes.
Good luck!
Gus Correa
On 07/31/2017 02:41 PM, Mahmood Naderan wrote:
Well it is confusing!! As you can see, I added four nodes to the host
file (the same nodes are used by PBS). The -
$PBS_NODEFILE.
However, that doesn't seem to be the case here, as the mpirun command
line in the various emails has a single executable "a.out".
I hope this helps.
Gus Correa
On 07/31/2017 12:43 PM, Elken, Tom wrote:
“4 threads” In MPI, we refer to this as 4 ranks or 4 processes.
So w
Have you tried:
-mca btl vader,openib,self
or
-mca btl sm,openib,self
by chance?
That adds a btl for intra-node communication (vader or sm).
On 07/13/2017 05:43 PM, Boris M. Vulovic wrote:
I would like to know how to invoke InfiniBand hardware on CentOS 6x
cluster with OpenMPI (static
On 07/17/2017 01:06 PM, Gus Correa wrote:
Hi Boris
The nodes may have standard Gigabit Ethernet interfaces,
besides the Infiniband (RoCE).
You may want to direct OpenMPI to use the Infiniband interfaces,
not Gigabit Ethernet,
by adding something like this to "--mca btl self,vader,self"
https://www.open-mpi.org/faq/?category=all#tcp-selection
BTW, some of your questions (and others that you may hit later)
are covered in the OpenMPI FAQ:
https://www.open-mpi.org/faq/?category=all
I hope this helps,
Gus Correa
On 07/17/2017 12:43 PM, Boris M. Vulovic wrote:
Gus, Gilles, Russell, John:
Thanks very much f
: command not found”
I am following the instructions from here:
https://na-inet.jp/na/pccluster/centos_x86_64-en.html
Any help is much appreciated.
Corina
You need to install openmpi.x86_64 also, not only openmpi-devel.x86_64.
That is the minimum.
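For instance, with yum (as in the CentOS instructions you linked), something
like:

yum install openmpi.x86_64 openmpi-devel.x86_64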
I hope this helps,
Gus Correa
make install 2>&1 | tee my_make_install.log
** If using csh/tcsh:
./configure CC=gcc CXX=g++ F77=gfortran FC=gfortran
--prefix=/usr/local/openmpi |& tee my_configure.log
make |& tee my_make.log
make install |& tee my_make_install.log
I hope this helps,
Gus Correa
For what i
que to a job, if any, or when Torque is configured
without cpuset support, to somehow still bind the MPI processes to
cores/processors/sockets/etc.
I hope this helps,
Gus Correa
On 10/06/2017 02:22 AM, Anthony Thyssen wrote:
Sorry r...@open-mpi.org <mailto:r...@open-mpi.org> as Gilles Gouai
(as opposed to append) OpenMPI
to your PATH? Say:
export PATH='/home/user/openmpi_install/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'
I hope this helps,
Gus Correa
On 05/14/2018 12:40 PM, Max Mellette wrote:
John,
Thanks
On 08/10/2018 01:27 PM, Jeff Squyres (jsquyres) via users wrote:
It is unlikely that MPI_MINLOC and MPI_MAXLOC will go away any time soon.
As far as I know, Nathan hasn't advanced a proposal to kill them in MPI-4,
meaning that they'll likely continue to be in MPI for at least another 10
are great,
knows nothing about the MPI Forum protocols and activities,
but hopes the Forum pays attention to users' needs.
Gus Correa
PS - Jeff S.: Please, bring Diego's request to the Forum! Add my vote
too. :)
On 08/10/2018 02:19 PM, Jeff Squyres (jsquyres) via users wrote:
Jeff H
if it strips off useful functionality.
My cheap 2 cents from a user.
Gus Correa
On 08/10/2018 01:52 PM, Jeff Hammond wrote:
This thread is a perfect illustration of why MPI Forum participants
should not flippantly discuss feature deprecation in discussion with
users. Users who are not familiar
Open MPI 4.0.2 here:
/home/guido/libraries/compiled_with_gcc-7.3.0/openmpi-4.0.2/
Have you tried this instead?
LD_LIBRARY_PATH=$HOME/libraries/compiled_with_gcc-7.3.0/openmpi-4.0.2/lib:$LD_LIBRARY_PATH
I hope this helps,
Gus Correa
On Tue, Dec 10, 2019 at 4:40 PM Guido granda muñoz via users
Can you use taskid after MPI_Finalize?
Isn't it undefined/deallocated at that point?
Just a question (... or two) ...
Gus Correa
> MPI_Finalize();
>
> printf("END OF CODE from task %d\n", taskid);
On Tue, Oct 13, 2020 at 10:34 AM Jeff Squyres (jsquyres) via users
"The reports of MPI death are greatly exaggerated." [Mark Twain]
And so are the reports of Fortran death
(despite the efforts of many CS departments
to make their students Fortran- and C-illiterate).
IMHO the level of abstraction of MPI is adequate, and actually very well
designed.
Higher levels
+1
In my experience, moving software, especially something of the complexity
of (Open) MPI, is much more troublesome (and often just useless frustration)
and more time-consuming than recompiling it.
Hardware, OS, kernel, libraries, etc, are unlikely to be compatible.
Gus Correa
On Fri, Jul 24, 2020
>> Core(s) per socket: 8
> "4. If none of a hostfile, the --host command line parameter, or an RM is
> present, Open MPI defaults to the number of processor cores"
Have you tried -np 8?
On Sun, Nov 8, 2020 at 12:25 AM Paul Cizmas via users <
users@lists.open-mpi.org> wrote:
>
-hostfile
https://www.open-mpi.org/faq/?category=running
I hope this helps,
Gus Correa
On Tue, Oct 20, 2020 at 4:47 PM Jorge SILVA via users <
users@lists.open-mpi.org> wrote:
> Hello,
>
> I installed kubuntu20.4.1 with openmpi 4.0.3-0ubuntu in two different
> computers in the stand
https://www.mail-archive.com/users@lists.open-mpi.org/msg08962.html
https://www.mail-archive.com/users@lists.open-mpi.org/msg10375.html
I hope this helps,
Gus Correa
On Thu, Jan 14, 2021 at 5:45 PM Passant A. Hafez via users <
users@lists.open-mpi.org> wrote:
> Hello,
>
>
> I'm having an error when trying to
processes are talking.
I hope this helps,
Gus Correa
On Sun, Dec 5, 2021 at 1:12 PM Jeff Squyres (jsquyres) via users <
users@lists.open-mpi.org> wrote:
> FWIW: Open MPI 4.1.2 has been released -- you can probably stop using an
> RC release.
>
> I think you're probably run
This may have changed since, but these used to be relevant points.
Overall, the Open MPI FAQ has lots of good suggestions:
https://www.open-mpi.org/faq/
some specific for performance tuning:
https://www.open-mpi.org/faq/?category=tuning
https://www.open-mpi.org/faq/?category=openfabrics
1) Make
with:
yum list | grep numa (CentOS 7, RHEL 7)
dnf list | grep numa (CentOS 8, RHEL 8, RockyLinux 8, Fedora, etc)
apt list | grep numa (Debian, Ubuntu)
If not, you can install (or ask the system administrator to do it).
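If it is missing, the install command would look roughly like this (package
names vary across distros, so double-check with the list commands above):

dnf install numactl-libs numactl    # CentOS/RHEL/Rocky 8
apt install libnuma1 numactl        # Debian/Ubuntu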
I hope this helps,
Gus Correa
On Wed, Jul 19, 2023 at 11:55 AM Jeff Squyres
to pull their
system image, separate from yum/dnf/apt.]
Gus
On Thu, Jul 20, 2023 at 4:00 AM Luis Cebamanos via users <
users@lists.open-mpi.org> wrote:
> Hi Gus,
>
> Yeap, I can see softlink is missing on the compute nodes.
>
> Thanks!
> Luis
>
>