Re: [OMPI users] WRF, OpenMPI and PGI 7.2

2009-02-20 Thread Gus Correa
/?category=tcp#tcp-selection Gus Correa PS - BTW - Our old non-Rocks cluster has Myrinet-2000 (GM). After I get the new cluster up and running and in production, I am thinking of revamping the old cluster and installing Rocks on it. I would love to learn from your experience with your Rocks+Myrin

Re: [OMPI users] Any scientific application heavily using MPI_Barrier?

2009-03-05 Thread Gus Correa
) barriers embedded in it. Another reason may be the way these codes are developed, say, whether there should be a code architect wizard who designs a master plan, or some form of integration and adaptation of well-proven existing code, or something else. My two cents, Gus Correa

[OMPI users] Cannot build OpenMPI 1.3 with PGI pgf90 and Gnu gcc/g++.

2009-03-25 Thread Gus Correa
Unknown switch: -pthread make[4]: *** [libmpi_f90.la] Error 1 Thank you, Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory - Columbia University Palisades, NY, 10964-8000 - USA -

Re: [OMPI users] Cannot build OpenMPI 1.3 with PGI pgf90 and Gnu gcc/g++.

2009-03-27 Thread Gus Correa
t a unique situation, and other people in our research field also need and use these libraries built on "Gnu+commercial Fortran" compilers. For this reason I keep a variety of OpenMPI, MPICH2, MVAPICH2 builds, and I try to stay current with the newest releases. Any help is much appre

Re: [OMPI users] Cannot build OpenMPI 1.3 with PGI pgf90 and Gnu gcc/g++.

2009-03-30 Thread Gus Correa
t on 1.3, causing the build to fail. The build scripts are the same, the computer is the same, etc, only the OpenMPI release varies. Is there a way around it? E.g., not using pthreads there, if not essential, or perhaps helping PGI to find the library and link to i

Re: [OMPI users] Issues on install 1.3.1

2009-04-02 Thread Gus Correa
also works for other extra libraries (e.g. Torque, Infiniband) on Linux x86_64. This is what I used on our AMD x86_64 with CentOS 5.2, and it works. My two cents, Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory

Re: [OMPI users] Issues on install 1.3.1

2009-04-02 Thread Gus Correa
with libnuma support, it sounds to me that the natural path to choose is /usr/lib64. Sorry, I don't have an answer about performance. You may need to ask somebody else or google around about the relative performance of 32-bit vs. 64-bit mode. Gus Correa

Re: [OMPI users] ssh MPi and program tests

2009-04-06 Thread Gus Correa
gure_amber -openmpi=/full/path/to/openmpi or similar? I hope this helps. Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory - Columbia University Palisades, NY, 10964-

Re: [OMPI users] Problem with running openMPI program

2009-04-06 Thread Gus Correa
org/faq/ Many of your questions may have been answered there already. I encourage you to read them, particularly the General Information, Building, and Running Jobs ones. Please bear with me as this is the first time I am doing a project on a Linux clust

Re: [OMPI users] ssh MPi and program tests

2009-04-06 Thread Gus Correa
Hi Francesco, list Francesco Pietra wrote: On Mon, Apr 6, 2009 at 5:21 PM, Gus Correa <g...@ldeo.columbia.edu> wrote: Hi Francesco Did you try to run examples/connectivity_c.c, or examples/hello_c.c before trying amber? They are in the directory where you untarred the OpenMPI t

Re: [OMPI users] Fwd: ssh MPi and program tests

2009-04-07 Thread Gus Correa
I don't remember what your intent was, but if you wanted to use icc (and icpc), somehow the OpenMPI configure script didn't pick it up. If you really want icc, rebuild OpenMPI giving the full path name to icc (CC=/full/path/to/icc), and likewise for icpc and ifort. thanks francesco Good luck,
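A sketch of such a rebuild; the Intel compiler paths and the install prefix below are hypothetical placeholders, adjust both to your system:
    ./configure CC=/opt/intel/bin/icc CXX=/opt/intel/bin/icpc \
        F77=/opt/intel/bin/ifort FC=/opt/intel/bin/ifort \
        --prefix=/opt/openmpi/1.3-intel
    make all install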

Re: [OMPI users] recompiling 1.3.1 with intels

2009-04-08 Thread Gus Correa
something in the installation script that broke the installation procedure, would not allow me to choose the install directory, modified their directory structure names, etc. The net result was that I couldn't install, let alone test it. I hope this helps. Gus Correa

Re: [OMPI users] shared libraries issue compiling 1.3.1/intel 10.1.022

2009-04-09 Thread Gus Correa
I hope this helps. Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory - Columbia University Palisades, NY, 10964-8000 - USA -

[OMPI users] Problems configuring OpenMPI 1.3.1 with numa, torque, and openib

2009-04-09 Thread Gus Correa
whether it is a 64- or 32-bit compiler, as somehow it seemed to work.) Thank you, Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory - Columbia University Palisades, NY, 10964-8000 - USA -

Re: [OMPI users] Fwd: shared libraries issue compiling 1.3.1/intel 10.1.022

2009-04-10 Thread Gus Correa
you. (OK, I was about to say you forgot deb64 after -host, but you sent the fix below.) I hope this helps. Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory - Columbia University Palisades, NY, 10964-

Re: [OMPI users] Problems configuring OpenMPI 1.3.1 with numa, torque, and openib

2009-04-10 Thread Gus Correa
other users besides me. And yes/lib :), I could build 1.3.1 right, with numa, torque, and openib! Many thanks, Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory - Columbia University Palisades, NY, 10964-8000 - USA

Re: [OMPI users] Problem with running openMPI program

2009-04-13 Thread Gus Correa
names help avoid confusion with other MPI flavors. One MPI benchmark available free from Intel: http://www.intel.com/cd/software/products/asmo-na/eng/219848.htm There may be others though. Gus Correa - Gustavo Correa Lamont

Re: [OMPI users] Problem with running openMPI program

2009-04-13 Thread Gus Correa
line with the -host option, or you can specify them in a file with the -hostfile option. Do "mpiexec --help" to learn the details. I hope this helps. Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory - Col

Re: [OMPI users] PGI Fortran pthread support

2009-04-14 Thread Gus Correa
Hi Orion, Prentice, list I had a related problem recently, building OpenMPI with gcc, g++ and pgf90 8.0-4 on CentOS 5.2. Configure would complete, but not make. See this thread for a workaround: http://www.open-mpi.org/community/lists/users/2009/04/8724.php Gus Correa

Re: [OMPI users] PGI Fortran pthread support

2009-04-14 Thread Gus Correa
ity/lists/users/2009/04/8724.php There is a little script in the above message to do the job. I hope it helps. Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory - Columbia University Palisades,

Re: [OMPI users] PGI Fortran pthread support

2009-04-14 Thread Gus Correa
Orion Poplawski wrote: Gus Correa wrote: Hi Orion, Prentice, list I had a related problem recently, building OpenMPI with gcc, g++ and pgf90 8.0-4 on CentOS 5.2. Configure would complete, but not make. Easier solution is to set FC to "pgf90 -noswitcherror". Does not appear to
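A sketch combining the two workarounds from this thread (the install prefix is a hypothetical placeholder):
    # either strip -pthread via the script in the message linked above,
    # or let pgf90 ignore the unknown switch:
    ./configure CC=gcc CXX=g++ F77='pgf90 -noswitcherror' \
        FC='pgf90 -noswitcherror' --prefix=/opt/openmpi/1.3-pgi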

Re: [OMPI users] Problem with running openMPI program

2009-04-17 Thread Gus Correa
/?category=running#run-prereqs And try this recipe (if you use RSA keys instead of DSA, replace all "dsa" by "rsa"): http://www.sshkeychain.org/mirrors/SSH-with-Keys-HOWTO/SSH-with-Keys-HOWTO-4.html#ss4.3 I hope thi
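The recipe boils down to something like this minimal sketch (node02 is a hypothetical host name; replace dsa with rsa if you use RSA keys):
    ssh-keygen -t dsa                                # empty passphrase, or use ssh-agent
    cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys                 # sshd rejects lax permissions
    ssh node02 hostname                              # should no longer prompt for a password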

Re: [OMPI users] Problem with running openMPI program

2009-04-20 Thread Gus Correa
(or carefully set the OpenMPI bin path ahead of any other). The Linux command "locate" helps find things (e.g. "locate mpi.h"). You may need to update the location database before using it with "updatedb".

Re: [OMPI users] Problem with running openMPI program

2009-04-20 Thread Gus Correa
://www.openssh.com/ http://en.wikipedia.org/wiki/OpenSSH Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory - Columbia University Palisades, NY, 10964-8000 - USA

Re: [OMPI users] Problem with running openMPI program

2009-04-20 Thread Gus Correa
n standard Ethernet TCP/IP. Did you try your own programs? Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory - Columbia University Palis

Re: [OMPI users] Problem with running openMPI program

2009-04-20 Thread Gus Correa
ay find free MPI programs on the Internet. My two cents, Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory - Columbia University Palisades, NY, 10964-8000 - USA

Re: [OMPI users] Problem with running openMPI program

2009-04-21 Thread Gus Correa
not there you need to install one of them. Read my previous email for details. I hope it will help you get HPL working, if you are interested in HPL. I hope this helps. Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory -

Re: [OMPI users] Problem with running openMPI program

2009-04-22 Thread Gus Correa
y answered. There isn't much more I can say to help you out. Good luck! Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory - Columbia University Palisades, NY, 10964-

Re: [OMPI users] Problem with running openMPI program

2009-04-22 Thread Gus Correa
Hi This is an MPICH2 error, not OpenMPI. I saw you sent the same message to the MPICH list. It looks like you are mixing both MPI flavors. Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory - Columbia University

Re: [OMPI users] Problem with running openMPI program

2009-04-22 Thread Gus Correa
different directories for OpenMPI and MPICH2. Or install only one MPI flavor. I hope this helps. Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory - Columbia University Palisades,

Re: [OMPI users] Problem with running openMPI program

2009-04-23 Thread Gus Correa
work in the beginning, but may be worthwhile in the long run. See this: http://www.rocksclusters.org/wordpress/ My two cents. Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory - Columbia University P

Re: [OMPI users] Problem with running openMPI program

2009-04-28 Thread Gus Correa
/view/Cimec/PETScFEM Computational Chemistry, molecular dynamics, etc: http://www.tddft.org/programs/octopus/wiki/index.php/Main_Page http://classic.chem.msu.su/gran/gamess/ http://ambermd.org/ http://www.gromacs.org/ http://www.charmm.org/ Gus Correa Ankush Kaul wrote: Thanks everyone (esp Gus and J

Re: [OMPI users] Problem with running openMPI program

2009-04-29 Thread Gus Correa
http://fats-raid.ldeo.columbia.edu/pages/parallel_programming.html#mpi I hope this helps. Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory - Columbia Un

[OMPI users] HPL with OpenMPI: Do I have a memory leak?

2009-05-01 Thread Gus Correa
-prefix /the/run/directory \ -np 8 \ -mca btl [openib,]sm,self \ xhpl Any help, insights, suggestions, reports of previous experiences, are much appreciated. Thank you, Gus Correa

Re: [OMPI users] HPL with OpenMPI: Do I have a memory leak?

2009-05-01 Thread Gus Correa
the error: "No executable was specified on the mpiexec command line.". Could this possibly be the issue (say, wrong parsing of mca options)? Many thanks! Gus Correa - G

Re: [OMPI users] HPL with OpenMPI: Do I have a memory leak?

2009-05-01 Thread Gus Correa
inues to look strange, there are more things to check. Thanks, Jacob -Original Message- From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Gus Correa Sent: Friday, May 01, 2009 12:17 PM To: Open MPI Users Subject: [OMPI users] HPL with OpenMPI: Do I have a memory lea

Re: [OMPI users] HPL with OpenMPI: Do I have a memory leak?

2009-05-01 Thread Gus Correa
ion per month! :) Many thanks! Gus Correa Brian W. Barrett wrote: Gus - Open MPI 1.3.0 & 1.3.1 attempted to use some controls in the glibc malloc implementation to handle memory registration caching for InfiniBand. Unfortunately, it was not only buggy in that it didn't work, but it als

Re: [OMPI users] 1.3.1 -rf rankfile behaviour ??

2009-05-04 Thread Gus Correa
-np ${NP} \ -mca mpi_paffinity_alone 1 \ -mca btl openib,sm,self \ -mca mpi_leave_pinned 0 \ -mca paffinity_base_verbose 5 \ xhpl Thank you, Gus Correa $ cat /proc/cpuinfo processor : 0 vendor_id : AuthenticAMD cpu family : 1

Re: [OMPI users] *** An error occurred in MPI_Init

2009-05-08 Thread Gus Correa
about what you are really using. I hope this helps. Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory - Columbia University Palisades, NY, 10964-8000 - USA -

Re: [OMPI users] *** An error occurred in MPI_Init

2009-05-08 Thread Gus Correa
en the $HOME/.bashrc and added the following: PATH="/usr/local/bin:$PATH" LD_LIBRARY_PATH="/usr/local/lib:$LD_LIBRARY_PATH" Note that /usr/local/bin, not /usr/local/include, should be prepended to your PATH! Gus Correa ---
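The corrected settings, as a sketch, assuming Open MPI was installed under the /usr/local prefix mentioned above:
    export PATH="/usr/local/bin:$PATH"
    export LD_LIBRARY_PATH="/usr/local/lib:$LD_LIBRARY_PATH"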

Re: [OMPI users] Please help me with this simple setup. i am stuck

2009-05-09 Thread Gus Correa
ect hostfile). Moreover, the two hosts must talk to each other smoothly, they must agree about passwordless connections, about where the executables are, etc. You are the master, and you must tell both hosts how to agree on these things. You'll get there, just be patient, read the av

Re: [OMPI users] scaling problem with openmpi

2009-05-15 Thread Gus Correa
For instance, nodes that are on switch A will probably have a larger latency to talk to nodes on switch B, I suppose. I hope it helps. Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory - Columbia University Palisades, NY,

Re: [OMPI users] scaling problem with openmpi

2009-05-15 Thread Gus Correa
.ruhr-uni-bochum.de/~axel.kohlmeyer/cpmd-bench.html#parallel Is this perhaps because the size of the problem doesn't justify using more than 32 processors? What is the meaning of the "32" in "CPMD 32 water"? I hope this helps, Gus Correa ---

Re: [OMPI users] scaling problem with openmpi

2009-05-16 Thread Gus Correa
t is going on. Once again, I am sorry for not reading your original message with the due attention. Gus Correa Gus Correa wrote: Hi Roman I googled out and found that CPMD is a molecular dynamics program. (What would be of civilization without Google?) Unfortunately I kind of wiped off fr

Re: [OMPI users] scaling problem with openmpi

2009-05-18 Thread Gus Correa
may want to try the workaround or the upgrade, regardless of any scaling performance expectations. Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory - Columbia University P

Re: [OMPI users] scaling problem with openmpi

2009-05-18 Thread Gus Correa
Hi Pavel This is not my league, but here are some CPMD helpful links (code, benchmarks): http://www.cpmd.org/ http://www.cpmd.org/cpmd_thecode.html http://www.theochem.ruhr-uni-bochum.de/~axel.kohlmeyer/cpmd-bench.html IHIH Gus Correa Noam Bernstein wrote: On May 18, 2009, at 12:50 PM

Re: [OMPI users] OpenMPI installation

2009-05-26 Thread Gus Correa
} I hope this helps, Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory - Columbia University Palisades, NY, 10964-8000 - USA - Fivoskouk wrote: Hi

Re: [OMPI users] MPI_COMM_WORLD Error

2009-05-26 Thread Gus Correa
ly use the full path names to the OpenMPI mpif90 and to mpiexec. Inadvertent mix of these executables from different MPIs is a common source of frustration too. I hope this helps, Gus Correa - Gustavo Correa Lamont-Doherty Ea
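A sketch of what using full path names means here (the install prefix is hypothetical):
    /opt/openmpi/1.3/bin/mpif90 -o my_prog my_prog.f90
    /opt/openmpi/1.3/bin/mpiexec -np 4 ./my_prog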

Re: [OMPI users] To connect two computers as two nodes

2009-05-31 Thread Gus Correa
*BOTH* computers, or install in a directory that is (NFS) mounted on both computers. I hope this helps. Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory - Columbia University Palisades, NY, 10964-8000 - USA -

Re: [OMPI users] Performance testing software?

2009-05-31 Thread Gus Correa
Hi Trent The famous one is HPL, the Top500 benchmark: http://www.netlib.org/benchmark/hpl/ It takes some effort to configure and run it. GotoBLAS is probably your best choice for HPL: http://www.tacc.utexas.edu/resources/software/ I hope this helps, Gus Correa

Re: [OMPI users] sync problem

2009-06-01 Thread Gus Correa
was fixed in 1.3.2: http://www.open-mpi.org/community/lists/announce/2009/04/0030.php https://svn.open-mpi.org/trac/ompi/ticket/1853 If you are using 1.3.0 or 1.3.1, upgrade to 1.3.2. I hope this helps. Gus Correa - Gustavo C

Re: [OMPI users] Openmpi and processor affinity

2009-06-02 Thread Gus Correa
I don't know if 1.2.8 (which you are using) has a problem with mpi_paffinity_alone, but the OpenMPI developers may clarify this. I hope this helps, Gus Correa - Gustavo Correa Lamont-Dohert

Re: [OMPI users] Pb in configure script when using ifort with "-fast" + link of opal_wrapper

2009-06-03 Thread Gus Correa
.) The flags became: -xW -O3 -ip -no-prec-div I used the same flags for ifort (FFLAGS, FCFLAGS), icc (CFLAGS) and icpc (CXXFLAGS), to build OpenMPI 1.3.2, and it works. For "Genuine Intel" processors you can upgrade -xW to whatever is appropriate. My $0.02.

Re: [OMPI users] Receiving MPI messages of unknown size

2009-06-03 Thread Gus Correa
sender ranks, instead of only one size. Just a thought. Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory - Columbia University Palisades, NY, 10964-8000 - USA ---

Re: [OMPI users] uninstall

2009-06-10 Thread Gus Correa
to all of them, if the computers have the same architecture). Use the configure --prefix=/directory/name to choose the installation directory. Don't choose the standard locations /usr or /usr/local, as it may overwrite existing MPIs and mess things up with yum. I hope this helps. Gus Correa
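For instance (the prefix below is a hypothetical example); a private prefix also makes uninstalling a single rm:
    ./configure --prefix=/home/me/sw/openmpi-1.3.2
    make all install
    rm -rf /home/me/sw/openmpi-1.3.2    # clean uninstall later, if needed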

Re: [OMPI users] MPI over ethernet non default-adapter - Need Help/Advice

2009-06-23 Thread Gus Correa
(and separate from the 10.42.0.0 net). You can check this out with the tools (ping, etc). I hope this helps, Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory - Columbia University Palisades, NY, 10964-8000 - USA

Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband

2009-06-23 Thread Gus Correa
changed configure (somewhere along the 1.3 series), so I had to change again. If the libraries aren't in standard places (/usr/lib /usr/lib64), and the includes also (/usr/include) you need to tell configure where they are. See the OpenMPI README file and FAQ. My $0.02. Gus Correa PS - BTW, what

Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband

2009-06-23 Thread Gus Correa
similar problems (with libnuma) here. I hope this helps, Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory - Columbia University Palisades, NY, 10964-8000 - USA

Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband

2009-06-24 Thread Gus Correa
nt "Wrapper extra LIBS". I have -lrdmacm -libverbs, you and Noam don't have them. (Noam: I am not saying you don't have IB support! :)) My configure explicitly asks for ib support, Noam's (and maybe yours) doesn't. Somehow, slight differences in how one invokes th

Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband

2009-06-24 Thread Gus Correa
. I hope this helps. Gus Correa - Gustavo Correa Lamont-Doherty Earth Observatory - Columbia University Palisades, NY, 10964-8000 - USA - Jim Kress ORG wrote

Re: [OMPI users] cleaning up old ROMIO (MPI-IO) drivers

2016-01-05 Thread Gus Correa
Hi Rob Your email says you'll keep PVFS2. However, on your blog PVFS2 is not mentioned (on the "Keep" list). I suppose it will be kept, right? Thank you, Gus Correa On 01/05/2016 12:31 PM, Rob Latham wrote: I'm itching to discard some of the little-used file system drivers in ROMIO,

Re: [OMPI users] Problems in compiling a code with dynamic linking

2016-03-24 Thread Gus Correa
on the nodes' /opt, which *probably* will work: https://software.intel.com/en-us/articles/intelr-composer-redistributable-libraries-by-version ** I hope this helps, Gus Correa On 03/24/2016 12:01 AM, Gilles Gouaillardet wrote: Elio, usually, /opt is a local filesystem, so it is possible /opt/intel

Re: [OMPI users] MPIRUN SEGMENTATION FAULT

2016-04-25 Thread Gus Correa
prevents the core file from being created when the program crashes, but on the upside also prevents the disk from filling up with big core files that are forgotten and hang around forever. [ulimit -a will tell.] I hope this helps, Gus Correa On 04/23/2016 07:06 PM, Gilles Gouaillardet wrote: If you bu

Re: [OMPI users] Segmentation Fault (Core Dumped) on mpif90 -v

2016-05-05 Thread Gus Correa
you may also need to make the locked memory unlimited: ulimit -l unlimited I hope this helps, Gus Correa On 05/05/2016 05:15 AM, Giacomo Rossi wrote: gdb /opt/openmpi/1.10.2/intel/16.0.3/bin/mpif90 GNU gdb (GDB) 7.11 Copyright (C) 2016 Free Software Foundation, Inc. License GPLv3+: GNU GPL
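To make that limit persistent across logins, a sketch for /etc/security/limits.conf, per the advice above:
    *   soft   memlock   unlimited
    *   hard   memlock   unlimited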

Re: [OMPI users] No core dump in some cases

2016-05-09 Thread Gus Correa
this on the pbs_mom daemon init script (I am still before the systemd era, that lovely POS). And set the hard/soft limits on /etc/security/limits.conf as well. I hope this helps, Gus Correa On 05/07/2016 12:27 PM, Jeff Squyres (jsquyres) wrote: I'm afraid I don't know what a .btr file
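A sketch of both settings; the limits.conf entries assume you want core dumps enabled for all users:
    # /etc/security/limits.conf
    *   soft   core   unlimited
    *   hard   core   unlimited
    # per-shell (or in the pbs_mom init script, as above):
    ulimit -c unlimited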

Re: [OMPI users] No core dump in some cases

2016-05-10 Thread Gus Correa
incarnation of an OpenMPI 1.6.5 question similar to yours (where .btr stands for backtrace): http://stackoverflow.com/questions/25275450/cause-all-processes-running-under-openmpi-to-dump-core Could this be due to an (unlikely) mix of OpenMPI 1.10 with 1.6.5? Gus Correa On Mon, May 9, 2016 at 12:04 PM,

Re: [OMPI users] [slightly off topic] hardware solutions with monetary cost in mind

2016-05-20 Thread Gus Correa
I hope this helps, Gus Correa

Re: [OMPI users] "failed to create queue pair" problem, but settings appear OK

2016-06-15 Thread Gus Correa
) See also this FAQ related to registered memory. I set these parameters in /etc/modprobe.d/mlx4_core.conf, but where they're set may depend on the Linux distro/release and the OFED you're using. https://www.open-mpi.org/faq/?category=openfabrics#ib-low-reg-mem I hope this helps, Gus Correa On

Re: [OMPI users] "failed to create queue pair" problem, but settings appear OK

2016-06-15 Thread Gus Correa
to (#18 in tuning runtime MPI to OpenFabrics) regards the OFED kernel module parameters log_num_mtt and log_mtts_per_seg, not to the openib btl mca parameters. They may default to a less-than-optimal value. https://www.open-mpi.org/faq/?category=openfabrics#ib-low-reg-mem Gus Correa (not Chuck
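A sketch of such a setting; the values below are illustrative only, and the FAQ item above explains how to size them to the node's RAM:
    # /etc/modprobe.d/mlx4_core.conf  (reload the module or reboot to apply)
    options mlx4_core log_num_mtt=24 log_mtts_per_seg=1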

Re: [OMPI users] Restart after code hangs

2016-06-16 Thread Gus Correa
er/cluster), but in your case it can be adjusted to how often the program fails. All atmosphere/ocean/climate/weather_forecast models work this way (that's what we mostly run here). I guess most CFD, computational Chemistry, etc, programs also do. I hope this helps, Gus Correa On 06/16/2016 05:25 PM, A

Re: [OMPI users] how to build with memchecker using valgrind, preferable linux distro install of valgrind?

2016-07-14 Thread Gus Correa
Maybe just --with-valgrind or --with-valgrind=/usr would work? On 07/14/2016 11:32 AM, David A. Schneider wrote: I thought it would be a good idea to build a debugging version of openmpi 1.10.3. Following the instructions in the FAQ:
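A sketch of a debug build along those lines (the install prefix is a hypothetical placeholder):
    ./configure --with-valgrind=/usr --enable-debug --enable-memchecker \
        --prefix=/opt/openmpi/1.10.3-debug
    make all install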

Re: [OMPI users] MPI_ABORT was invoked on rank 0 in communicator compute with errorcode 59

2016-11-15 Thread Gus Correa
e more user friendly. You could also compile it with the flag -traceback (or -fbacktrace, the syntax depends on the compiler, check the compiler man page). This at least will tell you the location in the program where the segmentation fault happened (in the STDERR file of your job). I hope this h

Re: [OMPI users] -host vs -hostfile

2017-07-31 Thread Gus Correa
mpirun in a short Torque script: #PBS -l nodes=4:ppn=1 ... mpirun hostname The output should show all four nodes. Good luck! Gus Correa On 07/31/2017 02:41 PM, Mahmood Naderan wrote: Well it is confusing!! As you can see, I added four nodes to the host file (the same nodes are used by PBS). The -
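A complete minimal script along the lines suggested above (the job name is arbitrary):
    #PBS -l nodes=4:ppn=1
    #PBS -N hostname_test
    cd $PBS_O_WORKDIR
    mpirun hostname    # with tm support, mpirun gets the node list from Torque
Submit it with qsub; the output file should list four distinct node names.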

Re: [OMPI users] -host vs -hostfile

2017-07-31 Thread Gus Correa
$PBS_NODEFILE. However, that doesn't seem to be the case here, as the mpirun command line in the various emails has a single executable "a.out". I hope this helps. Gus Correa On 07/31/2017 12:43 PM, Elken, Tom wrote: “4 threads” In MPI, we refer to this as 4 ranks or 4 processes. So w

Re: [OMPI users] Q: Basic invoking of InfiniBand with OpenMPI

2017-07-13 Thread Gus Correa
Have you tried: -mca btl vader,openib,self or -mca btl sm,openib,self by chance? That adds a btl for intra-node communication (vader or sm). On 07/13/2017 05:43 PM, Boris M. Vulovic wrote: I would like to know how to invoke InfiniBand hardware on CentOS 6x cluster with OpenMPI (static
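For example, with ./my_app standing in for the real executable:
    mpirun -np 16 -mca btl vader,openib,self ./my_app   # recent releases (vader)
    mpirun -np 16 -mca btl sm,openib,self ./my_app      # older releases (sm)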

Re: [OMPI users] Q: Basic invoking of InfiniBand with OpenMPI

2017-07-17 Thread Gus Correa
On 07/17/2017 01:06 PM, Gus Correa wrote: Hi Boris The nodes may have standard Gigabit Ethernet interfaces, besides the Infiniband (RoCE). You may want to direct OpenMPI to use the Infiniband interfaces, not Gigabit Ethernet, by adding something like this to the mpirun command line: "--mca btl openib,vader,self"

Re: [OMPI users] Q: Basic invoking of InfiniBand with OpenMPI

2017-07-17 Thread Gus Correa
aq/?category=all#tcp-selection BTW, some of your questions (and others that you may hit later) are covered in the OpenMPI FAQ: https://www.open-mpi.org/faq/?category=all I hope this helps, Gus Correa On 07/17/2017 12:43 PM, Boris M. Vulovic wrote: Gus, Gilles, Russell, John: Thanks very much f

Re: [OMPI users] Help

2017-04-27 Thread Gus Correa
: command not found” I am following the instruction from here: https://na-inet.jp/na/pccluster/centos_x86_64-en.html Any help is much appreciated. Corina You need to install openmpi.x86_64 also, not only openmpi-devel.x86_64. That is the minimum. I hope this helps, Gus Correa

Re: [OMPI users] Fwd: Make All error regarding either "Conflicting" or "Previous Declaration" among others

2017-09-21 Thread Gus Correa
2>&1 | tee my_make_install.log ** If using csh/tcsh: ./configure CC=gcc CXX=g++ F77=gfortran FC=gfortran --prefix=/usr/local/openmpi |& tee my_configure.log make |& tee my_make.log make install |& tee my_make_install.log I hope this helps, Gus Correa For what i

Re: [OMPI users] OpenMPI with-tm is not obeying torque

2017-10-06 Thread Gus Correa
que to a job, if any, or when Torque is configured without cpuset support, to somehow still bind the MPI processes to cores/processors/sockets/etc. I hope this helps, Gus Correa On 10/06/2017 02:22 AM, Anthony Thyssen wrote: Sorry r...@open-mpi.org as Gilles Gouai

Re: [OMPI users] OpenMPI 3.0.1 - mpirun hangs with 2 hosts

2018-05-14 Thread Gus Correa
Did you prepend (as opposed to append) OpenMPI to your PATH? Say: export PATH='/home/user/openmpi_install/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin' I hope this helps, Gus Correa On 05/14/2018 12:40 PM, Max Mellette wrote: John, Thanks

Re: [OMPI users] know which CPU has the maximum value

2018-08-10 Thread Gus Correa
On 08/10/2018 01:27 PM, Jeff Squyres (jsquyres) via users wrote: It is unlikely that MPI_MINLOC and MPI_MAXLOC will go away any time soon. As far as I know, Nathan hasn't advanced a proposal to kill them in MPI-4, meaning that they'll likely continue to be in MPI for at least another 10

Re: [OMPI users] know which CPU has the maximum value

2018-08-10 Thread Gus Correa
are great, knows nothing about the MPI Forum protocols and activities, but hopes the Forum pays attention to users' needs. Gus Correa PS - Jeff S.: Please, bring Diego's request to the Forum! Add my vote too. :) On 08/10/2018 02:19 PM, Jeff Squyres (jsquyres) via users wrote: Jeff H

Re: [OMPI users] know which CPU has the maximum value

2018-08-10 Thread Gus Correa
if it strips off useful functionality. My cheap 2 cents from a user. Gus Correa On 08/10/2018 01:52 PM, Jeff Hammond wrote: This thread is a perfect illustration of why MPI Forum participants should not flippantly discuss feature deprecation in discussion with users.  Users who are not familiar

Re: [OMPI users] mca_oob_tcp_recv_handler: invalid message type: 15

2019-12-10 Thread Gus Correa via users
Open MPI 4.0.2 here: /home/guido/libraries/compiled_with_gcc-7.3.0/openmpi-4.0.2/ Have you tried this instead? LD_LIBRARY_PATH=$HOME/libraries/compiled_with_gcc-7.3.0/openmpi-4.0.2/lib:$LD_LIBRARY_PATH I hope this helps, Gus Correa On Tue, Dec 10, 2019 at 4:40 PM Guido granda muñoz via users

Re: [OMPI users] Code failing when requesting all "processors"

2020-10-13 Thread Gus Correa via users
Can you use taskid after MPI_Finalize? Isn't it undefined/deallocated at that point? Just a question (... or two) ... Gus Correa > MPI_Finalize(); > > printf("END OF CODE from task %d\n", taskid); On Tue, Oct 13, 2020 at 10:34 AM Jeff Squyres (jsquyres) via users

Re: [OMPI users] MPI is still dominant paradigm?

2020-08-07 Thread Gus Correa via users
"The reports of MPI death are greatly exaggerated." [Mark Twain] And so are the reports of Fortran death (despite the efforts of many CS departments to make their students Fortran- and C-illiterate). IMHO the level of abstraction of MPI is adequate, and actually very well designed. Higher levels

Re: [OMPI users] Moving an installation

2020-07-24 Thread Gus Correa via users
+1 In my experience moving software, especially something of the complexity of (Open) MPI, is much more troublesome (and often just useless frustration) and time consuming than recompiling it. Hardware, OS, kernel, libraries, etc, are unlikely to be compatible. Gus Correa On Fri, Jul 24, 2020

Re: [OMPI users] 4.0.5 on Linux Pop!_OS

2020-11-07 Thread Gus Correa via users
>> Core(s) per socket: 8 > "4. If none of a hostfile, the --host command line parameter, or an RM is > present, Open MPI defaults to the number of processor cores" Have you tried -np 8? On Sun, Nov 8, 2020 at 12:25 AM Paul Cizmas via users < users@lists.open-mpi.org> wrote: >

Re: [OMPI users] mpirun on Kubuntu 20.4.1 hangs

2020-10-21 Thread Gus Correa via users
-hostfile https://www.open-mpi.org/faq/?category=running I hope this helps, Gus Correa On Tue, Oct 20, 2020 at 4:47 PM Jorge SILVA via users < users@lists.open-mpi.org> wrote: > Hello, > > I installed kubuntu20.4.1 with openmpi 4.0.3-0ubuntu in two different > computers in the stand
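A sketch of a hostfile and its use; host names and slot counts below are hypothetical:
    # hostfile: one machine per line
    node01 slots=4
    node02 slots=4
    mpirun -np 8 -hostfile ./hostfile ./my_program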

Re: [OMPI users] Error with building OMPI with PGI

2021-01-14 Thread Gus Correa via users
.com/users@lists.open-mpi.org/msg08962.html https://www.mail-archive.com/users@lists.open-mpi.org/msg10375.html I hope this helps, Gus Correa On Thu, Jan 14, 2021 at 5:45 PM Passant A. Hafez via users < users@lists.open-mpi.org> wrote: > Hello, > > > I'm having an error when trying to

Re: [OMPI users] stdout scrambled in file

2021-12-05 Thread Gus Correa via users
processes are talking. I hope this helps, Gus Correa On Sun, Dec 5, 2021 at 1:12 PM Jeff Squyres (jsquyres) via users < users@lists.open-mpi.org> wrote: > FWIW: Open MPI 4.1.2 has been released -- you can probably stop using an > RC release. > > I think you're probably run

Re: [OMPI users] Using OSU benchmarks for checking Infiniband network

2022-02-07 Thread Gus Correa via users
This may have changed since, but these used to be relevant points. Overall, the Open MPI FAQ has lots of good suggestions: https://www.open-mpi.org/faq/ some specific for performance tuning: https://www.open-mpi.org/faq/?category=tuning https://www.open-mpi.org/faq/?category=openfabrics 1) Make

Re: [OMPI users] libnuma.so error

2023-07-19 Thread Gus Correa via users
with: yum list | grep numa (CentOS 7, RHEL 7) dnf list | grep numa (CentOS 8, RHEL 8, RockyLinux 8, Fedora, etc) apt list | grep numa (Debian, Ubuntu) If not, you can install (or ask the system administrator to do it). I hope this helps, Gus Correa On Wed, Jul 19, 2023 at 11:55 AM Jeff Squyres
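For example (package names vary a bit across distros, so treat these as a sketch):
    yum install numactl-libs    # CentOS 7 / RHEL 7: provides libnuma.so.1
    dnf install numactl-libs    # CentOS 8 / RHEL 8 / Rocky 8 / Fedora
    apt install libnuma1        # Debian / Ubuntu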

Re: [OMPI users] libnuma.so error

2023-07-20 Thread Gus Correa via users
to pull their system image, separate from yum/dnf/apt.] Gus On Thu, Jul 20, 2023 at 4:00 AM Luis Cebamanos via users < users@lists.open-mpi.org> wrote: > Hi Gus, > > Yeap, I can see softlink is missing on the compute nodes. > > Thanks! > Luis
