nism to pull their
system image, separate from yum/dnf/apt/]
Gus
On Thu, Jul 20, 2023 at 4:00 AM Luis Cebamanos via users <
users@lists.open-mpi.org> wrote:
> Hi Gus,
>
> Yeap, I can see the softlink is missing on the compute nodes.
>
> Thanks!
> Luis
>
> On 19/
with:
yum list | grep numa (CentOS 7, RHEL 7)
dnf list | grep numa (CentOS 8, RHEL 8, RockyLinux 8, Fedora, etc)
apt list | grep numa (Debian, Ubuntu)
If not, you can install it (or ask the system administrator to do it).
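If it turns out to be missing, the install commands are along these lines (a sketch;
package names can vary a bit across distros, and the unversioned libnuma.so symlink
usually comes from the -devel/-dev package):
sudo yum install numactl numactl-libs numactl-devel    # CentOS 7, RHEL 7
sudo dnf install numactl numactl-libs numactl-devel    # CentOS 8, RockyLinux 8, Fedora
sudo apt install numactl libnuma1 libnuma-dev          # Debian, Ubuntu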
I hope this helps,
Gus Correa
On Wed, Jul 19, 2023 at 11:55 AM Jeff Squyres
This may have changed since, but these used to be relevant points.
Overall, the Open MPI FAQ has lots of good suggestions:
https://www.open-mpi.org/faq/
some specific to performance tuning:
https://www.open-mpi.org/faq/?category=tuning
https://www.open-mpi.org/faq/?category=openfabrics
1) Make s
processes are talking.
I hope this helps,
Gus Correa
On Sun, Dec 5, 2021 at 1:12 PM Jeff Squyres (jsquyres) via users <
users@lists.open-mpi.org> wrote:
> FWIW: Open MPI 4.1.2 has been released -- you can probably stop using an
> RC release.
>
> I think you're probably run
.com/users@lists.open-mpi.org/msg08962.html
https://www.mail-archive.com/users@lists.open-mpi.org/msg10375.html
I hope this helps,
Gus Correa
On Thu, Jan 14, 2021 at 5:45 PM Passant A. Hafez via users <
users@lists.open-mpi.org> wrote:
> Hello,
>
>
> I'm having an error when tryin
>> Core(s) per socket: 8
> "4. If none of a hostfile, the --host command line parameter, or an RM is
> present, Open MPI defaults to the number of processor cores"
Have you tried -np 8?
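For instance (assuming your executable is ./a.out; adjust the name and the count):
mpirun -np 8 ./a.out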
On Sun, Nov 8, 2020 at 12:25 AM Paul Cizmas via users <
users@lists.open-mpi.org> wrote:
> Gill
-hostfile
https://www.open-mpi.org/faq/?category=running
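A minimal sketch (hostnames and slot counts here are made up; adjust them to your cluster):
# contents of a file named "myhosts", one line per node
node01 slots=4
node02 slots=4
mpirun -hostfile myhosts -np 8 ./a.out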
I hope this helps,
Gus Correa
On Tue, Oct 20, 2020 at 4:47 PM Jorge SILVA via users <
users@lists.open-mpi.org> wrote:
> Hello,
>
> I installed Kubuntu 20.04.1 with openmpi 4.0.3-0ubuntu on two different
> computers in the stand
Can you use taskid after MPI_Finalize?
Isn't it undefined/deallocated at that point?
Just a question (... or two) ...
Gus Correa
> MPI_Finalize();
>
> printf("END OF CODE from task %d\n", taskid);
On Tue, Oct 13, 2020 at 10:34 AM Jeff Squyres (jsquyres) via
"The reports of MPI death are greatly exaggerated." [Mark Twain]
And so are the reports of Fortran death
(despite the efforts of many CS departments
to make their students Fortran- and C-illiterate).
IMHO the level of abstraction of MPI is adequate, and actually very well
designed.
Higher levels
+1
In my experience, moving software, especially something of the complexity of
(Open) MPI,
is much more troublesome (and often just useless frustration) and
time-consuming than recompiling it.
Hardware, OS, kernel, libraries, etc., are unlikely to be compatible.
Gus Correa
On Fri, Jul 24, 2020 at
Open MPI 4.0.2 here:
/home/guido/libraries/compiled_with_gcc-7.3.0/openmpi-4.0.2/
Have you tried this instead?
LD_LIBRARY_PATH=$HOME/libraries/compiled_with_gcc-7.3.0/openmpi-4.0.2/lib:$LD_LIBRARY_PATH
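After setting it, a quick sanity check (assuming bash) is to confirm which install
actually gets picked up:
which mpirun           # should come from the matching Open MPI 4.0.2 tree
ldd $(which mpirun)    # the Open MPI libraries should resolve under .../openmpi-4.0.2/lib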
I hope this helps,
Gus Correa
On Tue, Dec 10, 2019 at 4:40 PM Guido granda muñoz via users
inloc and maxloc are great,
knows nothing about the MPI Forum protocols and activities,
but hopes the Forum pays attention to users' needs.
Gus Correa
PS - Jeff S.: Please, bring Diego's request to the Forum! Add my vote
too. :)
On 08/10/2018 02:19 PM, Jeff Squyres (jsquyres)
On 08/10/2018 01:27 PM, Jeff Squyres (jsquyres) via users wrote:
It is unlikely that MPI_MINLOC and MPI_MAXLOC will go away any time soon.
As far as I know, Nathan hasn't advanced a proposal to kill them in MPI-4,
meaning that they'll likely continue to be in MPI for at least another 10
years.
if it strips off useful functionality.
My cheap 2 cents from a user.
Gus Correa
On 08/10/2018 01:52 PM, Jeff Hammond wrote:
This thread is a perfect illustration of why MPI Forum participants
should not flippantly discuss feature deprecation in discussion with
users. Users who are not familiar wit
you tried to prepend (as opposed to append) OpenMPI
to your PATH? Say:
export PATH='/home/user/openmpi_install/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'
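A less brittle variant (just a sketch, assuming bash) prepends to whatever PATH you
already have instead of hard-coding the whole list:
export PATH=/home/user/openmpi_install/bin:$PATH
hash -r              # make the shell forget cached command locations
which mpicc mpirun   # both should now resolve to /home/user/openmpi_install/bin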
I hope this helps,
Gus Correa
On 05/14/2018 12:40 PM, Max Mel
ned by torque to a job, if any, or when Torque is configured
without cpuset support, to somehow still bind the MPI processes to
cores/processors/sockets/etc.
I hope this helps,
Gus Correa
On 10/06/2017 02:22 AM, Anthony Thyssen wrote:
Sorry r...@open-mpi.org as Gi
my_make.log
make install 2>&1 | tee my_make_install.log
** If using csh/tcsh:
./configure CC=gcc CXX=g++ F77=gfortran FC=gfortran
--prefix=/usr/local/openmpi |& tee my_configure.log
make |& tee my_make.log
make install |& tee my_make_install.log
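For completeness, the equivalent bash/sh sequence (same flags and prefix as above;
adjust the prefix to your site):
./configure CC=gcc CXX=g++ F77=gfortran FC=gfortran \
  --prefix=/usr/local/openmpi 2>&1 | tee my_configure.log
make 2>&1 | tee my_make.log
make install 2>&1 | tee my_make_install.log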
I hope
pirun in a short Torque script:
#PBS -l nodes=4:ppn=1
...
mpirun hostname
The output should show all four nodes.
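Put together, a minimal complete script could look like this (job name and walltime
are placeholders):
#!/bin/bash
#PBS -N hostname_test
#PBS -l nodes=4:ppn=1
#PBS -l walltime=00:05:00
cd $PBS_O_WORKDIR
mpirun hostname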
Good luck!
Gus Correa
On 07/31/2017 02:41 PM, Mahmood Naderan wrote:
Well it is confusing!! As you can see, I added four nodes to the host
file (the same nodes are used by PBS). The -
f $PBS_NODEFILE.
However, that doesn't seem to be the case here, as the mpirun command
line in the various emails has a single executable "a.out".
I hope this helps.
Gus Correa
On 07/31/2017 12:43 PM, Elken, Tom wrote:
“4 threads” In MPI, we refer to this as 4 ranks or 4 proces
On 07/17/2017 01:06 PM, Gus Correa wrote:
Hi Boris
The nodes may have standard Gigabit Ethernet interfaces,
besides the Infiniband (RoCE).
You may want to direct OpenMPI to use the Infiniband interfaces,
not Gigabit Ethernet,
by adding something like this to "--mca btl self,vader,self"
aq/?category=all#tcp-selection
BTW, some of your questions (and others that you may hit later)
are covered in the OpenMPI FAQ:
https://www.open-mpi.org/faq/?category=all
I hope this helps,
Gus Correa
On 07/17/2017 12:43 PM, Boris M. Vulovic wrote:
Gus, Gilles, Russell, John:
Thanks very much f
Have you tried:
-mca btl vader,openib,self
or
-mca btl sm,openib,self
by chance?
That adds a btl for intra-node communication (vader or sm).
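For example (a sketch; pick vader or sm depending on your Open MPI version, and adjust
-np and the executable name):
mpirun -np 16 -mca btl vader,openib,self ./a.out
# or, on releases that still use the sm btl:
mpirun -np 16 -mca btl sm,openib,self ./a.out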
On 07/13/2017 05:43 PM, Boris M. Vulovic wrote:
I would like to know how to invoke InfiniBand hardware on CentOS 6x
cluster with OpenMPI (static li
: command not found”
I am following the instruction from here:
https://na-inet.jp/na/pccluster/centos_x86_64-en.html
Any help is much appreciated.
Corina
You need to install openmpi.x86_64 also, not only openmpi-devel.x86_64.
That is the minimum.
I hope this helps,
Gus Correa
e more user friendly.
You could also compile it with the flag -traceback
(or -fbacktrace, the syntax depends on the compiler, check the compiler
man page).
This at least will tell you the location in the program where the
segmentation fault happened (in the STDERR file of your job).
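For example (the flags are illustrative: -fbacktrace is the gfortran spelling,
-traceback the Intel one; myprog.f90 is a placeholder):
mpif90 -g -fbacktrace -O0 -o myprog myprog.f90    # GNU Fortran
mpif90 -g -traceback -O0 -o myprog myprog.f90     # Intel Fortran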
I hope this h
Maybe just --with-valgrind or --with-valgrind=/usr would work?
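Something along these lines might do (a sketch; the prefix is a made-up example, and
memchecker also wants a debug build):
./configure --prefix=$HOME/openmpi-1.10.3-dbg \
  --enable-debug --enable-memchecker --with-valgrind=/usr
make && make install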
On 07/14/2016 11:32 AM, David A. Schneider wrote:
I thought it would be a good idea to build a debugging version of
openmpi 1.10.3. Following the instructions in the FAQ:
https://www.open-mpi.org/faq/?category=debugging#memchecker_ho
r/cluster),
but in your case it can be adjusted to how often the program fails.
All atmosphere/ocean/climate/weather_forecast models work
this way (that's what we mostly run here).
I guess most CFD, computational Chemistry, etc, programs also do.
I hope this helps,
Gus Correa
On 06/16/2016 0
(#18 in tuning runtime MPI to OpenFabrics)
refers to the OFED kernel module parameters
log_num_mtt and log_mtts_per_seg, not to the openib btl MCA parameters.
They may default to a less-than-optimal value.
https://www.open-mpi.org/faq/?category=openfabrics#ib-low-reg-mem
Gus Correa (not Chuck
ed
3) See also this FAQ related to registered memory.
I set these parameters in /etc/modprobe.d/mlx4_core.conf,
but where they're set may depend on the Linux distro/release and the
OFED you're using.
https://www.open-mpi.org/faq/?category=openfabrics#ib-low-reg-mem
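Just as an illustration (the right values depend on your RAM and page size; the FAQ
entry above explains how to size them), the file could contain a single line such as:
# /etc/modprobe.d/mlx4_core.conf
options mlx4_core log_num_mtt=24 log_mtts_per_seg=1
# 2^24 * 2^1 * 4 kB pages = 128 GB registerable, i.e. roughly 2x the RAM of a 64 GB node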
I hope this helps,
info/beowulf
I hope this helps,
Gus Correa
carnation of an OpenMPI 1.6.5 question
similar to yours (where .btr stands for backtrace):
http://stackoverflow.com/questions/25275450/cause-all-processes-running-under-openmpi-to-dump-core
Could this be due to an (unlikely) mix of OpenMPI 1.10 with 1.6.5?
Gus Correa
On Mon, May 9, 2016 at 12:04
I do this on the pbs_mom daemon
init script (I am still before the systemd era, that lovely POS).
And set the hard/soft limits in /etc/security/limits.conf as well.
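For instance, the memlock lines in /etc/security/limits.conf could look like this
(illustrative; you can scope the '*' to a specific group instead):
*  soft  memlock  unlimited
*  hard  memlock  unlimited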
I hope this helps,
Gus Correa
On 05/07/2016 12:27 PM, Jeff Squyres (jsquyres) wrote:
I'm afraid I don't know what a .btr
iband you may also need to make the locked memory
unlimited:
ulimit -l unlimited
I hope this helps,
Gus Correa
On 05/05/2016 05:15 AM, Giacomo Rossi wrote:
gdb /opt/openmpi/1.10.2/intel/16.0.3/bin/mpif90
GNU gdb (GDB) 7.11
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GN
events the core file from being created when the program
crashes, but on the upside also prevents the disk from filling up with big core
files that are forgotten and hang around forever.
[ulimit -a will tell.]
I hope this helps,
Gus Correa
On 04/23/2016 07:06 PM, Gilles Gouaillardet wrote:
If you build y
only the Intel runtime libraries on the nodes'
/opt, which *probably* will work:
https://software.intel.com/en-us/articles/intelr-composer-redistributable-libraries-by-version
**
I hope this helps,
Gus Correa
On 03/24/2016 12:01 AM, Gilles Gouaillardet wrote:
Elio,
usually, /opt is a local
Hi Rob
Your email says you'll keep PVFS2.
However, on your blog PVFS2 is not mentioned (on the "Keep" list).
I suppose it will be kept, right?
Thank you,
Gus Correa
On 01/05/2016 12:31 PM, Rob Latham wrote:
I'm itching to discard some of the little-used file system drivers
--hostfile option, the actual node file would be $PBS_NODEFILE.
[You don't need to do it if Open MPI was built with Torque support.]
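Without Torque support built in, that would be something like (the executable name
is a placeholder):
mpirun --hostfile $PBS_NODEFILE -np $(wc -l < $PBS_NODEFILE) ./a.out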
I hope this helps.
Gus Correa
Thank you.
--
Abhisek Mondal
Research Fellow
Structural Biology and Bioinformatics
Indian Institute of Chemical Biology
In this case, I guess the mpirun options would be:
mpirun --machinefile machine_mpi_bug.txt --mca btl self,vader,tcp
I am not even sure if with "vader" the "self" btl is needed,
as was the case with "sm".
An OMPI developer could jump into this conversation and
a common cause of trouble.
OpenMPI needs PATH and LD_LIBRARY_PATH at runtime also.
I hope this helps,
Gus Correa
On Fri, Feb 27, 2015 at 10:44 PM, Syed Ahsan Ali wrote:
Dear Gus
Thanks once again for the suggestion. Yes, I did that before installation
to the new path. I am now getting an error about some
Hi Syed Ahsan Ali
To avoid any leftovers and further confusion,
I suggest that you completely delete the old installation directory.
Then start fresh from the configure step with the prefix pointing to
--prefix=/share/apps/openmpi-1.8.4_gcc-4.9.2
I hope this helps,
Gus Correa
On 02/27/2015 12
s obscure
about this, not making clear the difference between
/export/apps and /share/apps.
Issuing the Rocks commands:
"tentakel 'ls -d /export/apps'"
"tentakel 'ls -d /share/apps'"
may show something useful.
I hope this helps,
Gus Correa
On 02/27/2015 11:47
Hi George
Many thanks for your answer and interest in my questions.
... so ... more questions inline ...
On 01/16/2015 03:41 PM, George Bosilca wrote:
Gus,
Please see my answers inline.
On Jan 16, 2015, at 14:24 , Gus Correa wrote:
Hi George
It is still not clear to me how to deal with
Is there any simple example of how to achieve a stride effect with
MPI_Type_create_subarray in a multi-dimensional array?
BTW, when are you gentlemen going to write an updated version of the
"MPI - The Complete Reference"? :)
Thank you,
Gus Correa
(Hijacking Diego Avesani's thread, a
al/MPI/content6.html
Gus Correa
On 01/15/2015 06:53 PM, Diego Avesani wrote:
dear George, dear Gus, dear all,
Could you please tell me where I can find a good example?
I am sorry, but I cannot understand the 3D array.
Really Thanks
Diego
On 15 January 2015 at 20:13, George Bosilca mailto:bosi
(as you did in your previous code, with all the surprises regarding
alignment, etc), not array sections.
Also, MPI type vector should be easier to deal with (and probably more
efficient) than MPI type struct, with fewer memory alignment problems.
I hope this helps,
Gus Correa
PS - These books have a
Hi Diego
*EITHER*
declare your QQ and PR (?) structure components as DOUBLE PRECISION
*OR*
keep them REAL(dp) but *fix* your "dp" definition, as George Bosilca
suggested.
Gus Correa
On 01/08/2015 06:36 PM, Diego Avesani wrote:
Dear Gus, Dear All,
so are you suggesting to
uggested a while back.
I hope this helps,
Gus Correa
Thanks again
Diego
On 8 January 2015 at 23:24, George Bosilca <bosi...@icl.utk.edu> wrote:
Diego,
Please find below the corrected example. There were several issues
but the most important one, which is certainly
Hi Michael, Andrew, list
knem doesn't work in OMPI 1.8.3.
See this thread:
http://www.open-mpi.org/community/lists/users/2014/10/25511.php
A fix was promised for OMPI 1.8.4:
http://www.open-mpi.org/software/ompi/v1.8/
Have you tried it?
I hope this helps,
Gus Correa
On 01/08/2015 04:
t I sent you before for more details.
I hope this helps,
Gus Correa
On 01/06/2015 01:37 PM, Deva wrote:
Hi Waleed,
--
Memlock limit: 65536
--
such a low limit should be due to a per-user locked memory limit. Can you
make sure it is set to "unlimited" on all nodes ("
egory=openfabrics#ib-locked-pages-more
http://www.open-mpi.org/faq/?category=openfabrics#ib-low-reg-mem
***
Having said that, a question remains unanswered:
Why is Infiniband such a nightmare?
***
I hope this helps,
Gus Correa
On 12/30/2014 09:16 AM, Waleed Lotfy wrote:
Thanks, Devendar, for your response.
number of open files is yet another hurdle.
And if you're using Infinband, the max locked memory size should be
unlimited.
Check /etc/security/limits.conf and "ulimit -a".
I hope this helps,
Gus Correa
On 12/10/2014 08:28 AM, Gilles Gouaillardet wrote:
Luca,
your email mention
:) )
My vote (... well, I don't have voting rights on that, but I'll vote
anyway ...) is to keep the current approach.
It is wise and flexible, and easy to adjust and configure to specific
machines with their own oddities, via MCA parameters, as I tried to
explain in previous postings.
Hi Reuti
See below, please.
On 11/13/2014 07:19 AM, Reuti wrote:
Gus,
On 13.11.2014 at 02:59, Gus Correa wrote:
On 11/12/2014 05:45 PM, Reuti wrote:
On 12.11.2014 at 17:27, Reuti wrote:
On 11.11.2014 at 02:25, Ralph Castain wrote:
Another thing you can do is (a) ensure you built
east I think they are sensible. :)
Cheers,
Gus Correa
It tries so independent from the internal or external name of the headnode
given in the machinefile - I hit ^C then.
I attached the output of Open MPI 1.8.1 for this setup too.
-- Reuti
questions below
(especially the 12 vader parameters).
Many thanks,
Gus Correa
On Oct 30, 2014, at 4:24 PM, Gus Correa wrote:
Hi Nathan
Thank you very much for addressing this problem.
I read your notes on Jeff's blog about vader,
and that clarified many things that were obscure to me
w
with the
btl_vader_single_copy_mechanism parameter?
Or must OMPI be configured with only one memory copy mechanism?
Many thanks,
Gus Correa
On 10/30/2014 05:44 PM, Nathan Hjelm wrote:
I want to close the loop on this issue. 1.8.5 will address it in several
ways:
- knem support in btl/sm has been fixed. A san
apparently
no solution):
http://www.open-mpi.org/community/lists/users/2013/02/21430.php
Maybe Mellanox has more information about this?
Gus Correa
On 10/21/2014 08:15 PM, Bill Broadley wrote:
On 10/21/2014 04:18 PM, Gus Correa wrote:
Hi Bill
Maybe you're missing these settings in
Hi Bill
Maybe you're missing these settings in /etc/modprobe.d/mlx4_core.conf ?
http://www.open-mpi.org/faq/?category=openfabrics#ib-low-reg-mem
I hope this helps,
Gus Correa
On 10/21/2014 06:36 PM, Bill Broadley wrote:
I've set up several clusters over the years with OpenMPI. I
along automatically)
* -mca btl openib,self (and vader will come along automatically)
* -mca btl openib,self,vader (because vader is default only for 1-node jobs)
* something else (or several alternatives)
Whatever happened to the "self" btl in this new context?
Gone? Still there?
Many thanks
1.8
process placement conceptual model, along with its syntax
and examples.
Thank you,
Gus Correa
On 10/17/2014 12:10 AM, Ralph Castain wrote:
I know this commit could be a little hard to parse, but I have updated
the mpirun man page on the trunk and will port the change over to the
1.8 series
ix=${MYINSTALLDIR} \
--with-tm=/opt/torque/4.2.5/gnu-4.4.7 \
--with-verbs=/usr \
--with-knem=/opt/knem-1.1.1 \
2>&1 | tee configure_${build_id}.log
Many thanks,
Gus
On Oct 16, 2014, at 4:24 PM, Gus Correa wrote:
On 10/16/2014 05:38 PM, Nathan Hjelm wrote:
On Thu, Oct 16, 2014 at 05:2
On Oct 16, 2014, at 4:06 PM, Gus Correa wrote:
Hi All
Back to the original issue of knem in Open MPI 1.8.3.
It really seems to be broken.
I launched the Intel MPI benchmarks (IMB) job both with
'-mca btl ^vader,tcp', and with '-mca btl sm,self,openib'.
Both syntaxes seem
On 10/16/2014 05:38 PM, Nathan Hjelm wrote:
On Thu, Oct 16, 2014 at 05:27:54PM -0400, Gus Correa wrote:
Thank you, Aurelien!
Aha, "vader btl", that is new to me!
I thought Vader was that man dressed in black in Star Wars,
Obi-Wan Kenobi's nemesis.
That was a while ago, my kid
ers like these don't give me any incentive to upgrade
our production codes to OMPI 1.8.
Will this be fixed in the next Open MPI 1.8 release?
Thank you,
Gus Correa
PS - Many thanks to Aurélien Bouteiller for pointing out the existence
of the vader btl. Without his tip I would still be in the d
ed to keep their MPI applications running in production mode,
hopefully with Open MPI 1.8,
can somebody explain more clearly what "vader" is about?
Thank you,
Gus Correa
On Thu, Oct 16, 2014 at 01:49:09PM -0700, Ralph Castain wrote:
FWIW: vader is the default in 1.8
On Oct 16, 2014,
openib, etc)?
How does it affect knem?
What are vader's pros/cons w.r.t. using the other btls?
In which conditions is it good or bad to use it vs. the other btls?
What do I gain/lose if I do "btl = sm,self,openib"
(which presumably will knock off tcp and "vader"),
or maybe "btl = ^tcp,^vader"?
I am in CentOS 6.5, stock kernel 2.6.32, no 3.1, no CMA Linux,
so I believe I need knem for now.
I tried '-mca btl_base_verbose 30' but no knem information came out.
Many thanks,
Gus Correa
On 10/16/2014 04:40 PM, Aurélien Bouteiller wrote:
Are you sure you are
t there was no trace of knem
in stderr/stdout of either 1.6.5 or 1.8.3.
So, the evidence I have that knem is
active in 1.6.5 but not in 1.8.3 comes only from the statistics in
/dev/knem.
***
Thank you,
Gus Correa
***
PS - As an aside, I also have some questions on the knem setup,
which I mos
and mpiexec options:
-bind-to-core, rmaps_base_schedule_policy, orte_process_binding, etc.
Thank you,
Gus Correa
On 10/15/2014 11:10 PM, Ralph Castain wrote:
On Oct 15, 2014, at 11:46 AM, Gus Correa <g...@ldeo.columbia.edu> wrote:
Thank you Ralph and Jeff for the help!
Glad to hear t
codes + short job queue time policy
is very common out there.
Here most problems with long runs
(we have some non-restartable serial code die-hards),
happen due to NFS issues (busy, slow response, etc),
and code with poorly designed IO.
My two cents,
Gus Correa
On 10/16/2014 10:16 AM, McGrattan, Kevin
any thanks,
Gus Correa
On 10/15/2014 11:12 AM, Jeff Squyres (jsquyres) wrote:
We talked off-list -- fixed this on master and just filed
https://github.com/open-mpi/ompi-release/pull/33 to get this into the v1.8
branch.
On Oct 14, 2014, at 7:39 PM, Ralph Castain wrote:
On Oct 14, 2014,
old (1.6)
OMPI runtime parameters, and/or any additional documentation
about the new style of OMPI 1.8 runtime parameters?
Since there seems to have been a major revamping of the OMPI
runtime parameters, that would be a great help.
Thank you,
Gus Correa
H and $OMPI/lib to LD_LIBRARY_PATH
and are these environment variables propagated to the job execution
nodes (specially those that are failing)?
Anyway, just a bunch of guesses ...
Gus Correa
*
QCSCRATCH Defines the directory in which
Q-Chem
will
There is no guarantee that the messages will be received in the same
order that they were sent.
Use tags or another mechanism to match the messages on send and recv ends.
On 09/18/2014 10:42 AM, XingFENG wrote:
I have found something strange.
Basically, in my codes, processes send and receive
libraries (blas, lapack, fft) and to build
them. At least that is what seems to have happened on my computer.
So, I don't think you need any other libraries.
Good luck,
Gus Correa
On 09/04/2014 04:17 PM, Elio Physics wrote:
Dear Gus,
Firstly I really need to thank you for the effort you are
arts of QE that it needs.
And this is *exactly what the error message in your first email showed*,
a bunch of object files that were not found.
***
Sorry, but I cannot do any better than this.
I hope this helps,
Gus Correa
On 09/03/2014 08:59 PM, Elio Physics wrote:
Ray and Gus,
Thanks a lot for
nd
top EPW directory (which per the recipe is right below the top QE)
plays a role.
Anyway, phonons are not my playground,
just trying to help two-cent-wise,
although this is not really an MPI or OpenMPI issue,
more of a Makefile/configure issue specific to QE and EPW.
Thanks,
Gus Correa
On 09/03/201
the recipe on the EPW web site?
http://epw.org.uk/Main/DownloadAndInstall
**
I hope this helps,
Gus Correa
On 09/03/2014 06:48 PM, Elio Physics wrote:
I have already done all of the steps you mentioned. I have installed the
older version of quantum espresso, configured it and followed all the
?
Do they have a mailing list or bulletin board where you could get
specific help for their software?
(Either on EPW or on Quantum ESPRESSO (which seems to be required):
http://www.quantum-espresso.org/)
That would probably be the right forum to ask your questions.
My two cents,
Gus Correa
On
Was the error that you listed the *first* error?
Apparently various object files are missing from the
../../Modules/ directory, and were not compiled,
suggesting something is amiss even before the
compilation of the executable (epw.x).
On 09/03/2014 05:20 PM, Elio Physics wrote:
Dear all,
I am
Hi Peter
If I remember right from my compilation of OMPI on a Mac
years ago, you need to have Xcode installed, in case you don't.
If vampir-trace is the only problem,
you can disable it when you configure OMPI (--disable-vt).
My two cents,
Gus Correa
On 08/21/2014 03:35 PM, Bosler,
On 08/07/2014 11:49 AM, Ralph Castain wrote:
On Aug 7, 2014, at 8:47 AM, Reuti <re...@staff.uni-marburg.de> wrote:
On 07.08.2014 at 17:28, Gus Correa wrote:
I guess Control-C will kill only the mpirun process.
You may need to kill the (two) jules.exe processes separately,
say
On 08/07/2014 11:28 AM, Gus Correa wrote:
I guess Control-C will kill only the mpirun process.
You may need to kill the (two) jules.exe processes separately,
say, with kill -9.
ps -u "yourname"
will show what you have running.
Something may have been left behind by Control-C,
a
I guess Control-C will kill only the mpirun process.
You may need to kill the (two) jules.exe processes separately,
say, with kill -9.
ps -u "yourname"
will show what you have running.
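For example (jules.exe is the executable from this thread; double-check what you are
killing before using -9):
ps -u $USER | grep jules.exe    # list any leftover ranks
pkill -9 -u $USER jules.exe     # kill them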
On 08/07/2014 11:16 AM, Jane Lewis wrote:
Hi all,
This is a really simple problem (I hope) where I’ve introdu
eed/want, is a pain.
Anyway, this is the OMPI list, not a place for advocacy of either
package, so I am going to stop here.
I just wanted to set the record straight that:
- the Environment Modules package is not dead,
- it has a large user base, and
- it is sooo good that among other things it opened
e the same exact thing that they currently have,
and in the end gain little if any
relevant/useful/new functionality.
My two cents of opinion
Gus Correa
On 08/05/2014 12:54 PM, Ralph Castain wrote:
Check the repo - hasn't been touched in a very long time
On Aug 5, 2014, at 9:42 AM, Fabric
stall from
each of these directories, using the appropriate compilers,
and pointing to two distinct *installation directories*
(with configure --prefix).
My two cents,
Gus Correa
On 08/04/2014 11:54 PM, Andrew Caird wrote:
Hi Ahsan,
We, and I think many people, use the Environment Modules sof
ix?
(CC, CXX, FC)
Then "make distclean; configure; make; make install".
Gus Correa
On 08/04/2014 04:10 PM, Dan Shell wrote:
Ralph
Ok
I will give that a try
Thanks
Dan Shell
-Original Message-
From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Ralph Castain
Sent: Mo
not be mixed.
The OMPI implementations should be the same on all machines as well.
Running "which mpirun" on those machines may help.
These user environment problems often cause confusion.
My two cents,
Gus Correa
On 07/30/2014 09:56 AM, Ralph Castain wrote:
Does "polaris" ha
No underlying compiler was specified in the wrapper compiler data file
(e.g., mpicc-wrapper-data.txt)
The error message is complaining about mpicc, not mpifort.
I wonder if this may be due to a Makefile misconfiguration again.
My two cents,
Gus Correa
On 07/25/2014 03:02 PM, Jeff Squyres (jsquyres) wrote:
On Jul 25, 2014, at 1:14 PM, Gus Correa wrote:
Change the mkmf.template file and replace the Fortran
compiler name (gfortran) with the Open MPI (OMPI) Fortran compiler wrapper:
mpifort (or mpif90 if it still exists
in OMPI 1.8.1
(e.g.
to MPICH libraries and include files).
Then rebuild the Makefile and compile MOM again.
I hope this helps.
Gus Correa
On 07/25/2014 12:37 PM, Dan Shell wrote:
OpenMOM-mpi
I am trying to compile MOM and have installed openmpi 1.8.1 getting an
installation error below
Looking for some help
lem.
Could your libcrypto be in an unusual location?
Maybe you need to load a Torque environment module to add it to your
LD_LIBRARY_PATH before you build OMPI?
Gus Correa
On 07/24/2014 05:18 PM, Jeff Hammond wrote:
That could be the case. I've reported the missing libcrypto issue to
NERSC
ewise,
env | grep PATH
and
env | grep LD_LIBRARY_PATH
may hint if you have a mixed environment and mixed MPI implementations
and versions.
I hope this helps,
Gus Correa
PS - BTW, unless your company's policies forbid,
you can install OpenMPI on a user directory, say, your /home directory.
he only cause of the problem.
If you want to use openib, switch to
--mca btl openib,sm,self
Another thing to check is whether there is a mixup of enviroment
variables, PATH and LD_LIBRARY_PATH perhaps pointing to the old OMPI
version you may have installed.
My two cents,
Gus Correa
On 06/
iexec, etc), to inherit those limits.
Or not?
Gus Correa
On 06/11/2014 06:20 PM, Jeff Squyres (jsquyres) wrote:
+1
On Jun 11, 2014, at 6:01 PM, Ralph Castain
wrote:
Yeah, I think we've seen that somewhere before too...
On Jun 11, 2014, at 2:59 PM, Joshua Ladd wrote:
Agreed. The
r" in Torque parlance).
This mostly matters if there is more than one job running on a node.
However, Torque doesn't bind processes/MPI_ranks to cores or sockets or
whatever. As Ralph said, Open MPI does that.
I believe Open MPI doesn't use the cpuset info from Torque.
(Ralph, pl
ferred transport layer for intra-node
communication.
Gus Correa
On 06/04/2014 11:13 AM, Ralph Castain wrote:
Thanks!! Really appreciate your help - I'll try to figure out what went
wrong and get back to you
On Jun 4, 2014, at 8:07 AM, Fischer, Greg A. <fisch...@westinghouse.com> wrote
nknown Unknown
CCTM_V5g_Linux2_x 007FD3A0 Unknown Unknown Unknown
CCTM_V5g_Linux2_x 007BA9A2 Unknown Unknown Unknown
CCTM_V5g_Linux2_x 00759288 Unknown Unknown Unknown
...
On Wed, May 21, 2014 at 2:08 PM, Gus Correa mailto:g...@ldeo.c
enmpi/1.6.5 should have been marked
to conflict with 1.4.4.
Is it?
Anyway, you may want to do a 'which mpiexec' to see which one is
taking precedence in your environment (1.6.5 or 1.4.4)
Probably 1.6.5.
Does the code work now, or does it continue to fail?
I hope this helps,
Gus Correa
he code.
(Probably just
module swap openmpi/1.4.4-intel openmpi/1.6.5-intel)
You may need to tweak the Makefile, if it hardwires
the MPI wrappers/binary location, or the library and include paths.
Some do, some don't.
Gus Correa
[bl10@login2 ~]$ echo $PATH
/home/bl10/rlib/deps/bin:/