to pull their system image, separate from yum/dnf/apt.
Gus
On Thu, Jul 20, 2023 at 4:00 AM Luis Cebamanos via users <
users@lists.open-mpi.org> wrote:
> Hi Gus,
>
> Yep, I can see the softlink is missing on the compute nodes.
>
> Thanks!
> Luis
>
If it is installed, libnuma should be in:
/usr/lib64/libnuma.so
as a softlink to the actual version-numbered library.
In general the loader is configured to search for shared libraries
in /usr/lib64 (running "ldd" on your executable may shed some light here).
You can check whether the numa packages are installed with:
yum list installed | grep -i numa
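As a rough sketch of those checks (assuming an RPM-based x86_64 system, where /usr/lib64 is the usual location):

```shell
# Does the dynamic loader know about libnuma? (falls back to a message if not)
ldconfig -p 2>/dev/null | grep libnuma || echo "libnuma not in the loader cache"
# Is the unversioned symlink present where the loader expects it?
ls -l /usr/lib64/libnuma.so 2>/dev/null || echo "/usr/lib64/libnuma.so missing"
```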
This may have changed since, but these used to be relevant points.
Overall, the Open MPI FAQ has lots of good suggestions:
https://www.open-mpi.org/faq/
some specific for performance tuning:
https://www.open-mpi.org/faq/?category=tuning
https://www.open-mpi.org/faq/?category=openfabrics
Hi Mark
Back in the day, I liked the mpirun/mpiexec --tag-output option.
Jeff: Does it still exist?
It may not completely prevent the splitting of output lines,
but tagging each line with the process rank helps.
You can grep the stdout log for the rank that you want,
which helps a lot when several ranks write at once.
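The grep step can be sketched on canned lines; the "[jobid,rank]<stream>:" prefix below imitates what --tag-output produces (the sample text itself is made up):

```shell
# Canned sample in the format "mpirun --tag-output" emits: [jobid,rank]<stream>:text
printf '%s\n' \
  '[1,0]<stdout>:hello from rank 0' \
  '[1,1]<stdout>:hello from rank 1' \
  '[1,1]<stderr>:a warning from rank 1' \
  | grep '^\[1,1\]'
```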
Hi Passant, list
This is an old problem with PGI.
There are many threads in the OpenMPI mailing list archives about this,
with workarounds.
The simplest is to use FC="pgf90 -noswitcherror".
Here are two out of many threads ... well, not pthreads! :)
>> Core(s) per socket: 8
> "4. If none of a hostfile, the --host command line parameter, or an RM is
> present, Open MPI defaults to the number of processor cores"
Have you tried -np 8?
On Sun, Nov 8, 2020 at 12:25 AM Paul Cizmas via users <
users@lists.open-mpi.org> wrote:
>
Hi Jorge
You may have an active firewall protecting either computer or both,
preventing mpirun from starting the connection.
Your /etc/hosts file may also be missing the computers' IP addresses.
You may also want to try the --hostfile option.
Likewise, the --verbose option may help diagnose the problem.
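A couple of quick checks along those lines (the hostname "node2" is just a placeholder):

```shell
# Can the other machine's name be resolved (via /etc/hosts or DNS)?
getent hosts node2 || echo "node2 does not resolve; check /etc/hosts"
# Is a firewall service running? (distribution-dependent; may need root)
systemctl is-active firewalld 2>/dev/null || echo "firewalld not active or not present"
```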
Can you use taskid after MPI_Finalize?
Isn't it undefined/deallocated at that point?
Just a question (... or two) ...
Gus Correa
> MPI_Finalize();
>
> printf("END OF CODE from task %d\n", taskid);
On Tue, Oct 13, 2020 at 10:34 AM Jeff Squyres (jsquyres) via users <
users@lists.open-mpi.org> wrote:
"The reports of MPI death are greatly exaggerated." [Mark Twain]
And so are the reports of Fortran death
(despite the efforts of many CS departments
to make their students Fortran- and C-illiterate).
IMHO the level of abstraction of MPI is adequate, and actually very well
designed.
+1
In my experience moving software, especially something of the complexity of
(Open) MPI,
is much more troublesome (and often just useless frustration) and time
consuming than recompiling it.
Hardware, OS, kernel, libraries, etc., are unlikely to be compatible.
Gus Correa
On Fri, Jul 24, 2020 at
Hi Guido
Your PATH and LD_LIBRARY_PATH seem to be inconsistent with each other:
PATH=$HOME/libraries/compiled_with_gcc-7.3.0/openmpi-4.0.2/bin:$PATH
LD_LIBRARY_PATH=/share/apps/gcc-7.3.0/lib64:$LD_LIBRARY_PATH
Hence, you may be mixing different versions of Open MPI.
It looks like you installed Open MPI under your home directory, but
LD_LIBRARY_PATH points to the GCC install tree instead.
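One way to avoid the mismatch is to derive both variables from the same prefix; the path below is just the one from your PATH, used as an illustration:

```shell
# Point both PATH and LD_LIBRARY_PATH at the same Open MPI tree.
OMPI=$HOME/libraries/compiled_with_gcc-7.3.0/openmpi-4.0.2
export PATH=$OMPI/bin:$PATH
export LD_LIBRARY_PATH=$OMPI/lib:$LD_LIBRARY_PATH
# Sanity check: which mpirun will actually run?
command -v mpirun || echo "mpirun not found under $OMPI/bin"
```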