Dave,
if you want to use the Open MPI you downloaded and compiled, you need to
set your PATH environment variable accordingly.
If you'd rather keep it simple and use the Ubuntu-provided Open MPI,
you can do as advised by Ubuntu:
sudo apt-get install libopenmpi-dev
and then try again.
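For example, if you configured your own build with --prefix=$HOME/openmpi-install
(a placeholder path - use whatever prefix you actually gave configure),
something like this puts that build first in your environment:

export PATH=$HOME/openmpi-install/bin:$PATH
export LD_LIBRARY_PATH=$HOME/openmpi-install/lib:$LD_LIBRARY_PATH
which mpicc    # should now point into $HOME/openmpi-install/bin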
Cheers,
Gi
Hello - I have an Ubuntu 12.04 distro, running on a 32-bit platform. I
installed http://www.open-mpi.org/software/ompi/v1.10/downloads/openm .
I have hello_c.c in the examples subdirectory. I installed the C compiler.
When I run mpicc hello_c.c, the screen dump shows:
dave@ubuntu-desk:~/Desktop/openmpi-1.
It appears that the PDF that was originally posted was corrupted. Doh!
The file has been fixed -- you should be able to download and open it correctly
now:
http://www.open-mpi.org/papers/sc-2015/
Sorry about that, folks!
> On Nov 19, 2015, at 9:03 AM, Jeff Squyres (jsquyres) wrote:
On Thu, Nov 19, 2015 at 4:11 PM, Howard Pritchard wrote:
> Hi Jeff H.
>
> Why don't you just try configuring with
>
> ./configure --prefix=my_favorite_install_dir
> --with-libfabric=install_dir_for_libfabric
> make -j 8 install
>
> and see what happens?
>
>
That was the first thing I tried. Howe
Hi Jeff,
I finally got an allocation on Cori - it's one busy machine.
Anyway, using the ompi I'd built on Edison with the above recommended
configure options,
I was able to run using either srun or mpirun on Cori, provided that in the
latter case I used
mpirun -np X -N Y --mca plm slurm ./my_favorit
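With hypothetical values filled in (8 ranks, 4 per node, and a binary named
a.out), that invocation would look like:

mpirun -np 8 -N 4 --mca plm slurm ./a.out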
Hi Jeff H.
Why don't you just try configuring with
./configure --prefix=my_favorite_install_dir
--with-libfabric=install_dir_for_libfabric
make -j 8 install
and see what happens?
Make sure before you configure that you have the PrgEnv-gnu or PrgEnv-intel
module loaded.
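Putting that together, a minimal sketch (the install paths are just
placeholders):

module load PrgEnv-gnu    # or PrgEnv-intel
./configure --prefix=my_favorite_install_dir \
    --with-libfabric=install_dir_for_libfabric
make -j 8 install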
Those were the configure/com
>
>
> How did you configure for Cori? You need to be using the slurm plm
> component for that system. I know this sounds like gibberish.
>
>
../configure --with-libfabric=$HOME/OFI/install-ofi-gcc-gni-cori \
--enable-mca-static=mtl-ofi \
--enable-mca-no-build=btl-openib,
Michael,
in the meantime, you can use 'use mpi_f08' instead of 'use mpi';
this is really an f90 bindings issue, and the f08 bindings are safe.
Cheers,
Gilles
On 11/19/2015 10:21 PM, michael.rach...@dlr.de wrote:
Thank You, Nick and Gilles,
I hope the administrators of the cluster will be so kind and will
upd
I apologize, I have the wrong lines from strace for the initial file there (of
course). The file with fd = 11 which causes the problem is called
shared_mem_pool.[host], and ftruncate(11, 134217736) is called on it. (This is
just over 1024 times the ulimit of 131072, which makes sense, as the ulimit is
measured in 1024-byte blocks.)
> Could you please provide a little more info regarding the environment you
> are running under (which resource mgr or not, etc), how many nodes you had
> in the allocation, etc?
> There is no reason why something should behave that way. So it would help
> if we could understand the setup.
Could you please provide a little more info regarding the environment you
are running under (which resource mgr or not, etc), how many nodes you had
in the allocation, etc?
There is no reason why something should behave that way. So it would help
if we could understand the setup.
Ralph
On Thu, N
An "strace" showed something related to shared memory use was causing the
signal. Sticking
btl = ^sm
into the openmpi-mca-params.conf file fixed this issue.
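For anyone who hits the same thing, a sketch of both ways to set this (the
per-user file is $HOME/.openmpi/mca-params.conf; the system-wide
openmpi-mca-params.conf lives under the install's etc/ directory):

# persistent, for all of this user's runs: disable the shared memory BTL
echo 'btl = ^sm' >> $HOME/.openmpi/mca-params.conf
# or one-off, on the command line:
orterun --mca btl ^sm -np 3 hello_cxx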
saurabh
From: saur...@hotmail.com
To: us...@open-mpi.org
Subject: Openmpi 1.10.1 fails with SIGXFSZ on file limit <= 131072
Hi,
Sorry my previous email was garbled, sending it again.
> cd examples
> make hello_cxx
> ulimit -f 131073
> orterun -np 3 hello_cxx
Hello, world
(etc)
> ulimit -f 131072
> orterun -np 3 hello_cxx
--
orterun noticed that
Here's what I find:
> cd examples
> make hello_cxx
> ulimit -f 131073
> orterun -np 3 hello_cxx
Hello, world!
[Etc]
> ulimit -f 131072
> orterun -np 3 hello_cxx
Hi Jeff
How did you configure for Cori? You need to be using the slurm plm component
for that system. I know this sounds like gibberish.
There should be a with-slurm configure option to pick up this component.
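One quick way to check whether the slurm plm component actually got built,
assuming the ompi_info from your install is on your PATH:

ompi_info | grep plm    # a 'slurm' plm component should be listed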
Doesn't mpich have the option to use sysv memory? You may want to try that
Oh
Hi Jeff,
On Thu 19.11.2015 09:44:20 Jeff Hammond wrote:
> I have no idea what this is trying to tell me. Help?
>
> jhammond@nid00024:~/MPI/qoit/collectives> mpirun -n 2 ./driver.x 64
> [nid00024:00482] [[46168,0],0] ORTE_ERROR_LOG: Not found in file
> ../../../../../orte/mca/plm/alps/plm_alps_mo
I have no idea what this is trying to tell me. Help?
jhammond@nid00024:~/MPI/qoit/collectives> mpirun -n 2 ./driver.x 64
[nid00024:00482] [[46168,0],0] ORTE_ERROR_LOG: Not found in file
../../../../../orte/mca/plm/alps/plm_alps_module.c at line 418
I can run the same job with srun without incident.
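(For anyone landing on this error: the fix that worked later in this thread
was forcing the slurm launcher, e.g. mpirun --mca plm slurm -n 2 ./driver.x 64)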
On 19/11/2015 16:15, Lev Givon wrote:
Received from Jeff Squyres (jsquyres) on Thu, Nov 19, 2015 at 10:03:33AM EST:
Thanks to the over 100 people who came to the Open MPI State of the Union BOF
yesterday. George Bosilca from U. Tennessee, Nathan Hjelm from Los Alamos
National Lab, and I prese
Received from Jeff Squyres (jsquyres) on Thu, Nov 19, 2015 at 10:03:33AM EST:
> Thanks to the over 100 people who came to the Open MPI State of the Union BOF
> yesterday. George Bosilca from U. Tennessee, Nathan Hjelm from Los Alamos
> National Lab, and I presented where we are with Open MPI devel
Thanks to the over 100 people who came to the Open MPI State of the Union BOF
yesterday. George Bosilca from U. Tennessee, Nathan Hjelm from Los Alamos
National Lab, and I presented where we are with Open MPI development, and where
we're going.
If you weren't able to join us, feel free to read
Hi Ibrahim,
If you just try to compile with javac, do you at least see an "error:
package mpi... does not exist"?
Adding the "-verbose" option may also help with diagnosing the problem.
If javac doesn't get that far, then your problem is with the Java
install.
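A sketch of that test (the install path is a placeholder; mpi.jar normally
sits in the lib/ directory of the Open MPI install):

# plain javac without Open MPI's mpi.jar on the classpath --
# expect "package mpi does not exist" if the Java install itself is fine
javac -verbose Hello.java
# with the bindings on the classpath:
javac -verbose -cp $OMPI_INSTALL_DIR/lib/mpi.jar Hello.java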
Howard
2015-11-19 6:45 GMT-
Hello,
thank you for answering.
The command mpijavac --verbose Hello.java gives me the same result as yours.
JAVA_HOME is set correctly for me, but I have neither JAVA_BINDIR nor JAVA_ROOT.
I think those two variables are not causing the problem, because I was able to
compile Hello.java before t
Thank You, Nick and Gilles,
I hope the administrators of the cluster will be so kind and will update
OpenMPI for me (and others) soon.
Greetings
Michael
From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Gilles
Gouaillardet
Sent: Thursday, 19 November 2015 12:59
To: Open MP
Thanks, Nick, for the pointer!
Michael,
the good news is you do not have to upgrade ifort,
but you do have to update to 1.10.1
(Intel 16 changed the way gcc pragmas are handled, and ompi was made aware
of this in 1.10.1).
1.10.1 fixes many bugs from 1.10.0, so I strongly encourage everyone to use
1.10.1.
Cheer
Thank you for the fix,
I was only able to try it today. I confirm it works with the patch and with
the mca option.
Cheers,
Federico Reghenzani
2015-11-18 6:15 GMT+01:00 Gilles Gouaillardet :
> Federico,
>
> I made PR #772: https://github.com/open-mpi/ompi-release/pull/772
>
> feel free to manually
Maybe I can chip in,
We use OpenMPI 1.10.1 with Intel 2016.1.0.423501 without problems.
I could not get 1.10.0 to work; one reason is:
http://www.open-mpi.org/community/lists/users/2015/09/27655.php
On a side-note, please note that if you require scalapack you may need to
follow this approach:
Sorry, Gilles,
I cannot update to more recent versions, because what I used is the newest
combination of OpenMPI and Intel-Ftn available on that cluster.
When looking at the list of improvements on the OpenMPI website for OpenMPI
1.10.1 compared to 1.10.0, I do not remember having seen this
Michael,
I remember I saw similar reports.
Could you give the latest v1.10.1 a try?
And if that still does not work, can you upgrade the icc suite and give it
another try?
I cannot remember whether this is an ifort bug or the way ompi uses Fortran...
Btw, any reason why you do not use mpi_f0
Dear developers of OpenMPI,
I am trying to run our parallelized Ftn-95 code on a Linux cluster with
OpenMPI-1.10.0 and the Intel-16.0.0 Fortran compiler.
In the code I use the MPI module ("use MPI" statements).
However, I am not able to compile the code, because of compiler error messages
like this:
/