Leo,
Thanks for the info. That is interesting. And yes, having a CUDA-aware MPI
API list would be very useful.
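For reference, Open MPI already exposes both a compile-time macro and a
run-time query for CUDA-aware support through its mpi-ext.h extension; a
minimal sketch (assuming a build where the extension is compiled in):

#include <stdio.h>
#include <mpi.h>
#if defined(OPEN_MPI) && OPEN_MPI
#include <mpi-ext.h>  /* Open MPI extensions, incl. the CUDA-aware query */
#endif

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
#if defined(MPIX_CUDA_AWARE_SUPPORT) && MPIX_CUDA_AWARE_SUPPORT
    printf("compile-time CUDA-aware support: yes\n");
#else
    printf("compile-time CUDA-aware support: no/unknown\n");
#endif
#if defined(MPIX_CUDA_AWARE_SUPPORT)
    /* run-time check: the library actually launched may differ
     * from the headers used at compile time */
    printf("run-time CUDA-aware support: %s\n",
           MPIX_Query_cuda_support() ? "yes" : "no");
#endif
    MPI_Finalize();
    return 0;
}

What this does not answer is exactly which MPI APIs accept GPU buffers, so
the list requested above would still be useful.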
--Junchao Zhang
On Mon, Aug 19, 2019 at 10:23 AM Fang, Leo
<leof...@bnl.gov> wrote:
Hi Junchao,
First, for your second question, the answer is here:
https://www.mail-archive.com/users@lists.open-mpi.org/msg33279.html
Note that it is not advisable to run autogen from a distribution tarball.
Specifically: we include the autogen script for advanced users who want to
tweak their own Open MPI copy (without cloning from git), but 99.99% of users
can just run "./configure ..." directly (without first running autogen).
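For example, a typical tarball build (prefix is just a placeholder) is:

./configure --prefix=$HOME/openmpi
make -j 8 all
make install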
Do not use Open MPI v2.0.x -- it's ancient.
You should probably use Open MPI v3.1.x or v4.0.x.
On Aug 19, 2019, at 2:12 PM, Riddhi A Mehta via users
<users@lists.open-mpi.org> wrote:
Hi
I followed the exact procedure stated in this link:
Hi
this is Steven. I am building custom clusters on AWS EC2 and had some
problems in the past. I am getting good results with external PMIx 3.1.3:
./autogen.sh && ./configure --prefix=/usr/local/ --with-platform=optimized
--with-hwloc=/usr/local --with-libevent=/usr/local --enable-pmix-binaries
Hi
I followed the exact procedure stated in this link:
http://www.science.smith.edu/dftwiki/index.php/Install_MPI_on_a_MacBook.
It runs correctly until this line: mpicc -o hello helloWorld.c
After that, it gives me an error when I do mpirun.
Thank you
Riddhi
From: users on behalf of
Is there any chance that the fact that Riddhi appears to be trying to execute
an uncompiled hello.c could be the problem here?
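If so, the fix would simply be to compile first and then mpirun the
resulting binary, e.g.:

mpicc -o hello hello.c
mpirun -np 2 ./hello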
From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Jeff Squyres
(jsquyres) via users
Sent: Monday, August 19, 2019 2:05 PM
To: Open MPI User's List
Cc:
Can you provide some more details?
https://www.open-mpi.org/community/help/
On Aug 19, 2019, at 1:18 PM, Riddhi A Mehta via users
<users@lists.open-mpi.org> wrote:
Hello
My name is Riddhi and I am a Graduate Research Assistant in the Dept. of
Physics & Astronomy at Purdue University. About a month ago I correctly
configured Open MPI on my Mac and the command ‘mpirun -np 2 ./hello.c’ ran correctly.
But today, it gave me the following error:
Hi Junchao,
First, for your second question, the answer is here:
https://www.mail-archive.com/users@lists.open-mpi.org/msg33279.html. I know
this because I also asked it earlier. It'd be nice to have this documented in
the FAQ, though.
As for your first question, I am also interested. It'd
On Aug 19, 2019, at 6:15 AM, Sangam B via users
wrote:
>
> subroutine recv(this,lmb)
> class(some__example6), intent(inout) :: this
> integer, intent(in) :: lmb(2,2)
>
> integer :: cs3, ierr
> integer(kind=C_LONG) :: size
This ^^ is your problem. More below. (Likely: with "use mpi", the MPI
count/size arguments must be default INTEGER, so an integer(kind=C_LONG)
actual argument does not match the generic MPI interfaces.)
> ! receive
Hi,
Even after recompiling Open MPI with -fdefault-real-8, it fails with the same
error.
It seems to me that it's an issue with Open MPI itself, because:
Intel MPI + GNU compiler --- works
Intel MPI + Intel compiler --- works
Open MPI + GNU compiler --- fails
Open MPI + AOCC compiler --- fails
Thanks, but this is not really helping.
Could you please build a Minimal, Reproducible Example as described at
https://stackoverflow.com/help/minimal-reproducible-example ?
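For an MPI issue, that usually means one self-contained source file plus the
exact compile and run commands. Purely as an illustration of the expected
shape (in C here; the actual reproducer should of course be Fortran, since
that is where the failure occurs):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}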
Cheers,
Gilles
On Mon, Aug 19, 2019 at 7:19 PM Sangam B via users
wrote:
>
> Hi,
>
> Here is the sample program snippet:
Hi,
Here is the sample program snippet:
#include "intrinsic_sizes.h"
#include "redef.h"
module module1_m
use mod1_m, only: some__example2
use mod2_m, only: some__example3
use mod3_m, only: some__example4
use mpi
use, intrinsic :: iso_c_binding
implicit none
private
I am not questioning whether you are facing an issue with Open MPI or not.
I am just asking for "the same application" (read: minimal source code)
so I can reproduce the issue, investigate it and hopefully help you.
Meanwhile, try rebuilding Open MPI with '-fdefault-real-8' in your
FCFLAGS (since
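A sketch of such a rebuild (prefix is a placeholder, assuming gfortran):

./configure --prefix=$HOME/openmpi FCFLAGS="-fdefault-real-8"
make -j 8 all install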
Hi,
I've tried both the gcc-8.1.0 and AOCC-2.0 compilers with openmpi-3.1.1. It
fails with both compilers.
The posted error message was from OpenMPI-3.1.1 + the AOCC-2.0 compiler.
To cross-check whether it is a problem with Open MPI or the base compiler, I
compiled the same application with Intel MPI using
One more thing ...
Your initial message mentioned a failure with gcc 8.2.0, but your
follow-up message mentions the LLVM compiler.
So which compiler did you use to build the Open MPI that fails to build your test?
Cheers,
Gilles
On Mon, Aug 19, 2019 at 6:49 PM Gilles Gouaillardet
wrote:
>
>
Thanks,
and your reproducer is?
Cheers,
Gilles
On Mon, Aug 19, 2019 at 6:42 PM Sangam B via users
wrote:
>
> Hi,
>
> OpenMPI is configured as follows:
>
> export CC=`which clang`
> export CXX=`which clang++`
> export FC=`which flang`
> export F90=`which flang`
>
> ../configure
Hi,
OpenMPI is configured as follows:
export CC=`which clang`
export CXX=`which clang++`
export FC=`which flang`
export F90=`which flang`
../configure --prefix=/sw/openmpi/3.1.1/aocc20hpcx210-mpifort
--enable-mpi-fortran --enable-mpi-cxx --without-psm --without-psm2
--without-knem
Hi,
Can you please post a full but minimal example that evidences the issue?
Also please post your Open MPI configure command line.
Cheers,
Gilles
Sent from my iPod
> On Aug 19, 2019, at 18:13, Sangam B via users
> wrote:
>
> Hi,
>
> I get following error if the application is compiled
Hi,
I get following error if the application is compiled with openmpi-3.1.1:
mpifort -O3 -march=native -funroll-loops -finline-aggressive -flto
-J./bin/obj_amd64aocc20 -std=f2008 -O3 -march=native -funroll-loops
-finline-aggressive -flto -fallow-fortran-gnu-ext -ffree-form
-fdefault-real-8
Hello
Indeed we would like to expose this kind of info, but Netloc is
unfortunately under-manpowered these days. The code in git master is
outdated. We have a big rework in a branch, but it still needs quite a
lot of polishing before being merged.
The API is still mostly Scotch-oriented (i.e. for