ompi_info | grep -i mvapi:
MCA mpool : mvapi (MCA v1.0, API v1.0, Component v1.0)
MCA btl : mvapi (MCA v1.0, API v1.0, Component v1.0)
hardware: dual Xeon Nocona, 2 GiB mem, Mellanox PCI-Express HCAs
tia,
Peter
--
-------
mpirun --mca btl_base_include self,mvapi -np 4 a.out
>
> This will tell OMPI that you want to use the "self" (i.e., loopback)
> and "mvapi" BTLs, and no others.
>
> Try this and see if you get better results.
Nope, no errors, no extra output, but same ethernet-tcp
configure: ./configure --prefix=xxx --with-btl-mvapi=yyy --disable-cxx
--disable-f90 --disable-io-romio
--
----
Peter Kjellström |
National Supercomputer Centre |
Sweden | http://www.nsc.liu.se
> 1024 floats took 0.160632 (0.143644) seconds. Min: 0.003200 max: 0.268681
> Writing logfile
> Finished writing logfile.
--
Peter Kjellström |
National Supercomputer Centre |
Sweden | http://www.nsc.liu.se
On Mon, 13 Jun 2016 19:04:59 -0400
Mehmet Belgin wrote:
> Greetings!
>
> We have not upgraded our OFED stack for a very long time, and are still
> running on an ancient version (1.5.4.1, yeah we know). We are now
> considering a big jump from this version to a tested and stable
> recent version an
On Tue, 14 Jun 2016 16:20:42 +
Grigory Shamov wrote:
> On 2016-06-14, 3:42 AM, "users on behalf of Peter Kjellström"
> wrote:
>
> >On Mon, 13 Jun 2016 19:04:59 -0400
> >Mehmet Belgin wrote:
> >
> >> Greetings!
> >>
> >&g
On Tue, 14 Jun 2016 13:18:33 -0400
"Llolsten Kaonga" wrote:
> Hello Grigory,
>
> I am not sure what Redhat does exactly but when you install the OS,
> there is always an InfiniBand Support module during the installation
> process. We never check/install that module when we do OS
> installations
On Wed, 15 Jun 2016 15:00:05 +0530
Sreenidhi Bharathkar Ramesh
wrote:
> hi Mehmet / Llolsten / Peter,
>
> Just curious to know what is the NIC or fabric you are using in your
> respective clusters.
>
> If it is Mellanox, is it not better to use the MLNX_OFED ?
We run both Mellanox ConnectX3 ba
On Wed, 13 Sep 2017 20:13:54 +0430
Mahmood Naderan wrote:
...
> `/usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../lib64/libc.a(strcmp.o)'
> can not be used when making an executable; recompile with -fPIE and relink with -pie
> collect2: ld returned 1 exit status
>
>
> With such an error, I tho
On Thu, 14 Sep 2017 14:28:08 +0430
Mahmood Naderan wrote:
> >In short, "mpicc -Wl,-rpath=/my/lib/path helloworld.c -o hello", will
> >compile a dynamic binary "hello" with built in search path
> >to "/my/lib/path".
>
> Excuse me... Is that a path or file? I get this:
It should be a path, i.e. d
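(For reference, the helloworld.c named in the quoted command can be any MPI program; a minimal sketch, assuming nothing about the original source, is below. The point of the quoted line is that /my/lib/path is a directory baked into the binary's runtime library search path, not a file.)

/* helloworld.c - minimal MPI program, only to illustrate the quoted
 * "mpicc -Wl,-rpath=/my/lib/path helloworld.c -o hello" compile line. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}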
On Thu, 14 Sep 2017 19:01:08 +0900
Gilles Gouaillardet wrote:
> Peter and all,
>
> an easier option is to configure Open MPI with
> --mpirun-prefix-by-default; this will automagically add rpath to the
> libs.
Yes that sorts out the OpenMPI libs but I was imagining a more general
situation (and t
On Friday 19 November 2010 01:03:35 HeeJin Kim wrote:
...
> * mlx4: There is a mismatch between the kernel and the userspace
> libraries: Kernel does not support XRC. Exiting.*
...
> What I'm thinking is that the infiniband card is installed but it doesn't
> work in the correct mode.
> My linux kerne
On Monday 06 December 2010 15:03:13 Mathieu Gontier wrote:
> Hi,
>
> A small update.
> My colleague made a mistake and there is no arithmetic performance
> issue. Sorry for bothering you.
>
> Nevertheless, one can observe some differences between MPICH and
> OpenMPI from 25% to 100% depending on
On Monday, January 10, 2011 03:06:06 pm Michael Di Domenico wrote:
> I'm not sure if these are being reported from OpenMPI or through
> OpenMPI from OpenFabrics, but i figured this would be a good place to
> start
>
> On one node we received the below errors, i'm not sure i understand the
> error seque
On Thursday, March 10, 2011 08:30:19 pm Thierry LAMOUREUX wrote:
> Hello,
>
> We have recently enhanced our network with InfiniBand modules on a six-node
> cluster.
>
> We have installed all OFED drivers related to our hardware.
>
> We have set the network IPs like the following:
> - eth : 192.168.1.0 / 255
On Monday, March 14, 2011 09:37:54 pm Bernardo F Costa wrote:
> Ok. Native ibverbs/openib is preferable although cannot be used by all
> applications (those who do not have a native ip interface).
Applications (in this context at least) use the MPI interface. MPI in general
and OpenMPI in pertic
On Monday, March 21, 2011 12:25:37 pm Dave Love wrote:
> I'm trying to test some new nodes with ConnectX adaptors, and failing to
> get (so far just) IMB to run on them.
...
> I'm using gcc-compiled OMPI 1.4.3 and the current RedHat 5 OFED with IMB
> 3.2.2, specifying `btl openib,sm,self' (or `mtl
On Wednesday, May 04, 2011 04:04:37 PM hi wrote:
> Greetings !!!
>
> I am observing following error messages when executing attached test
> program...
>
>
> C:\test>mpirun mar_f.exe
...
> [vbgyor:9920] *** An error occurred in MPI_Allreduce
> [vbgyor:9920] *** on communicator MPI_COMM_WORLD
> [v
On Wednesday, May 25, 2011 01:16:04 PM Andrew Senin wrote:
> Hello list,
>
> I have an application which uses MPI_Allgather with derived types. It works
> correctly with mpich2 and mvapich2. However it crashes periodically with
> openmpi2.
Which version of OpenMPI are you using? There is no such
On Wednesday, May 25, 2011 01:16:04 PM Andrew Senin wrote:
> Hello list,
>
> I have an application which uses MPI_Allgather with derived types. It works
> correctly with mpich2 and mvapich2. However it crashes periodically with
> openmpi2. After investigation I found that the crash takes place whe
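(The failing code is not shown in the thread; a minimal, generic sketch of the MPI_Allgather-with-derived-type pattern being discussed, using an illustrative contiguous type of two doubles rather than the poster's actual type, could look like this.)

/* allgather_ddt.c - generic sketch of MPI_Allgather with a committed
 * derived datatype; the element layout is illustrative only. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_Datatype pair;                      /* two doubles per element */
    MPI_Type_contiguous(2, MPI_DOUBLE, &pair);
    MPI_Type_commit(&pair);

    double sendbuf[2] = { rank, rank + 0.5 };
    double *recvbuf = malloc(2 * size * sizeof(double));

    /* every rank contributes one 'pair'; every rank receives all pairs */
    MPI_Allgather(sendbuf, 1, pair, recvbuf, 1, pair, MPI_COMM_WORLD);

    MPI_Type_free(&pair);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}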
On Tuesday, September 13, 2011 09:07:32 AM nn3003 wrote:
> Hello !
>
> I am running wrf model on 4x AMD 6172 which is 12 core CPU. I use OpenMPI
> 1.4.3 and libgomp 4.3.4. I have binaries compiled for shared-memory and
> distributed-memory (OpenMP and OpenMPI) I use following command
> mpirun -np
HPGMG-FV is easy to build and to run as serial, MPI, OpenMP and
MPI+OpenMP.
/Peter
On Mon, 9 Oct 2017 17:54:02 +
"Sasso, John (GE Digital, consultant)" wrote:
> I am looking for a decent hybrid MPI+OpenMP benchmark utility which I
> can easily build and run with OpenMPI 1.6.5 (at least) a
On Tue, 10 Oct 2017 11:57:51 -0400
Michael Di Domenico wrote:
> i'm getting stuck trying to run some fairly large IMB-MPI alltoall
> tests under openmpi 2.0.2 on rhel 7.4
What is the IB stack used, just RHEL inbox?
Do you run openmpi on the psm mtl for qlogic and openib btl for
mellanox or some
On Fri, 1 Dec 2017 21:32:35 +0100
Götz Waschk wrote:
...
> # Benchmarking Alltoall
> # #processes = 1024
> #
> #bytes #repetitions t_min[usec] t_max[usec] t_avg[usec]
> 0 1000 0.04 0.09
On Wed, 13 Dec 2017 20:34:52 +0330
Mahmood Naderan wrote:
> >Currently I am using two Tesla K40m cards for my computational work
> >on quantum espresso (QE) suit http://www.quantum-espresso.org/. My
> >GPU-enabled QE code is running much slower than the normal version
>
> Hi,
> When I hear such words
On Wed, 2 May 2018 11:15:09 +0200
Pierre Gubernatis wrote:
> Hello all...
>
> I am using a *cartesian grid* of processors which represents a spatial
> domain (a cubic geometrical domain split into several smaller
> cubes...), and I have communicators to address the procs, as for
> example a comm
On Wed, 02 May 2018 06:32:16 -0600
Nathan Hjelm wrote:
> Hit send before I finished. If each proc along the axis needs the
> partial sum (ie proc j gets sum for i = 0 -> j-1 SCAL[j]) then
> MPI_Scan will do that.
I must confess that I had forgotten about MPI_Scan when I replied to
the OP. In fa
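(A minimal sketch of the MPI_Scan approach discussed above, with MPI_COMM_WORLD standing in for the axis sub-communicator one would normally get from MPI_Cart_sub on the OP's cartesian grid. MPI_Scan gives rank j the inclusive sum over ranks 0..j, while MPI_Exscan gives the exclusive sum over 0..j-1 matching the quoted description.)

/* scan_axis.c - per-rank partial sums with MPI_Scan / MPI_Exscan.
 * MPI_COMM_WORLD stands in for the axis sub-communicator. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double my_val = rank + 1.0;   /* this rank's contribution */
    double incl = 0.0, excl = 0.0;

    /* inclusive prefix sum: rank j receives the sum over ranks 0..j */
    MPI_Scan(&my_val, &incl, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    /* exclusive prefix sum: rank j receives the sum over ranks 0..j-1
       (the result is undefined on rank 0) */
    MPI_Exscan(&my_val, &excl, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    printf("rank %d: inclusive %.1f exclusive %.1f\n", rank, incl, excl);
    MPI_Finalize();
    return 0;
}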
On Wed, 2 May 2018 08:39:30 -0400
Charles Antonelli wrote:
> This seems to be crying out for MPI_Reduce.
No, the described reduction cannot be implemented with MPI_Reduce (note
the need for partial sums along the axis).
> Also in the previous solution given, I think you should do the
> MPI_Sen
On Mon, 3 Dec 2018 19:41:25 +
"Hammond, Simon David via users" wrote:
> Hi Open MPI Users,
>
> Just wanted to report a bug we have seen with OpenMPI 3.1.3 and 4.0.0
> when using the Intel 2019 Update 1 compilers on our
> Skylake/OmniPath-1 cluster. The bug occurs when running the Github
> ma
On Tue, 4 Dec 2018 09:15:13 -0500
George Bosilca wrote:
> I'm trying to replicate using the same compiler (icc 2019) on my OSX
> over TCP and shared memory with no luck so far. So either the
> segfault is something specific to OmniPath or to the memcpy
> implementation used on Skylake.
Note th
On Sat, 22 Dec 2018 12:42:24 -0500
Bennet Fauber wrote:
> Maybe the distribution tar ball at
>
> https://download.open-mpi.org/release/open-mpi/v3.1/openmpi-3.1.3.tar.gz
>
> did not get refreshed after the fix in
>
> https://github.com/bosilca/ompi/commit/b902cd5eb765ada57f06c75048509d07169535
On Thu, 10 Jan 2019 11:20:12 +
ROTHE Eduardo - externe wrote:
> Hi Gilles, thank you so much for your support!
>
> For now I'm just testing the software, so it's running on a single
> node.
>
> Your suggestion was very precise. In fact, choosing the ob1 component
> leads to a successful ex
On Thu, 10 Jan 2019 21:51:03 +0900
Gilles Gouaillardet wrote:
> Eduardo,
>
> You have two options to use OmniPath
>
> - “directly” via the psm2 mtl
> mpirun --mca pml cm --mca mtl psm2 ...
>
> - “indirectly” via libfabric
> mpirun --mca pml cm --mca mtl ofi ...
>
> I do invite you to try both. By
On Wed, 20 Feb 2019 10:46:10 -0500
Adam LeBlanc wrote:
> Hello,
>
> When I do a run with OpenMPI v4.0.0 on Infiniband with this command:
> mpirun --mca btl_openib_warn_no_device_params_found 0 --map-by node
> --mca orte_base_help_aggregate 0 --mca btl openib,vader,self --mca
> pml ob1 --mca btl_
FYI, just noticed this post from the HDF Group:
https://forum.hdfgroup.org/t/hdf5-and-openmpi/5437
/Peter K
n code before it gets to
> >>the
> >>F90 application.
> >> I am using the Ver 6.1 PGF90 64bit compiler on a Linux Opteron
> >>workstation with 2
> >>dual core 2.4 GHz processors. If you think it is worthwhile to
> >>pursue this problem
> >>further, what could I send you to help troubleshoot the problem?
> >>Meanwhile I have
> >>gone back to 1.0.1, which works fine on everything.
--
Peter Kjellström |
National Supercomputer Centre |
Sweden | http://www.nsc.liu.se
nor ones.
/Peter
>
> Daniël Mantione
> ...
--
--------
Peter Kjellström |
National Supercomputer Centre |
Sweden | http://www.nsc.liu.se
On Tuesday 05 September 2006 09:19, Aidaros Dev wrote:
> Nowadays we hear about the Intel dual-core processor. An Intel dual-core
> processor consists of two complete execution cores in one physical
> processor, both running at the same frequency. Both cores share the same
> packaging and the same interf
On Tue, 27 Aug 2019 14:36:54 -0500
Cooper Burns via users wrote:
> Hello all,
>
> I have been doing some MPI benchmarking on an Infiniband cluster.
>
> Specs are:
> 12 cores/node
> 2.9 GHz/core
> Infiniband interconnect (TCP also available)
>
> Some runtime numbers:
> 192 cores total: (16 nodes
On Wed, 28 Aug 2019 09:45:15 -0500
Cooper Burns wrote:
> Peter,
>
> Thanks for your input!
> I tried some things:
>
> *1) The app was placed/pinned differently by the two MPIs. Often this
> would probably not cause such a big difference.*
> I agree this is unlikely the cause, however I tried va
On Fri, 1 Nov 2019 15:48:35 +
"Jeff Squyres \(jsquyres\) via users" wrote:
> Open MPI doesn't have a public function in its Fortran interface
> named "random_seed". So I'm not sure what that's about.
That is a WRF+GCC bug.
> On Nov 1, 2019, at 11:36 AM, Qianjin Zheng
> mailto:qianjin.zh...
On Wed, 20 Nov 2019 17:38:19 +
"Mccall, Kurt E. \(MSFC-EV41\) via users"
wrote:
> Hi,
>
> My job is behaving differently on its two nodes, refusing to
> MPI_Comm_spawn() a process on one of them but succeeding on the
> other.
...
> Data for node: n002   Num slots: 3 ...   Bound: N/A
> Data
On Mon, 18 Nov 2019 17:48:30 +
"Mccall, Kurt E. \(MSFC-EV41\) via users"
wrote:
> I'm trying to debug a problem with my job, launched with the mpiexec
> options -display-map and -display-allocation, but I don't know how to
> interpret the output. For example, mpiexec displays the following
On Thu, 25 Jun 2020 14:04:12 +
"CHESTER, DEAN \(PGR\) via users" wrote:
...
> The cluster hardware is QLogic infiniband with Intel CPUs. My
> understanding is that we should be using the old PSM for networking.
>
> Any thoughts what might be going wrong with the build?
Yes only PSM will pe
On Thu, 2 Jul 2020 08:38:51 +
"CHESTER, DEAN \(PGR\) via users" wrote:
> I tried this again and it resulted in the same error:
> nymph3.29935PSM can't open /dev/ipath for reading and writing (err=23)
> nymph3.29937PSM can't open /dev/ipath for reading and writing (err=23)
> nymph3.29936PSM c
On Thu, 2 Jul 2020 10:27:51 +
"CHESTER, DEAN \(PGR\) via users" wrote:
> The permissions were incorrect!
>
> For our old installation of OMPI 1.10.6 it didn’t complain which is
> strange.
Then that did not use PSM and as such had horrible performance :-(
/Peter K
On Wed, 27 Jan 2021 15:31:40 -0500
Michael Di Domenico via users wrote:
> if you have OPA cards, for openmpi you only need --with-ofi, you don't
> need psm/psm2/verbs/ucx.
I agree with Michael and would add for clarity that on the system you
always need PSM2 and optionally libfabric (if you go t
On Wed, 19 May 2021 15:53:50 +0200
Pavel Mezentsev via users wrote:
> It took some time but my colleague was able to build OpenMPI and get
> it working with OmniPath, however the performance is quite
> disappointing. The configuration line used was the
> following: ./configure --prefix=$INSTALL_P