On Tue, 27 Aug 2019 14:36:54 -0500
Cooper Burns via users wrote:
> Hello all,
>
> I have been doing some MPI benchmarking on an Infiniband cluster.
>
> Specs are:
> 12 cores/node
> 2.9ghz/core
> Infiniband interconnect (TCP also available)
>
> Some runtime numbers:
> 192 cores total: (16 nodes
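The node count follows directly from the specs quoted above; a one-line worked check (pure arithmetic, no cluster needed):

```shell
# 192 total cores at 12 cores/node gives the node count from the post.
echo $((192 / 12))   # → 16 nodes
```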
On Wed, 28 Aug 2019 09:45:15 -0500
Cooper Burns wrote:
> Peter,
>
> Thanks for your input!
> I tried some things:
>
> *1) The app was placed/pinned differently by the two MPIs. Often this
> would probably not cause such a big difference.*
> I agree this is unlikely the cause, however I tried va
On Fri, 1 Nov 2019 15:48:35 +
"Jeff Squyres (jsquyres) via users" wrote:
> Open MPI doesn't have a public function in its Fortran interface
> named "random_seed". So I'm not sure what that's about.
That is a WRF+GCC bug.
> On Nov 1, 2019, at 11:36 AM, Qianjin Zheng
> mailto:qianjin.zh...
On Wed, 20 Nov 2019 17:38:19 +
"Mccall, Kurt E. (MSFC-EV41) via users"
wrote:
> Hi,
>
> My job is behaving differently on its two nodes, refusing to
> MPI_Comm_spawn() a process on one of them but succeeding on the
> other.
...
> Data for node: n002  Num slots: 3 ... Bound: N/A
> Data
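One common reason MPI_Comm_spawn() succeeds on one node but not another is an uneven slot allocation. A minimal sketch of a hostfile granting slots on both nodes (node name n002 is from the output above; n001 and the parent binary are illustrative):

```shell
# Hypothetical hostfile giving both nodes room for spawned children.
cat <<'EOF' > hostfile
n001 slots=3
n002 slots=3
EOF
```

The parent would then be launched with `mpiexec --hostfile hostfile -np 1 ./parent`, leaving free slots for the spawned processes.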
On Mon, 18 Nov 2019 17:48:30 +
"Mccall, Kurt E. (MSFC-EV41) via users"
wrote:
> I'm trying to debug a problem with my job, launched with the mpiexec
> options -display-map and -display-allocation, but I don't know how to
> interpret the output. For example, mpiexec displays the following
On Thu, 25 Jun 2020 14:04:12 +
"CHESTER, DEAN (PGR) via users" wrote:
...
> The cluster hardware is QLogic infiniband with Intel CPUs. My
> understanding is that we should be using the old PSM for networking.
>
> Any thoughts what might be going wrong with the build?
Yes only PSM will pe
On Thu, 2 Jul 2020 08:38:51 +
"CHESTER, DEAN (PGR) via users" wrote:
> I tried this again and it resulted in the same error:
> nymph3.29935PSM can't open /dev/ipath for reading and writing (err=23)
> nymph3.29937PSM can't open /dev/ipath for reading and writing (err=23)
> nymph3.29936PSM c
On Thu, 2 Jul 2020 10:27:51 +
"CHESTER, DEAN (PGR) via users" wrote:
> The permissions were incorrect!
>
> For our old installation of OMPI 1.10.6 it didn’t complain which is
> strange.
Then that did not use PSM and as such had horrible performance :-(
/Peter K
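Since the root cause in this thread was device-node permissions, a quick sketch of a read/write check that the job user could run (the /dev/null stand-in is for illustration; on the cluster the argument would be /dev/ipath):

```shell
# Verify a device node is readable and writable by the current user,
# as PSM requires for /dev/ipath.
check_rw() {
  [ -r "$1" ] && [ -w "$1" ] && echo "ok: $1" || echo "no access: $1"
}
check_rw /dev/null   # stand-in device; use /dev/ipath on the cluster
```

If this prints "no access", fixing the udev rule or mode on the device (so it is accessible to compute-job users) is the usual remedy.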
On Wed, 27 Jan 2021 15:31:40 -0500
Michael Di Domenico via users wrote:
> if you have OPA cards, for openmpi you only need --with-ofi, you don't
> need psm/psm2/verbs/ucx.
I agree with Michael and would add for clarity that on the system you
always need PSM2 and optionally libfabric (if you go t
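A minimal configure sketch following Michael's advice for OPA cards, going through libfabric only (the prefix is illustrative, and disabling the other transports is an assumption for clarity, not a requirement):

```shell
# Hedged example: build Open MPI against libfabric for Omni-Path,
# without verbs or UCX (per the advice in this thread).
./configure --prefix=$HOME/ompi-ofi --with-ofi --without-verbs --without-ucx
```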
On Wed, 19 May 2021 15:53:50 +0200
Pavel Mezentsev via users wrote:
> It took some time but my colleague was able to build OpenMPI and get
> it working with OmniPath, however the performance is quite
> disappointing. The configuration line used was the
> following: ./configure --prefix=$INSTALL_P