, 2021, at 7:13 AM, Luis Cebamanos via users <users@lists.open-mpi.org> wrote:
Hi John,
I would be interested to know if that does what you are expecting...
On 01/03/2021 00:02, John R Cary via users wrote:
I've been watching this exchange with interest, because it is the
closest I have seen to what I want, but I want something slightly
different: 2 processes per node, with the first one bound to one core,
and the second bound to all the rest, with no use of hyperthreads.
Would this be --map-by ppr? Do -cpu-set or -cpu-list work? Or is there a better way, using a rankfile?
I have a cluster with 24 cores and 1 GPU per node. I would like to have
one core drive the GPU and the other 23 used for thread parallelism with
OpenMP. My setup is described in my just-previous email to this list:
CentOS
[vcloud.txcorp.com:3231773] MCW rank 0 bound to socket 0[core 0[hwt 0-1]]: [BB/../../../../../../../../../../../../../../../../../../../../../../..]
[vcloud.txcorp.com:3231773] MCW rank 1 bound to socket 0[core 1[hwt 0-1]]: [../BB/../../../../../../../../../../../../../../../../../../../../../..]
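As far as I can tell, the built-in mapping/binding policies (including --map-by ppr) give every rank the same style of binding, so the asymmetric 1+23 split is easiest to express with a rankfile. A rough, untested sketch, reusing the host name from the output above and assuming the 24 cores are numbered 0-23 (the rankfile and executable names are placeholders):

rank 0=vcloud.txcorp.com slot=0
rank 1=vcloud.txcorp.com slot=1-23

mpirun -np 2 --rankfile myrankfile --report-bindings ./myapp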
Thx....
OMPI_Affinity_str returning empty strings
I am trying to understand the affinities chosen by OpenMPI
following the documentation of
https://www.open-mpi.org/doc/v4.0/man3/OMPI_Affinity_str.3.php
CentOS-8.2, gcc-8.3, openmpi-4.0.5
$ which mpirun
~/ompi/contrib-gcc830/openmpi-4.0.5-shared/bin/mpir
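For reference, the calling pattern from that man page looks like the minimal sketch below (it assumes an Open MPI build with the affinity extension available; error handling omitted):

#include <stdio.h>
#include <mpi.h>
#include <mpi-ext.h>   /* OMPI_Affinity_str is an Open MPI extension */

int main(int argc, char **argv)
{
    char ompi_bound[OMPI_AFFINITY_STRING_MAX];
    char current_binding[OMPI_AFFINITY_STRING_MAX];
    char exists[OMPI_AFFINITY_STRING_MAX];
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Resource-string format gives the human-readable "socket X[core Y]" form */
    OMPI_Affinity_str(OMPI_AFFINITY_RSRC_STRING_FMT,
                      ompi_bound, current_binding, exists);
    printf("rank %d: bound=%s current=%s exists=%s\n",
           rank, ompi_bound, current_binding, exists);

    MPI_Finalize();
    return 0;
}

If the strings come back empty even from a minimal program like this, the problem is presumably not in how the call is being made.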
Hi George,
Just to make sure I am communicating
On 4/24/2019 9:25 AM, George Bosilca wrote:
The configure AC_HEADER_STDC macro is considered obsolete [1], as most
OSes are STDC-compliant nowadays.
Thanks. I read that and wonder why it is used if obsolete?
To have it failing on a r
..?
More below.
On Apr 23, 2019, at 6:26 PM, John R. Cary via users wrote:
The failure is
In file included from /Users/cary/projects/ulixesall-llvm/builds/openmpi-4.0.1/nodl/../ompi/datatype/ompi_datatype_external.c:29:
In file included from /Users/cary/projects/ulixesall-llvm/builds/openmpi-4.0.1/nodl/../ompi/communicator/communicator.h:38:
In file included from
Dear OpenMPI community,
This email is about whether a commercial version of OpenMPI for Windows
could be successful. I hesitated before sending this, but upon asking
some others (notably Jeff) on this list, it seemed appropriate.
We at Tech-X have been asking whether a commercial/freemium suppo
Thanks!
John
On 9/12/12 8:05 AM, Ralph Castain wrote:
On Sep 12, 2012, at 4:57 AM, "John R. Cary" wrote:
I do want in fact to bind first to one HT of each core
before binding to two HTs of one core. So that will
be possible in 1.7?
Yes - you can get a copy of the 1.7 nightly t
's of a core. Starting with the upcoming 1.7 release, you can bind to the
separate HTs, but that doesn't sound like something you want to do.
HTH
Ralph
On Sep 11, 2012, at 6:34 PM, John R. Cary wrote:
Our code gets little benefit from using virtual cores (hyperthreading),
so when we run with mpiexec on an 8 real plus 8 virtual machine, we
would like to be certain that it uses only the 8 real cores.
Is there a way to do this with openmpi?
Thx, John
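For what it's worth, with the binding options in 1.7-and-later mpirun, keeping each rank on its own physical core looks roughly like the sketch below (./a.out stands in for the real executable):

mpirun -np 8 --map-by core --bind-to core --report-bindings ./a.out

Binding to core pins a rank to a whole physical core (both of its hardware threads), so the 8 ranks land on the 8 real cores; --bind-to hwthread is the variant that would pin to individual hyperthreads instead.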
On 6/30/12 8:47 AM, Ralph Castain wrote:
Add --disable-vt to your configure line - if you don't need VampirTrace, just
bypass the problem
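e.g., roughly (the prefix is just a placeholder):

./configure --prefix=/opt/openmpi-1.6 --disable-vt ...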
Works. Thanks.
On Jun 30, 2012, at 8:32 AM, John R. Cary wrote:
My system:
$ uname -a
Linux multipole.txcorp.com 2.6.32-220.17.1.el6.x86_64 #1 SMP Wed May 16
00:01:37 BST 2012 x86_64 x86_64 x86_64 GNU/Linux
$ gcc --version
gcc (GCC) 4.6.3
Copyright (C) 2011
Configured with
'/scr_multipole/cary/vorpalall/builds/openmpi-1.6/configure' \
--prefix=/scr_multi
I noted that the openmpi-1.6 download cannot be configured with CMake.
Are there plans to make it configurable with CMake?
Thx. John Cary
On 11/21/2011 5:43 AM, Shiqing Fan wrote:
Hi John,
Yes, there will be initial build support for MinGW, but a few
runtime issues still need to be fixed.
If you want to try the current one, please download one of the latest
1.5 nightly tarballs. Please just let me know if you got problems o
Are there plans for mingw32 support in openmpi?
If so, any time scale?
I configured with cmake and errored out at
In file included from C:/winsame/builds-mingw/facetsall-mingw/openmpi-1.5.4/opal/include/opal_config_bottom.h:258:0,
                 from C:/winsame/builds-mingw/facetsall-mingw/
I have been trying to build with mingw, including mingw32-gfortran, with
no luck.
Using mingw32-4.5.1, openmpi-1.5.4.
Has anyone gotten mingw32 with gfortran to work with openmpi?
Thx, John
On 10/28/11 4:09 AM, Shiqing Fan wrote:
Hi Yue,
If you want to build Open MPI on Windows, there is
After patching, I get:
make[3]: Entering directory `/scr_iter/cary/facetspkgs/builds/openmpi-1.4.2/nodl/ompi/contrib/vt/vt'
make[3]: *** No rule to make target `/scr_iter/cary/facetspkgs/builds/openmpi/ompi/contrib/vt/vt/m4/acinclude.compinst.m4', needed by `/scr_iter/cary/facetspkgs/builds/o
what is going on.
Thx, John
>
>
> Instead just build OMPI with --with-tm, and it will link against TORQUE and
> start up and track jobs properly.
>
> -Joshua Bernstein
> Penguin Computing
>
> On Mar 14, 2010, at 21:35, "John R. Cary" wrote:
>
I have a script that launches a bunch of runs on some compute nodes of
a cluster. Once I get through the queue, I query PBS for my machine
file, then I copy that to a local file 'nodes' which I use for mpiexec:
mpiexec -machinefile /home/research/cary/projects/vpall/vptests/nodes -np 6 /home/r
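(For reference, the --with-tm route mentioned above is chosen at configure time, roughly like this sketch - the install prefix and TORQUE path are placeholders:

./configure --prefix=/opt/openmpi --with-tm=/usr/local/torque ...

With TM support built in, an mpiexec launched inside the PBS job picks up the nodes PBS allocated, so no machinefile is needed.)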
Jeff Squyres wrote:
(for the web archives)
Brock and I talked about this .f90 code a bit off list -- he's going
to investigate with the test author a bit more because both of us are
a bit confused by the F90 array syntax used.
Attached is a simple send/recv code written in (procedural) C++ that
From http://svn.open-mpi.org/svn/ompi/branches/v1.3/NEWS I see:
- Many updates and fixes to the (non-default) "sm" collective
component (i.e., native shared memory MPI collective operations).
Will this fix the problem noted at
https://svn.open-mpi.org/trac/ompi/ticket/2043 ?
Thanks..John
This also appears to fix a bug I had reported that did not involve
collective calls.
The code is appended. When run on a 64-bit architecture with
iter.cary$ gcc --version
gcc (GCC) 4.4.0 20090506 (Red Hat 4.4.0-4)
Copyright (C) 2009 Free Software Foundation, Inc.
This is free software; see the so
x86_64 x86_64 GNU/Linux
It does not appear to occur on a 4-core, 32-bit box:
multipole.cary$ uname -a
*** 2.6.25.14-108.fc9.i686 #1 SMP Mon Aug 4 14:08:11 EDT 2008 i686
athlon i386 GNU/Linux
which has an intermediate kernel.
JC
John R. Cary wrote:
We have been getting hangs and failures of openmpi-1.3.X
on an 8-core FC11 box. Details:
Machine:
Linux octet.carys.home 2.6.30.5-43.fc11.x86_64 #1 SMP Thu Aug 27
21:39:52 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
with 2 quad-core cpus.
Compiler:
g++ (GCC) 4.4.1 20090725 (Red Hat 4.4.1-2)
is run.
Thx. John
On Aug 3, 2009, at 4:21 PM, John R. Cary wrote:
In the latest versions of libtool, the runtime library path is encoded with
a statement like:
LD_RUN_PATH="/scr_multipole/cary/facetsall/physics/uedge/par/uecxxpy/.libs:/contrib/babel-1.4.0-r6662p1-shared/lib:/scr_multipole/cary/facetsall/physics/uedge/par/uecxxpy:/scr_multipole/cary/volatile/ued
-to-openmpi-default-hostfile
on your cmd line. Check out "man orte_hosts" for a full explanation of
how these are used as it has changed from 1.2.
Ralph
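For reference, a per-host entry in those hostfiles looks roughly like this (host name and slot count are made up for illustration):

node01 slots=8

so that mpirun knows how many processes it may place on each host.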
On Jul 11, 2009, at 7:21 AM, John R. Cary wrote:
The original problem was that I could not get an 8-proc job to
run on an 8-core cluster. I loaded mpi4py and petsc4py, and then
I try to run the python script:
from mpi4py import MPI
from petsc4py import PETSc
using
mpirun -n 8 -x PYTHONPATH python test-mpi4py.py
This hangs on my 8-core FC11
Our scenario is that we are running python, then importing a module
written in Fortran.
We run via:
mpiexec -n 8 -x PYTHONPATH -x SIDL_DLL_PATH python tokHsmNP8.py
where the script calls into Fortran to call MPI_Init.
On 8 procs (but not on 1) we get hangs in the code (on some machines but not