Re: [OMPI users] Mapping, binding and ranking

2021-03-01 Thread John R Cary via users
, at 7:13 AM, Luis Cebamanos via users <users@lists.open-mpi.org> wrote: Hi John, I would be interested to know if that does what you are expecting... On 01/03/2021 00:02, John R Cary via users wrote: I've been watching this exchange with interest, because it is the closest I hav

Re: [OMPI users] Mapping, binding and ranking

2021-02-28 Thread John R Cary via users
I've been watching this exchange with interest, because it is the closest I have seen to what I want, but I want something slightly different: 2 processes per node, with the first one bound to one core, and the second bound to all the rest, with no use of hyperthreads. Would this be --map-by
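The symmetric part of that layout can be written without a rankfile (a sketch, assuming Open MPI 4.x option names; the executable is a placeholder):

    mpirun -np 4 --map-by ppr:2:node --bind-to core --report-bindings ./my_app

That gives two ranks per node, each bound to one core; the asymmetric half (the second rank spanning all the remaining cores) usually calls for a rankfile, as sketched under the following message.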

[OMPI users] Do -cpu-set or -cpu-list work? Or is there a better way to use rankfile?

2021-02-21 Thread John R Cary via users
Do -cpu-set or -cpu-list work?  Or is there a better way to use rankfile? I have a cluster with 24 cores and 1 GPU per node.  I would like to have one core drive the GPU and the other 23 to be used thread-parallel with OpenMP.  My setup is described in my just-previous email to this list:
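One way to get that split is a rankfile; a sketch assuming Open MPI 4.x rankfile syntax, with placeholder hostname, file, and executable names. The rankfile (here myrankfile) binds rank 0 to core 0 for the GPU driver and rank 1 to cores 1-23 for OpenMP:

    rank 0=node01 slot=0
    rank 1=node01 slot=1-23

    mpirun -np 2 -rf myrankfile -x OMP_NUM_THREADS=23 --report-bindings ./my_app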

Re: [OMPI users] OMPI_Affinity_str returning empty strings

2021-02-21 Thread John R Cary via users
73] MCW rank 0 bound to socket 0[core 0[hwt 0-1]]: [BB/../../../../../../../../../../../../../../../../../../../../../../..] [vcloud.txcorp.com:3231773] MCW rank 1 bound to socket 0[core 1[hwt 0-1]]: [../BB/../../../../../../../../../../../../../../../../../../../../../..] Thx On 2/2

[OMPI users] OMPI_Affinity_str returning empty strings

2021-02-21 Thread John R Cary via users
OMPI_Affinity_str returning empty strings I am trying to understand the affinities chosen by OpenMPI following the documentation of https://www.open-mpi.org/doc/v4.0/man3/OMPI_Affinity_str.3.php CentOS-8.2, gcc-8.3, openmpi-4.0.5 $ which mpirun
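A minimal calling sketch, following that man page (the header, format constant, and buffer-size names are taken from it and not re-verified against 4.0.5):

    #include <stdio.h>
    #include <mpi.h>
    #include <mpi-ext.h>

    int main(int argc, char **argv)
    {
        /* Three output buffers described in the man page: what Open MPI bound
           this process to, what it is currently bound to, and what exists. */
        char ompi_bound[OMPI_AFFINITY_STRING_MAX];
        char current_binding[OMPI_AFFINITY_STRING_MAX];
        char exists[OMPI_AFFINITY_STRING_MAX];
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        OMPI_Affinity_str(OMPI_AFFINITY_LAYOUT_FMT,
                          ompi_bound, current_binding, exists);
        printf("rank %d: bound=%s current=%s exists=%s\n",
               rank, ompi_bound, current_binding, exists);
        MPI_Finalize();
        return 0;
    }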

Re: [OMPI users] 3.0.4, 4.0.1 build failure on OSX Mojave with LLVM

2019-04-24 Thread John R. Cary via users
Hi George, Just to make sure I am communicating On 4/24/2019 9:25 AM, George Bosilca wrote: The configure AC_HEADER_STDC macro is considered obsolete [1] as most of the OSes are STDC compliant nowadays. Thanks.  I read that and wonder why it is used if obsolete? To have it failing on a

Re: [OMPI users] 3.0.4, 4.0.1 build failure on OSX Mojave with LLVM

2019-04-23 Thread John R. Cary via users
. On Apr 23, 2019, at 6:26 PM, John R. Cary via users wrote: The failure is In file included from /Users/cary/projects/ulixesall-llvm/builds/openmpi-4.0.1/nodl/../ompi/datatype/ompi_datatype_external.c:29: In file included from /Users/cary/projects/ulixesall-llvm/builds/openmpi-4.0.1/nodl/../ompi

[OMPI users] 3.0.4, 4.0.1 build failure on OSX Mojave with LLVM

2019-04-23 Thread John R. Cary via users
The failure is In file included from /Users/cary/projects/ulixesall-llvm/builds/openmpi-4.0.1/nodl/../ompi/datatype/ompi_datatype_external.c:29: In file included from /Users/cary/projects/ulixesall-llvm/builds/openmpi-4.0.1/nodl/../ompi/communicator/communicator.h:38: In file included from

Re: [OMPI users] Windows support for OpenMPI

2012-12-03 Thread John R. Cary
Dear OpenMPI community, This email is about whether a commercial version of OpenMPI for Windows could be successful. I hesitated before sending this, but upon asking some others (notably Jeff) on this list, it seemed appropriate. We at Tech-X have been asking whether a commercial/freemium

Re: [OMPI users] Ensuring use of real cores

2012-09-12 Thread John R. Cary
Thanks! John On 9/12/12 8:05 AM, Ralph Castain wrote: On Sep 12, 2012, at 4:57 AM, "John R. Cary" <c...@txcorp.com> wrote: I do want in fact to bind first to one HT of each core before binding to two HTs of one core. So that will be possible in 1.7? Yes - you can get a

Re: [OMPI users] Ensuring use of real cores

2012-09-12 Thread John R. Cary
of a core. Starting with the upcoming 1.7 release, you can bind to the separate HTs, but that doesn't sound like something you want to do. HTH Ralph On Sep 11, 2012, at 6:34 PM, John R. Cary <c...@txcorp.com> wrote: Our code gets little benefit from using virtual cores (hyperthreading), s

[OMPI users] Ensuring use of real cores

2012-09-11 Thread John R. Cary
Our code gets little benefit from using virtual cores (hyperthreading), so when we run with mpiexec on a machine with 8 real plus 8 virtual cores, we would like to be certain that it uses only the 8 real cores. Is there a way to do this with openmpi? Thx John
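With current option names, the usual way to keep ranks off the hyperthreads is to map and bind by core (a sketch; the executable is a placeholder, and the 1.6-era spellings were --bycore and --bind-to-core):

    mpirun -np 8 --map-by core --bind-to core --report-bindings ./my_app

Each rank then owns one physical core (both of its hardware threads), so no two ranks land on sibling hyperthreads of the same core.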

Re: [OMPI users] Cannot build openmpi-1.6 on

2012-07-01 Thread John R. Cary
On 6/30/12 8:47 AM, Ralph Castain wrote: Add --disable-vt to your configure line - if you don't need VampirTrace, just bypass the problem Works. Thanks. On Jun 30, 2012, at 8:32 AM, John R. Cary wrote: My system: $ uname -a Linux multipole.txcorp.com 2.6.32-220.17.1.el6.x86_64 #1 SMP
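For the record, a configure invocation of the sort being described (a sketch; the install prefix is a placeholder):

    ./configure --prefix=$HOME/software/openmpi-1.6 --disable-vt
    make -j4 && make install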

[OMPI users] Cannot build openmpi-1.6 on

2012-06-30 Thread John R. Cary
My system: $ uname -a Linux multipole.txcorp.com 2.6.32-220.17.1.el6.x86_64 #1 SMP Wed May 16 00:01:37 BST 2012 x86_64 x86_64 x86_64 GNU/Linux $ gcc --version gcc (GCC) 4.6.3 Copyright (C) 2011 Configured with '/scr_multipole/cary/vorpalall/builds/openmpi-1.6/configure' \

[OMPI users] Plans for cmake with non-Windows?

2012-06-13 Thread John R. Cary
I noted that the openmpi-1.6 download cannot be configured with CMake. Are there plans for making it configurable with CMake? Thx. John Cary

Re: [OMPI users] openmpi and mingw32?

2011-11-21 Thread John R. Cary
On 11/21/2011 5:43 AM, Shiqing Fan wrote: Hi John, Yes, there will be an initial build support for MinGW, but a few runtime issues still need to be fixed. If you want to try the current one, please download one of the latest 1.5 nightly tarballs. Please just let me know if you got problems

[OMPI users] openmpi and mingw32?

2011-11-20 Thread John R. Cary
Are there plans for mingw32 support in openmpi? If so, any time scale? I configured with cmake and errored out at In file included from C:/winsame/builds-mingw/facetsall-mingw/openmpi-1.5.4/opal/include/opal_config_bottom.h:258:0, from

Re: [OMPI users] successful story of building openmpi on cygwin?

2011-10-28 Thread John R. Cary
I have been trying to build with mingw, including mingw32-gfortran, with no luck. Using mingw32-4.5.1, openmpi-1.5.4. Has anyone gotten mingw32 with gfortran to work with openmpi? Thx John On 10/28/11 4:09 AM, Shiqing Fan wrote: Hi Yue, If you want to build Open MPI on Windows, there

[OMPI users] 1.4.2 build problem

2010-06-01 Thread John R. Cary
After patching, I get: make[3]: Entering directory `/scr_iter/cary/facetspkgs/builds/openmpi-1.4.2/nodl/ompi/contrib/vt/vt' make[3]: *** No rule to make target `/scr_iter/cary/facetspkgs/builds/openmpi/ompi/contrib/vt/vt/m4/acinclude.compinst.m4', needed by

[OMPI users] OMPI looking for PBS file?

2010-03-14 Thread John R. Cary
I have a script that launches a bunch of runs on some compute nodes of a cluster. Once I get through the queue, I query PBS for my machine file, then I copy that to a local file 'nodes' which I use for mpiexec: mpiexec -machinefile /home/research/cary/projects/vpall/vptests/nodes -np 6 /hom
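A minimal sketch of that kind of launcher, assuming a standard PBS/Torque environment where the scheduler exports PBS_NODEFILE (the executable and process count are placeholders):

    #!/bin/sh
    # inside the PBS job: copy the scheduler-provided machine file locally
    cp "$PBS_NODEFILE" nodes
    # launch against the copied machine file
    mpiexec -machinefile ./nodes -np 6 ./my_app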

Re: [OMPI users] Program deadlocks, on simple send/recv loop

2009-12-01 Thread John R. Cary
Jeff Squyres wrote: (for the web archives) Brock and I talked about this .f90 code a bit off list -- he's going to investigate with the test author a bit more because both of us are a bit confused by the F90 array syntax used. Attached is a simple send/recv code written (procedural) C++ that
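The attachment is not reproduced in the archive; as a generic illustration of the pattern under discussion (not the author's code), a minimal paired send/recv loop in plain C:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, iter;
        double buf[1000] = {0};

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* pair ranks (0,1), (2,3), ...; even ranks send first, odd ranks receive first */
        int partner = (rank % 2 == 0) ? rank + 1 : rank - 1;
        if (partner >= 0 && partner < size) {
            for (iter = 0; iter < 10000; ++iter) {
                if (rank % 2 == 0) {
                    MPI_Send(buf, 1000, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD);
                    MPI_Recv(buf, 1000, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
                } else {
                    MPI_Recv(buf, 1000, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
                    MPI_Send(buf, 1000, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD);
                }
            }
        }
        MPI_Finalize();
        return 0;
    }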

[OMPI users] Release date for 1.3.4?

2009-11-12 Thread John R. Cary
From http://svn.open-mpi.org/svn/ompi/branches/v1.3/NEWS I see: - Many updates and fixes to the (non-default) "sm" collective component (i.e., native shared memory MPI collective operations). Will this fix the problem noted at https://svn.open-mpi.org/trac/ompi/ticket/2043 ??

Re: [OMPI users] collective communications broken on more than 4 cores

2009-10-29 Thread John R. Cary
This also appears to fix a bug I had reported that did not involve collective calls. The code is appended. When run on 64 bit architecture with iter.cary$ gcc --version gcc (GCC) 4.4.0 20090506 (Red Hat 4.4.0-4) Copyright (C) 2009 Free Software Foundation, Inc. This is free software; see the

Re: [OMPI users] mpicxx and LD_RUN_PATH

2009-08-08 Thread John R. Cary
x.John On Aug 3, 2009, at 4:21 PM, John R. Cary wrote: In the latest versions of libtool, the runtime library path is encoded with a statement like: LD_RUN_PATH="/scr_multipole/cary/facetsall/physics/uedge/par/uecxxpy/.libs:/contrib/babel-1.4.0-r6662p1-shared/lib:/scr_multipole

[OMPI users] mpicxx and LD_RUN_PATH

2009-08-03 Thread John R. Cary
In the latest versions of libtool, the runtime library path is encoded with a statement like:

[OMPI users] And anyone know what limits connections?

2009-07-11 Thread John R. Cary
-to-openmpi-default-hostfile on your cmd line. Check out "man orte_hosts" for a full explanation of how these are used as it has changed from 1.2. Ralph On Jul 11, 2009, at 7:21 AM, John R. Cary wrote: The original problem was that I could not get an 8-proc job to run on an 8-co

[OMPI users] default host file ignore? (also, what limits connections?)

2009-07-11 Thread John R. Cary
The original problem was that I could not get an 8-proc job to run on an 8-core cluster. I loaded mpi4py and petsc4py, and then I try to run the python script: from mpi4py import MPI from petsc4py import PETSc using mpirun -n 8 -x PYTHONPATH python test-mpi4py.py This hangs on my 8-core FC11

[OMPI users] Using openmpi within python and crashes

2009-07-09 Thread John R. Cary
Our scenario is that we are running python, then importing a module written in Fortran. We run via: mpiexec -n 8 -x PYTHONPATH -x SIDL_DLL_PATH python tokHsmNP8.py where the script calls into Fortran to call MPI_Init. On 8 procs (but not one) we get hangs in the code (on some machines but