[OMPI users] question about the Open-MPI ABI

2023-02-01 Thread Jeff Hammond via users
0012cf80 B ompi_mpi_info_null
00116038 D ompi_mpi_info_null_addr
00133720 B ompi_mpi_op_null
001163c0 D ompi_mpi_op_null_addr
00135740 B ompi_mpi_win_null
00117c80 D ompi_mpi_win_null_addr
0012d080 B ompi_request_null
00116040 D ompi_request_null_addr

Re: [OMPI users] Disabling barrier in MPI_Finalize

2022-09-10 Thread Jeff Hammond via users
r it? > > > > Thanks, > > Kurt > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [OMPI users] Segfault in ucp_dt_pack function from UCX library 1.8.0 and 1.11.2 for large sized communications using both OpenMPI 4.0.3 and 4.1.2

2022-06-05 Thread Jeff Hammond via users
h-memalign=64 >> >> and OpenMPI configure options: >> >> >> '--prefix=/scinet/niagara/software/2022a/opt/gcc-11.2.0/openmpi/4.1.2+ucx-1.11.2' >> '--enable-mpi-cxx' >> '--enable-mpi1-compatibility' >> '--with-hwloc=internal' >> '--with-knem=/opt/knem-1.1.3.90mlnx1' >> '--with-libevent=internal' >> '--with-platform=contrib/platform/mellanox/optimized' >> '--with-pmix=internal' >> '--with-slurm=/opt/slurm' >> '--with-ucx=/scinet/niagara/software/2022a/opt/gcc-11.2.0/ucx/1.11.2' >> >> I am then wondering: >> >> 1) Is UCX library considered "stable" for production use with very large >> sized problems ? >> >> 2) Is there a way to "bypass" UCX at runtime? >> >> 3) Any idea for debugging this? >> >> Of course, I do not yet have a "minimum reproducer" that bugs, since it >> happens only on "large" problems, but I think I could export the data for a >> 512 processes reproducer with PARMetis call only... >> >> Thanks for helping, >> >> Eric

[OMPI users] please fix your attributes implementation in v5.0.0rc3+, which is broken by GCC 11

2022-04-30 Thread Jeff Hammond via users
. https://jenkins.open-mpi.org/jenkins/job/open-mpi.build.compilers/8370/ indicates you are not testing GCC 11. Please test this compiler. https://github.com/open-mpi/ompi/pull/10343 has details. Jeff -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [OMPI users] cross-compilation documentation seems to be missing

2021-09-07 Thread Jeff Hammond via users
V node. It will generate a config.cache file. > > Then you can > > grep ^ompi_cv_fortran_ config.cache > > to generate the file you can pass to --with-cross when cross compiling > on your x86 system > > > Cheers, > > > Gilles > > > On 9/7/2021 7:35 PM, Jeff Ha

[OMPI users] cross-compilation documentation seems to be missing

2021-09-07 Thread Jeff Hammond via users
relevant. Thanks, Jeff -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

[OMPI users] how to suppress "libibverbs: Warning: couldn't load driver ..." messages?

2021-06-23 Thread Jeff Hammond via users
I am running on a single node and do not need any network support. I am using the NVIDIA build of Open-MPI 3.1.5. How do I tell it to never use anything related to IB? It seems that ^openib is not enough. Thanks, Jeff $ OMP_NUM_THREADS=1

Re: [OMPI users] Books/resources to learn (open)MPI from

2020-08-20 Thread Jeff Hammond via users
;> >> >> >> Assuming you want to learn about MPI (and not the Open MPI internals), >> >> the books by Bill Gropp et al. are the reference : >> >> https://www.mcs.anl.gov/research/projects/mpi/usingmpi/ >> >> >> >> (Using MPI 3rd edition is affordable on amazon) >> > >> > >> > Thanks! Yes, this is what I was after. However, if I wanted to learn >> about OpenMPI internals, what would be the go-to resource? >> > > > -- > Jeff Squyres > jsquy...@cisco.com > > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [OMPI users] OMPI 4.0.4 how to use mpirun properly in numa architecture

2020-08-20 Thread Jeff Hammond via users
y than traditional MPI codes in a NUMA context and it is worth mentioning it explicitly if you are using NWChem, GAMES, MOLPRO, or other code that uses GA or DDI. If you are running VASP, CP2K, or other code that uses MPI in a more conventional manner, don't worry about it. Jeff -- Jeff

Re: [OMPI users] OpenMPI 4.0.2 with PGI 19.10, will not build with hcoll

2020-01-25 Thread Jeff Hammond via users
built OpenMPI > > 4.0.2 with GCC, Intel and AOCC compilers, all using the same options. > > > > hcoll is provided by MLNX_OFED 4.7.3 and configure is run with > > > > --with-hcoll=/opt/mellanox/hcoll > > > > > > -- > Ake Sandgren, HPC2N, Umea Unive

Re: [OMPI users] problem with cancelling Send-Request

2019-10-02 Thread Jeff Hammond via users
t work, so I’m wondering, > whether this is a current limitation or are we not supposed to end up in > this specific …_request_cancel implementation? > > Thank you in advance! > > Christian > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [OMPI users] problem with cancelling Send-Request

2019-10-02 Thread Jeff Hammond via users
requests by now */ > *return* OMPI_SUCCESS; > } > > The man page for MPI_Cancel does not mention that cancelling Send requests > does not work, so I’m wondering, > whether this is a current limitation or are we not supposed to end up in > this specific …_request_c

Re: [OMPI users] silent failure for large allgather

2019-08-11 Thread Jeff Hammond via users
cool. > It sounds like Open-MPI doesn't properly support the maximum transfer size of PSM2. One way to work around this is to wrap your MPI collective calls and do <4G chunking yourself. Jeff > Could the error reporting in this case be somehow improved ? > > I'd be glad
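
A minimal sketch of the chunking workaround described above, in C (the helper name and the 1 GiB chunk size are illustrative, not from the thread):

    #include <mpi.h>
    #include <stddef.h>

    /* Break one large broadcast into pieces that stay well below the
     * transport's 4 GiB transfer limit.  All ranks pass the same count,
     * so they march through the loop in lockstep. */
    static int Bcast_chunked(void *buf, size_t count, MPI_Datatype type,
                             int root, MPI_Comm comm)
    {
        int typesize;
        MPI_Type_size(type, &typesize);
        size_t max_elems = (1UL << 30) / (size_t)typesize;  /* ~1 GiB per call */
        char *p = (char *)buf;
        while (count > 0) {
            size_t n = count < max_elems ? count : max_elems;
            int rc = MPI_Bcast(p, (int)n, type, root, comm);
            if (rc != MPI_SUCCESS) return rc;
            p += n * (size_t)typesize;
            count -= n;
        }
        return MPI_SUCCESS;
    }

The same wrapping idea applies to other collectives; vector collectives such as MPI_Allgatherv additionally need per-rank offsets.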

Re: [OMPI users] When is it save to free the buffer after MPI_Isend?

2019-08-11 Thread Jeff Hammond via users
his ? >> > >> > MPI_Test(req, , ); >> > if (flag){ >> >MPI_Wait(req, MPI_STATUS_IGNORE); >> >free(buffer); >> > } >> >> That should be a no-op, because "req" should have been turned into >> MPI_REQUEST_NULL if flag==true. >> >>
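
For reference, the complete idiom under discussion looks like this in C (a sketch, not the original poster's code):

    #include <mpi.h>
    #include <stdlib.h>

    /* Once MPI_Test reports flag != 0 the request has completed and has been
     * set to MPI_REQUEST_NULL, so the following MPI_Wait is a no-op and the
     * send buffer may safely be freed. */
    void test_then_free(MPI_Request *req, void *buffer)
    {
        int flag = 0;
        MPI_Test(req, &flag, MPI_STATUS_IGNORE);
        if (flag) {
            MPI_Wait(req, MPI_STATUS_IGNORE);  /* returns immediately */
            free(buffer);
        }
    }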

Re: [OMPI users] Issues compiling HPL with OMPIv4.0.0

2019-04-03 Thread Jeff Hammond
or why it works with the old > OMPI version and not with the new. Any help or pointer would be appreciated. > Thanks. > AFernandez

Re: [OMPI users] Cannot catch std::bac_alloc?

2019-04-03 Thread Jeff Hammond
tch bad_alloc as I expected. It seems that I am >> > misunderstanding something. Could you please help? Thanks a lot. >> > >> > Best regards, >> > Zhen

Re: [OMPI users] MPI_Comm_spawn leads to pipe leak and other errors

2019-03-17 Thread Jeff Hammond
s. Would you > recommend that I report this issue on the developer's mailing list or open > a GitHub issue? > > Best wishes, > Thomas Pak > > On Mar 16 2019, at 7:40 pm, Jeff Hammond wrote: > > Is there perhaps a different way to solve your problem that doesn’t spawn > so

Re: [OMPI users] MPI_Comm_spawn leads to pipe leak and other errors

2019-03-16 Thread Jeff Hammond
> for (;;) { > MPI_Comm_spawn(cmd, cmd_argv, maxprocs, info, root, comm, > , array_of_errcodes); > > MPI_Comm_disconnect(); > } > > // If process was spawned > } else { > > puts("I was spawned!"); > > MPI_Comm_disconnect(); > } > > // Finalize > MPI_Finalize(
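
A self-contained sketch of the spawn/disconnect loop quoted above (the archive dropped the '&'-prefixed arguments, so the intercomm variable and the use of MPI_ERRCODES_IGNORE are guesses, not the poster's original code):

    #include <mpi.h>

    /* Repeatedly spawn a child job and disconnect from it.  The reported
     * problem concerned pipes leaked per spawned child in the parent. */
    void spawn_loop(const char *cmd, char **cmd_argv, int maxprocs,
                    MPI_Info info, int root, MPI_Comm comm)
    {
        for (;;) {
            MPI_Comm intercomm;
            MPI_Comm_spawn(cmd, cmd_argv, maxprocs, info, root, comm,
                           &intercomm, MPI_ERRCODES_IGNORE);
            MPI_Comm_disconnect(&intercomm);
        }
    }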

Re: [OMPI users] Best way to send on mpi c, architecture dependent data type

2019-03-13 Thread Jeff Hammond
that have the > same size in both architectures. Other option could be serialize long. > > So my question is: there any way to pass data that don't depend of > architecture? > > > > ___ > use
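
One portable approach (a sketch, not necessarily what was chosen in the thread) is to exchange fixed-width types, so the wire format does not depend on what "long" means on either architecture:

    #include <mpi.h>
    #include <stdint.h>

    /* Send/receive a 64-bit integer using MPI's fixed-width datatype, so both
     * sides agree on the size regardless of the local width of 'long'. */
    void send_count(int64_t value, int dest, int tag, MPI_Comm comm)
    {
        MPI_Send(&value, 1, MPI_INT64_T, dest, tag, comm);
    }

    void recv_count(int64_t *value, int src, int tag, MPI_Comm comm)
    {
        MPI_Recv(value, 1, MPI_INT64_T, src, tag, comm, MPI_STATUS_IGNORE);
    }

Byte order across heterogeneous hosts is a separate concern and depends on how the MPI library was built.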

[OMPI users] please fix RMA before you ship 4.0.0

2019-01-23 Thread Jeff Hammond
appear with shared-memory, which is a pretty important conduit. Jeff

Re: [OMPI users] Increasing OpenMPI RMA win attach region count.

2019-01-09 Thread Jeff Hammond

Re: [OMPI users] Querying/limiting OpenMPI memory allocations

2018-12-20 Thread Jeff Hammond

Re: [OMPI users] filesystem-dependent failure building Fortran interfaces

2018-12-05 Thread Jeff Hammond
h I don't know how robust it is these days in GNU Fortran.]

Re: [OMPI users] [version 2.1.5] invalid memory reference

2018-10-11 Thread Jeff Hammond

Re: [hwloc-users] Travis CI unit tests failing with HW "operating system" error

2018-09-14 Thread Jeff Hammond
you may try setting HWLOC_COMPONENTS=no_os,stop > in the environment so that hwloc behaves as if the operating system had no > topology support. > > Brice > > > > Le 14/09/2018 à 06:11, Jeff Hammond a écrit : > > All of the job failures have this warning so I am inclined to thin

Re: [hwloc-users] Travis CI unit tests failing with HW "operating system" error

2018-09-13 Thread Jeff Hammond
o run lstopo on that node? > > By the way, you shouldn't use hwloc 2.0.0rc2, at least because it's old, > it has a broken ABI, and it's a RC :) > > Brice > > > > Le 13/09/2018 à 16:12, Jeff Hammond a écrit : > > I am running ARMCI-MPI over MPICH in a Travis CI Linux

[hwloc-users] Travis CI unit tests failing with HW "operating system" error

2018-09-13 Thread Jeff Hammond
mond/armci-mpi/jobs/425342479 has all of the details. Jeff

Re: [OMPI users] RDMA over Ethernet in Open MPI - RoCE on AWS?

2018-09-11 Thread Jeff Hammond
net NICs can > handle RDMA requests directly? Or am I misunderstanding RoCE/how Open > MPI's RoCE transport? > > Ben

Re: [OMPI users] know which CPU has the maximum value

2018-08-11 Thread Jeff Hammond
2 cents from a user. > Gus Correa > > > On 08/10/2018 01:52 PM, Jeff Hammond wrote: > >> This thread is a perfect illustration of why MPI Forum participants >> should not flippantly discuss feature deprecation in discussion with >> users. Users who are not famil

Re: [OMPI users] know which CPU has the maximum value

2018-08-10 Thread Jeff Hammond

Re: [OMPI users] OSHMEM: shmem_ptr always returns NULL

2018-06-01 Thread Jeff Hammond
105590] base/spml_base_select.c:194 - mca_spml_base_select() >>> select: component ucx selected >>> [c11-2:105590] spml_ucx.c:82 - mca_spml_ucx_enable() *** ucx ENABLED **** >>> [c11-1:36522] spml_ucx.c:305 - mca_spml_ucx_a

Re: [OMPI users] MPI Windows: performance of local memory access

2018-05-23 Thread Jeff Hammond

Re: [OMPI users] User-built OpenMPI 3.0.1 segfaults when storing into an atomic 128-bit variable

2018-05-04 Thread Jeff Hammond
28b-aligned if the base is. Noncontiguous is actually worse in that the implementation could allocate the segment for each process with only 64b alignment. Jeff > -Nathan > > On May 3, 2018, at 9:43 PM, Jeff Hammond <jeff.scie...@gmail.com> wrote: > > Given that this seems to b
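
A defensive sketch related to this alignment discussion: over-allocate the shared segment slightly and round this rank's base pointer up to 16 bytes before storing 128-bit atomics into it (remote ranks obtaining the pointer via MPI_Win_shared_query would need the same rounding). This is an illustration, not a fix endorsed in the thread:

    #include <mpi.h>
    #include <stdint.h>

    void *aligned_shared_segment(MPI_Aint bytes, MPI_Comm shm_comm, MPI_Win *win)
    {
        void *base = NULL;
        /* ask for 16 spare bytes so the base can be rounded up */
        MPI_Win_allocate_shared(bytes + 16, 1, MPI_INFO_NULL, shm_comm, &base, win);
        uintptr_t p = (uintptr_t)base;
        p = (p + 15u) & ~(uintptr_t)15;   /* 16-byte (128-bit) alignment */
        return (void *)p;
    }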

Re: [OMPI users] User-built OpenMPI 3.0.1 segfaults when storing into an atomic 128-bit variable

2018-05-03 Thread Jeff Hammond
? (syscall-template.S:84) > >> ==22815==by 0x583B4A7: poll (poll2.h:46) > >> ==22815==by 0x583B4A7: poll_dispatch (poll.c:165) > >> ==22815==by 0x5831BDE: opal_libevent2022_event_base_loop > (event.c:1630) > >> ==22815==by 0x57F210D: progress_engine (in > /usr/local/lib/libopen-p

Re: [OMPI users] libmpi_cxx.so doesn't exist in lib path when installing 3.0.1

2018-04-08 Thread Jeff Hammond

Re: [OMPI users] libmpi_cxx

2018-03-29 Thread Jeff Hammond
for the MPI C++ > bindings. Hence the deprecation in 2009 and the removal in 2012. > > -- > Jeff Squyres > jsquy...@cisco.com

Re: [OMPI users] Concerning the performance of the one-sided communications

2018-02-16 Thread Jeff Hammond

Re: [OMPI users] Using OpenSHMEM with Shared Memory

2018-02-06 Thread Jeff Hammond
" to 0 to see all > help / error messages > > > I tried fiddling with the MCA command-line settings, but didn't have any > luck. Is it possible to do this? Can anyone point me to some > documentation? > > Thanks, > > Ben >

Re: [OMPI users] Oversubscribing

2018-01-24 Thread Jeff Hammond
lt OpenMPI 3.0.0 on Arch Linux. > > Cheers, > > Ben

Re: [OMPI users] Custom datatype with variable length array

2018-01-16 Thread Jeff Hammond
allows you to use a more efficient collective algorithm. > I'm open to any good and elegant suggestions! > I won't guarantee that any of my suggestions satisfied either property :-) Best, Jeff

Re: [OMPI users] latest Intel CPU bug

2018-01-05 Thread Jeff Hammond
ter. > > On Thursday, January 4, 2018, r...@open-mpi.org <r...@open-mpi.org> wrote: > >> Yes, please - that was totally inappropriate for this mailing list. >> Ralph >> >> >> On Jan 4, 2018, at 4:33 PM, Jeff Hammond <jeff.scie...@gmail.com> w

Re: [OMPI users] latest Intel CPU bug

2018-01-04 Thread Jeff Hammond
t the primary impact comes from >> accessing kernel services. With an OS-bypass network, that shouldn’t happen >> all that frequently, and so I would naively expect the impact to be at the >> lower end of the reported scale for those environments. TCP-based systems, >> though, might be on the other end. >> >

Re: [OMPI users] Possible memory leak in opal_free_list_grow_st

2017-12-04 Thread Jeff Hammond
round this without replacing a lot of Boost code by a > hand-coded equivalent. > > Any suggestions welcome. > > Thanks, > > Philip

Re: [OMPI users] How can I send an unsigned long long recvcounts and displs using MPI_Allgatherv()

2017-11-28 Thread Jeff Hammond
d that this will create further delays. Actually, > this is the reason I am trying to replace Bcast() and try other things. > > I am using Open MPI 2.1.2 and testing on a single computer with 7 MPI > processes. The ompi_info is the attached file. > _

Re: [OMPI users] How can I measure synchronization time of MPI_Bcast()

2017-10-20 Thread Jeff Hammond

Re: [OMPI users] Hybrid MPI+OpenMP benchmarks (looking for)

2017-10-09 Thread Jeff Hammond

Re: [OMPI users] Question concerning compatibility of languages used with building OpenMPI and languages OpenMPI uses to build MPI binaries.

2017-09-22 Thread Jeff Hammond
etails but there may be unnecessarily restrictive at times. > > > > On Wed, Sep 20, 2017 at 4:45 PM, Jeff Hammond <jeff.scie...@gmail.com> wrote: > > > > > > On Wed, Sep 20, 2017 at 5:55 AM, Dave Love <dave.l...@manchester.ac.uk> wrote: > > Jeff H

Re: [OMPI users] Multi-threaded MPI communication

2017-09-21 Thread Jeff Hammond

Re: [OMPI users] Question concerning compatibility of languages used with building OpenMPI and languages OpenMPI uses to build MPI binaries.

2017-09-20 Thread Jeff Hammond
On Wed, Sep 20, 2017 at 5:55 AM, Dave Love <dave.l...@manchester.ac.uk> wrote: > Jeff Hammond <jeff.scie...@gmail.com> writes: > > > Please separate C and C++ here. C has a standard ABI. C++ doesn't. > > > > Jeff > > [For some value o

Re: [OMPI users] Question concerning compatibility of languages used with building OpenMPI and languages OpenMPI uses to build MPI binaries.

2017-09-20 Thread Jeff Hammond
On Wed, Sep 20, 2017 at 6:26 AM, Gilles Gouaillardet < gilles.gouaillar...@gmail.com> wrote: > On Tue, Sep 19, 2017 at 11:58 AM, Jeff Hammond <jeff.scie...@gmail.com> > wrote: > > > Fortran is a legit problem, although if somebody builds a standalone > Fortran >

Re: [OMPI users] Question concerning compatibility of languages used with building OpenMPI and languages OpenMPI uses to build MPI binaries.

2017-09-18 Thread Jeff Hammond

Re: [OMPI users] Question concerning compatibility of languages used with building OpenMPI and languages OpenMPI uses to build MPI binaries.

2017-09-18 Thread Jeff Hammond

Re: [OMPI users] Issues with Large Window Allocations

2017-09-08 Thread Jeff Hammond
encies as well). > >> > >> Regarding the size limitation of /tmp: I found an opal/mca/shmem/posix > >> component that uses shmem_open to create a POSIX shared memory object > >> instead of a file on disk, which is then mmap'ed. Unfortunately, if I > >> r

Re: [OMPI users] Issues with Large Window Allocations

2017-09-04 Thread Jeff Hammond
an alternative, would it be possible to use anonymous shared memory > mappings to avoid the backing file for large allocations (maybe above a > certain threshold) on systems that support MAP_ANONYMOUS and distribute the > result of the mmap call among the processes on the node? > > Thanks, &

Re: [OMPI users] Issues with Large Window Allocations

2017-08-29 Thread Jeff Hammond
y > supports allocations up to 60GB, so my second point reported below may be > invalid. Number 4 seems still seems curious to me, though. > > Best > Joseph > > On 08/25/2017 09:17 PM, Jeff Hammond wrote: > >> There's no reason to do anything special for shared memory wi

Re: [OMPI users] Issues with Large Window Allocations

2017-08-25 Thread Jeff Hammond
I'm happy to >>> provide additional details if needed. >>> >>> Best >>> Joseph

Re: [OMPI users] How to get a verbose compilation?

2017-08-05 Thread Jeff Hammond
ptions in order to debug. Thanks

Re: [OMPI users] MPI_IN_PLACE

2017-07-26 Thread Jeff Hammond

Re: [OMPI users] Double free or corruption with OpenMPI 2.0

2017-06-14 Thread Jeff Hammond
>>> I have two questions - >>> 1) I am unable to capture the standard error that mpirun throws in a file. >>> How can I go about capturing the standard error of mpirun ? >>> 2) Has this error, i.e. double free or corruption, been reported by others ? Is there a bug fix available ? >>> Regards, >>> Ashwin.

Re: [OMPI users] Double free or corruption with OpenMPI 2.0

2017-06-13 Thread Jeff Hammond
>> How can I go about capturing the standard error of mpirun ? >> 2) Has this error, i.e. double free or corruption, been reported by others ? Is there a bug fix available ? >> Regards, >> Ashwin.

Re: [OMPI users] "undefined reference to `MPI_Comm_create_group'" error message when using Open MPI 1.6.2

2017-06-08 Thread Jeff Hammond
nd link the code using Open MPI 1.6.2? > > Thanks, > Arham Amouei

Re: [OMPI users] mpi_scatterv problem in fortran

2017-05-15 Thread Jeff Hammond
00403769 Unknown Unknown Unknown

Re: [OMPI users] openib/mpi_alloc_mem pathology

2017-03-15 Thread Jeff Hammond
On Wed, Mar 15, 2017 at 5:44 PM Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote: > On Mar 15, 2017, at 8:25 PM, Jeff Hammond <jeff.scie...@gmail.com> wrote: > > > > I couldn't find the docs on mpool_hints, but shouldn't there be a way to > disable registration via

Re: [OMPI users] openib/mpi_alloc_mem pathology

2017-03-15 Thread Jeff Hammond
blem, but I assume they'd like to know for users' sake, > > particularly if it's not going to be addressed. I wonder what else > > might be affected.

Re: [OMPI users] MPI_THREAD_MULTIPLE: Fatal error in MPI_Win_flush

2017-03-07 Thread Jeff Hammond
>>>> Many thanks in advance! >>>> >>>> Cheers >>>> Joseph

Re: [OMPI users] MPI_THREAD_MULTIPLE: Fatal error in MPI_Win_flush

2017-02-21 Thread Jeff Hammond

Re: [OMPI users] Rounding errors and MPI

2017-01-18 Thread Jeff Hammond
s related to MPI > > Oscar Mojica

Re: [hwloc-users] Issue running hwloc on Xeon-Phi Coprocessor uOS

2017-01-17 Thread Jeff Hammond
PATH, but $LIBRARY_PATH. It'd > ld.so (at runtime) which looks at $LD_LIBRARY_PATH. > > Samuel

Re: [hwloc-users] Issue running hwloc on Xeon-Phi Coprocessor uOS

2017-01-16 Thread Jeff Hammond
You need to cross-compile binaries for Knights Corner (KNC) aka Xeon Phi 71xx if you're on a Xeon host. KNC is x86 but the binary format differs, as your analysis indicates. You can either ssh to card and build native, build on host with k1om GCC tool chain, or build on host with Intel

Re: [OMPI users] Issues building Open MPI 2.0.1 with PGI 16.10 on macOS

2016-11-28 Thread Jeff Hammond
gt;> "It turns out my Xcode was messed up as I was missing /usr/include/. >> After rerunning xcode-select --install it works now." >> >> On my OS X 10.11.6, I have /usr/include/stdint.h without having the >> PGI compilers. This may be related to th

Re: [OMPI users] How to yield CPU more when not computing (was curious behavior during wait for broadcast: 100% cpu)

2016-11-28 Thread Jeff Hammond
cal effects of spinning and > ameliorations on some sort of "representative" system? > > None that are published, unfortunately. Best, Jeff

Re: [OMPI users] Issues building Open MPI 2.0.1 with PGI 16.10 on macOS

2016-11-28 Thread Jeff Hammond
I'm not sure. But, no matter what, does anyone have thoughts on how to > solve this? > > Thanks, > Matt > > -- > Matt Thompson

Re: [OMPI users] Cast MPI inside another MPI?

2016-11-27 Thread Jeff Hammond
Have you tried subcommunicators? MPI is well-suited to hierarchical parallelism since MPI-1 days. Additionally, MPI-3 enables MPI+MPI as George noted. Your question is probably better suited for Stack Overflow, since it's not implementation-specific... Jeff On Fri, Nov 25, 2016 at 3:34 AM
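
A minimal illustration of the subcommunicator suggestion: split the world communicator into one communicator per shared-memory node, which can then host the inner level of the hierarchy (the function name is illustrative):

    #include <mpi.h>

    /* One communicator per node; ranks on the same node end up together. */
    void make_node_comm(MPI_Comm *node_comm)
    {
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, node_comm);
    }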

Re: [OMPI users] Follow-up to Open MPI SC'16 BOF

2016-11-22 Thread Jeff Hammond
> > >1. MPI_ALLOC_MEM integration with memkind > > It would make sense to prototype this as a standalone project that is integrated with any MPI library via PMPI. It's probably a day or two of work to get that going. Jeff
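
A hedged sketch of that PMPI-based prototype: intercept MPI_Alloc_mem/MPI_Free_mem and satisfy them from high-bandwidth memory through memkind's hbwmalloc API. Real code would fall back to PMPI_Alloc_mem when hbw memory is exhausted and track which pointers it owns; this is only an outline of the idea, not an Open MPI feature:

    #include <mpi.h>
    #include <hbwmalloc.h>   /* memkind's high-bandwidth-memory allocator */

    int MPI_Alloc_mem(MPI_Aint size, MPI_Info info, void *baseptr)
    {
        (void)info;
        void *p = hbw_malloc((size_t)size);
        if (p == NULL) return MPI_ERR_NO_MEM;
        *(void **)baseptr = p;          /* baseptr is really a void** */
        return MPI_SUCCESS;
    }

    int MPI_Free_mem(void *base)
    {
        hbw_free(base);
        return MPI_SUCCESS;
    }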

Re: [OMPI users] How to yield CPU more when not computing (was curious behavior during wait for broadcast: 100% cpu)

2016-11-07 Thread Jeff Hammond
On Mon, Nov 7, 2016 at 8:54 AM, Dave Love <d.l...@liverpool.ac.uk> wrote: > > [Some time ago] > Jeff Hammond <jeff.scie...@gmail.com> writes: > > > If you want to keep long-waiting MPI processes from clogging your CPU > > pipeline and heating up your

Re: [OMPI users] OMPI users] Fortran and MPI-3 shared memory

2016-10-27 Thread Jeff Hammond
e C-pointers do in 'testmpi3.c', but clearly >> that isn't happening. Can anyone explain why not, and what is needed to >> make this happen. Any suggestions are welcome. >> >> My environment: >> Scientific Linux 6.8 >> INTEL FORTRAN and ICC version 15.0.2.164 >> OPEN-MPI 2.0.1 >> >> >> T. Rosmon

Re: [OMPI users] Fortran and MPI-3 shared memory

2016-10-25 Thread Jeff Hammond

Re: [OMPI users] Performing partial calculation on a single node in an MPI job

2016-10-17 Thread Jeff Hammond
George: http://mpi-forum.org/docs/mpi-3.1/mpi31-report/node422.htm Jeff On Sun, Oct 16, 2016 at 5:44 PM, George Bosilca <bosi...@icl.utk.edu> wrote: > Vahid, > > You cannot use Fortan's vector subscript with MPI. > -- Jeff Hammond jeff.scie...@gmail.com http://jef

Re: [OMPI users] How to yield CPU more when not computing (was curious behavior during wait for broadcast: 100% cpu)

2016-10-16 Thread Jeff Hammond
pi world, then only start the mpi framework once it's needed? > > Regards,

Re: [OMPI users] job distribution issue

2016-09-21 Thread Jeff Hammond

Re: [hwloc-users] memory binding on Knights Landing

2016-09-08 Thread Jeff Hammond
eneck in any application? Are there codes bindings memory frequently? Because most things inside the kernel are limited by single-threaded performance, it is reasonable for them to be slower than on a Xeon processor, but I've not seen slowdowns th

Re: [OMPI users] stdin issue with openmpi/2.0.0

2016-08-22 Thread Jeff Hammond
K Quantum Espresso has the same option. I never need stdin to run MiniDFT (i.e. QE-lite). Since both codes you name already have the correct workaround for stdin, I would not waste any time debugging this. Just do the right thing from now on and enjoy having your applicati

Re: [OMPI users] mpi_f08 Question: set comm on declaration error, and other questions

2016-08-21 Thread Jeff Hammond
t; Huh. I guess I'd assumed that the MPI Standard would have made sure a > declared communicator that hasn't been filled would have been an error to > use. > > > > When I get back on Monday, I'll try out some other compilers as well as > try different compiler options (e.g.,

[OMPI users] ompi_info -c does not print configure arguments

2016-07-23 Thread Jeff Hammond
FO_KEY: 36 MPI_MAX_INFO_VAL: 256 MPI_MAX_PORT_NAME: 1024 MPI_MAX_DATAREP_STRING: 128 How do I extract configure arguments from an OpenMPI installation? I am trying to reproduce a build exactly and I do not have access to config.log from the origin build. Thanks, Jeff -- Jeff Hammond jeff.scie...

Re: [OMPI users] Continuous integration question...

2016-06-22 Thread Jeff Hammond
apshot.txt thing there: >> >> wget >> https://www.open-mpi.org/software/ompi/v2.x/downloads/latest_snapshot.txt >> wget https://www.open-mpi.org/software/ompi/v2.x/downloads/openmpi-`cat latest_snapshot.txt`.tar.bz2

Re: [OMPI users] mkl threaded works in serail but not in parallel

2016-06-22 Thread Jeff Hammond

Re: [OMPI users] max buffer size

2016-06-05 Thread Jeff Hammond

Re: [OMPI users] Broadcast faster than barrier

2016-05-30 Thread Jeff Hammond
ined. There's a nice paper on self-consistent performance of MPI implementations that has lots of details. Jeff -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [OMPI users] [slightly off topic] hardware solutions with monetary cost in mind

2016-05-21 Thread Jeff Hammond
r the wire before running them? > > If we exclude GPU or other nonMPI solutions, and cost being a primary > factor, what is progression path from 2boxes to a cloud based solution > (amazon and the like...) > > Regards, > MM > -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [OMPI users] Porting MPI-3 C-program to Fortran

2016-04-18 Thread Jeff Hammond
MPI uses void** arguments to pass pointer by reference so it can be updated. In Fortran, you always pass by reference so you don't need this. Just pass your Fortran pointer argument. There are MPI-3 shared memory examples in Fortran somewhere. Try Using Advanced MPI (latest edition) or MPI

Re: [OMPI users] What about MPI-3 shared memory features?

2016-04-11 Thread Jeff Hammond
shared memory features? > > T. Rosmond

Re: [OMPI users] resolution of MPI_Wtime

2016-04-08 Thread Jeff Hammond

Re: [OMPI users] resolution of MPI_Wtime

2016-04-07 Thread Jeff Hammond
on anyone learns from this. It is extremely important to application developers that MPI_Wtime represent a "best effort" implementation on every platform. Other implementations of MPI have very accurate counters. Jeff -- Jeff Hammond jeff.scie...@gmail.com http://jeffhammond.github.io/

Re: [OMPI users] Collective MPI-IO + MPI_THREAD_MULTIPLE

2016-03-24 Thread Jeff Hammond
correct program. Jeff > Cheers, > > Gilles > > >> On 3/25/2016 4:25 AM, Jeff Hammond wrote: >> >> >>> On Thursday, March 24, 2016, Sebastian Rettenberger <rette...@in.tum.de> >>> wrote: >>> Hi, >>> >>> I

Re: [OMPI users] Collective MPI-IO + MPI_THREAD_MULTIPLE

2016-03-24 Thread Jeff Hammond

Re: [OMPI users] Existing and emerging interconnects for commodity PCs

2016-03-21 Thread Jeff Hammond
word 'good', say, 10 years down the road? >> >> Thanks >> Durga >> >> We learn from history that we never learn from history.

Re: [OMPI users] Q: Fortran, MPI_VERSION and #defines

2016-03-21 Thread Jeff Hammond
On Mon, Mar 21, 2016 at 1:37 PM, Brian Dobbins <bdobb...@gmail.com> wrote: > > Hi Jeff, > > On Mon, Mar 21, 2016 at 2:18 PM, Jeff Hammond <jeff.scie...@gmail.com> > wrote: > >> You can consult http://meetings.mpi-forum.org/mpi3-impl-status-Mar15.pdf >>

Re: [OMPI users] Q: Fortran, MPI_VERSION and #defines

2016-03-21 Thread Jeff Hammond
://meetings.mpi-forum.org/mpi3-impl-status-Mar15.pdf to see the status of all implementations w.r.t. MPI-3 as of one year ago. Jeff On Mon, Mar 21, 2016 at 1:14 PM, Jeff Hammond <jeff.scie...@gmail.com> wrote: > Call MPI from C code, where you will have all the preprocessor support you >
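
In C this compile-time feature test is straightforward, because mpi.h defines MPI_VERSION and MPI_SUBVERSION as preprocessor macros (a small sketch):

    #include <mpi.h>

    /* Use a nonblocking collective when the implementation advertises MPI-3,
     * otherwise fall back to the blocking form. */
    int bcast_maybe_nonblocking(void *buf, int count, MPI_Datatype type,
                                int root, MPI_Comm comm)
    {
    #if MPI_VERSION >= 3
        MPI_Request req;
        MPI_Ibcast(buf, count, type, root, comm, &req);
        return MPI_Wait(&req, MPI_STATUS_IGNORE);
    #else
        return MPI_Bcast(buf, count, type, root, comm);
    #endif
    }

The Fortran interfaces expose MPI_VERSION only as a named constant, which is why this kind of compile-time branching has to happen in C (or via an external preprocessing step).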

Re: [OMPI users] Q: Fortran, MPI_VERSION and #defines

2016-03-21 Thread Jeff Hammond
> Or any other suggestions? > > Thanks, > - Brian

Re: [OMPI users] Fault tolerant feature in Open MPI

2016-03-16 Thread Jeff Hammond
ster and > I want to migrate process running in node A to other node, let's say to > node C. > is there a way to do this with open MPI ? thanks. > > Regards, > > Husen > > > > > On Wed, Mar 16, 2016 at 12:37 PM, Jeff Hammond <jeff.scie...@gmail.com > <jav
