0012cf80 B ompi_mpi_info_null
00116038 D ompi_mpi_info_null_addr
00133720 B ompi_mpi_op_null
001163c0 D ompi_mpi_op_null_addr
00135740 B ompi_mpi_win_null
00117c80 D ompi_mpi_win_null_addr
0012d080 B ompi_request_null
00116040 D ompi_request_null_addr
> Thanks,
>
> Kurt
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
h-memalign=64
>>
>> and OpenMPI configure options:
>>
>>
>> '--prefix=/scinet/niagara/software/2022a/opt/gcc-11.2.0/openmpi/4.1.2+ucx-1.11.2'
>> '--enable-mpi-cxx'
>> '--enable-mpi1-compatibility'
>> '--with-hwloc=internal'
>> '--with-knem=/opt/knem-1.1.3.90mlnx1'
>> '--with-libevent=internal'
>> '--with-platform=contrib/platform/mellanox/optimized'
>> '--with-pmix=internal'
>> '--with-slurm=/opt/slurm'
>> '--with-ucx=/scinet/niagara/software/2022a/opt/gcc-11.2.0/ucx/1.11.2'
>>
>> I am then wondering:
>>
>> 1) Is the UCX library considered "stable" for production use with very
>> large problems?
>>
>> 2) Is there a way to "bypass" UCX at runtime?
>>
>> 3) Any idea for debugging this?
>>
>> Of course, I do not yet have a "minimal reproducer" that fails, since the
>> problem shows up only on "large" runs, but I think I could export the data
>> for a 512-process reproducer with the ParMETIS call only...
>>
>> Thanks for helping,
>>
>> Eric
>>
>> --
>>
>> Eric Chamberland, ing., M. Ing
>>
>> Professionnel de recherche
>>
>> GIREF/Université Laval
>>
>> (418) 656-2131 poste 41 22 42
>>
>>
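(Re question 2 above: Open MPI selects its transports through the MCA
framework, so — assuming the UCX components were built as loadable plugins,
as in most distro/cluster builds — UCX can typically be excluded at runtime
without rebuilding, e.g.:

    mpirun --mca pml ^ucx --mca osc ^ucx ...

which pushes the PML back to ob1 over whatever BTLs are available. A sketch
only; the exact component list varies by build.)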
>
> --
> Josh Hursey
> IBM Spectrum MPI Developer
>
> --
> Eric Chamberland, ing., M. Ing
> Professionnel de recherche
> GIREF/Université Laval
> (418) 656-2131 poste 41 22 42
>
> --
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
https://jenkins.open-mpi.org/jenkins/job/open-mpi.build.compilers/8370/
indicates you are not testing GCC 11. Please test this compiler.
https://github.com/open-mpi/ompi/pull/10343 has details.
Jeff
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
V node. It will generate a config.cache file.
>
> Then you can
>
> grep ^ompi_cv_fortran_ config.cache
>
> to generate the file you can pass to --with-cross when cross compiling
> on your x86 system
>
>
> Cheers,
>
>
> Gilles
>
>
> On 9/7/2021 7:35 PM, Jeff Ha
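(Condensing Gilles' recipe into a sketch — the cache file name and target
triplet here are placeholders, not from the thread:

    # on the target node: run configure with caching enabled
    ./configure -C [options]
    # harvest the Fortran probe results
    grep ^ompi_cv_fortran_ config.cache > fortran.cache
    # on the x86 build host: feed them to the cross build
    ./configure --with-cross=$PWD/fortran.cache --host=<target-triplet> [options]

)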
relevant.
Thanks,
Jeff
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
I am running on a single node and do not need any network support. I am
using the NVIDIA build of Open-MPI 3.1.5. How do I tell it to never use
anything related to IB? It seems that ^openib is not enough.
Thanks,
Jeff
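(A sketch of the usual answer — assuming an ob1/vader build, vader being the
shared-memory BTL in the 3.x series: give an explicit include list instead of
excluding openib, e.g.

    mpirun --mca pml ob1 --mca btl self,vader -np 8 ./app

so nothing IB-related is ever selected.)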
$ OMP_NUM_THREADS=1
>> >>
>> >> Assuming you want to learn about MPI (and not the Open MPI internals),
>> >> the books by Bill Gropp et al. are the reference :
>> >> https://www.mcs.anl.gov/research/projects/mpi/usingmpi/
>> >>
>> >> (Using MPI 3rd edition is affordable on amazon)
>> >
>> >
>> > Thanks! Yes, this is what I was after. However, if I wanted to learn
>> about OpenMPI internals, what would be the go-to resource?
>>
>
>
> --
> Jeff Squyres
> jsquy...@cisco.com
>
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
y than traditional MPI codes in
a NUMA context and it is worth mentioning it explicitly if you are using
NWChem, GAMESS, MOLPRO, or other code that uses GA or DDI. If you are
running VASP, CP2K, or other code that uses MPI in a more conventional
manner, don't worry about it.
Jeff
--
Jeff
built OpenMPI
> > 4.0.2 with GCC, Intel and AOCC compilers, all using the same options.
> >
> > hcoll is provided by MLNX_OFED 4.7.3 and configure is run with
> >
> > --with-hcoll=/opt/mellanox/hcoll
> >
> >
>
> --
> Ake Sandgren, HPC2N, Umea Unive
t work, so I’m wondering,
> whether this is a current limitation or are we not supposed to end up in
> this specific …_request_cancel implementation?
>
> Thank you in advance!
>
> Christian
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
requests by now */
> return OMPI_SUCCESS;
> }
>
> The man page for MPI_Cancel does not mention that cancelling Send requests
> does not work, so I’m wondering,
> whether this is a current limitation or are we not supposed to end up in
> this specific …_request_c
cool.
>
It sounds like Open-MPI doesn't properly support the maximum transfer size
of PSM2. One way to work around this is to wrap your MPI collective calls
and do <4G chunking yourself.
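A minimal sketch of such a wrapper (the helper name and the ~1 GiB chunk size
are arbitrary choices, not from this thread):

    #include <mpi.h>

    /* Broadcast 'count' elements in chunks small enough to stay
     * well under the transport's 4 GiB transfer limit. */
    static int Bcast_chunked(void *buf, MPI_Count count, MPI_Datatype type,
                             int root, MPI_Comm comm)
    {
        MPI_Aint lb, extent;
        MPI_Type_get_extent(type, &lb, &extent);
        MPI_Count chunk = (1LL << 30) / extent;  /* ~1 GiB of payload per call */
        char *p = (char *)buf;
        while (count > 0) {
            int n = (int)(count < chunk ? count : chunk);
            int rc = MPI_Bcast(p, n, type, root, comm);
            if (rc != MPI_SUCCESS) return rc;
            p += (MPI_Aint)n * extent;
            count -= n;
        }
        return MPI_SUCCESS;
    }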
Jeff
> Could the error reporting in this case be somehow improved?
>
> I'd be glad
his ?
>> >
>> > MPI_Test(req, &flag, &status);
>> > if (flag){
>> >    MPI_Wait(req, MPI_STATUS_IGNORE);
>> >    free(buffer);
>> > }
>>
>> That should be a no-op, because "req" should have been turned into
>> MPI_REQUEST_NULL if flag==true.
>>
>>
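(In other words, MPI_Test both tests and completes the request; keeping the
thread's "req" as a pointer, the resulting idiom is sketched below:)

    int flag;
    MPI_Test(req, &flag, MPI_STATUS_IGNORE);
    if (flag) {          /* *req is now MPI_REQUEST_NULL */
        free(buffer);    /* safe: the operation has completed */
    }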
or why it works with the old
> OMPI version and not with the new. Any help or pointer would be appreciated.
> Thanks.
> AFernandez
>
>
tch bad_alloc as I expected. It seems that I am
>> > misunderstanding something. Could you please help? Thanks a lot.
>> >
>> >
>> >
>> > Best regards,
>> > Zhen
s. Would you
> recommend that I report this issue on the developer's mailing list or open
> a GitHub issue?
>
> Best wishes,
> Thomas Pak
>
> On Mar 16 2019, at 7:40 pm, Jeff Hammond wrote:
>
> Is there perhaps a different way to solve your problem that doesn’t spawn
> so
> for (;;) {
> MPI_Comm_spawn(cmd, cmd_argv, maxprocs, info, root, comm,
> &intercomm, array_of_errcodes);
>
> MPI_Comm_disconnect(&intercomm);
> }
>
> // If process was spawned
> } else {
>
> puts("I was spawned!");
>
> MPI_Comm_disconnect(&parent);
> }
>
> // Finalize
> MPI_Finalize();
that have the
> same size in both architectures. Another option could be to serialize long.
>
> So my question is: is there any way to pass data that doesn't depend on the
> architecture?
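One portable approach — a sketch, not from the original thread — is to
exchange only the fixed-width types MPI has provided since MPI-2.2, so both
architectures agree on the element size:

    #include <mpi.h>
    #include <stdint.h>

    int64_t v = 42;   /* same size on every architecture */
    MPI_Send(&v, 1, MPI_INT64_T, /*dest=*/1, /*tag=*/0, MPI_COMM_WORLD);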
appear with shared-memory, which is a pretty important conduit.
Jeff
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
h I don't know how
> robust it is these days in GNU Fortran.]
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
you may try setting HWLOC_COMPONENTS=no_os,stop
> in the environment so that hwloc behaves as if the operating system had no
> topology support.
>
> Brice
>
>
>
> On 14/09/2018 at 06:11, Jeff Hammond wrote:
>
> All of the job failures have this warning so I am inclined to thin
o run lstopo on that node?
>
> By the way, you shouldn't use hwloc 2.0.0rc2, at least because it's old,
> it has a broken ABI, and it's an RC :)
>
> Brice
>
>
>
> On 13/09/2018 at 16:12, Jeff Hammond wrote:
>
> I am running ARMCI-MPI over MPICH in a Travis CI Linux
mond/armci-mpi/jobs/425342479 has all of the
details.
Jeff
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
net NICs can
> handle RDMA requests directly? Or am I misunderstanding RoCE / how Open
> MPI's RoCE transport works?
>
> Ben
>
--
2 cents from a user.
> Gus Correa
>
>
> On 08/10/2018 01:52 PM, Jeff Hammond wrote:
>
>> This thread is a perfect illustration of why MPI Forum participants
>> should not flippantly discuss feature deprecation in discussion with
>> users. Users who are not famil
> --
> Jeff Squyres
> jsquy...@cisco.com
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
105590] base/spml_base_select.c:194 - mca_spml_base_select()
>>> select: component ucx selected
>>> [c11-2:105590] spml_ucx.c:82 - mca_spml_ucx_enable() *** ucx ENABLED ***
>>> [c11-1:36522] spml_ucx.c:305 - mca_spml_ucx_a
nter Stuttgart (HLRS)
> Nobelstr. 19
> D-70569 Stuttgart
>
> Tel.: +49(0)711-68565890
> Fax: +49(0)711-6856832
> E-Mail: schuch...@hlrs.de
28b-aligned if the base is. Noncontiguous is actually worse in that the
implementation could allocate the segment for each process with only 64b
alignment.
Jeff
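If stronger alignment is needed, a sketch of the usual workaround is to
over-allocate and align within the segment yourself ('n', 'node_comm', and
the 128-byte target below are illustrative, not from the thread):

    #include <mpi.h>
    #include <stdint.h>

    MPI_Win win;
    double *base;
    MPI_Aint bytes = (MPI_Aint)n * sizeof(double) + 127;  /* slack for 128B */
    MPI_Win_allocate_shared(bytes, sizeof(double), MPI_INFO_NULL,
                            node_comm, &base, &win);
    /* only the base pointer's alignment is guaranteed; fix up the rest */
    double *aligned = (double *)(((uintptr_t)base + 127) & ~(uintptr_t)127);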
> -Nathan
>
> On May 3, 2018, at 9:43 PM, Jeff Hammond <jeff.scie...@gmail.com> wrote:
>
> Given that this seems to b
? (syscall-template.S:84)
> >> ==22815==by 0x583B4A7: poll (poll2.h:46)
> >> ==22815==by 0x583B4A7: poll_dispatch (poll.c:165)
> >> ==22815==by 0x5831BDE: opal_libevent2022_event_base_loop
> (event.c:1630)
> >> ==22815==by 0x57F210D: progress_engine (in
> /usr/local/lib/libopen-p
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
for the MPI C++
> bindings. Hence the deprecation in 2009 and the removal in 2012.
>
> --
> Jeff Squyres
> jsquy...@cisco.com
egards
> Ahmed
" to 0 to see all
> help / error messages
>
>
> I tried fiddling with the MCA command-line settings, but didn't have any
> luck. Is it possible to do this? Can anyone point me to some
> documentation?
>
> Thanks,
>
> Ben
>
lt OpenMPI 3.0.0 on Arch Linux.
>
> Cheers,
>
> Ben
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
allows you to use a more efficient collective algorithm.
> I'm open to any good and elegant suggestions!
>
I won't guarantee that any of my suggestions satisfied either property :-)
Best,
Jeff
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
ter.
>
> On Thursday, January 4, 2018, r...@open-mpi.org <r...@open-mpi.org> wrote:
>
>> Yes, please - that was totally inappropriate for this mailing list.
>> Ralph
>>
>>
>> On Jan 4, 2018, at 4:33 PM, Jeff Hammond <jeff.scie...@gmail.com> w
t the primary impact comes from
>> accessing kernel services. With an OS-bypass network, that shouldn’t happen
>> all that frequently, and so I would naively expect the impact to be at the
>> lower end of the reported scale for those environments. TCP-based systems,
>> though, might be on the other end.
>> >
round this without replacing a lot of Boost code by a
> hand-coded equivalent.
>
> Any suggestions welcome.
>
> Thanks,
>
> Philip
d that this will create further delays. Actually,
> this is the reason I am trying to replace Bcast() and try other things.
>
> I am using Open MPI 2.1.2 and testing on a single computer with 7 MPI
> processes. The ompi_info is the attached file.
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
etails but they may be unnecessarily restrictive at times.
> >
> > On Wed, Sep 20, 2017 at 4:45 PM, Jeff Hammond <jeff.scie...@gmail.com>
wrote:
> >
> >
> > On Wed, Sep 20, 2017 at 5:55 AM, Dave Love <dave.l...@manchester.ac.uk>
wrote:
> > Jeff H
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
On Wed, Sep 20, 2017 at 5:55 AM, Dave Love <dave.l...@manchester.ac.uk>
wrote:
> Jeff Hammond <jeff.scie...@gmail.com> writes:
>
> > Please separate C and C++ here. C has a standard ABI. C++ doesn't.
> >
> > Jeff
>
> [For some value o
On Wed, Sep 20, 2017 at 6:26 AM, Gilles Gouaillardet <
gilles.gouaillar...@gmail.com> wrote:
> On Tue, Sep 19, 2017 at 11:58 AM, Jeff Hammond <jeff.scie...@gmail.com>
> wrote:
>
> > Fortran is a legit problem, although if somebody builds a standalone
> Fortran
>
nks!
>> >> Michael
>> >>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
encies as well).
> >>
> >> Regarding the size limitation of /tmp: I found an opal/mca/shmem/posix
> >> component that uses shmem_open to create a POSIX shared memory object
> >> instead of a file on disk, which is then mmap'ed. Unfortunately, if I
> >> r
an alternative, would it be possible to use anonymous shared memory
> mappings to avoid the backing file for large allocations (maybe above a
> certain threshold) on systems that support MAP_ANONYMOUS and distribute the
> result of the mmap call among the processes on the node?
>
> Thanks,
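(Sketch of the proposed idea — with the caveat, not stated above, that
MAP_ANONYMOUS|MAP_SHARED memory is inherited only across fork(), so
independently launched ranks would still need some handoff mechanism; 'len'
is assumed:)

    #include <sys/mman.h>

    /* anonymous shared mapping: no backing file in /tmp */
    void *seg = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);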
y
> supports allocations up to 60GB, so my second point reported below may be
> invalid. Number 4 still seems curious to me, though.
>
> Best
> Joseph
>
> On 08/25/2017 09:17 PM, Jeff Hammond wrote:
>
>> There's no reason to do anything special for shared memory wi
I'm happy to
>>> provide additional details if needed.
>>>
>>> Best
>>> Joseph
>>> --
>>> Dipl.-Inf. Joseph Schuchart
>>> High Performance Computing Center Stuttgart (HLRS)
>>> Nobelstr. 19
>>> D-70569 Stuttgart
>>>
>>> Tel.
ptions in order to debug. Thanks
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
>>>
>>> I have two questions -
>>>
>>> 1) I am unable to capture the standard error that mpirun throws in a file
>>>
>>> How can I go about capturing the standard error of mpirun?
>>>
>>> 2) Has this error, i.e. double free or corruption, been reported by others?
>>> Is there a bug fix available?
>>>
>>> Regards,
>>>
>>> Ashwin.
>>>
>>>
>>
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
nd link the code using Open MPI 1.6.2?
>
> Thanks,
> Arham Amouei
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
00403769 Unknown Unknown Unknown
>
> _
> SAVE WATER ~ SAVE ENERGY ~ SAVE EARTH
>
> http://sites.google.com/site/kolukulasivasrinivas/
>
> Siva Srinivas Kolukula, PhD
> Scientist - B
> Indian Tsunami Early Warning Centre (ITEWC)
> Advisory Services and Satellite Oceanography Group (ASG)
> Indian National Centre for Ocean Information Services (INCOIS)
> "Ocean Valley"
> Pragathi Nagar (B.O)
> Nizampet (S.O)
> Hyderabad - 500 090
> Telangana, INDIA
>
> Office: 040 23886124
>
>
> Cell: +91 9381403232; +91 8977801947
>
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
On Wed, Mar 15, 2017 at 5:44 PM Jeff Squyres (jsquyres) <jsquy...@cisco.com>
wrote:
> On Mar 15, 2017, at 8:25 PM, Jeff Hammond <jeff.scie...@gmail.com> wrote:
> >
> > I couldn't find the docs on mpool_hints, but shouldn't there be a way to
> disable registration via
blem, but I assume they'd like to know for users' sake,
> > particularly if it's not going to be addressed. I wonder what else
> > might be affected.
>
>
> --
> Jeff Squyres
> jsquy...@cisco.com
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
>>
>>>> Many thanks in advance!
>>>>
>>>> Cheers
>>>> Joseph
>>>>
>>>> --
>>>> Dipl.-Inf. Joseph Schuchart
>>>> High Performance Computing Center Stuttgart (HLRS)
>>
> > Dipl.-Inf. Joseph Schuchart
> > High Performance Computing Center Stuttgart (HLRS)
> > Nobelstr. 19
> > D-70569 Stuttgart
> >
> > Tel.: +49(0)711-68565890
> > Fax: +49(0)711-6856832
> > E-Mail: schuch...@hlrs.de
> >
s related to MPI
>
>
> Oscar Mojica
> Geologist Ph.D. in Geophysics
> SENAI CIMATEC Supercomputing Center
> Lattes: http://lattes.cnpq.br/0796232840554652
>
>
>
PATH, but $LIBRARY_PATH. It's
> ld.so (at runtime) which looks at $LD_LIBRARY_PATH.
>
> Samuel
You need to cross-compile binaries for Knights Corner (KNC), aka Xeon Phi 71xx,
if you're on a Xeon host. KNC is x86, but the binary format differs, as your
analysis indicates.
You can either ssh to the card and build natively, build on the host with the
k1om GCC tool chain, or build on the host with Intel
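(For the Intel route, the essence is one flag — assuming a KNC-capable
compiler install:

    icc -mmic hello.c -o hello.knc   # cross-compile on the Xeon host for KNC

)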
gt;> "It turns out my Xcode was messed up as I was missing /usr/include/.
>> After rerunning xcode-select --install it works now."
>>
>> On my OS X 10.11.6, I have /usr/include/stdint.h without having the
>> PGI compilers. This may be related to th
cal effects of spinning and
> ameliorations on some sort of "representative" system?
>
>
None that are published, unfortunately.
Best,
Jeff
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
I'm not sure. But, no matter what, does anyone have thoughts on how to
> solve this?
>
> Thanks,
> Matt
>
> --
> Matt Thompson
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
Have you tried subcommunicators? MPI is well-suited to hierarchical
parallelism since MPI-1 days.
Additionally, MPI-3 enables MPI+MPI as George noted.
Your question is probably better suited for Stack Overflow, since it's not
implementation-specific...
Jeff
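A minimal sketch of the subcommunicator route — MPI-3's split-by-type makes
the node level trivial:

    /* one subcommunicator per shared-memory node */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);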
On Fri, Nov 25, 2016 at 3:34 AM
>
>
>
>1. MPI_ALLOC_MEM integration with memkind
>
It would make sense to prototype this as a standalone project that is
integrated with any MPI library via PMPI. It's probably a day or two of
work to get that going.
Jeff
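A sketch of such a shim (hypothetical — routes MPI_Alloc_mem into a memkind
high-bandwidth pool via PMPI-style interposition; error handling elided):

    #include <mpi.h>
    #include <memkind.h>

    /* intercepts the application's MPI_Alloc_mem calls when linked in */
    int MPI_Alloc_mem(MPI_Aint size, MPI_Info info, void *baseptr)
    {
        void **p = (void **)baseptr;
        *p = memkind_malloc(MEMKIND_HBW, (size_t)size);
        return (*p != NULL) ? MPI_SUCCESS : MPI_ERR_NO_MEM;
    }

    int MPI_Free_mem(void *base)
    {
        memkind_free(MEMKIND_HBW, base);
        return MPI_SUCCESS;
    }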
--
Jeff Hammond
jeff.scie...@gmail.com
http://jef
On Mon, Nov 7, 2016 at 8:54 AM, Dave Love <d.l...@liverpool.ac.uk> wrote:
>
> [Some time ago]
> Jeff Hammond <jeff.scie...@gmail.com> writes:
>
> > If you want to keep long-waiting MPI processes from clogging your CPU
> > pipeline and heating up your
e C-pointers do in 'testmpi3.c', but clearly
>> that isn't happening. Can anyone explain why not, and what is needed to
>> make this happen. Any suggestions are welcome.
>>
>> My environment:
>> Scientific Linux 6.8
>> INTEL FORTRAN and ICC version 15.0.2.164
>> OPEN-MPI 2.0.1
>>
>>
>> T. Rosmon
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
George:
http://mpi-forum.org/docs/mpi-3.1/mpi31-report/node422.htm
Jeff
On Sun, Oct 16, 2016 at 5:44 PM, George Bosilca <bosi...@icl.utk.edu> wrote:
> Vahid,
>
> You cannot use Fortran's vector subscripts with MPI.
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jef
pi world, then only start the mpi framework once
it's needed?
>
> Regards,
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jef
>> Structural Biology and Bioinformatics Division
>> CSIR-Indian Institute of Chemical Biology
>>
>> Kolkata 700032
>>
>> INDIA
>>
>
eneck in any application? Are there codes that bind memory
frequently?
Because most things inside the kernel are limited by single-threaded
performance, it is reasonable for them to be slower than on a Xeon
processor, but I've not seen slowdowns th
K Quantum Espresso has the same option. I
never need stdin to run MiniDFT (i.e. QE-lite).
Since both codes you name already have the correct workaround for stdin, I
would not waste any time debugging this. Just do the right thing from now
on and enjoy having your applicati
> Huh. I guess I'd assumed that the MPI Standard would have made sure a
> declared communicator that hasn't been filled would have been an error to
> use.
>
>
>
> When I get back on Monday, I'll try out some other compilers as well as
> try different compiler options (e.g.,
MPI_MAX_INFO_KEY: 36
MPI_MAX_INFO_VAL: 256
MPI_MAX_PORT_NAME: 1024
MPI_MAX_DATAREP_STRING: 128
How do I extract configure arguments from an OpenMPI installation? I am
trying to reproduce a build exactly and I do not have access to config.log
from the origin build.
Thanks,
Jeff
--
Jeff Hammond
jeff.scie...
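(For the archives: ompi_info records this, assuming a reasonably recent
release —

    ompi_info --all | grep -i 'configure command'

prints the configure line the installation was built with.)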
apshot.txt thing there:
>>
>> wget
>> https://www.open-mpi.org/software/ompi/v2.x/downloads/latest_snapshot.txt
>> wget https://www.open-mpi.org/software/ompi/v2.x/downloads/openmpi-`cat
>> latest_snapshot.txt`.tar.bz2
>>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
ined.
There's a nice paper on self-consistent performance of MPI implementations
that has lots of details.
Jeff
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
r the wire before running them?
>
> If we exclude GPU or other non-MPI solutions, with cost being a primary
> factor, what is the progression path from 2 boxes to a cloud-based solution
> (Amazon and the like...)?
>
> Regards,
> MM
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
MPI uses void** arguments to pass a pointer by reference so it can be updated. In
Fortran, you always pass by reference, so you don't need this. Just pass your
Fortran pointer argument.
There are MPI-3 shared memory examples in Fortran somewhere. Try Using Advanced
MPI (latest edition) or MPI
shared memory features?
>
> T. Rosmond
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
on anyone learns from this. It
is extremely important to application developers that MPI_Wtime represent a
"best effort" implementation on every platform.
Other implementations of MPI have very accurate counters.
Jeff
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
correct
program.
Jeff
> Cheers,
>
> Gilles
>
>
>> On 3/25/2016 4:25 AM, Jeff Hammond wrote:
>>
>>
>>> On Thursday, March 24, 2016, Sebastian Rettenberger <rette...@in.tum.de>
>>> wrote:
>>> Hi,
>>>
>>> I
puting
> Boltzmannstrasse 3, 85748 Garching, Germany
> http://www5.in.tum.de/
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
word 'good', say, 10 years down the road?
>>
>> Thanks
>> Durga
>>
>> We learn from history that we never learn from history.
>>
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
On Mon, Mar 21, 2016 at 1:37 PM, Brian Dobbins <bdobb...@gmail.com> wrote:
>
> Hi Jeff,
>
> On Mon, Mar 21, 2016 at 2:18 PM, Jeff Hammond <jeff.scie...@gmail.com>
> wrote:
>
>> You can consult http://meetings.mpi-forum.org/mpi3-impl-status-Mar15.pdf
>>
http://meetings.mpi-forum.org/mpi3-impl-status-Mar15.pdf to
see the status of all implementations w.r.t. MPI-3 as of one year ago.
Jeff
On Mon, Mar 21, 2016 at 1:14 PM, Jeff Hammond <jeff.scie...@gmail.com>
wrote:
> Call MPI from C code, where you will have all the preprocessor support you
>
> Or any other suggestions?
>
> Thanks,
> - Brian
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
ster and
> I want to migrate a process running on node A to another node, say to
> node C.
> Is there a way to do this with Open MPI? Thanks.
>
> Regards,
>
> Husen
>
>
>
>
> On Wed, Mar 16, 2016 at 12:37 PM, Jeff Hammond <jeff.scie...@gmail.com