[Mpi-forum] reading announcement for pull 978: standard ABI (new Fortran idea)

2024-09-08 Thread Jeff Hammond via mpi-forum
https://github.com/mpi-forum/mpi-standard/pull/978 adds Fortran.

This one is not perfect but I expect the fixes to whatever is wrong with it
to be minor.

Jeff

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


[Mpi-forum] reading (and hopefully voting) announcement for pull 977: standard ABI (no Fortran yet)

2024-09-08 Thread Jeff Hammond via mpi-forum
https://github.com/mpi-forum/mpi-standard/pull/977

This is the third reading of the ABI proposal.  I removed most of the
Fortran stuff for this one, because I hope to get a no-no vote on the
changes relative to the last reading, followed by a vote on it.

I have attempted to make everyone happy.  At some point, we need to have a
vote on this, or we will never get anything done.

The Fortran 2.0 ABI pull request will happen tomorrow, before the end of
the day my time.

Jeff

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] MPI Forum Next Week

2024-06-17 Thread Jeff Hammond via mpi-forum
I will provide an update on the MPI ABI discussions, including:

1. Why we need MPI_X_{to,from}int handle conversion functions instead of
f2c/c2f in the C API in order to support Fortran in a non-stupid way (a
short sketch follows this list).  This is completely noncontroversial and
makes the C API better, since apparently even C++ users are using f2c/c2f
to "(de)serialize" handles.
2. Why we need new callbacks to support third-party language bindings
(including Fortran in an implementation-agnostic way).  This is useful both
for Vapaa and mpi4py.  There's one unsolved issue to discuss.
3. Why we need some sort of sentinel registration.  This addresses the
hackish way the C implementation knows what, e.g., MPI_IN_PLACE is in
Fortran.
4. Why we need Fortran datatype registration.  Basically, the Fortran API
tells the C API the mapping from Fortran to C types so the C implementation
doesn't need to know how the Fortran compiler works.  There might need to
be an init hook here since it is legal to initialize MPI from C and then
use Fortran features.
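
For reference, here is a minimal sketch of the conversion-function idea from
point 1.  The MPI_Comm_toint / MPI_Comm_fromint names below follow the pattern
under discussion and are assumptions, not settled API; MPI_Comm_c2f is the
existing mechanism being replaced.

#include <mpi.h>

/* Today the only portable way to flatten a handle to an integer is to go
   through the Fortran conversion, even from C or C++ code: */
int serialize_comm_old(MPI_Comm comm)
{
    return (int) MPI_Comm_c2f(comm);   /* drags MPI_Fint semantics into C */
}

/* The proposed, language-neutral pair would look roughly like this:
 *
 *     int      handle = MPI_Comm_toint(comm);
 *     MPI_Comm comm2  = MPI_Comm_fromint(handle);
 *
 * so bindings and C++ users can (de)serialize handles without touching
 * anything Fortran-specific. */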

https://github.com/pmodels/mpich/pull/6953 and
https://github.com/pmodels/mpich/pull/6965 have some discussion.  You don't
have to read these or the thousands of Slack messages if you don't want to, as
the purpose of this plenary is to summarize the situation and get guidance
on the text changes I should make.

Jeff

On Wed, Jun 12, 2024 at 1:59 AM Martin Schulz via mpi-forum <
mpi-forum@lists.mpi-forum.org> wrote:

> Hi all,
>
>
>
> We have our next MPI Forum meeting scheduled for next week, at the usual
> times for virtual meetings from 9am to 1pm CT on Monday, Tuesday and
> Thursday (as Wednesday is a US holiday that we seemed to have missed when
> scheduling this).
>
>
>
> My apologies, though, for not sending a reminder for this – I have to
> admit, this sneaked up on me a bit quicker than I thought and I did not see
> this in time to catch it at the 2 week mark. Unfortunately, no-one else
> happened to catch this either, which now leaves us with only one ballot item,
> the change of rules we discussed last time. Wes and I thought about whether
> we could mitigate this with a “creative” use of no/no votes, but we both
> quickly came to the conclusion that would not be in the spirit of the forum
> and the rules and, hence, would like to refrain from this.
>
>
>
> Nevertheless, we can and should still use this meeting to make progress on
> some of the larger points, in particular the ABI text. With the timeline for
> (what we commonly expect to be) MPI 4.2 not set, this should actually not
> set us back in time.
>
>
>
> Therefore, if you have any particular items you would like to have on the
> agenda, please let Wes and me know, and we will add them to the agenda
> asap. Also, if there are WGs who would like to use larger blocks for
> dedicated discussion time, this would be a good opportunity as well.
>
>
>
> Wes has already created a registration link, so please register asap if
> you plan to attend, so we can get the needed data for eligibility and quorums.
>
>
>
> https://www.mpi-forum.org/meetings/2024/06/logistics
>
>
>
> Thanks and my apologies again for the missing reminder – let’s still make
> the best out of it!
>
>
>
> Martin
>
>
>
>
>
> --
>
> Prof. Dr. Martin Schulz, Chair of Computer Architecture and Parallel
> Systems
>
> Department of Informatics, TU-Munich, Boltzmannstraße 3, D-85748 Garching
>
> Member of the Board of Directors at the Leibniz Supercomputing Centre (LRZ)
>
> Email: schu...@in.tum.de
>
>
> ___
> mpi-forum mailing list
> mpi-forum@lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum
>


-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/

[Mpi-forum] reading announcement: fixed size logical types

2024-03-03 Thread Jeff Hammond via mpi-forum
https://github.com/mpi-forum/mpi-issues/issues/699
https://github.com/mpi-forum/mpi-standard/pull/963

Jeff

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


[Mpi-forum] reading announcement: PR 958 (ABI)

2024-02-29 Thread Jeff Hammond via mpi-forum
https://github.com/mpi-forum/mpi-standard/pull/958 is not the final version
but will be final on Sunday.

This fixes most of the prior complaints and adds what can be added about
Fortran.  However, a full standard ABI for Fortran is not included because
it's not practical at this time.

Jeff

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


[Mpi-forum] reading announcement for pull 875: standard ABI

2023-08-29 Thread Jeff Hammond via mpi-forum
See https://github.com/mpi-forum/mpi-standard/pull/875.

I expect to read this more than once but we have to start somewhere.

This PR defines the overall structure and specifies many of the ABI
features.  Many of the choices have strong consensus in the working group.

The constants are not listed in tables yet because that's a bunch of LaTeX
tedium for which I have not yet had time.  Nothing in this PR depends on
the values of the constants, and thus they can be added in a second PR.

There are open issues regarding Fortran that will be added later, possibly
as a different PR, depending on the working group discussion later today.

The current proposal assumes one choice of MPI_Aint, which is only relevant
in the context of exotic platforms like CHERI that have wide pointers
(pointers are bigger than addresses).
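
For anyone who has not run into CHERI, a tiny sketch (plain C, no MPI) of the
distinction that makes this choice matter: on mainstream 64-bit targets all
three values below are 8, while on CHERI the pointer is wider than the address.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    printf("sizeof(void *)   = %zu\n", sizeof(void *));   /* pointer width, includes capability bits on CHERI */
    printf("sizeof(size_t)   = %zu\n", sizeof(size_t));   /* address / object-size width */
    printf("sizeof(intptr_t) = %zu\n", sizeof(intptr_t)); /* integer that must round-trip a pointer */
    return 0;
}

Whether MPI_Aint tracks the pointer width or the address width is exactly the
choice the proposal pins down.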

Jeff

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


[Mpi-forum] has anyone ever implemented MPI for a 16-bit OS or x86 real mode or DOS?

2023-08-25 Thread Jeff Hammond via mpi-forum
I am trying to figure out whether there is any justification for the continued
mention of segmented addressing in the standard.  Also, it is not clear to
me that we have ever had any implementation experience that gives us the
expertise to standardize features for it.

x86 real mode support was dropped in Windows 3.1.  Linux never supported it
because it was created for the 386.  I cannot find any evidence that anyone has
ever supported MPI on a 286, or any operating system that supported 16-bit
real mode.

https://en.wikipedia.org/wiki/Coherent_(operating_system) ran on 286 but
didn't support TCP/IP so it's not clear how MPI would have achieved
parallelism.  https://en.wikipedia.org/wiki/Xenix was discontinued in
1987.  What operating system are we using as the reference for reasoning
about how MPI works on such systems?

https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=06e491ccb3ce9747ae4754b8bdd8c249c3276733
has some relevant history.

Jeff

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


mpi-forum@lists.mpi-forum.org

2023-08-24 Thread Jeff Hammond via mpi-forum
I rewrote the text in response to feedback.

I just removed the part about segmented addressing, which is no longer
relevant.  If you think we need to say something about not supporting
16-bit x86 real mode, feel free to propose a change.

Jeff

On Thu, Aug 24, 2023 at 12:36 PM Jeff Hammond 
wrote:

> https://github.com/mpi-forum/mpi-standard/pull/872, which addresses
> https://github.com/mpiwg-abi/mpi-standard/tree/issue-738
>
>>
> If you want to fight for continued support for segmented addressing or
> deny the behavior of & in C99, now is your chance.
>
> Jeff
>
> --
> Jeff Hammond
> jeff.scie...@gmail.com
> http://jeffhammond.github.io/
>


-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


mpi-forum@lists.mpi-forum.org

2023-08-24 Thread Jeff Hammond via mpi-forum
https://github.com/mpi-forum/mpi-standard/pull/872, which addresses
https://github.com/mpiwg-abi/mpi-standard/tree/issue-738

>
If you want to fight for continued support for segmented addressing or deny
the behavior of & in C99, now is your chance.

Jeff

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


[Mpi-forum] reading announcement for issue 693 / pull 816 (normative language references)

2023-08-21 Thread Jeff Hammond via mpi-forum
I am not sure what the exact deadline is, but I will read this one
virtually in Bristol.

https://github.com/mpi-forum/mpi-standard/pull/816, which addresses
https://github.com/mpi-forum/mpi-issues/issues/693.

Jeff

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] MPI 4.1 Author Names and Institutions

2023-08-15 Thread Jeff Hammond via mpi-forum
Given that we do most of the work on Zoom, GitHub and Slack now, I wonder
if we need a more expansive filter.

For example, the following contributed to
https://github.com/mpi-forum/mpi-standard/pull/767 in some form.

Simon Byrne - California Institute of Technology
Lisandro Dalcin - KAUST

Jeff

On Wed, Aug 9, 2023 at 7:33 PM Wes Bland via mpi-forum <
mpi-forum@lists.mpi-forum.org> wrote:
>
> Hi all,
>
> I just generated an updated list of author names and institutions as they
would be published in MPI 4.1. I’ve previously sent out a link with all of
the author names, but I forgot to include the institutions that time.
Please take a look at this PDF to make sure your information and that of
the places you’ve represented in the MPI 4.1 cycle is accurate. I’ve only
included the relevant 4 pages.
>
> https://github.com/mpi-forum/mpi-standard/files/12304747/mpi41-report.pdf
>
> Thanks,
> Wes
> ___
> mpi-forum mailing list
> mpi-forum@lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum



--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] All MPI implementors: Implementation of MPI constants in C and Fortran / MPI meeting May 3, 2023 9am CDT discussion

2023-05-04 Thread Jeff Hammond via mpi-forum
Many constants cannot be used as a "fixed array length in a declaration"
because they are negative numbers, which is not only legal but the only
choice for MPI_UNDEFINED, so you cannot use that criterion for anything.

We say "All named constants, with the exceptions noted below for Fortran,
can be used in initialization expressions or assignments," which means the
following is valid:

#include <mpi.h>
static int x = MPI_ANY_SOURCE;
int main(void) { return 0; }

C static global initialization of integers requires compile-time constants
(example below).  While we may have implied that all constants could be
link-time constants, this is only possible for non-integer constants, i.e.
handles and buffer address constants.

% cat init.c
extern int w;
static int* p = &w;
static int q = w;

% gcc -c init.c
init.c:3:16: error: initializer element is not a compile-time constant
static int q = w;
               ^
1 error generated.
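
To spell out which uses require an integer constant expression, here is a tiny
sketch with hypothetical names (not from any real mpi.h):

#define GOOD_CONSTANT  (-1)       /* integer constant expression */
extern const int LINK_CONSTANT;   /* link-time constant only */

static int a = GOOD_CONSTANT;     /* OK: compile-time constant */
/* static int b = LINK_CONSTANT;     error in C: not a constant expression */

int classify(int tag)
{
    switch (tag) {
    case GOOD_CONSTANT:           /* OK: case labels require integer constant expressions */
        return 1;
    /* case LINK_CONSTANT:           error: not an integer constant expression */
    default:
        return 0;
    }
}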

Jeff

On Tue, May 2, 2023 at 9:51 PM Rolf Rabenseifner via mpi-forum <
mpi-forum@lists.mpi-forum.org> wrote:
>
> Dear implementor of an MPI library,
>
> Based on the discussion today in the MPI forum meeting,
> I have the following urgent question to you:
>
>   Did you implement all MPI constants that are described in Annex A.1.1
> as of type "const int (or unnamed enum)"
>   as "C integer constant expression"
> (according to the specification of the C language)?
>
> If yes, then all is okay and the proposed change of the MPI standard in
> https://github.com/mpi-forum/mpi-standard/pull/821/files
> will not require any change of your MPI implementation.
>
> If not, then please can you tell which MPI constant(s) MPI_XXX
>
> - have you implemented in a way that
>
>  - it still can be used in an initialization expression,
>i.e., in an application statement like
>  int x=MPI_XXX;
>  - but it cannot be used as a case-label in a switch statement like
>  switch(MPI_XXX) {
>  case MPI_XXX: printf("XXX\n");
>  default: ;
>  }
>  - and cannot be used as a fixed array length in a declaration like
>  float x[MPI_XXX];
>
> - and how did you implement such an MPI constant so that this problem arises?
>
> We'll discuss this again tomorrow May 3, 2023 at the beginning
> of the MPI forum meeting.
>
> See you hopefully at the meeting,
> and hopefully reporting that you have no problem with the proposal,
>
> kind regards
> Rolf
>
>
> - Original Message -
> > From: "Rolf Rabenseifner" 
> > To: "Main MPI Forum mailing list" 
> > Cc: "Puri Bangalore" , "Claudia Blaas-Schenner" <
claudia.blaas-schen...@tuwien.ac.at>, "Jeff
> > Hammond" 
> > Sent: Tuesday, May 2, 2023 2:04:01 PM
> > Subject: Re: Informal meeting announcement for the May 2023 meeting of
the MPI Forum
>
> > Dear forum members, dear Wesley,
> >
> > please can we add in the agenda after
> >
> >  Informal Errata Reading  705  Fortran has only compile-time constants  Rolf
> >
> > an additional slot
> >
> >  Errata Discussion  657  All C const int (or unnamed enum) as compile-time
> >  constants  Rolf, Jeff H.
> >
> > Best regards
> > Rolf
> >
> >
> > - Original Message -
> >> From: "Rolf Rabenseifner" 
> >> To: "Main MPI Forum mailing list" 
> >> Cc: "Puri Bangalore" , "Claudia Blaas-Schenner"
> >> 
> >> Sent: Saturday, April 29, 2023 7:38:00 AM
> >> Subject: Informal meeting announcement for the May 2023 meeting of the
MPI Forum
> >
> >> Dear forum members,
> >>
> >> I would like to make the following announcements for the next MPI
Forum Meeting
> >> (May 2-5, 2023):
> >>
> >> - informal reading and discussion
> >>#705 Errata: Fortran has only compile-time constants    Rolf
> >>  Issue https://github.com/mpi-forum/mpi-issues/issues/705
> >>  PR    https://github.com/mpi-forum/mpi-standard/pull/819
> >>  PDF   https://github.com/mpi-forum/mpi-standard/files/11358444/mpi41-report_Issue705_PR819.pdf
> >>
> >> This errata should be completely discussed in the next (May 2-5, 2023) meeting.
> >> It is then planned to have a final errata reading plus errata vote at the
> >> next meeting (July 10-13, 2023).
> >>
> >> It has been a bug since MPI-2.2 because a problem in C was directly mapped
> >> to Fortran.
> >> This mapping was wrong.
> >> In Fortran, there is no distinction between link-time and compile-time
> >> constants.
> >>
> >> Best regards
> >> Rolf Rabenseifner
> >
> >
> > --
> > Dr. Rolf Rabenseifner . . . . . . . . . .. . . rabenseif...@hlrs.de .
> > High Performance Computing Center (HLRS) . . . ++49(0)711/685-65530 .
> > University of Stuttgart . . . . . . www.hlrs.de/people/rabenseifner .
> > Nobelstr. 19, 70569 Stuttgart, Germany
>
> --
> Dr. Rolf Rabenseifner . . . . . . . . . .. . . rabenseif...@hlrs.de .
> High Performance Computing Center (HLRS) . . . ++49(0)711/685-65530 .
> University of Stuttgart . . . . . . www.hlrs.de/people/rabenseifner .
> Nobelstr. 19, 70569 Stuttgart, Germany
> ___
> mpi-forum mailing list
> mpi-forum@lists.mpi-forum.org

[Mpi-forum] question about MPI_UNDEFINED and MPI_DISPLACEMENT_CURRENT

2023-04-03 Thread Jeff Hammond via mpi-forum
Does anyone know the history for either of these values?

MPI_UNDEFINED = -32766
MPI_DISPLACEMENT_CURRENT = -54278278

MPI_UNDEFINED is close enough to SHRT_MIN but the other one doesn't make
sense to me.

Thanks,

Jeff

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


[Mpi-forum] does anyone use MPI_Register_datarep and related?

2023-03-03 Thread Jeff Hammond via mpi-forum
I can't find these in the MPICH test suite or anywhere else on the
internet, except in contexts where they are included because they are part
of the MPI API.

These functions have been in MPI since 2.1, i.e. 2008.  If there is no
usage after 15 years, we should deprecate these functions.  As best I can
tell, they are nontrivial to implement, so their existence is a distraction
for anyone wishing to apply their creative energy to MPI I/O.
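
For anyone who has never seen these functions, a minimal sketch of what
registering a (trivial, identity) data representation involves; "my_rep" and
the callback are made up for illustration:

#include <mpi.h>
#include <stddef.h>

/* Extent callback: here the file extent is simply the in-memory extent. */
static int my_extent_fn(MPI_Datatype type, MPI_Aint *file_extent, void *extra_state)
{
    MPI_Aint lb;
    (void) extra_state;
    return MPI_Type_get_extent(type, &lb, file_extent);
}

int register_identity_datarep(void)
{
    /* MPI_CONVERSION_FN_NULL means data is laid out in the file exactly as
       it is in memory, so no read/write conversion callbacks are invoked. */
    return MPI_Register_datarep("my_rep",
                                MPI_CONVERSION_FN_NULL,
                                MPI_CONVERSION_FN_NULL,
                                my_extent_fn,
                                NULL);
}

A real (non-identity) representation also has to supply the read and write
conversion callbacks, which is where the implementation burden comes from; the
registered name is then passed as the datarep argument of MPI_File_set_view.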

Jeff

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


[Mpi-forum] MPI-4.1 announcements for May (not March) 2023 voting meeting

2023-03-02 Thread Jeff Hammond via mpi-forum
I’d like to announce a second vote for the upcoming *May 2023* meeting.



Issue: https://github.com/mpi-forum/mpi-issues/issues/645

PR: https://github.com/mpi-forum/mpi-standard/pull/767



Yes, this is for May, not March.  I am bad at managing deadlines so I am
announcing it two months out so I don't miss the two week deadline.


Jeff


-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] why do we only support caching on win/comm/datatype?

2023-01-18 Thread Jeff Hammond via mpi-forum
Fair, but let’s not try to boil the ocean by starting from ‘every object
needs attributes’.  The forum tends to be more successful with modestly
scoped efforts.

Sent from my iPhone

> On 18. Jan 2023, at 19.40, Holmes, Daniel John  
> wrote:
> 
> 
> Hi all,
>  
> I believe “orthogonality” here is intended to mean “every object needs this 
> feature” rather than “every Forum member needs to be involved” (at least, I 
> hope so).
>  
> We have a single location for attributes-that-have-callbacks: section 7.7 in 
> MPI-4.0 – although there are also window attributes (in section 12.2.6), 
> which are something else entirely. It looks like that subsection (7.7) 
> could/should be moved into the External Interfaces chapter, which would mean 
> all the objects to which it applies have been lexically introduced before the 
> feature/aspect is described. It looks like subsection 7.8 should appear later 
> in the document too.
>  
> Best wishes,
> Dan.
>  
> From: mpi-forum  On Behalf Of Jeff 
> Hammond via mpi-forum
> Sent: 18 January 2023 17:28
> To: Main MPI Forum mailing list 
> Cc: Jeff Hammond 
> Subject: Re: [Mpi-forum] why do we only support caching on win/comm/datatype?
>  
This isn’t an orthogonality issue. The issue is not linear dependence of
> features but a null space caused by a surprising failure to pursue 
> consistency in the attributes feature set.
>  
> We have a chapter for attributes. I see no need for another WG to fix this. 
>  
> Sent from my iPhone
> 
> 
> On 18. Jan 2023, at 19.21, Skjellum, Anthony via mpi-forum 
>  wrote:
> 
> 
> We need a cross-cutting WG on "orthogonality" of feature set in MPI-5.
>  
>  
> Anthony Skjellum, PhD
> Professor of Computer Science and Chair of Excellence
> Director, SimCenter
> University of Tennessee at Chattanooga (UTC)
> tony-skjel...@utc.edu  [or skjel...@gmail.com]
> cell: 205-807-4968
>  
> From: mpi-forum  on behalf of Koziol, 
> Quincey via mpi-forum 
> Sent: Wednesday, January 18, 2023 12:14 PM
> To: Main MPI Forum mailing list 
> Cc: Koziol, Quincey 
> Subject: Re: [Mpi-forum] why do we only support caching on win/comm/datatype?
>  
> “third” on attributes are necessary for MPI.  HDF5 uses them to make certain 
> that cached file data gets written to the file (and it is closed properly) 
> before MPI_Finalize() in the world model.   Frankly, I wasn’t paying enough 
> attention to the sessions work ten years ago and didn’t realize that they 
> aren’t available as a mechanism for getting this same action when a session 
> is terminated.   This is a critical need to avoid corrupting user data.
> 
> Jeff - please add me to your work on adding attributes to requests and ops, 
> and I’ll write text for adding attributes to sessions.
> 
> Quincey
> 
> 
> > On Jan 16, 2023, at 2:10 PM, Jed Brown via mpi-forum 
> >  wrote:
> > 
> > CAUTION: This email originated from outside of the organization. Do not 
> > click links or open attachments unless you can confirm the sender and know 
> > the content is safe.
> > 
> > 
> > 
> > Second that MPI attributes do not suck. PETSc uses communicator attributes 
> > heavily to avoid lots of confusing or wasteful behavior when users pass 
> > communicators between libraries and similar comments would apply if other 
> > MPI objects were passed between libraries in that way.
> > 
> > It was before my time, but I think PETSc's use of attributes predates 
> > MPI-1.0 and MPI's early and pervasive support for attributes is one of the 
> > things I celebrate when discussing software engineering of libraries 
> > intended for use by other libraries versus those made for use by 
> > applications. Please don't dismiss attributes even if you don't enjoy them.
> > 
> > Jeff Hammond via mpi-forum  writes:
> > 
> >> The API is annoying but it really only gets used in library middleware by 
> >> people like us who can figure out the void* casting nonsense and use it 
> >> correctly.
> >> 
> >> Casper critically depends on window attributes.
> >> 
> >> Request attributes are the least intrusive way to allow libraries to do 
> >> completion callbacks. They give users a way to do this that adds zero 
> >> instructions to the critical path and is completely invisible unless 
> >> actually requires.
> >> 
> >> Attributes do not suck and people should stop preventing those of us who 
> >> write libraries to make the MPI ecosystem better from doing our jobs 
> >> because they want to whine about problems they’re too l

Re: [Mpi-forum] why do we only support caching on win/comm/datatype?

2023-01-18 Thread Jeff Hammond via mpi-forum
This isn’t an orthogonality issue. The issue is not linear dependence of
features but a null space caused by a surprising failure to pursue consistency 
in the attributes feature set.

We have a chapter for attributes. I see no need for another WG to fix this. 

Sent from my iPhone

> On 18. Jan 2023, at 19.21, Skjellum, Anthony via mpi-forum 
>  wrote:
> 
> 
> We need a cross-cutting WG on "orthogonality" of feature set in MPI-5.
> 
> 
> Anthony Skjellum, PhD
> Professor of Computer Science and Chair of Excellence
> Director, SimCenter
> University of Tennessee at Chattanooga (UTC)
> tony-skjel...@utc.edu  [or skjel...@gmail.com]
> cell: 205-807-4968
> 
> From: mpi-forum  on behalf of Koziol, 
> Quincey via mpi-forum 
> Sent: Wednesday, January 18, 2023 12:14 PM
> To: Main MPI Forum mailing list 
> Cc: Koziol, Quincey 
> Subject: Re: [Mpi-forum] why do we only support caching on win/comm/datatype?
>  
> “third” on attributes are necessary for MPI.  HDF5 uses them to make certain 
> that cached file data gets written to the file (and it is closed properly) 
> before MPI_Finalize() in the world model.   Frankly, I wasn’t paying enough 
> attention to the sessions work ten years ago and didn’t realize that they 
> aren’t available as a mechanism for getting this same action when a session 
> is terminated.   This is a critical need to avoid corrupting user data.
> 
> Jeff - please add me to your work on adding attributes to requests and ops, 
> and I’ll write text for adding attributes to sessions.
> 
> Quincey
> 
> 
> > On Jan 16, 2023, at 2:10 PM, Jed Brown via mpi-forum 
> >  wrote:
> > 
> > CAUTION: This email originated from outside of the organization. Do not 
> > click links or open attachments unless you can confirm the sender and know 
> > the content is safe.
> > 
> > 
> > 
> > Second that MPI attributes do not suck. PETSc uses communicator attributes 
> > heavily to avoid lots of confusing or wasteful behavior when users pass 
> > communicators between libraries and similar comments would apply if other 
> > MPI objects were passed between libraries in that way.
> > 
> > It was before my time, but I think PETSc's use of attributes predates 
> > MPI-1.0 and MPI's early and pervasive support for attributes is one of the 
> > things I celebrate when discussing software engineering of libraries 
> > intended for use by other libraries versus those made for use by 
> > applications. Please don't dismiss attributes even if you don't enjoy them.
> > 
> > Jeff Hammond via mpi-forum  writes:
> > 
> >> The API is annoying but it really only gets used in library middleware by 
> >> people like us who can figure out the void* casting nonsense and use it 
> >> correctly.
> >> 
> >> Casper critically depends on window attributes.
> >> 
> >> Request attributes are the least intrusive way to allow libraries to do 
> >> completion callbacks. They give users a way to do this that adds zero 
> >> instructions to the critical path and is completely invisible unless 
> >> actually requires.
> >> 
> >> Attributes do not suck and people should stop preventing those of us who 
> >> write libraries to make the MPI ecosystem better from doing our jobs 
> >> because they want to whine about problems they’re too lazy to solve.
> >> 
> >> I guess I’ll propose request and op attributes because I need them and 
> >> people can either solve those problems better ways or get out of the way.
> >> 
> >> Jeff
> >> 
> >> Sent from my iPhone
> >> 
> >>> On 16. Jan 2023, at 20.27, Holmes, Daniel John 
> >>>  wrote:
> >>> 
> >>> 
> >>> Hi Jeff,
> >>> 
> >>> When adding session as an object to MPI, a deliberate choice was made not 
> >>> to support attributes for session objects because “attributes in MPI 
> >>> suck”.
> >>> 
> >>> This decision was made despite the usage (by some tools) of “at exit” 
> >>> attribute callbacks fired by the destruction of MPI_COMM_SELF during 
> >>> MPI_FINALIZE in the world model and the consequent obvious omission of a 
> >>> similar hook during MPI_SESSION_FINALIZE in the session model (there is 
> >>> also no MPI_COMM_SELF in the session model, so this is not a simple 
> >>> subject).
> >>> 
> >>> Removal of attributes entirely – blocked by back-compat because usage is 
> >>> known to exist.
>

Re: [Mpi-forum] why do we only support caching on win/comm/datatype?

2023-01-18 Thread Jeff Hammond via mpi-forum
https://github.com/mpi-forum/mpi-issues/issues/667

https://github.com/mpi-forum/mpi-issues/issues/664

You should create a new issue for sessions attributes. 

We can merge into a single meta issue later if appropriate. 

Sent from my iPhone

> On 18. Jan 2023, at 19.17, Koziol, Quincey via mpi-forum 
>  wrote:
> 
> “third” on attributes are necessary for MPI.  HDF5 uses them to make certain 
> that cached file data gets written to the file (and it is closed properly) 
> before MPI_Finalize() in the world model.   Frankly, I wasn’t paying enough 
> attention to the sessions work ten years ago and didn’t realize that they 
> aren’t available as a mechanism for getting this same action when a session 
> is terminated.   This is a critical need to avoid corrupting user data.
> 
> Jeff - please add me to your work on adding attributes to requests and ops, 
> and I’ll write text for adding attributes to sessions.
> 
>Quincey
> 
> 
>> On Jan 16, 2023, at 2:10 PM, Jed Brown via mpi-forum 
>>  wrote:
>> 
>> CAUTION: This email originated from outside of the organization. Do not 
>> click links or open attachments unless you can confirm the sender and know 
>> the content is safe.
>> 
>> 
>> 
>> Second that MPI attributes do not suck. PETSc uses communicator attributes 
>> heavily to avoid lots of confusing or wasteful behavior when users pass 
>> communicators between libraries and similar comments would apply if other 
>> MPI objects were passed between libraries in that way.
>> 
>> It was before my time, but I think PETSc's use of attributes predates 
>> MPI-1.0 and MPI's early and pervasive support for attributes is one of the 
>> things I celebrate when discussing software engineering of libraries 
>> intended for use by other libraries versus those made for use by 
>> applications. Please don't dismiss attributes even if you don't enjoy them.
>> 
>> Jeff Hammond via mpi-forum  writes:
>> 
>>> The API is annoying but it really only gets used in library middleware by 
>>> people like us who can figure out the void* casting nonsense and use it 
>>> correctly.
>>> 
>>> Casper critically depends on window attributes.
>>> 
>>> Request attributes are the least intrusive way to allow libraries to do 
>>> completion callbacks. They give users a way to do this that adds zero 
>>> instructions to the critical path and is completely invisible unless 
>>> actually requires.
>>> 
>>> Attributes do not suck and people should stop preventing those of us who 
>>> write libraries to make the MPI ecosystem better from doing our jobs 
>>> because they want to whine about problems they’re too lazy to solve.
>>> 
>>> I guess I’ll propose request and op attributes because I need them and 
>>> people can either solve those problems better ways or get out of the way.
>>> 
>>> Jeff
>>> 
>>> Sent from my iPhone
>>> 
>>>>> On 16. Jan 2023, at 20.27, Holmes, Daniel John 
>>>>>  wrote:
>>>>> 
>>>>> 
>>>>> Hi Jeff,
>>>>> 
>>>>> When adding session as an object to MPI, a deliberate choice was made not 
>>>>> to support attributes for session objects because “attributes in MPI 
>>>>> suck”.
>>>>> 
>>>>> This decision was made despite the usage (by some tools) of “at exit” 
>>>>> attribute callbacks fired by the destruction of MPI_COMM_SELF during 
>>>>> MPI_FINALIZE in the world model and the consequent obvious omission of a 
>>>>> similar hook during MPI_SESSION_FINALIZE in the session model (there is 
>>>>> also no MPI_COMM_SELF in the session model, so this is not a simple 
>>>>> subject).
>>>>> 
>>>>> Removal of attributes entirely – blocked by back-compat because usage is 
>>>>> known to exist.
>>>>> Expansion of attributes orthogonally – blocked by “attributes in MPI 
>>>>> suck” accusations.
>>>>> 
>>>>> Result – inconsistency in the interface that no-one wants to tackle.
>>>>> 
>>>>> Best wishes,
>>>>> Dan.
>>>>> 
>>>>> From: mpi-forum  On Behalf Of Jeff 
>>>>> Hammond via mpi-forum
>>>>> Sent: 16 January 2023 14:40
>>>>> To: MPI Forum 
>>>>> Cc: Jeff Hammond 
>>>>> Subject: [Mpi-forum] why do we only support caching on win/comm/datatype?

Re: [Mpi-forum] why do we only support caching on win/comm/datatype?

2023-01-16 Thread Jeff Hammond via mpi-forum
The API is annoying but it really only gets used in library middleware by 
people like us who can figure out the void* casting nonsense and use it 
correctly. 

Casper critically depends on window attributes.

Request attributes are the least intrusive way to allow libraries to do 
completion callbacks. They give users a way to do this that adds zero 
instructions to the critical path and is completely invisible unless actually
required.

Attributes do not suck and people should stop preventing those of us who write 
libraries to make the MPI ecosystem better from doing our jobs because they 
want to whine about problems they’re too lazy to solve. 

I guess I’ll propose request and op attributes because I need them and people 
can either solve those problems better ways or get out of the way. 

Jeff

Sent from my iPhone

> On 16. Jan 2023, at 20.27, Holmes, Daniel John  
> wrote:
> 
> 
> Hi Jeff,
>  
> When adding session as an object to MPI, a deliberate choice was made not to 
> support attributes for session objects because “attributes in MPI suck”.
>  
> This decision was made despite the usage (by some tools) of “at exit” 
> attribute callbacks fired by the destruction of MPI_COMM_SELF during 
> MPI_FINALIZE in the world model and the consequent obvious omission of a 
> similar hook during MPI_SESSION_FINALIZE in the session model (there is also 
> no MPI_COMM_SELF in the session model, so this is not a simple subject).
>  
> Removal of attributes entirely – blocked by back-compat because usage is 
> known to exist.
> Expansion of attributes orthogonally – blocked by “attributes in MPI suck” 
> accusations.
>  
> Result – inconsistency in the interface that no-one wants to tackle.
>  
> Best wishes,
> Dan.
>  
> From: mpi-forum  On Behalf Of Jeff 
> Hammond via mpi-forum
> Sent: 16 January 2023 14:40
> To: MPI Forum 
> Cc: Jeff Hammond 
> Subject: [Mpi-forum] why do we only support caching on win/comm/datatype?
>  
> I am curious if there is a good reason from the past as to why we only 
> support caching on win, comm and datatype, and no other handles?
>  
> I have a good use case for request attributes and have found that the 
> implementation overhead in MPICH appears to be zero.  The implementation in 
> MPICH requires adding a single pointer to an internal struct.  This struct 
> member will never be accessed except when the user needs it, and it can be 
> placed at the end of the struct so that it doesn't even pollute the cache.
>  
> I wondered if callbacks were a hidden overhead, but they only called 
> explicitly and synchronously, so they would not interfere with critical path 
> uses of requests.
>  
> https://github.com/mpi-forum/mpi-issues/issues/664 has some details but since 
> I do not understand how MPICH generates the MPI bindings, I only implemented 
> the back-end MPIR code.
>  
> It would make MPI more consistent if all opaque handles supported attributes. 
>  In particular, I'd love to have a built-in MPI_Op attribute for the function 
> pointer the user provided (which is similar to how one can query input args 
> associated with MPI_Win) because that appears to be the only way I can 
> implement certain corner cases of MPI F08.
>  
> Thanks,
>  
> Jeff
>  
> --
> Jeff Hammond
> jeff.scie...@gmail.com
> http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


[Mpi-forum] why do we only support caching on win/comm/datatype?

2023-01-16 Thread Jeff Hammond via mpi-forum
I am curious if there is a good reason from the past as to why we only
support caching on win, comm and datatype, and no other handles?

I have a good use case for request attributes and have found that the
implementation overhead in MPICH appears to be zero.  The implementation in
MPICH requires adding a single pointer to an internal struct.  This struct
member will never be accessed except when the user needs it, and it can be
placed at the end of the struct so that it doesn't even pollute the cache.

I wondered if callbacks were a hidden overhead, but they are only called
explicitly and synchronously, so they would not interfere with critical
path uses of requests.

https://github.com/mpi-forum/mpi-issues/issues/664 has some details but
since I do not understand how MPICH generates the MPI bindings, I only
implemented the back-end MPIR code.

It would make MPI more consistent if all opaque handles supported
attributes.  In particular, I'd love to have a built-in MPI_Op attribute
for the function pointer the user provided (which is similar to how one can
query input args associated with MPI_Win) because that appears to be the
only way I can implement certain corner cases of MPI F08.
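
Since MPI_Request_create_keyval and friends do not exist yet, here is the
existing communicator caching API as the reference point; the request and op
variants proposed above would presumably mirror it (sketch only):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Delete callback: fires when the attribute is deleted or the object is
   freed, letting a library release per-object state it cached earlier. */
static int cleanup_cb(MPI_Comm comm, int keyval, void *attr, void *extra)
{
    (void) comm; (void) keyval; (void) extra;
    free(attr);
    return MPI_SUCCESS;
}

int main(int argc, char **argv)
{
    int keyval, flag, *state;
    void *out;

    MPI_Init(&argc, &argv);
    MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, cleanup_cb, &keyval, NULL);

    state = malloc(sizeof *state);
    *state = 42;
    MPI_Comm_set_attr(MPI_COMM_WORLD, keyval, state);

    MPI_Comm_get_attr(MPI_COMM_WORLD, keyval, &out, &flag);
    if (flag) printf("cached value: %d\n", *(int *) out);

    MPI_Comm_delete_attr(MPI_COMM_WORLD, keyval);   /* invokes cleanup_cb */
    MPI_Comm_free_keyval(&keyval);
    MPI_Finalize();
    return 0;
}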

Thanks,

Jeff

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


[Mpi-forum] ABI working group is live

2022-12-08 Thread Jeff Hammond via mpi-forum
The MPI ABI workgroup was approved.  You can sign up for the mailing list
now: https://lists.mpi-forum.org/mailman/listinfo/mpiwg-abi.

If you want access to the GitHub group (https://github.com/mpiwg-abi/),
send me your GitHub username.

If you want access to the Slack channel, DM me on Slack.  If you are not on
Slack already, send me your email address.

Jeff

On Wed, Nov 16, 2022 at 9:54 AM Jeff Hammond  wrote:

> I don't know what we do to create new working groups with the post-COVID
> rules, but I would like to create and chair a WG focused on ABI
> standardization.
>
> There is strong support for this effort in many user communities,
> including developers and maintainers of Spack, mpi4py, Julia MPI (MPI.jl),
> Rust MPI (rsmpi), PETSc and NVHPC SDK, to name a few.  There are even a few
> implementers who have expressed support, but I won't name them for their
> own protection.
>
> The problem is so exasperating for our users that there are at least two
> different projects devoted to mitigating ABI problems (not including shims
> built in to the aforementioned MPI wrappers):
>
> https://github.com/cea-hpc/wi4mpi
> https://github.com/eschnett/MPItrampoline
>
> I've written about this a bit already, for those who are interested.  More
> material will be forthcoming once I have time for more experiments.
>
> https://github.com/jeffhammond/blog/blob/main/MPI_Needs_ABI.md
> https://github.com/jeffhammond/blog/blob/main/MPI_Needs_ABI_Part_2.md
> https://github.com/jeffhammond/blog/blob/main/MPI_Needs_ABI_Part_3.md
> https://github.com/jeffhammond/blog/blob/main/MPI_Needs_ABI_Part_4.md
>
> I understand this is a controversial topic, particularly for
> implementers.  I hope that we can proceed objectively.
>
> Thanks,
>
> Jeff
>
> --
> Jeff Hammond
> jeff.scie...@gmail.com
> http://jeffhammond.github.io/
>


-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] Upcoming Meeting Schedule for the MPI Forum

2022-11-24 Thread Jeff Hammond via mpi-forum
I don’t see any major conflicts.

Perth as a virtual meeting is going to be rough, because Europeans will be 
waking up around 2 AM and people in the New World will be working in the 
evening and into the night.  It’s also the longest flight one can take from 
North America.  It’s less than the flight time for Apollo 11 to get to the 
moon, but close.

Jeff

> On 24Nov 2022, at 2:12 AM, Martin Schulz via mpi-forum 
>  wrote:
> 
> Hi all,
>  
> We need to discuss the schedule for the upcoming meetings in 2023/24. 
> Formally, we will decide on this during the MPI forum meeting, but I wanted 
> to send out the following proposal ahead of time, so everyone can check for 
> major conflicts and or other issues. Note, we decided to hold the spring 
> meeting in person/hybrid (preferably ET/CT), summer and winter fully virtual 
> and Fall together with EuroMPI (and keep co-locating EuroMPI and IWOMP).
>  
> For the spring meeting, we have an offer to host this in Boston – location 
> and dates are confirmed:
>  
> Spring Meeting:
> March 13 to 16, 2023, Boston (AWS Offices, thanks Quincey)
> Start noon, end noon
>  
> Assuming we finish all final reading in that spring meeting, and we want to 
> hit SC for MPI 4.1, we need two voting meetings instead of one summer meeting 
> (both virtual):
>  
> Suggestions (decent spacing plus avoiding summer conflicts, I hope):
> May 1 to 4, 2023, virtual
> Could be fewer days if votes are the core items
>  
> July 10 to 13, 2023, virtual
> Could be fewer days if votes are the core items
>  
> The fall meeting will be with EuroMPI in Bristol – this will be the RCS 
> meeting (the one we read the entire standard page by page – hence, onsite 
> will make things a lot easier):
>  
> Fall Meeting:
> September 13-15, 2023, Bristol (right after EuroMPI, parallel 
> to IWOMP)
> Three full days
>  
> Again, assuming we intend to hit SC23 for MPI 4.1, we need an additional 
> voting meeting.
>  
> Suggestion: two weeks before SC23
> October 31 to November 3, 2023, virtual
> Could be fewer days depending on agenda
>  
> Starting after SC23 we would be back on a normal schedule:
>  
> Winter Meeting:
> December 4-7, 2023, virtual
> 
> Spring Meeting (space reserved):
> February 26-March 1, 2024, Chicago (Big Ten Center in 
> Chicago, thanks to Maria and Bill)
> Start noon, end Noon
>  
> Summer Meeting:
> June timeframe, virtual
> Let’s wait for 2024 summer conflicts before fixing dates
>  
> Fall Meeting:
> September, again with EuroMPI and IWOMP 
> Location would be Pawsey Supercomputing Center near Perth, 
> Australia
> (the location was already picked by IWOMP - I know this is a 
> stretch for many, but we will have a hybrid option and it would help cover an 
> area that is traditionally underrepresented in the forum and – especially – 
> the conference)
>  
> Please take a look at the dates and send me any information about major 
> conflicts.
>  
> Thanks!
>  
> Martin
>  
>  
>  
> --
> Prof. Dr. Martin Schulz, Chair of Computer Architecture and Parallel Systems
> Department of Informatics, TU-Munich, Boltzmannstraße 3, D-85748 Garching
> Member of the Board of Directors at the Leibniz Supercomputing Centre (LRZ)
> Email: schu...@in.tum.de 
>  
> ___
> mpi-forum mailing list
> mpi-forum@lists.mpi-forum.org 
> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum 
> 
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] December Meeting Draft Agenda Posted

2022-11-23 Thread Jeff Hammond via mpi-forum
Why are 407, 429, 551 and 553 listed under both "NO NO" and "FIRST"?  Is
this because if they fail the "NO NO" vote then they proceed down the 2
vote path?

Jeff


On Tue, Nov 22, 2022 at 4:45 PM Wes Bland via mpi-forum <
mpi-forum@lists.mpi-forum.org> wrote:

> Hi all,
>
> After the slew of announcements yesterday, I hope that Martin and I have
> gotten everything posted on the agenda page correctly.
>
> MPI Forum agenda page: mpi-forum.org
>
>
> Please have a look and make sure that everything you think should be on
> there is present. If there’s an issue, please let us know.
>
> Registration for the meeting is also still open.
>
> December 2022 MPI Forum Meeting Registration (forms.gle)
>
> Thanks,
> Wes
> ___
> mpi-forum mailing list
> mpi-forum@lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum
>


-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/


___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] why is MPI_Status not opaque?

2022-11-21 Thread Jeff Hammond via mpi-forum
Yes, but every time the MPI Forum breaks backwards compatibility, an angel 
loses its wings, so we can’t fix design flaws like this.  Everyone must suffer 
forever.

More seriously, Lisandro pointed out that we’d need allocate and deallocate
functions for status if they were handles to hidden state, so we can’t just fix 
this by making Status opaque.

I guess we have no choice but to standardize the MPI_Status object size and 
layout.

Jeff

> On 21Nov 2022, at 2:43 PM, Skjellum, Anthony  wrote:
> 
> It should be an opaque object :-) 
> 
> Anthony Skjellum, PhD
> 205-807-4968
> 
> 
> On Nov 21, 2022, at 7:29 AM, Jeff Hammond via mpi-forum 
>  wrote:
> 
> 
> I assume that MPI_Status is not opaque because somebody asserted that 
> function call overhead was too much for some use cases.  Was there more to it 
> than this?
> 
> Why does the standard say there is an opaque part for elements and cancelled, 
> but not make those visible?  The lack of consistency here doesn't make a lot 
> of sense to me.
> 
> MPI_Status not being opaque was a horrible mistake but I would like to be 
> less mad about it by learning what possible reasons for it existed in 1995.
> 
> Thanks,
> 
> Jeff 
> 
> -- 
> Jeff Hammond
> jeff.scie...@gmail.com
> http://jeffhammond.github.io/
> ___
> mpi-forum mailing list
> mpi-forum@lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum

___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


[Mpi-forum] why is MPI_Status not opaque?

2022-11-21 Thread Jeff Hammond via mpi-forum
I assume that MPI_Status is not opaque because somebody asserted that
function call overhead was too much for some use cases.  Was there more to
it than this?

Why does the standard say there is an opaque part for elements and
cancelled, but not make those visible?  The lack of consistency here
doesn't make a lot of sense to me.

MPI_Status not being opaque was a horrible mistake but I would like to be
less mad about it by learning what possible reasons for it existed in 1995.

Thanks,

Jeff

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] propose ABI working group

2022-11-16 Thread Jeff Hammond via mpi-forum


> On 16Nov 2022, at 5:11 PM, Wes Bland  wrote:
> 
> The rules say you need to get four IMOVE (voting) orgs to support creating a 
> WG at a meeting:
> 
> > Working groups can be established at MPI Forum meetings once at least four 
> > IMOVE organizations indicate support for that proposed Working Group.
> 
> So feel free to propose it at the December meeting and have some folks lined 
> up to give it a thumbs up. Of course, in the meantime, folks are free to 
> start getting together and discussing the topic. The old mailing list from 
> 2008 is still around <https://lists.mpi-forum.org/mailman/listinfo/mpi3-abi>, 
> but I’d recommend you not use it since it uses our old naming scheme. Once 
> the group is official, I’ll make the new list and GitHub org.

Thanks, I’ll rally the votes.

Interested parties should ping me on Slack to be added to #wg-abi.

Somebody was optimistic enough to create https://github.com/mpiwg-abi/ this 
morning but it’s a mystery who 😉

Jeff

> Thanks,
> Wes
> 
>> On Nov 16, 2022, at 1:54 AM, Jeff Hammond via mpi-forum 
>>  wrote:
>> 
>> I don't know what we do to create new working groups with the post-COVID 
>> rules, but I would like to create and chair a WG focused on ABI 
>> standardization.
>> 
>> There is strong support for this effort in many user communities, including 
>> developers and maintainers of Spack, mpi4py, Julia MPI (MPI.jl), Rust MPI 
>> (rsmpi), PETSc and NVHPC SDK, to name a few.  There are even a few 
>> implementers who have expressed support, but I won't name them for their own 
>> protection.
>> 
>> The problem is so exasperating for our users that there are at least two 
>> different projects devoted to mitigating ABI problems (not including shims 
>> built in to the aforementioned MPI wrappers):
>> 
>> https://github.com/cea-hpc/wi4mpi
>> https://github.com/eschnett/MPItrampoline
>> 
>> I've written about this a bit already, for those who are interested.  More 
>> material will be forthcoming once I have time for more experiments.
>> 
>> https://github.com/jeffhammond/blog/blob/main/MPI_Needs_ABI.md
>> https://github.com/jeffhammond/blog/blob/main/MPI_Needs_ABI_Part_2.md
>> https://github.com/jeffhammond/blog/blob/main/MPI_Needs_ABI_Part_3.md
>> https://github.com/jeffhammond/blog/blob/main/MPI_Needs_ABI_Part_4.md
>> 
>> I understand this is a controversial topic, particularly for implementers.  
>> I hope that we can proceed objectively.
>> 
>> Thanks,
>> 
>> Jeff
>> 
>> -- 
>> Jeff Hammond
>> jeff.scie...@gmail.com
>> http://jeffhammond.github.io/
>> ___
>> mpi-forum mailing list
>> mpi-forum@lists.mpi-forum.org
>> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum
> 

___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


[Mpi-forum] propose ABI working group

2022-11-15 Thread Jeff Hammond via mpi-forum
I don't know what we do to create new working groups with the post-COVID
rules, but I would like to create and chair a WG focused on ABI
standardization.

There is strong support for this effort in many user communities, including
developers and maintainers of Spack, mpi4py, Julia MPI (MPI.jl), Rust MPI
(rsmpi), PETSc and NVHPC SDK, to name a few.  There are even a few
implementers who have expressed support, but I won't name them for their
own protection.

The problem is so exasperating for our users that there are at least two
different projects devoted to mitigating ABI problems (not including shims
built in to the aforementioned MPI wrappers):

https://github.com/cea-hpc/wi4mpi
https://github.com/eschnett/MPItrampoline

I've written about this a bit already, for those who are interested.  More
material will be forthcoming once I have time for more experiments.

https://github.com/jeffhammond/blog/blob/main/MPI_Needs_ABI.md
https://github.com/jeffhammond/blog/blob/main/MPI_Needs_ABI_Part_2.md
https://github.com/jeffhammond/blog/blob/main/MPI_Needs_ABI_Part_3.md
https://github.com/jeffhammond/blog/blob/main/MPI_Needs_ABI_Part_4.md

I understand this is a controversial topic, particularly for implementers.
I hope that we can proceed objectively.

Thanks,

Jeff

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] September MPI Forum Registration Closing

2022-09-22 Thread Jeff Hammond via mpi-forum
Do I need to register to dial into WG discussions?  For time zone reasons,
it is really hard for me to connect most of the time, so I'd prefer not to
spend $100/hour to connect for the brief times I can.

thanks

jeff

On Tue, Sep 20, 2022 at 11:39 PM Wes Bland via mpi-forum <
mpi-forum@lists.mpi-forum.org> wrote:

> Hi folks,
>
> Tony let us know that we’re going to need to close the in-person
> registration tonight for all of the conference events next week due to
> catering needs. *If you intend to register for the in-person conference,
> that will be closing at midnight US Eastern tonight.*
>
> Registration for remote participation will remain open until around the
> time the meeting starts next Wednesday. Please don’t wait until the last
> minute as I’m not 100% sure what time it will close.
>
> Thanks,
> Wes
> ___
> mpi-forum mailing list
> mpi-forum@lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum
>


-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] MPI Languages Working Group Announcement

2022-09-02 Thread Jeff Hammond via mpi-forum
Yes and already done. 

Sent from my iPhone

> On 2. Sep 2022, at 21.20, Brandon Cook via mpi-forum 
>  wrote:
> 
> 
> Hi Martin,
> 
> Is this active currently? Could I get an invite to the Slack channel?
> 
> Thanks,
> Brandon
> 
>> On Mon, Jul 19, 2021 at 9:16 AM Ruefenacht, Martin via mpi-forum 
>>  wrote:
>> Hello all,
>>  
>> We are announcing the start of our Languages WG meetings for Thursday 22nd 
>> of July at 15:00 CEST, 09:00 EST, 08:00 CST continuing every other week.
>>  
>> Please join with this link: https://meet.lrz.de/MPILanguagesWG
>>  
>> We have a GitHub organization at https://github.com/mpiwg-languages. Please 
>> ping me directly to be added to the slack channel.
>>  
>> We will be considering a set of MPI-4.1 and MPI-5 proposals, so please bring 
>> your topics and ideas to this meeting.
>>  
>> Thank you,
>>  
>> Martin Ruefenacht
>> ___
>> mpi-forum mailing list
>> mpi-forum@lists.mpi-forum.org
>> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum
> ___
> mpi-forum mailing list
> mpi-forum@lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] MPI 4.1 Plans and Progress

2022-07-06 Thread Jeff Hammond via mpi-forum
This conflicts with the Fortran WG5 meeting, although I guess I’m the only 
affected person, since Bill Long doesn’t appear to have any outstanding issues 
with MPI right now.

https://github.com/mpi-forum/mpi-issues/issues/17 is a good one for 4.1, 
since it is merely a clarification.  I took care of some of the work, but have 
no time to work on the LaTeX changes to implement it.

I’ve asked the RMA WG to make forward progress on 
https://github.com/mpi-forum/mpi-issues/issues/23, which is one of a handful 
of issues that are reasonable to do in 4.1.

While I like to be active in the discussion, I no longer have the ability to 
work on the LaTeX document, particularly since everything about it has changed 
since I last edited it, and because I no longer use LaTeX regularly (sob 
#CorporateLife).

Jeff

> On 6Jul 2022, at 6:00 PM, Wes Bland via mpi-forum 
>  wrote:
> 
> Hi all,
> 
> We’re going to discuss this at a virtual meeting on July 20th, but as there 
> is some potential pre-work for the owners of these issues, I wanted to give 
> people a preview so they could get started early.
> 
> As discussed during the run up to MPI 4.0, we intend MPI 4.1 to be a 
> (relatively) quick update to the MPI Standard to fix more minor issues with 
> unclear semantics, terms, general formatting, etc. This release is not 
> intended to include a large number of new features (though there will of 
> course be some).
> 
> During MPI 4.0, we started using the GitHub Projects features to keep track 
> of how our work was progressing and to understand what our targets were. We 
> also made boards for MPI 4.1 and MPI 5.0. I’d like to draw your attention to 
> the MPI 4.1 project board for the discussion in a few weeks. Both of the 
> boards show the same data. They just use different layouts. Feel free to look 
> at whichever you like best.
> 
> * https://github.com/mpi-forum/mpi-issues/projects/3 (GitHub’s old project 
> boards version)
> * https://github.com/orgs/mpi-forum/projects/1/views/2 (GitHub’s new project 
> boards version)
> 
> I want to point out that we currently have 148 open issues marked for MPI 
> 4.1, which means there is a lot of work that we hope to have done in the next 
> year or so in order to release MPI 4.1. Each of these items has an owner that 
> should be responsible for moving the issue forward. To find out which issues 
> belong to you, take a look here.
>  If there is something with your name on it, that means there is something in 
> your chapter that needs work. Please take a look so we can prioritize our 
> work for MPI 4.1. Please also plan to attend the virtual meeting on July 20th 
> to participate in the discussion.
> 
> If you no longer plan to actively participate in the MPI Forum, please let 
> Martin and I know so we can help make sure that someone else can pick up this 
> work. We’re not out to shame people here, but holding on to an issue and 
> never working on it prevents someone else from doing the work (I’m guilty of 
> this myself here).
> 
> Thanks to everyone for contributing! Being involved in the MPI Forum is a 
> volunteer position, and all of the work that each of you do is greatly 
> appreciated to help the MPI community. 
> 
> Thanks,
> Wes
> ___
> mpi-forum mailing list
> mpi-forum@lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum

___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


[Mpi-forum] binding tool docs?

2021-11-30 Thread Jeff Hammond via mpi-forum
I can't figure out how to use the binding tool.  Running various Python
scripts with -h hasn't helped me.

Is there any documentation, or can someone point me to a working example
that I can inspect?

Thanks,

Jeff

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] MPI_Get_elements_c

2021-07-07 Thread Jeff Hammond via mpi-forum
Related: do we have an F08 overload for this or not?

Jeff

> On Jul 7, 2021, at 5:37 PM, Raffenetti, Kenneth J. via mpi-forum 
>  wrote:
> 
> Hi,
> 
> This issue was raised in the MPICH GitHub 
> https://github.com/pmodels/mpich/issues/5413.
> 
> In MPI-3.1, we have C bindings for:
> 
> int MPI_Get_elements(const MPI_Status *status, MPI_Datatype datatype, int 
> *count)
> int MPI_Get_elements_x(const MPI_Status *status, MPI_Datatype datatype, 
> MPI_Count *count)
> int MPI_Status_set_elements(MPI_Status *status, MPI_Datatype datatype, int 
> count)
> int MPI_Status_set_elements_x(MPI_Status *status, MPI_Datatype datatype, 
> MPI_Count count)
> 
> MPI-4.0 added:
> 
> int MPI_Get_elements_c(const MPI_Status *status, MPI_Datatype datatype, 
> MPI_Count *count)
> 
> But no binding was added for an _c version of MPI_Status_set_elements. Was 
> this intended?
> 
> Thanks,
> Ken
> 
> ___
> mpi-forum mailing list
> mpi-forum@lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] Big Fortran hole in MPI-4.0 Embiggening

2021-01-11 Thread Jeff Hammond via mpi-forum
Given how many implementations claimed MPI 3.0 support when their RMA 
implementations were garbage piles of incorrectness or pathetic performance, it 
borders on the absurd to think that logical rigor about feature support will 
stop anyone from claiming MPI 4.0 support.

Sent from my iPhone

> On Jan 11, 2021, at 7:22 AM, Wesley Bland via mpi-forum 
>  wrote:
> 
> I agree with Jeff that this is a non-issue. Bill’s assertion that adding 
> Fortran support for large count once you have C support has been true in 
> MPICH development at least and no vender would drop anything just to put MPI 
> 4.0 compatibility on a marketing slide. They’d just put it on the slide 
> anyway. Every implementation out there has some sort of exception, 
> intentional or otherwise, that makes them technically non-compliant. In the 
> end, their users determine 1) whether they care enough to request the feature 
> and 2) whether they care enough to switch to a competing library.
> 
> I think the best way forward is to do something like what had been proposed 
> when the embiggening/pythonization started and separate bindings from 
> specification and just say that an MPI implementation has MPI 4.0 C bindings 
> and MPI 3.1 F08 bindings and MPI 5.0 Python bindings. That means each binding 
> can be updated in whatever way makes sense for them. For 4.0, I don’t think 
> this is an issue that needs to be resolved. Us telling the implementations 
> what they have to do will have essentially no impact anyway.
> 
> Thanks,
> Wes
> 
>> On Jan 11, 2021, at 6:26 AM, Rolf Rabenseifner via mpi-forum 
>>  wrote:
>> 
>> Dear Jeff and all,
>> 
>>> I think their users would be very unhappy.
>> 
>> Yes, users can be unhappy and compute centers may definitely recommend 
>> an other MPI library - at least for any software development.
>> 
>> I want to remember that MPI-3.1 requires for the mpi module:
>> 
>> "Provide explicit interfaces according to the Fortran routine interface 
>> specications.
>> This module therefore guarantees compile-time argument checking and allows 
>> positional
>> and keyword-based argument lists. If an implementation is paired with a
>> compiler that either does not support TYPE(*), DIMENSION(..) from TS 29113, 
>> or
>> is otherwise unable to ignore the types of choice buffers, then the 
>> implementation must
>> provide explicit interfaces only for MPI routines with no choice buffer 
>> arguments. See
>> Section 17.1.6 for more details."
>> 
>> Although there are no more such compilers that do not provide one of the two 
>> methods,
>> there are more than 5 years after MPI-3.1 still MPI libraries that do not
>> provide keyword-based argument lists with the mpi module.
>> And those libraries provide such support for the mpi_f08 module.
>> This means, they prove that they are not MPI-3.1 compliant :-)
>> 
>> 
>> Sometimes implementors ignore the goals and the wording of the MPI standard. 
>> 
>> 
>> And there is also no request that users want to have a less-quality
>> mpi module definition.
>> 
>> Best regards
>> Rolf
>> 
>> 
>> - Original Message -
>>> From: "Main MPI Forum mailing list" 
>>> To: "Main MPI Forum mailing list" 
>>> Cc: "Jeff Squyres" 
>>> Sent: Sunday, January 10, 2021 5:09:55 PM
>>> Subject: Re: [Mpi-forum] Big Fortran hole in MPI-4.0 Embiggening
>> 
>>> I can't imagine a vendor would:
>>> 
>>> - support mpi_f08
>>> - then stop supporting mpi_f08 just so that they can get an "MPI-4.0" 
>>> release
>>> out
>>> - then support mpi_f08 again
>>> 
>>> I think their users would be very unhappy.
>>> 
>>> 
>>> 
>>> 
>>> 
>>> On Jan 10, 2021, at 10:55 AM, William Gropp via mpi-forum < [
>>> mailto:mpi-forum@lists.mpi-forum.org | mpi-forum@lists.mpi-forum.org ] > 
>>> wrote:
>>> 
>>> I agree with Dan that this is a big change from the RCM. Further, the 
>>> approach
>>> in MPI has always been to encourage the users to make it clear to the 
>>> vendors
>>> what is acceptable in implementations, especially implementation schedules.
>>> Nothing in the standard prohibits implementors from continuing to provide 
>>> MPI
>>> 3.x implementations while they work to provide a full MPI 4.0 
>>> implementation.
>>> The MPI forum has no enforcement power on the implementors, and I believe 
>>> this
>>> text is unnecessary and will not provide the guarantee that Rolf wants.
>>> Further, frankly once the C embiggened interface is implemented, creating 
>>> the
>>> mpi_f08 version is relatively straightforward.
>>> 
>>> Bill
>>> 
>>> William Gropp
>>> Director, NCSA
>>> Thomas M. Siebel Chair in Computer Science
>>> University of Illinois Urbana-Champaign
>>> IEEE-CS President-Elect
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> On Jan 10, 2021, at 7:44 AM, HOLMES Daniel via mpi-forum < [
>>> mailto:mpi-forum@lists.mpi-forum.org | mpi-forum@lists.mpi-forum.org ] > 
>>> wrote:
>>> 
>>> Hi Rolf,
>>> 
>>> This is a (somewhat contrived, arguably) reason for taking another tiny step
>>> towards removing the “mpif.h” method of Fortra

Re: [Mpi-forum] Question about MPI_Alloc_mem

2020-10-15 Thread Jeff Hammond via mpi-forum
Just implement MPI allocation sanity once and forget about it :-)

#include <mpi.h>
void * MPIX_Malloc(MPI_Aint size, MPI_Info info, int * status)
{
  void * ptr = NULL;
  *status = MPI_Alloc_mem(size, info, &ptr);
  return ptr;
}

int MPIX_Free(void * ptr)
{
  return MPI_Free_mem(ptr);
}
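
A quick usage sketch of the wrappers above (hypothetical caller code; the
buffer size is arbitrary):

int status;
double * buf = MPIX_Malloc(1000 * sizeof(double), MPI_INFO_NULL, &status);
if (status == MPI_SUCCESS && buf != NULL) {
   /* ... use buf like any ordinary allocation ... */
   MPIX_Free(buf);
}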


Jeff

On Wed, Oct 14, 2020 at 12:53 PM Skjellum, Anthony via mpi-forum <
mpi-forum@lists.mpi-forum.org> wrote:

> Bill, I thank you much for your input too.
>
> POSIX has no problem making you do the cast in C 🙂
>
> int posix_memalign(void **memptr, size_t alignment, size_t size);
>
> I get why its that way now in MPI, I still don't like it 🙂
>
> Tony
>
> cf,
> https://linux.die.net/man/3/posix_memalign
>
>
>
>
>
> Anthony Skjellum, PhD
>
> Professor of Computer Science and Chair of Excellence
>
> Director, SimCenter
>
> University of Tennessee at Chattanooga (UTC)
>
> tony-skjel...@utc.edu  [or skjel...@gmail.com]
>
> cell: 205-807-4968
>
>
> --
> *From:* William Gropp 
> *Sent:* Wednesday, October 14, 2020 3:48 PM
> *To:* Main MPI Forum mailing list 
> *Cc:* Rolf Rabenseifner ; Skjellum, Anthony <
> tony-skjel...@utc.edu>
> *Subject:* Re: [Mpi-forum] Question about MPI_Alloc_mem
>
> My previous email (probably crossing paths with Tony’s reply) addressed
> most of this.  The one thing that I want to add is that the API choice made
> in MPI for the C binding, both in the MPI-1 attribute get functions and
> here, has nothing to do with the Fortran binding.  These were made to match
> C’s limitations on anonymous pointers, and to avoid requiring pointer
> casts.
>
> Bill
>
> William Gropp
> Director and Chief Scientist, NCSA
> Thomas M. Siebel Chair in Computer Science
> University of Illinois Urbana-Champaign
>
>
>
>
>
>
> On Oct 14, 2020, at 2:42 PM, Skjellum, Anthony via mpi-forum <
> mpi-forum@lists.mpi-forum.org> wrote:
>
> Rolf, the rationale is not clear at all to me: "to facilitate type
> casting" is self-understood or a canonical term of art. I've been
> programming in C for 40 years... and I never would write the API as we have
> done it there.
>
> For pointers, out arguments are literally truthful: void **ptr is an out
> argument of a void pointer, and void *ptr is an in argument of a void
> pointer.
>
> {Years ago, before there was void *, we would have written char **ptr for
> a char out argument (and cast it to whatever type we wanted), and char *ptr
> for an in pointer argument. }
>
> The fact that a void ** can stand in the place of a void * is clearly a
> weakened typing in the language, because void * can point at anything
> (unknown).  Since a void * is a pointer, it can also hold a pointer to a
> pointer to void, sure.   But the intent of the second * is to remind you
> that you have an out argument.  And you *must* pass a pointer to a pointer
> for the API to work in C.
>
> Somehow this is helping the Fortran and C_PTR API of Fortran, as it
> states.  So the C API is made this way for convenience of Fortran.
>
> The rationale shows the argument with the & to pass the pointer to the API
> by reference.  But, the rationale is not necessarily normative.  A
> reasonable person reads the standard,  sees void * and passes the baseptr
> without the &.
>
> Seems like a bad API to me.
>
> Tony
>
>
> Anthony Skjellum, PhD
> Professor of Computer Science and Chair of Excellence
> Director, SimCenter
> University of Tennessee at Chattanooga (UTC)
> tony-skjel...@utc.edu  [or skjel...@gmail.com]
> cell: 205-807-4968
>
> --
> *From:* mpi-forum  on behalf of
> Rolf Rabenseifner via mpi-forum 
> *Sent:* Wednesday, October 14, 2020 1:32 PM
> *To:* Main MPI Forum mailing list 
> *Cc:* Rolf Rabenseifner 
> *Subject:* Re: [Mpi-forum] Question about MPI_Alloc_mem
>
> Dear Tony,
>
> please read MPI-3.1 page 338, lines 36-41.
> Do these lines resolve your question?
>
> Best regards
> Rolf
>
> - Original Message -
> > From: "Main MPI Forum mailing list" 
> > To: "Main MPI Forum mailing list" 
> > Cc: "Anthony Skjellum" 
> > Sent: Wednesday, October 14, 2020 7:01:54 PM
> > Subject: [Mpi-forum] Question about MPI_Alloc_mem
>
> > Folks, I know we have had this function for a long time, and I've
> implemented
> > ports of MPI that actually use it (e.g., with pre-pinned memory). But, I
> am
> > trying to understand the logic for why baseptr is passed by value,
> instead of
> > by reference. In C, everything is by value, so the last argument in
> normal C
> > programs would be void **baseptr.
> >
> > The standard has:
> > int MPI_Alloc_mem(MPI_Aint size, MPI_Info info, void *baseptr);
> > Now, MPICC/GCC takes this with
> > void *memory = (void *)0;
> > int error = MPI_Alloc_

Re: [Mpi-forum] MPI_Request_free restrictions

2020-08-15 Thread Jeff Hammond via mpi-forum
You are right that I am assuming that if an NBC completes at one rank, I
can reason about the state of that operation at another.  I can certainly
come up with scenarios where that is dangerous.

Jeff

On Sat, Aug 15, 2020 at 9:57 AM Richard Graham 
wrote:

> I think you start going down a slippery slope when you start to guess
> about the state of a resource that has been put under the control on
> another entity, without that entity explicitly declaring something.
>
> Rich
>
>
>
> Sent via the Samsung Galaxy S9, an AT&T 5G Evolution capable smartphone
>
>
>
> ---- Original message 
> From: Jeff Hammond via mpi-forum 
> Date: 8/15/20 12:51 (GMT-05:00)
> To: "Skjellum, Anthony" 
> Cc: Jeff Hammond , Main MPI Forum mailing list <
> mpi-forum@lists.mpi-forum.org>
> Subject: Re: [Mpi-forum] MPI_Request_free restrictions
>
> *External email: Use caution opening links or attachments*
> Yes, but do we think the use case of never detecting completion is
> likely?  I am not arguing for that, but rather that users might free the
> request but then detect completion another way, such as by waiting on the
> request on a subset of processes.
>
> Jeff
>
> On Wed, Aug 12, 2020 at 5:40 PM Skjellum, Anthony 
> wrote:
>
>> FYI, one argument (also used to force us to add restrictions on MPI
>> persistent collective initialization to be blocking)... The
>> MPI_Request_free on an NBC poses a problem for the cases where there are
>> array types
>> posed (e.g., Alltoallv/w)... It will not be knowable to the application
>> if the vectors are in use by MPI still after
>> the  free on an active request.  We do *not* mandate that the MPI
>> implementation copy such arrays currently, so they are effectively "held as
>> unfreeable" by the MPI implementation till MPI_Finalize.  The user
>> cannot deallocate them in a correct program till after MPI_Finalize.
>>
>> Another effect for NBC of releasing an active request, IMHO,  is that you
>> don't know when send buffers are free to be deallocated or receive buffers
>> are free to be deallocated... since you don't know when the transfer is
>> complete OR the buffers are no longer used by MPI (till after MPI_Finalize).
>>
>> Tony
>>
>>
>>
>>
>> Anthony Skjellum, PhD
>>
>> Professor of Computer Science and Chair of Excellence
>>
>> Director, SimCenter
>>
>> University of Tennessee at Chattanooga (UTC)
>>
>> tony-skjel...@utc.edu  [or skjel...@gmail.com]
>>
>> cell: 205-807-4968
>>
>>
>> --
>> *From:* mpi-forum  on behalf of
>> Jeff Hammond via mpi-forum 
>> *Sent:* Saturday, August 8, 2020 12:07 PM
>> *To:* Main MPI Forum mailing list 
>> *Cc:* Jeff Hammond 
>> *Subject:* Re: [Mpi-forum] MPI_Request_free restrictions
>>
>> We should fix the RMA chapter with an erratum. I care less about NBC but
>> share your ignorance of why it was done that way.
>>
>> Sent from my iPhone
>>
>> On Aug 8, 2020, at 6:51 AM, Balaji, Pavan via mpi-forum <
>> mpi-forum@lists.mpi-forum.org> wrote:
>>
>>  Folks,
>>
>> Does someone remember why we disallowed users from calling
>> MPI_Request_free on nonblocking collective requests?  I remember the
>> reasoning for not allowing cancel (i.e., the operation might have completed
>> on some processes, but not all), but not for Request_free.  AFAICT,
>> allowing the users to free the request doesn’t make any difference to the
>> MPI library.  The MPI library would simply maintain its own refcount to the
>> request and continue forward till the operation completes.  One of our
>> users would like to free NBC requests so they don’t have to wait for the
>> operation to complete in some situations.
>>
>> Unfortunately, when I added the Rput/Rget operations in the RMA chapter,
>> I copy-pasted that text into RMA as well without thinking too hard about
>> it.  My bad!  Either the RMA committee missed it too, or they thought of a
>> reason that I can’t think of now.
>>
>> Can someone clarify or remind me what the reason was?
>>
>> Regards,
>>
>>   — Pavan
>>
>> MPI-3.1 standard, page 197, lines 26-27:
>>
>> “It is erroneous to call MPI_REQUEST_FREE or MPI_CANCEL for a request
>> associated with a nonblocking collective operation.”
>>
>> ___
>> mpi-forum mailing list
>> mpi-forum@lists.mpi-forum.org
>> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum

Re: [Mpi-forum] MPI_Request_free restrictions

2020-08-15 Thread Jeff Hammond via mpi-forum
Yes, but do we think the use case of never detecting completion is likely?
I am not arguing for that, but rather that users might free the request but
then detect completion another way, such as by waiting on the request on a
subset of processes.

Jeff

On Wed, Aug 12, 2020 at 5:40 PM Skjellum, Anthony 
wrote:

> FYI, one argument (also used to force us to add restrictions on MPI
> persistent collective initialization to be blocking)... The
> MPI_Request_free on an NBC poses a problem for the cases where there are
> array types
> posed (e.g., Alltoallv/w)... It will not be knowable to the application if
> the vectors are in use by MPI still after
> the  free on an active request.  We do *not* mandate that the MPI
> implementation copy such arrays currently, so they are effectively "held as
> unfreeable" by the MPI implementation till MPI_Finalize.  The user cannot
> deallocate them in a correct program till after MPI_Finalize.
>
> Another effect for NBC of releasing an active request, IMHO,  is that you
> don't know when send buffers are free to be deallocated or receive buffers
> are free to be deallocated... since you don't know when the transfer is
> complete OR the buffers are no longer used by MPI (till after MPI_Finalize).
>
> Tony
>
>
>
>
> Anthony Skjellum, PhD
>
> Professor of Computer Science and Chair of Excellence
>
> Director, SimCenter
>
> University of Tennessee at Chattanooga (UTC)
>
> tony-skjel...@utc.edu  [or skjel...@gmail.com]
>
> cell: 205-807-4968
>
>
> --
> *From:* mpi-forum  on behalf of
> Jeff Hammond via mpi-forum 
> *Sent:* Saturday, August 8, 2020 12:07 PM
> *To:* Main MPI Forum mailing list 
> *Cc:* Jeff Hammond 
> *Subject:* Re: [Mpi-forum] MPI_Request_free restrictions
>
> We should fix the RMA chapter with an erratum. I care less about NBC but
> share your ignorance of why it was done that way.
>
> Sent from my iPhone
>
> On Aug 8, 2020, at 6:51 AM, Balaji, Pavan via mpi-forum <
> mpi-forum@lists.mpi-forum.org> wrote:
>
>  Folks,
>
> Does someone remember why we disallowed users from calling
> MPI_Request_free on nonblocking collective requests?  I remember the
> reasoning for not allowing cancel (i.e., the operation might have completed
> on some processes, but not all), but not for Request_free.  AFAICT,
> allowing the users to free the request doesn’t make any difference to the
> MPI library.  The MPI library would simply maintain its own refcount to the
> request and continue forward till the operation completes.  One of our
> users would like to free NBC requests so they don’t have to wait for the
> operation to complete in some situations.
>
> Unfortunately, when I added the Rput/Rget operations in the RMA chapter, I
> copy-pasted that text into RMA as well without thinking too hard about it.
> My bad!  Either the RMA committee missed it too, or they thought of a
> reason that I can’t think of now.
>
> Can someone clarify or remind me what the reason was?
>
> Regards,
>
>   — Pavan
>
> MPI-3.1 standard, page 197, lines 26-27:
>
> “It is erroneous to call MPI_REQUEST_FREE or MPI_CANCEL for a request
> associated with a nonblocking collective operation.”
>
> ___
> mpi-forum mailing list
> mpi-forum@lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum
>
>

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] MPI_Request_free restrictions

2020-08-12 Thread Jeff Hammond via mpi-forum
Does anybody implement RMA and NBC requests differently from P2P ones such
that this matters?  How would an implementation support a different type of
request for NBC+RMA than P2P?  If an MPI_Request has to be a reference for
P2P, how is it going to also support the not-a-reference case?  I can
imagine a union of P2P_Reference, RMA_Object_itself, and NBC_Object_itself,
but seems like it would blow up the memory requirements for a vector of
MPI_Requests if RMA_Object_itself or NBC_Object_itself are larger than 8
bytes.
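
To make the size concern concrete, here is a rough sketch (purely hypothetical
struct and field names, not taken from MPICH or Open-MPI) of such a union; every
element of a request array pays for the largest member:

typedef struct { void *op_state; } p2p_ref_t;                         /* just a reference (~8 bytes) */
typedef struct { int win; long target_disp; long size; } rma_obj_t;   /* operation held by value */
typedef struct { int coll_kind; long counts[4]; } nbc_obj_t;          /* operation held by value */

typedef union {
   p2p_ref_t p2p;
   rma_obj_t rma;
   nbc_obj_t nbc;
} mpix_request_u;   /* sizeof(mpix_request_u) == size of the largest member */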

I wonder if it is even a good idea to not maintain a reference to the
request.  In implementations that do not implement passive progress on
everything, how does the progress engine know to poke an NBC request if
there isn't an implementation of it in the queue?

Jeff

On Wed, Aug 12, 2020 at 1:51 PM Jim Dinan  wrote:

> IIRC, the argument for RMA was that we had not wanted to require the MPI
> library to maintain a reference to the request. There could be many RMA
> operations pending and the MPI library maintaining a list of pending
> operations could be a significant source of overhead. I seem to recall this
> having been the general sentiment at the time and it may have also applied
> to NBC.
>
>  ~Jim.
>
> On Sat, Aug 8, 2020 at 7:15 PM Jeff Hammond via mpi-forum <
> mpi-forum@lists.mpi-forum.org> wrote:
>
>> We should fix the RMA chapter with an erratum. I care less about NBC but
>> share your ignorance of why it was done that way.
>>
>> Sent from my iPhone
>>
>> On Aug 8, 2020, at 6:51 AM, Balaji, Pavan via mpi-forum <
>> mpi-forum@lists.mpi-forum.org> wrote:
>>
>>  Folks,
>>
>> Does someone remember why we disallowed users from calling
>> MPI_Request_free on nonblocking collective requests?  I remember the
>> reasoning for not allowing cancel (i.e., the operation might have completed
>> on some processes, but not all), but not for Request_free.  AFAICT,
>> allowing the users to free the request doesn’t make any difference to the
>> MPI library.  The MPI library would simply maintain its own refcount to the
>> request and continue forward till the operation completes.  One of our
>> users would like to free NBC requests so they don’t have to wait for the
>> operation to complete in some situations.
>>
>> Unfortunately, when I added the Rput/Rget operations in the RMA chapter,
>> I copy-pasted that text into RMA as well without thinking too hard about
>> it.  My bad!  Either the RMA committee missed it too, or they thought of a
>> reason that I can’t think of now.
>>
>> Can someone clarify or remind me what the reason was?
>>
>> Regards,
>>
>>   — Pavan
>>
>> MPI-3.1 standard, page 197, lines 26-27:
>>
>> “It is erroneous to call MPI_REQUEST_FREE or MPI_CANCEL for a request
>> associated with a nonblocking collective operation.”
>>
>> ___
>> mpi-forum mailing list
>> mpi-forum@lists.mpi-forum.org
>> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum
>>
>> ___
>> mpi-forum mailing list
>> mpi-forum@lists.mpi-forum.org
>> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum
>>
>

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] MPI_Request_free restrictions

2020-08-08 Thread Jeff Hammond via mpi-forum
Correct, but that’s not what I was suggesting. I was saying that the root alone 
could track completion of any number of collectives individually and then 
broadcast notice of those completions to all ranks.

This is but one example use case where freeing requests associated with NBCs 
makes sense.

Jeff

Sent from my iPhone

> On Aug 8, 2020, at 2:14 PM, Richard Graham  wrote:
> 
> 
> The non-blocking reductions are required to start in the same order, 
> completion order is not specified.  So I don’t think you can infer from the 
> completion of the last non-blocking operation (ibarrier in this case), that 
> all others have completed.
>  
> Also, completion is local, and even in a synchronized operation like a 
> reduction, not all ranks will complete at exactly the same time.  All have to 
> enter the operation before any can complete, but the fact that one completed 
> does not guarantee the other have.
>  
> Rich
>  
> From: mpi-forum  On Behalf Of Jeff 
> Hammond via mpi-forum
> Sent: Saturday, August 8, 2020 3:56 PM
> To: Main MPI Forum mailing list 
> Cc: Jeff Hammond 
> Subject: Re: [Mpi-forum] MPI_Request_free restrictions
>  
> External email: Use caution opening links or attachments
>  
> The argument that there is no portable way to detect completion is false.  It 
> is completely portable to detect completion using only a subset of processes, 
> and there may even be use cases for it.
>  
> For example, an application can use Ibarrier to determine whether all 
> processes have reached a phase of the program but only detect this at one 
> rank.
>  
> MPI_Request req; 
> {
>   foo();
>   MPI_Ibarrier(comm, &req);
>   if (rank) MPI_Request_free(&req);
>   ..
> }
> if (!rank) {
>   MPI_Wait(&req, MPI_STATUS_IGNORE);
>   printf("foo finished everywhere\n");
> }
>  
> Similarly, one might have a bunch of Ireduce operations where the root is the 
> only rank that tests completion, after which it notifies all processes that 
> the bunch has completed.
>  
> Are there any implementations that can't trivially support this?  I recall 
> that MPICH implements requests as nothing more than pointers to internal 
> state, and I assume that Open-MPI does something similar.
>  
> Jeff
>  
> On Sat, Aug 8, 2020 at 9:45 AM Bangalore, Purushotham via mpi-forum 
>  wrote:
> I see discussion of this issue here:
>  
> https://github.com/mpi-forum/mpi-forum-historic/issues/83
>  
> Puri
> From: mpi-forum  on behalf of Balaji, 
> Pavan via mpi-forum 
> Sent: Saturday, August 8, 2020 8:51 AM
> To: mpi-forum@lists.mpi-forum.org 
> Cc: Balaji, Pavan 
> Subject: [Mpi-forum] MPI_Request_free restrictions
>  
> Folks,
>  
> Does someone remember why we disallowed users from calling MPI_Request_free 
> on nonblocking collective requests?  I remember the reasoning for not 
> allowing cancel (i.e., the operation might have completed on some processes, 
> but not all), but not for Request_free.  AFAICT, allowing the users to free 
> the request doesn’t make any difference to the MPI library.  The MPI library 
> would simply maintain its own refcount to the request and continue forward 
> till the operation completes.  One of our users would like to free NBC 
> requests so they don’t have to wait for the operation to complete in some 
> situations.
>  
> Unfortunately, when I added the Rput/Rget operations in the RMA chapter, I 
> copy-pasted that text into RMA as well without thinking too hard about it.  
> My bad!  Either the RMA committee missed it too, or they thought of a reason 
> that I can’t think of now.
>  
> Can someone clarify or remind me what the reason was?
>  
> Regards,
>  
>   — Pavan
>  
> MPI-3.1 standard, page 197, lines 26-27:
>  
> “It is erroneous to call MPI_REQUEST_FREE or MPI_CANCEL for a request 
> associated with a nonblocking collective operation.”
>  
> ___
> mpi-forum mailing list
> mpi-forum@lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum
> 
>  
> --
> Jeff Hammond
> jeff.scie...@gmail.com
> http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] MPI_Request_free restrictions

2020-08-08 Thread Jeff Hammond via mpi-forum
The argument that there is no portable way to detect completion is false.
It is completely portable to detect completion using only a subset of
processes, and there may even be use cases for it.

For example, an application can use Ibarrier to determine whether all
processes have reached a phase of the program but only detect this at one
rank.

MPI_Request req;
{
  foo();
  MPI_Ibarrier(comm, &req);
  if (rank) MPI_Request_free(&req);
  ..
}
if (!rank) {
  MPI_Wait(&req, MPI_STATUS_IGNORE);
  printf("foo finished everywhere\n");
}

Similarly, one might have a bunch of Ireduce operations where the root is
the only rank that tests completion, after which it notifies all processes
that the bunch has completed.
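
A rough sketch of that pattern (my own illustration of the usage being argued
for, which MPI-3.1 currently forbids; assumes nreq Ireduce operations were
already started into reqs[]):

if (rank == root) {
   MPI_Waitall(nreq, reqs, MPI_STATUSES_IGNORE);   /* root alone detects completion */
} else {
   for (int i = 0; i < nreq; i++)
      MPI_Request_free(&reqs[i]);                  /* other ranks never test the requests */
}
int done = 1;
MPI_Bcast(&done, 1, MPI_INT, root, comm);          /* root notifies everyone afterwards */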

Are there any implementations that can't trivially support this?  I recall
that MPICH implements requests as nothing more than pointers to internal
state, and I assume that Open-MPI does something similar.

Jeff

On Sat, Aug 8, 2020 at 9:45 AM Bangalore, Purushotham via mpi-forum <
mpi-forum@lists.mpi-forum.org> wrote:

> I see discussion of this issue here:
>
> https://github.com/mpi-forum/mpi-forum-historic/issues/83
>
> Puri
> --
> *From:* mpi-forum  on behalf of
> Balaji, Pavan via mpi-forum 
> *Sent:* Saturday, August 8, 2020 8:51 AM
> *To:* mpi-forum@lists.mpi-forum.org 
> *Cc:* Balaji, Pavan 
> *Subject:* [Mpi-forum] MPI_Request_free restrictions
>
> Folks,
>
> Does someone remember why we disallowed users from calling
> MPI_Request_free on nonblocking collective requests?  I remember the
> reasoning for not allowing cancel (i.e., the operation might have completed
> on some processes, but not all), but not for Request_free.  AFAICT,
> allowing the users to free the request doesn’t make any difference to the
> MPI library.  The MPI library would simply maintain its own refcount to the
> request and continue forward till the operation completes.  One of our
> users would like to free NBC requests so they don’t have to wait for the
> operation to complete in some situations.
>
> Unfortunately, when I added the Rput/Rget operations in the RMA chapter, I
> copy-pasted that text into RMA as well without thinking too hard about it.
> My bad!  Either the RMA committee missed it too, or they thought of a
> reason that I can’t think of now.
>
> Can someone clarify or remind me what the reason was?
>
> Regards,
>
>   — Pavan
>
> MPI-3.1 standard, page 197, lines 26-27:
>
> “It is erroneous to call MPI_REQUEST_FREE or MPI_CANCEL for a request
> associated with a nonblocking collective operation.”
>
> ___
> mpi-forum mailing list
> mpi-forum@lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum
>


-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] MPI_Request_free restrictions

2020-08-08 Thread Jeff Hammond via mpi-forum
We should fix the RMA chapter with an erratum. I care less about NBC but share 
your ignorance of why it was done that way. 

Sent from my iPhone

> On Aug 8, 2020, at 6:51 AM, Balaji, Pavan via mpi-forum 
>  wrote:
> 
>  Folks,
> 
> Does someone remember why we disallowed users from calling MPI_Request_free 
> on nonblocking collective requests?  I remember the reasoning for not 
> allowing cancel (i.e., the operation might have completed on some processes, 
> but not all), but not for Request_free.  AFAICT, allowing the users to free 
> the request doesn’t make any difference to the MPI library.  The MPI library 
> would simply maintain its own refcount to the request and continue forward 
> till the operation completes.  One of our users would like to free NBC 
> requests so they don’t have to wait for the operation to complete in some 
> situations.
> 
> Unfortunately, when I added the Rput/Rget operations in the RMA chapter, I 
> copy-pasted that text into RMA as well without thinking too hard about it.  
> My bad!  Either the RMA committee missed it too, or they thought of a reason 
> that I can’t think of now.
> 
> Can someone clarify or remind me what the reason was?
> 
> Regards,
> 
>   — Pavan
> 
> MPI-3.1 standard, page 197, lines 26-27:
> 
> “It is erroneous to call MPI_REQUEST_FREE or MPI_CANCEL for a request 
> associated with a nonblocking collective operation.”
> 
> ___
> mpi-forum mailing list
> mpi-forum@lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


[Mpi-forum] adapting to the COVID-19 situation

2020-03-12 Thread Jeff Hammond via mpi-forum
I assume everyone is aware that all the events and all the travel are being
cancelled and why.  This includes standards body activities like ISO C++.
Many companies, including Intel, are strongly discouraging if not outright
banning certain classes of travel.

Given that it may be required to amend the MPI rules or obtain
extraordinary dispensation from the steering committee, it seems prudent to
start discussing our mitigation plans for the June meeting in Munich.

Jeff

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] Giving up on C11 _Generic

2019-08-08 Thread Jeff Hammond via mpi-forum
That you use code that does unsafe conversions really has nothing to do with 
the business of the MPI Forum. 

Again, if you think C89 is the best C, then use and teach that. No one here is 
trying to make you use C11.

Jeff

> On Aug 8, 2019, at 5:56 AM, N.M. Maclaren via mpi-forum 
>  wrote:
> 
>> On Aug 7 2019, Jeff Hammond via mpi-forum wrote:
>> 
>> "silently truncated at run time" is trivially addressed with -Wconversion
>> or -Wshorten-64-to-32.  The example program below is addressed by this.
> 
> Unfortunately, no.  While I have no trouble using such options on MY
> code, I have frequently found them unusable on imported packages, because
> of the flood of non-errors they generate.  For example, the following
> code is both common and reasonable, and includes narrowing conversions:
> 
>   int i;
>   i = sizeof(double);
>   float x;
>   x = 1.0;
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] Giving up on C11 _Generic

2019-08-07 Thread Jeff Hammond via mpi-forum
You can't do that forwarding for vectors of counts.
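
For example (my own illustration): MPI_Alltoallv takes int arrays of counts, and
an int array does not convert to an MPI_Count array, so a forwarding macro cannot
simply pass the same pointers to a hypothetical MPI_Alltoallv_x; the counts would
have to be widened element by element.

int sendcounts[4] = {1, 2, 3, 4};
MPI_Count big_sendcounts[4];
for (int i = 0; i < 4; i++)
   big_sendcounts[i] = sendcounts[i];   /* element-wise copy; arrays do not convert */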

On Wed, Aug 7, 2019 at 2:10 PM Jim Dinan via mpi-forum <
mpi-forum@lists.mpi-forum.org> wrote:

> Even simpler than this, we could just forward all calls to the MPI_Count
> interface (see below).  The int count argument should type convert to
> MPI_Count without issue.  Note that it still needs to be a function-like
> macro so that function pointers work.
>
> Don't give up yet!  :D
>
>  ~Jim.
>
> #include <stdio.h>
>
> typedef int MPI_Datatype;
> typedef int MPI_Comm;
>
> int MPI_Send(const void* buf, int count, MPI_Datatype datatype, int dest,
>              int tag, MPI_Comm comm)
> {
>     printf("MPI_Send(count = %d)\n", count);
>     return 0;
> }
>
> int MPI_Send_x(const void* buf, long long count, MPI_Datatype datatype,
>                int dest, int tag, MPI_Comm comm)
> {
>     printf("MPI_Send_x(count = %lld)\n", count);
>     return 0;
> }
>
> #define MPI_Send(buf, count, datatype, dest, tag, comm) \
>         MPI_Send_x(buf, count, datatype, dest, tag, comm)
>
> int main(int argc, char *argv[]) {
>     /* 8589934592LL == 2^33 */
>     long long i = 8589934592LL + 11;
>     int ret;
>     int (*snd_ptr)(const void*, int, MPI_Datatype, int, int, MPI_Comm) = &MPI_Send;
>     ret = MPI_Send(NULL, i, 0, 0, 0, 0);
>     ret = MPI_Send(NULL, 5, 0, 0, 0, 0);
>     ret = (*snd_ptr)(NULL, i, 0, 0, 0, 0);
>     ret = (*snd_ptr)(NULL, 5, 0, 0, 0, 0);
>     return 0;
> }
>
> Output:
>
> MPI_Send_x(count = 8589934603)
> MPI_Send_x(count = 5)
> MPI_Send(count = 11)
> MPI_Send(count = 5)
> ___
> mpi-forum mailing list
> mpi-forum@lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum
>


-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] Giving up on C11 _Generic

2019-08-07 Thread Jeff Hammond via mpi-forum
I don't care that much about C11 _Generic, which is why I have always
focused on a C99 solution to the large-count problem, but I disagree with
your reasons for abandoning it.

"silently truncated at run time" is trivially addressed with -Wconversion
or -Wshorten-64-to-32.  The example program below is addressed by this.

$ clang -Wshorten-64-to-32 -c truncation.c
truncation.c:10:9: warning: implicit conversion loses integer precision:
'long long' to 'int' [-Wshorten-64-to-32]
foo(i);
~~~ ^
1 warning generated.

$ gcc-9 -Wconversion -c truncation.c
truncation.c: In function 'main':
truncation.c:10:9: warning: conversion from 'long long int' to 'int' may
change value [-Wconversion]
   10 | foo(i);
  | ^

$ icc -Wconversion -c truncation.c
truncation.c(10): warning #1682: implicit conversion of a 64-bit integral
type to a smaller integral type (potential portability problem)
  foo(i);
  ^

In any case, incorrect programs are incorrect.  It is a
quality-of-implementation issue whether C compilers and MPI libraries
detect incorrect usage.  We have never designed MPI around people who can't
follow directions and we should not start now.

Jeff

On Wed, Aug 7, 2019 at 6:59 AM Jeff Squyres (jsquyres) via mpi-forum <
mpi-forum@lists.mpi-forum.org> wrote:

> SHORT VERSION
> =
>
> Due to the possibility of silently introducing errors into user
> applications, the BigCount WG no longer thinks that C11 _Generic is a good
> idea.  We are therefore dropping that from our proposal.  The new proposal
> will therefore essentially just be the addition of a bunch of
> MPI_Count-enabled "_x" functions in C, combined with the addition of a
> bunch of polymorphic MPI_Count-enabled interfaces in Fortran.
>
> MORE DETAIL
> ===
>
> Joseph Schuchart raised a very important point in a recent mailing thread:
> the following C/C++ code does not raise a compiler warning:
>
> -
> #include <stdio.h>
>
> static void foo(int j) {
> printf("foo(j) = %d\n", j);
> }
>
> int main(int argc, char *argv[]) {
> /* 8589934592LL == 2^33 */
> long long i = 8589934592LL + 11;
> foo(i);
> return 0;
> }
> -
>
> If you compile and run this program on a commodity x86-64 platform, a) you
> won't get a warning from the compiler, and b) you'll see "11" printed out.
> I tried with gcc 9 and clang 8 -- both with the C and C++ compilers.  I
> even tried with "-Wall -pedantic".  No warnings.
>
> This is because casting from a larger int type to a smaller int type is
> perfectly valid C/C++.
>
> Because of this, there is a possibility that we could be silently
> introducing errors into user applications.  Consider:
>
> 1. An application upgrades its "count" parameters to type MPI_Count for
> all calls to MPI_Send.
>--> Recall that "MPI_Count" already exists in MPI-3.1, and is likely of
> type (long long) on commodity x86-64 platforms
> 2. The application then uses values in that "count" parameter that are
> greater than 2^32.
>
> If the user's MPI implementation and compiler both support C11 _Generic,
> everything is great.
>
> But if either the MPI implementation or the compiler do not support C11
> _Generic, ***the "count" value will be silently truncated at run time***.
>
> This seems like a very bad idea, from a design standpoint.
>
> We have therefore come full circle: we are back to adding a bunch of "_x"
> functions for C, and there will be no polymorphism (in C).  Sorry, folks.
>
> Note that Fortran does not have similar problems:
>
> 1. Fortran compilers have supported polymorphism for 20+ years
> 2. Fortran does not automatically cast between INTEGER values of different
> sizes
>
> After much debate, the BigCount WG has decided that C11 _Generic just
> isn't worth it.  That's no reason to penalize Fortran, though.
>
> --
> Jeff Squyres
> jsquy...@cisco.com
>
> ___
> mpi-forum mailing list
> mpi-forum@lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum
>


-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] "BigCount" rendering in PDF

2019-08-01 Thread Jeff Hammond via mpi-forum
Are you saying that a C99 compiler won’t complain if the user passes a 64b
int to a 32b int argument? That’s a pretty stupid compiler if you ask me.

I’m fine with putting MPI C11 in a separate header that can #error if C11
isn’t supported. That’s a pretty obvious user experience win that costs
nothing. We are basically doing that for Fortran already.
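
A minimal sketch of that guard (my own naming; "mpi_c11.h" is not a real header):

#if !defined(__STDC_VERSION__) || __STDC_VERSION__ < 201112L
#error "mpi_c11.h requires a C11 compiler (_Generic is needed for the count overloads)"
#endif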

Maclaren can just skip C11 and teach MPI in Fortran 2008 or C99 with the _x
symbols if C11 is too scary.

In any case, we can’t design MPI around ignorant users who don’t read about
features they’re using. Doing so makes it impossible to do anything at all
because there’s always somebody too ignorant to use some feature correctly,
and the union of the ignorance covers all nontrivial changes.

Jeff

On Thu, Aug 1, 2019 at 7:28 AM Joseph Schuchart via mpi-forum <
mpi-forum@lists.mpi-forum.org> wrote:

> I think the point he wanted to make was that you won't see a
> compile-time error if you /think/ you're using the MPI_Count overloads
> but are in fact not, i.e., you are modernizing a legacy code base that
> is stuck in the nineties and you introduce MPI_Count for size arguments
> because the standard says you may do so now without a clear warning that
> this is only possible if you bump to C11. The compiler won't warn you,
> the MPI standard may mention it in a subsection somewhere, which of
> course you didn't read because you haven't had the time to read all ~1k
> pages of 4.0, yet. Having a clear note attached to listings that these
> overloads are constrained to C11 may help reduce the risk of introducing
> another caveat. MPI developers may be (more) familiar with the
> differences between C99 and C11 but the average developer of HPC
> software likely is not. And there are plenty of codes out there that
> constrain themselves to ancient language versions, for reasons unknown
> to me...
>
> Joseph
>
> On 8/1/19 12:56 PM, Jeff Hammond via mpi-forum wrote:
> > That’s why there will be C90/C99 compatible symbols as well. If you
> don’t like C11, don’t use it. Nothing will happen. BigCount will still work.
> >
> > C11 has been the default in GCC and Clang for a while. What compilers
> are going to limit users to C99 for years to come?
> >
> > Jeff
> >
> >> On Aug 1, 2019, at 3:23 AM, N.M. Maclaren via mpi-forum <
> mpi-forum@lists.mpi-forum.org> wrote:
> >>
> >>> On Jul 30 2019, Jeff Squyres (jsquyres) via mpi-forum wrote:
> >>>
> >>> B. C11 _Generic polymorphism kinda sucks, *and* we're in a transition
> period where not all C compilers are C11-capable. Hence, we're exposing up
> to *3* C bindings per MPI procedure to applications (including explicitly
> exposing the "_x" variant where relevant).
> >>
> >> This following point may have already been debated, so please excuse me
> >> if it has.  Let's skip the politics, for the sake of all our blood pressures.
> >>
> >> The executive summary is that the failure mode pointed out by Joseph
> >> Schuchart is going to be a very, very serious problem for BigCount
> users,
> >> and continue for the forseeable future.  I would guess until at least
> >> 2025, and possibly long after that.
> >>
> >> Speaking as someone who may be teaching MPI programming again, with an
> >> emphasis on reliability and portability, I would almost certainly add
> >> warnings that would be, roughly: "Don't touch BigCount if you can find a
> >> way round it; and be paranoid about it if you use it."  I already do
> that
> >> about I/O attributes, for similar reasons.  That isn't good.
> >>
> >> I don't know how you would document that, but the MPI standard already
> >> has gotchas that aren't easy to find, and adding another one isn't good,
> >> either.
> >>
> >> The explanation:
> >>
> >> C99 was not received favourably by most of the C-using community (I
> don't
> >> mean compilers here).  I tracked a dozen important, active projects, and
> >> it was 2010/11 (yes, over a decade) before even half of them converted
> >> from C90 to C99 as a base.  I last checked a few years ago, but quite a
> >> few C99 features were still not reliably available in compilers; I know
> >> that many of the ones I found still aren't.  Courses are another
> problem,
> >> because they rarely include warnings about gotchas caused by standards
> >> differences (and there are lots between C90 and C99).
> >>
> >> I haven't tracked C11 as carefully, but the evidence I have seen is
> that it
> >

Re: [Mpi-forum] "BigCount" rendering in PDF

2019-08-01 Thread Jeff Hammond via mpi-forum
That’s why there will be C90/C99 compatible symbols as well. If you don’t like 
C11, don’t use it. Nothing will happen. BigCount will still work.

C11 has been the default in GCC and Clang for a while. What compilers are going 
to limit users to C99 for years to come?
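
A toy sketch of how the two can coexist (stand-in types and simplified argument
lists of my own, not real mpi.h contents): the explicit big-count symbol is
always present, and the C11 _Generic layer is only enabled when the compiler
supports it.

#include <stdio.h>

typedef long long MPI_Count;   /* stand-in typedef for illustration only */

int MPI_Send(const void *buf, int count)         { printf("int count: %d\n", count);   return 0; }
int MPI_Send_x(const void *buf, MPI_Count count) { printf("big count: %lld\n", count); return 0; }

#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L
#define MPI_Send(buf, count) _Generic((count), \
        MPI_Count: MPI_Send_x,                 \
        default:   MPI_Send)(buf, count)
#endif

int main(void) {
    MPI_Send(NULL, 5);                        /* int path on any compiler            */
    MPI_Send(NULL, (MPI_Count)8589934603LL);  /* big-count path under C11; truncates */
                                              /* silently on a pre-C11 build         */
    return 0;
}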

Jeff

> On Aug 1, 2019, at 3:23 AM, N.M. Maclaren via mpi-forum 
>  wrote:
> 
>> On Jul 30 2019, Jeff Squyres (jsquyres) via mpi-forum wrote:
>> 
>> B. C11 _Generic polymorphism kinda sucks, *and* we're in a transition period 
>> where not all C compilers are C11-capable. Hence, we're exposing up to *3* C 
>> bindings per MPI procedure to applications (including explicitly exposing 
>> the "_x" variant where relevant).
> 
> This following point may have already been debated, so please excuse me if it
> has.  Let's skip the politics, for the sake of all our blood pressures.
> 
> The executive summary is that the failure mode pointed out by Joseph
> Schuchart is going to be a very, very serious problem for BigCount users,
> and continue for the forseeable future.  I would guess until at least
> 2025, and possibly long after that.
> 
> Speaking as someone who may be teaching MPI programming again, with an
> emphasis on reliability and portability, I would almost certainly add
> warnings that would be, roughly: "Don't touch BigCount if you can find a
> way round it; and be paranoid about it if you use it."  I already do that
> about I/O attributes, for similar reasons.  That isn't good.
> 
> I don't know how you would document that, but the MPI standard already
> has gotchas that aren't easy to find, and adding another one isn't good,
> either.
> 
> The explanation:
> 
> C99 was not received favourably by most of the C-using community (I don't
> mean compilers here).  I tracked a dozen important, active projects, and
> it was 2010/11 (yes, over a decade) before even half of them converted
> from C90 to C99 as a base.  I last checked a few years ago, but quite a
> few C99 features were still not reliably available in compilers; I know
> that many of the ones I found still aren't.  Courses are another problem,
> because they rarely include warnings about gotchas caused by standards
> differences (and there are lots between C90 and C99).
> 
> I haven't tracked C11 as carefully, but the evidence I have seen is that it
> received even less interest and acceptance than C99, so people are going
> to be using C99 compilers for a LONG time yet.  There is also the problem
> that C is not a language that is upwards compatible between versions, but
> C++ takes more notice, so C++ compilers' C support (which is arguably more
> important than direct C support, because people call MPI using C++'s C
> interface) is often in conflict.  This is almost certainly a case where
> that will be true, but it may not affect these interfaces - I can't say.
> 
> The result is that C code (and, worse, libraries) often require specific
> versions (i.e. not just 'any standard later than'). I agree that it looks
> likely that generic interfaces are going to be one of the more widely
> implemented parts of C99, but don't discount the problem of people using
> multiple libraries where other ones constrain the C version for other
> reasons.
> 
> ___
> mpi-forum mailing list
> mpi-forum@lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] "BigCount" rendering in PDF

2019-07-31 Thread Jeff Hammond via mpi-forum



> On Jul 31, 2019, at 9:50 AM, Jeff Squyres (jsquyres)  
> wrote:
> 
>> On Jul 31, 2019, at 12:14 PM, Jeff Hammond  wrote:
>> 
>>> You're ignoring the long tail of consequences here -- what about 
>>> PMPI/tools?  What about other C++ features that we should be using, too?  
>>> ...?
>> 
>> No scope creep. No slippery slope. Do the one thing we need to do and stop. 
>> Leave the rest for MPI-5. 
> 
> So PMPI/tools are out of scope?
> 

“C++ compilers shall produce the same result as C11 generic.” Why does this 
need to say anything different for profiling and tools? Is this impossible?

> Looking forward to your pull request.

I won’t lose any sleep if we don’t get both C11 and C++ overloads. I’m just 
saying it shouldn’t be hard to get C++ if we do C11.

Jeff
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] "BigCount" rendering in PDF

2019-07-31 Thread Jeff Hammond via mpi-forum
> 
>> If you don’t say C++, there’s no reason OMPI and MPICH can’t do the obvious, 
>> trivial and intelligent thing.
> 
> I guess I disagree with all three of those hyperbolic assertions.  :-)
> 
> You're ignoring the long tail of consequences here -- what about PMPI/tools?  
> What about other C++ features that we should be using, too?  ...?

No scope creep. No slippery slope. Do the one thing we need to do and stop. 
Leave the rest for MPI-5. 

Jeff
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] "BigCount" rendering in PDF

2019-07-31 Thread Jeff Hammond via mpi-forum
It’s a long email to read on my phone while on vacation. 

You just need a sentence that says C++ compilers support C bindings, including 
the C11 generic stuff, just using a very different mechanism. Is that going to 
delay MPI-4?

In any case, all the issues with polymorphism are exactly why it’s so important 
to have explicit symbols for C89/C99 usage so that implementations can add 
extensions that do the polymorphism stuff if it doesn’t get voted in. If you 
don’t say C++, there’s no reason OMPI and MPICH can’t do the obvious, trivial 
and intelligent thing.

Jeff

Sent from my iPhone

> On Jul 31, 2019, at 8:03 AM, Jeff Squyres (jsquyres)  
> wrote:
> 
>> On Jul 31, 2019, at 10:52 AM, Jeff Hammond  wrote:
>> 
>> You’re going to have to mention C++. You can’t just pretend that C++ 
>> supports C11 generic, because it explicitly doesn’t.
> 
> We are mentioning C++.  Please re-read my prior email.  
> 
>> And you really should do this because it’s ridiculous not to use C++ 
>> polymorphism if we use C11’s.
> 
> There are three options:
> 
> 1. Re-introduce C++ bindings, delay MPI-4.
> 2. Re-introduce C++ bindings, BigCount misses the MPI-4 train.
> 4. Do not re-introduce C++ bindings, BigCount catches the MPI-4 train.
> 
> The feedback from the Forum was that BigCount was a blocker/gating issue for 
> MPI-4.  Hence, this is why the BigCount WG is not planning at this time to 
> re-introduce C++ bindings via BigCount.
> 
> There is a longer term plan (think: MPI-5) to introduce a full-featured set 
> of C++ bindings to MPI -- one that does not necessarily have a 1:1 
> correspondence to the C bindings.  That is a different, much longer effort, 
> and will definitely not make it into MPI-4.
> 
> -- 
> Jeff Squyres
> jsquy...@cisco.com
> 
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] "BigCount" rendering in PDF

2019-07-31 Thread Jeff Hammond via mpi-forum
You’re going to have to mention C++. You can’t just pretend that C++ supports 
C11 generic, because it explicitly doesn’t. And you really should do this 
because it’s ridiculous not to use C++ polymorphism if we use C11’s.

Jeff

> On Jul 31, 2019, at 6:14 AM, Jeff Squyres (jsquyres) via mpi-forum 
>  wrote:
> 
>> On Jul 31, 2019, at 4:31 AM, Joseph Schuchart via mpi-forum 
>>  wrote:
>> 
>> Should we mark in the interface the fact that the MPI_Count overloads are 
>> only available in C11? I'm thinking about something similar to 
>> cppreference's distinction between C/C++ standard versions, e.g.,
>> 
>> 
>> ```
>> int MPI_Send(const void *buf, MPI_Count count, MPI_Datatype datatype, int 
>> dest, int tag, MPI_Comm comm) [>C11|C++]
>> ```
> 
> These are exactly the kind of discussions that I'd like to have before the 
> September meeting: what is the best way to render the output to convey the 
> information in an aesthetic way?  I'm terrible at this kind of stuff.
> 
> -
> 
> That being said, I would not want to mark anything as "C++", because the MPI 
> spec does not explicitly support C++ at all.  The text that will support this 
> ticket will only have an Advice to Implementors for those implementations who 
> want to continue to have an <mpi.h> that supports both C and C++.  Per 
> feedback we got in the Virtual Meeting last week, the advice is to *not* use 
> C++ polymorphism, and, instead, treat C++ as a complier that does not support 
> C11 _Generic (i.e., the MPI_Count-enabled version of MPI_Foo() will simply 
> not be available -- see Froozle).
> 
> The rationale here is that if an implementation supports C++ polymorphism, 
> there are three options:
> 
> 1. The standard needs to make official statements about C++, which basically 
> re-introduces formal C++ support in the MPI spec (which is a Big Deal)
> 2. PMPI-enabled tools will need to support non-standard C++ polymorphism in 
> order to guarantee that they can intercept all MPI APIs when the application 
> is written in C++
> 3. PMPI-enabled tools do not support intercepting all MPI APIs when the 
> application is written in C++
> 
> None of those 3 options are attractive.
> 
> -- 
> Jeff Squyres
> jsquy...@cisco.com
> 
> ___
> mpi-forum mailing list
> mpi-forum@lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum
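
A minimal sketch of the PMPI interception mentioned in the quoted message may
help here: a tool intercepts an MPI call by defining the MPI_ symbol itself
and forwarding to the PMPI_ entry point.  A C-only wrapper like the one below
never sees calls that a C++ application resolves to a non-standard C++
overload, which is the interception gap described above.

```c
/* Sketch of a PMPI wrapper: count bytes sent, then forward to the library. */
#include <mpi.h>
#include <stdio.h>

static long long bytes_sent = 0;

int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)
{
    int type_size;
    MPI_Type_size(datatype, &type_size);
    bytes_sent += (long long)count * type_size;
    return PMPI_Send(buf, count, datatype, dest, tag, comm);  /* real send */
}

int MPI_Finalize(void)
{
    printf("bytes sent by this rank: %lld\n", bytes_sent);
    return PMPI_Finalize();
}
```
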
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] Virtual Meeting Information Moving

2019-03-28 Thread Jeff Hammond via mpi-forum
I am happy to blame Outlook, but all of the following ICAL files show up as
the March 27 event when I open them.  Are others seeing the same thing?

April 24, 10 AM - 12 PM Central US  [Google Calendar] [ICAL]
Virtual Meeting: Cartesian Topologies / Sessions (Rolf / Dan)
Meeting Information / Registration

April 17, 10 AM - 12 PM Central US  [Google Calendar] [ICAL]
Virtual Meeting: Finepoints / Function Pointers (Ryan / Jeff)
Meeting Information / Registration

April 10, 10 AM - 11 AM Central US  [Google Calendar] [ICAL]
Virtual Meeting: Fault Tolerance (Aurelien)
Meeting Information / Registration

April 3, 10 AM - 12 PM Central US  [Google Calendar] [ICAL]
Virtual Meeting: Split Types / Sessions (Guillaume / Sessions)
Meeting Information / Registration

March 27, 10 AM - 12 PM Central US  [Google Calendar] [ICAL]
Virtual Meeting: MPI_T Events / Terms and Conventions (Marc-Andre / Puri)
Meeting Information / Registration


On Wed, Mar 27, 2019 at 12:21 PM Wesley Bland via mpi-forum <
mpi-forum@lists.mpi-forum.org> wrote:

> Hi folks,
>
> As you'll have noticed recently, the bots have become a major problem for
> our WebEx calls. Jeff and I got together and we think the best solution to
> try next is to move the WebEx information out from behind the reCAPTCHA and
> into the wiki for the GitHub repository.
>
> This now means that in order to get the WebEx information, you will need
> to be signed into GitHub and you will need to have been added to the MPI
> Forum organization (otherwise you will see a 404 page). Instructions for
> that are here:
>
>
> https://github.com/mpi-forum/mpi-issues/wiki/Access-to-the-MPI-Forum-private-repository
>
> Once you've done that and we've added you (if you haven't done this
> before, which most of you have), you should be able to go to the link that
> will be posted here:
>
> https://www.mpi-forum.org/meetings/
>
> One other thing that came up during the call today, as the world shifts in
> and out of Daylight Saving Time: if you want the best way to know what time
> our meetings are (in addition to knowing whether we have a meeting at all),
> we have a Google Calendar to which you can subscribe and always be up to
> date:
>
>
> https://calendar.google.com/calendar/ical/g5ms8r3iaj3u3un3cjndqmjbc0%40group.calendar.google.com/public
>
> If you have trouble getting in, please let me or Jeff Squyres know.
>
> Thanks,
> Wesley
> ___
> mpi-forum mailing list
> mpi-forum@lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum
>


-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] MPI International Survey

2018-12-06 Thread Jeff Hammond via mpi-forum
Please stop perpetuating the myth that MPI and PGAS are opposed by asking
questions that support this false dichotomy.

Lots of folks use MPI via other APIs, whether they be PETSc or Global
Arrays.  Thanks to MPI-3 RMA, PGAS can be just another abstraction layer on
top of MPI.

  Do you have any plan (to investigate) to switch from using MPI to using
  any other parallel language/library?
  - A PGAS language (UPC, Coarray Fortran, OpenSHMEM, XcalableMP, ...)

GCC/OpenCoarrays and Intel MPI both use MPI-3 RMA as a/the communication
layer.  OSHMPI is very close to a 1:1 mapping between OpenSHMEM and MPI-3
RMA.  UPC could be ported to MPI-3 RMA with dynamic windows if someone
cared.
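
As a rough illustration of that layering, a shmem_putmem-style put can sit
directly on an MPI-3 window, as sketched below.  The my_shmem_* names are
invented for the sketch, and the real symmetric-heap bookkeeping that OSHMPI
does is elided.

```c
/* Sketch only: a shmem_putmem-style put layered on MPI-3 RMA. */
#include <mpi.h>
#include <stddef.h>

static MPI_Win  symm_win;   /* window over the "symmetric heap" */
static void    *symm_base;  /* local base address of that heap  */

void my_shmem_init(MPI_Aint heap_bytes)
{
    MPI_Win_allocate(heap_bytes, 1, MPI_INFO_NULL, MPI_COMM_WORLD,
                     &symm_base, &symm_win);
    /* Passive-target access to every rank for the life of the program. */
    MPI_Win_lock_all(MPI_MODE_NOCHECK, symm_win);
}

void my_shmem_putmem(void *dest, const void *src, size_t nbytes, int pe)
{
    /* Symmetric allocation means a locally computed offset is also the
     * correct displacement into the target's window. */
    MPI_Aint disp = (MPI_Aint)((char *)dest - (char *)symm_base);
    MPI_Put(src, (int)nbytes, MPI_BYTE, pe, disp,
            (int)nbytes, MPI_BYTE, symm_win);
    MPI_Win_flush_local(pe, symm_win);   /* local completion, like shmem_putmem */
}

void my_shmem_quiet(void)
{
    MPI_Win_flush_all(symm_win);         /* remote completion, like shmem_quiet */
}
```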

Jeff

On Thu, Dec 6, 2018 at 3:19 PM Atsushi HORI via mpi-forum <
mpi-forum@lists.mpi-forum.org> wrote:

> Hello, MPI Forum members,
>
> I, George Bosilca (UT/ICL), and Emmanuel Jeannot (Inria) are working on
> conducting an international MPI survey to reveal the differences among
> countries and/or regions around the world.  I had a chance to talk about this
> project at the MPI Forum meeting in San Jose, and many attendees agreed to
> help with our project. The attached PDF file is the one presented at
> the MPI Forum meeting.
>
> I would like you to go to the GoogleDoc (Google account is required)
>
>
> https://docs.google.com/forms/d/e/1FAIpQLSfWLWsuhr4opqvn1FL5c8p7Ysz-oclZEMlBAzEGnBZkaaIiKQ/viewform
>
> to fill in your answers.  At the end of the survey, I added some questions
> only for MPI forum members so that you can leave your comments on our
> survey.
>
> It is designed to be short and easy. I am quite sure you can answer all
> questions in 5 minutes. I hope some MPI Forum attendees can answer the
> survey at the airport while waiting for your flight.
>
> The deadline for MPI Forum members is next Tuesday, December 11.
>
> The production survey will hopefully be conducted in January 2019. The first
> draft report will be available to all of you in April 2019.
>
> -
> Atsushi HORI
> ah...@riken.jp
> https://www.sys.r-ccs.riken.jp
>
>
>
> ___
> mpi-forum mailing list
> mpi-forum@lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum
>


-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] Updates to the MPI implementations slide

2018-11-06 Thread Jeff Hammond via mpi-forum
What does it mean to implement MPI_Comm_create_group partially?

Jeff

> On Nov 6, 2018, at 5:41 PM, Balaji, Pavan via mpi-forum 
>  wrote:
> 
> All,
> 
> I just got a last-minute update from Fujitsu.  I've attached the slide with 
> this update.  Sorry for the change.
> 
> I also got a suggestion from Dan that the next version of this slide include 
> the voted-in MPI-4 features.  I haven't added those features in yet, but I 
> plan to do that for the June 2019 version.
> 
> Regards,
> 
>   -- Pavan
> 
> 
> 
> > On Nov 6, 2018, at 2:41 PM, Balaji, Pavan  wrote:
> > 
> > All,
> > 
> > Based on the updates so far, please use the attached MPI implementations 
> > slide for your talks.  I've also added today's date on the slide because 
> > there seem to be multiple versions of this slide flying around.
> > 
> > Regards,
> > 
> >   -- Pavan
> > 
> > 
> > 
> > > On Nov 2, 2018, at 11:48 AM, Balaji, Pavan via mpi-forum 
> > >  wrote:
> > > 
> > > All,
> > > 
> > > It's the time of the year again.  If you have any updates to the MPI 
> > > implementations slide, please send it to me, so we can use it at the MPI 
> > > Forum and other MPI events during SC.  This slide is also publicized on 
> > > the Forum website.
> > > 
> > > I've attached the current version.  The following implementations are 
> > > incomplete with respect to MPI-3.1, so if you have any updates, please 
> > > let me know: TH-MPI, IBM BG/Q MPI, IBM PE MPI, Fujitsu MPI, MS MPI, MPC, 
> > > Sunway MPI, and AMPI.
> > > 
> > > Regards,
> > > 
> > >  -- Pavan
> > > 
> > > ___
> > > mpi-forum mailing list
> > > mpi-forum@lists.mpi-forum.org
> > > https://lists.mpi-forum.org/mailman/listinfo/mpi-forum
> > 
> > 
> 
> 
> ___
> mpi-forum mailing list
> mpi-forum@lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum
___
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum