Re: [Mpi-forum] why do we only support caching on win/comm/datatype?

2023-01-16 Thread Jed Brown via mpi-forum
Second that MPI attributes do not suck. PETSc uses communicator attributes 
heavily to avoid lots of confusing or wasteful behavior when users pass 
communicators between libraries and similar comments would apply if other MPI 
objects were passed between libraries in that way.

It was before my time, but I think PETSc's use of attributes predates MPI-1.0 
and MPI's early and pervasive support for attributes is one of the things I 
celebrate when discussing software engineering of libraries intended for use by 
other libraries versus those made for use by applications. Please don't dismiss 
attributes even if you don't enjoy them.
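For context, the caching pattern a PETSc-style library uses looks roughly like this (a minimal sketch, not PETSc's actual code; the names lib_keyval/lib_get_state are made up, error checking is omitted, and this assumes MPI is already initialized):

```c
#include <mpi.h>
#include <stdlib.h>

/* The keyval identifies this library's attribute slot on any communicator;
 * it is created once, on first use. */
static int lib_keyval = MPI_KEYVAL_INVALID;

static int lib_delete_attr(MPI_Comm comm, int keyval, void *attr, void *extra)
{
  (void)comm; (void)keyval; (void)extra;
  free(attr);               /* tear down cached state when the comm is freed */
  return MPI_SUCCESS;
}

/* Return this library's cached per-communicator state, creating it on
 * first use.  This is the void* casting dance the thread refers to. */
static int lib_get_state(MPI_Comm comm, void **state)
{
  int flag;
  if (lib_keyval == MPI_KEYVAL_INVALID)
    MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, lib_delete_attr,
                           &lib_keyval, NULL);
  MPI_Comm_get_attr(comm, lib_keyval, state, &flag);
  if (!flag) {              /* first call on this comm: cache fresh state */
    *state = calloc(1, 64); /* placeholder for the library's inner state */
    MPI_Comm_set_attr(comm, lib_keyval, *state);
  }
  return MPI_SUCCESS;
}
```

The delete callback is what makes the pattern safe when communicators are passed between libraries: the cached state dies with the communicator, not with the library that created it.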

Jeff Hammond via mpi-forum  writes:

> The API is annoying but it really only gets used in library middleware by 
> people like us who can figure out the void* casting nonsense and use it 
> correctly. 
>
> Casper critically depends on window attributes.
>
> Request attributes are the least intrusive way to allow libraries to do 
> completion callbacks. They give users a way to do this that adds zero 
> instructions to the critical path and is completely invisible unless actually 
> required. 
>
> Attributes do not suck and people should stop preventing those of us who 
> write libraries to make the MPI ecosystem better from doing our jobs because 
> they want to whine about problems they’re too lazy to solve. 
>
> I guess I’ll propose request and op attributes because I need them and people 
> can either solve those problems better ways or get out of the way. 
>
> Jeff
>
> Sent from my iPhone
>
>> On 16. Jan 2023, at 20.27, Holmes, Daniel John 
>>  wrote:
>> 
>> 
>> Hi Jeff,
>>  
>> When adding session as an object to MPI, a deliberate choice was made not to 
>> support attributes for session objects because “attributes in MPI suck”.
>>  
>> This decision was made despite the usage (by some tools) of “at exit” 
>> attribute callbacks fired by the destruction of MPI_COMM_SELF during 
>> MPI_FINALIZE in the world model and the consequent obvious omission of a 
>> similar hook during MPI_SESSION_FINALIZE in the session model (there is also 
>> no MPI_COMM_SELF in the session model, so this is not a simple subject).
>>  
>> Removal of attributes entirely – blocked by back-compat because usage is 
>> known to exist.
>> Expansion of attributes orthogonally – blocked by “attributes in MPI suck” 
>> accusations.
>>  
>> Result – inconsistency in the interface that no-one wants to tackle.
>>  
>> Best wishes,
>> Dan.
>>  
>> From: mpi-forum  On Behalf Of Jeff 
>> Hammond via mpi-forum
>> Sent: 16 January 2023 14:40
>> To: MPI Forum 
>> Cc: Jeff Hammond 
>> Subject: [Mpi-forum] why do we only support caching on win/comm/datatype?
>>  
>> I am curious if there is a good reason from the past as to why we only 
>> support caching on win, comm and datatype, and no other handles?
>>  
>> I have a good use case for request attributes and have found that the 
>> implementation overhead in MPICH appears to be zero.  The implementation in 
>> MPICH requires adding a single pointer to an internal struct.  This struct 
>> member will never be accessed except when the user needs it, and it can be 
>> placed at the end of the struct so that it doesn't even pollute the cache.
>>  
>> I wondered if callbacks were a hidden overhead, but they are only called 
>> explicitly and synchronously, so they would not interfere with critical-path 
>> uses of requests.
>>  
>> https://github.com/mpi-forum/mpi-issues/issues/664 has some details but 
>> since I do not understand how MPICH generates the MPI bindings, I only 
>> implemented the back-end MPIR code.
>>  
>> It would make MPI more consistent if all opaque handles supported 
>> attributes.  In particular, I'd love to have a built-in MPI_Op attribute for 
>> the function pointer the user provided (which is similar to how one can 
>> query input args associated with MPI_Win) because that appears to be the 
>> only way I can implement certain corner cases of MPI F08.
>>  
>> Thanks,
>>  
>> Jeff
>>  
>> --
>> Jeff Hammond
>> jeff.scie...@gmail.com
>> http://jeffhammond.github.io/
> ___
> mpi-forum mailing list
> mpi-forum@lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum


Re: [Mpi-forum] Giving up on C11 _Generic

2019-08-12 Thread Jed Brown via mpi-forum
"Jeff Squyres (jsquyres) via mpi-forum" writes:

> Let me ask a simple question: how will users write portable MPI programs 
> in C with large count values?
>
> Answer: they will explicitly call MPI_Send_x(), and not rely on C11 _Generic.

Few packages will accept a hard dependency on MPI-4 for at least 10
years.  MS-MPI still doesn't fully support MPI-2.1, for example, and
PETSc only recently began requiring MPI-2.0.

Instead, each package that wishes to upgrade to MPI_Count args will
write configure tests (autoconf, etc.) to detect availability of
MPI_Send_x (as individual functions, not as MPI_VERSION == 4) and define
macros/wrappers OurMPI_Send() that forward to MPI_Send_x (when
available) or MPI_Send (otherwise).  When the implementation doesn't
provide MPI_Send_x, they'll either have a smart wrapper that errors at
run time on truncation or (likely most apps) will fail silently, with an
FAQ entry that suggests using a compliant MPI-4 implementation.
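The fallback wrapper described above might look roughly like this (OurMPI_Send and the HAVE_MPI_SEND_X configure macro are illustrative names, and the signatures are abbreviated to buffer and count; stand-in stubs replace the real MPI entry points so the sketch compiles on its own):

```c
#include <limits.h>
#include <stdint.h>

/* Stand-ins for the real MPI symbols, so this sketch is self-contained.
 * In real code these come from <mpi.h>. */
typedef int64_t MPI_Count;

static int MPI_Send_stub(const void *buf, int count)
{ (void)buf; (void)count; return 0; }

static int MPI_Send_x_stub(const void *buf, MPI_Count count)
{ (void)buf; (void)count; return 0; }

#define MPI_ERR_COUNT 2   /* stand-in error code */

/* A configure script would define this when MPI_Send_x is detected. */
#define HAVE_MPI_SEND_X 1

#if defined(HAVE_MPI_SEND_X)
/* MPI-4 available: forward directly to the large-count entry point. */
static int OurMPI_Send(const void *buf, MPI_Count count)
{
  return MPI_Send_x_stub(buf, count);
}
#else
/* Pre-MPI-4: fall back to MPI_Send, erroring at run time on truncation. */
static int OurMPI_Send(const void *buf, MPI_Count count)
{
  if (count > INT_MAX) return MPI_ERR_COUNT;
  return MPI_Send_stub(buf, (int)count);
}
#endif
```

The application always calls OurMPI_Send with an MPI_Count; only the fallback branch has to check for truncation, and only when the configure test failed to find MPI_Send_x.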

I'm not taking a position on C11 _Generic in the standard, but it would
significantly reduce the configure complexity for apps to upgrade to
MPI_Count without dropping support for previous standards.
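For reference, the C11 _Generic dispatch under discussion can be sketched with stand-in functions (the names demo_Send, send_int, send_count, and MPI_Count_demo are all hypothetical; real MPI would declare MPI_Send and MPI_Send_x in <mpi.h>):

```c
#include <stdint.h>

typedef int64_t MPI_Count_demo;

/* Stand-ins for the classic and large-count bindings; each returns a
 * distinct value so the dispatch is observable. */
static int send_int(const void *buf, int count)
{ (void)buf; return count >= 0 ? 100 : -1; }

static int send_count(const void *buf, MPI_Count_demo count)
{ (void)buf; return count >= 0 ? 200 : -1; }

/* Dispatch on the static type of the count argument: an int selects the
 * classic binding, an MPI_Count selects the _x binding. */
#define demo_Send(buf, count) _Generic((count), \
    int:            send_int,                   \
    MPI_Count_demo: send_count)(buf, count)
```

Because the selection happens at compile time on the argument's type, no run-time check is involved; that is also why a count whose static type is int can silently take the narrow path, which is the concern raised below.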

> Which then raises the question: what's the point of using C11
> _Generic?  Its main functionality can lead to [potentially] silent
> run-time errors, and/or require additional error checking in every
> single code path by the implementation.  That just seems like bad
> design, IMNSHO.  That's why the WG decided to bring this to the Forum
> list (especially given the compressed timeframe for MPI-4).



Re: [Mpi-forum] "BigCount" rendering in PDF

2019-07-31 Thread Jed Brown via mpi-forum
"Jeff Squyres (jsquyres) via mpi-forum" writes:

> On Jul 31, 2019, at 12:59 PM, Jeff Hammond  wrote:
>> 
>> “C++ compilers shall produce the same result as C11 generic.” Why does this 
>> need to say anything different for profiling and tools? Is this impossible?
>
> Is there a way to have C++ overloading call the same symbols that we'll 
> dispatch to from C11 _Generic?  (i.e., not-symbol-munged MPI_Send and 
> MPI_Send_x)

You have MPI_Send and MPI_Send_x declared extern "C", so why not:

#ifdef __cplusplus
static inline int MPI_Send(const void *buf, MPI_Count count,
                           MPI_Datatype datatype, int dest, int tag,
                           MPI_Comm comm)
{
  return MPI_Send_x(buf, count, datatype, dest, tag, comm);
}
#endif

When compiled with any optimization, this yields a direct call to
MPI_Send_x.  In any case, it doesn't add any mangled symbols to libmpi.

Note that you can't have an overload of the extern "C" symbol MPI_Send
taking "int count", but the extern "C" version works just fine for that.

  C++14 §7.5.5: If two declarations declare functions with the same name
  and parameter-type-list (8.3.5) to be members of the same namespace or
  declare objects with the same name to be members of the same namespace
  and the declarations give the names different language linkages, the
  program is ill-formed