Hi Jerry,

I cannot compile Toon's example. I run into two issues:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=120843

and

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=120847

Can you reproduce?

- Andre

On Thu, 26 Jun 2025 16:49:59 -0700
Jerry Delisle <jvdelis...@gmail.com> wrote:

> Toon, thank you! I will give it a try here so we can have some data points.
> 
> Jerry
> 
> On Thu, Jun 26, 2025, 2:08 PM Toon Moene <t...@moene.org> wrote:
> 
> > On 6/26/25 21:34, Thomas Koenig wrote:
> >  
> > > Am 26.06.25 um 10:15 schrieb Andre Vehreschild:  
> >  
> > >> Hi Thomas,
> > >>  
> > >>> I have a few questions.
> > >>>
> > >>> First, I see that your patch series does not use gfortran's descriptors
> > >>> for accessing coarrays via shared memory, as the original work by
> > >>> Nicolas did.  Can you comment on that?  
> > >>
> > >> The ABI for invoking coarray functionality is sufficient for the
> > >> job. Modifying the compiler to access coarrays directly, i.e.,
> > >> baking the implementation details of one particular library into
> > >> the compiler, did not appeal to me. Furthermore, the new library,
> > >> together with the existing one, has the potential to converge on
> > >> a stable and maintained ABI. Adding another ABI in the compiler
> > >> would have led to two badly maintained ones (in my opinion). I
> > >> therefore decided to stick with a single ABI, since everything
> > >> that is needed can be done in a library. This also allows
> > >> link-time polymorphism (see the sketch below). And last but not
> > >> least, there is a major rework of the array descriptor under way,
> > >> which could have conflicted with my work.
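> > >>
> > >> To make "link-time polymorphism" concrete, here is a minimal
> > >> sketch (not from the patch series; exact link lines vary by
> > >> installation, and OpenCoarrays usually hides them behind its
> > >> "caf" compiler wrapper):
> > >>
> > >>   ! hello_caf.f90 -- the compiler only emits calls into the
> > >>   ! coarray ABI; which runtime serves them is decided when the
> > >>   ! program is linked, not when it is compiled.
> > >>   program hello_caf
> > >>     implicit none
> > >>     sync all
> > >>     print *, 'image', this_image(), 'of', num_images()
> > >>   end program hello_caf
> > >>
> > >>   ! gfortran -fcoarray=lib hello_caf.f90 -lcaf_single  (single-image stub)
> > >>   ! gfortran -fcoarray=lib hello_caf.f90 -lcaf_mpi     (OpenCoarrays/MPI)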
> > >
> > > We are very probably not going to get performance out of it that is
> > > comparable with the original design; I would be quite surprised if it
> > > were appreciably better than shared-memory MPI, and in that
> > > case I don't see an advantage of your patch over a better wrapper.  
> >
> > When I learned about the "Koenig" implementation of "native
> > coarrays" in 2018 (that is what they called it at the time, and I
> > have since noticed that Intel uses the same term), I wrote a "mock
> > weather forecasting program" using coarrays to test it against the
> > then-working OpenCoarrays implementation, which uses MPI calls.
> >
> > You can find the program here: https://moene.org/~toon/random-weather
> >
> > [ Note that I kept improving this program until early 2021. ]
> >
> > When I compared the run times of the two implementations with the
> > same input parameters on an Intel machine with 128 GB of RAM, the
> > "native" implementation was around a factor of 5 faster. Of course,
> > the OpenCoarrays-based MPI implementation (using OpenMPI) used
> > shared-memory MPI (which OpenMPI calls "vader", for reasons that
> > escape me).
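> >
> > For readers who want a feel for the coarray style such a program
> > uses, here is a minimal sketch (made up for illustration, not taken
> > from random-weather itself):
> >
> >   program halo_sketch
> >     implicit none
> >     real :: strip(100)[*]   ! one strip of the domain per image
> >     integer :: left
> >     strip = real(this_image())
> >     sync all                ! make every image's strip visible
> >     ! read the left neighbour's strip, wrapping around at image 1
> >     left = merge(num_images(), this_image() - 1, this_image() == 1)
> >     print *, 'image', this_image(), 'sees', strip(1)[left]
> >   end program halo_sketch
> >
> > Whether strip(1)[left] turns into a plain shared-memory load or an
> > MPI transfer is exactly the difference between the implementations
> > compared above.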
> >
> > So I am certainly interested to compare Andre's implementation against
> > OpenCoarrays.
> >
> > Kind regards,
> >
> > --
> > Toon Moene - e-mail: t...@moene.org - phone: +31 346 214290
> > Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
> >  


-- 
Andre Vehreschild * Email: vehre ad gmx dot de 
