Dear all,
Thanks for the talk; it was amazing and, as always, I have learned a lot.



Diego


On 12 August 2018 at 02:02, Jeff Hammond <jeff.scie...@gmail.com> wrote:

> The MPI Forum email lists and GitHub are not secret.  Please feel free to
> follow the GitHub project linked below and/or sign up for the MPI Forum
> email lists if you are interested in the evolution of the MPI standard.
>
> What MPI Forum members should avoid is creating FUD about MPI by
> speculating about the removal of useful features.  There is plenty of time
> to have those debates in both public and private after formal proposals are
> made.
>
> Jeff
>
> On Fri, Aug 10, 2018 at 11:11 AM, Gus Correa <g...@ldeo.columbia.edu>
> wrote:
>
>> Hmmm ... no, no, no!
>> Why keep it secret!?
>>
>> Diego Avesani's questions and questioning
>> may have saved us users from getting a
>> useful feature deprecated in the name of code elegance.
>> Code elegance may be very cherished by developers,
>> but it is not necessarily helpful to users,
>> especially if it strips away useful functionality.
>>
>> My cheap 2 cents from a user.
>> Gus Correa
>>
>>
>> On 08/10/2018 01:52 PM, Jeff Hammond wrote:
>>
>>> This thread is a perfect illustration of why MPI Forum participants
>>> should not flippantly raise feature deprecation in discussions with
>>> users.  Users who are not familiar with the MPI Forum process cannot
>>> evaluate whether such proposals are serious or have any hope of
>>> succeeding, and may therefore worry unnecessarily about their code
>>> breaking in a future that is 5 to infinity years away.
>>>
>>> If someone wants to deprecate MPI_{MIN,MAX}LOC, they should start that
>>> discussion on https://github.com/mpi-forum/mpi-issues/issues or
>>> https://lists.mpi-forum.org/mailman/listinfo/mpiwg-coll.
>>>
>>> Jeff
>>>
>>> On Fri, Aug 10, 2018 at 10:27 AM, Jeff Squyres (jsquyres) via users <
>>> users@lists.open-mpi.org> wrote:
>>>
>>>     It is unlikely that MPI_MINLOC and MPI_MAXLOC will go away any time
>>>     soon.
>>>
>>>     As far as I know, Nathan hasn't advanced a proposal to kill them in
>>>     MPI-4, meaning that they'll likely continue to be in MPI for at
>>>     least another 10 years.  :-)
>>>
>>>     (And even if they did get killed in MPI-4, implementations like Open
>>>     MPI would likely continue to keep them for quite a while -- i.e.,
>>>     years.)
>>>
>>>
>>>      > On Aug 10, 2018, at 1:13 PM, Diego Avesani
>>>     <diego.aves...@gmail.com> wrote:
>>>      >
>>>      > I agree about the names; they are very similar to MINLOC and
>>>     MAXLOC in Fortran 90.
>>>      > However, I find it difficult to define an algorithm able to do the
>>>     same thing.
>>>      >
>>>      >
>>>      >
>>>      > Diego
>>>      >
>>>      >
>>>      > On 10 August 2018 at 19:03, Nathan Hjelm via users
>>>     <users@lists.open-mpi.org> wrote:
>>>      > They do not fit with the rest of the predefined operations (which
>>>     operate on a single basic type) and can easily be implemented as
>>>     user-defined operations with the same performance. Add to that the
>>>     fixed number of tuple types, the fact that some of them are
>>>     non-contiguous (MPI_SHORT_INT), and the terrible names. If I could
>>>     kill them in MPI-4, I would.
>>>      >
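>>>      > For example, a user-defined operation along these lines reproduces
>>>      > MAXLOC for a (value, rank) pair of doubles. This is only a minimal
>>>      > sketch: the names pair_max, custom_maxloc_demo and pairMaxOp, and
>>>      > the placeholder value assigned to eff, are illustrative rather than
>>>      > taken from this thread.
>>>      >
>>>      > ! pair_max combines (value, rank) pairs, keeping the pair with the
>>>      > ! larger value; on equal values it keeps the lower rank.
>>>      > SUBROUTINE pair_max(invec, inoutvec, n, dtype)
>>>      >   IMPLICIT NONE
>>>      >   INTEGER :: n, dtype, i
>>>      >   DOUBLE PRECISION :: invec(2, n), inoutvec(2, n)
>>>      >   DO i = 1, n
>>>      >     IF (invec(1, i) > inoutvec(1, i) .OR. &
>>>      >         (invec(1, i) == inoutvec(1, i) .AND. &
>>>      >          invec(2, i) < inoutvec(2, i))) THEN
>>>      >       inoutvec(1, i) = invec(1, i)
>>>      >       inoutvec(2, i) = invec(2, i)
>>>      >     END IF
>>>      >   END DO
>>>      > END SUBROUTINE pair_max
>>>      >
>>>      > PROGRAM custom_maxloc_demo
>>>      >   USE mpi
>>>      >   IMPLICIT NONE
>>>      >   EXTERNAL :: pair_max
>>>      >   INTEGER :: iErr, myRank, pairMaxOp
>>>      >   DOUBLE PRECISION :: eff, pair(2), result(2)
>>>      >
>>>      >   CALL MPI_INIT(iErr)
>>>      >   CALL MPI_COMM_RANK(MPI_COMM_WORLD, myRank, iErr)
>>>      >
>>>      >   eff = DBLE(MOD(myRank, 4))   ! placeholder for the real local value
>>>      >   pair(1) = eff                ! the value being maximized
>>>      >   pair(2) = DBLE(myRank)       ! the rank that owns it
>>>      >
>>>      >   CALL MPI_OP_CREATE(pair_max, .TRUE., pairMaxOp, iErr)  ! commutative
>>>      >   CALL MPI_ALLREDUCE(pair, result, 1, MPI_2DOUBLE_PRECISION, &
>>>      >                      pairMaxOp, MPI_COMM_WORLD, iErr)
>>>      >
>>>      >   IF (myRank == 0) WRITE(*,*) 'max =', result(1), 'on rank', &
>>>      >                               INT(result(2))
>>>      >
>>>      >   CALL MPI_OP_FREE(pairMaxOp, iErr)
>>>      >   CALL MPI_FINALIZE(iErr)
>>>      > END PROGRAM custom_maxloc_demo
>>>      >
>>>      > The tie-breaking test keeps the lowest rank, which matches the
>>>      > MPI_MAXLOC rule that ties resolve to the minimum index.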
>>>      > On Aug 10, 2018, at 9:47 AM, Diego Avesani
>>>     <diego.aves...@gmail.com> wrote:
>>>      >
>>>      >> Dear all,
>>>      >> I have just implemented MAXLOC; why should it go away?
>>>      >> It seems to be working pretty well.
>>>      >>
>>>      >> thanks
>>>      >>
>>>      >> Diego
>>>      >>
>>>      >>
>>>      >> On 10 August 2018 at 17:39, Nathan Hjelm via users
>>>     <users@lists.open-mpi.org> wrote:
>>>      >> The problem is that minloc and maxloc need to go away. It is better
>>>     to use a custom op.
>>>      >>
>>>      >> On Aug 10, 2018, at 9:36 AM, George Bosilca <bosi...@icl.utk.edu> wrote:
>>>      >>
>>>      >>> You will need to create a special variable that holds two
>>>     entries: one for the value being maximized (with whatever type you
>>>     need) and an int for the rank of the process. MPI_MAXLOC is described
>>>     on the Open MPI man page [1] and you can find an example of how to
>>>     use it on the MPI Forum website [2].
>>>      >>>
>>>      >>> George.
>>>      >>>
>>>      >>>
>>>      >>> [1] https://www.open-mpi.org/doc/v2.0/man3/MPI_Reduce.3.php
>>>      >>> [2] https://www.mpi-forum.org/docs/mpi-1.1/mpi-11-html/node79.html
>>>      >>>
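>>>      >>> For example, here is a minimal sketch using MPI_2DOUBLE_PRECISION
>>>      >>> (the program and variable names, and the placeholder value
>>>      >>> assigned to eff, are illustrative):
>>>      >>>
>>>      >>> PROGRAM maxloc_demo
>>>      >>>   USE mpi
>>>      >>>   IMPLICIT NONE
>>>      >>>   INTEGER :: iErr, myRank
>>>      >>>   DOUBLE PRECISION :: eff, pair(2), result(2)
>>>      >>>
>>>      >>>   CALL MPI_INIT(iErr)
>>>      >>>   CALL MPI_COMM_RANK(MPI_COMM_WORLD, myRank, iErr)
>>>      >>>
>>>      >>>   eff = DBLE(MOD(myRank, 4))  ! placeholder for the real local value
>>>      >>>   pair(1) = eff               ! the value to be maximized
>>>      >>>   pair(2) = DBLE(myRank)      ! the owning rank, stored as a double
>>>      >>>
>>>      >>>   CALL MPI_ALLREDUCE(pair, result, 1, MPI_2DOUBLE_PRECISION, &
>>>      >>>                      MPI_MAXLOC, MPI_COMM_WORLD, iErr)
>>>      >>>
>>>      >>>   ! result(1) is the global maximum of eff;
>>>      >>>   ! INT(result(2)) is the rank that owns it.
>>>      >>>   IF (myRank == 0) WRITE(*,*) 'max eff =', result(1), &
>>>      >>>                               'on rank', INT(result(2))
>>>      >>>
>>>      >>>   CALL MPI_FINALIZE(iErr)
>>>      >>> END PROGRAM maxloc_demo
>>>      >>>
>>>      >>> If two ranks hold the same maximum, MPI_MAXLOC returns the lowest
>>>      >>> of their ranks.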
>>>      >>> On Fri, Aug 10, 2018 at 11:25 AM Diego Avesani
>>>     <diego.aves...@gmail.com> wrote:
>>>      >>>  Dear all,
>>>      >>> I think I have understood now.
>>>      >>> The trick is to use a real vector and to store the rank in it as well.
>>>      >>>
>>>      >>> Have I understood correctly?
>>>      >>> thanks
>>>      >>>
>>>      >>> Diego
>>>      >>>
>>>      >>>
>>>      >>> On 10 August 2018 at 17:19, Diego Avesani
>>>     <diego.aves...@gmail.com> wrote:
>>>      >>> Dear all,
>>>      >>> I do not understand how MPI_MINLOC works. It seems to locate the
>>>     maximum within a vector, not the CPU to which the value belongs.
>>>      >>>
>>>      >>> @Ray: and what if two have the same value?
>>>      >>>
>>>      >>> thanks
>>>      >>>
>>>      >>>
>>>      >>> Diego
>>>      >>>
>>>      >>>
>>>      >>> On 10 August 2018 at 17:03, Ray Sheppard <rshep...@iu.edu> wrote:
>>>      >>> As a dumb scientist, I would just bcast the value I get back to
>>>     the group and ask whoever owns it to kindly reply back with its rank.
>>>      >>>      Ray
>>>      >>>
>>>      >>>
>>>      >>> On 8/10/2018 10:49 AM, Reuti wrote:
>>>      >>> Hi,
>>>      >>>
>>>      >>> On 10.08.2018 at 16:39, Diego Avesani
>>>     <diego.aves...@gmail.com> wrote:
>>>      >>>
>>>      >>> Dear all,
>>>      >>>
>>>      >>> I have a problem:
>>>      >>> In my parallel program, each CPU computes a value, let's say eff.
>>>      >>>
>>>      >>> First of all, I would like to know the maximum value. This is
>>>     quite simple for me;
>>>      >>> I apply the following:
>>>      >>>
>>>      >>> CALL MPI_ALLREDUCE(eff, effmaxWorld, 1, MPI_DOUBLE_PRECISION,
>>>     MPI_MAX, MPI_MASTER_COMM, MPIworld%iErr)
>>>      >>> Would MPI_MAXLOC be sufficient?
>>>      >>>
>>>      >>> -- Reuti
>>>      >>>
>>>      >>>
>>>      >>> However, I would also like to know to which CPU that value
>>>     belongs. Is that possible?
>>>      >>>
>>>      >>> I have set up a strange procedure, but it works only when all
>>>     the CPUs have different values; it fails when two of them have the
>>>     same eff value.
>>>      >>>
>>>      >>> Is there any intrinsic MPI procedure?
>>>      >>> Alternatively,
>>>      >>> do you have any ideas?
>>>      >>>
>>>      >>> Really, many thanks.
>>>      >>> Diego
>>>      >>>
>>>      >>>
>>>
>>>
>>>     --
>>>     Jeff Squyres
>>>     jsquy...@cisco.com
>>>
>>>
>>>
>>>
>>>
>>> --
>>> Jeff Hammond
>>> jeff.scie...@gmail.com
>>> http://jeffhammond.github.io/
>>>
>>>
>>>
>>>
>>
>
>
>
> --
> Jeff Hammond
> jeff.scie...@gmail.com
> http://jeffhammond.github.io/
>
>
_______________________________________________
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users
