It is unlikely that MPI_MINLOC and MPI_MAXLOC will go away any time soon.

As far as I know, Nathan hasn't advanced a proposal to kill them in MPI-4, 
meaning that they'll likely continue to be in MPI for at least another 10 
years.  :-)

(And even if they did get killed in MPI-4, implementations like Open MPI would 
likely keep them around for quite a while -- i.e., years.)


> On Aug 10, 2018, at 1:13 PM, Diego Avesani <diego.aves...@gmail.com> wrote:
> 
> I agree about the names; they are very similar to MINLOC and MAXLOC in 
> Fortran 90.
> However, I find it difficult to define an algorithm that does the same things.
> 
> 
> 
> Diego
> 
> 
> On 10 August 2018 at 19:03, Nathan Hjelm via users <users@lists.open-mpi.org> 
> wrote:
> They do not fit with the rest of the predefined operations (which operate on 
> a single basic type), and they can easily be implemented as user-defined 
> operations with the same performance. Add to that the fixed number of tuple 
> types, the fact that some of them are non-contiguous (MPI_SHORT_INT), plus 
> the terrible names. If I could kill them in MPI-4, I would.
> 
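For reference, a user-defined op along the lines Nathan describes might look
roughly like the following Fortran sketch (untested; pair_max, inpair, outpair,
and the placeholder value are all made up).  It reduces {value, rank} pairs
stored as two DOUBLE PRECISIONs, i.e., the same layout MPI_MAXLOC uses with
MPI_2DOUBLE_PRECISION:

program custom_maxloc
  use mpi
  implicit none
  external :: pair_max                  ! reduction callback defined below
  integer :: ierr, rank, myop
  double precision :: inpair(2), outpair(2)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

  inpair(1) = dble(rank) * 1.5d0        ! placeholder for the local value ("eff")
  inpair(2) = dble(rank)                ! carry the owning rank along as a double

  call MPI_Op_create(pair_max, .true., myop, ierr)   ! .true. = commutative
  call MPI_Allreduce(inpair, outpair, 1, MPI_2DOUBLE_PRECISION, myop, &
                     MPI_COMM_WORLD, ierr)
  if (rank == 0) print *, 'max =', outpair(1), ' on rank ', int(outpair(2))

  call MPI_Op_free(myop, ierr)
  call MPI_Finalize(ierr)
end program custom_maxloc

! Combine {value, rank} pairs: keep the pair whose value (first slot) is larger.
subroutine pair_max(invec, inoutvec, len, dtype)
  implicit none
  integer :: len, dtype, i              ! dtype = datatype handle (unused here)
  double precision :: invec(2, len), inoutvec(2, len)
  do i = 1, len
    if (invec(1, i) > inoutvec(1, i)) then
      inoutvec(1, i) = invec(1, i)
      inoutvec(2, i) = invec(2, i)
    end if
  end do
end subroutine pair_max

Declaring the op commutative lets the library combine the pairs in any order,
which is safe because "keep the larger pair" is commutative and associative.
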
> On Aug 10, 2018, at 9:47 AM, Diego Avesani <diego.aves...@gmail.com> wrote:
> 
>> Dear all,
>> I have just implemented MAXLOC; why should they go away?
>> It seems to be working pretty well.
>> 
>> thanks
>> 
>> Diego
>> 
>> 
>> On 10 August 2018 at 17:39, Nathan Hjelm via users 
>> <users@lists.open-mpi.org> wrote:
>> The problem is that minloc and maxloc need to go away. Better to use a custom op. 
>> 
>> On Aug 10, 2018, at 9:36 AM, George Bosilca <bosi...@icl.utk.edu> wrote:
>> 
>>> You will need to create a special variable that holds 2 entries, one for 
>>> the max operation (with whatever type you need) and an int for the rank of 
>>> the process. The MAXLOC is described on the OMPI man page [1] and you can 
>>> find an example on how to use it on the MPI Forum [2].
>>> 
>>> George.
>>> 
>>> 
>>> [1] https://www.open-mpi.org/doc/v2.0/man3/MPI_Reduce.3.php
>>> [2] https://www.mpi-forum.org/docs/mpi-1.1/mpi-11-html/node79.html
>>> 
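Since Diego's code is Fortran, George's suggestion might translate into
something like this (untested sketch; in the Fortran bindings the pair type is
MPI_2DOUBLE_PRECISION and the rank rides along as a DOUBLE PRECISION in the
second slot; eff here is just a stand-in for the locally computed value):

program maxloc_example
  use mpi
  implicit none
  integer :: ierr, rank
  double precision :: eff, inpair(2), outpair(2)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)  ! MPI_MASTER_COMM in the original code

  eff = dble(rank) * 0.5d0              ! stand-in for the locally computed value
  inpair(1) = eff                       ! the value to maximize
  inpair(2) = dble(rank)                ! the owning rank, stored as a double

  call MPI_Allreduce(inpair, outpair, 1, MPI_2DOUBLE_PRECISION, MPI_MAXLOC, &
                     MPI_COMM_WORLD, ierr)

  ! outpair(1) is the global maximum of eff;
  ! int(outpair(2)) is the rank that owns it (the lowest such rank on a tie).
  if (rank == 0) print *, 'max eff =', outpair(1), ' on rank ', int(outpair(2))

  call MPI_Finalize(ierr)
end program maxloc_example

The tie-breaking rule (lowest rank wins) comes from MPI_MAXLOC itself, which
returns the minimum index when two values are equal.
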
>>> On Fri, Aug 10, 2018 at 11:25 AM Diego Avesani <diego.aves...@gmail.com> 
>>> wrote:
>>> Dear all,
>>> I think I have understood now.
>>> The trick is to use a real vector and to store the rank in it as well.
>>> 
>>> Have I understood correctly?
>>> thanks
>>> 
>>> Diego
>>> 
>>> 
>>> On 10 August 2018 at 17:19, Diego Avesani <diego.aves...@gmail.com> wrote:
>>> Dear all,
>>> I do not understand how MPI_MINLOC works. It seems to locate the maximum in
>>> a vector, and not the CPU to which the value belongs.
>>> 
>>> @ray: and what if two of them have the same value?
>>> 
>>> thanks 
>>> 
>>> 
>>> Diego
>>> 
>>> 
>>> On 10 August 2018 at 17:03, Ray Sheppard <rshep...@iu.edu> wrote:
>>> As a dumb scientist, I would just bcast the value I get back to the group 
>>> and ask whoever owns it to kindly reply back with its rank.
>>>      Ray
>>> 
>>> 
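A variation on Ray's idea that avoids the extra round of messages (and also
settles the tie question Diego raises above) is to follow the MPI_MAX reduction
with a second reduction: every rank holding the global maximum contributes its
own rank, everyone else contributes a sentinel, and MPI_MIN picks the lowest
owner.  A rough, untested sketch:

program owner_of_max
  use mpi
  implicit none
  integer :: ierr, rank, candidate, owner
  double precision :: eff, effmax

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  eff = dble(mod(rank, 3))              ! stand-in value; duplicates on purpose

  ! Step 1: everyone learns the global maximum (as in the original code).
  call MPI_Allreduce(eff, effmax, 1, MPI_DOUBLE_PRECISION, MPI_MAX, &
                     MPI_COMM_WORLD, ierr)

  ! Step 2: ranks that hold the maximum contribute their own rank, everyone
  ! else contributes a sentinel; MPI_MIN yields the lowest owning rank,
  ! which also resolves ties deterministically.
  candidate = huge(candidate)
  if (eff == effmax) candidate = rank
  call MPI_Allreduce(candidate, owner, 1, MPI_INTEGER, MPI_MIN, &
                     MPI_COMM_WORLD, ierr)

  if (rank == 0) print *, 'max =', effmax, ' owned by rank ', owner
  call MPI_Finalize(ierr)
end program owner_of_max

The equality test works because MPI_MAX returns one of the contributed values
exactly (no rounding is involved in taking a maximum).
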
>>> On 8/10/2018 10:49 AM, Reuti wrote:
>>> Hi,
>>> 
>>> On 10.08.2018 at 16:39, Diego Avesani <diego.aves...@gmail.com> wrote:
>>> 
>>> Dear all,
>>> 
>>> I have a problem:
>>> In my parallel program each CPU computes a value, let's say eff.
>>> 
>>> First of all, I would like to know the maximum value. This is quite simple
>>> for me; I apply the following:
>>> 
>>> CALL MPI_ALLREDUCE(eff, effmaxWorld, 1, MPI_DOUBLE_PRECISION, MPI_MAX, 
>>> MPI_MASTER_COMM, MPIworld%iErr)
>>> Would MPI_MAXLOC be sufficient?
>>> 
>>> -- Reuti
>>> 
>>> 
>>> However, I would also like to know which CPU that value belongs to. Is that
>>> possible?
>>> 
>>> I have set up a strange procedure, but it works only when all the CPUs have
>>> different values; it fails when two of them have the same eff value.
>>> 
>>> Is there any intrinsic MPI procedure? Alternatively, do you have any ideas?
>>> 
>>> Really, many thanks.
>>> Diego
>>> 
>>> 
>> 
> 


-- 
Jeff Squyres
jsquy...@cisco.com
