Dear all,

"allreduce should be in MPI_COMM_WORLD"

I think you have found the problem.
However, in my original code, the counter information belongs only to the
master group.
Should I share that information with the slaves of each master?
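Or, if I understand the suggestion correctly, there would be nothing to
share: every rank calls the allreduce on MPI_COMM_WORLD and the non-masters
simply contribute zero. Something like this untested sketch (same variables
as in my loop below, assuming MPI_MASTER_COMM is MPI_COMM_NULL on the slave
ranks):

   IF(MPI_COMM_NULL .EQ. MPI_MASTER_COMM)THEN
      counter = 0.d0   ! slaves add nothing to the global sum
   ENDIF
   !
   ! every rank of MPI_COMM_WORLD takes part, so they all see the same total
   CALL MPI_ALLREDUCE(MPI_IN_PLACE, counter, 1, MPI_DOUBLE_PRECISION, MPI_SUM, &
                      MPI_COMM_WORLD, iErr)
   !
   IF(counter.GT.10000)THEN
      EXIT   ! every rank now leaves the cycle in the same iteration
   ENDIF

That way the counter never has to be copied to the slaves explicitly.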

Thanks again



Diego


On 20 August 2018 at 09:17, Gilles Gouaillardet <gil...@rist.or.jp> wrote:

> Diego,
>
>
> first, try using MPI_IN_PLACE when sendbuffer and recvbuffer are identical
>
>
> at first glance, the second allreduce should be in MPI_COMM_WORLD (with
> counter=0 when master_comm is null),
>
> or you have to add an extra broadcast in local_comm
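For the first point, I understand the in-place form of the local reduction
would be something like this (untested sketch):

   ! MPI_IN_PLACE as the send buffer: test holds the input on entry and the
   ! reduced sum on return, instead of aliasing the same variable twice
   CALL MPI_ALLREDUCE(MPI_IN_PLACE, test, 1, MPI_DOUBLE_PRECISION, MPI_SUM, &
                      MPI_LOCAL_COMM, iErr)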
>
>
> Cheers,
>
>
> Gilles
>
>
>
> On 8/20/2018 3:56 PM, Diego Avesani wrote:
>
>> Dear George, Dear Gilles, Dear Jeff, Dear all,
>>
>> Thanks for all the suggestions.
>> The problem is that I do not want to FINALIZE, but only to exit from a
>> cycle.
>> This is the structure of my code:
>> I have:
>> a master group;
>> each master sends only some values to its slaves;
>> the slaves perform something;
>> according to a counter, every processor has to leave the cycle.
>>
>> Here is an example; if you want, I can give you more details.
>>
>> DO iRun=1,nRun
>>    !
>>    IF(MPI_COMM_NULL .NE. MPI_MASTER_COMM)THEN
>>       VARS(1) = REAL(iRun+1)
>>       VARS(2) = REAL(iRun+100)
>>       VARS(3) = REAL(iRun+200)
>>       VARS(4) = REAL(iRun+300)
>>    ENDIF
>>    !
>>    CALL MPI_BCAST(VARS,4,MPI_DOUBLE_PRECISION,0,MPI_LOCAL_COMM,iErr)
>>    !
>>    test = SUM(VARS)
>>    !
>>    CALL MPI_ALLREDUCE(test, test, 1, MPI_DOUBLE_PRECISION, MPI_SUM, &
>>                       MPI_LOCAL_COMM, iErr)
>>    !
>>    !
>>    counter = test
>>    !
>>    CALL MPI_ALLREDUCE(counter, counter, 1, MPI_DOUBLE_PRECISION, MPI_SUM, &
>>                       MPI_MASTER_COMM, iErr)
>>    !
>>    IF(counter.GT.10000)THEN
>>       EXIT
>>    ENDIF
>> ENDDO
>>
>> My original code gets stuck in the cycle and I do not know why.
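Looking at it again: the counter is reduced only across MPI_MASTER_COMM, so
the slaves never see the updated value; the masters can leave the cycle while
their slaves keep waiting in the next MPI_BCAST, which would explain the
hang. The alternative with the extra broadcast would be roughly this
(untested sketch, assuming the master is rank 0 of each MPI_LOCAL_COMM, as
for the VARS broadcast above):

   IF(MPI_COMM_NULL .NE. MPI_MASTER_COMM)THEN
      ! only the masters reduce the counter among themselves
      CALL MPI_ALLREDUCE(MPI_IN_PLACE, counter, 1, MPI_DOUBLE_PRECISION, MPI_SUM, &
                         MPI_MASTER_COMM, iErr)
   ENDIF
   !
   ! each master (rank 0 of its local communicator) passes the result to its
   ! slaves, so every rank evaluates the same exit condition
   CALL MPI_BCAST(counter, 1, MPI_DOUBLE_PRECISION, 0, MPI_LOCAL_COMM, iErr)
   !
   IF(counter.GT.10000)THEN
      EXIT
   ENDIF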
>>
>> Thanks
>>
>>
>>
>>
>>
>> Diego
>>
>>
>> On 13 August 2018 at 23:44, George Reeke <re...@mail.rockefeller.edu> wrote:
>>
>>
>>     >         On Aug 12, 2018, at 2:18 PM, Diego Avesani
>>     >         <diego.aves...@gmail.com> wrote:
>>     >         >
>>     >         > For example, I have to exit a cycle, according to a
>>     >         > check:
>>     >         >
>>     >         > IF(counter.GE.npercstop*nParticles)THEN
>>     >         >         flag2exit=1
>>     >         >         WRITE(*,*) '-Warning PSO has been exit'
>>     >         >         EXIT pso_cycle
>>     >         >      ENDIF
>>     >         >
>>     >         > But this is difficult to do since I have to exit only after
>>     >         > all the threads inside a set have finished their task.
>>     >         >
>>     >         > Do you have some suggestions?
>>     >         > Do you need other information?
>>     >
>>     Dear Diego et al,
>>     Assuming I understand your problem:
>>     The way I do this is to set up one process that is responsible for
>>     normal and error exits.  It sits looking for messages from all the
>>     other ranks that are doing work.  Certain messages are defined to
>>     indicate an error exit with an error number or some text.  The exit
>>     process is spawned by the master process at startup and is told how
>>     many working processes there are.  Each process either sends an OK
>>     exit message when it is done or an error message.  The exit process
>>     counts these exit messages and, when the count equals the number of
>>     working processes, it prints any/all errors, then sends messages back
>>     to all the working processes, which should be waiting for these at
>>     this point and can then terminate with MPI_Finalize.
>>        Of course it is more complicated than that to handle special cases
>>     like termination before everything has really started or when the
>>     protocol is not followed, debug messages that do not initiate
>>     termination, etc., but maybe this will give you an idea for one way
>>     to deal with this issue.
>>     George Reeke
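A bare-bones sketch of that protocol as I understand it (untested; here rank
0 of MPI_COMM_WORLD plays the role of the exit process instead of a spawned
one, and the tag names are made up):

   PROGRAM exit_protocol_sketch
      USE mpi
      IMPLICIT NONE
      ! made-up tags: workers report OK or an error, the exit process answers GO
      INTEGER, PARAMETER :: TAG_OK = 1, TAG_ERR = 2, TAG_GO = 3
      INTEGER :: iErr, myRank, nProcs, nDone, iDest, code, dummy
      INTEGER :: status(MPI_STATUS_SIZE)
      !
      CALL MPI_INIT(iErr)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, myRank, iErr)
      CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nProcs, iErr)
      !
      IF(myRank .EQ. 0)THEN
         ! exit process: collect exactly one report from every worker
         nDone = 0
         DO WHILE(nDone .LT. nProcs-1)
            CALL MPI_RECV(code, 1, MPI_INTEGER, MPI_ANY_SOURCE, MPI_ANY_TAG, &
                          MPI_COMM_WORLD, status, iErr)
            IF(status(MPI_TAG) .EQ. TAG_ERR)THEN
               WRITE(*,*) 'error', code, 'reported by rank', status(MPI_SOURCE)
            ENDIF
            nDone = nDone + 1
         ENDDO
         ! every worker has reported: tell them all they may finalize
         dummy = 0
         DO iDest = 1, nProcs-1
            CALL MPI_SEND(dummy, 1, MPI_INTEGER, iDest, TAG_GO, MPI_COMM_WORLD, iErr)
         ENDDO
      ELSE
         ! worker: do the real work here, then report success (or TAG_ERR with
         ! an error code) and wait for the go-ahead before finalizing
         code = 0
         CALL MPI_SEND(code, 1, MPI_INTEGER, 0, TAG_OK, MPI_COMM_WORLD, iErr)
         CALL MPI_RECV(dummy, 1, MPI_INTEGER, 0, TAG_GO, MPI_COMM_WORLD, status, iErr)
      ENDIF
      !
      CALL MPI_FINALIZE(iErr)
   END PROGRAM exit_protocol_sketch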
>>
>>
>>
>
>
_______________________________________________
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users
