Dear all,
"allreduce should be in MPI_COMM_WORLD"
I think you have found the problem.
However, in my original code, the counter information belongs only to the
master group.
Should I share that information with the slaves of each master?
Thanks again,
Diego
On 20 August 2018 at 09:17, Gilles wrote:
Diego,
First, try using MPI_IN_PLACE when the send buffer and the receive buffer are
identical.
At first glance, the second allreduce should be in MPI_COMM_WORLD (with
counter=0 when master_comm is null), or you have to add an extra broadcast in
local_comm.
Cheers,
Gilles
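For reference, a minimal sketch of both suggestions, assuming counter is a
default INTEGER, MPI_MASTER_COMM is MPI_COMM_NULL on the slave ranks, and
ierr is declared (variable names are assumptions):

! all ranks join the allreduce on MPI_COMM_WORLD; ranks outside the
! master communicator contribute counter = 0
IF (MPI_MASTER_COMM.EQ.MPI_COMM_NULL) counter = 0
CALL MPI_ALLREDUCE(MPI_IN_PLACE, counter, 1, MPI_INTEGER, MPI_SUM, &
                   MPI_COMM_WORLD, ierr)
! alternatively, keep the allreduce in MPI_MASTER_COMM and have each
! master broadcast the result inside its local communicator
! (assuming the master is rank 0 of local_comm):
! CALL MPI_BCAST(counter, 1, MPI_INTEGER, 0, MPI_LOCAL_COMM, ierr)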
On 8/20/2018 3:56 PM, Diego Avesani wrote:
Dear George, Dear Gilles, Dear Jeff, Dear all,
Thanks for all the suggestions.
The problem is that I do not want to call MPI_FINALIZE, but only to exit from
a loop.
This is my code:
I have:
a master_group;
each master sends only some values to its slaves;
the slaves perform something;
according to a
> On Aug 12, 2018, at 2:18 PM, Diego Avesani
> wrote:
> >
> > For example, I have to exit a loop, according to a check:
> >
> > IF(counter.GE.npercstop*nParticles)THEN
> >   flag2exit=1
> >   WRITE(*,*)
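For reference, a minimal sketch of one way to exit the loop collectively,
using the names from the snippet above plus an assumed integer ierr; reducing
the flag with MPI_MAX over MPI_COMM_WORLD makes every process leave in the
same iteration:

flag2exit = 0
IF (counter.GE.npercstop*nParticles) flag2exit = 1
! agree on the flag across all processes before anyone leaves the loop
CALL MPI_ALLREDUCE(MPI_IN_PLACE, flag2exit, 1, MPI_INTEGER, MPI_MAX, &
                   MPI_COMM_WORLD, ierr)
IF (flag2exit.EQ.1) EXIT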
Diego,
Since this question is not specific to Open MPI, Stack Overflow (or a similar
forum) is a better place to ask.
Make sure you first read https://stackoverflow.com/help/mcve
Feel free to post us a link to your question.
Cheers,
Gilles
On Monday, August 13, 2018, Diego Avesani wrote:
Dear Jeff, dear all,
It's my fault.
Can I send an attachment?
Thanks,
Diego
On 13 August 2018 at 19:06, Jeff Squyres (jsquyres)
wrote:
> On Aug 12, 2018, at 2:18 PM, Diego Avesani
> wrote:
> >
> > Dear all, Dear Jeff,
> > I have three communicators:
> >
> > the standard one:
> >
On Aug 12, 2018, at 2:18 PM, Diego Avesani wrote:
>
> Dear all, Dear Jeff,
> I have three communicators:
>
> the standard one:
> MPI_COMM_WORLD
>
> and two others:
> MPI_LOCAL_COMM
> MPI_MASTER_COMM
>
> a sort of two-level MPI.
>
> Suppose we have 8 threads;
> I use 4 threads to run the same
Dear all, Dear Jeff,
I have three communicators:
the standard one:
MPI_COMM_WORLD
and two others:
MPI_LOCAL_COMM
MPI_MASTER_COMM
a sort of two-level MPI.
Suppose we have 8 threads;
I use 4 threads to run the same problem with different values. These form
the LOCAL_COMM groups.
In addition I have a
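For reference, a minimal sketch of one way to build this two-level layout
with MPI_COMM_SPLIT, assuming 8 processes with 4 per local group and that
rank 0 of each local group acts as its master; the color logic is an
assumption:

CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
color = rank/4                      ! two local groups of 4 processes each
CALL MPI_COMM_SPLIT(MPI_COMM_WORLD, color, rank, MPI_LOCAL_COMM, ierr)
CALL MPI_COMM_RANK(MPI_LOCAL_COMM, localRank, ierr)
! only local rank 0 joins the master communicator; every other rank
! passes MPI_UNDEFINED and gets MPI_COMM_NULL back
IF (localRank.EQ.0) THEN
   masterColor = 0
ELSE
   masterColor = MPI_UNDEFINED
END IF
CALL MPI_COMM_SPLIT(MPI_COMM_WORLD, masterColor, rank, MPI_MASTER_COMM, ierr)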
On Aug 10, 2018, at 6:27 PM, Diego Avesani wrote:
>
> The question is:
> Is it possible to have a barrier for all CPUs even though they belong to
> different groups?
> If the answer is yes I will go into more detail.
By "CPUs", I assume you mean "MPI processes", right? (i.e., not threads inside
an
Dear Jeff,
You are right.
The question is:
Is it possible to have a barrier for all CPUs even though they belong to
different groups?
If the answer is yes I will go into more detail.
Thanks a lot,
Diego
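For reference: the answer is yes. A barrier on the world communicator
synchronizes every process, whatever sub-communicators it also belongs to
(a one-line sketch, assuming ierr is declared):

! blocks until every process in MPI_COMM_WORLD has reached this call,
! regardless of which local or master group each process is in
CALL MPI_BARRIER(MPI_COMM_WORLD, ierr)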
On 10 August 2018 at 19:49, Jeff Squyres (jsquyres) via users <
users@lists.open-mpi.org> wrote:
I'm not quite clear what the problem is that you're running into -- you just
said that there is "some problem with MPI_Barrier".
What problem, exactly, is happening with your code? Be as precise and specific
as possible.
It's kinda hard to tell what is happening in the code snippet below
Dear all,
I have an MPI program with three groups with some CPUs in common.
I have some problems with MPI_Barrier.
I will try to make myself clear. I have three communicators:
INTEGER :: MPI_GROUP_WORLD
INTEGER :: MPI_LOCAL_COMM
INTEGER :: MPI_MASTER_COMM
when I apply:
IF(MPIworld%rank.EQ.0)